Search Results

Search found 2436 results on 98 pages for 'verify'.


  • ASP.NET Multi-Select Radio Buttons

    - by Ajarn Mark Caldwell
    “HERESY!” you say. “Radio buttons are for single-select items! If you want multi-select, use checkboxes!” Well, I would agree, and that is why I consider this a significant bug that ASP.NET developers need to be aware of. Here’s the situation. If you use ASP:RadioButton controls on your WebForm, then you know that in order to get them to behave properly (that is, to define a group in which only one of them can be selected by the user) you use the GroupName attribute and set the same value on each one. For example:

      <asp:RadioButton runat="server" ID="rdo1" GroupName="Group1" Checked="true" />
      <asp:RadioButton runat="server" ID="rdo2" GroupName="Group1" />

    With this configuration, the controls will render to the browser as HTML Input / Type=radio tags, and when the user selects one, the browser will automatically deselect the other one so that only one can be selected (checked) at any time. BUT, if you use server-side code to manipulate the Checked attribute of these controls, it is possible to set them both to believe that they are checked:

      rdo2.Checked = true; // Does NOT change the Checked attribute of rdo1 to false.

    As long as you remain in server-side code, the system will believe that both radio buttons are checked (you can verify this in the debugger). Therefore, if you later have code that looks like this

      if (rdo1.Checked)
      {
          DoSomething1();
      }
      else
      {
          DoSomethingElse();
      }

    then it will always evaluate the condition to true and take the first action. The good news is that if you return to the client with multiple radio buttons checked, the browser tries to clean that up for you and make only one of them really checked. It turns out that the last one on the screen wins, so in this case you will in fact end up with rdo2 checked, and if you then make a trip to the server to run the code above, it will appear to be working properly. However, if your page initializes with rdo2 checked and in code you set rdo1 to checked as well, then when you go back to the client, rdo2 will remain checked, again because it is the last one and the last one checked “wins”.

    And this gets even uglier if you ever set these radio buttons to be disabled. In that case, although the client browser renders the radio buttons as though only one of them is checked, the system actually retains the value of both of them as checked, and your next trip to the server will really frustrate you, because the browser showed rdo2 as checked but your DoSomething1() routine keeps getting executed. The following is sample code you can put into any WebForm to test this yourself.
      <body>
      <form id="form1" runat="server">
          <h1>Radio Button Test</h1>
          <hr />
          <asp:Button runat="server" ID="cmdBlankPostback" Text="Blank Postback" />
          <asp:Button runat="server" ID="cmdEnable" Text="Enable All" OnClick="cmdEnable_Click" />
          <asp:Button runat="server" ID="cmdDisable" Text="Disable All" OnClick="cmdDisable_Click" />
          <asp:Button runat="server" ID="cmdTest" Text="Test" OnClick="cmdTest_Click" />
          <br /><br /><br />
          <asp:RadioButton ID="rdoG1R1" GroupName="Group1" runat="server" Text="Group 1 Radio 1" Checked="true" /><br />
          <asp:RadioButton ID="rdoG1R2" GroupName="Group1" runat="server" Text="Group 1 Radio 2" /><br />
          <asp:RadioButton ID="rdoG1R3" GroupName="Group1" runat="server" Text="Group 1 Radio 3" /><br />
          <hr />
          <asp:RadioButton ID="rdoG2R1" GroupName="Group2" runat="server" Text="Group 2 Radio 1" /><br />
          <asp:RadioButton ID="rdoG2R2" GroupName="Group2" runat="server" Text="Group 2 Radio 2" Checked="true" /><br />
      </form>
      </body>

      protected void Page_Load(object sender, EventArgs e)
      {
      }

      protected void cmdEnable_Click(object sender, EventArgs e)
      {
          rdoG1R1.Enabled = true;
          rdoG1R2.Enabled = true;
          rdoG1R3.Enabled = true;
          rdoG2R1.Enabled = true;
          rdoG2R2.Enabled = true;
      }

      protected void cmdDisable_Click(object sender, EventArgs e)
      {
          rdoG1R1.Enabled = false;
          rdoG1R2.Enabled = false;
          rdoG1R3.Enabled = false;
          rdoG2R1.Enabled = false;
          rdoG2R2.Enabled = false;
      }

      protected void cmdTest_Click(object sender, EventArgs e)
      {
          rdoG1R2.Checked = true;
          rdoG2R1.Checked = true;
      }

      protected void Page_PreRender(object sender, EventArgs e)
      {
      }

    After you copy the markup and code-behind into the appropriate files, I recommend you set a breakpoint on Page_Load as well as cmdTest_Click, and add each of the radio button controls to the Watch list so that you can walk through the code and see exactly what is happening. Use the Blank Postback button to cause a postback to the server so you can inspect things without making any changes.

    The moral of the story is: if you do server-side manipulation of the Checked status of RadioButton controls, then you need to set ALL of the controls in a group whenever you want to change one.
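
    The fix implied by that moral can be wrapped in a small helper. The sketch below is my own illustration, not part of the original post or of ASP.NET: it checks one RadioButton and explicitly clears every other button in the same group on the server side.

      // Hypothetical code-behind helper: server-side "check one, clear the rest".
      // ASP.NET does not enforce mutual exclusion in code-behind, so do it explicitly.
      protected void CheckExclusively(RadioButton toCheck, params RadioButton[] group)
      {
          foreach (RadioButton rdo in group)
          {
              // True only for the button being selected; false for all others.
              rdo.Checked = (rdo == toCheck);
          }
      }

      // Usage in the test page above, instead of setting rdoG1R2.Checked directly:
      // CheckExclusively(rdoG1R2, rdoG1R1, rdoG1R2, rdoG1R3);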


  • Common reasons for the ‘Sys is undefined’ error in ASP.NET Ajax applications

    In this blog I will try to summarize the most common reasons for getting the famous 'Sys is undefined' error when running an Ajax-enabled web site or application (there are almost one million results on Google for that phrase).

    Where does it come from? In every Ajax web page's source you will see code like this:

      <script type="text/javascript">
      //<![CDATA[
      Sys.WebForms.PageRequestManager._initialize('ScriptManager1', document.getElementById('form1'));
      Sys.WebForms.PageRequestManager.getInstance()._updateControls([], [], [], 90);
      //]]>
      </script>

    This is the initialization script of the ScriptManager. So, if for some reason the Sys namespace is not available when the code executes, you get the Sys is undefined error. Here are the most common reasons and solutions for that problem:

    1. The error occurs when you have added a control from RadControls for ASP.NET AJAX, but your application is not configured to use ASP.NET AJAX. For example, in VS 2005 you created a new Blank Site instead of a new Ajax-Enabled Web Site, and the Sys is undefined message pops up. To fix it you need to follow the steps described in the Configuring ASP.NET Ajax article (check the topic called Adding ASP.NET AJAX Configuration Elements to an Existing Web Site) or simply create the Ajax-Enabled Web Site. You can also check my other blog post on the matter: Visual Studio 2008: Where is the new ASP.NET Ajax-Enabled Web Site template?

    2. Authentication - as the website denies access to all pages to unauthorized users, access to the Telerik.Web.UI.WebResource.axd handler is unauthorized (this is the default handler of RadScriptManager). This causes the handler to serve the content of the login page instead of the combined scripts, hence the error. To solve it, add a <location> section to the application configuration file to allow access to Telerik.Web.UI.WebResource.axd for all users, like:

      <configuration>
          ...
          <location path="Telerik.Web.UI.WebResource.axd">
              <system.web>
                  <authorization>
                      <allow users="*"/>
                  </authorization>
              </system.web>
          </location>
          ...
      </configuration>

    Note that access to the standard ScriptResource.axd and WebResource.axd is automatically allowed for all users (authenticated and unauthenticated), so if you use the ScriptManager instead of RadScriptManager you will not face this problem. The authentication problem does not manifest when you disable script combining or use the CDN. Adding the above configuration section will make it work with RadScriptManager's combined script.

    3. The IE6 browser fails to load the compressed script. The problem does not appear in any other browser. There is a well-known bug in older versions of IE6 which causes them to lose the first 2,048 bytes of data sent back from a web server that uses HTTP compression. The latest versions of RadScriptManager do not compress the output at all if the client is IE6, but in previous versions you need to manually disable the output compression to prevent the error. So, if you get the Sys is undefined error in IE6, update to the latest version of RadControls or simply disable the output compression.

    4. Requests to the *.axd files return Error Code 404 - Not Found. This can be fixed easily: check in the IIS management console that the .axd extension (the default HTTP handler extension) is allowed, and also check that the "Verify if file exists" checkbox for that handler mapping is unchecked (click the Edit button for the mapping to check).
    More information can be found in our troubleshooting article and in the ASP.NET QA team blog post.

    5. The virtual directory in IIS is not marked as a Web Application. Converting it to a Web Application should fix the problem.

    6. Check for the <xhtmlConformance mode="Legacy"/> option in your web.config and remove it. It would be rather rare to become a victim of this exact case, but still keep it in mind. Scott Guthrie describes it in more detail.

    In the points above I mentioned the terms web resources, javascript output, and compressed script several times. If you want to find out more about these, please see the Web Resources Demystified series by my friend and colleague Atanas Korchev.

    I hope that one of the above solutions will help you get rid of the Sys is undefined error.


  • Rebuilding CoasterBuzz, Part IV: Dependency injection, it's what's for breakfast

    - by Jeff
    (Repost from my personal blog.) This is another post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch soon. More: Part I: Evolution, and death to WCF; Part II: Hot data objects; Part III: The architecture using the "Web stack of love".

    If anything generally good for the craft has come out of the rise of ASP.NET MVC, it's that people are more likely to use dependency injection, and loosely couple the pieces parts of their applications. A lot of the emphasis on coding this way has been to facilitate unit testing, and that's awesome. Unit testing makes me feel a lot less like a hack, and a lot more confident in what I'm doing.

    Dependency injection is pretty straightforward. It says, "Given an instance of this class, I need instances of other classes, defined not by their concrete implementations, but their interfaces." Probably the first place a developer exercises this is when having a class talk to some kind of data repository. For a very simple example, pretend the FooService has to get some Foo. It looks like this:

      public class FooService
      {
          public FooService(IFooRepository fooRepo)
          {
              _fooRepo = fooRepo;
          }

          private readonly IFooRepository _fooRepo;

          public Foo GetMeFoo()
          {
              return _fooRepo.FooFromDatabase();
          }
      }

    When we need the FooService, we ask the dependency container to get it for us. It says, "You'll need an IFooRepository in that, so let me see what that's mapped to, and put it in there for you."

    Why is this good for you? It's good because your FooService doesn't know or care about how you get some foo. You can stub out what the methods and properties on a fake IFooRepository might return, and test just the FooService. I don't want to get too far into unit testing, but it's the most commonly cited reason to use DI containers in MVC.

    What I wanted to mention is how there's another benefit in a project like mine, where I have to glue together a bunch of stuff. For example, when I have someone sign up for a new account on CoasterBuzz, I'm actually using POP Forums' new account mailer, which composes a bunch of text that includes a link to verify your account. The thing is, I want to use custom text and some other logic that's specific to CoasterBuzz. To accomplish this, I make a new class that inherits from the forum's NewAccountMailer, and override some stuff. Easy enough. Then I use Ninject, the DI container I'm using, to unbind the forum's implementation and substitute my own. Ninject uses something called a NinjectModule to bind interfaces to concrete implementations. The forum has its own module, and then the CoasterBuzz module is loaded second. The CB module has two lines of code to swap out the mailer implementation:

      Unbind<PopForums.Email.INewAccountMailer>();
      Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();

    Piece of cake! Now, when code asks the DI container for an INewAccountMailer, it gets my custom implementation instead. This is a lot easier to deal with than some of the alternatives. I could do some copy-paste, but then I'm not using well-tested code from the forum. I could write stuff from scratch, but then I'm throwing away a bunch of logic I've already written (in this case, stuff around e-mail, e-mail settings, and mail delivery failures).

    There are other places where the DI container comes in handy. For example, CoasterBuzz does a number of custom things with user profiles, and special content for paid members.
    It uses the forum as the core piece for managing users, so I can ask the container to get me instances of classes that do user lookups, for example, and have zero care about how the forum handles database calls, configuration, etc. What a great world to live in, compared to ten years ago. Sure, the primary interest in DI is around the "separation of concerns" and facilitating unit testing, but as your library grows and you use more open source, it starts to be the glue that pulls everything together.
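
    For readers who haven't used Ninject, a minimal sketch of the module that performs the swap described above might look like the following; the Unbind/Bind lines are from the post, while the module class name and its placement are my own invention:

      using Ninject.Modules;

      // Hypothetical site-specific module, loaded after the forum's own module.
      public class CoasterBuzzModule : NinjectModule
      {
          public override void Load()
          {
              // Drop the forum's default mapping...
              Unbind<PopForums.Email.INewAccountMailer>();

              // ...and substitute the site-specific implementation.
              Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();
          }
      }

    Because the CoasterBuzz module loads second, its binding is the one the container hands out, which is exactly the "glue" behavior described above.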


  • Migrating Virtual Iron guest to Oracle VM 3.x

    - by scoter
    As stated on the official site, in 2009 Oracle acquired Virtual Iron, a provider of server virtualization management software; you can find all the acquisition details at this link. The FAQ on the official site also notes that, going forward, Oracle planned to fully integrate Virtual Iron technology into the Oracle VM products, with any enhancements delivered as part of the combined solution; this is what is going on with Oracle VM 3.x. So, customers started asking us to migrate Virtual Iron guests to Oracle VM.

    IMPORTANT: This procedure needs a dedicated OVM-Server with no guests running on top; be careful while executing this procedure on production environments.

    In these little steps you will find how to migrate, as fast as possible, your guests between VI (Virtual Iron) and Oracle VM; keep in mind that Oracle VM has a built-in P2V utility (Official Documentation) that you can use to migrate guests between VI and Oracle VM.

    Concepts: VI repositories. On VI we have the same "repository" concept as in Oracle VM; the difference between these two products is that VI uses a raw LUN as the repository (instead of using OCFS2 and its capabilities, like ref-links). From a pure operating-system perspective, the VI repository is simply an LVM2 volume group created on that raw LUN; from the hypervisor perspective, each guest virtual disk is an LVM2 logical volume inside that volume group. So, the relationships are:

      LVM2 volume group   <->  VI repository
      LVM2 logical volume <->  VI guest virtual disk

    The first step is to present the VI repository (raw LUN) to your dedicated OVM-Server.

    Prepare the dedicated OVM-Server. On the OVM-Server (OVS) you need to discover the new LUN and, after that, discover the volume group and logical volumes contained in the VI repository; due to the default OVS configuration, you need to edit the LVM2 configuration file /etc/lvm/lvm.conf:

      # By default for OVS we restrict every block device:
      # filter = [ "r/.*/" ]

    and comment out the line starting with "filter" as above.
    Now you have to discover the presented raw LUN and then activate the volume group and logical volumes:

      #!/bin/bash
      for HOST in `ls /sys/class/scsi_host`;do echo '- - -' > /sys/class/scsi_host/$HOST/scan; done
      CPATH=`pwd`
      cd /dev
      for DEVICE in `ls sd[a-z] sd?[a-z]`;do echo '1' > /sys/block/$DEVICE/device/rescan; done
      cd $CPATH
      cd /dev/mapper
      for PARTITION in `ls *[a-z] *?[a-z]`;do partprobe /dev/mapper/$PARTITION; done
      cd $CPATH
      vgchange -a y

    After that you will see a new device:

      [root@ovs01 ~]# cd /dev/6000F4B00000000000210135bef64994
      [root@ovs01 6000F4B00000000000210135bef64994]# ls -l 6000F4B0000000000061013*
      lrwxrwxrwx 1 root root 77 Oct 29 10:50 6000F4B00000000000610135c3a0b8cb -> /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb

    By your OVM-Manager, create a guest server with the same definition as on VI:

      • same core number as the VI source guest
      • same memory as the VI source guest
      • same number of disks as the VI source guest (you can create the OVS virtual disks with a small size of 1GB, because the "clone" will eventually extend the size of your new virtual disks)

    Summarizing:

      source-virtual-disk path (VI):  /dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb
      dest-virtual-disk path (OVS):   /OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img **

      ** = to identify your virtual disk you have to verify its name in the "vm.cfg" file of your new guest.

    Clone the VI virtual disk to the OVS virtual disk:

      dd if=/dev/mapper/6000F4B00000000000210135bef64994-6000F4B00000000000610135c3a0b8cb of=/OVS/Repositories/0004fb00000300006cfeb81c12f12f00/VirtualDisks/0004fb000012000055e0fc4c5c8a35ee.img

    Clean up unsupported parameters and changes on OVS:

    1. Restore the original /etc/lvm/lvm.conf:

      # By default for OVS we restrict every block device:
      filter = [ "r/.*/" ]

       and uncomment the line starting with "filter" as above.

    2. Force-stop the lvm2-monitor service:

      # service lvm2-monitor force-stop

    3. Restore the original /etc/lvm directories (archive, backup and cache):

      # cd /etc/lvm
      # rm -fr archive backup cache; mkdir archive backup cache

    4. Reboot OVS.

    Refresh the OVS repository and start your guest: by OracleVM Manager, refresh your repository, then start your "migrated" guest.

    Comments and corrections are welcome.

    Simon COTER


  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS. IaaS involves Virtual Machines, so in effect an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching.

    In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you; they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or to a local location) to ensure you can continue to serve your clients.

    You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS. Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location.
    Storage is inexpensive, and that second copy can be used not only for HA but also for DR.

    Planning for HA in SaaS. In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA, and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost; that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project. If you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic; look for the "Patterns and Practices" results for the area in Azure you're interested in)
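
    To make the task-switching idea from the IaaS section concrete, here is a rough C# sketch of client-side failover between two geographically separate deployments. The endpoint URLs and the simple try-next policy are invented for illustration; a real system would also need the state management described above:

      using System;
      using System.Net;

      static class FailoverClient
      {
          // Hypothetical deployments in two geos; the secondary can be live, not just a DR site.
          static readonly string[] Geos =
          {
              "https://app-us.example.com",
              "https://app-eu.example.com"
          };

          public static string Fetch(string path)
          {
              foreach (string geo in Geos)
              {
                  try
                  {
                      using (var client = new WebClient())
                      {
                          return client.DownloadString(geo + path);
                      }
                  }
                  catch (WebException)
                  {
                      // Vendor outage or network path problem: fall through
                      // and re-route the request to the next location.
                  }
              }
              throw new InvalidOperationException("All locations are unavailable.");
          }
      }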


  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop.

    One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices and each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled? In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns.

    The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst of all the other demands on their time. Sometimes, they're just too busy firefighting. But what if it was simple to do? That was our inspiration for SQL Backup 7.

    So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board: "Make backup verification so easy to do that no DBA has an excuse for not doing it". It's the missing piece that completes the puzzle. Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement.

    The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time?
Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run. Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that. When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it. Post by Brian Harris


  • Part 4 of 4: Tips/Tricks for Silverlight Developers

    - by mbcrump
    Part 1 | Part 2 | Part 3 | Part 4

    I wanted to create a series of blog posts that gets right to the point and is aimed specifically at Silverlight developers. The most important things I want this series to answer are: What is it? Why do I care? How do I do it? I hope that you enjoy this series. Let's get started:

    Tip/Trick #16) What is it? Find out version information about Silverlight and which WebKit it is using by going to http://issilverlightinstalled.com/scriptverify/. Why do I care? I've had those users where it's just easier to give them a site and say "copy/paste the line that says User Agent" in order to troubleshoot a Silverlight problem. I've also been debugging my own Silverlight applications and needed an easy way to determine if the plugin is disabled or not. How do I do it: Simply navigate to http://issilverlightinstalled.com/scriptverify/ and hit the Verify button. Example results (screenshots in the original post): Chrome 7, and Internet Explorer 8 with Silverlight disabled.

    Tip/Trick #17) What is it? Use lambdas whenever you can. Why do I care? It is my personal opinion that code is easier to read using lambdas after you get past the syntax. How do I do it: For example, you may write code like the following:

      void MainPage_Loaded(object sender, RoutedEventArgs e)
      {
          // Check and see if we have a newer .XAP file on the server
          Application.Current.CheckAndDownloadUpdateAsync();
          Application.Current.CheckAndDownloadUpdateCompleted +=
              new CheckAndDownloadUpdateCompletedEventHandler(Current_CheckAndDownloadUpdateCompleted);
      }

      void Current_CheckAndDownloadUpdateCompleted(object sender, CheckAndDownloadUpdateCompletedEventArgs e)
      {
          if (e.UpdateAvailable)
          {
              MessageBox.Show(
                  "An update has been installed. To see the updates please exit and restart the application");
          }
      }

    To me this style forces me to look for the other method to see what the code is actually doing. The style located below is much easier to read in my opinion and does the exact same thing:

      void MainPage_Loaded(object sender, RoutedEventArgs e)
      {
          // Check and see if we have a newer .XAP file on the server
          Application.Current.CheckAndDownloadUpdateAsync();
          Application.Current.CheckAndDownloadUpdateCompleted += (s, args) =>
          {
              if (args.UpdateAvailable)
              {
                  MessageBox.Show(
                      "An update has been installed. To see the updates please exit and restart the application");
              }
          };
      }

    Tip/Trick #18) What is it? Prevent development web service references from breaking when Visual Studio auto-generates a new port number. Why do I care? We have all been there: we are developing a Silverlight application and all of a sudden our development web services break. We check and find out that the local port number that Visual Studio assigned has changed, and now we need to update all of our service references. We need a way to stop this. How do I do it: This can actually be prevented with just a few mouse clicks. Right-click on your web project and go to Properties, then click the tab that says Web. You just need to select the specific-port radio button and specify a port number. Now you won't be bothered with that anymore.

    Tip/Trick #19) What is it? You can disable the Close button on a ChildWindow. Why do I care? I wouldn't blog about it if I hadn't seen it: devs trying to override keystrokes to prevent users from closing a ChildWindow. How do I do it: A property exists on the ChildWindow called "HasCloseButton"; you simply change that to false and your close button is gone.
    You can delete the "Cancel" button and add some logic to the OK button if you want the user to respond before proceeding.

    Tip/Trick #20) What is it? Clean up your XAML. Why do I care? By removing unneeded namespaces, not naming all of your controls, and getting rid of designer markup, you can improve code quality and readability. How do I do it: (This is a 3-in-one tip.)

    1) Have you ever wondered what the following code snippet does?

      xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
      xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
      mc:Ignorable="d"
      d:DesignWidth="640" d:DesignHeight="480"

    This code is telling the designer to do something special with this page in "Design mode", specifically the width and the height of the page. When it's running in the browser this information is not used; it is actually ignored by the XAML parser. In other words, if you don't need it then delete it.

    2) If you are not using a namespace then remove it. I am using Resharper, which grays out the namespaces that I'm not using. If you don't have Resharper, look through your XAML and manually remove the unneeded namespaces.

    3) Don't name a control unless you actually need to refer to it in procedural code. If you name a control, you will take a slight performance hit that is totally unnecessary if it's not being referenced:

      <TextBlock Height="23" Text="TextBlock" />

    That is the end of the series. I hope that you enjoyed it, and please check out Part 1 | Part 2 | Part 3 if you're hungry for more.


  • Exiting a reboot loop

    - by user12617035
    If you're in a situation where the system is panic'ing during boot, you can use "boot net -s" to regain control of your system. In my case, I'd added some diagnostic code to a (PCI) driver (that is used to boot the root filesystem). There was a bug in the driver, and each time during boot the bug occurred and caused the system to panic:

      ...
      000000000180b950 genunix:vfs_mountroot+60 (800, 200, 0, 185d400, 1883000, 18aec00)
        %l0-3: 0000000000001770 0000000000000640 0000000001814000 00000000000008fc
        %l4-7: 0000000001833c00 00000000018b1000 0000000000000600 0000000000000200
      000000000180ba10 genunix:main+98 (18141a0, 1013800, 18362c0, 18ab800, 180e000, 1814000)
        %l0-3: 0000000070002000 0000000000000001 000000000180c000 000000000180e000
        %l4-7: 0000000000000001 0000000001074800 0000000000000060 0000000000000000
      skipping system dump - no dump device configured
      rebooting...

    If you're logged in via the console, you can send a BREAK sequence in order to gain control of the firmware's (OBP's) prompt. Enter Ctrl-Shift-[ in order to get the TELNET prompt. Once telnet has control, enter this:

      telnet> send brk

    You'll be presented with OBP's prompt:

      ok

    You then enter the following in order to boot into single-user mode via the network:

      ok boot net -s

    Note that booting from the network under Solaris will implicitly cause the system to be INSTALLED with whatever software had last been configured to be installed. However, we are using boot net -s as a "handle" with which to get at the Solaris prompt. Once at that prompt, we can perform actions as root that will let us back out our buggy driver (ok... MY buggy driver :-)) ...and replace it with the original, non-buggy driver. Entering the boot command caused the following output, as well as leaving us at the Solaris prompt (in single-user mode):

      Sun Blade 1500, No Keyboard
      Copyright 1998-2004 Sun Microsystems, Inc.  All rights reserved.
      OpenBoot 4.16.4, 1024 MB memory installed, Serial #53463393.
      Ethernet address 0:3:ba:2f:c9:61, Host ID: 832fc961.

      Rebooting with command: boot net -s
      Boot device: /pci@1f,700000/network@2  File and args: -s
      1000 Mbps FDX Link up
      Timeout waiting for ARP/RARP packet
      Timeout waiting for ARP/RARP packet
      4000 1000 Mbps FDX Link up
      Requesting Internet address for 0:3:ba:2f:c9:61
      SunOS Release 5.10 Version Generic_118833-17 64-bit
      Copyright 1983-2005 Sun Microsystems, Inc.  All rights reserved.
      Use is subject to license terms.
      Booting to milestone "milestone/single-user:default".
      Configuring devices.
      Using RPC Bootparams for network configuration information.
      Attempting to configure interface bge0...
      Configured interface bge0
      Requesting System Maintenance Mode
      SINGLE USER MODE
      #

    Our goal is to now move to the directory containing the buggy driver and replace it with the original driver (that we had saved away before ever loading our buggy driver! :-) However, since we booted from the network, the root filesystem ("/") is NOT mounted on one of our local disks. It is mounted on an NFS filesystem exported by our install server. To verify this, enter the following command:

      # mount | head -1
      / on my-server:/export/install/media/s10u2/solarisdvd.s10s_u2dvd/latest/Solaris_10/Tools/Boot remote/read/write/setuid/devices/dev=4ac0001 on Wed Dec 31 16:00:00 1969

    As a result, we have to create a temporary mount point and then mount the local disk onto that mount point:

      # mkdir /tmp/mnt
      # mount /dev/dsk/c0t0d0s0 /tmp/mnt

    Note that your system will not necessarily have had its root filesystem on "c0t0d0s0".
    This is something that you should also have recorded before you ever loaded your.. er... "my" buggy driver! :-) One can find the local disk mounted under the root filesystem by entering:

      # df -k /
      Filesystem            kbytes    used   avail capacity  Mounted on
      /dev/dsk/c0t0d0s0    76703839 4035535 71901266     6%    /

    To continue with our example, we can now move to the directory of the buggy driver in order to replace it with the original driver. Note that /tmp/mnt is prefixed to the path of where we'd "normally" find the driver:

      # cd /tmp/mnt/platform/sun4u/kernel/drv/sparcv9
      # ls -l pci*
      -rw-r--r--   1 root     root      288504 Dec  6 15:38 pcisch
      -rw-r--r--   1 root     root      288504 Dec  6 15:38 pcisch.aar
      -rwxr-xr-x   1 root     sys       211616 Jun  8  2006 pcisch.orig
      # cp -p pcisch.orig pcisch

    We can now synchronize any in-memory filesystem data structures with those on disk... and then reboot. The system will then boot correctly... as expected:

      # sync;sync
      # reboot
      syncing file systems... done

      Sun Blade 1500, No Keyboard
      Copyright 1998-2004 Sun Microsystems, Inc.  All rights reserved.
      OpenBoot 4.16.4, 1024 MB memory installed, Serial #xxxxxxxx.
      Ethernet address 0:3:ba:2f:c9:61, Host ID: yyyyyyyy.

      Rebooting with command: boot
      Boot device: /pci@1e,600000/ide@d/disk@0,0:a  File and args:
      SunOS Release 5.10 Version Generic_118833-17 64-bit
      Copyright 1983-2005 Sun Microsystems, Inc.  All rights reserved.
      Use is subject to license terms.
      Hostname: my-host
      NIS domain name is my-campus.Central.Sun.COM

      my-host console login:

    ...so that's how it's done! Of course, the easier way is to never write a buggy driver... but.. then.. we all "have an eraser on the end of each of our pencils"... don't we? :-) "...thank you... and good night..."


  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson used to always hold the entire data model (all jobs and all builds) in memory, which affected scalability. Some installations configured heap sizes in excess of 1GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because of the inability to change existing APIs and be backward compatible with plugins, there were limits to how far we could go with this approach.

    Memory optimizations almost always come with a related cost; in this case it's additional I/O that has to be performed to load data on request. On a small site that has frequent traffic, this is usually not noticeable, since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap.

    If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for jobs and the other for builds.

    For the jobs cache:

    hudson.jobs.cache.evict_in_seconds (default=60) - Seconds from last access (could be because of a servlet request or a background cron thread) before a job is purged from the cache. Set this to 0 to never purge based on time.

    hudson.jobs.cache.initial_capacity (default=1024) - Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large, you may consider downsizing and using that memory for the builds cache instead.

    hudson.jobs.cache.max_entries (default=1024) - Maximum number of jobs in the cache. The default is large enough for most installations, but if you find I/O activity whenever you access the Hudson home page you might consider increasing this; first, though, verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.

    For the builds cache:

    The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular job from the Hudson home page. The cache is shared among builds for different jobs, since in most installations all jobs are not accessed with the same frequency, so a per-job builds cache would be a waste of memory.

    hudson.job.builds.cache.evict_in_seconds (default=60) - Same as the equivalent jobs cache setting, applied to builds.

    hudson.job.builds.cache.initial_capacity (default=512) - Same as the equivalent jobs cache setting. Note the smaller initial size. If your site stores a large number of builds and has frequent access to more builds, you might consider bumping this up.

    hudson.job.builds.cache.max_entries (default=10240) - The default max is large enough for most installations. The builds cache holds bigger objects, so be careful about increasing the upper limit on this. See the section on monitoring below.
    Sample usage:

      java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
          -Dhudson.job.builds.cache.evict_in_seconds=300

    Monitoring cache usage

    The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way, by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:

      $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

    Here's a sample output:

      num     #instances         #bytes  class name
      523:            28            896  hudson.model.RunMap$LazyRunValue$Key
      1200:            3             96  hudson.model.LazyTopLevelItem$Key

    These are the keys to the jobs (LazyTopLevelItem$Key) and builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in the cache at any given moment. The size in bytes can be ignored; it is just the size of the keys, not the actual sizes of the objects they hold. Those sizes can only be obtained with a profiler.

    With the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds can all be from 1 job or from all 3 jobs. Over time on an idle system, these should get evicted and the memory cache should be empty. In practice, because of background cron threads and triggers, jobs rarely fall down to zero. Access of a job or a build by a cron thread resets the eviction timer.


  • Common questions about My Oracle Support

    - by user763198
    • How do I find certification information? Log in to My Oracle Support, click Help, open the Table of Contents, and go to the Certification topic; for details, see: My Oracle Support Help - Certifications (Doc ID 870956.5).

    • If your certification question is not answered there, review My Oracle Support Help - Certifications (Doc ID 870956.5) first; if you still cannot find the answer, contact Global Customer Support.

    • To contact Oracle Support, see: http://www.oracle.com/support/contact.html

    • For questions about the Oracle Store, see: https://shop.oracle.com/pls/ostore/f?p=dstore:home:0:::::&tz=8:00 and use the "Contact Us" link on that page.

    • To copy data from a page that has no Export button: 1. Select the rows you need, right-click and choose Copy Selected Row, or use Copy Selected Rows from the Actions menu. 2. Paste the content into a spreadsheet or into an email. For a whole table, open the table's Printable View and copy or print from there. Tables that do have an Export button can be exported to a CSV file.

    • For education and training questions, see Oracle Education: http://education.oracle.com and contact Oracle Education directly (rather than technical support) for anything you cannot find there.

    • Where do I find license codes/keys? See http://www.oracle.com/us/support/licensecodes/index.html, which lists license codes for the major products. If you still have questions about an Oracle license code, log a service request with your Customer Support Identifier (CSI).

    • How do I download software from the Software Delivery Cloud? See the Software Delivery Cloud note (Doc ID 1070969.1), then go to http://edelivery.oracle.com/ (Oracle Software Delivery Cloud), click the "Sign In/Register" button, accept the Terms & Restrictions and click Continue, search for your product and click Go, then select the media pack and click Continue to download.

    • Problems logging in to Oracle.com? 1. Go to www.oracle.com. 2. Click "Help" at the top of the page. 3. On the help page, open the "Oracle.com login FAQ". 4. Follow the instructions for your particular login problem.

    • Where do I get support for CRM On Demand? For CRMOD issues there are two options: (i) the CRMOD customer care page: http://www.oracle.com/us/products/applications/crmondemand/support/customer-care-support-337680.html and (ii) contacting CRMOD customer care directly: https://ebusiness.siebel.com/odcustomercare/contact/contact_cc.asp

    • How do I verify my account for the Oracle Software Delivery Cloud? If the Oracle Software Delivery Cloud asks you to verify your account or email address: 1. Log in to www.oracle.com with your Single Sign-On (SSO) account. 2. Click "Account". 3. At the "Please verify account" prompt, confirm your email ID. 4. Complete the verification and retry the download.

    • How do I find an Oracle Database patchset in My Oracle Support? 1. Open the patch search. 2. Set Product or Product Family to: Oracle database. 3. Set the release, for example Oracle 10.2.0.4. 4. Set the platform, for example Microsoft x64 (64-bit). 5. Set the patch type: Patchset/Minipack. 6. Click Go.

    • How do I request physical media (CD/DVD) or FTP delivery? Log in to My Oracle Support ( https://support.oracle.com ), use the "Contact Us" link, and raise a request for CD/DVD or FTP delivery, stating the product, release and platform you need.


  • WPF CommandParameter is NULL first time CanExecute is called

    - by Jonas Follesø
    I have run into an issue with WPF and Commands that are bound to a Button inside the DataTemplate of an ItemsControl. The scenario is quite straightforward. The ItemsControl is bound to a list of objects, and I want to be able to remove each object in the list by clicking a Button. The Button executes a Command, and the Command takes care of the deletion. The CommandParameter is bound to the object I want to delete. That way I know what the user clicked.

    A user should only be able to delete their "own" objects, so I need to do some checks in the "CanExecute" call of the Command to verify that the user has the right permissions. The problem is that the parameter passed to CanExecute is NULL the first time it's called, so I can't run the logic to enable/disable the command. However, if I make it always enabled, and then click the button to execute the command, the CommandParameter is passed in correctly. So that means that the binding against the CommandParameter is working.

    The XAML for the ItemsControl and the DataTemplate looks like this:

      <ItemsControl x:Name="commentsList"
                    ItemsSource="{Binding Path=SharedDataItemPM.Comments}"
                    Width="Auto" Height="Auto">
          <ItemsControl.ItemTemplate>
              <DataTemplate>
                  <StackPanel Orientation="Horizontal">
                      <Button Content="Delete" FontSize="10"
                              Command="{Binding Path=DataContext.DeleteCommentCommand, ElementName=commentsList}"
                              CommandParameter="{Binding}" />
                  </StackPanel>
              </DataTemplate>
          </ItemsControl.ItemTemplate>
      </ItemsControl>

    So as you can see, I have a list of Comment objects. I want the CommandParameter of the DeleteCommentCommand to be bound to the Comment object. So I guess my question is: has anyone experienced this problem before? CanExecute gets called on my Command, but the parameter is always NULL the first time - why is that?

    Update: I was able to narrow the problem down a little. I added an empty Debug ValueConverter so that I could output a message when the CommandParameter is data bound. It turns out the problem is that the CanExecute method is executed before the CommandParameter is bound to the button. I have tried to set the CommandParameter before the Command (as suggested), but it still doesn't work. Any tips on how to control it?

    Update 2: Is there any way to detect when the binding is "done", so that I can force re-evaluation of the command? Also, is it a problem that I have multiple Buttons (one for each item in the ItemsControl) that bind to the same instance of a Command object?

    Update 3: I have uploaded a reproduction of the bug to my SkyDrive: http://cid-1a08c11c407c0d8e.skydrive.live.com/self.aspx/Code%20samples/CommandParameterBinding.zip
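
    For reference, the "empty Debug ValueConverter" mentioned in the first update can be as simple as the sketch below (the class name is mine). Wire it into the CommandParameter binding as a static resource and watch the Output window to see when the value actually arrives:

      using System;
      using System.Diagnostics;
      using System.Globalization;
      using System.Windows.Data;

      // Pass-through converter used only to observe when the binding fires.
      public class DebugConverter : IValueConverter
      {
          public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
          {
              Debug.WriteLine("CommandParameter bound: " + (value ?? "NULL"));
              return value; // pass the value through unchanged
          }

          public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
          {
              return value;
          }
      }

      // Usage in XAML, assuming the converter is declared as a resource named debugConverter:
      //   CommandParameter="{Binding Converter={StaticResource debugConverter}}"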


  • Android: Trusting all Certificates using HttpClient over HTTPS

    - by psuguitarplayer
    Hi all, I recently posted a question regarding HttpClient over HTTPS (found here). I've made some headway, but I've run into new issues. As with my last problem, I can't seem to find an example anywhere that works for me. Basically, I want my client to accept any certificate (because I'm only ever pointing to one server), but I keep getting a javax.net.ssl.SSLException: Not trusted server certificate exception. So this is what I have:

      public void connect() throws A_WHOLE_BUNCH_OF_EXCEPTIONS {
          HttpPost post = new HttpPost(new URI(PROD_URL));
          post.setEntity(new StringEntity(BODY));

          KeyStore trusted = KeyStore.getInstance("BKS");
          trusted.load(null, "".toCharArray());

          SSLSocketFactory sslf = new SSLSocketFactory(trusted);
          sslf.setHostnameVerifier(SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);

          SchemeRegistry schemeRegistry = new SchemeRegistry();
          schemeRegistry.register(new Scheme("https", sslf, 443));

          SingleClientConnManager cm = new SingleClientConnManager(post.getParams(), schemeRegistry);
          HttpClient client = new DefaultHttpClient(cm, post.getParams());
          HttpResponse result = client.execute(post);
      }

    And here's the error I'm getting:

      W/System.err( 901): javax.net.ssl.SSLException: Not trusted server certificate
      W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl.startHandshake(OpenSSLSocketImpl.java:360)
      W/System.err( 901):   at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:92)
      W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:321)
      W/System.err( 901):   at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:129)
      W/System.err( 901):   at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:164)
      W/System.err( 901):   at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:119)
      W/System.err( 901):   at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:348)
      W/System.err( 901):   at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555)
      W/System.err( 901):   at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487)
      W/System.err( 901):   at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465)
      W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity.connect(MainActivity.java:129)
      W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity.access$0(MainActivity.java:77)
      W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity$2.run(MainActivity.java:49)
      W/System.err( 901): Caused by: java.security.cert.CertificateException: java.security.InvalidAlgorithmParameterException: the trust anchors set is empty
      W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.TrustManagerImpl.checkServerTrusted(TrustManagerImpl.java:157)
      W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl.startHandshake(OpenSSLSocketImpl.java:355)
      W/System.err( 901):   ... 12 more
      W/System.err( 901): Caused by: java.security.InvalidAlgorithmParameterException: the trust anchors set is empty
      W/System.err( 901):   at java.security.cert.PKIXParameters.checkTrustAnchors(PKIXParameters.java:645)
      W/System.err( 901):   at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:89)
      W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.TrustManagerImpl.<init>(TrustManagerImpl.java:89)
      W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.TrustManagerFactoryImpl.engineGetTrustManagers(TrustManagerFactoryImpl.java:134)
      W/System.err( 901):   at javax.net.ssl.TrustManagerFactory.getTrustManagers(TrustManagerFactory.java:226)
      W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.createTrustManagers(SSLSocketFactory.java:263)
      W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.<init>(SSLSocketFactory.java:190)
      W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.<init>(SSLSocketFactory.java:216)
      W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity.connect(MainActivity.java:107)
      W/System.err( 901):   ... 2 more


  • WinVerifyTrust API problem

    - by Shayan
    I'm using the WinVerifyTrust API on Windows XP and I don't want any kind of user interaction. But when I set the WTD_UI_NONE attribute, although it doesn't show any dialog boxes, it waits for a long time on files that would otherwise have required user interaction (I mean files for which, without the no-UI setting, the API would prompt the user). This is my code:

      WINTRUST_FILE_INFO FileData;
      memset(&FileData, 0, sizeof(FileData));
      FileData.cbStruct = sizeof(WINTRUST_FILE_INFO);
      wchar_t fileName[32769];
      FileData.pcwszFilePath = fileName;
      FileData.hFile = NULL;
      FileData.pgKnownSubject = NULL;

      /*
      WVTPolicyGUID specifies the policy to apply on the file.
      The WINTRUST_ACTION_GENERIC_VERIFY_V2 policy checks:

      1) The certificate used to sign the file chains up to a root
         certificate located in the trusted root certificate store. This
         implies that the identity of the publisher has been verified by
         a certification authority.

      2) In cases where user interface is displayed (which this example
         does not do), WinVerifyTrust will check whether the end entity
         certificate is stored in the trusted publisher store, implying
         that the user trusts content from this publisher.

      3) The end entity certificate has sufficient permission to sign
         code, as indicated by the presence of a code signing EKU or no EKU.
      */
      GUID WVTPolicyGUID = WINTRUST_ACTION_GENERIC_VERIFY_V2;
      WINTRUST_DATA WinTrustData;

      // Initialize the WinVerifyTrust input data structure.
      // Default all fields to 0.
      memset(&WinTrustData, 0, sizeof(WinTrustData));
      WinTrustData.cbStruct = sizeof(WinTrustData);

      // Use default code signing EKU.
      WinTrustData.pPolicyCallbackData = NULL;

      // No data to pass to SIP.
      WinTrustData.pSIPClientData = NULL;

      // Disable WVT UI.
      WinTrustData.dwUIChoice = WTD_UI_NONE;

      // No revocation checking.
      WinTrustData.fdwRevocationChecks = WTD_REVOKE_NONE;

      // Verify an embedded signature on a file.
      WinTrustData.dwUnionChoice = WTD_CHOICE_FILE;

      // Default verification.
      WinTrustData.dwStateAction = 0;

      // Not applicable for default verification of embedded signature.
      WinTrustData.hWVTStateData = NULL;

      // Not used.
      WinTrustData.pwszURLReference = NULL;

      // Default.
      WinTrustData.dwProvFlags = WTD_REVOCATION_CHECK_END_CERT;

      // This is not applicable if there is no UI because it changes
      // the UI to accommodate running applications instead of
      // installing applications.
      WinTrustData.dwUIContext = 0;

      // Set pFile.
      WinTrustData.pFile = &FileData;

      // WinVerifyTrust verifies signatures as specified by the GUID
      // and Wintrust_Data.
      lStatus = WinVerifyTrust(
          (HWND)INVALID_HANDLE_VALUE,
          &WVTPolicyGUID,
          &WinTrustData);
      printf("%x\n", lStatus);


  • Rx Reactive extensions: Unit testing with FromAsyncPattern

    - by Andrew Anderson
    The Reactive Extensions have a sexy little hook to simplify calling async methods:

      var func = Observable.FromAsyncPattern<InType, OutType>(
          myWcfService.BeginDoStuff, myWcfService.EndDoStuff);
      func(inData).ObserveOnDispatcher().Subscribe(x => Foo(x));

    I am using this in a WPF project, and it works great at runtime. Unfortunately, when trying to unit test methods that use this technique I am experiencing random failures; about three out of every five executions of a test that contains this code fail.

    Here is a sample test (implemented using a Rhino/Unity auto-mocking container):

      [TestMethod()]
      public void SomeTest()
      {
          // arrange
          var container = GetAutoMockingContainer();

          container.Resolve<IMyWcfServiceClient>()
              .Expect(x => x.BeginDoStuff(null, null, null))
              .IgnoreArguments()
              .Do(new Func<Specification, AsyncCallback, object, IAsyncResult>(
                  (inData, asyncCallback, state) =>
                  {
                      return new CompletedAsyncResult(asyncCallback, state);
                  }));

          container.Resolve<IRepositoryServiceClient>()
              .Expect(x => x.EndRetrieveAttributeDefinitionsForSorting(null))
              .IgnoreArguments()
              .Do(new Func<IAsyncResult, OutData>((ar) =>
              {
                  return someMockData;
              }));

          // act
          var target = CreateTestSubject(container);
          target.DoMethodThatInvokesService();

          // Run the dispatcher for everything over background priority
          Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.Background, new Action(() => { }));

          // assert
          Assert.IsTrue(my operation ran as expected);
      }

    The problem that I see is that the code that I specified to run when the async action completed (in this case, Foo(x)) is never called. I can verify this by setting breakpoints in Foo and observing that they are never reached. Further, I can force a long delay after calling DoMethodThatInvokesService (which kicks off the async call), and the code is still never run. I do know that the lines of code invoking the Rx framework were called.

    Other things I've tried:

    I have attempted to modify the second-to-last line according to the suggestions here: Reactive Extensions Rx - unit testing something with ObserveOnDispatcher. No love.

    I have added .Take(1) to the Rx code as follows:

      func(inData).ObserveOnDispatcher().Take(1).Subscribe(x => Foo(x));

    This improved my failure rate to something like 1 in 5, but the failures still occurred.

    I have rewritten the Rx code to use the plain jane async pattern. This works, however my developer ego really would love to use Rx instead of boring old Begin/End.

    In the end I do have a workaround in hand (i.e. don't use Rx), however I feel that it is not ideal. If anyone has run into this problem in the past and found a solution, I'd dearly love to hear it.
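
    One more thing worth trying (a sketch of an idea, not a verified fix): instead of a single Invoke at Background priority, drain the dispatcher queue with a nested DispatcherFrame, so that anything ObserveOnDispatcher has posted is actually processed before the assert runs. The helper name is mine:

      using System;
      using System.Windows.Threading;

      static class DispatcherUtil
      {
          // Pump the current dispatcher until all work queued above
          // Background priority (including Rx's dispatcher notifications)
          // has been processed.
          public static void DoEvents()
          {
              var frame = new DispatcherFrame();
              Dispatcher.CurrentDispatcher.BeginInvoke(
                  DispatcherPriority.Background,
                  new Action(() => frame.Continue = false)); // exit once we reach Background
              Dispatcher.PushFrame(frame);
          }
      }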

    Read the article

  • How to avoid open-redirect vulnerability and safely redirect on successful login (HINT: ASP.NET MVC)

    - by Brad B.
    Normally, when a site requires that you be logged in before you can access a certain page, you are taken to the login screen, and after successfully authenticating you are redirected back to the originally requested page. This is great for usability - but without careful scrutiny, this feature can easily become an open-redirect vulnerability. Sadly, for an example of this vulnerability, look no further than the default LogOn action provided by ASP.NET MVC 2:

        [HttpPost]
        public ActionResult LogOn(LogOnModel model, string returnUrl)
        {
            if (ModelState.IsValid)
            {
                if (MembershipService.ValidateUser(model.UserName, model.Password))
                {
                    FormsService.SignIn(model.UserName, model.RememberMe);
                    if (!String.IsNullOrEmpty(returnUrl))
                    {
                        return Redirect(returnUrl); // open-redirect vulnerability HERE
                    }
                    else
                    {
                        return RedirectToAction("Index", "Home");
                    }
                }
                else
                {
                    ModelState.AddModelError("", "User name or password incorrect...");
                }
            }
            return View(model);
        }

    If a user is successfully authenticated, they are redirected to "returnUrl" (if it was provided via the login form submission). Here is a simple example attack (one of many, actually) that exploits this vulnerability:
    1. The attacker, pretending to be the victim's bank, sends the victim an email containing a link like this: http://www.mybank.com/logon?returnUrl=http://www.badsite.com
    2. Having been taught to verify the ENTIRE domain name (e.g., google.com = GOOD, google.com.as31x.example.com = BAD), the victim knows the link is OK - there isn't any tricky sub-domain phishing going on.
    3. The victim clicks the link, sees their actual familiar banking website, and is asked to log on.
    4. The victim logs on and is subsequently redirected to http://www.badsite.com, which is made to look exactly like the victim's bank's website, so the victim doesn't know he is now on a different site.
    5. http://www.badsite.com says something like "We need to update our records - please type in some extremely personal information below: [ssn], [address], [phone number], etc."
    6. The victim, still thinking he is on his banking website, falls for the ploy and provides the attacker with the information.
    Any ideas on how to maintain this redirect-on-successful-login functionality yet avoid the open-redirect vulnerability? I'm leaning toward splitting the "returnUrl" parameter into controller/action parts and using "RedirectToRouteResult" instead of simply "Redirect". Does this approach open any new vulnerabilities? Side note: I know this open redirect may not seem like a big deal compared to the likes of XSS and CSRF, but we developers are the only thing protecting our customers from the bad guys - anything we can do to make the bad guys' job harder is a win in my book. Thanks, Brad
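
    One straightforward answer is to only ever redirect to local URLs. A minimal sketch of such a guard (ASP.NET MVC 3 later shipped essentially this check as Url.IsLocalUrl; here it is written out so the intent is visible):

        // Accept "/path" and "~/path"; reject absolute URLs and
        // protocol-relative tricks like "//evil.com" or "/\evil.com".
        private static bool IsLocalUrl(string url)
        {
            if (String.IsNullOrEmpty(url))
                return false;
            return (url[0] == '/' && (url.Length == 1 ||
                        (url[1] != '/' && url[1] != '\\')))
                || (url.Length > 1 && url[0] == '~' && url[1] == '/');
        }

        // In LogOn, replace the unconditional Redirect:
        if (IsLocalUrl(returnUrl))
        {
            return Redirect(returnUrl);
        }
        return RedirectToAction("Index", "Home");

    This keeps the usability win (the user still bounces back to the page they asked for) while making http://www.badsite.com an impossible destination, and it avoids the route-parsing complexity of the RedirectToRouteResult idea.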

    Read the article

  • How to track deleted self-tracking entities in ObservableCollection without memory leaks

    - by Yannick M.
    In our multi-tier business application we have ObservableCollections of self-tracking entities that are returned from service calls. The idea is that we want to be able to get entities, add, update and remove them from the collection client-side, and then send these changes to the server side, where they will be persisted to the database. Self-tracking entities, as their name might suggest, track their state themselves. When a new STE is created, it has the Added state; when you modify a property, it sets the Modified state; it can also have the Deleted state, but this state is not set when the entity is removed from an ObservableCollection (obviously). If you want this behavior you need to code it yourself. In my current implementation, when an entity is removed from the ObservableCollection, I keep it in a shadow collection, so that when the ObservableCollection is sent back to the server, I can send the deleted items along, so Entity Framework knows to delete them. Something along the lines of:

        protected IDictionary<int, IList> DeletedCollections = new Dictionary<int, IList>();

        protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection)
        {
            var deletedEntities = new List<TEntity>();
            DeletedCollections[collection.GetHashCode()] = deletedEntities;
            collection.CollectionChanged += (o, a) =>
            {
                if (a.OldItems != null)
                {
                    deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
                }
            };
        }

    Now if the user decides to save his changes to the server, I can get the list of removed items and send them along:

        ObservableCollection<Customer> customers = MyServiceProxy.GetCustomers();
        customers.RemoveAt(0);
        MyServiceProxy.UpdateCustomers(customers);

    At this point the UpdateCustomers method will check my shadow collection for any removed items and send them along to the server side. This approach works fine, until you start to think about the life-cycle of these shadow collections. Basically, when the ObservableCollection is garbage-collected, there is no way of knowing that we need to remove the shadow collection from our dictionary. I came up with a complicated solution that basically does manual memory management in this case: I keep a WeakReference to the ObservableCollection and every few seconds I check whether the reference is still alive, and if not, I remove the shadow collection. But this seems like a terrible solution... I hope the collective genius of StackOverflow can shed light on a better one. Thanks!
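
    If .NET 4 is an option, ConditionalWeakTable was built for exactly this lifetime problem: it holds the shadow list only as long as its key (the ObservableCollection) is alive, so the shadow list becomes collectable the moment the collection does, with no polling thread. A minimal sketch under that assumption (it also sidesteps keying on GetHashCode(), which is not guaranteed unique per instance):

        using System.Runtime.CompilerServices; // ConditionalWeakTable

        protected readonly ConditionalWeakTable<object, IList> DeletedCollections =
            new ConditionalWeakTable<object, IList>();

        protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection)
        {
            var deletedEntities = new List<TEntity>();
            DeletedCollections.Add(collection, deletedEntities);
            collection.CollectionChanged += (o, a) =>
            {
                if (a.OldItems != null)
                {
                    deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
                }
            };
        }

        // Later, when saving:
        // IList deleted;
        // if (DeletedCollections.TryGetValue(collection, out deleted)) { ... }

    When the collection is collected, the table entry (and with it the shadow list) simply disappears; no WeakReference bookkeeping required.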

    Read the article

  • System.Data.SQLite parameter issue

    - by CasperT
    I have the following code:

        try
        {
            // Create connection
            SQLiteConnection conn = DBConnection.OpenDB();

            // Verify user input; normally you give dbType a size, but Text is an exception
            var uNavnParam = new SQLiteParameter("@uNavnParam", SqlDbType.Text) { Value = uNavn };
            var bNavnParam = new SQLiteParameter("@bNavnParam", SqlDbType.Text) { Value = bNavn };
            var passwdParam = new SQLiteParameter("@passwdParam", SqlDbType.Text) { Value = passwd };
            var pc_idParam = new SQLiteParameter("@pc_idParam", SqlDbType.TinyInt) { Value = pc_id };
            var noterParam = new SQLiteParameter("@noterParam", SqlDbType.Text) { Value = noter };
            var licens_idParam = new SQLiteParameter("@licens_idParam", SqlDbType.TinyInt) { Value = licens_id };

            var insertSQL = new SQLiteCommand(
                "INSERT INTO Brugere (navn, brugernavn, password, pc_id, noter, licens_id)" +
                "VALUES ('@uNameParam', '@bNavnParam', '@passwdParam', '@pc_idParam', '@noterParam', '@licens_idParam')",
                conn);
            insertSQL.Parameters.Add(uNavnParam); // replace parameter with verified user input
            insertSQL.Parameters.Add(bNavnParam);
            insertSQL.Parameters.Add(passwdParam);
            insertSQL.Parameters.Add(pc_idParam);
            insertSQL.Parameters.Add(noterParam);
            insertSQL.Parameters.Add(licens_idParam);

            insertSQL.ExecuteNonQuery(); // Execute query

            // Close connection
            DBConnection.CloseDB(conn);

            // Let the user know that it was changed successfully
            this.Text = "Succes! Changed!";
        }
        catch (SQLiteException e)
        {
            // Catch error
            MessageBox.Show(e.ToString(), "ALARM");
        }

    It executes perfectly, but when I view my "Brugere" table, it has inserted the values '@uNameParam', '@bNavnParam', '@passwdParam', '@pc_idParam', '@noterParam', '@licens_idParam' literally, instead of replacing them. I set a breakpoint and checked the parameters; they do have the correct assigned values, so that is not the issue either. I have been tinkering with this a lot now with no luck - can anyone help? Oh, and for reference, here is the OpenDB method from the DBConnection class:

        public static SQLiteConnection OpenDB()
        {
            try
            {
                // Gets connection string from app.config
                const string myConnectString = "data source=data;";
                var conn = new SQLiteConnection(myConnectString);
                conn.Open();
                return conn;
            }
            catch (SQLiteException e)
            {
                MessageBox.Show(e.ToString(), "ALARM");
                return null;
            }
        }
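
    The literal inserts follow directly from the SQL text: the placeholders are wrapped in single quotes, so SQLite treats each one as a string literal rather than a parameter marker (and the first one is also misspelled - the command text says @uNameParam while the parameter is named @uNavnParam, so it could never bind anyway). A sketch of the corrected command text:

        // Parameter markers must appear unquoted, and their names must match
        // the SQLiteParameter names exactly.
        var insertSQL = new SQLiteCommand(
            "INSERT INTO Brugere (navn, brugernavn, password, pc_id, noter, licens_id) " +
            "VALUES (@uNavnParam, @bNavnParam, @passwdParam, @pc_idParam, @noterParam, @licens_idParam)",
            conn);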

    Read the article

  • Asp.NET MVC ActionFilter cannot get Form Submit data

    - by Goden
    I want to use a custom action filter to manipulate the parameters to one action. The user inputs two names in a form; the action actually needs to take two ids; the action filter (in OnActionExecuting) should verify the input names and, if valid, convert them into two ids and replace them in the route data - because I don't want to put validation logic in the controller action. Here's part of the code. Routing info:

        routes.MapRoute(
            "Default", // Route name
            "{controller}/{action}", // URL with parameters
            new { controller = "Home", action = "Index" } // Parameter defaults
        );
        routes.MapRoute(
            "RelationshipResults", // Route name
            "Relationship/{initPersonID}/{targetPersonID}", // URL with parameters
            new { controller = "Relationship", action = "Results" });

    Form to submit (two input boxes, submitted via jQuery):

        <% using (Html.BeginForm("Results", "Relationship", FormMethod.Post, new { id = "formSearch" })) { %>
        ...
        <td align="left"><%: MvcWeibookWeb.Properties.Resource.Home_InitPersonName %></td>
        <td align="right"><%= Html.TextBox("initPersonName") %></td>
        <td rowspan="3" valign="top">
            <div id="sinaIntro">
                <%: MvcWeibookWeb.Properties.Resource.Home_SinaIntro %>
                <br />
                <%: MvcWeibookWeb.Properties.Resource.Genearl_PromotionSina %>
            </div>
        </td>
        </tr>
        <tr>
            <td align="left" width="90px"><%: MvcWeibookWeb.Properties.Resource.Home_TargetPersonName %></td>
            <td align="right"><%= Html.TextBox("targetPersonName") %></td>
        </tr>
        <tr>
            <td colspan="2" align="right">
                <a href="#" class="btn-HomeSearch" onclick="$('#formSearch').submit();"><%: MvcWeibookWeb.Properties.Resource.Home_Search %></a>
            </td>

    Action filter:

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            Sina.Searcher searcher = new Sina.Searcher(Sina.Processor.UserNetwork);
            String initPersonName, targetPersonName;
            // Form-submitted names; we need to process them and convert them
            // to ids before they enter the real controller.
            initPersonName = filterContext.RouteData.Values["initPersonName"] as String;
            targetPersonName = filterContext.RouteData.Values["targetPersonName"] as String;
            // do something to convert them to ids and replace
            ...

    Action/controller:

        [ValidationActionFilter]
        [HandleError]
        public ActionResult Results(Int64 initPersonName, Int64 targetPersonName)
        {
            ...

    My problem is: in the action filter, it never gets the two parameters "initPersonName" and "targetPersonName" - RouteData.Values doesn't contain those two keys... :(
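
    That is expected: RouteData.Values only contains values bound from the URL's route segments ({controller}/{action} here), and POSTed form fields never go through the route - they live on the request itself. A minimal sketch of the filter reading them from Request.Form and handing converted ids to the action via ActionParameters (ConvertToId is a hypothetical stand-in for your name-to-id lookup):

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var form = filterContext.HttpContext.Request.Form;
            string initPersonName = form["initPersonName"];
            string targetPersonName = form["targetPersonName"];

            // Validate here; if valid, hand the ids straight to the action's
            // parameters instead of rewriting route data.
            filterContext.ActionParameters["initPersonID"] = ConvertToId(initPersonName);
            filterContext.ActionParameters["targetPersonID"] = ConvertToId(targetPersonName);
        }

    For this to bind, the action's parameter names have to match what the filter writes - the Results action above would take initPersonID/targetPersonID rather than reusing the name parameters.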

    Read the article

  • Android AsyncTask context problem, help!

    - by dnkoutso
    I've been working with AsyncTasks in Android and I am dealing with a strange issue. Take a simple example: an Activity with one AsyncTask. The task does nothing spectacular in the background; it just sleeps for 8 seconds. At the end of the AsyncTask, in the onPostExecute() method, I simply set a button's visibility to View.VISIBLE, only to verify my results. Now, this works great until the user decides to change the phone's orientation while the AsyncTask is working (within the 8-second sleep window). I understand the Android activity life cycle and I know the activity gets destroyed and recreated. This is where the problem comes in. The AsyncTask refers to a button and apparently holds a reference to the context that started the AsyncTask in the first place. I would expect this old context (since the user caused an orientation change) to either become null, or the AsyncTask to throw an NPE for the reference to the button it is trying to make visible. Instead, no NPE is thrown; the AsyncTask thinks the button reference is not null and sets it to visible. The result? Nothing happens on the screen! I have tackled this by keeping and updating the context reference inside the AsyncTask, but this is cumbersome and prone to leaks. Here's the code:

        public class Main extends Activity {
            private Button mButton = null;
            private Button mTestButton = null;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                mButton = (Button) findViewById(R.id.btnStart);
                mButton.setOnClickListener(new OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        new taskDoSomething().execute(0l);
                    }
                });
                mTestButton = (Button) findViewById(R.id.btnTest);
            }

            private class taskDoSomething extends AsyncTask<Long, Integer, Integer> {
                @Override
                protected Integer doInBackground(Long... params) {
                    Log.i("LOGGER", "Starting...");
                    try {
                        Thread.sleep(8000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    return 0;
                }

                @Override
                protected void onPostExecute(Integer result) {
                    Log.i("LOGGER", "...Done");
                    mTestButton.setVisibility(View.VISIBLE);
                }
            }
        }

    Try executing it and, while the AsyncTask is working, change your phone's orientation.
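
    What happens here is that taskDoSomething is an inner class, so it keeps an implicit reference to the Main instance that created it; after rotation that old Activity (and its mTestButton) still exists as a plain object, so onPostExecute happily flips the visibility of a button that is no longer on screen. One era-appropriate pattern (before Fragments and setRetainInstance) is to hand the running task across the configuration change and re-attach it. A minimal sketch, where attach()/detach() are hypothetical helpers you would add to the task:

        // In the Activity: keep the task alive across rotation.
        @Override
        public Object onRetainNonConfigurationInstance() {
            if (mTask != null) {
                mTask.detach(); // task sets its activity field to null
            }
            return mTask;
        }

        // In onCreate(), after setContentView():
        mTask = (taskDoSomething) getLastNonConfigurationInstance();
        if (mTask != null) {
            mTask.attach(this); // task stores the freshly created Activity
        }

        // In the task, onPostExecute() resolves views through the current
        // Activity instead of a captured outer-class field:
        //   activity.findViewById(R.id.btnTest).setVisibility(View.VISIBLE);
        // (If activity is null - still detached - stash the result and apply
        // it in attach().)

    For this to work the task must be a static nested (or top-level) class, so the implicit outer reference disappears and the only Activity reference is the one you manage explicitly.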

    Read the article

  • DataTable identity column not set after DataAdapter.Update/Refresh on table with "instead of"-trigger

    - by Arno
    Within our unit tests we use plain ADO.NET (DataTable, DataAdapter) for preparing the database and checking the results, while the tested components themselves run under NHibernate 2.1. The .NET version is 3.5, the SQL Server version is 2005. The database tables have identity columns as primary keys. Some tables apply instead-of-insert/update triggers (this is due to backward compatibility, nothing I can change). The triggers generally work like this:

        create trigger dbo.emp_insert on dbo.emp
        instead of insert
        as
        begin
            set nocount on
            insert into emp ...
            select @@identity
        end

    The insert statement issued by the ADO.NET DataAdapter (generated on the fly by a thin ADO.NET wrapper) tries to retrieve the identity value back into the DataRow:

        exec sp_executesql N'
        insert into emp (...) values (...);
        select id, ... from emp where id = @@identity
        '

    But the DataRow's id column is still 0. When I remove the trigger temporarily, it works fine - the id column then holds the identity value set by the database. NHibernate, on the other hand, uses this kind of insert statement:

        exec sp_executesql N'
        insert into emp (...) values (...);
        select scope_identity()
        '

    This works; the NHibernate POCO has its id property correctly set right after flushing. That seems a little counter-intuitive to me, as I expected the trigger to run in a different scope, hence @@identity should be a better fit than scope_identity(). So I thought: no problem, I will use scope_identity() instead of @@identity under ADO.NET as well. But this has no effect; the DataRow value is still not updated accordingly. And now for the best part: when I copy and paste those two statements from SQL Server Profiler into a Management Studio query (that is, including "exec sp_executesql") and run them there, the results seem to be inverted! There the ADO.NET version works, and the NHibernate version doesn't (select scope_identity() returns null). I tried several times to verify, but to no avail. Of course this just shows the result set coming from the database; whatever happens inside NHibernate and ADO.NET is another topic. Also, several session properties defined via T-SQL SET differ between the two scenarios (Management Studio query vs. application at runtime). This is a real puzzle to me. I would be happy about any insights. Thank you!
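
    One detail worth chasing: the trigger itself ends with "select @@identity", so every insert through it produces an extra result set ahead of anything the batch selects. A DataAdapter refreshing a row reads the first result set it gets back - here the trigger's, not the "select id, ..." in the batch - and that alone can explain the DataRow staying at 0 while Management Studio, which shows all result sets, looks fine. A sketch of the trigger with the stray select removed (assuming nothing else consumes it):

        -- Do the insert silently; let the caller retrieve the identity.
        create trigger dbo.emp_insert on dbo.emp
        instead of insert
        as
        begin
            set nocount on
            insert into emp (...)
            select ... from inserted
            -- no trailing "select @@identity" here
        end

    In the batch, @@identity (not scope_identity()) is then the right choice for the ADO.NET refresh query: the actual insert happens inside the trigger's scope, so scope_identity() in the outer batch sees no insert in its own scope and returns null - which also matches what you observed from NHibernate's statement in Management Studio.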

    Read the article

  • Cannot integrate Gallio MBUnit with Team City

    - by Bernard Larouche
    I have been trying to get my MBUnit test suite to work on TeamCity for many days now without any success. My solution builds fine; the problem is with my tests. After googling for Gallio integration with TeamCity I tried many ways to make this work, and I think I am close, but I need help. I have included the Gallio bin directory in my repository and also on my TeamCity server. Here is my build runner setup in TeamCity:

        Build runner: MSBuild
        Build file path: Myproject.msbuild
        Targets: RebuildSolution RunTests

    Here is the Myproject.msbuild file I created and included in the source control trunk directory:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
            <!-- This is needed by MSBuild to locate the Gallio task -->
            <UsingTask AssemblyFile="C:\Gallio\bin\Gallio.MSBuildTasks.dll" TaskName="Gallio" />
            <!-- Specify the test assemblies -->
            <ItemGroup>
                <TestAssemblies Include="C:\_CBL\CBL\CoderForTraders\Source\trunk\UnitTest\DomainModel.Tests\bin\Debug\CBL.CoderForTraders.DomainModel.Tests.dll" />
            </ItemGroup>
            <Target Name="RunTests">
                <Gallio IgnoreFailures="false" Assemblies="@(TestAssemblies)"
                        RunnerExtensions="TeamCityExtension,Gallio.TeamCityIntegration">
                    <!-- This tells MSBuild to store the output value of the task's
                         ExitCode property into the project's ExitCode property -->
                    <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
                </Gallio>
                <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
            </Target>
            <Target Name="RebuildSolution">
                <Message Text="Starting to Build"/>
                <MSBuild Projects="CoderForTraders.sln" Properties="Configuration=Debug" Targets="Rebuild" />
            </Target>
        </Project>

    Here are the errors displayed by TeamCity:

        error MSB4064: The "Assemblies" parameter is not supported by the "Gallio" task. Verify the parameter exists on the task, and it is a settable public instance property
        error MSB4063: The "Gallio" task could not be initialized with its input parameters.

    Thanks for your help.
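
    MSB4064 means MSBuild found the task but the task type has no settable "Assemblies" property, which points at a version mismatch between the sample you followed and the Gallio.MSBuildTasks.dll you checked in. If your copy is a Gallio 3.x build, the parameter that takes the test assemblies was renamed to "Files" (I am inferring the version from the error, so verify against the docs of your exact release). A sketch of the adjusted target:

        <Target Name="RunTests">
            <!-- "Files" instead of "Assemblies" for newer Gallio task builds -->
            <Gallio IgnoreFailures="false" Files="@(TestAssemblies)"
                    RunnerExtensions="TeamCityExtension,Gallio.TeamCityIntegration">
                <Output TaskParameter="ExitCode" PropertyName="ExitCode"/>
            </Gallio>
            <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" />
        </Target>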

    Read the article

  • Certificates Validations Issues

    - by user298331
    Hi all, I am facing some certificate-related issues and need some help resolving them.

    Requirements: security mode="TransportWithMessageCredential", binding name="basicHttpEndpointBinding", certificateValidationMode="ChainTrust", revocationMode="Online".

    Certificates. Service certificates - transport level: XXXX.cer; the certificate name is my system's DNS name and it chains up to a root, RootTrnCA.cer. This is used to enable HTTPS, but I am not validating transport-level certificates. Message level: services.ca.iim (VXXXX.cer -> Act.Mac.Ca -> services.ca.iim). Client certificates - transport level: ZZZZ.cer; again the certificate name is my system's DNS name and it chains up to RootTrnCA.cer; I am ignoring transport certificate errors in code. Message level: client.ca.iim (VXXXX.cer -> Act.Mac.Ca -> client.ca.iim).

    Issues: 1) The response message does not contain the service certificate's signature in the SOAP header, so I am not able to validate the server certificate details in client code. 2) If I use TransportWithMessageCredential and ChainTrust, I get the error: "The revocation function was unable to check revocation because the revocation server was offline." So please verify the configuration below and correct me if I am wrong.

    I am attaching the client certificate in code:

        objProxy.ChannelFactory.Credentials.ClientCertificate.SetCertificate(
            System.Security.Cryptography.X509Certificates.StoreLocation.LocalMachine,
            System.Security.Cryptography.X509Certificates.StoreName.My,
            X509FindType.FindBySubjectName,
            "client.ca.iim");

    Client config:

        <binding name="XXXXXServiceHost.Http" closeTimeout="00:01:00" openTimeout="00:01:00"
                 receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false"
                 bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard"
                 maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                 messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                 useDefaultWebProxy="true">
            <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                          maxBytesPerRead="4096" maxNameTableCharCount="16384" />
            <security mode="TransportWithMessageCredential">
                <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
                <message clientCredentialType="Certificate" algorithmSuite="Default" />
            </security>
        </binding>
        ...
        <client>
            <endpoint address="https://XXXXXX/XXXServiceHost/MemberSvc.svc/soap11"
                      binding="basicHttpBinding" bindingConfiguration="XXXServiceHost.Http"
                      contract="ServiceReference1.IMemberIBA" name="XXXServiceHost.Http" />
        </client>

    Please verify both and help me resolve the two issues above. Thanks, Babu
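
    On issue 2: with revocationMode="Online", WCF's chain validation has to reach the CRL distribution point of every certificate in the chain, and a private CA whose CRL endpoint is unreachable produces exactly the "revocation server was offline" error. A sketch of one way out, assuming your security policy permits relaxing the check (these are standard serviceBehaviors settings; the equivalent on the client side lives under clientCredentials/serviceCertificate):

        <serviceCredentials>
          <clientCertificate>
            <!-- "Offline" uses cached CRLs; "NoCheck" skips revocation
                 entirely. Prefer fixing CRL reachability if you can. -->
            <authentication certificateValidationMode="ChainTrust"
                            revocationMode="Offline" />
          </clientCertificate>
        </serviceCredentials>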

    Read the article

  • dojo/dijit ContentPane setting content

    - by Kitson
    I am trying to append some XML retrieved via dojo.xhrGet to a dijit.layout.ContentPane. Everything works OK in Firefox (3.6), but in Chrome I only get back 'undefined' in the particular ContentPane. My code looks something like this:

        var cp = dijit.byId("mapDetailsPane");
        cp.destroyDescendants(); // there are some existing widgets/content I want
                                 // to clear and replace with the new content

        var xhrData = {
            url: "getsomexml.php",
            handleAs: "xml",
            preventCache: true,
            failOk: true
        };
        var deferred = new dojo.xhrGet(xhrData);
        deferred.addCallback(function(data) {
            console.log(data.firstChild);        // get a DOM object in both Firebug
                                                 // and Chrome Dev Tools
            cp.attr("content", data.firstChild); // XML appended to the doc in Firefox,
                                                 // but "undefined" in Chrome
        });

    Because I get back a valid Document object in both browsers, I know xhrGet is working fine, but there seems to be some difference in how the content is being set. Is there a better way to handle the return data from the request? There was a request to see my XML, so here is part of it:

        <?xml version="1.0" encoding="UTF-8" standalone="no"?>
        <svg xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg"
             xmlns:xlink="http://www.w3.org/1999/xlink" version="1.1" width="672" height="1674">
            <defs>
                <style type="text/css">
                    <![CDATA[ ...bunch of CSS... ]]>
                </style>
                <marker refX="0" refY="0" orient="auto" id="A00End" style="overflow: visible;">
                ...bunch more defs...
            </defs>
            <g id="endpoints">
                ...bunch of SVG with a some...
                <a xlink:href="javascript:gotoLogLine(16423,55);" xlink:type="simple">...more svg...</a>
            </g>
        </svg>

    I have run the output XML through the W3C validator to verify it is valid. Like I said before, it works in Firefox 3.6. I tried it in Safari and got the same "undefined", so it seems to be related to WebKit.
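
    One workaround to try: hand the ContentPane a string instead of a raw node from the XML document. Nodes from a separate XML document generally cannot be adopted into the HTML page without an importNode step, and WebKit is stricter about that than Gecko, which fits the Firefox/Chrome split you are seeing. A minimal sketch using XMLSerializer (available in both browsers):

        deferred.addCallback(function(data) {
            // Serialize the SVG root back to markup and let the ContentPane
            // parse it as content, rather than passing the foreign DOM node.
            var svgText = new XMLSerializer().serializeToString(data.firstChild);
            cp.attr("content", svgText);
        });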

    Read the article

  • Connecting to MSSQL 2008 from Java

    - by Xinus
    I am trying to connect to an MSSQL 2008 server from Java. Here is the program:

        import java.sql.*;

        public class connectURL {

            public static void main(String[] args) {
                // Create a variable for the connection string.
                String connectionUrl = "jdbc:sqlserver://localhost/SQLEXPRESS/Databases/HelloWorld:1433;";
                        // + "databaseName=HelloWorld;integratedSecurity=true;";

                // Declare the JDBC objects.
                Connection con = null;
                Statement stmt = null;
                ResultSet rs = null;

                try {
                    // Establish the connection.
                    Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
                    con = DriverManager.getConnection(connectionUrl);

                    // Create and execute an SQL statement that returns some data.
                    String SQL = "SELECT TOP 10 * FROM Person.Contact";
                    stmt = con.createStatement();
                    rs = stmt.executeQuery(SQL);

                    // Iterate through the data in the result set and display it.
                    while (rs.next()) {
                        System.out.println(rs.getString(4) + " " + rs.getString(6));
                    }
                }
                // Handle any errors that may have occurred.
                catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    if (rs != null) try { rs.close(); } catch (Exception e) {}
                    if (stmt != null) try { stmt.close(); } catch (Exception e) {}
                    if (con != null) try { con.close(); } catch (Exception e) {}
                }
            }
        }

    But it fails with this error:

        com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host localhost/SQLEXPRESS/Databases/HelloWorld, port 1433 has failed. Error: "null. Verify the connection properties, check that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port, and that no firewall is blocking TCP connections to the port.".
            at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
            at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1049)
            at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:833)
            at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:716)
            at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:841)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at connectURL.main(connectURL.java:43)

    I have followed all the instructions given in http://teamtutorials.com/database-tutorials/configuring-and-creating-a-database-in-ms-sql-2008. What can the problem be?
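
    The URL is the problem: "localhost/SQLEXPRESS/Databases/HelloWorld" reads like a path through Management Studio's object tree, but the JDBC URL wants host, instance, and database as separate properties. A sketch of the two usual forms (assuming the default SQLEXPRESS instance name; note also that SQL Express does not listen on TCP 1433 out of the box - enable TCP/IP and the SQL Server Browser service in SQL Server Configuration Manager, or pin a fixed port):

        // Name the instance and database explicitly:
        String connectionUrl =
            "jdbc:sqlserver://localhost;instanceName=SQLEXPRESS;"
            + "databaseName=HelloWorld;integratedSecurity=true;";

        // Or, once TCP/IP is enabled on a known port:
        String connectionUrl2 =
            "jdbc:sqlserver://localhost:1433;databaseName=HelloWorld;"
            + "integratedSecurity=true;";

    (integratedSecurity=true additionally requires the driver's sqljdbc_auth.dll on the Java library path; use the user= and password= properties for a SQL login instead.)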

    Read the article

  • Can't Set Crystal Report Selection Formula Programatically

    - by eidylon
    Hello all. First, I can't stand Crystal! Okay, that's off my chest... Now, we have an old VB6 app we maintain for a client, which uses the Crystal automation library to programmatically change the record-selection formulas in a bunch of Crystal Reports 8.5 reports. There are two reports which are ALMOST identical. I had to change them recently to add another field from another table. When I added the table to the reports, though, while it appeared in the visual designer, it was not added to the FROM clause of the SQL statement. So I manually edited the SQL statement to add the additional join. OK, works great. If I run the reports in Crystal preview mode, they work exactly as expected. Then the users went to test the changes from within the VB app. One of the reports works fine and dandy. The other report, however, fails to set the selection formula as expected. The code sets the selection formulas using the function PESetSelectionFormula. I verified via a step-through examination of the variables that the string being passed into the function as the new selection formula is correct. The call to PESetSelectionFormula seems to work and returns a value of 1, which, as near as I can find anywhere, indicates success. (The other report, which works fine from code, also returns 1.) However, the report fails with the error: Error Code: 534 - Error detected by database DLL. For debugging purposes, the code dumps out the SQL string currently being used by the report. The SQL coming out of the report is:

        SELECT ... FROM ... WHERE ORDER BY ...

    As you can see, the WHERE clause is blank, which I would imagine is why the database DLL is upchucking on this statement. I don't understand why the automation library is not setting the WHERE clause even though PESetSelectionFormula is given a valid string and returns success. I thought perhaps it was because I had manually edited the SQL in the report to add the table it wasn't adding, but I did the same thing in the other nearly identical report, and that one works fine. Does anyone have any ideas why PESetSelectionFormula might report success but not actually do anything? P.S. I have already tried doing a Database > Verify Database from the menus, and that said the report was up to date and did not help at all.

    Read the article

< Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >