Search Results

Search found 27931 results on 1118 pages for 'custom configuration'.

Page 85/1118 | < Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >

  • Restarting Haproxy Gracefully

    - by Anand Gupta
    As per various blogs, HAProxy can be gracefully restarted using the following command:

        sudo haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)

    To verify this, I set up an Apache Bench script which continuously sent requests to HAProxy. Ideally, restarting the server should not have affected the Apache Bench run at all. But it seems that whenever HAProxy is restarted, the Apache Bench script terminates and the connection to the load balancer is lost. Here are the details of my HAProxy configuration file:

        global
            nbproc 4
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            pidfile /var/run/haproxy.pid
            stats socket /home/ubuntu/haproxy.sock
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen webstats
            bind 0.0.0.0:1000
            stats enable
            mode http
            stats uri /lb?stats
            stats auth anand:aaaaaaaa
            #stats refresh

        listen web-farm 0.0.0.0:80
            mode http
            balance roundrobin
            option httpchk HEAD /index.php HTTP/1.0
            server server2.com 1.1.1.1:80
            server serve1.com 1.1.1.2:80

    Please let me know what I am missing here.

    Read the article

  • jQuery Ajax Error Handling – How To Show Custom Error Messages

    - by schnieds
    So you want to make your error feedback nice for your users… kind of an ironic statement, isn't it? We obviously want to avoid errors if at all possible in our applications, but when errors do occur we want to provide some nice feedback to our users. The worst thing that can happen is to blow up a huge server exception page when something goes wrong; equally bad is not providing any feedback at all and leaving the user in the dark. I do not recommend displaying actual .NET Framework exception messages or stack traces to the user in most instances: they are usually not helpful to the user and can be a security concern. ... [Read More] (Aaron Schnieder, http://www.churchofficeonline.com)
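    The post is truncated here, but the idea it describes can be sketched roughly as follows. This is an illustrative C# handler (the class and messages are hypothetical, not from the article): log the real exception on the server and hand the jQuery ajax error callback only a friendly, non-revealing message.

        // Hypothetical ASP.NET handler sketching the idea from the post above:
        // keep exception details server-side, return only a safe message.
        using System;
        using System.Web;

        public class SaveWidgetHandler : IHttpHandler   // name is illustrative only
        {
            public void ProcessRequest(HttpContext context)
            {
                try
                {
                    // ... do the real work the ajax call asked for ...
                }
                catch (Exception ex)
                {
                    // log the full detail where only operators can see it
                    System.Diagnostics.Trace.TraceError(ex.ToString());

                    // surface a friendly message for the jQuery error callback
                    context.Response.StatusCode = 500;
                    context.Response.ContentType = "text/plain";
                    context.Response.Write("Sorry, something went wrong saving your changes.");
                    return;
                }
                context.Response.Write("OK");
            }

            public bool IsReusable { get { return false; } }
        }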

    Read the article

  • QoS for Cisco Router to Prioritize Voice and Interactive Traffic

    - by TJ Huffington
    I have a Cisco 891W NATing voice and data to the internet over a 10 Mbit/2 Mbit connection. Voice traffic gets degraded when I upload large files, and pings time out as well. I tried to configure a QoS policy, but it's basically not doing anything: voice traffic still degrades when the upload bandwidth gets saturated. Here is my current configuration:

        class-map match-any QoS-Transactional
         match protocol ssh
         match protocol xwindows
        class-map match-any QoS-Voice
         match protocol rtp audio
        class-map match-any QoS-Bulk
         match protocol secure-nntp
         match protocol smtp
         match protocol tftp
         match protocol ftp
        class-map match-any QoS-Management
         match protocol snmp
         match protocol dns
         match protocol secure-imap
        class-map match-any QoS-Inter-Video
         match protocol rtp video
        class-map match-any QoS-Voice-Control
         match access-group name Voice-Control

        policy-map QoS-Priority-Output
         class QoS-Voice
          priority percent 25
          set dscp ef
         class QoS-Inter-Video
          bandwidth remaining percent 10
          set dscp af41
         class QoS-Transactional
          bandwidth remaining percent 25
          random-detect dscp-based
          set dscp af21
         class QoS-Bulk
          bandwidth remaining percent 5
          random-detect dscp-based
          set dscp af11
         class QoS-Management
          bandwidth remaining percent 1
          set dscp cs2
         class QoS-Voice-Control
          priority percent 5
          set dscp ef
         class class-default
          fair-queue

        interface FastEthernet8
         bandwidth 1024
         bandwidth receive 20480
         ip address dhcp
         ip nat outside
         ip virtual-reassembly
         duplex auto
         speed auto
         auto discovery qos
         crypto map mymap
         max-reserved-bandwidth 80
         service-policy output QoS-Priority-Output

        crypto map mymap 10 ipsec-isakmp
         set peer 1.2.3.4 default
         set transform-set ESP-3DES-SHA
         match address 110
         qos pre-classify
        !

    fa8 is my connection to the internet. Voice traffic goes over a VPN ("mymap") to the SIP server; that's why I specified "qos pre-classify", which I believe is the way to classify traffic going over the VPN. However, even when I ping a public IP while saturating the upload bandwidth, the latency is exceptionally high. Is this configuration correct? Are there any suggestions that might make this work for my setup? Thanks in advance.

    Read the article

  • Access UIButton's titleLabel with custom background image

    - by Meltemi
    Quick one: I have a custom button image that I want to use, but I still want to change the text of the button with setTitle:forState:. When I use a "custom" button in IB I lose the title view. When I use Rounded Rect in IB the custom image appears scrunched within the Rounded Rect button. Why is this so difficult?

    Read the article

  • Prevent Custom Entity being deleted from Entity Framework during Update Model Wizard

    - by rbg
    In Entity Framework v1, if you create a custom entity to map to a stored procedure during FunctionImport, and then select "Update Model from Database", the Update Model wizard removes the custom entity (added manually to the SSDL XML prior to the function import) from the SSDL. Does anyone know if this limitation has been dealt with in Entity Framework v4? In other words, is there a way to prevent the wizard from removing the custom entity from the storage schema (SSDL) during "Update Model from Database"?

    Read the article

  • Knockoutjs - stringify to handle observables and custom events

    - by Renso
    Goal: Once your viewmodel has been built and populated with data, at some point the goal of it all is to persist the data to the database (or some other medium). Regardless of where you want to save it, your client-side viewmodel needs to be converted to a JSON string and sent back to the server. Environment considerations: jQuery 1.4.3+, Knockoutjs version 1.1.2. How to: So let's set the stage: you are using Knockoutjs and you have a viewmodel with some Knockout dependencies. You want to make sure it is in the proper JSON format and, via ajax, post it to the server for persistence. First order of business is to deal with the viewmodel (JSON) object. To most people the JSON stringifier sounds familiar: it converts JavaScript data structures into JSON text. JSON does not support cyclic data structures, so be careful not to give cyclical structures to the JSON stringifier. You may ask, is this the best way to do it? What about those observables and other Knockout properties that I don't want to persist, or whose actual value I want persisted rather than their function, etc.? Not sure if you were aware, but KO already has a method, ko.utils.stringifyJson(); it's mostly just a wrapper around JSON.stringify (which is native in some browsers, and can be made available by referencing json2.js in others). What it does that the regular stringify does not is automatically convert an observable, dependentObservable, or observableArray to its underlying value in the JSON. Hold on! There is a new feature in this version of Knockout, ko.toJSON. It is part of the core library, it will clone the view model's object graph so you don't mess it up after you have stringified it, and it unwraps all its observables. It's smart enough to avoid reference cycles. Since you are using the MVVM pattern, it is assumed you are not trying to reference DOM nodes from your view model. Wait a minute. I can already see this info on the http://knockoutjs.com/examples/contactsEditor.html website, so why mention it all here? First of all, this is a much nicer blog, no orange. At this time you may want to have a look at that page and see what I am talking about. See the save event, and how they stringify only the view model's contacts? That's cool, but what if your view model is itself the representation of the object you want to persist, meaning it has no property that represents the JSON object you want to persist: it is the view model itself. The example at http://knockoutjs.com/examples/contactsEditor.html assumes you have a list of contacts you may want to persist. In the example here, you want to persist the view model itself.
    The viewmodel here looks something like this:

        var myViewmodel = {
            accountName: ko.observable(""),
            accountType: ko.observable("Active")
        };
        myViewmodel.isItActive = ko.dependentObservable(function () {
            return myViewmodel.accountType() == "Active";
        });
        myViewmodel.clickToSaveMe = function() {
            SaveTheAccount();
        };

    Here is the function in charge of saving the account:

        function SaveTheAccount() {
            $.ajax({
                data: ko.toJSON(viewmodel),
                url: $('#ajaxSaveAccountUrl').val(),
                type: "POST",
                dataType: "json",
                async: false,
                success: function (result) {
                    if (result && result.Success == true) {
                        $('#accountMessage').html('<span class="fadeMyContainerSlowly">The account has been saved</span>').show();
                        FadeContainerAwaySlowly();
                    }
                },
                error: function (xmlHttpRequest, textStatus, errorThrown) {
                    alert('An error occurred: ' + errorThrown);
                }
            }); //ajax
        };

    Try running this and your browser will eventually freeze up or crash. Firebug will tell you that you have a repetitive call to the first function in your model that keeps firing infinitely. What is happening is that Knockout serializes the view model to a JSON string by traversing the object graph and firing off the functions, again and again. Not sure why it does that, but it does. So what is the workaround? Nullify your function calls and then post the model:

        viewmodel.clickToSaveMe = null;
        var lightweightModel = viewmodel;
        ...
        data: ko.toJSON(lightweightModel),

    So then I traced the JSON string on the server and found it having issues with primitive types (C#, by the way). So I changed ko.toJSON(model) to ko.toJS(model), and that solved my problem. Of course you could just create a property on the viewmodel for the account itself, so you only have to serialize that property and not the entire viewmodel. If that is an option then that would be the way to go. If your view model contains other properties that you also want to post, then that would not be an option, and now you'll know what to watch out for. Hope this helps.

    Read the article

  • Grails Mail port configuration

    - by bsreekanth
    Hello, I am trying to send mail through the Grails mail plugin. I configured it according to the documentation, and also followed a few blog posts (http://blog.lourish.com/2010/04/02/sending-asynchronous-html-email-in-grails-with-activemq-jms-and-gmail/). That post mentions that the closure style of declaring the configuration overrides the others, but that is not true here. Anyway, I tried both approaches, but it seems the port still uses the default SMTP one, and I get the exception below.

        exception: org.springframework.mail.MailSendException: Mail server connection failed;
        nested exception is javax.mail.MessagingException: Could not connect to SMTP host: localhost, port: 25;
        nested exception is: java.net.ConnectException: Connection refused: connect

    I then wrote a small program directly using the Java Mail library, and I could send the mail with that. The configuration is shown below. I tried the additional config "mail.smtp.port":"465", but no change; I also used the parameters mentioned in the above blog post, with the same result.

        grails {
            mail {
                host = "smtp.gmail.com"
                port = "465"
                username = "[email protected]"
                password = "mypwd"
                props = ["mail.smtp.auth": "true",
                         // "mail.smtp.port": "465",
                         "mail.smtp.socketFactory.port": "465",
                         "mail.smtp.socketFactory.class": "javax.net.ssl.SSLSocketFactory",
                         "mail.smtp.socketFactory.fallback": "false"]
            }
        }

    Thanks in advance. Update: It is not a port or firewall issue; when I made a Grails application from scratch and tried the same config, everything works. I also asked in the Grails forum: http://grails.1312388.n4.nabble.com/grails-mail-mailSender-does-not-have-config-values-td2237704.html#a2237704 . Hope to get a lead to try.

    Read the article

  • Professional Custom Logo Design vs. Mr. Right

    John is an ex-Marine and ex-employee of General Motors. He recently lost his job working as a welder on the assembly lines of one of GM's manufacturing plants. John has traveled a lot and knows a lot a... [Author: Emily Matthew - Web Design and Development - March 31, 2010]

    Read the article

  • Custom UIView built with Interface Builder accessible/positionable via Interface Builder

    - by Nader
    This shouldn't be this confusing. I have a custom UIView with a bunch of controls on it: UILabels, buttons, etc. I've created this nib using Interface Builder. I want to be able to position this custom UIView on another UIView using Interface Builder. How do I link my custom UIView class to the nib? initWithCoder gets called, but I want this class to get loaded from the nib. Thanks

    Read the article

  • How to use Blazeds with a custom classloader?

    - by festerwim
    Hi, has anybody tried using a custom classloader with BlazeDS? We have a web application using BlazeDS and we can convert Java objects into ActionScript objects and back without problems in the main application. However, we also have a plug-in mechanism based on a custom classloader. BlazeDS cannot map the types contained in the jar files of that custom classloader, since I don't know how to tell BlazeDS about them. Has anybody already done this? The livedocs for TypeMarshallingContext show a setClassloader() method, but since the context seems to be a singleton, I assume this will not work if you have multiple custom classloaders (we have one for each plugin that is deployed). regards, Wim

    Read the article

  • A couple of questions on exceptions/flow control and the application of custom exceptions

    - by dotnetdev
    1) Custom exceptions can help make your intentions clear. How can this be? The intention is to handle or log the exception, regardless of whether the type is built-in or custom. The main reason I use custom exceptions is to avoid using one exception type to cover the same problem in different contexts (e.g. a parameter being null in system code, which may be affected by an external factor, versus an empty shopping basket). However, the partition between system code and business-domain code using different exception types seems very obvious, and not making the most of custom exceptions. Related to this, if custom exceptions cover the business exceptions, I could also find all the places which are sources of exceptions at the business-domain level using "Find All References". Is it worth adding exceptions if you check a method's arguments for null, use them a few times, and then add the catch? Is it a realistic risk that an external factor or some other freak cause could make an argument null after it was checked anyway? 2) What does it mean when people say exceptions should not be used to control the flow of a program, and why not? I assume this refers to something like:

        if (exceptionVariable != null)
        {
        }

    Is it generally good practice to fill every variable in an exception object? As a developer, do you expect every possible variable to be filled in by another coder?
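    For reference, a minimal sketch of the kind of business-domain exception this question is contrasting with the built-in types (the class and property names are illustrative, not taken from the question):

        using System;

        // The type itself states the business intent, so a catch block (or
        // "Find All References" on the type) points straight at the rule that
        // was violated, instead of a generic ArgumentException.
        public class EmptyShoppingBasketException : Exception
        {
            public EmptyShoppingBasketException(string customerId)
                : base("Checkout was attempted with an empty shopping basket.")
            {
                CustomerId = customerId;
            }

            public string CustomerId { get; private set; }
        }

        public class CheckoutService
        {
            public void Checkout(string customerId, int itemCount)
            {
                if (itemCount == 0)
                {
                    // a business-rule violation, not a system fault
                    throw new EmptyShoppingBasketException(customerId);
                }
                // ... proceed with checkout ...
            }
        }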

    Read the article

  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address the problem.

    Scenario: Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to reports which were quick-running or pre-generated and which could be accessed in real time, on demand. One of the key aims of the web service was to provide a simple, generic interface so applications could get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service was working well for a period of time.

    The Problem: The problem we encountered was that reports were now also required to be available to batch processes. The original design was optimised for low latency so users would enjoy a positive experience; however, when the batch processes started to request 250+ concurrent reports over an extended period of time you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused are:
        - Users may be affected, and the latency of on-demand reports was significantly worse
        - The Cognos infrastructure was not scaled sufficiently to cope with these long peaks of load
        - From a cost perspective it just isn't feasible to scale the Cognos infrastructure to handle the load when it is only needed for a couple-of-hour window each night
    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load, but we could make some changes to the way it accessed the reports.

    The Approach: My idea was to introduce a throttling mechanism between the web service façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would then queue these requests, process them behind the scenes, and make a call back to the batch application to provide the report once it had been produced. In terms of technology we had some limitations, because we were not able to use WCF or IIS7, where the MSMQ-activated WCF services could have helped; but we did have MSMQ as an option, and I thought NServiceBus could do just the job to help us here.
    The flow of how this would work was as follows:
        1. The batch application sends a request for a report to the web service
        2. The web service uses NServiceBus to send the message to a queue
        3. The NServiceBus generic host runs as a Windows service with a message handler which subscribes to these messages
        4. The message handler gets the message and accesses the report from Cognos
        5. The message handler calls back to the original batch application; this is decoupled because the calling application provides a callback URL
        6. The report gets into the batch application and is processed as normal
    The key points are that an application wanting to take advantage of the batch-driven reports needs to do the following:
        - Implement our callback contract
        - Make a call to the service providing a callback URL
        - Provide a correlation ID so it knows how to tie each response back to its request

    What does NServiceBus offer in this solution? This scenario is not the typical messaging service-bus type of solution people implement with NServiceBus, but it did offer the following:
        - Simplified interaction with MSMQ
        - The ability to configure the number of processes working through the queue, so we could find a balance between the load on Cognos and the applications' end-to-end processing time
        - Retries and a way to manage failed messages
        - A high-availability setup
    The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message, and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level things that would have taken ages to write to any kind of robust level.

    Conclusion: With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.
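    For the curious, a rough sketch of the kind of message and handler described above. The message and helper types are hypothetical (CognosGateway and CallbackClient stand in for the real report and callback code); only the IHandleMessages<T> handler shape is the standard NServiceBus pattern:

        using NServiceBus;

        // Hypothetical message the web service facade puts on the queue.
        public class ReportRequestMessage : IMessage
        {
            public string ReportName { get; set; }
            public string CorrelationId { get; set; }
            public string CallbackUrl { get; set; }
        }

        // Handler hosted by the NServiceBus generic host. NServiceBus owns the
        // queue, retries and failed-message handling; we just process one request.
        public class ReportRequestHandler : IHandleMessages<ReportRequestMessage>
        {
            public void Handle(ReportRequestMessage message)
            {
                // run the report against Cognos (CognosGateway is a placeholder)
                byte[] report = CognosGateway.RunReport(message.ReportName);

                // call back to the batch application with the correlation id so it
                // can tie the response to its request (CallbackClient is a placeholder)
                CallbackClient.Post(message.CallbackUrl, message.CorrelationId, report);
            }
        }

        // Placeholder stand-ins for the real Cognos and callback code.
        public static class CognosGateway
        {
            public static byte[] RunReport(string reportName) { return new byte[0]; }
        }

        public static class CallbackClient
        {
            public static void Post(string url, string correlationId, byte[] report) { }
        }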

    Read the article

  • How can I define custom 'contentGroups' in a custom Flex 4 component?

    - by swidnikk
    The Spark Panel component, for example, can be written declaratively in MXML, and its skin file will handle layout of the contentGroup, controlBarGroup, and titleDisplay. Notice, however, that the contentGroup doesn't appear in that markup and that the controlBarGroup accepts child MXML components. Now say I want to create a custom component that defines various required and non-required skin parts, such as 'headerGroup', 'navigationGroup', and 'accountPreferencesGroup'. I'd like to write my custom component the same way. The motivation here is that I can then create a couple of different skin files to change the look and layout of those subgroups. Reading the source of the Spark Panel, there are some calls within the mx_internal namespace, such as getMXMLContent(), which is a method of the Spark Group component but which I have no access to. Does the description above make sense? How can I create custom 'contentGroups' in my custom Flex 4 component that can use nested MXML child components? Should I approach this a different way?

    Read the article

  • SSL certificates work fine from command line but fails in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server, which executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
         issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
         subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
         err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        . . . message not sent.

    I must be missing some configuration; any ideas?

    Read the article

  • Configure APE-Server on Ubuntu10.10 webserver

    - by sadmicrowave
    I'm having problems configuring my APE server. First, I reside behind a corporate firewall where our own DNS servers are maintained. I requested a domain name for my server and was given uslonsweb003.us.mycompany.com by my IT group. My website works and can be accessed (intranet only) at http://uslonsweb003.us.mycompany.com/test.php. I followed the instructions at ape-project.org and ran the Check Tool at the end, only to find I get an error stating:

        Running test : Contacting APE Server (adding frequency)
        Can't contact APE Server. Please check the folowing url is pointing to your APE server :
        http://0.uslonsweb003.us.mycompany.com:6969

    My /etc/apache2/apache2.conf virtual host looks as follows:

        <VirtualHost *:80>
            ServerName uslonsweb003.us.mycompany.com
            ServerAlias ape.uslonsweb003.us.mycompany.com
            ServerAlias *.ape.uslonsweb003.us.mycompany.com
            DocumentRoot "/var/www/"
        </VirtualHost>

    My /var/www/ape-jsf/Demos/config.js config section looks as follows:

        APE.Config.baseUrl = 'http://uslonsweb003.us.mycompany.com/ape-jsf';
        APE.Config.domain = 'uslonsweb003.us.mycompany.com';
        APE.Config.server = 'uslonsweb003.us.mycompany.com:6969';

    The instructions at ape-project.org tell me that APE.Config.server should be 'ape.mydomain.com:6969', but that does not work (I'm assuming because my corporate DNS does not understand the 'ape' prefix before the domain name, since 'ape' was not registered with the IT DNS). So I changed it to what you see above. Please help!! Thanks in advance.

    UPDATE 1: Per the installation instructions at http://www.ape-project.org/wiki/index.php/Advanced_APE_configuration under 'Configure your Server/Computer' (I'm running it on a server, obviously), I need to add some lines to my DNS config file. It sounds like (since I'm within a corporate network) I would ask my IT group to add the following lines to the DNS configuration file on their end:

        ape     IN A     x.x.x.x ; IP address of my APE server
        *.ape   IN CNAME ape

    I just want to make sure this is all I have to have them add (or whether this is even correct) before I ask them.

    Read the article

  • Can't set up a 3-node MongoDB replica set

    - by Victor Lin
    I just followed the instructions in the MongoDB document Replica Sets - Basics to set up a 3-node replica set. Everything goes fine when I do the initiate and add the first node on the primary.

        [foo@host-a mongodb]$ bin/mongo localhost
        MongoDB shell version: 1.8.2
        connecting to: localhost
        > rs.initiate()
        {
            "info2" : "no configuration explicitly specified -- making one",
            "info" : "Config now saved locally. Should come online in about a minute.",
            "ok" : 1
        }
        > rs.add("host-b")
        { "ok" : 1 }

    So far so good, but when I try to add the third node:

        myset:PRIMARY> rs.addArb("host-c")
        Sun Aug 7 22:57:09 MessagingPort recv() errno:104 Connection reset by peer 127.0.0.1:27017
        Sun Aug 7 22:57:09 SocketException: remote: error: 9001 socket exception [1]
        Sun Aug 7 22:57:09 DBClientCursor::init call() failed
        Sun Aug 7 22:57:09 query failed : local.$cmd { count: "system.replset", query: {}, fields: {} } to: 127.0.0.1
        Sun Aug 7 22:57:09 Error: error doing query: failed shell/collection.js:150
        Sun Aug 7 22:57:09 trying reconnect to 127.0.0.1
        Sun Aug 7 22:57:09 reconnect 127.0.0.1 ok

    As a result, the current primary became secondary, and host-b was marked as dead, although it is actually still alive.

        myset:SECONDARY> rs.status()
        {
            "set" : "myset",
            "date" : ISODate("2011-08-08T04:03:23Z"),
            "myState" : 2,
            "members" : [
                {
                    "_id" : 0,
                    "name" : "host-a:27017",
                    "health" : 1,
                    "state" : 2,
                    "stateStr" : "SECONDARY",
                    "optime" : { "t" : 1312775799000, "i" : 1 },
                    "optimeDate" : ISODate("2011-08-08T03:56:39Z"),
                    "self" : true
                },
                {
                    "_id" : 1,
                    "name" : "host-b",
                    "health" : 0,
                    "state" : 6,
                    "stateStr" : "(not reachable/healthy)",
                    "uptime" : 0,
                    "optime" : { "t" : 0, "i" : 0 },
                    "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                    "lastHeartbeat" : ISODate("2011-08-08T04:03:22Z"),
                    "errmsg" : "still initializing"
                }
            ],
            "ok" : 1
        }

    How could this happen? I just followed the guide in the document; did I do something wrong? Moreover, I can't do anything on the current secondary server. It doesn't allow me to reconfigure on the secondary node, but the problem is that there is no primary node.

        myset:SECONDARY> rs.reconfig({})
        {
            "errmsg" : "replSetReconfig command must be sent to the current replica set primary.",
            "ok" : 0
        }

    Any ideas?

    Read the article

  • Configuration Error ASP.NET password format specified is invalid

    - by salvationishere
    I am getting the above error in IIS 6.0 when I browse my C# / SQL web application. This was built in VS 2008 and SQL Server 2008 on a 32-bit XP OS. The application was working before I added Login controls to it. However, this is my first time configuring login/password controls, so I am probably missing something really basic. The error doesn't happen until I try to log in. Here are the details of the error from IIS (I get the same error in VS):

        Parser Error Message: Password format specified is invalid.

        Source Error:
        Line 31:        <add
        Line 32:          name="SqlProvider"
        Line 33:          type="System.Web.Security.SqlMembershipProvider"
        Line 34:          connectionStringName="AdventureWorksConnectionString2"
        Line 35:          applicationName="AddFileToSQL2"

        Source File: C:\Inetpub\AddFileToSQL2\web.config    Line: 33

    And the relevant contents of my web.config are:

        <connectionStrings>
          <add name="Master" connectionString="server=MSSQLSERVER;database=Master; Integrated Security=SSPI" providerName="System.Data.SqlClient" />
          <add name="AdventureWorksConnectionString" connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Integrated Security=True" providerName="System.Data.SqlClient" />
          <add name="AdventureWorksConnectionString2" connectionString="Data Source=SIDEKICK;Initial Catalog=AdventureWorks;Integrated Security=True; " providerName="System.Data.SqlClient" />
        </connectionStrings>
        <system.web>
          <membership defaultProvider="SqlProvider" userIsOnlineTimeWindow="15">
            <providers>
              <clear />
              <add
                name="SqlProvider"
                type="System.Web.Security.SqlMembershipProvider"
                connectionStringName="AdventureWorksConnectionString2"
                applicationName="AddFileToSQL2"
                enablePasswordRetrieval="false"
                enablePasswordReset="true"
                requiresQuestionAndAnswer="true"
                requiresUniqueEmail="false"
                passwordFormat="encrypted" />
            </providers>
          </membership>
          <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. -->
          <roleManager enabled="true" />
          <compilation debug="true">
            <assemblies>
              <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
              <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
              <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
            </assemblies>
          </compilation>
          <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. -->
          <authentication mode="Forms">
            <forms loginUrl="Password.aspx" protection="All" timeout="30" name="SqlAuthCookie" path="/FormsAuth" requireSSL="false" slidingExpiration="true" defaultUrl="default.aspx" cookieless="UseCookies" enableCrossAppRedirects="false" />
          </authentication>
          <!-- Authorization permits only authenticated users to access the application -->
          <authorization>
            <deny users="?" />
            <allow users="*" />
          </authorization>
          <!-- The <customErrors> section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace.
          <customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm">
            <error statusCode="403" redirect="NoAccess.htm" />
            <error statusCode="404" redirect="FileNotFound.htm" />
          </customErrors>
          -->
          <pages>
            <controls>
              <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            </controls>
          </pages>
          <httpHandlers>
            <remove verb="*" path="*.asmx"/>
            <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/>
          </httpHandlers>
          <httpModules>
            <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          </httpModules>
        </system.web>
        <system.codedom>
          <compilers>
            <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <providerOption name="CompilerVersion" value="v3.5"/>
              <providerOption name="WarnAsError" value="false"/>
            </compiler>
          </compilers>
        </system.codedom>
        <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous versions of IIS. -->
        <system.webServer>
          <validation validateIntegratedModeConfiguration="false"/>
          <modules>
            <remove name="ScriptModule"/>
            <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          </modules>
          <handlers>
            <remove name="WebServiceHandlerFactory-Integrated"/>
            <remove name="ScriptHandlerFactory"/>
            <remove name="ScriptHandlerFactoryAppServices"/>
            <remove name="ScriptResource"/>
            <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
            <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
          </handlers>
        </system.webServer>
        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/>
              <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/>
            </dependentAssembly>
            <dependentAssembly>
              <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/>
              <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/>
            </dependentAssembly>
          </assemblyBinding>
        </runtime>
        <system.net>
          <mailSettings>
            <smtp from="[email protected]">
              <network host="SIDEKICK" password="" userName="" />
            </smtp>
          </mailSettings>
        </system.net>
        </configuration>

    I checked and I do have an aspnetdb database in my SSMS. The Network Service account has SELECT, EXECUTE, INSERT and UPDATE access to this database. But one problem I see is that all of the tables in this database are empty except for aspnet_SchemaVersions, which has just 2 records (common and membership). Is this right? I added users and roles via the ASP.NET Configuration wizard, and I believe I set this up correctly since I followed the Microsoft tutorial at http://msdn.microsoft.com/en-us/library/ms998347.aspx. One other problem I see from VS: after adding content to Page_Load in my initial login page, Password.aspx.cs, I'm getting the invalid cast problem below. I googled this problem as well, but the solutions I saw confused me even more.
    The Page_Load section I added is:

        protected void Page_Load(object sender, EventArgs e)
        {
            Response.Write("Hello, " + Server.HtmlEncode(User.Identity.Name));
            FormsIdentity id = (FormsIdentity)User.Identity;
            FormsAuthenticationTicket ticket = id.Ticket;
            Response.Write("<p/>TicketName: " + ticket.Name);
            Response.Write("<br/>Cookie Path: " + ticket.CookiePath);
            Response.Write("<br/>Ticket Expiration: " + ticket.Expiration.ToString());
            Response.Write("<br/>Expired: " + ticket.Expired.ToString());
            Response.Write("<br/>Persistent: " + ticket.IsPersistent.ToString());
            Response.Write("<br/>IssueDate: " + ticket.IssueDate.ToString());
            Response.Write("<br/>UserData: " + ticket.UserData);
            Response.Write("<br/>Version: " + ticket.Version.ToString());
        }

    And the VS exception I'm getting:

        System.InvalidCastException was unhandled by user code
        Message="Unable to cast object of type 'System.Security.Principal.GenericIdentity' to type 'System.Web.Security.FormsIdentity'."
        Source="AddFileToSQL"
        StackTrace:
            at AddFileToSQL.Password.Page_Load(Object sender, EventArgs e) in C:\Documents and Settings\Admin\My Documents\Visual Studio 2008\Projects\AddFileToSQL2\AddFileToSQL\Password.aspx.cs:line 22
            at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e)
            at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e)
            at System.Web.UI.Control.OnLoad(EventArgs e)
            at System.Web.UI.Control.LoadRecursive()
            at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        InnerException:
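    Regarding the InvalidCastException in that Page_Load: the exception message itself says the identity is still a GenericIdentity, i.e. no forms-authentication ticket has been issued for the request yet. A defensive version of the same code (a sketch only; it avoids the crash but does not by itself fix the membership/password-format configuration) tests the cast first:

        protected void Page_Load(object sender, EventArgs e)
        {
            Response.Write("Hello, " + Server.HtmlEncode(User.Identity.Name));

            // User.Identity is only a FormsIdentity once forms authentication has
            // actually issued a ticket; before login it is a GenericIdentity,
            // which is exactly what the InvalidCastException reports.
            FormsIdentity id = User.Identity as FormsIdentity;
            if (id == null)
            {
                Response.Write("<p/>No forms-authentication ticket yet.");
                return;
            }

            FormsAuthenticationTicket ticket = id.Ticket;
            Response.Write("<p/>TicketName: " + ticket.Name);
            Response.Write("<br/>Ticket Expiration: " + ticket.Expiration.ToString());
        }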

    Read the article

  • Android : EditText with custom keyboard

    - by jpprade
    Hello, I created my own custom keyboard following the example in the SDK. Now I would like to use this custom keyboard by default for the EditText in my app (currently I have to long-press the EditText and then choose my custom keyboard). How can I do that? It seems to be related to the inputType property, but I can't find out how to set it. Thanks!

    Read the article

  • Survey: Do you write custom SQL CLR procedures/functions/etc

    - by James Luetkehoelter
    I'm quite curious, because despite the great capability of writing CLR-based stored procedures to offload those nasty operations T-SQL isn't that great at (like iteration, or complex math), I continue to see a wealth of SQL 2008 databases with complex stored procedures and functions which would make great candidates. The in-house skill to create the CLR code exists as well, but there is flat-out resistance to using it. In one scenario I was told "Oh, iteration isn't a problem because we've trained...(read more)
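    For readers who haven't tried it, here is a minimal sketch of the kind of SQL CLR scalar function being discussed: iterative math that is clumsy in set-based T-SQL but trivial in C#. The function name and formula are illustrative only, not from the post:

        using System;
        using System.Data.SqlTypes;
        using Microsoft.SqlServer.Server;

        public partial class UserDefinedFunctions
        {
            // Compound interest with monthly compounding - a loop-heavy
            // calculation that reads naturally in C#.
            [SqlFunction(IsDeterministic = true, IsPrecise = false)]
            public static SqlDouble FutureValue(SqlDouble principal, SqlDouble annualRate, SqlInt32 months)
            {
                if (principal.IsNull || annualRate.IsNull || months.IsNull)
                    return SqlDouble.Null;

                double value = principal.Value;
                for (int i = 0; i < months.Value; i++)
                {
                    value *= 1.0 + (annualRate.Value / 12.0);
                }
                return new SqlDouble(value);
            }
        }

    Once the assembly is catalogued with CREATE ASSEMBLY, the function is exposed to T-SQL with CREATE FUNCTION ... EXTERNAL NAME and called like any other scalar function.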

    Read the article

  • App crashes only after second execution only in Release configuration

    - by denbec
    Hey all, I know this is probably not an easy question to answer, as it's hard to describe. I have an app that runs without problems on the device in the Debug configuration (also across multiple launches). Once I put it into the Release configuration (which I need before publishing?), the app starts without problems and I can proceed to the next page, where I show a Core Plot graph, BUT only if I run it from Xcode. As soon as I quit the app and start it again, it opens without problems, but on the next page it crashes. Now I don't have anything to debug with other than the crash report:

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0xcf10000a
        Crashed Thread:  0

        Thread 0 Crashed:
        0   libobjc.A.dylib   0x000026f2 objc_msgSend + 14
        1   StandbyCheck      0x0001fbea -[CPXYTheme newGraph] (CPXYTheme.m:36)
        2   StandbyCheck      0x00007c06 -[SCGraphCell initWithStyle:reuseIdentifier:] (SCGraphCell.m:28)
        3   StandbyCheck      0x00076b4a -[TTTableViewDataSource tableView:cellForRowAtIndexPath:] (TTTableViewDataSource.m:128)
        4   UIKit             0x0007797a -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:withIndexPath:] + 514
        5   UIKit             0x000776b0 -[UITableView(UITableViewInternal) _createPreparedCellForGlobalRow:] + 28
        6   UIKit             0x00037e78 -[UITableView(_UITableViewPrivate) _updateVisibleCellsNow] + 940
        7   UIKit             0x000367d4 -[UITableView layoutSubviews] + 176
        8   StandbyCheck      0x000734b8 -[TTTableView layoutSubviews] (TTTableView.m:226)
        [...]

    Now, can someone point me in any direction? What are the differences between the Debug and Release configurations? How could I possibly debug this failure? I've been searching for hours now, please help me :( Thanks, Dennis

    Read the article

  • Puppet: array in parameterized classes VS using resources

    - by Luke404
    I have some use cases where I want to define multiple similar resources that should end up in a single file (via a template). As an example, I'm trying to write a puppet module that will let me manage the mapping between MAC addresses and network interface names (writing udev's persistent-net-rules file from puppet), but there are also many other similar use cases. I searched around and found that it could be done with the new parameterised-class syntax; if implemented that way it would end up being used like this:

        node "myserver.example.com" {
            class { "network::iftab":
                interfaces => {
                    "eth0" => { "mac" => "ab:cd:ef:98:76:54" },
                    "eth1" => { "mac" => "98:76:de:ad:be:ef" }
                }
            }
        }

    Not too bad, I agree, but it would rapidly explode when you manage more complex stuff (think network configurations like in this module, or any other multiple-complex-resources-in-a-single-config-file stuff). In a similar question on SF someone suggested using Pienaar's puppet-concat module, but I doubt it could get any better than parameterised classes. What would be really cool and clean in the configuration definition would be something like the included host type: its usage is simple, pretty and clean, and it naturally maps to multiple resources that end up being configured in a single place. Transposed to my example it would be like:

        node "myserver.example.com" {
            interface { "eth0":
                mac => "ab:cd:ef:98:76:54",
                foo => "bar",
                asd => "lol",
            }
            interface { "eth1":
                mac => "98:76:de:ad:be:ef",
                foo => "rab",
                asd => "olo",
            }
        }

    ...and that looks much better to my eyes, even with 3x the options on each resource. Should I really be passing arrays to parameterised classes, or is there a better way to do this kind of thing? Is there some accepted consensus in the puppet [users|developers] community? By the way, I'm referring to the latest stable release of the 2.7 branch, and I am not interested in compatibility with older versions.

    Read the article

  • Create Custom Windows Key Keyboard Shortcuts in Windows

    - by Asian Angel
    Nearly everyone uses keyboard shortcuts of some sort on their Windows system, but what if you could create new ones for your favorite apps or folders? You might just be amazed at how simple it can be, with just a few clicks and no programming, using WinKey.

    WinKey in Action: During the installation process you will see a window that gives you a good basic idea of just what can be accomplished with this wonderful little app. As soon as the installation process has finished you will see the main app window. It provides a simple, straightforward listing of all the keyboard shortcuts that it is currently managing. Note: WinKey will automatically add an entry to the Startup listing in your Start Menu during installation. To see the regular built-in Windows keyboard shortcuts that it is managing, click "Standard Shortcuts" to select it and then click "Properties". For those who are curious, WinKey does have a system tray icon that can be disabled if desired. Now on to creating those new keyboard shortcuts…

    For our example we decided to create a keyboard shortcut for an app rather than a folder. To create a shortcut for an app, click on the small paper icon. Once you have done that, browse to the appropriate folder and select the exe file. The second step is choosing which keyboard shortcut you would like to associate with that particular app; you can use the drop-down list to choose from the available keyboard combinations. For our example we chose "Windows Key + A". The final step is choosing the run mode; there are three options available in the drop-down list, so choose the one that best suits your needs. All that is left to do at this point is click "OK" to finish the process, and just like that your new keyboard shortcut is listed in the main app window. Time to try out your new keyboard shortcut! One quick use of our new shortcut and Iron Browser opened right up. WinKey really does make creating new keyboard shortcuts as simple as possible.

    Conclusion: If you have been wanting to create new keyboard shortcuts for your favorite apps and folders, then it really does not get any simpler than with WinKey. This is definitely a recommended app for anyone who loves "get it done" software.

    Links: Download WinKey at Softpedia

    Read the article

  • Routing Apache TracEnv

    - by fampinheiro
    Hello, I have a situation with many Trac instances. They all have the same structure in the filesystem:

        PATH/trac1
        PATH/trac2
        PATH/trac3

    I have this configuration:

        <Location /trac/trac1>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/trac1
            PythonOption TracUriRoot /trac/trac1
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

        <Location /trac/trac2>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/trac2
            PythonOption TracUriRoot /trac/trac2
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

        <Location /trac/trac3>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/trac3
            PythonOption TracUriRoot /trac/trac3
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

    I wonder if it's possible to do something like this (TracEnvParentDir is not an option):

        <Location /trac/{ENV}>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv PATH/{ENV}
            PythonOption TracUriRoot /trac/{ENV}
            PythonOption PYTHON_EGG_CACHE PATH/eggs/
        </Location>

    Thank you for your time.

    EDIT: TracEnvParentDir is not an option because my structure is the following:

        +---projs
            +---trac1
            ¦   +---public [instance]
            ¦   +---t1
            ¦   ¦   +---common [instance]
            ¦   ¦   +---g1 [instance]
            ¦   ¦   +---g2 [instance]
            ¦   ¦   +---g3 [instance]
            ¦   ¦   +---g4 [instance]
            ¦   ¦   +---g5 [instance]
            ¦   +---t2
            ¦   ¦   +---common [instance]
            ¦   ¦   +---g1 [instance]
            ¦   ¦   +---g2 [instance]
            ¦   ¦   +---g3 [instance]
            ¦   ¦   +---g4 [instance]
            ¦   ¦   +---g5 [instance]
            ¦   +---t3
            ¦       +---common [instance]
            ¦       +---g1 [instance]
            ¦       +---g2 [instance]
            ¦       +---g3 [instance]
            ¦       +---g4 [instance]
            ¦       +---g5 [instance]
            ¦
            +---trac2
                +---public [instance]
                +---t1
                ¦   +---common [instance]
                ¦   +---g1 [instance]
                ¦   +---g2 [instance]
                ¦   +---g3 [instance]
                ¦   +---g4 [instance]
                ¦   +---g5 [instance]
                +---t2
                ¦   +---common [instance]
                ¦   +---g1 [instance]
                ¦   +---g2 [instance]
                ¦   +---g3 [instance]
                ¦   +---g4 [instance]
                ¦   +---g5 [instance]
                +---t3
                    +---common [instance]
                    +---g1 [instance]
                    +---g2 [instance]
                    +---g3 [instance]
                    +---g4 [instance]
                    +---g5 [instance]

    I use TracEnvParentDir on t1, t2 and t3, and TracEnv on trac1/public and trac2/public. I wonder if it's possible to define part of the URL as a variable.

    Read the article
