Search Results

Search found 15040 results on 602 pages for 'request servervariables'.

Page 120 of 602

  • Unable to print login-required images in IE

    - by Tim Fountain
    I have some images in a section of a site that require the user to be logged in in order to view them. These images are served by a PHP script, which checks the user's login state and, if valid, serves the binary data with the appropriate headers. This all works fine. The issue comes when a user tries to print one of these images. In Internet Explorer, the print preview shows the broken-image box with a red cross in the corner instead of the actual file, and this is what gets printed as well. All other browsers print the images without issue. I have some images elsewhere on the site that are also served via PHP but do not require a login; these print fine. The PHP-powered HTML pages on the site that require a login also print fine in IE. It's only the login-required images. Hitting print preview does not seem to trigger an additional HTTP request to the server for the file. However, I do see an additional HTTP request a few seconds later from the same IP (which may or may not be related); this request includes no Host header, no REQUEST_URI and no user agent. The 'please login' page sends an appropriate 403 header. I've also added a far-in-future Expires header to the image response itself to ensure that browsers can serve/print the files from their own cache, but this hasn't made any difference. Why can't IE print the images, and what else can I do to investigate or fix the problem?
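
    One avenue worth checking (not from the original post, and the exact values below are illustrative assumptions) is whether the image response actually allows IE to write the file to its cache, since IE is known to have trouble printing or saving resources it is not permitted to cache. A response along these lines would be cache-friendly for an authenticated image:

        HTTP/1.1 200 OK
        Content-Type: image/jpeg
        Content-Length: 48213
        Cache-Control: private, max-age=86400
        Expires: Thu, 31 Dec 2037 23:55:55 GMT

    In particular, a stray Cache-Control: no-store or Pragma: no-cache header added elsewhere in the login-protected code path would be worth ruling out.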

    Read the article

  • IIS 6 windows 2003 help installing SSL cert

    - by ADAM
    I requested a new SSL cert from GoDaddy, which has been issued. When I try to install it in IIS through the website's Directory Security tab, I get the error "The pending certificate request for this response file was not found. This request may be cancelled. You cannot install the selected response certificate using this wizard." I may have run the wizard earlier and deleted the pending request. Is there any way I can install the certificate without requesting a new one? (I hope so.) I still have the original certrequest.txt file.
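
    One recovery path commonly suggested for this situation (hedged - verify it against your own environment before relying on it) is to import the issued certificate into the machine's Personal certificate store with the Certificates MMC snap-in, then repair the binding to the private key with certutil, after which the certificate should be selectable in the IIS wizard via "Assign an existing certificate":

        certutil -repairstore my "<certificate serial number or thumbprint>"

    The value in quotes is a placeholder for the serial number or thumbprint shown on the certificate's details tab.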

    Read the article

  • Performing client-side OAuth authorized Twitter API calls versus server side, how much of a difference is there in terms of performance?

    - by Terence Ponce
    I'm working on a Twitter application in Ruby on Rails. One of the biggest arguments I have with other people on the project is about how we call the Twitter API. Previously, everything was done on the server: OAuth login, updating the user's Twitter data, and retrieving tweets. Retrieving tweets is the heaviest part, since we don't store tweets in our database, so viewing tweets means calling the API every time. One person on the project suggested that we fetch the tweets through Javascript instead, to lessen the load on the server. We used GET search, which, correct me if I'm wrong, will be removed when v1.0 becomes completely deprecated, but that really isn't a concern now. Once the Twitter API has migrated completely to v1.1 (again, correct me if I'm wrong), every call to the API must be authenticated, so we have to authenticate our Javascript requests to the API. As stated in Twitter's guidance: "We don't support or recommend performing OAuth directly through Javascript -- it's insecure and puts your application at risk. The only acceptable way to perform it is if you kept all keys and secrets server-side, computed the OAuth signatures and parameters server side, then issued the request client-side from the server-generated OAuth values." If we do exactly what Twitter suggests, the only difference from doing everything server-side is that our server won't have to contact the Twitter API every time the user wants to view tweets. Here's how I picture what happens every time the user makes a request. If we do it through Javascript, it would be harder on my part because I would have to create the signatures manually for every request, but I will gladly do it if the boost in performance is worth the trouble. Doing it through Ruby on Rails would be very easy, since the Twitter gem does most of the grunt work already, so I'm encouraging the other people on the project to agree with me. Is the difference in performance trivial, or is it significant enough to switch to Javascript?
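
    Purely as an illustration of the hybrid approach quoted above (the endpoint names and the jQuery usage are my own assumptions, and this additionally assumes the browser is allowed to call the API cross-origin, e.g. via CORS or JSONP, which needs to be verified for the endpoint in question), the flow would look roughly like this:

        // Sketch: ask our own Rails app for server-computed OAuth values,
        // then issue the actual API request from the browser.
        // '/oauth/sign' is a hypothetical route on our server.
        function fetchTweets(query, onDone) {
          $.getJSON('/oauth/sign', { q: query }, function (signed) {
            $.ajax({
              url: 'https://api.twitter.com/1.1/search/tweets.json?q=' + encodeURIComponent(query),
              headers: { 'Authorization': signed.header },  // built server-side
              dataType: 'json',
              success: onDone
            });
          });
        }

    Note that the server still does the signing work per request, so the saving is only in proxying the tweet payload itself.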

    Read the article

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I was assigned to work on a major requirement that falls somewhere between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company without leaving a trace of documentation. Here were my initial steps in approaching the problem: Considering that the release date was fast approaching and there was no time for slip-ups, I first asked whether the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, I asked whether, if it could not be finished prior to release, it would be a viable option to scrap the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement came from higher management, and given its complexity, I asked for all usability test cases to be written by QA prior to the implementation and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand the approach. Knowing that I had to insist on my request given my responsibility for the requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, trying to be safe rather than sorry. Is this approach wrong, or have I gone about it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state due to the complexity of the problem and lack of time. This only happened after a two-hour meeting with other seniors to convince the aforementioned folks.

    Read the article

  • What is wrong with my game loop/mechanic?

    - by elias94xx
    I'm currently working on a 2D sidescrolling game prototype in HTML5 canvas. My implementation so far includes a sprite, vector, loop and ticker class/object, which can be viewed here: http://elias-schuett.de/apps/_experiments/2d_ssg/js/ The game essentially works well on today's low-spec PCs and laptops, but it does not on an older Win XP machine I own or on my Android 2.3 device. I tend to get ~10 FPS on these devices, which results in a delta value that is too high, which then automatically gets clamped to 1.0, which results in a slow loop. Now I know for a fact that there is a way to implement a super smooth 60 or 30 FPS loop on both devices; the best example would be: http://playbiolab.com/ I don't need all the chunk and debugging technology impact.js offers. I could even write a super simple game where you just control a damn square and it still wouldn't run at an equally smooth 30 or 60 FPS. Here is the Loop class/object I'm using. It requires a requestAnimationFrame unify function. Both devices I've tested my game on support requestAnimationFrame, so there is no interval fallback.

        var Loop = function(callback) {
            this.fps = null;
            this.delta = 1;
            this.lastTime = +new Date;
            this.callback = callback;
            this.request = null;
        };

        Loop.prototype.start = function() {
            var _this = this;
            this.request = requestAnimationFrame(function(now) {
                _this.start();
                _this.delta = (now - _this.lastTime);
                _this.fps = 1000 / _this.delta;
                _this.delta = _this.delta / (1000 / 60) > 2 ? 1 : _this.delta / (1000 / 60);
                _this.lastTime = now;
                _this.callback();
            });
        };

        Loop.prototype.stop = function() {
            cancelAnimationFrame(this.request);
        };
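
    For reference, one common way to keep game speed consistent when the frame rate drops is to run the game logic in fixed steps and let only the rendering slow down. This is just a sketch built on top of the Loop class above; updateGame and render are hypothetical functions, not part of the original code:

        var STEP = 1000 / 60;   // fixed logic step in milliseconds
        var accumulator = 0;

        var loop = new Loop(function() {
            accumulator += loop.delta * STEP;   // loop.delta is in 60fps-frame units
            while (accumulator >= STEP) {
                updateGame(STEP);               // hypothetical fixed-step update
                accumulator -= STEP;
            }
            render();                           // hypothetical draw call
        });
        loop.start();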

    Read the article

  • Intercept Apache communication

    - by Nathan Adams
    I am looking to develop a solution that eliminates potential spammers. The system will watch connections and requests; the specifics are more of a Stack Overflow topic. What I am interested in here is whether it is possible to tell Apache to pass each request over to my application first and give it the ability to accept or deny the request. Sure, it will make requests slower, but I think that is a trade-off I am willing to make. However, I still want Apache to run the request through any interpreters (such as PHP). The idea is that one wouldn't have to implement anti-spam measures on a per-app basis, but instead have an "umbrella" of spam protection.
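
    One mechanism that fits this shape (a sketch of an assumption, not a statement of what the poster should do) is mod_rewrite's external program map: Apache asks a long-running helper for a verdict on each request and then either serves the request normally, interpreters and all, or rejects it. Here /usr/local/bin/spam-gate is a hypothetical script that reads one key per line on stdin and prints "deny" or "ok":

        RewriteEngine On
        RewriteMap spamgate prg:/usr/local/bin/spam-gate
        RewriteCond ${spamgate:%{REMOTE_ADDR}} ^deny$
        RewriteRule .* - [F]

    Note that RewriteMap has to be declared in server or virtual-host context rather than in .htaccess.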

    Read the article

  • View another persons calendar details in Outlook 2010

    - by SqlRyan
    I know how to view somebody else's calendar - there are 100 walk-throughs like this one on Google. However, this feature has changed in Outlook 2010, and you no longer get prompted for rights to view another person's calendar, and Outlook just displays their "Free/Busy" information, which doesn't help me. I'd like to request permissions to view the details of their appointments, but I can't find any place to request permissions on their calendar - Outlook 2010 just gives me "Free/Busy" rights and then appears to have no option to request additional rights. Can anybody point me in the right direction?

    Read the article

  • On RouterOS, how will transparent proxying (with DNAT) affect reporting of netflows?

    - by Tim
    I have a box running Mikrotik RouterOS, which is set up to do transparent web proxying, as described here. In short, this means that I have a firewall rule for destination NAT causing any port 80 traffic to get redirected to port 8080 on the router, which is received by the Mikrotik local web proxy. The local web proxy then makes the web request on the client's behalf, in this case to a parent web proxy server (which in turn does the real web request). My question is, how will this two-part process get reported in the logging of traffic flow information (netflows)? Looking at the logged information, what I seem to be seeing is this:

    - One flow recorded from the client machine (private IP) to the remote proxy (8080)
    - Another flow recorded from the router to the remote proxy (8080)

    The original request that the client made to port 80 isn't recorded. I want to write code to analyse traffic usage, so I want to be sure I'm not losing information if I discard the latter of these.
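
    For context, the transparent-proxy redirect described above is typically a dst-nat rule along these lines (a sketch of the usual setup, not copied from the poster's configuration):

        /ip firewall nat add chain=dstnat protocol=tcp dst-port=80 action=redirect to-ports=8080

    The redirect rewrites the destination before the traffic leaves the router, which may be why the flows recorded are the rewritten ones rather than the original client-to-port-80 connection.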

    Read the article

  • Website: Requested filename being rewritten

    - by horatio
    I have been unable to find an answer via search. I have a website (I do not administer the servers) where the server will serve a different file than the one requested. I first noticed this when using a filename of the form _foo.php (single underscore). If I request foo.php (which does not exist), the server returns _foo.php. By "returns" I mean that the server decides I meant _foo.php, processes the PHP file, and serves the output. If I request afoo.php, zfoo.php, or even __foo.php (two underscores) - none of which exist - the server also returns _foo.php. If I request aafoo.php, the server returns a 404. To sum up: the server seems to be doing a partial filename match. My question is: what is happening, and is this accepted behavior for a web server (or standard behavior of a common mod/package/etc.)?

    Read the article

  • How to port forward https traffic via ssh and/or remote desktop to through several networks and PCs?

    - by donttellya
    I have the following environment:

    - In company X, I develop an application on PC A in network A (IP address 192.168.100.50), which has to make an HTTPS request to an HTTP server located in the intranet of company Y.
    - In company X there is another PC B, in network B, with IP address 192.168.200.100.
    - PC B (of company X) can access the intranet of company Y via an SSH tunnel (PuTTY).
    - PC A (of company X) can ping PC B (of company X). (Note: PC A can also open a remote desktop connection to PC B.)
    - PC B can ping the HTTP server.
    - PC A cannot ping the HTTP server.

    How can the HTTPS request from PC A of company X reach the HTTP server of company Y? On which PC must PuTTY be configured, and which settings (host, port forwarding, etc.) are needed in PuTTY? In the end, the HTTPS request should go from PC A to PC B to the HTTP server in company Y.
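
    A sketch of one way to chain this (it assumes an SSH server is reachable on PC B and uses made-up hostnames, so treat every name and port as a placeholder): forward a local port on PC A through PC B to the HTTP server, then point the application on PC A at that local port.

        # Run on PC A (OpenSSH syntax; in PuTTY the equivalent lives under
        # Connection > SSH > Tunnels, with PC B as the session host):
        #   local port 8443 on PC A -> PC B (192.168.200.100) -> intranet server, port 443
        ssh -L 8443:intranet-server.company-y.example:443 user@192.168.200.100

        # The application on PC A then sends its HTTPS request to:
        #   https://localhost:8443/...

    If PC B can only reach the server through its own PuTTY tunnel, the forward would instead need to target the local end of that tunnel on PC B. Certificate host-name checks may also complain because the request is addressed to localhost; that is a separate issue from the forwarding itself.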

    Read the article

  • Munin Aggregate Graphs from several servers

    - by Sparsh Gupta
    I am using DNS round-robin load balancing and have divided my total traffic across multiple servers. Each server does around 300-400 req/second, but I am interested in an aggregate graph showing the TOTAL requests per second served by our architecture. Is there any way I can do this? Right now each graph in Munin appears separately, since each one depicts a single server. I am using the configuration below, which doesn't work for me - does this configuration contain errors?

        [TRAFFIC.AGGREGATED]
            update no
            requests.graph_title nGinx requests
            requests.graph_vlabel nGinx requests per second
            requests.draw LINE2
            requests.graph_args --base 1000
            requests.graph_category nginx
            requests.label req/sec
            requests.type DERIVE
            requests.min 0
            requests.graph_order output
            requests.output.sum \
                lb1.visualwebsiteoptimizer.com:nginx_request_lb1.visualwebsiteoptimizer.com_request.request \
                lb3.visualwebsiteoptimizer.com:nginx_request_lb2.visualwebsiteoptimizer.com_request.request \
                lb3.visualwebsiteoptimizer.com:nginx_request_lb3.visualwebsiteoptimizer.com_request.request

    Read the article

  • Is encoding needed in this decryption?

    - by Lijo
    I have an encryption/decryption scenario as shown below.

        //[Clear text ID string as input] -- [(ASCII GetBytes) + Encoding] -- [Encryption as byte array] -- [Database column is VarBinary] -- [Pass byte[] as VarBinary parameter to SP for comparison]

        //[ID stored as VarBinary in Database] -- [Read as byte array] -- [(Decrypt as byte array) + Encoding + (ASCII GetString)] -- [Show as string in the UI]

    My question is about the decryption scenario. After decryption I get a byte array, and I then do an encoding conversion (IBM037). Is that correct? Is there something wrong in the flow shown above?

        private static byte[] GetEncryptedID(string id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = EncodeTo64(id);
            input.RequestType = Encryption;

            ProgramInterface inputRequest = new ProgramInterface();
            inputRequest.Test_Trial_Request = input;

            using (KTestService operation = new KTestService())
            {
                return ((operation.KTrialOperation(inputRequest)).Test_Trial_Response.ResponseText);
            }
        }

        private static string GetDecryptedID(byte[] id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = id;
            input.RequestType = Decryption;

            ProgramInterface request = new ProgramInterface();
            request.Test_Trial_Request = input;

            using (KTestService operationD = new KTestService())
            {
                ProgramInterface1 response = operationD.KI014Operation(request);
                byte[] decryptedValue = response.ICSF_AES_Response.ResponseText;

                Encoding sourceByteFormat = Encoding.GetEncoding("IBM037");
                Encoding destinationByteFormat = Encoding.ASCII;

                // Convert from one byte format to the other (IBM to ASCII)
                byte[] ibmEncodedBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, decryptedValue);
                return System.Text.ASCIIEncoding.ASCII.GetString(ibmEncodedBytes);
            }
        }

        private static byte[] EncodeTo64(string toEncode)
        {
            byte[] dataInBytes = System.Text.ASCIIEncoding.ASCII.GetBytes(toEncode);

            Encoding destinationByteFormat = Encoding.GetEncoding("IBM037");
            Encoding sourceByteFormat = Encoding.ASCII;

            // Convert from one byte format to the other (ASCII to IBM)
            byte[] asciiBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, dataInBytes);
            return asciiBytes;
        }

    Read the article

  • Support Question? Immediate response!

    - by Alliances & Channels Redaktion
    When a support case arises, things usually have to move fast - so it helps if you have already resolved the fundamental questions in advance. For all partners who wish to learn more about support topics - the use of the SI number, My Oracle Support, the exact sequence of support processes and service request handling, or simply the Oracle Support portfolio - a visit to the Oracle Partner Days is worthwhile. Oracle Support will be represented there with an information booth in the exhibition area! Our team will be on hand for general as well as very specific questions, such as:

    - What are my rights with which partner SI number?
    - How do I open or escalate a service request?
    - What should I do when a service request is processed in the U.S.?
    - What exactly is Platinum Support?
    - Can we use Platinum Support as a partner?
    - How can I use "My Oracle Support" efficiently?

    Incidentally, attending an Oracle Partner Day is also worthwhile if you are already a Support Professional. As always, a varied program of training opportunities, information, networking and entertainment awaits! Please register here for the Oracle Partner Days:

    22.10.2013 Montreux, Switzerland
    29.10.2013 Zürich, Switzerland
    29.10.2013 Utrecht, Netherlands
    07.11.2013 Gent, Belgium

    Read the article

  • HAPROXY per domain redirection

    - by SecondThought
    I'm trying to route requests on my load balancer to a separate backend by domain name, using an acl with hdr_dom. The routing works for the first request - 'GET /' (the destination is a WordPress site) - but when the client then asks for the assets (for example 'GET /blablabla/style.css'), haproxy no longer sends the request to the right backend; it goes to the default backend instead. In the haproxy log I can see the correct host for the request (the one I defined in hdr_dom), but it seems that because the GET request line itself is relative (it contains only /blablabla and onward, not the domain), haproxy doesn't match it with hdr_dom. I'm just guessing here. Please help...
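
    For reference, domain-based routing in HAProxy is usually expressed roughly as below (the hostnames and backend names are placeholders, not taken from the poster's setup). The acl matches on the Host header, which browsers send on asset requests as well as on the initial page request:

        frontend http-in
            bind *:80
            acl is_blog hdr_dom(host) -i blog.example.com
            use_backend blog_servers if is_blog
            default_backend main_servers

        backend blog_servers
            server wp1 10.0.0.10:80

        backend main_servers
            server web1 10.0.0.20:80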

    Read the article

  • Loadbalancing Questions

    - by Van Holtz
    I have been learning networking for about 4 months. I wrote a single standalone multiplayer server and succeeded with an authoritative approach. Now I want to extend it by splitting the single server into clusters, to allow even more players to log in and avoid latency issues. I have prototyped the load-balancing server and it's running pretty well so far. This is my architecture: I have a master server which acts as a proxy; every sub server (chat, login, game) connects to the master server, as do all the clients. When a client makes a request, the flow is:

    Client -> MS (Master) -> decides which SS (SubServer) to forward to -> forwards request to SS -> SS analyzes the message -> sends response to MS -> MS decides which client to forward to -> forwards response to client

    Well, it looks like it's going through a lot of stages; it takes double the time to process a message compared to the single-server approach. I feel like my model isn't the best, or I may be wrong. Is there a better model, or one that professional games use? I still want a Master-SubServer approach. I just want to confirm that I'm going in the right direction before writing all my code. Thanks for any answer :)

    Read the article

  • Diskless with Ubuntu 12.04

    - by user139462
    I'm trying to set up a new diskless solution with Ubuntu 12.04, without any success. I followed this howto: https://help.ubuntu.com/community/DisklessUbuntuHowto but the initramfs does not seem to be able to mount my NFS share.

    On the server side, my /etc/exports is:

        /srv/nfs4 192.168.0.0/24(fsid=0,rw,no_subtree_check)
        /srv/nfs4/nfsroot 192.168.0.0/24(rw,no_root_squash,no_subtree_check,fsid=1,nohide,insecure,sync)

    I'm able to mount the NFS share from a standard Ubuntu installation without any problem; on any client either of these commands works:

        mount 192.168.0.3:/nfsroot /mnt
        mount 192.168.0.3:/srv/nfs4/nfsroot /mnt

    My /tftpboot/pxelinux.cfg/default config file is:

        DEFAULT vmlinuz-3.5.0-25-generic root=/dev/nfs initrd=initrd.img-3.5.0-25-generic nfsroot=192.168.0.3:/nfsroot ip=dhcp rw

    I also tried:

        DEFAULT vmlinuz-3.5.0-25-generic root=/dev/nfs initrd=initrd.img-3.5.0-25-generic nfsroot=192.168.0.3:/srv/nfs4/nfsroot ip=dhcp rw

    What I get in the initramfs:

    With the setting nfsroot=192.168.0.3:/nfsroot, the diskless client prints:

        mount call failed - server replied: Permission denied

    and the syslog of my NFS server shows:

        rpc.mountd[1266]: refused mount request from 192.168.0.10 for /nfsroot (/): not exported

    With the setting nfsroot=192.168.0.3:/srv/nfs4/nfsroot, the diskless client prints:

        mount: the kernel lacks NFS v3 support

    and the syslog of my NFS server shows:

        Mar 11 14:03:06 BootFromLan rpc.mountd[1266]: authenticated mount request from 192.168.0.10:834 for /srv/nfs4/nfsroot (/srv/nfs4/nfsroot)
        Mar 11 14:03:06 BootFromLan rpc.mountd[1266]: refused unmount request from 192.168.0.10 for /root (/): not exported
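
    Not an answer from the original thread, but one thing the DisklessUbuntuHowto approach generally depends on is that the initramfs served over TFTP was built with NFS-boot support. A sketch of the usual settings (treat the values as assumptions and adapt to your setup), applied on the machine whose initrd you copy into /tftpboot:

        # /etc/initramfs-tools/initramfs.conf
        MODULES=netboot
        BOOT=nfs

        # then regenerate the initrd that pxelinux loads:
        update-initramfs -u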

    Read the article

  • Rest client throw timeout exception

    - by shandu
    I have created a REST client in C# using the example on this page: http://msdn.microsoft.com/en-us/library/aa395208(v=vs.90).aspx. The server is built in PHP. When I send a request to some URLs I get this exception: "The request channel timed out while waiting for a reply after 00:00:59.9531250. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout." But sometimes, when I debug the code, I do get a response. How can I solve this?
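
    As the exception text itself suggests, the timeout is a property of the binding. A minimal sketch of raising it in the client's configuration (the binding name and values are illustrative, and the binding element should match whichever binding the client actually uses):

        <system.serviceModel>
          <bindings>
            <webHttpBinding>
              <binding name="longTimeout"
                       sendTimeout="00:05:00"
                       receiveTimeout="00:05:00" />
            </webHttpBinding>
          </bindings>
        </system.serviceModel>

    Whether a request should legitimately take that long is a separate question; the server side is worth profiling too.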

    Read the article

  • Windows Metro Requests

    - by Scott Dorman
    Windows 8 and Windows Metro style apps have a lot of potential, but only if application vendors realize there is demand to see their app as a Metro style app and not just as a desktop app (or worse, only as an Android or iOS app). As consumers, the only thing we can do is be vocal about our desire to see these apps on Windows 8 as Metro style apps. In an effort to raise awareness, I just launched WinMetro Requests. This is our opportunity to request Windows Metro style apps and show those companies just how much interest there is in seeing their app as a Metro style app. The site runs on UserVoice, so it allows you to easily submit application requests, add comments, and, more importantly, vote for your favorite applications to come to Windows as Metro style apps! As I find out the status of requested applications, I will update the status of the requests. If you know of and have official communication from one of these companies indicating they are or will be working on a Windows Metro style app, please let me know and I'll update the status of the request after verifying (or at least trying to verify) the information.

    Read the article

  • Application to handle form approval

    - by ChrisMuench
    Hello, hopefully this is the right place for this question. I have done a fair amount of research and have yet to find anything that matches what I want. Let me know if any of you know of a program that will do what I'm envisioning. It must be web-based. The flow: an anonymous user fills out a form; an email gets sent to the admin saying user xyz has filled out form abc, with links to approve/disapprove the request. The admin can also log in, edit the form and resend the results to the original submitter. Once the admin approves/disapproves the request, the original submitter gets an approved/disapproved email. It should also be possible to search by date submitted, specific project/form, and status of the request (submitted, approved, disapproved). Any ideas at all on where I could find this? I started to look into Drupal with workflows and actions, but it just doesn't flow right for this.

    Read the article

  • How to know if my nginx is in good health?

    - by Howard
    I am running nginx on EC2 (m1.small) for SSL termination. I am using 2 workers on Ubuntu, with the latest stable nginx; network throughput is around 2 Mbps and the system load average is around 2 to 3. I am wondering whether this system is in good health for now, e.g.:

    - What is the queue length? (I know nginx can handle a lot of concurrent requests, but I mean: before a request is being served, how many of them have to wait?)
    - What is the average queue time for a given request before it is served?

    I want to know because if my nginx is CPU-bound (e.g. due to SSL), I will need to upgrade to a faster instance. My current nginx status:

        Active connections: 4076
        server accepts handled requests
         90664283 90664283 104117012
        Reading: 525 Writing: 81 Waiting: 3470
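
    For anyone reproducing this, the status block above is the output of nginx's stub_status module, typically exposed with a location such as the following (a sketch; the path and access rules are up to you):

        location /nginx_status {
            stub_status on;
            access_log  off;
            allow 127.0.0.1;
            deny  all;
        }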

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help with analyzing a log from Apache Bench:

        Benchmarking texteli.com (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:
        Server Hostname:        texteli.com
        Server Port:            80

        Document Path:          /4f84b59c557eb79321000dfa
        Document Length:        13400 bytes

        Concurrency Level:      200
        Time taken for tests:   37.030 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      13524000 bytes
        HTML transferred:       13400000 bytes
        Requests per second:    27.01 [#/sec] (mean)
        Time per request:       7406.024 [ms] (mean)
        Time per request:       37.030 [ms] (mean, across all concurrent requests)
        Transfer rate:          356.66 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       27   37   19.5     34    319
        Processing:    80 6273 1673.7   6907   8987
        Waiting:       47 3436 2085.2   3345   8856
        Total:        115 6310 1675.8   6940   9022

        Percentage of the requests served within a certain time (ms)
          50%   6940
          66%   6968
          75%   6988
          80%   7007
          90%   7025
          95%   7078
          98%   8410
          99%   8876
         100%   9022 (longest request)

    What can these results tell me? Isn't 27 rps too slow?
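
    For reference, output like this comes from an invocation along the lines of the following (reconstructed from the hostname, path, request count and concurrency level shown above, so treat it as approximate):

        ab -n 1000 -c 200 http://texteli.com/4f84b59c557eb79321000dfa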

    Read the article

  • Would form keys reduce the amount of spam we receive?

    - by David Wilkins
    I work for a company that has an online store, and we constantly have to deal with a lot of spam product reviews, and bogus customer accounts. These are all created by automated systems and are more of a nuisance than anything. What I am thinking of (in lieu of captcha, which can be broken) is adding a sort of form key solution to all relevant forms. I know for certain some of the spammers are using XRumer, and I know they seldom request a page before sending us the form data (Is this the definition of CSRF?) so I would think that tying a key to each requested form would at least stem the tide. I also know the spammers are lazy and don't check their work, or they would see that we have never posted a spam review, and they have never gained any revenue from our site. Would this succeed in significantly reducing the volume of spam product reviews and customer account creations we are seeing? EDIT: To clarify what I mean by "Form Keys": I am referring to creating a unique identifier (or "key") that will be used as an invisible, static form field. This key will also be stored either in the database (relative to the user session) or in a cookie variable. When the form's target gets a request, the key must be validated for the form's data to be processed. Those pesky bots won't have the key because they don't load the javascript that generates the form (they just send a blind request to the target) and even if they did load the javascript once, they'd only have one valid key, and I'm not sure they even use cookies.
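
    Since the EDIT above describes the form-key mechanism itself, here is a minimal sketch of that idea, assuming a Node.js/Express app with express-session (the stack and all names are assumptions, not from the original post): generate a random token when the form page is requested, store it in the session, embed it as a hidden field, and reject any POST whose token does not match.

        // Minimal sketch of the "form key" idea, assuming Express + express-session.
        var crypto = require('crypto');
        var express = require('express');
        var session = require('express-session');

        var app = express();
        app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));
        app.use(express.urlencoded({ extended: false }));

        // Serve the review form with a one-time key tied to the visitor's session.
        app.get('/review', function (req, res) {
          req.session.formKey = crypto.randomBytes(16).toString('hex');
          res.send(
            '<form method="POST" action="/review">' +
            '<input type="hidden" name="formKey" value="' + req.session.formKey + '">' +
            '<textarea name="review"></textarea>' +
            '<button>Submit</button></form>'
          );
        });

        // Reject submissions whose key is missing or does not match the session.
        app.post('/review', function (req, res) {
          if (!req.body.formKey || req.body.formKey !== req.session.formKey) {
            return res.status(403).send('Invalid form key');
          }
          delete req.session.formKey;   // one-time use
          // ... store the review ...
          res.send('Thanks!');
        });

    A bot that posts blindly to the form target without first loading the page (and without keeping the session cookie) never has a valid key, which is exactly the behaviour described above.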

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and I think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design it like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
        UserID  FirstName  LastName
        234     John       Doe
        516     Jane       Doe
        123     Foo        Bar

        DepartmentsTable
        DepartmentID  Name
        1             Sales
        2             HR
        3             IT

        UserDepartmentTable
        UserDepartmentID  UserID  Department
        1                 234     2
        2                 516     2
        3                 123     1

        RequestTable
        RequestID  UserID  <...>
        1          516     blah
        2          516     blah
        3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table and numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers...
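
    For concreteness, the simpler schema described above would look roughly like this in plain SQL (table and column names follow the question; the types are assumptions):

        CREATE TABLE Users (
            UserID     INT PRIMARY KEY,
            FirstName  VARCHAR(50),
            LastName   VARCHAR(50),
            Department VARCHAR(50)
        );

        CREATE TABLE Requests (
            RequestID  INT PRIMARY KEY,
            -- ... various columns with request details ...
            UserID     INT NOT NULL REFERENCES Users (UserID)
        );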

    Read the article

  • WCF REST Error Handler

    - by Elton Stoneman
    I’ve put up on GitHub a sample WCF error handler for REST services, which returns proper HTTP status codes in response to service errors.   The code is very simple – a ServiceBehavior implementation which can be specified in config to tag the RestErrorHandler to a service. Any uncaught exceptions will be routed to the error handler, which sets the HTTP status code and description in the response, based on the type of exception.   The sample defines a ClientException which can be thrown in code to indicate a problem with the client’s request, and the response will be a status 400 with a friendly error message:       throw new ClientException("Invalid userId. Must be provided as a positive integer");   - responds:   Request URL http://localhost/Sixeyed.WcfRestErrorHandler.Sample/ErrorProneService.svc/lastLogin?userId=xyz   Error Status Code: 400, Description: Invalid userId. Must be provided as a positive integer   Any other uncaught exceptions are hidden from the client. The full details are logged with a GUID to identify the error, and the response to the client is a status 500 with a generic message giving them the GUID to follow up on:       var iUserId = 0;     var dbz = 1 / iUserId;   - logs the divide-by-zero error and responds:   Request URL http://localhost/Sixeyed.WcfRestErrorHandler.Sample/ErrorProneService.svc/dbz     Error Status Code: 500, Description: Something has gone wrong. Please contact our support team with helpdesk ID: C9C5A968-4AEA-48C7-B90A-DEC986F80DA5   The sample demonstrates two techniques for building the response. For client exceptions, a friendly HTML response is sent in the body as well as the status code and description. Personally I prefer not to do that – it doesn’t make sense to get a 400 error and find text/html when you’re expecting application/json, but it’s easy to do if that’s the functionality you want. The other option is to send an empty response, which the sample does with server exceptions.   The obvious extension is to have multiple exceptions representing all the status codes you want to provide, then your code is as simple as throwing the relevant exception – UnauthorizedException, ForbiddenExeption, NotImplementedException etc – anywhere in the stack, and it will be handled nicely.

    Read the article
