Search Results

Search found 47076 results on 1884 pages for 'web logs'.

  • Suphp connection reset by peer in error logs

    - by trante
    In my Linux server's logs I see this record nearly every five minutes. I haven't been able to find the cause for two weeks, and I would be very happy if you could recommend a way to diagnose the problem. It appears in the error_log file. I use PHP 5.3.8 and LiteSpeed.

      2012-09-03 16:01:28.399 [INFO] [95.7.223.91:63814-0#APVH_example.com] connection to [/tmp/lshttpd/APVH_example.com_Suphp.sock.781] on request #151, confirmed, 0, associated process: 845244, running: 0, error: Connection reset by peer!

  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant the PHP application access to the logs? Let's assume the log files can be really large, like hundreds of megabytes. I have some ideas:
      - Write a shell script, run via sudo, that tails the last 512 KB of a log into a separate file the application can read - ineffective, because it forks a new process and reads the data twice.
      - Add www-data to the adm group (which can read the logs) - insecure.
      - Start a PHP process via cron every minute to read the logs - not very good, because it doesn't allow real-time monitoring; the script would also run even when I'm not reading the logs and consume CPU time (the server is in the cloud, and I'll have to pay for it).
      - Create hardlinks to all the log files with lowered permissions - I guess that won't work, because logrotate could recreate the log files and their inode numbers would change.
      - Start a separate nginx/Apache instance under a privileged user that may read the logs.
    Maybe someone has a better solution?
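    A minimal sketch of the sudo idea above, assuming a hypothetical whitelisting wrapper at /usr/local/bin/taillog (the path, the whitelist, and the 512 KB limit are illustrative, not taken from the question):

      #!/bin/sh
      # /usr/local/bin/taillog - copy the tail of an allowed log somewhere www-data can read it
      set -eu
      case "$1" in
          /var/log/syslog|/var/log/apache2/error.log) ;;  # whitelist of readable logs
          *) echo "log not allowed" >&2; exit 1 ;;
      esac
      out="/var/tmp/$(basename "$1").tail"
      tail -c 524288 "$1" > "$out"
      chmod 644 "$out"

      # /etc/sudoers.d/taillog - let the web user run only this script, without a password
      www-data ALL=(root) NOPASSWD: /usr/local/bin/taillog

    The PHP side would then call something like shell_exec('sudo /usr/local/bin/taillog /var/log/syslog') and read /var/tmp/syslog.tail, which is exactly the fork-and-copy overhead the first bullet complains about.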

  • AWStats log format for tomcat access logs which has X-Forwarded-For

    - by Nix
    What should the AWStats LogFormat be for the Tomcat access logs below? I tried these formats, but the external IP addresses do not show up in the AWStats reports:

      LogFormat="%host %other %logname %time1 %methodurl %code %bytesd %refererquot %uaquot %referer %other %other"
      LogFormat="%other %other %logname %time1 %methodurl %code %bytesd %refererquot %uaquot %host_proxy"

    Tomcat valve settings:

      pattern="%h %l %{USER_ID}s %t &quot;%r&quot; %s %b &quot;%{Referer}i&quot; &quot;%{User-Agent}i&quot; &quot;X-Forwarded-For=%{X-Forwarded-For}i&quot; &quot;JSESSIONID=%{JSESSIONID}c&quot; %D"

    Log entry:

      127.0.0.1 - - [04/Nov/2013:13:39:55 +0000] "GET / HTTP/1.1" 200 12345 "https://www.google.com/url?some_url" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36" "X-Forwarded-For=real_ip, proxy_server_internal_ip" "JSESSIONID=-" 12345

  • Apache only logs PHP errors if LogLevel is set to debug

    - by Sudowned
    I'm developing a CodeIgniter application, and for reasons I do not fully understand, errors have stopped being logged in the file specified in the Apache site conf. The page I'm testing definitely generates a 500 error, but that is not reflected in the logs unless I set LogLevel to debug. Setting LogLevel to error or warn results in no errors being logged. I don't think this is a CI issue, because I've been developing this site for close to a week now and errors were logged as expected until I picked the project up again this morning. For what it's worth, I have error_reporting(E_ALL); set in my index.php.

  • Does splitting out Data, Logs, and TempDB matter when using a SAN with SQL 2008?

    - by MVCylon
    I'm not a server admin, so be gentle. I was just at a conference, and in one of the training classes the instructor explained some SQL Server DBA best practices, one of which was to separate the MDF, LDF, and TempDB files onto different drives to increase performance. At our office we have a SAN, and the sysadmins created three SAN drives: one for data, one for logs, and one for TempDB. My intuition tells me that was a wasted effort - was it? I don't know a lot of the details, but if you ask I'll try to fill in any specs needed to answer this question accurately.

  • Shared volume for data (multiple MDF) and another shared volume for logs (multiple LDF)

    - by hagensoft
    I have 3 instances of SQL Server 2008, each on a different machine, with multiple databases on each instance. I have 2 separate LUNs on my SAN for MDF and LDF files. The NDX and TempDB files run on the local drive of each machine. Is it OK for the 3 instances to share one volume for the data files and another volume for the log files? I don't have thin provisioning on the SAN, so I would rather not constrain disk space by creating multiple volumes, but I was advised to create a volume (drive letter) for each instance, if not for each database. I am aware that I should at least split my log and data files. No instance would share the actual database files, just the space on the drive. Any help is appreciated.

  • Syntax for piping varnish logs to rotatelogs

    - by jetboy
    Ubuntu 12.04 Server x64, Varnish 3.0.2. I'm trying to pipe varnishncsa's output through Apache's rotatelogs, and running from the shell, things work fine:

      sudo varnishncsa -a -P /var/run/varnishncsa/varnishncsa.pid | /usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600

    creates a new logfile in /var/log/varnish, rotated every hour (3600 seconds). However, I'm struggling to get the same thing working inside /etc/init.d/varnishncsa:

      PATH=/sbin:/bin:/usr/sbin:/usr/bin
      DAEMON=/usr/bin/$NAME
      PIDFILE=/var/run/$NAME/$NAME.pid
      LOGFILE=/var/log/varnish/varnishncsa.log
      USER=varnishlog
      DAEMON_OPTS="-a -P ${PIDFILE}"
      DAEMON_PIPE="|/usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600"
      ...
      start_varnishncsa() {
          output=$(/bin/tempfile -s.varnish)
          log_daemon_msg "Starting $DESC" "$NAME"
          create_pid_directory
          if start-stop-daemon --start --verbose --pidfile ${PIDFILE} \
              --chuid $USER --exec ${DAEMON} -- ${DAEMON_OPTS} \
              > ${output} 2>&1; then
              log_end_msg 0
          else
              log_end_msg 1
              cat $output
              exit 1
          fi
          rm $output
      }

    Where should I put DAEMON_PIPE in the above code? I've tried it at the end of the start-stop-daemon line, which is where additional command-line parameters usually go, but it isn't creating a logfile.
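    One approach worth trying (a sketch only, not tested against this init script) is to let start-stop-daemon exec a shell and put the whole pipeline inside it, since start-stop-daemon itself does not interpret the "|" in DAEMON_PIPE:

      # hypothetical variant of the start block: run the pipeline via /bin/sh -c
      if start-stop-daemon --start --verbose --pidfile ${PIDFILE} \
          --chuid $USER --exec /bin/sh -- -c \
          "exec ${DAEMON} ${DAEMON_OPTS} | /usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600" \
          > ${output} 2>&1; then

    With this pattern the process launched by start-stop-daemon is the shell, so stopping the service should rely on the pidfile varnishncsa writes via -P rather than on --exec matching.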

  • Squid floods the logs when intercept mode and authentication are enabled together

    - by Horace
    I have done some hefty Googling and I can't seem to find a single solution to the issue I am currently experiencing. Here is a sample of my Squid configuration:

      #
      # DIGEST Auth
      #
      auth_param digest program /usr/sbin/digest_file_auth /etc/squid/digpass
      auth_param digest children 8
      auth_param digest realm LHPROJECTS.LAN Network Proxy
      auth_param digest nonce_garbage_interval 10 minutes
      auth_param digest nonce_max_duration 45 minutes
      auth_param digest nonce_max_count 100
      auth_param digest nonce_strictness on
      # Squid normally listens to port 3128
      http_port 192.168.10.2:3128 transparent
      https_port 192.168.10.2:3128 intercept
      http_port 192.168.10.2:3130

    As noted above, I have three ports defined: two of them are transparent/intercept and one is a regular http port (which I use for authentication). This works rather well, but my logs are being flooded with the entry "authentication not applicable on intercepted requests" whenever a transparent connection is made. So far, I can't find any documentation that describes how to suppress these messages.

  • Apache mod_remoteip and access logs

    - by GioMac
    Since Apache 2.4 I've been using mod_remoteip instead of mod_extract_forwarded to rewrite the client address from the X-Forwarded-For header supplied by frontend servers (Varnish, Squid, Apache, etc.). So far everything works fine with the module - PHP, CGI, WSGI, etc. all see the client addresses they should - but I can't get the client address into the access logs (%a, %h, %{c}a). No luck: I always get 127.0.0.1 (the localhost forwarder). How do I log the client's IP address when using mod_remoteip?
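    For reference, a minimal mod_remoteip setup might look like the following; the proxy address and log path are assumptions, and %a is the format item that carries the rewritten (useragent) client address once the forwarding proxy is trusted:

      # assumed: the frontend proxy connects from localhost; list the real proxy addresses here
      RemoteIPHeader X-Forwarded-For
      RemoteIPInternalProxy 127.0.0.1
      LogFormat "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" proxied
      CustomLog /var/log/apache2/access.log proxied

    If the frontend is not listed in RemoteIPInternalProxy (or RemoteIPTrustedProxy), the header is ignored and %a falls back to the connection peer, which would explain the constant 127.0.0.1.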

  • Compare Error logs across service fleet, find unique errors

    - by neuroelectronic
    I'm looking for a tool where I can list the servers to check and the location of the log file, and it would return the most common errors across those servers (say 2 or 3 servers, for report brevity) in a report something like this:

      Server.A      Server.B      Server.C
      --------      --------      --------
      42 error.X    39 error.X    61 error.X
      21 error.Y     7 error.Y     5 error.A
      17 error.B     6 error.A     4 error.Y
       4 error.A     2 error.R     3 error.S
       3 error.R     1 error.S     1 error.R

    It would exclude timestamps and other error details, just grepping out the common substrings and listing them like so. I'd be able to look at the table and see that error.B is unique to Server.A and conclude that there is something up with Server.A. Does something like this already exist, or is it something I'll have to code myself? I'm not necessarily looking for this specific report, just the functionality to find unique errors across a set of error logs.
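    In the absence of a ready-made tool, a loop along these lines could at least produce the per-server counts to compare; the host names, log path, and the error pattern are placeholders taken from the mock report, not real values:

      #!/bin/sh
      # print the most common error tokens for each server, highest count first
      for h in server.a server.b server.c; do
          echo "== $h =="
          ssh "$h" "grep -oE 'error\.[A-Za-z]+' /var/log/app/error.log | sort | uniq -c | sort -rn"
      done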

  • Why do Apache access logs show a time to serve of zero - timer resolution issue?

    - by Rob
    When going through Apache 2.2 access logs that use the %D directive (the time taken to serve the request, in microseconds), I've noticed it is very common for a 200 response with a given number of bytes to have a "time to serve" of zero. For example, a given URL might be requested 10 times in a single day, a 200 response is sent for all of them, and all return, say, 1000 bytes. However, 7 of them have a "time to serve" of zero, while the other 3 have a time to serve of 1 second. Is this simply because the request was served faster than the resolution of the timer Apache uses?

  • Apache logging: rotating logs on Win32?

    - by Jason S
    I noticed my disk space disappearing faster than expected and finally narrowed it down to a rewrite.log file that was 4 GB in size! Is there a way to rotate the various Apache logs (rewrite, error, access, etc.) on a Win32 PC so that only the most recent entries are kept and I can limit the resulting data size? I found the section about log rotation on Apache's website, but it's Unix-centric. Edit: I got rotatelogs.exe to work, and it's great except that it slows the server response down noticeably, so I rejected the idea of using it.
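    For reference, piped logging with the bundled rotatelogs.exe looks roughly like this in httpd.conf (the paths and the daily 86400-second interval are assumptions; this is the mechanism the edit above found too slow):

      # rotate the access and error logs daily; paths are relative to ServerRoot
      CustomLog "|bin/rotatelogs.exe logs/access.%Y-%m-%d.log 86400" common
      ErrorLog "|bin/rotatelogs.exe logs/error.%Y-%m-%d.log 86400"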

  • Deselect dates in Web Calendar c#

    - by yomismo
    Hello, I'm trying to select and de-select dates on an ASP.NET C# Calendar control. The problem I have is that I can select or deselect dates except when there is only a single date selected: clicking on it does not trigger the SelectionChanged event, so I need to do something in the DayRender event, but I'm not sure what or how. Any ideas? TIA. Code so far:

      public static List<DateTime> list = new List<DateTime>();

      protected void Calendar1_DayRender(object sender, DayRenderEventArgs e)
      {
          if (e.Day.IsSelected == true)
          {
              list.Add(e.Day.Date);
          }
          Session["SelectedDates"] = list;
      }

      protected void Calendar1_SelectionChanged(object sender, EventArgs e)
      {
          DateTime selection = Calendar1.SelectedDate;
          if (Session["SelectedDates"] != null)
          {
              List<DateTime> newList = (List<DateTime>)Session["SelectedDates"];
              foreach (DateTime dt in newList)
              {
                  Calendar1.SelectedDates.Add(dt);
              }
              if (searchdate(selection, newList))
              {
                  Calendar1.SelectedDates.Remove(selection);
              }
              list.Clear();
          }
      }

      public bool searchdate(DateTime date, List<DateTime> dates)
      {
          var query = from o in dates where o.Date == date select o;
          if (query.ToList().Count == 0)
          {
              return false;
          }
          else
          {
              return true;
          }
      }

  • Failed to send byte array to JAX-WS web service on Axis

    - by user304309
    Hi, I have made a small example to show my problem. Here is my web service:

      package service;

      import javax.jws.WebMethod;
      import javax.jws.WebService;

      @WebService
      public class BytesService {

          @WebMethod
          public String redirectString(String string) {
              return string + " - is what you sended";
          }

          @WebMethod
          public byte[] redirectBytes(byte[] bytes) {
              System.out.println("### redirectBytes");
              System.out.println("### bytes lenght:" + bytes.length);
              System.out.println("### message" + new String(bytes));
              return bytes;
          }

          @WebMethod
          public byte[] genBytes() {
              byte[] bytes = "Hello".getBytes();
              return bytes;
          }
      }

    I pack it in a jar file and store it in the "axis2-1.5.1/repository/servicejars" folder. Then I generate a client proxy using Eclipse's default EE utilities and use it in my code as follows:

      BytesService service = new BytesServiceProxy();

      System.out.println("Redirect string");
      System.out.println(service.redirectString("Hello"));

      System.out.println("Redirect bytes");
      byte[] param = { (byte) 21, (byte) 22, (byte) 23 };
      System.out.println(param.length);
      param = service.redirectBytes(param);
      System.out.println(param.length);

      System.out.println("Gen bytes");
      param = service.genBytes();
      System.out.println(param.length);

    And here is what my client prints:

      Redirect string
      Hello - is what you sended
      Redirect bytes
      3
      0
      Gen bytes
      5

    And on the server I have:

      ### redirectBytes
      ### bytes lenght:0
      ### message

    So a byte array can be transferred from the service normally, but it does not arrive from the client, even though strings work fine. For now I use a Base64 encoder, but I dislike this solution.
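    A minimal sketch of the Base64 workaround mentioned at the end, here using java.util.Base64 (Java 8+; the poster's encoder may differ) and passing the data as a String instead of byte[]; redirectBytesAsString is a hypothetical String-based operation, not part of the posted service:

      import java.util.Base64;

      public class Base64Workaround {
          public static void main(String[] args) {
              // client side: encode the raw bytes before the call
              byte[] param = { (byte) 21, (byte) 22, (byte) 23 };
              String encoded = Base64.getEncoder().encodeToString(param);
              // service.redirectBytesAsString(encoded);  // hypothetical String-based operation

              // service side: decode back to the original bytes
              byte[] decoded = Base64.getDecoder().decode(encoded);
              System.out.println(decoded.length);  // prints 3
          }
      }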

  • Google appengine authentication on iPhone web app on the home screen

    - by Rakesh Pai
    I'm using Google App Engine to develop a web application that is meant to be used in both the browser and on the iPhone. I have purchased a domain name for this application so that I have a pretty URL, and I use the Users API for authentication. This works just fine in desktop browsers and iPhone Safari. The user can add the application to the home screen (by tapping the "+" in the bottom toolbar). However, when that's done, it seems the cookies set by Google are not in effect within this "application", and the user is effectively logged out. To make matters worse, when the user clicks on the login link (as generated by GAE), the app closes and Safari opens to complete the login. Since the session is apparently not shared between the two, the login process is futile, and the home-screen version of the app remains logged out. It seems that cookies are not shared between a home-screen app and Safari. It also seems that the home-screen app will only work within its own domain, and any redirect to another domain will open Safari. Any idea how I can go about fixing this?

  • ASP.NET Convert to Web App question

    - by mattgcon
    The following web control will not convert to a web application for some reason (i.e. adding the designer and .cs pages); I get a "page directive is missing" error. What is wrong with the code that prevents it from converting?

      <%@ Control Language="C#" AutoEventWireup="true" %>
      <%@ Register TagPrefix="ipam" TagName="tnavbar" src="~/controls/tnavbar.ascx" %>

      <script language="C#" runat="server">
          string strCurrent = "";
          string strDepth = "";

          public string Current
          {
              get { return strCurrent; }
              set { strCurrent = value; }
          }

          public string Depth
          {
              get { return strDepth; }
              set { strDepth = value; }
          }

          void Page_Load(Object sender, EventArgs e)
          {
              idTnavbar.Current = strCurrent;
              idTnavbar.Item1Link = strDepth + idTnavbar.Item1Link;
              idTnavbar.Item2Link = strDepth + idTnavbar.Item2Link;
              idTnavbar.Item3Link = strDepth + idTnavbar.Item3Link;
              idTnavbar.Item4Link = strDepth + idTnavbar.Item4Link;
              idTnavbar.Item5Link = strDepth + idTnavbar.Item5Link;
              idTnavbar.Item6Link = strDepth + idTnavbar.Item6Link;
              idTnavbar.Item7Link = strDepth + idTnavbar.Item7Link;
          }
      </script>

      <ipam:tnavbar id="idTnavbar"
          Item1="2000 -- 2001" Item1Link="2000_--_2001.aspx"
          Item2="2001 -- 2002" Item2Link="2001_--_2002.aspx"
          Item3="2002 -- 2003" Item3Link="2002_--_2003.aspx"
          Item4="2003 -- 2004" Item4Link="2003_--_2004.aspx"
          Item5="2004 -- 2005" Item5Link="2004_--_2005.aspx"
          Item6="2005 -- 2006" Item6Link="2005_--_2006.aspx"
          Item7="2006 -- 2007" Item7Link="2006_--_2007.aspx"
          runat="server" />

    Please help; if I can solve this issue, many more pages will be fixed too.

  • Request for the permission of type 'System.Web.AspNetHostingPermission' failed when compiling web site

    - by ahsteele
    I have been using Windows 7 for a while but have not had to work with a particular legacy intranet application since my upgrade. Unfortunately, this application is set up as an ASP.NET Website project hosted on a remote server. When I have the website open in Visual Studio 2008 and try to debug it, I get the following compiler error:

      Request for the permission of type 'System.Web.AspNetHostingPermission' failed

    To resolve this issue on Windows Vista machines, I would change the machine's .NET Security Configuration trust level to full for the local intranet (fix outlined here). I believe this configuration utility relied on mscorcfg.msc, which from some cursory research appears to be part of the .NET 2.0 SDK. I have tried to follow the instructions from this Microsoft Support article, running the command below, to no avail:

      Drive:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1 -url "file:////\\computername\sharename\*" FullTrust -exclusive on

    Presently, I have the following .NET and ASP.NET components installed on my machine:
      - Microsoft .NET Compact Framework 2.0 SP2
      - Microsoft .NET Compact Framework 3.5
      - Microsoft .NET Framework 4 Client Profile
      - Microsoft .NET Framework 4 Extended
      - Microsoft .NET Framework 4 Multi-Targeting Pack
      - Microsoft ASP.NET MVC 1.0
      - Microsoft ASP.NET MVC 2
      - Microsoft ASP.NET MVC 2 - Visual Studio 2008 Tools
      - Microsoft ASP.NET MVC 2 - Visual Studio 2010 Tools
    Do I need to install the .NET 2.0 SDK? Am I issuing the caspol command incorrectly? Is there something else that I am missing?

  • Building Web Application project using MSBuild from command line on 64-bit: missing targets file

    - by James Allen
    I am building a solution containing a web application project using MSBuild from PowerShell like this:

      msbuild "/p:OutDir=$build_dir\" $solution_file

    This works fine for me on 32-bit, but on a 64-bit machine I run into this error:

      error MSB4019: The imported project "C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

    I am using Visual Studio 2008 and PowerShell v2. The problem has already been documented here and here. Basically, on a 64-bit install of VS, the Microsoft.WebApplication.targets file needed by MSBuild is in the Program Files (x86) directory, not Program Files, but MSBuild doesn't recognise this and looks in the wrong place. The two known solutions are not ideal:
      - Manually copy the file on 64-bit from Program Files (x86) to Program Files. This is a poor solution - every dev would have to do it by hand.
      - Manually edit the csproj file so MSBuild looks in the right place. Again not ideal: I would rather not have everyone on 64-bit manually edit csproj files on every new project, e.g.
        <Import Project="$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)" Condition="Exists('$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)')" />
    Ideally I want a way to tell MSBuild to import the targets file from the right place from the command line, but I can't work out how to do that. Any solutions?
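    One possible workaround (a sketch, not verified on this setup) is to invoke the 32-bit MSBuild explicitly from PowerShell: a 32-bit process resolves $(MSBuildExtensionsPath) under Program Files (x86), which is where the 64-bit VS install actually puts Microsoft.WebApplication.targets:

      # hypothetical: call the 32-bit MSBuild 3.5 from the Framework (not Framework64) directory
      & "$env:windir\Microsoft.NET\Framework\v3.5\MSBuild.exe" "/p:OutDir=$build_dir\" $solution_file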

  • How to check for server-side updates in an iPhone application web service request

    - by Nirmal
    Hello all. I have a listing-style iPhone application that always calls a web service on my PHP server, fetches the data, and displays it on the iPhone screen. At the moment the application requests and fetches the data from the server every single time. Now I want to replace that with the following set of actions:
      - Every time the application is launched on the iPhone, it should check whether there is new data on the server.
      - Only if the server replies "true" does the application make a request to fetch the data.
      - If the server replies "false", the application displays the data already cached in local phone memory.
    To implement this on the server side (PHP and MySQL), I am planning the following:

      Table: tblNewerData
        id    newDataFlag
        ==    ============
        1     true

      Trigger: tgrUpdateNewData

    The trigger would update the tblNewerData.newDataFlag field on inserts into my main table. The iPhone app would request the tblNewerData.newDataFlag field each time, and only if it finds true would it make a new fetch request; if it finds false, the cached version of the data is displayed. So I want to know: is this the correct way to do it, or is there a smarter option available? Thanks in advance.
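    A minimal sketch of the server-side check endpoint under this schema; the table and column names come from the question, while the connection details, file name, and JSON shape are placeholders:

      <?php
      // check.php - returns the flag so the app can decide whether to fetch the full listing
      $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');  // placeholder credentials
      $stmt = $pdo->query('SELECT newDataFlag FROM tblNewerData WHERE id = 1');
      $row = $stmt->fetch(PDO::FETCH_ASSOC);
      header('Content-Type: application/json');
      echo json_encode(array('newData' => $row['newDataFlag']));

    A variation worth considering is storing and returning a last-modified timestamp instead of a boolean flag, so the check does not need to be reset after each client fetches.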

  • Why do HttpWebRequest and IWebProxy only work at weird times

    - by Mike Webb
    Another question about web proxies. Here is my code:

      IWebProxy Proxya = System.Net.WebRequest.GetSystemWebProxy();
      Proxya.Credentials = CredentialCache.DefaultNetworkCredentials;
      HttpWebRequest rqst = (HttpWebRequest)WebRequest.Create(targetServer);
      rqst.Proxy = Proxya;
      rqst.Timeout = 5000;
      try
      {
          rqst.GetResponse();
      }
      catch (WebException wex)
      {
          connectErrMsg = wex.Message;
          proxyworks = false;
      }

    This code fails the first time it is called. After that, on successive calls, it works sometimes but not others, and it never hits the catch block. Now the weird part: if I add a MessageBox.Show(msg) call before the GetResponse() call, the whole thing works every time. For example:

      try
      {
          // ========Here is where I make the call and get the response========
          System.Windows.Forms.MessageBox.Show("Getting Response");
          // ========This makes the whole thing work every time========
          rqst.GetResponse();
      }
      catch (WebException wex)
      {
          connectErrMsg = wex.Message;
          proxyworks = false;
      }

    I'm baffled about why it behaves this way. I don't know if the timeout is not working (it's in milliseconds, not seconds, so it should time out after 5 seconds, right?) or what is going on. The most confusing thing is that the message box call makes it all work. Any help and suggestions on what is happening are appreciated; these are the kind of bugs that drive me absolutely out of my mind.
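    One thing worth checking, offered as an assumption rather than a diagnosis: the response returned by GetResponse() is never closed here, and undisposed responses keep connections out of the pool, which can make later requests stall or time out intermittently. A sketch with the response disposed:

      try
      {
          // disposing the response releases the underlying connection back to the pool
          using (WebResponse resp = rqst.GetResponse())
          {
              // inspect resp.Headers or read the response stream here if needed
          }
          proxyworks = true;
      }
      catch (WebException wex)
      {
          connectErrMsg = wex.Message;
          proxyworks = false;
      }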

  • sproutcore vs javascriptMVC for web app development

    - by swami
    Hi, I want to use a JavaScript MVC framework for a complex web application (which will be one of a set of related apps and pages) for an intranet in a digital archive. I have been looking at SproutCore and JavaScriptMVC, and I want to choose one framework and stick with it. Does anybody know the distinguishing features when comparing these two? I want something simple and straightforward that I can customize/hack easily and that doesn't get in my way too much, but that at the same time gives me a basis for keeping my code nicely organized and event-driven. I also plan on using jQuery substantially. I know SproutCore is backed by Apple, looks like it is getting more popular by the day, and has a nice green website :), whereas JavaScriptMVC looks less professional, with less of a following and less momentum behind it. I've done the tutorials for both and I was more impressed by SproutCore (in the JMVC tutorial you don't really do anything substantial), but somewhere in the back of my mind I feel that JMVC might just be better because it doesn't try to do too much - it just gives you MVC functionality based on a couple of jQuery plugins, and you can use jQuery for everything else, so it's flexible. Whereas SproutCore seems to have more of its own API etc., which is also nice in a way, but then you're kind of stuck within that... hmm, I'm confused :). Any thoughts would be much appreciated.

  • Extracting URIs from RDF web page in Java using Jena Library

    - by Prannoy Mittal
    I have written the following code to extract URIs from a web page with content type application/rdf+xml for a Linked Data application:

      public static void test(String url) {
          try {
              Model read = ModelFactory.createDefaultModel().read(url);
              System.out.println("to go");
              StmtIterator si;
              si = read.listStatements();
              System.out.println("to go");
              while (si.hasNext()) {
                  Statement s = si.nextStatement();
                  Resource r = s.getSubject();
                  Property p = s.getPredicate();
                  RDFNode o = s.getObject();
                  System.out.println(r.getURI());
                  System.out.println(p.getURI());
                  System.out.println(o.asResource().getURI());
              }
          } catch (JenaException | NoSuchElementException c) {
          }
      }

    But the code above does not extract all the URIs; it returns only a few of them. Please guide me as to where I went wrong. For example, for this XML file:

      <?xml version="1.0"?>
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
               xmlns:dc="http://purl.org/dc/elements/1.1/"
               xmlns:ex="http://example.org/stuff/1.0/">
        <rdf:Description rdf:about="http://www.w3.org/TR/rdf-syntax-grammar"
                         dc:title="RDF/XML Syntax Specification (Revised)">
          <ex:editor>
            <rdf:Description ex:fullName="Dave Beckett">
              <ex:homePage rdf:resource="http://purl.org/net/dajobe/" />
            </rdf:Description>
          </ex:editor>
        </rdf:Description>
      </rdf:RDF>

    the output is:

      Subject URI is http://www.w3.org/TR/rdf-syntax-grammar
      Predicate URI is http://example.org/stuff/1.0/editor
      Object URI is null
      Subject URI is http://www.w3.org/TR/rdf-syntax-grammar
      Predicate URI is http://purl.org/dc/elements/1.1/title
      Website is read
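    Two details of the posted code can hide statements; this is an observation based on the Jena API rather than a confirmed fix. asResource() throws when the object is a literal (such as the dc:title value), and the empty catch block then swallows the exception and silently ends the loop, so everything after the offending statement is lost. A sketch of the loop that guards the object type:

      while (si.hasNext()) {
          Statement s = si.nextStatement();
          System.out.println("Subject URI is " + s.getSubject().getURI());     // null for blank nodes
          System.out.println("Predicate URI is " + s.getPredicate().getURI());
          RDFNode o = s.getObject();
          if (o.isResource()) {
              System.out.println("Object URI is " + o.asResource().getURI());  // null for blank nodes
          } else {
              System.out.println("Object literal is " + o.toString());
          }
      }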

  • Unable to call RESTful web service methods

    - by Alessandro
    Hello, I'm trying to dive into the world of RESTful web services and have started with the following template:

      [ServiceContract]
      [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
      [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
      public class Test
      {
          // TODO: Implement the collection resource that will contain the SampleItem instances
          [WebGet(UriTemplate = ""), OperationContract]
          public List<SampleItem> GetCollection()
          {
              // TODO: Replace the current implementation to return a collection of SampleItem instances
              return new List<SampleItem>() { new SampleItem() { Id = 1, StringValue = "Hello" } };
          }

          [WebInvoke(UriTemplate = "", Method = "POST"), OperationContract]
          public SampleItem Create(SampleItem instance)
          {
              // TODO: Add the new instance of SampleItem to the collection
              throw new NotImplementedException();
          }

          [WebGet(UriTemplate = "{id}"), OperationContract]
          public SampleItem Get(string id)
          {
              // TODO: Return the instance of SampleItem with the given id
              throw new NotImplementedException();
          }

          [WebInvoke(UriTemplate = "{id}", Method = "PUT"), OperationContract]
          public SampleItem Update(string id, SampleItem instance)
          {
              return new SampleItem { Id = 99, StringValue = "Done" };
          }

          [WebInvoke(UriTemplate = "{id}", Method = "DELETE"), OperationContract]
          public void Delete(string id)
          {
              // TODO: Remove the instance of SampleItem with the given id from the collection
              throw new NotImplementedException();
          }
      }

    I am able to perform the GET operation, but I am unable to perform PUT, POST or DELETE requests. Can anyone explain how to perform these operations and how to construct the correct URLs? Best regards, Alessandro
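    As a rough illustration of how the non-GET operations in such a template are usually exercised from a client - a sketch only; the base address and the exact XML element names and namespace depend on the hosting configuration and the SampleItem data contract, so treat them as placeholders:

      // PUT to <base address>/99 maps to Update(); POST to the base address maps to Create()
      var req = (HttpWebRequest)WebRequest.Create("http://localhost:1234/Test/99");  // placeholder URL
      req.Method = "PUT";
      req.ContentType = "application/xml";
      using (var writer = new StreamWriter(req.GetRequestStream()))
      {
          // placeholder body; the real element names/namespace come from the SampleItem contract
          writer.Write("<SampleItem><Id>99</Id><StringValue>Done</StringValue></SampleItem>");
      }
      using (var resp = (HttpWebResponse)req.GetResponse())
      {
          Console.WriteLine(resp.StatusCode);
      }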

  • Web user expectations

    - by Ash
    When designing a good web GUI, what expectations can we assume an end user has? I've come up with the following, but I wonder if there are any others you can suggest:
      - If I click on a hyperlink, it will take me to another page or another part of this page.
      - If I tick/untick a checkbox, it might alter the page state (enable/disable elements).
      - If I click on a button, I expect it to do something to data.
      - If I click on a button, I expect something to happen immediately (either to the current page, or I am taken to another page).
      - If I have clicked on a hyperlink and it has taken me to another page, I expect to be able to use the Back button to get back to the previous page in a state similar to that in which I left it.
      - If I change something in a form, I can change it back to its previous value if necessary.
      - Unless I click on the 'Submit' button, nothing should happen to my data.
      - If I bookmark/favourite a page, it should show the same related data each time I visit it.
      - If text is underlined and looks like a link, it should be a link and act as one.
    The reasoning behind this question is more of a 'UI from hell' one. For example, I have come across pages where ticking a checkbox next to a record deletes it, straight away, via Ajax. To me that just seems wrong: a checkbox is a toggle, which a delete operation definitely isn't!
