Search Results

Search found 26126 results on 1046 pages for 'generic service contract'.

Page 535 of 1046

  • WCF and streaming requests and responses

    - by Cheeso
    Is it correct that in WCF, I cannot have a service write to a stream that is received by the client? My understanding is that streaming is supported in WCF for requests, responses, or both. Is it true that in all cases, the receiver of the stream must invoke Read? I would like to support a scenario where the receiver of the stream can Write on it. Is this supported? Let me show it this way. The simplest example of streaming in WCF is the service returning a FileStream to a client. This is a streamed response. The server code is like this:

        [ServiceContract]
        public interface IStreamService
        {
            [OperationContract]
            Stream GetData(string fileName);
        }

        public class StreamService : IStreamService
        {
            public Stream GetData(string filename)
            {
                FileStream fs = new FileStream(filename, FileMode.Open);
                return fs;
            }
        }

    And the client code is like this:

        StreamDemo.StreamServiceClient client = new WcfStreamDemoClient.StreamDemo.StreamServiceClient();
        Stream str = client.GetData(@"c:\path\to\myfile.dat");
        do
        {
            b = str.ReadByte(); //read next byte from stream
            ...
        } while (b != -1);

    (example taken from http://blog.joachim.at/?p=33) Clear, right? The server returns the Stream to the client, and the client invokes Read on it. Is it possible for the client to provide a Stream, and the server to invoke Write on it? In other words, rather than a pull model - where the client pulls data from the server - it is a push model, where the client provides the "sink" stream and the server writes into it. Is this possible in WCF, and if so, how? What are the config settings required for the binding, interface, etc.? The analogy is the Response.OutputStream from an ASP.NET request. In ASP.NET, any page can invoke Write on the output stream, and the content is received by the client. Can I do something similar in WCF? Thanks.
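
    To frame the question, here is a minimal sketch of how the pull-style client above could still push into a sink stream it owns, assuming the same IStreamService contract; the binding values, endpoint address, and file paths are placeholders, not taken from the original example.

        using System;
        using System.IO;
        using System.ServiceModel;

        [ServiceContract]
        public interface IStreamService
        {
            [OperationContract]
            Stream GetData(string fileName);
        }

        class StreamClientSketch
        {
            static void Main()
            {
                // Response streaming has to be enabled on the binding, otherwise WCF buffers the reply.
                var binding = new BasicHttpBinding
                {
                    TransferMode = TransferMode.StreamedResponse,
                    MaxReceivedMessageSize = long.MaxValue
                };
                var factory = new ChannelFactory<IStreamService>(
                    binding, new EndpointAddress("http://localhost:8000/StreamService"));
                IStreamService proxy = factory.CreateChannel();

                // The receiver still pulls, but it can push each chunk into any sink it owns,
                // which approximates the Response.OutputStream pattern described above.
                using (Stream source = proxy.GetData(@"c:\path\to\myfile.dat"))
                using (Stream sink = File.Create(@"c:\path\to\local-copy.dat"))
                {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        sink.Write(buffer, 0, read);
                    }
                }
            }
        }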

    Read the article

  • Multiple Java Versions

    - by user327486
    There are a few applications that use Java 1.6.2x, a few that use 1.7.1X, and others that use 1.7.4X. Since we decided to push all three applications to the users, how do we make each application use its particular version? There are a few web-based apps and enterprise apps that require only a specific set of Java versions, which is creating issues. OS: Win 7. IE: ver 8. Workaround in progress: trying to apply a batch file for each app to set the required Java version path, but that is not the required solution. Is there any generic way that automatically maps each application to its required Java version, instead of running a batch file for each application? Looking forward to your valuable suggestions.

    Read the article

  • Testing fault tolerant code

    - by Robert
    I'm currently working on a server application where we have agreed to try and maintain a certain level of service. The level of service we want to guarantee is: if a request is accepted by the server and the server sends an acknowledgement to the client, we want to guarantee that the request will happen, even if the server crashes. As requests can be long running and the acknowledgement time needs to be short, we implement this by persisting the request, then sending an acknowledgement to the client, then carrying out the various actions to fulfill the request. As actions are carried out they too are persisted, so the server knows the state of a request on start up, and there are also various reconciliation mechanisms with external systems to check the accuracy of our logs. This all seems to work fairly well, but we have difficulty saying this with any conviction as we find it very difficult to test our fault tolerant code. So far we've come up with two strategies, but neither is entirely satisfactory: 1. have an external process watch the server code and then try to kill it off at what the external process thinks is an appropriate point in the test; 2. add code to the application that will cause it to crash at certain known critical points (see the sketch below). My problem with the first strategy is that the external process cannot know the exact state of the application, so we cannot be sure we're hitting the most problematic points in the code. My problem with the second strategy, although it gives more control over where the fault takes place, is that I do not like having code to inject faults within my application, even with optional compilation etc. I fear it would be too easy to overlook a fault injection point and have it slip into a production environment.
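
    For what it's worth, one way the second strategy is sometimes kept out of production builds is to guard every crash point behind a compilation symbol, so the injection code does not even exist in release binaries. The sketch below is only an illustration of that idea; the FAULT_TESTING symbol, the FaultInjector name, and the CRASH_AT environment variable are assumptions, not part of the original question.

        using System;
        using System.Diagnostics;

        // Calls to CrashPoint compile to nothing unless the build defines FAULT_TESTING,
        // so a forgotten call site cannot fire in a production binary.
        public static class FaultInjector
        {
            [Conditional("FAULT_TESTING")]
            public static void CrashPoint(string name)
            {
                if (Environment.GetEnvironmentVariable("CRASH_AT") == name)
                {
                    // Terminate abruptly, without running finally blocks or finalizers,
                    // to simulate a hard crash at this exact point.
                    Environment.FailFast("Injected fault at: " + name);
                }
            }
        }

        // Example call sites in the request pipeline (illustrative names):
        //   PersistRequest(request);
        //   FaultInjector.CrashPoint("after-persist-before-ack");
        //   SendAcknowledgement(client);
        //   FaultInjector.CrashPoint("after-ack-before-work");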

    Read the article

  • How to resolve 'Error Dependency is not satisfiable: libascound2' on ubuntu

    - by michael
    Hi, I am trying to install skype-ubuntu-intrepid_2.1.0.91-1.i386.deb on Ubuntu 8.04, but in the Package Installer I get 'Error: Dependency is not satisfiable: libascound2'. I have tried:

        $ sudo apt-get install libasound2
        [sudo] password for novarra:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        libasound2 is already the newest version.
        The following packages were automatically installed and are no longer required:
          linux-headers-2.6.24-24-generic libdns35 linux-headers-2.6.24-24
        Use 'apt-get autoremove' to remove them.

    I'd appreciate it if anyone can help me with this.

    Read the article

  • Unable to use .xjb file inside wsdlc ant task

    - by Govind
    Hi, I have a requirement to customize the default conversion provided by JAXB. For the xs:date type we need to show only the date part (removing the time). I have created an .xjb file and used the xjc command to generate the required classes. This is working perfectly and I got the desired results. Since in our project we create the web service jars using Ant, I tried to include it inside the wsdlc Ant task, but I get the error: dateFormatter.xjb is not a xsd config file.

        <target name="generate-service-from-wsdl" depends="validate-weblogic, clean">
            <taskdef name="wsdlc" classname="weblogic.wsee.tools.anttasks.WsdlcTask" />
            <wsdlc srcWsdl="${sourceWsdl}/My_Gateway.wsdl"
                   verbose="on"
                   destJwsDir="${targetDir}"
                   destImplDir="${targetDir}/impl"
                   packageName="${servicePackage}" >
                <xsdConfig dir="wsdls/xjb" includes="dateFormatter.xjb"/>
            </wsdlc>
        </target>

    I am using WebLogic 9.2, and I also tried the WebLogic 10.3 jar using the binding tag instead of xsdConfig, but I get the same error. Please let me know where I am making the mistake and how to correct it. Thanks, Govind.

    Read the article

  • Convert C# Silverlight App To AZURE CLOUD Platform?!?!

    - by Goober
    The Scenario: I've been following Brad Abrams' Silverlight tutorial on his blog. I have tried following Brad's "How to deploy your app to the Cloud" tutorial, however I'm struggling with it, even though it is in the same context as the first tutorial.

    The Question: Is the application structure essentially the same as the original "non-cloud based version"? If not, which parts are different? (I get that there is a Cloud Service project added to the solution) - but what else?

    Connection String Issue: In my "non-cloud based application", I make use of the ADO.NET Entity Framework to communicate with my database. The connection string in my web.config file looks like:

        <add name="InmZenEntities" connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=CHASEDIGITALWS3;Initial Catalog=InmarsatZenith;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /></connectionStrings>

    However, the connection string that I get from SQL Azure looks like:

        Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=master;User ID=simongilbert;Password=myPassword;Trusted_Connection=False;

    So how do I go about merging the two when I move the "non-cloud based application" to THE CLOUD?! Any help regarding converting a Silverlight application to a cloud service and deploying it would be greatly appreciated.
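
    One way to think about the merge (a sketch only, using the values copied from the question; the target catalog name is an assumption rather than known): the Entity Framework string is a wrapper whose provider connection string portion is the part that changes, so EntityConnectionStringBuilder can recombine the metadata from the old string with the SQL Azure credentials.

        using System;
        using System.Data.EntityClient;

        class ConnectionStringMergeSketch
        {
            static void Main()
            {
                // Keep the metadata and provider from the existing EF connection string,
                // and swap in the SQL Azure server, catalog and credentials.
                var builder = new EntityConnectionStringBuilder
                {
                    Provider = "System.Data.SqlClient",
                    Metadata = "res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl",
                    ProviderConnectionString =
                        "Server=tcp:k12ioy1rsi.ctp.database.windows.net;" +
                        "Database=InmarsatZenith;" +   // assumption: the real catalog rather than 'master'
                        "User ID=simongilbert;Password=myPassword;" +
                        "Trusted_Connection=False;Encrypt=True;"
                };

                // This string is what the "InmZenEntities" entry in web.config (or the
                // cloud service configuration) would carry once the app targets SQL Azure.
                Console.WriteLine(builder.ToString());
            }
        }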

    Read the article

  • How can I have APF block script kiddies that mod_security detects?

    - by Gaia
    In one of the vhosts' error_log I found thousands of lines like these, all from the same IP: [Mon Apr 19 08:15:59 2010] [error] [client 61.147.67.206] mod_security: Access denied with code 403. Pattern match "(chr|fwrite|fopen|system|e?chr|passthru|popen|proc_open|shell_exec|exec|proc_nice|proc_terminate|proc_get_status|proc_close|pfsockopen|leak|apache_child_terminate|posix_kill|posix_mkfifo|posix_setpgid|posix_setsid|posix_setuid|phpinfo)\\\\(.*\\\\)\\\\;" at THE_REQUEST [id "330001"] [rev "1"] [msg "Generic PHP exploit pattern denied"] [severity "CRITICAL"] [hostname "x.x.x.x"] [uri "//webmail/config.inc.php?p=phpinfo();"] Given how obvious the situation is, how come mod_security isn't automatically adding at least that IP to the deny rules? There is no way someone hasn't thought of this before...

    Read the article

  • What is the preferred or accepted method for testing proxy settings?

    - by Mike Webb
    I have a lot of trouble with the internet connectivity in the program I am working on, and it all seems to spawn from some issue with the proxy settings. Most of the issues at this point are fixed, but the issue I am having now is that my method of testing the proxy settings makes some users wait for long periods of time. Here is what I do:

        System.Net.WebClient webClnt = new System.Net.WebClient();
        webClnt.Proxy = proxy;
        webClnt.Credentials = proxy.Credentials;
        byte[] tempBytes;
        try
        {
            tempBytes = webClnt.DownloadData(url.Address);
        }
        catch
        {
            //Invalid proxy settings
            //Code to handle the exception goes here
        }

    This is the only way that I've found to test if the proxy settings are correct. I tried making a web service call to our web service, but no proxy settings are needed when making the call; it will work even if I have bogus proxy settings. The above method, though, has no timeout member that I can find and set, and I use DownloadData as opposed to DownloadDataAsync because I need to wait until the method is done so that I can know if the settings are correct before continuing on in the program. Any suggestions on a better method or a workaround for this method are appreciated. Mike
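
    A minimal sketch of one possible workaround for the missing timeout, assuming the same proxy and url objects as above; the TimedWebClient name and the timeout values are illustrative choices, not an established API.

        using System;
        using System.Net;

        // A WebClient subclass that applies a short timeout to the underlying request,
        // so a bad proxy fails fast instead of making the user wait.
        public class TimedWebClient : WebClient
        {
            public int TimeoutMilliseconds { get; set; }

            public TimedWebClient() { TimeoutMilliseconds = 10000; }

            protected override WebRequest GetWebRequest(Uri address)
            {
                WebRequest request = base.GetWebRequest(address);
                if (request != null)
                {
                    // For http/https URLs this is an HttpWebRequest, which honors Timeout.
                    request.Timeout = TimeoutMilliseconds;
                }
                return request;
            }
        }

        // Usage, mirroring the test from the question:
        //   using (var client = new TimedWebClient { TimeoutMilliseconds = 5000 })
        //   {
        //       client.Proxy = proxy;
        //       client.Credentials = proxy.Credentials;
        //       byte[] tempBytes = client.DownloadData(url.Address); // throws WebException on bad proxy or timeout
        //   }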

    Read the article

  • ASP.NET 2.0 and COM Port Communication

    - by theaviator
    Hello guys, I have a managed DLL which communicates with the devices attached on COM/serial ports. The desktop WinForms application sends requests on the ports and receives/stores data in memory. In the WinForms app I have added a reference to the DLL and I am using its methods. This works well. Now, there is a situation where I need to show this data from the serial/COM port on a web page, and users should also be able to send requests to the ports using this DLL. I have made a web app in ASP.NET (2.0), added a reference to the DLL, and I am able to use it: the DLL communicates on the COM port upon a button click on the web page, and the response is shown on the web page. However, I am not happy with this approach and strongly feel that it is a bad approach. Also, the development server crashes after 3-4 requests. What is the best approach in this scenario? If I use a Windows service, then how would my ASP.NET app communicate with the Windows service? Or can this be easily done using WCF? I have not used WCF at any time, nor any .NET Remoting techniques. Please suggest the best architecture in this scenario (a rough sketch of the Windows service plus WCF option follows below). Thank you
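
    To make the Windows-service option concrete, here is a minimal sketch of how the DLL could sit behind a self-hosted WCF endpoint that the ASP.NET pages call; the contract name, port, and command strings are all assumptions for illustration.

        using System;
        using System.ServiceModel;

        // Service contract shared by the ASP.NET app and the Windows Service.
        [ServiceContract]
        public interface IComPortService
        {
            [OperationContract]
            string SendRequest(string portName, string command);
        }

        // Implementation living inside the Windows Service, next to the managed DLL.
        public class ComPortService : IComPortService
        {
            public string SendRequest(string portName, string command)
            {
                // Delegate to the existing COM/serial port DLL here and return its response.
                return "response";
            }
        }

        class HostingSketch
        {
            // Called from the Windows Service's OnStart; keep the returned host open
            // for the lifetime of the service and Close() it in OnStop.
            public static ServiceHost StartHost()
            {
                var host = new ServiceHost(typeof(ComPortService),
                    new Uri("net.tcp://localhost:9000/comport"));
                host.AddServiceEndpoint(typeof(IComPortService), new NetTcpBinding(), "");
                host.Open();
                return host;
            }

            // From the ASP.NET page's button click handler:
            public static string CallFromWebApp()
            {
                var factory = new ChannelFactory<IComPortService>(
                    new NetTcpBinding(), "net.tcp://localhost:9000/comport");
                IComPortService proxy = factory.CreateChannel();
                return proxy.SendRequest("COM1", "STATUS");
            }
        }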

    Read the article

  • Visual Studio 2008: Can't connect to known good TFS 2010 beta 2

    - by p.campbell
    A freshly installed TFS 2010 Beta 2 is at http://serverX:8080/tfs. A Windows 7 developer machine has VS 2008 Pro SP1 and the VS 2008 Team Explorer (no SP). The TFS 2008 Service Pack 1 didn't work for me: "None of the products that are addressed by this software update are installed on this computer." The developer machine is able to browse the TFS site at the above URL. The issue is around trying to add the TFS server into the Team Explorer window in Visual Studio 2008. The error reads: unable to connect to this Team Foundation Server. Possible reasons for failure include: the Team Foundation Server name, port number or protocol is incorrect; the Team Foundation Server is offline; the password is expired or incorrect. The TFS server is up and running properly, firewall ports are open, and it is accessible via the browser on the dev machine! Question: how can you connect from VS 2008 Pro to a TFS 2010 Beta 2 server? Resolution - here's how I solved this problem: installed VS 2008 Team Explorer as above; re-installed VS 2008 Service Pack 1; when adding a TFS server to Team Explorer, you MUST specify the URL as such: http://[tfsserver]:[port]/[vdir]/[projectCollection] - in my case it was http://serverX:8080/tfs/AppDev-TestProject. You cannot simply add the TFS server name and have VS look for all Project Collections on the server; TFS 2010 has a new URL format (by default) and VS 2008 doesn't know how to gather that list.

    Read the article

  • Spring Security 3.0 - Intercept-URL - All pages require authentication but one

    - by gav
    Hi all, I want any user to be able to submit their name to a volunteer form, but only administrators to be able to view any other URL. Unfortunately I don't seem to be able to get this correct. My resources.xml is as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans:beans xmlns="http://www.springframework.org/schema/security"
                     xmlns:beans="http://www.springframework.org/schema/beans"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://www.springframework.org/schema/beans
                                         http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                                         http://www.springframework.org/schema/security
                                         http://www.springframework.org/schema/security/spring-security-3.0.xsd">
            <http realm="BumBumTrain Personnel list requires you to login" auto-config="true" use-expressions="true">
                <http-basic/>
                <intercept-url pattern="/person/volunteer*" access=""/>
                <intercept-url pattern="/**" access="isAuthenticated()" />
            </http>
            <authentication-manager alias="authenticationManager">
                <authentication-provider>
                    <user-service>
                        <user name="admin" password="admin" authorities="ROLE_ADMIN"/>
                    </user-service>
                </authentication-provider>
            </authentication-manager>
        </beans:beans>

    Specifically, I am trying to achieve the access settings I described via:

        <intercept-url pattern="/person/volunteer*" access=""/>
        <intercept-url pattern="/**" access="isAuthenticated()" />

    Could someone please describe how to use intercept-url to achieve the outcome I've described? Thanks, Gav

    Read the article

  • Setup staging with multiple SVN

    - by Kapil Sharma
    We are a startup, setting up new environments for a product to be released soon. The planned server structure, with the planned release flow, is as shown in the image below. It ideally has a local server (or staging server, shown in green) in the local office, without a public IP address, and a production server (red) at Amazon EC2. Both the local and production servers have their own SVN copy. Management here wants to update the production server from the production SVN without giving developers (including freelancers/contract employees) access to it. So for developers, there is a local SVN on the local server. Another purpose of the local SVN is to keep a copy of the code on the local server, which is under our direct control. Although there are some technical concerns, like how code on the local server will be updated from the local SVN and committed to the production SVN, the bigger question is: is this structure correct? The major requirement remains: don't give developers access to the production SVN. What are the other possible options to achieve that? Another minor question, if suitable here: if the above structure is correct, is it possible for an SVN checkout to get updates from one SVN (the local SVN) but commit to the other (the production SVN)? If yes, how? Edit: an answer has been accepted, but for the bounty I'm still looking for an answer to: is this structure correct? What are its pros/cons? A technical solution is already provided by the accepted answer.

    Read the article

  • Unmount Mass Storage USB Device from the Command Line in Linux

    - by Casey
    I've searched high and low, and can't figure this one out. I have an older Olympus camera (2001 or so). When I plug in the USB connection, I get the following log output:

        $ dmesg | grep sd
        [20047.625076] sd 21:0:0:0: Attached scsi generic sg7 type 0
        [20047.627922] sd 21:0:0:0: [sdg] Attached SCSI removable disk

    Secondly, the drive is not mounted in the FS, but when I run gphoto2 I get the following error:

        $ gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    What command will unmount the drive? For example, in Nautilus I can right click and select "Safely Remove Device". After doing that, the /dev/sg7 and /dev/sdg devices are removed. Some things I've tried already are sdparm and sg3_utils, however I am unfamiliar with them, so it's possible I just didn't find the right command.

    Read the article

  • Does HttpListener work well on Mono?

    - by billpg
    Hi everyone. I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono. I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection, and shared memory between threads instead of a database. I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.) Could anyone point me towards some resources discussing this module please? Many thanks, Bill, billpg.com (A little background to my question for the interested.) Some time ago, I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal. So, I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can work out which thread each incoming connection is for and pass the reference to that thread. The alternative, for an ASP.NET driven service, would be to have the ASPX code pick up the state from a database and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
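
    For reference, the dispatch loop described above might look roughly like this. It is only a sketch: the URL prefix, port, and hand-off to a pool thread are placeholders, and it assumes Mono's managed HttpListener behaves like Microsoft's for basic request handling.

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class ListenerSketch
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://*:8080/chat/");
                listener.Start();

                while (true)
                {
                    HttpListenerContext context = listener.GetContext(); // blocks until the next request
                    ThreadPool.QueueUserWorkItem(_ => Handle(context));  // hand off to a worker thread
                }
            }

            static void Handle(HttpListenerContext context)
            {
                // The worker could instead route the request to the thread that owns the conversation.
                byte[] body = Encoding.UTF8.GetBytes("Okay, send your command now");
                context.Response.ContentLength64 = body.Length;
                context.Response.OutputStream.Write(body, 0, body.Length);
                context.Response.OutputStream.Close();
            }
        }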

    Read the article

  • TCP: Address already in use exception - possible causes for client port? NO PORT EXHAUSTION

    - by TomTom
    Hello, stupid problem: I get those errors from a client connecting to a server. Sadly, the setup is complicated, making debugging complex - and we are running out of options. The environment: a client/server system, both running on the same machine. The client is actually a service doing some database manipulation at specific times. The connection goes from C# through OleDb to an EasySoft JDBC driver to a custom written JDBC server that then hosts logic in C++. Yeah, complex - but the third party supplier decided to expose the extension mechanisms for their server through a JDBC interface. Not a lot can be done here ;) The symptom: at (ir)regular intervals we get an "Address already in use: connect" reported by the JDBC driver. They seem to come from one particular service we run. Now, I did read all the stuff about port exhaustion. This is why we have a little tool running now that counts ports and their states every minute. Last time this happened, we had an astonishing 370 ports in use, with the count rising to about 900 AFTER the error. We already patched the registry (it is a Windows machine) to allow more than the standard 5000 client ports, but even then, we are far, far from that limit to start with. Which is why I am asking here. Anyone an idea what ELSE could cause this? It is a Windows 2003 Server machine, 64 bit. The only other thing I can see that may cause it (but this functionality is supposedly disabled) is Symantec Endpoint Protection that is installed on the server - being capable of acting as a firewall, it could possibly intercept network traffic. I don't want to open a can of worms by pointing to Symantec prematurely (if pointing to Symantec can ever be seen as such). So, anyone an idea what else may be the cause? Thanks

    Read the article

  • Eclipse Maven web application - can not run on server anymore

    - by wuntee
    I have a Maven Eclipse webapp project that I was able to right click and 'Run on server', and it would deploy on Tomcat. I recently did a 'Maven - Update project configurations' and I now can NOT deploy and run the project as a webapp. Has anyone seen this before? The only output from Tomcat is as follows - it doesn't even look like it's trying to deploy the application.

        Apr 14, 2010 3:58:54 PM org.apache.catalina.core.AprLifecycleListener init
        INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: .:/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java
        Apr 14, 2010 3:58:54 PM org.apache.tomcat.util.digester.SetPropertiesRule begin
        WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.j2ee.server:taac-web' did not find a matching property.
        Apr 14, 2010 3:58:54 PM org.apache.coyote.http11.Http11Protocol init
        INFO: Initializing Coyote HTTP/1.1 on http-8080
        Apr 14, 2010 3:58:54 PM org.apache.catalina.startup.Catalina load
        INFO: Initialization processed in 402 ms
        Apr 14, 2010 3:58:54 PM org.apache.catalina.core.StandardService start
        INFO: Starting service Catalina
        Apr 14, 2010 3:58:54 PM org.apache.catalina.core.StandardEngine start
        INFO: Starting Servlet Engine: Apache Tomcat/6.0.24
        Apr 14, 2010 3:58:54 PM org.apache.coyote.http11.Http11Protocol start
        INFO: Starting Coyote HTTP/1.1 on http-8080
        Apr 14, 2010 3:58:54 PM org.apache.jk.common.ChannelSocket init
        INFO: JK: ajp13 listening on /0.0.0.0:8009
        Apr 14, 2010 3:58:54 PM org.apache.jk.server.JkMain start
        INFO: Jk running ID=0 time=0/14 config=null
        Apr 14, 2010 3:58:54 PM org.apache.catalina.startup.Catalina start
        INFO: Server startup in 247 ms

    Read the article

  • Can somebody help me install this jBPM based workflow management suite?

    - by Eternal Saint
    1. It's a book workflow interface software package available at SourceForge: http://bookworkflowint.sourceforge.net/ Any instructions on installing and configuring it would be great, especially on Windows, though I can try Linux-specific ones as well. I could not find any installation instructions. I posted this on Stack Overflow by mistake and was directed here. 2. Can you guys suggest any good scanning/digitization workflow software (document imaging) that I can adapt to my scanner software? In fact, even a simple one would do, maybe based on hot folders. I just want to be able to track the unique id/barcode of a scanned book and its status, so that it is not scanned again. The books or manuscripts could run to millions of pages. I thought of using some kind of generic bug tracking tool to just track a few fields; I don't know if it's the right choice. Thank you very much

    Read the article

  • BITS, TakeOwnership, and Kerberos / Windows Integrated Authentication

    - by Charlie Flowers
    We're using BITS to upload files from machines in our retail locations to our servers. BITS will stop transferring a file if the user who owns the BITS job logs off. Therefore, we're using a Windows Service running as LocalSystem to submit the jobs to BITS and be the job owner. This allows transfers to continue 24/7. However, it raises a question about authentication. We want the BITS server extensions in IIS to use Kerberos to authenticate the client machine. As far as I can tell, that leaves us with only 2 options, both of which are not ideal: Either we create an "ImageUploader" account and store its username/password in a config file that the Windows Service uses as credentials for the BITS job, or we ask the logged on user who creates the BITS job for his password, and then use his credentials for the BITS job. I guess the third option is not to use Kerberos, and maybe go with Basic Auth plus SSL. I'm sure I'm wrong and there's a better option. Is there? Thanks in advance.

    Read the article

  • Spring security and authentication provider

    - by Pascal
    I'm trying to implement Spring 3 Security in a project, but I cannot get rid of the following error:

        org.springframework.beans.factory.BeanCreationException: Error creating bean with name '_authenticationManager': Invocation of init method failed; nested exception is java.lang.IllegalArgumentException: No authentication providers were found in the application context

    This seems weird, as I did provide an authentication provider! I've added these lines to web.xml:

        <filter>
            <filter-name>springSecurityFilterChain</filter-name>
            <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>springSecurityFilterChain</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>

    And this is my applicationContext-security.xml:

        <http auto-config="false">
            <intercept-url pattern="/**" access="ROLE_USER" />
            <http-basic />
        </http>
        <authentication-manager alias="authenticationManager">
            <authentication-provider>
                <user-service>
                    <user name="jimi" password="jimispassword" authorities="ROLE_USER, ROLE_ADMIN"/>
                    <user name="bob" password="bobspassword" authorities="ROLE_USER"/>
                </user-service>
            </authentication-provider>
        </authentication-manager>

    Google couldn't help me much further, nor could the official documentation.

    Read the article

  • How can you handle cross-cutting concerns in JAX-WS without Spring or AOP? Handlers?

    - by LES2
    I do have something more specific in mind, however: each web service method needs to be wrapped with some boilerplate code (a cross-cutting concern - yes, Spring AOP would work great here, but it either doesn't work or is unapproved by the gov't architecture group). A simple service call is as follows:

        @WebMethod...
        public Foo performFoo(...) {
            Foo result = null;
            Object something = blah;
            try {
                soil(something);
                result = handlePerformFoo(...);
            } catch (Exception e) {
                throw translateException(e);
            } finally {
                wash(something);
            }
            return result;
        }

        protected abstract Foo handlePerformFoo(...);

    (I hope that's enough context.) Basically, I would like a hook (in the same thread - like a method invocation interceptor) with a before() and an after() that could soil(something) and wash(something) around the method call for every freaking WebMethod. Can't use Spring AOP because my web services are not Spring managed beans :( HELP!!!!! Give advice! Please don't let me copy-paste that boilerplate 1 billion times (as I've been instructed to do). Regards, LES

    Read the article

  • Pgpool-regclass gives error when installing

    - by user119720
    I have a problem when installing pgpool-regclass. When I run 'make', it shows me this kind of error:

        p,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I. -I. -I/usr/pgsql-9.2/include/server -I/usr/pgsql-9.2/include/internal -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include -c -o pgpool-regclass.o pgpool-regclass.c
        pgpool-regclass.c:99:37: error: macro "RangeVarGetRelid" requires 3 arguments, but only 2 given
        pgpool-regclass.c: In function 'pgpool_regclass':
        pgpool-regclass.c:99: error: 'RangeVarGetRelid' undeclared (first use in this function)
        pgpool-regclass.c:99: error: (Each undeclared identifier is reported only once
        pgpool-regclass.c:99: error: for each function it appears in.)
        make: *** [pgpool-regclass.o] Error 1

    Can anyone help me sort this out? I'd really appreciate it. Thanks.

    Read the article

  • How to keep track of a record inside a nested repeater?

    - by Amokrane
    Hi, I have the following implementation: a repeater (listing the Machines) and a nested repeater (listing the WindowsServices inside each Machine). For each Windows service I can perform an action using a button. However, to perform this action I need to know which Machine and which WindowsService are concerned. This is my code:

        protected void Page_Init(object sender, EventArgs e)
        {
            rptMachine.ItemDataBound += new RepeaterItemEventHandler(rptMachine_ItemDataBound);
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            // bind the Machine repeater
            rptMachine.DataSource = _monitoringService.Machines;
            rptMachine.DataBind();
        }

        protected void rptMachine_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
            {
                Repeater nestedRepeater = (Repeater) e.Item.FindControl("rptWindowsService");
                nestedRepeater.DataSource = ((IMachine) e.Item.DataItem).WindowsServices;
                nestedRepeater.DataBind();
                Button btnActionInner = null;
                // bind the action button situated inside the nested repeater
                foreach (RepeaterItem ri in nestedRepeater.Items)
                {
                    if ((Button) ri.FindControl("btnAction") != null)
                    {
                        btnActionInner = (Button) ri.FindControl("btnAction");
                        btnActionInner.CommandName = "ActionState";
                        btnActionInner.CommandArgument = strWindowsService;
                    }
                }
            }
        }

        protected void rptWindowsService_ItemCommand(object source, RepeaterCommandEventArgs e)
        {
            // do the specific action stop/run for the windows service
            if (e.CommandName == "ActionState")
            {
                if (((Button) e.CommandSource).Text.Equals("Stop"))
                {
                }
                else if (((Button) e.CommandSource).Text.Equals("Run"))
                {
                }
            }
        }

    So basically I need to know, inside rptWindowsService_ItemCommand, which (Machine, WindowsService) pair is concerned by the operation. What's the best way to do that? (A rough sketch of one idea follows below.) Don't hesitate to ask for more clarifications! Thanks
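
    One possible approach (a sketch only; the assumed IWindowsService type, the Name properties, and the '|' separator are illustrative, not from the question): encode both identifiers into the button's CommandArgument while the nested repeater binds, then split them apart in the command handler.

        protected void rptMachine_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
            {
                var machine = (IMachine)e.Item.DataItem;
                var nested = (Repeater)e.Item.FindControl("rptWindowsService");

                // Attach the handler before DataBind so it runs while DataItem is still available.
                nested.ItemDataBound += (s, args) =>
                {
                    if (args.Item.ItemType == ListItemType.Item || args.Item.ItemType == ListItemType.AlternatingItem)
                    {
                        var service = (IWindowsService)args.Item.DataItem; // assumed element type
                        var btn = (Button)args.Item.FindControl("btnAction");
                        btn.CommandName = "ActionState";
                        btn.CommandArgument = machine.Name + "|" + service.Name; // carry both identifiers
                    }
                };
                nested.DataSource = machine.WindowsServices;
                nested.DataBind();
            }
        }

        protected void rptWindowsService_ItemCommand(object source, RepeaterCommandEventArgs e)
        {
            if (e.CommandName == "ActionState")
            {
                string[] parts = ((string)e.CommandArgument).Split('|');
                string machineName = parts[0];
                string serviceName = parts[1];
                // look up the (machine, Windows service) pair here and run or stop the service
            }
        }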

    Read the article

  • locate doesn't find all the files it should

    - by neubert
    I type in locate gmp.h at the prompt and get the following:

        /usr/src/linux-headers-3.13.0-24/include/linux/igmp.h
        /usr/src/linux-headers-3.13.0-24/include/uapi/linux/igmp.h
        /usr/src/linux-headers-3.13.0-24-generic/include/linux/igmp.h

    But when I do ls /usr/include/x86-64-linux-gnu/ I see this:

        a.out.h  asm  bits  c++  fpu_control.h  gmp.h  gnu  ieee754.h  sys

    Why isn't locate locating /usr/include/x86-64-linux-gnu/gmp.h? Edit: ls -l /usr/include/x64-64-linux-gnu/gmp.h says this:

        ls: cannot access /usr/include/x64-64-linux-gnu/gmp.h: No such file or directory

    Why would ls /usr/include/x86-64-linux-gnu/ say it exists when ls -l /usr/include/x64-64-linux-gnu/gmp.h says it doesn't?

    Read the article

  • WCF done broke.

    - by SteveCav
    WCF is completely fouled up on my machine. I start a new sln in VS2008, add a project (WCF Service Lib), and build the project (builds ok). When I debug the project, it gets halfway through the progress bar and bombs out with the error below (real path changed to "my mex path" so codeproject doesn't think it's spam). I'm assuming it's file permissions, but I don't know where to start. Any ideas?

        Error: Cannot obtain Metadata from my mex path
        If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at somewhere unhelpful
        Exchange Error URI: my mex path
        Metadata contains a reference that cannot be resolved: 'my mex path'.
        Could not connect to my mex path. TCP error code 10061: No connection could be made because the target machine actively refused it 127.0.0.1:8731.
        Unable to connect to the remote server
        No connection could be made because the target machine actively refused it 127.0.0.1:8731
        HTTP GET Error URI: my mex path
        There was an error downloading 'my mex path'.
        Unable to connect to the remote server
        No connection could be made because the target machine actively refused it 127.0.0.1:8731
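
    As an aside, one quick way to tell whether the "10061 actively refused" part points at permissions or simply at nothing listening on that port is to probe it directly. This is only an illustrative check; the port number is taken from the error text above.

        using System;
        using System.Net.Sockets;

        class PortProbe
        {
            static void Main()
            {
                try
                {
                    using (var client = new TcpClient())
                    {
                        // Same host and port the WCF test client is trying to reach.
                        client.Connect("127.0.0.1", 8731);
                        Console.WriteLine("Something is listening on 8731 - look elsewhere for the fault.");
                    }
                }
                catch (SocketException ex)
                {
                    // A refusal here means no service host is running on that port,
                    // which is a different problem than file permissions.
                    Console.WriteLine("Connect failed: " + ex.Message);
                }
            }
        }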

    Read the article

  • RIA Services for transmitting non DB object-graph

    - by Mike Gates
    I have been getting into RIA Services because I thought it would simplify dealing with the services layer of the web applications I wish to build. I see lots of examples out there showing how to create DomainService classes which expose and consume entities that have some kind of relational database backing, and therefore have foreign-key relationships. However, I would like to know how to expose and consume normal object graphs: objects that contain references to each other but don't have foreign keys. For example, say I want a service operation called "GetFolderInformation(string pathToFolder)". I want this to return a custom object called "FolderInformation" structured with:

        - string Name
        - IEnumerable<FileInformation> Files

    I cannot get this to work because it seems that RIA wants to deal with entities that have foreign key relationships. Why? Why can't the serializer just see my object references and recreate that in the proxy on the other side? Data exists behind service layers that doesn't necessarily have foreign key relationships - like folder/file, for example. EDIT: I realized I hadn't asked my question! My question is: is there a way to do what I am trying to do?

    Read the article
