Search Results

Search found 13200 results on 528 pages for 'wcf testing'.

Page 130/528

  • export data from WCF Service to excel

    - by Dave
    I need to provide an export-to-Excel feature for a large amount of data returned from a WCF web service. The code that loads the DataList is as below:

        List<resultSet> r = myObject.ReturnResultSet(myWebRequestUrl); // call to WCF service
        myDataList.DataSource = r;
        myDataList.DataBind();

    I am using the Response object to do the job:

        Response.Clear();
        Response.Buffer = true;
        Response.ContentType = "application/vnd.ms-excel";
        Response.AddHeader("Content-Disposition", "attachment; filename=MyExcel.xls");
        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        HtmlTextWriter tw = new HtmlTextWriter(sw);
        myDataList.RenderControl(tw);
        Response.Write(sb.ToString());
        Response.End();

    The problem is that the WCF service times out for a large amount of data (about 5000 rows) and the result set is null. When I debug the service, I can see the window for saving/opening the Excel sheet appear before the service returns the result, so the Excel sheet is always empty. Please help me figure this out.
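
    One likely culprit is the default client binding configuration: BasicHttpBinding ships with a one-minute SendTimeout and a 64 KB MaxReceivedMessageSize, either of which can make a 5000-row call fail before the page renders. The sketch below raises both before issuing the call; the contract and type names (IReportService, ResultSet) are hypothetical stand-ins for the real service, so treat it as an outline rather than a drop-in fix.

        using System;
        using System.Collections.Generic;
        using System.ServiceModel;

        [ServiceContract]
        public interface IReportService            // hypothetical contract mirroring the real WCF service
        {
            [OperationContract]
            List<ResultSet> ReturnResultSet();
        }

        public class ResultSet { /* row fields omitted */ }

        public static class ExportClient
        {
            public static List<ResultSet> FetchAll(string serviceUrl)
            {
                var binding = new BasicHttpBinding();
                binding.SendTimeout = TimeSpan.FromMinutes(10);      // let the long-running call complete
                binding.ReceiveTimeout = TimeSpan.FromMinutes(10);
                binding.MaxReceivedMessageSize = 10 * 1024 * 1024;   // raise the 64 KB default for large result sets

                var factory = new ChannelFactory<IReportService>(binding, new EndpointAddress(serviceUrl));
                IReportService client = factory.CreateChannel();
                return client.ReturnResultSet();                     // bind and render only after this returns
            }
        }

    If raising the quotas is not enough, fetching the rows in smaller pages and appending them before rendering avoids holding one very long WCF request open at all.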

    Read the article

  • Calculate time of method execution and send to WCF service async

    - by Tim
    I need to implement time calculation for repository methods in my ASP.NET MVC project classes. The problem is that I need to send the timing data to a WCF service, which is time consuming, so I am thinking about threads to call the WCF service asynchronously. But I have very little experience with them. Do I need to create a new thread each time, or can I create a global thread? If so, how? I have something like this:

    StopWatch class:

        public class StopWatch {
            private DateTime _startTime;
            private DateTime _endTime;

            public void Start() {
                _startTime = DateTime.Now;
            }

            protected void StopTimerAndWriteStatistics() {
                _endTime = DateTime.Now;
                TimeSpan timeResult = _endTime - _startTime;
                //WCF proxy object
                var reporting = AppServerUtility.GetProxy<IReporting>();
                //Send data to server
                reporting.WriteStatistics(_startTime, _endTime, timeResult, "some information");
            }

            public void Stop() {
                //Here is the thread I have a question about
                var thread = new Thread(StopTimerAndWriteStatistics);
                thread.Start();
            }
        }

    Use of the StopWatch class in a repository:

        public class SomeRepository {
            public List<ObjectInfo> List() {
                StopWatch sw = new StopWatch();
                sw.Start();
                //performing long time operation
                sw.Stop();
            }
        }

    What am I doing wrong with threads?
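
    A common alternative to creating a raw Thread per measurement is to queue the reporting call on the ThreadPool, which maintains a small set of reusable worker threads for you. A minimal sketch follows; the report delegate stands in for the poster's AppServerUtility.GetProxy<IReporting>().WriteStatistics(...) call, so this is an outline rather than code from the original project.

        using System;
        using System.Threading;

        public class StopWatch
        {
            private readonly Action<DateTime, DateTime, TimeSpan, string> _report;
            private DateTime _startTime;

            // The report delegate stands in for the poster's WCF proxy call.
            public StopWatch(Action<DateTime, DateTime, TimeSpan, string> report)
            {
                _report = report;
            }

            public void Start()
            {
                _startTime = DateTime.Now;
            }

            public void Stop()
            {
                DateTime endTime = DateTime.Now;
                // Queue the slow reporting call on the ThreadPool instead of creating
                // a dedicated thread per measurement; Stop() returns immediately and
                // the pool thread is reused afterwards.
                ThreadPool.QueueUserWorkItem(delegate
                {
                    _report(_startTime, endTime, endTime - _startTime, "some information");
                });
            }
        }

    A thread (pooled or otherwise) ends when the method it runs returns, so nothing has to be aborted, and the "global thread" question goes away because the pool already owns the shared workers.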

    Read the article

  • silverlight security with WCF service, Forms Authentication and Custom Form Ticket

    - by user74825
    I have a Silverlight application with a login on the Silverlight page. It uses Forms Authentication with a WCF authentication service and a custom Membership Provider, something like: http://blogs.msdn.com/phaniraj/archive/2009/09/10/using-the-ado-net-data-services-silverlight-client-library-in-x-domain-and-out-of-browser-scenarios-ii-forms-authentication.aspx So the Silverlight login page calls the WCF authentication service, which validates against the DB and brings back the username and password. In each subsequent call (in Global.asax, in Authenticate_Request), I get HttpContext.User.IsAuthenticated and HttpContext.User.UserName. I have all this working properly. But I don't want just the username; I want more information about the user, like UserId, UserAddress, UserAssociateCustomer, etc. I tried a couple of different approaches. 1) Use HttpContext.Cache as a dictionary to save the item and fetch it based on HttpContext.User.Name; the problem is that the cache can be evicted when memory is being used heavily. 2) Tried a custom Forms Authentication ticket: when forms authentication writes a ticket, I intercept the CreatingCookie method and write additional info into the ticket so that I can read it in subsequent requests. I am having problems with this approach - I don't find the ticket in subsequent requests. I read that we should use Response.Redirect, but where do I redirect the user from a WCF call? How do you implement the above scenario? Any best practices? Any issues you see with going over HTTPS? All examples (or most of them) just explain simple forms authentication with an "I am logged in" message. Any suggestions?
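
    As a point of reference, extra fields can ride in the UserData portion of the forms ticket itself, so every subsequent request carries them without a cache lookup. The sketch below shows the general pattern with plain ASP.NET APIs; the field names are illustrative and it has not been wired into the CreatingCookie event of the Silverlight authentication service described above.

        using System;
        using System.Web;
        using System.Web.Security;

        public static class UserTicket
        {
            // Issuing side: pack the extra user fields into the ticket's UserData slot.
            public static void Issue(HttpResponse response, string userName, int userId, string customer)
            {
                string userData = userId + "|" + customer;                 // illustrative payload
                var ticket = new FormsAuthenticationTicket(
                    1, userName, DateTime.Now, DateTime.Now.AddMinutes(30),
                    false, userData);
                var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                                            FormsAuthentication.Encrypt(ticket));
                response.Cookies.Add(cookie);
            }

            // Reading side: a later request (including a WCF call running in
            // ASP.NET compatibility mode) can pull the payload back off the identity.
            public static string Read()
            {
                var identity = HttpContext.Current.User.Identity as FormsIdentity;
                return identity != null ? identity.Ticket.UserData : null;
            }
        }

    Keeping the payload small (an id rather than a full address) keeps the cookie size down; the remaining details can be looked up per request from the UserId.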

    Read the article

  • ASMX schema varies when using WCF Service

    - by Lijo
    Hi, I have a client (created using the ASMX "Add Web Reference") and the service is WCF. The signature of the methods varies between the client and the service: I get some unwanted parameters in the method. Note: I have used IsRequired = true for the DataMember.

    Service:

        [OperationContract]
        int GetInt();

    Client:

        proxy.GetInt(out requiredResult, out resultBool);

    Could you please help me make the schema non-varying for both the WCF client and the non-WCF client? Do we have any best practices for that?

        using System.ServiceModel;
        using System.Runtime.Serialization;

        namespace SimpleLibraryService {
            [ServiceContract(Namespace = "http://Lijo.Samples")]
            public interface IElementaryService {
                [OperationContract]
                int GetInt();
                [OperationContract]
                int SecondTestInt();
            }

            public class NameDecorator : IElementaryService {
                [DataMember(IsRequired=true)]
                int resultIntVal = 1;
                int firstVal = 1;

                public int GetInt() {
                    return firstVal;
                }

                public int SecondTestInt() {
                    return resultIntVal;
                }
            }
        }

    Binding = "basicHttpBinding"

        using NonWCFClient.WebServiceTEST;

        namespace NonWCFClient {
            class Program {
                static void Main(string[] args) {
                    NonWCFClient.WebServiceTEST.NameDecorator proxy = new NameDecorator();
                    int requiredResult = 0;
                    bool resultBool = false;
                    proxy.GetInt(out requiredResult, out resultBool);
                    Console.WriteLine("GetInt___" + requiredResult.ToString() + "__" + resultBool.ToString());
                    int secondResult = 0;
                    bool secondBool = false;
                    proxy.SecondTestInt(out secondResult, out secondBool);
                    Console.WriteLine("SecondTestInt___" + secondResult.ToString() + "__" + secondBool.ToString());
                    Console.ReadLine();
                }
            }
        }

    Please help. Thanks, Lijo
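
    For background, the extra out parameters are the "Specified" pattern that the older wsdl.exe tooling generates when the imported schema marks a value-type element as optional, which the metadata exported by the DataContractSerializer typically does; note also that the IsRequired flag here sits on a private field of NameDecorator, which is not part of any message, so it cannot affect the operation's return element. One commonly suggested workaround, sketched below, is to mark the contract with [XmlSerializerFormat] so the metadata is generated the way ASMX tooling expects; whether that removes the extra parameters for this exact proxy has not been verified here.

        using System.ServiceModel;

        namespace SimpleLibraryService
        {
            // Sketch only: same operations as above, but exported via the XmlSerializer
            // so that "Add Web Reference" clients see plain int return values.
            [ServiceContract(Namespace = "http://Lijo.Samples")]
            [XmlSerializerFormat]
            public interface IElementaryService
            {
                [OperationContract]
                int GetInt();

                [OperationContract]
                int SecondTestInt();
            }
        }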

    Read the article

  • Windows service threading call to WCF service

    - by Sam Brinsted
    Hi, I have a Windows service that reads data from a database and submits it to a WCF service; once that has finished, it stamps a processed date on the original record. The trouble I am currently having is with threading. The call to the WCF service is relatively long, and I want a number of concurrent calls to the service to improve the throughput of the Windows service. Currently I have a submitToService method on a new worker class, and upon reading a new row from the database I create a new thread that calls this method. This obviously isn't great, as the number of threads quickly shoots up and overburdens the WCF service. I have put a Thread.Sleep in the submit method and make sure to call System.Threading.Thread.CurrentThread.Abort(); after the submission has finished, yet I don't see the number of threads go down. How can I have just a fixed number of threads in the Windows service? I did think about using a thread pool but read somewhere that it wasn't a good choice for a Windows service. Thanks very much.
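
    One way to cap concurrency without a thread per row is a small set of long-lived worker threads draining a shared queue. The sketch below shows the pattern; Record, SubmitToService and StampProcessed are hypothetical stand-ins for the poster's own types and methods, so it illustrates the shape of a solution rather than the actual service code.

        using System;
        using System.Collections.Generic;
        using System.Threading;

        public class SubmissionQueue
        {
            private readonly Queue<Record> _queue = new Queue<Record>();
            private readonly object _lock = new object();

            public SubmissionQueue(int workerCount)
            {
                // A fixed number of long-lived workers caps the concurrent WCF calls.
                for (int i = 0; i < workerCount; i++)
                {
                    var worker = new Thread(Drain) { IsBackground = true };
                    worker.Start();
                }
            }

            public void Enqueue(Record record)
            {
                lock (_lock)
                {
                    _queue.Enqueue(record);
                    Monitor.Pulse(_lock);        // wake one waiting worker
                }
            }

            private void Drain()
            {
                while (true)
                {
                    Record record;
                    lock (_lock)
                    {
                        while (_queue.Count == 0) Monitor.Wait(_lock);
                        record = _queue.Dequeue();
                    }
                    SubmitToService(record);     // long-running WCF call
                    StampProcessed(record);      // write the processed date back
                }
            }

            private void SubmitToService(Record record) { /* WCF proxy call goes here */ }
            private void StampProcessed(Record record) { /* database update goes here */ }
        }

        public class Record { /* row data omitted */ }

    Calling Thread.CurrentThread.Abort() at the end of a method is unnecessary in any case: a thread exits on its own when the method it runs returns, so with this pattern the worker count stays constant by construction.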

    Read the article

  • Access problems with IIS 7 and a WCF service

    - by Steve
    I have a Silverlight app that calls a WCF service; the service calls some stored procedures in a SQL database using Visual Studio 2008's LINQ to SQL classes and returns the information to whatever called it. I have set up the compiled project (a website with the embedded app and the WCF service) on a remote IIS 7 server. I recompiled my local copy to use the WCF service that is now hosted on the IIS box instead of the one on the local dev server that Visual Studio provides. If I use the local version of the website (hosted on the dev server, and using the remote WCF service), it is able to make the calls it needs and display the information. However, if I use the website hosted by the remote IIS server, the app will not get the information it needs from the service. On the IIS server I have the application pool and the website running under my credentials, which have access to the database. Users connecting to the webpage use anonymous authentication. Any ideas as to why I can only access the service when running from the dev server and not through the remotely hosted webpage are appreciated. If anything needs clarification, please ask.

    Read the article

  • Testing IPP Printing with ipptool

    - by senloe
    I'm trying to send an IPP print job using ipptool. Using the sample .test files, I can send commands to the printer, but I am unable to successfully use the print-job.test file. Here's an example run:

        c:\...>ipptool -v ipp://name.local.:631/ipp/printer print-job.test
        ipptool: Filename "$filename" on line 21 cannot be read.
        ipptool: Filename mapped to "".

    It looks like it's failing to resolve the variable $filename within the test file, so I attempted to hardcode the value in the test file. In that case I get no error, but still no print. Does anybody have any experience using ipptool to test IPP printing?

    Read the article

  • Testing radius server from Mac OS X client

    - by Calvin Froedge
    I have a RADIUS server set up on a server running Ubuntu 11.04. I have configured my switch to use the authentication server's IP (192.168.1.2) for RADIUS / 802.1x authentication, and I created a connection to test connecting from my Mac OS X client. Here is my RADIUS client configuration:

        client 192.168.1.0/16 {
            secret = testing123
        }

    I can successfully authenticate using both 127.0.0.1 (localhost) and 192.168.1.2 (the IP of eth1), so I know RADIUS is getting those requests. I set up a connection to test from my MacBook, and my requests are timing out: http://screencast.com/t/tMhRLS3H7 Is there a better way to test the RADIUS connection from my MacBook? Thanks! UPDATE: I was able to test successfully from the Mac OS X client using RadPerf, which is available as a cross-platform command-line tool.

    Read the article

  • Stress test speed on a gateway?

    - by TheLQ
    I'm interested in stress testing my gateway server but am lost on how. Most of the stress-testing applications I've seen only measure how much load an app like Apache can handle, not this. Essentially I want to send as many packets as I can into this box from one computer on one card and see how many come out the other card to another computer, just to get an idea of what kind of load it can handle. I'm also interested in how Snort will perform. I'm not really sure how to do this, though. What tools could you recommend for this?

    Read the article

  • Ubuntu Hardy : Testing for environment variables in udev rules doesn't seem to work

    - by Fred
    I have an Ubuntu 8.04 LTS (server edition) machine, and I need to write a udev rule for it to act upon plugging in a USB thumb drive. However, I need a different action depending on the filesystem of the drive. I know I can use the ID_FS_TYPE environment variable to check for the filesystem on the drive. Following instructions found here, I tried a dummy udev rule like this:

        KERNEL!="sd[a-z][0-9]", GOTO="my_udev_rule_end"
        ACTION=="add", RUN+="/usr/bin/touch /tmp/test_udev_%E{ID_FS_TYPE}"
        ACTION=="add", ENV{ID_FS_TYPE}=="vfat", RUN+="/usr/bin/touch /tmp/test_udev_it_works"
        LABEL="my_udev_rule_end"

    However, when I plug in a thumb drive with a vfat filesystem (which should trigger both rules), I end up with a file called /tmp/test_udev_vfat, meaning the first rule was triggered successfully and that the ID_FS_TYPE environment variable is "vfat". But I don't get the other file, meaning that although I know the ID_FS_TYPE env variable is "vfat", I can't seem to check against it for a match. I tried googling, but pretty much every result seems to assume ENV{ID_FS_TYPE}=="vfat" works. I also tested the exact same udev rule on Ubuntu 10.04 LTS server and got the same result. I'm probably missing something very simple, but I just don't get it. Does anyone see what is wrong with my udev rule that would prevent it from matching on ENV{ID_FS_TYPE}? Thanks.

    Read the article

  • Suggest methods for testing changes to "pam.d/common-*" files

    - by Jamie
    How do I test changes to the pam.d configuration files? Do I need to restart the PAM service to test the changes? Should I go through every service listed in the /etc/pam.d/ directory? I'm about to make changes to the pam.d/common-* files in an effort to put an Ubuntu box into an Active Directory controlled network. I'm just learning what to do, so I'm preparing the configuration in a VM, which I plan to deploy on metal in the coming week. It is a clean install of Ubuntu 10.04 Beta 2 server, so other than the SSH daemon, all services are stock.

    Read the article

  • Disable Memory Modules In BIOS for Testing Purposes (Optimize Nehalem/Gulftown Memory Performance)

    - by Bob
    I recently acquired an HP Z800 with two Intel Xeon X5650 (Gulftown) 6-core processors. The person who configured the system chose 16GB (8 x 2GB DDR3-1333). I'm assuming this person was unaware that these processors have 3 memory channels and that, to optimize memory performance, one should install modules in multiples of three. Based on this, I have a question: by entering the BIOS, can I disable the bank on each processor that has the single memory module? If so, will this have any adverse effects, or behave differently than physically removing the modules? I ask because I would prefer to keep the extra memory in the system if it truly behaves as if the memory is not even there. I also see this as an opportunity to test 12GB vs. 16GB and see whether there is a noticeable difference. Note: according to http://www.delltechcenter.com/page/04-08-2009+-+Nehalem+and+Memory+Configurations?t=anon, the current configuration reduces the overall data transfer speed to 1066 and, in addition, memory bandwidth goes down by about 23%.

    Read the article

  • Multiple VM environment for developing/testing

    - by Hippo
    I was asked to create a setup for automated deployment, configuration, and installation/updates of websites. A bunch of small websites will be bundled on one server; if more websites come up, a new server will be created... I decided to use Chef for this task. All servers will run Ubuntu at the same version and configuration. Everything needs to be tested properly before starting live deployment, so my actual question is: what is the best virtualisation tool for running multiple (5 - 10) virtual machines on an Ubuntu laptop? Requirements: easy setup; fast cloning/snapshotting of VMs; all VMs easily connected to the internet and able to communicate with each other; open-source / free would be great. So far I have looked into VirtualBox (more for desktop virtualisation; cloning not possible, so every new machine needs to be installed) and VMware Player. Any suggestions? If there are any questions about what I am doing, please comment on this question and I will answer as soon as possible. This question is not about the actual setup, it is about a nice working environment.

    Read the article

  • Opening and Testing Ports on Modem > Router Connection

    - by JakeTheSnake
    Working off of my last question, I can access my server's FTP over the LAN but not over the internet. I'm using FileZilla on port 666. My router/modem configuration is as follows (similar to the other post):

        1) Modem connects to WAN
        2) WAN port on modem connects to LAN port on router
        3) Modem internal IP address is 192.168.0.254
        4) Router internal IP address is 192.168.0.1
        5) Modem has DHCP turned OFF
        6) Router has DHCP turned ON
        7) Router is running Tomato firmware and is set as 'Router' (not 'Gateway')
        8) The internet is working (just had to say that)

    I've set up port forwarding on both the modem and the router - both route port 666 (TCP) to 192.168.0.3, the IP address of the server running FileZilla. I don't know if that's hindering anything, but I've also tried it with just the modem and just the router... same result. I've also tried setting the server as the DMZ host (on both router and modem). Neither the router nor the modem has anything in its logs about denying inbound traffic on port 666, so my ability to troubleshoot stops there. I've tried contacting my ISP (Telus, on a mobility plan... it's a "Smart" Hub), but they weren't much help; they said they only block ports 25 and 80 and maybe a few others, but not most ports. I test whether the port is open by going to canyouseeme.org - I don't know whether that would produce a 'connection refused' result just because the FTP server requires a login... I'm not well versed in this matter. FWIW, sometimes I get a 'connection refused' error on canyouseeme.org, but mostly it's 'connection timed out'. I don't know what else to do at this point.

    Read the article

  • Is there a Kerberos testing tool?

    - by ixe013
    I often use openssl s_client to test and debug SSL connections (to LDAPS or HTTPS services); it lets me isolate the problem down to SSL, without anything else getting in the way. I know about klist, which allows me to purge the ticket cache. Is there a tool that would let me request a Kerberos ticket for a given server without ever sending it - just enough to see the whole Kerberos exchange in Wireshark, for example?

    Read the article

  • Testing Tomcat with Virtual Hosts

    - by Marty Pitt
    I'm trying to test Tomcat virtual hosts on my dev machine (Windows 7 / Tomcat 6). I'd like requests for localhost, test1.localhost and test2.localhost to all route through to the same Tomcat instance. I've edited my hosts file to look as follows:

        127.0.0.1 localhost
        ::1 localhost
        127.0.0.1 test1.localhost
        127.0.0.1 test2.localhost

    and modified the Engine in server.xml as follows:

        <Engine defaultHost="localhost" name="Catalina">
          <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase" />
          <Host appBase="webapps" autoDeploy="true" name="localhost" unpackWARs="true"
                xmlNamespaceAware="false" xmlValidation="false">
            <Alias>test1.localhost</Alias>
            <Alias>test2.localhost</Alias>
          </Host>
        </Engine>

    However, I'm getting a 404 when hitting test1.localhost:8080/myWebApp, although localhost:8080/myWebApp works fine. I can ping test1.localhost fine. What have I missed?

    Read the article

  • Testing domains on intranet/local network?

    - by meder
    This may sound like a very silly question, but how could I set up domains (e.g. www.foo.com) on my local network? I know that a domain is just a name registered with a name server; that name server has a zone, and in the zone there are several records, of which the A record is the most important in dictating where the lookup goes, i.e. which machine the name should point to. I basically want to be able to refer to my other computer/webserver as 'www.foo.com' and make my local sites accessible that way, and to mess with virtual host records in Apache and zone records for the domain locally, so I can explore, fiddle around and learn instead of relying on the domains I own at a public registrar that I can only reach through the internet. Once again, I apologize if this is a silly question or if I'm thinking about it completely backwards. Background information: my OS is Debian and I'm a novice at Linux. I've made very small edits to zone records on a BIND9 server, but that's the extent of my networking experience.

    Read the article

  • Testing for Active Directory Schema modification (not upgrade)

    - by Darktux
    I am trying to test a schema modification. That is, I need to add one of the attributes to the global catalog by modifying the schema, initially in a lab which is an exact replica. My questions are below:

        - What tests need to be done post schema change to determine if it's safe for production?
        - Apart from measuring changes in DIT size post change, is there a way to find the whole size increase of adding an attribute to the GC before the change?

    Please let me know if any extra questions or info are required.

    Read the article

  • ab benchmarking testing

    - by Tennyson
    I have a question about an ab benchmarking test. If I need to measure the time the server takes to serve IO.php over a persistent connection, does "persistent connection" mean I need to run "./ab -k ........." or "./ab -n 1000 -c 100 ........."? Thanks a lot.

    Read the article

  • Testing for disk write

    - by Montecristo
    I'm writing an application for storing lots of images (size < 5MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault I have decided on a directory structure like this:

        000/000/000000001.jpg
        ...
        236/519/236519107.jpg

    This structure will allow me to save up to 1'000'000'000 images, as I'll store a maximum of 1'000 images in each leaf. I've created it, and from a theoretical point of view it seems OK to me (though I have no experience with this), but I want to find out what will happen when the directories start filling up with files. A question about creating this structure: is it better to create it all in one go (takes approx 50 minutes on my PC), or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this OK? I thought I could act as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following:

        - How much time does it take for an image to be saved when there is little or no space used?
        - How does this change as the space starts to be used up?
        - How much time does it take for an image to be read from a random leaf? Does this change a lot when there are lots of files?
        - Does launching the command "sync; echo 3 | sudo tee /proc/sys/vm/drop_caches" make any sense at all? Is it the only thing I have to do to have a clean start if I want to run my tests again?

    Do you have any suggestions or corrections?
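
    For reference, the path scheme above can be derived directly from the numeric image id, so the test script and the application can agree on where each file lives. A small illustrative sketch (not the poster's actual code; the root path is a placeholder):

        using System;
        using System.IO;

        // Image n is stored under two three-digit directory levels derived from the id,
        // with at most 1000 files per leaf (e.g. id 236519107 -> 236/519/236519107.jpg).
        public static class ImagePath
        {
            public static string For(long imageId, string root)
            {
                long leaf = imageId / 1000;                         // drop the last three digits
                string level2 = (leaf % 1000).ToString("000");
                string level1 = (leaf / 1000).ToString("000");
                string fileName = imageId.ToString("000000000") + ".jpg";
                return Path.Combine(Path.Combine(Path.Combine(root, level1), level2), fileName);
            }
        }

    With this mapping, creating directories lazily is a one-line check before each save (Directory.CreateDirectory does nothing if the leaf already exists), which would sidestep the 50-minute up-front build.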

    Read the article

  • Testing Firewire 800 port on MacBook Pro

    - by dtlussier
    I am having trouble getting my MacBook Pro to mount an external FireWire hard drive. I am able to mount the disk with no problem on other Macs, just not on my machine. I haven't received any errors from my machine, and I don't see anything related to the FireWire port in the logs. Are there good diagnostic tools for this type of problem that come with the Mac, or other free alternatives?

    Read the article
