Search Results

Search found 17293 results on 692 pages for 'wcf workflow services'.

Page 145/692 | < Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >

  • IPC between multiple processes on multiple servers

    - by z8000
    Let's say you have 2 servers, each with 8 CPU cores. Each server runs 8 network services that each host an arbitrary number of long-lived TCP/IP client connections. Clients send messages to the services. The services do something based on the messages, and potentially notify N (one or more) of the clients of state changes. Sure, it sounds like a botnet but it isn't. Consider how IRC works with c2s and s2s connections and s2s message relaying. The servers are in the same data center and can communicate over a private VLAN at 1 GigE. Messages are < 1 KB in size. How would you coordinate which services on which host should receive and relay state-change messages to connected clients? There's an almost infinite number of ways to solve this problem efficiently: AMQP (RabbitMQ, ZeroMQ, etc.); the Spread Toolkit; N^2 connections between all services (bad); heck, even run IRC! I'm looking for a solution that exploits the fact that there's only a small, closed cluster; is easy to admin; scales well; and is "dumb" (no weird edge cases). What are your experiences? What do you recommend? Thanks!
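    Since AMQP is first on the list above, here is roughly what the fanout-exchange version of this looks like, sketched in C# against the classic RabbitMQ .NET client API (the 5.x-era IModel surface). The host name, exchange name and message format are illustrative only, not a prescribed design: every service publishes state changes to one fanout exchange and every service subscribes with its own private queue, so no service needs to know which host holds which client connection.

        using System;
        using System.Text;
        using RabbitMQ.Client;
        using RabbitMQ.Client.Events;

        // Minimal state-change bus: publish to a fanout exchange, subscribe with a private queue.
        class StateChangeBus : IDisposable
        {
            private readonly IConnection _connection;
            private readonly IModel _channel;
            private const string Exchange = "state-changes";   // illustrative name

            public StateChangeBus(string rabbitHost)
            {
                var factory = new ConnectionFactory { HostName = rabbitHost };
                _connection = factory.CreateConnection();
                _channel = _connection.CreateModel();
                _channel.ExchangeDeclare(Exchange, ExchangeType.Fanout);
            }

            public void Publish(string message)
            {
                _channel.BasicPublish(Exchange, routingKey: "", basicProperties: null,
                                      body: Encoding.UTF8.GetBytes(message));
            }

            public void Subscribe(Action<string> onMessage)
            {
                // Each service gets its own private queue bound to the fanout exchange.
                string queue = _channel.QueueDeclare().QueueName;
                _channel.QueueBind(queue, Exchange, routingKey: "");

                var consumer = new EventingBasicConsumer(_channel);
                consumer.Received += (sender, args) =>
                    onMessage(Encoding.UTF8.GetString(args.Body));
                _channel.BasicConsume(queue, autoAck: true, consumer: consumer);
            }

            public void Dispose()
            {
                _channel.Dispose();
                _connection.Dispose();
            }
        }

    Each service would publish the state change and filter received messages against its own connected clients; the broker is the only shared piece to administer.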

    Read the article

  • UIDs for service users in Mac OS X

    - by LaC
    Some third-party servers should be run under a special user for security reasons (eg, PostgreSQL is typically run by "postgres"). Of course, these service users should not show up in the Mac OS X login windows. I know how to create hidden users using dscl or dsimport, but I'm wondering what the best policy is for assigning UIDs (and matching GIDs). Apple's documentation states that UIDs from 0 to 100 are reserved (pg. 69), but OS X comes with several special users and groups outside that range. I used to use ids from 401 onwards for services, but I noticed that OS X 10.6 has started using that range for groups created by the Sharing pane in System Preferences. What is the recommended ID range to use for third-party services, then? Perhaps I should just use IDs in the 500 range, since all that is needed to hide a user in Snow Leopard is setting his password to "*"? Also, most of Apple's services have names starting with an underscore, with an alias sans underscore; eg, _sandbox and sandbox. Is there any special significance to this? Should I do the same for my services?

    Read the article

  • Google and Bing Map APIs Compared

    - by SGWellens
    At one of the local golf courses I frequent, there is an open grass field next to the course. It is about eight acres in size and mowed regularly. It is permissible to hit golf balls there—you bring and shag your own balls. My golf colleagues and I spend hours there practicing, chatting and in general just wasting time. One of the guys brings Ginger, the amazing, incredible wonder dog. Ginger is a Hungarian Vizsla (or Hungarian Pointer). She chases squirrels, begs for snacks and supervises us closely to make sure we don't misbehave. Anyway, I decided to make a dedicated web page to measure distances on the field in yards using online mapping services. I started with Google Maps and then did the same application with Bing Maps. It is a good way to become familiar with the APIs. Here are images of the final two maps, one for Google and one for Bing. To start with online mapping services, you need to visit the respective websites and get a developer key. I pared the code down to the minimum to make it easier to compare the APIs. Google Maps required this CSS (or it wouldn't work):

        <style type="text/css">
            html
            {
                height: 100%;
            }
            body
            {
                height: 100%;
                margin: 0;
                padding: 0;
            }
        </style>

    Here is how the map scripts are included. Google requires the developer key when loading the JavaScript; Bing requires it when the map object is created. Google:

        <script type="text/javascript"
            src="https://maps.googleapis.com/maps/api/js?key=XXXXXXX&libraries=geometry&sensor=false">
        </script>

    Bing:

        <script type="text/javascript"
            src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0">
        </script>

    Note: I use jQuery to manipulate the DOM elements, which may be overkill, but I may add more stuff to this application and I didn't want to have to add it later. Plus, I really like jQuery.
    Here is how the maps are created. Common code (the same for both Google and Bing Maps):

        <script type="text/javascript">
            var gTheMap;
            var gMarker1;
            var gMarker2;

            $(document).ready(DocLoaded);

            function DocLoaded()
            {
                // golf course coordinates
                var StartLat = 44.924254;
                var StartLng = -93.366859;

                // what element to display the map in
                var mapdiv = $("#map_div")[0];

    Google:

                // where on earth the map should display
                var StartPoint = new google.maps.LatLng(StartLat, StartLng);

                // create the map
                gTheMap = new google.maps.Map(mapdiv,
                    {
                        center: StartPoint,
                        zoom: 18,
                        mapTypeId: google.maps.MapTypeId.SATELLITE
                    });

                // place two markers
                marker1 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng + .0001));
                marker2 = PlaceMarker(new google.maps.LatLng(StartLat, StartLng - .0001));

                DragEnd(null);
            }

    Bing:

                // where on earth the map should display
                var StartPoint = new Microsoft.Maps.Location(StartLat, StartLng);

                // create the map
                gTheMap = new Microsoft.Maps.Map(mapdiv,
                    {
                        credentials: 'XXXXXXXXXXXXXXXXXXX',
                        center: StartPoint,
                        zoom: 18,
                        mapTypeId: Microsoft.Maps.MapTypeId.aerial
                    });

                // place two markers
                marker1 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng + .0001));
                marker2 = PlaceMarker(new Microsoft.Maps.Location(StartLat, StartLng - .0001));

                DragEnd(null);
            }

    Note: In the Bing documentation, mapTypeId: was missing from the list of options even though the sample code included it. Note: When creating the Bing map, use the developer key for the credentials property. I immediately place two markers/pins on the map, which is simpler than creating them on the fly with mouse clicks (as I first tried). The markers/pins are draggable, and I capture the DragEnd event to calculate and display the distance in yards and draw a line when the user finishes dragging.
    Here is the code to place a marker. Google:

        // ---- PlaceMarker ------------------------------------
        function PlaceMarker(location)
        {
            var marker = new google.maps.Marker(
                {
                    position: location,
                    map: gTheMap,
                    draggable: true
                });
            marker.addListener('dragend', DragEnd);
            return marker;
        }

    Bing:

        // ---- PlaceMarker ------------------------------------
        function PlaceMarker(location)
        {
            var marker = new Microsoft.Maps.Pushpin(location,
                {
                    draggable: true
                });
            Microsoft.Maps.Events.addHandler(marker, 'dragend', DragEnd);
            gTheMap.entities.push(marker);
            return marker;
        }

    Here is the code that runs when the user stops dragging a marker. Google:

        // ---- DragEnd -----------------------------------------
        var gLine = null;

        function DragEnd(Event)
        {
            var meters = google.maps.geometry.spherical.computeDistanceBetween(marker1.position, marker2.position);
            var yards = meters * 1.0936133;
            $("#message").text(yards.toFixed(1) + ' yards');

            // draw a line connecting the points
            var Endpoints = [marker1.position, marker2.position];

            if (gLine == null)
            {
                gLine = new google.maps.Polyline({
                    path: Endpoints,
                    strokeColor: "#FFFF00",
                    strokeOpacity: 1.0,
                    strokeWeight: 2,
                    map: gTheMap
                });
            }
            else
                gLine.setPath(Endpoints);
        }

    Bing:

        // ---- DragEnd -----------------------------------------
        var gLine = null;

        function DragEnd(Args)
        {
            var Distance = CalculateDistance(marker1._location, marker2._location);
            $("#message").text(Distance.toFixed(1) + ' yards');

            // draw a line connecting the points
            var Endpoints = [marker1._location, marker2._location];

            if (gLine == null)
            {
                gLine = new Microsoft.Maps.Polyline(Endpoints,
                    {
                        strokeColor: new Microsoft.Maps.Color(0xFF, 0xFF, 0xFF, 0),  // aRGB
                        strokeThickness: 2
                    });
                gTheMap.entities.push(gLine);
            }
            else
                gLine.setLocations(Endpoints);
        }

    Note: I couldn't find a function to calculate the distance between points in the Bing API, so I wrote my own (CalculateDistance). If you want to see the source for it, you can pick it off the web page. Note: I was able to verify the accuracy of the measurements by using the golf hole next to the field. I put a pin/marker on the center of the green and, by zooming in, I was able to see the 150-yard markers on the fairway and put the other pin/marker on one of them. Final notes: All in all, the APIs are very similar. Both made it easy to accomplish a lot with a minimum amount of code. In one aerial view there are leaves on the trees; in the other, the trees are bare. I don't know which service has the newer data. Here are links to working pages: Bing Map Demo, Google Map Demo. I hope someone finds this useful. Steve Wellens   CodeProject
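    The CalculateDistance helper itself isn't shown in the article (it lives on the demo page), but the usual great-circle (haversine) calculation behind such a helper looks roughly like this, sketched here in C# rather than the page's JavaScript, with the yards-per-meter factor borrowed from the Google version above. The class and method names are illustrative, not the article's actual source:

        using System;

        static class GeoDistance
        {
            const double EarthRadiusMeters = 6371000.0;  // mean Earth radius
            const double YardsPerMeter = 1.0936133;      // same factor used in the Google DragEnd above

            // Distance in yards between two latitude/longitude pairs (in degrees).
            public static double YardsBetween(double lat1, double lng1, double lat2, double lng2)
            {
                double dLat = ToRadians(lat2 - lat1);
                double dLng = ToRadians(lng2 - lng1);

                // Haversine formula for the great-circle distance
                double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                           Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2)) *
                           Math.Sin(dLng / 2) * Math.Sin(dLng / 2);
                double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));

                return EarthRadiusMeters * c * YardsPerMeter;
            }

            static double ToRadians(double degrees)
            {
                return degrees * Math.PI / 180.0;
            }
        }

    Over a few hundred yards the spherical approximation is far more accurate than dragging a pin by hand, which matches the author's check against the fairway yardage markers.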

    Read the article

  • export data from WCF Service to excel

    - by Dave
    I need to provide an export-to-Excel feature for a large amount of data returned from a WCF web service. The code to load the DataList is as below:

        List<resultSet> r = myObject.ReturnResultSet(myWebRequestUrl); // call to WCF service
        myDataList.DataSource = r;
        myDataList.DataBind();

    I am using the Response object to do the job:

        Response.Clear();
        Response.Buffer = true;
        Response.ContentType = "application/vnd.ms-excel";
        Response.AddHeader("Content-Disposition", "attachment; filename=MyExcel.xls");
        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter(sb);
        HtmlTextWriter tw = new HtmlTextWriter(sw);
        myDataList.RenderControl(tw);
        Response.Write(sb.ToString());
        Response.End();

    The problem is that the WCF service times out for a large amount of data (about 5000 rows) and the result set is null. When I debug the service, I can see the window for saving/opening the Excel sheet appear before the service returns the result, and hence the Excel sheet is always empty. Please help me figure this out.
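    One thing worth checking before anything else is the client-side binding configuration: basicHttpBinding defaults to a one-minute send timeout and a 64 KB maximum message size, so a 5000-row result can fail on the caller even when the service finishes. A minimal sketch of loosening those limits in code follows; IResultService and its endpoint address are placeholders standing in for whatever contract myObject wraps, not the real service.

        using System;
        using System.Collections.Generic;
        using System.ServiceModel;

        // Hypothetical contract standing in for the real service behind myObject.ReturnResultSet.
        [ServiceContract]
        public interface IResultService
        {
            [OperationContract]
            List<resultSet> ReturnResultSet(string requestUrl);
        }

        public static class ResultServiceClient
        {
            public static List<resultSet> FetchAll(string requestUrl)
            {
                var binding = new BasicHttpBinding
                {
                    // Defaults are a 1-minute send timeout and 64 KB messages; large exports need headroom.
                    SendTimeout = TimeSpan.FromMinutes(5),
                    ReceiveTimeout = TimeSpan.FromMinutes(5),
                    MaxReceivedMessageSize = 10 * 1024 * 1024
                };

                var address = new EndpointAddress("http://example.com/ResultService.svc"); // placeholder
                var factory = new ChannelFactory<IResultService>(binding, address);
                IResultService channel = factory.CreateChannel();
                try
                {
                    return channel.ReturnResultSet(requestUrl);
                }
                finally
                {
                    ((IClientChannel)channel).Abort(); // kept simple; the usual Close/Abort pattern applies
                }
            }
        }

    The matching limits also have to be raised on the service side; if the timeout persists after that, paging the result set across several smaller calls is the next step.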

    Read the article

  • Calculate time of method execution and send to WCF service async

    - by Tim
    I need to implement time calculation for repository methods in my ASP.NET MVC project classes. The problem is that I need to send the timing data to a WCF service, which is itself time-consuming. I am thinking about threads, which could help call the WCF service asynchronously, but I have very little experience with them. Do I need to create a new thread each time, or can I create a global thread? If so, how? I have something like this:

    StopWatch class:

        public class StopWatch
        {
            private DateTime _startTime;
            private DateTime _endTime;

            public void Start()
            {
                _startTime = DateTime.Now;
            }

            protected void StopTimerAndWriteStatistics()
            {
                _endTime = DateTime.Now;
                TimeSpan timeResult = _endTime - _startTime;

                // WCF proxy object
                var reporting = AppServerUtility.GetProxy<IReporting>();

                // Send data to server
                reporting.WriteStatistics(_startTime, _endTime, timeResult, "some information");
            }

            public void Stop()
            {
                // Here is the thread I have a question with
                var thread = new Thread(StopTimerAndWriteStatistics);
                thread.Start();
            }
        }

    Using the StopWatch class in a repository:

        public class SomeRepository
        {
            public List<ObjectInfo> List()
            {
                StopWatch sw = new StopWatch();
                sw.Start();

                // performing long-running operation

                sw.Stop();
            }
        }

    What am I doing wrong with threads?
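    For comparison, a minimal sketch of the same idea that avoids spinning up a dedicated Thread per measurement: System.Diagnostics.Stopwatch gives more precise timing than DateTime.Now, and ThreadPool.QueueUserWorkItem reuses pooled threads so the repository call is not blocked by the reporting call. AppServerUtility, IReporting and WriteStatistics are taken from the question; everything else is illustrative, and a pooled work item has no HttpContext of its own, so only pass it plain data.

        using System;
        using System.Diagnostics;
        using System.Threading;

        public class TimingScope : IDisposable
        {
            private readonly string _info;
            private readonly DateTime _startedAt = DateTime.Now;
            private readonly Stopwatch _watch = Stopwatch.StartNew();

            public TimingScope(string info)
            {
                _info = info;
            }

            public void Dispose()
            {
                _watch.Stop();
                DateTime endedAt = DateTime.Now;
                TimeSpan elapsed = _watch.Elapsed;

                // Hand the report off to a pooled thread so the caller is not delayed by WCF.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try
                    {
                        var reporting = AppServerUtility.GetProxy<IReporting>(); // from the question
                        reporting.WriteStatistics(_startedAt, endedAt, elapsed, _info);
                    }
                    catch (Exception)
                    {
                        // Swallow or log: a failed statistics call shouldn't break the repository.
                    }
                });
            }
        }

        // Usage in a repository method:
        // using (new TimingScope("SomeRepository.List"))
        // {
        //     // long-running operation
        // }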

    Read the article

  • silverlight security with WCF service, Forms Authentication and Custom Form Ticket

    - by user74825
    I have a Silverlight application with a login on the Silverlight page. It uses Forms Authentication with the WCF authentication service and a custom Membership Provider, something like: http://blogs.msdn.com/phaniraj/archive/2009/09/10/using-the-ado-net-data-services-silverlight-client-library-in-x-domain-and-out-of-browser-scenarios-ii-forms-authentication.aspx So, the Silverlight login page calls the WCF authentication service, which validates against the DB and brings back the username and password. Now, in each subsequent call (in Global.asax, in Authenticate_Request), I get HttpContext.User.IsAuthenticated and HttpContext.User.UserName. I have all this working properly. But I don't want just the username; I want more information about the user, like UserId, UserAddress, UserAssociateCustomer etc. I have tried a couple of different approaches. 1) Use HttpContext.Cache as a dictionary to save the item and get it off based on HttpContext.User's name; the problem is that the cache can be erased if memory is being used heavily. 2) Use a custom FormsAuth ticket: when Forms Authentication writes a ticket, I intercept the CreatingCookie method and write additional info into the FormsAuthentication ticket so that I can read it in subsequent requests. I am having problems with this approach; I don't find the ticket in subsequent requests. I read about how we should use Response.Redirect, but where do I redirect the user from a WCF call? How do you guys implement the above scenario? Any best practices? Any issues you see with going to HTTPS? All examples (or most of them) just explain simple Forms Authentication with an "I am logged in" message. Any suggestions?
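    For the second approach, the usual pattern is to pack the extra fields into the ticket's UserData when the cookie is created and unpack them on each request. A minimal sketch follows; the user fields and the pipe-delimited format are illustrative, and where exactly IssueCookie is called (e.g. the CreatingCookie handler) depends on how the authentication service is wired up.

        using System;
        using System.Web;
        using System.Web.Security;

        public static class CustomTicket
        {
            // Call this where the authentication cookie is issued.
            public static void IssueCookie(HttpResponse response, string userName, int userId, int customerId)
            {
                string userData = userId + "|" + customerId;   // illustrative format

                var ticket = new FormsAuthenticationTicket(
                    1,                          // version
                    userName,
                    DateTime.Now,
                    DateTime.Now.AddMinutes(30),
                    false,                      // not persistent
                    userData);

                string encrypted = FormsAuthentication.Encrypt(ticket);
                response.Cookies.Add(new HttpCookie(FormsAuthentication.FormsCookieName, encrypted));
            }

            // Call this from Application_AuthenticateRequest to get the extra data back.
            public static string ReadUserData(HttpRequest request)
            {
                HttpCookie cookie = request.Cookies[FormsAuthentication.FormsCookieName];
                if (cookie == null)
                {
                    return null;
                }

                FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(cookie.Value);
                return ticket == null ? null : ticket.UserData;
            }
        }

    Because the data rides in the encrypted cookie, it survives across requests without relying on the in-memory cache, which addresses the eviction concern in option 1.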

    Read the article

  • ASMX schema varies when using WCF Service

    - by Lijo
    Hi, I have a client (created using ASMX "Add Web Reference"). The service is WCF. The signature of the methods varies between the client and the service: I get some unwanted parameters on the client-side method. Note: I have used IsRequired = true for the DataMember.

    Service:

        [OperationContract]
        int GetInt();

    Client:

        proxy.GetInt(out requiredResult, out resultBool);

    Could you please help me to make the schema non-varying for both WCF clients and non-WCF clients? Do we have any best practices for that?

        using System.ServiceModel;
        using System.Runtime.Serialization;

        namespace SimpleLibraryService
        {
            [ServiceContract(Namespace = "http://Lijo.Samples")]
            public interface IElementaryService
            {
                [OperationContract]
                int GetInt();

                [OperationContract]
                int SecondTestInt();
            }

            public class NameDecorator : IElementaryService
            {
                [DataMember(IsRequired = true)]
                int resultIntVal = 1;
                int firstVal = 1;

                public int GetInt()
                {
                    return firstVal;
                }

                public int SecondTestInt()
                {
                    return resultIntVal;
                }
            }
        }

    Binding = "basicHttpBinding"

        using NonWCFClient.WebServiceTEST;

        namespace NonWCFClient
        {
            class Program
            {
                static void Main(string[] args)
                {
                    NonWCFClient.WebServiceTEST.NameDecorator proxy = new NameDecorator();

                    int requiredResult = 0;
                    bool resultBool = false;
                    proxy.GetInt(out requiredResult, out resultBool);
                    Console.WriteLine("GetInt___" + requiredResult.ToString() + "__" + resultBool.ToString());

                    int secondResult = 0;
                    bool secondBool = false;
                    proxy.SecondTestInt(out secondResult, out secondBool);
                    Console.WriteLine("SecondTestInt___" + secondResult.ToString() + "__" + secondBool.ToString());

                    Console.ReadLine();
                }
            }
        }

    Please help. Thanks, Lijo
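    For what it's worth, the extra out bool the old-style proxy generates is the classic "xxxSpecified" pattern that wsdl.exe emits when the WSDL marks a value-type element as optional (minOccurs="0"), which is how the DataContractSerializer tends to describe return values. One workaround that is often suggested (offered here as a sketch, not a guaranteed fix) is to switch the contract to the XmlSerializer, which describes value-type returns as required:

        using System.ServiceModel;

        [ServiceContract(Namespace = "http://Lijo.Samples")]
        [XmlSerializerFormat]   // ask WCF to describe the contract with XmlSerializer-style metadata
        public interface IElementaryService
        {
            [OperationContract]
            int GetInt();

            [OperationContract]
            int SecondTestInt();
        }

    After a change like this, the ASMX-style client's web reference needs to be updated so the regenerated proxy picks up the new schema.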

    Read the article

  • Access problems with IIS 7 and a WCF service

    - by Steve
    I have a Silverlight app that calls a WCF service; the service calls some stored procedures in a SQL database using Visual Studio 2008's LINQ to SQL classes and returns the information to whatever called it. I have set up the compiled project (a website with the embedded app and the WCF service) on a remote IIS 7 server. I recompiled my local copy to use the WCF service that is now hosted on the IIS box rather than the one on the local dev server that Visual Studio provides. If I use the local version of the website (hosted on the dev server, and using the remote WCF service), it is able to make the calls it needs and display the information. However, if I use the website that is being hosted by the remote IIS server, the app will not get the information it needs from the service. On the IIS server I have the application pool and the website running under my credentials, which have access to the database. Users connecting to the webpage use anonymous authentication. Any ideas as to why I can only access the service when running from the dev server and not through the remotely hosted webpage are appreciated. If anything needs clarification, please ask.

    Read the article

  • Failing report subscriptions

    - by DavidWimbush
    We had an interesting problem while I was on holiday. (Why doesn't this stuff ever happen when I'm there?) The sysadmin upgraded our Exchange server to Exchange 2010 and everyone's subscriptions stopped. My Subscriptions showed an error message saying that the email address of one of the recipients is invalid. When you create a subscription, Reporting puts your Windows user name into the To field, and most users have no permission to edit it. By default, Reporting leaves it up to Exchange to resolve that into an email address. This only works if Exchange is set up to translate aliases or 'short names' into email addresses. It turns out this leaves Exchange open to being used as a relay, so it is disabled out of the box. You now have three options: 1) Open up Exchange. That would be bad. 2) Give all Reporting users the ability to edit the To field in a subscription. a) They shouldn't have to; it should just work. b) They don't really have any business subscribing anyone but themselves. 3) Fix the report server to add the domain. This looks like the right choice and it works for us. See below for details. Pre-requisites: a single email domain name, and a clear relationship between the Windows user name and the email address (e.g. if the user name is joebloggs, then joebloggs@domainname needs to be the email address or an alias of it). Warning: saving changes to the rsreportserver.config file will restart the Report Server service, which effectively takes Reporting down for around 30 seconds. Time your action accordingly. Edit the file rsreportserver.config (most probably in the folder ..\Program Files[ (x86)]\Microsoft SQL Server\MSRS10_50[.instancename]\Reporting Services\ReportServer). There's a setting called DefaultHostName which is empty by default. Enter your email domain name without the leading '@'. Save the file. This domain name will be appended to any destination addresses that don't have a domain name of their own.

    Read the article

  • Wrapping REST based Web Service

    - by PaulPerry
    I am designing a system that will run online under Microsoft Windows Azure. One component is a REST-based web service which will really be a wrapper (using the proxy pattern) around the REST web services of a business partner that deal with BLOB storage (note: we are not using Azure storage). The majority of the functionality will be taking a request, calling our partner's web service, receiving the response and then passing that back to the client. There are a number of reasons for doing this, but one of the big ones is that we are going to support three clients: our desktop application (Win and Mac), mobile apps (iOS), and a web front end. Having a single API of our own, which we then map to our partner's, protects us if that partner ever changes. I want our service to support both JSON and XML for the data transfer format: JSON for the web and probably XML for the desktop and mobile clients (we already have an XML parser in those products). Our partner also supports both of these formats. I was planning on using ASP.NET MVC 4 with the Web API. As I design this, the thing that concerns me is the static type checking of C#. What if the partner adds or removes elements from the data? We can probably code defensively for that, but I still feel some concern. Also, we have to do a fair amount of tedious coding to set up our API and then turn around and call our partner's API. There probably is not much choice about it, though. But in the back of my mind I wonder if maybe a more dynamic language would be a better choice. I want to reach out and see if anybody has had to do this before, what technology solutions they used (I am not attached to this one; these days Azure can host other technologies), and if anybody who has done something like this can point out any issues that came up. Thanks! Researching the issue seems to only find solutions which focus on connecting to a SOAP web service over a proxy server, and not what I am referring to here. Note: Cross posted (by suggestion) from http://stackoverflow.com/questions/11906802/wrapping-rest-based-web-service Thank you!
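    As a concrete starting point for the Web API route, a pass-through controller can forward the call with HttpClient and relay the partner's status code and body untouched, which sidesteps most of the static-typing worry because nothing is deserialized on the way through. The controller name, route and partner URL below are placeholders, and the sketch assumes .NET 4.5 for async/await:

        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class BlobProxyController : ApiController
        {
            // One shared HttpClient for the app; the base address is a placeholder.
            private static readonly HttpClient Partner = new HttpClient
            {
                BaseAddress = new System.Uri("https://partner.example.com/")
            };

            // GET api/blobproxy/{id}
            public async Task<HttpResponseMessage> Get(string id)
            {
                // Forward the request and relay status code and content as-is,
                // so partner schema changes don't break this layer.
                HttpResponseMessage partnerResponse = await Partner.GetAsync("blobs/" + id);

                return new HttpResponseMessage(partnerResponse.StatusCode)
                {
                    Content = partnerResponse.Content
                };
            }
        }

    Endpoints the wrapper does shape itself can still lean on Web API's built-in JSON and XML formatters; for pure pass-through calls, the partner's Content-Type simply flows back to the client.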

    Read the article

  • DTLoggedExec 1.1.2008.4 Released!

    - by Davide Mauri
    Today I've released the latest version of my DTExec replacement tool, DTLoggedExec. The main changes are the following: a new strategy for version numbers, which now follow the pattern Major.Minor.TargetSQLServerVersion.Revision; added support for Auto Configurations; fixed a bug that reported an incorrect number of errors and warnings to Log Providers; fixed a bug that prevented correct casting of values when using the /Set and /Param options; errors and warnings are now counted more precisely; updated database and log import scripts to categorize logs by projects and sections (e.g. Project: MyBIProject; Sections: Staging, Datawarehouse); removed unused report stored procedures from the database; updated samples (12 samples are now available to show ALL DTLoggedExec features); and, from this version, only SSIS 2008 will be supported. http://dtloggedexec.codeplex.com/releases/view/62218 It is useful to say something more on a couple of specific points. From this version only SSIS 2008 will be supported: yes, Integration Services 2005 is not supported anymore; the latest version capable of running SSIS 2005 packages is 1.0.0.2. Updated database and log import scripts to categorize logs by projects and sections: when you import a log file, you can now assign it to a Project and to a Section of that project. In this way it's easier to gather statistical information for an entire project or a subsection of it. This also allows storing logged data of packages belonging to different projects in the same database. Updated samples: a complete set of samples that shows how to use all DTLoggedExec features is now shipped with the product. Enjoy! Added support for Auto Configurations: this point will have a post of its own, since it's quite important and is by far the biggest new feature introduced in this release. To explain it in a few words, you don't need to waste time with complex DTS configuration files or options, since a package will configure itself automatically. You just need to write a single statement as a parameter for DTLoggedExec. This feature can simplify deployment *a lot* :) In the next days I'll write the mentioned post on Auto Configurations and I'll update the documentation available on the DTLoggedExec website: http://dtloggedexec.davidemauri.it/MainPage.ashx

    Read the article

  • IIS: No Session being handed out, but only in production

    - by Wayne
    I've reproduced this in a simple project - details below. It's a WCF service in ASP.NET compatibility mode. What I'm seeing is that when run on the dev machine (Win7), a HTTP session id is available inside the service operation (HttpContext.Current.Session is non-null). But when deployed to the server (Win2k8R2), I get "No session". On both machines the app is configured to use the classic app pool, and the app pools themselves are configured identically as far as I can tell. The only differences I can discern between the two applications is that on the dev box, under "Handler Mappings", ISAPI-dll is disabled (not on the server), and on the server there's a spurious handler called "AboMapperCustom-7105160" (does not exist on the dev box). What should I be looking at next? Am I missing something head-slappingly simple? Service is this: [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class Service2 { [OperationContract] public string DoWork() { if (HttpContext.Current != null) { if (HttpContext.Current.Session != null) { return "SessionId: " + HttpContext.Current.Session.SessionID; } else { return "No Session"; } } else { return "No Context"; } } } Config is: <?xml version="1.0" encoding="UTF-8"?> <configuration> <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,log4net, Version=1.2.9.0, Culture=neutral, PublicKeyToken=b32731d11ce58905" /> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" /> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <log4net> <appender name="LogFile" type="log4net.Appender.RollingFileAppender"> <file value="C:\Temp\Test.log4net.log" /> <rollingStyle value="Once" /> <maxSizeRollBackups 
value="10" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d{ISO8601} [%5t] %-5p %c{1} %m%n" /> </layout> </appender> <root> <level value="DEBUG" /> <appender-ref ref="LogFile" /> </root> </log4net> <appSettings /> <connectionStrings /> <system.web> <compilation debug="true"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" /> </assemblies> </compilation> <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. --> <authentication mode="Windows" /> <!-- The <customErrors> section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace. --> <customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx" /> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false" /> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </httpModules> </system.web> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5" /> <providerOption name="WarnAsError" value="false" /> </compiler> </compilers> </system.codedom> <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules> <remove name="ScriptModule" /> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated" /> <remove name="ScriptHandlerFactory" /> <remove name="ScriptHandlerFactoryAppServices" /> <remove name="ScriptResource" /> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </handlers> </system.webServer> <runtime> <assemblyBinding appliesTo="v2.0.50727" xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0" /> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35" /> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0" /> </dependentAssembly> </assemblyBinding> </runtime> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_Service2" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647"> <security mode="TransportCredentialOnly"> <transport clientCredentialType="Windows" /> </security> </binding> </basicHttpBinding> </bindings> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" /> <behaviors> <serviceBehaviors> <behavior name="WebApplication3.Service2Behavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <services> <service behaviorConfiguration="WebApplication3.Service2Behavior" name="WebApplication3.Service2"> <endpoint address="" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_Service2" contract="WebApplication3.Service2" /> </service> </services> </system.serviceModel> <system.diagnostics> <sources> <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true"> <listeners> <add name="traceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\Temp\Test2.svclog" /> </listeners> </source> </sources> <trace autoflush="true" indentsize="4"> <listeners> <add name="traceListener2" type="System.Diagnostics.TextWriterTraceListener" initializeData="c:\Temp\Test.log" traceOutputOptions="DateTime" /> </listeners> </trace> </system.diagnostics> </configuration> Testing with a simple console app: class Program { static void Main(string[] args) { ServiceReference1.Service2Client client = new ServiceReference1.Service2Client(); Console.WriteLine(client.DoWork()); Console.ReadKey(); } }

    Read the article

  • Proper DateTime Format for a Web Service

    - by user48408
    I have a web service with a method which is called via an XMLHttpRequest object in my JavaScript. The method accepts a DateTime parameter which is subsequently converted to a string and run against the database to perform a calculation. I get the value from m_txtDateAdd and send off the XMLHttpRequest:

        <asp:textbox id=m_txtDateAdd tabIndex=4 runat="server" Width="96px" Text="<%# Today %>">
        </asp:textbox>

    which has a validator attached to it:

        <asp:CustomValidator id="m_DateAddValidator" runat="server" ErrorMessage="Please Enter a Valid Date"
            ControlToValidate="m_txtDateAdd">&#x25CF;</asp:CustomValidator>

    My web method looks something like this:

        [WebMethod]
        public decimal GetTotalCost(DateTime transactionDate)
        {
            String sqlDateString = transactionDate.Year + "/" + transactionDate.Month + "/" + transactionDate.Day;
            // ...
        }

    I use sqlDateString as part of the command text I send off to the database. It's a legacy application with inline SQL, so I don't have the freedom to set up a stored procedure and create and assign parameters in my code-behind. This works 90% of the time. The web service is called on the onchange event of m_txtDateAdd. Every now and again the response I get from the server is:

        System.ArgumentException: Cannot convert 25/06/2009 to System.DateTime.
        Parameter name: type ---> System.FormatException: String was not recognized as a valid DateTime.
           at System.DateTimeParse.Parse(String s, DateTimeFormatInfo dtfi, DateTimeStyles styles)
           at System.DateTime.Parse(String s, IFormatProvider provider)
           at System.Convert.ToDateTime(String value, IFormatProvider provider)
           at System.String.System.IConvertible.ToDateTime(IFormatProvider provider)
           at System.Convert.ChangeType(Object value, Type conversionType, IFormatProvider provider)
           at System.Web.Services.Protocols.ScalarFormatter.FromString(String value, Type type)
           --- End of inner exception stack trace ---
           at System.Web.Services.Protocols.ScalarFormatter.FromString(String value, Type type)
           at System.Web.Services.Protocols.ValueCollectionParameterReader.Read(NameValueCollection collection)
           at System.Web.Services.Protocols.HtmlFormParameterReader.Read(HttpRequest request)
           at System.Web.Services.Protocols.HttpServerProtocol.ReadParameters()
           at System.Web.Services.Protocols.WebServiceHandler.CoreProcessRequest()
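    The stack trace shows the scalar formatter falling back to DateTime.Parse, which is culture-sensitive, so "25/06/2009" only parses when the effective culture expects day/month order. A common way to take culture out of the picture is to accept the value as a string and parse it explicitly (or have the client send an unambiguous format such as yyyy-MM-dd). A sketch, with the accepted formats chosen as an assumption rather than taken from the project:

        using System;
        using System.Globalization;
        using System.Web.Services;

        public class DateService : WebService
        {
            // Formats this sketch chooses to accept; adjust to whatever the textbox really sends.
            private static readonly string[] AcceptedFormats = { "dd/MM/yyyy", "yyyy-MM-dd" };

            [WebMethod]
            public decimal GetTotalCost(string transactionDate)
            {
                DateTime parsed;
                if (!DateTime.TryParseExact(transactionDate, AcceptedFormats,
                                            CultureInfo.InvariantCulture,
                                            DateTimeStyles.None, out parsed))
                {
                    throw new ArgumentException("Unrecognized date: " + transactionDate);
                }

                // Build the date string for the legacy inline SQL in one unambiguous format.
                string sqlDateString = parsed.ToString("yyyy/MM/dd", CultureInfo.InvariantCulture);

                // ... run the calculation with sqlDateString as before ...
                return 0m; // placeholder
            }
        }

    The inline SQL stays as it is; only the parameter handling changes, and the failure mode becomes an explicit, diagnosable exception instead of an intermittent culture mismatch.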

    Read the article

  • Using ASP.NET session state with Silverlight (PRISM)

    - by Jon Andersen
    Hi, The scenario: I have a PRISM application developed in Silverlight (4), and I'm using an ASP.NET server-side application to host several web services (which, in turn, access WCF services, but that's not really important here). The Silverlight application must be able to call the web services cross-domain (meaning that the web services aren't necessarily on the same server hosting the Silverlight application). The Silverlight application consists of several modules, each accessing the ASP.NET web services. I do not have much experience with Silverlight and PRISM, but as far as I can see, this is not a very unusual scenario... The problem: My challenge is that when 2 different modules access the web services, I get 2 new sessions on the web server. I would have thought that since both modules live on the same HTML page (and thus also in the same browser session), they would get the same session on the web server...? I have tried to make the web service proxy client globally available in the container (using Unity) by registering an instance (using Container.RegisterInstance) and then getting this instance whenever a module needs to make a web service call (using Container.Resolve), but this doesn't seem to help. However, any calls made within the same module always get the same session on the server. Can anyone see what I'm missing here...? Thanks! Jon
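    One detail worth keeping in mind while debugging this: ASP.NET identifies a session purely by the ASP.NET_SessionId cookie, so two proxies only share a session if their requests carry the same cookie. The console sketch below (full .NET, not Silverlight, and with a placeholder URL for a page that prints Session.SessionID) just demonstrates that mechanism: two requests through one CookieContainer land in one session, while a request without it starts a new one.

        using System;
        using System.IO;
        using System.Net;

        class SessionCookieDemo
        {
            static void Main()
            {
                var jar = new CookieContainer();
                string url = "http://localhost/MyApp/SessionProbe.aspx"; // placeholder probe page

                Console.WriteLine("Shared jar #1: " + Fetch(url, jar));
                Console.WriteLine("Shared jar #2: " + Fetch(url, jar));   // same session id as #1
                Console.WriteLine("No jar:        " + Fetch(url, null));  // new session id
            }

            static string Fetch(string url, CookieContainer jar)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                if (jar != null)
                {
                    request.CookieContainer = jar;   // carries ASP.NET_SessionId between calls
                }

                using (var response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd().Trim();
                }
            }
        }

    So the thing to check is whether the calls made by the two modules actually present the same session cookie to the server, for example by inspecting the requests with Fiddler.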

    Read the article

  • Can't connect Gtalk from external services like meebo, or clients like iChat and Adium

    - by Juan Esteban Pemberthy
    I've been using Gtalk from the beginning, usually with the clients Adium and iChat, but suddenly my account stopped working with those clients and with other external services like meebo.com. The weird thing (for me) is that my username and password are fine, since I can log in without problems to any other service like Gmail, and from there I can use Talk. The official Windows client also works. Any clues on what's going on?

    Read the article

  • windows server 2008r2 with terminal services in multiple networks with different users

    - by phhe
    Is there a way to let Terminal Services provide some kind of 'abstraction' over the physical network interfaces of the server so they can be managed via GPO to grant or prohibit access for different users? The basic idea is to have 2 network interfaces (user and server/management) and not let users within terminal sessions access the server/management network. Or is this just impossible? What would be a better way to do this?

    Read the article

  • netcat/socat no response from other networking services

    - by jack
    Hi gurus. First, I thought that this was a VMware problem: http://serverfault.com/questions/141838/vmware-problems-networking-no-packet-response But now, after testing on several physical machines, I realized certain services don't return response data when using socat/netcat 1.1, which is supposed to be the latest version.

        root@test3:~# netcat 192.168.1.2 25
        220 762462a8c4d Microsoft ESMTP MAIL Service, Version: 6.0.2600.5949 ready at Fri, 12 May 2010 18:04:20 +0800
        EHLO SAYHELLO
        VRFY TEST@LOCALHOST
        EHLO localhost
        sdfsafsd
        ^
        root@test3:~#

    I've tested it on both Windows and Linux. I found no problem with telnet.

    Read the article

  • What does "Use mandatory profiles on the RD Session Host server" do?

    - by Scott Chamberlain
    The description for "Use mandatory profiles on the RD Session Host server" is a little ambiguous: This policy setting allows you to specify whether Remote Desktop Services uses a mandatory profile for all users connecting remotely to the RD Session Host server. If you enable this policy setting, Remote Desktop Services uses the path specified in the Set path for Remote Desktop Services Roaming User Profile policy setting as the root folder for the mandatory user profile. All users connecting remotely to the RD Session Host server use the same user profile. If you disable or do not configure this policy setting, mandatory user profiles are not used by users connecting remotely to the RD Session Host server. I have a situation where only some users need to use mandatory profiles for logging in to a Remote Desktop Session Host. If I have some users with ntuser.dat and some users ntuser.man in their roaming profile what will RD Session Host do To a user who has ntuser.man in their roaming profile and has the setting set to Disabled? To a user who has ntuser.dat in their roaming profile and has the setting set to Enabled?

    Read the article

  • Install several lighttpd services in same server

    - by Pedro
    Hi, I'm running a video streaming site using lighttpd, but at the moment the bandwidth usage of the server is at 50%; memory is OK, disks are OK... yet lighttpd is giving me timeouts. I think that if I had 2 or 3 lighttpd services running on the same machine I could solve this issue. How can I set this up? Regards, Pedro

    Read the article

  • Small Business Server services will not start, and remote desktop and UAC are broken

    - by Stephen Jennings
    Yesterday I began setting up a server with Windows Small Business Server 2008. All I am configuring it for right now is to be a domain controller and Exchange server. I completed the initial setup of SBS, then started looking through different connection options (allowing VPN versus using a TS Gateway). After I rebooted one time, I started having three not-obviously-related issues. First, I could no longer remote desktop into the computer. I ran TCPView and saw that it was no longer listening on port 3389. I checked everything in Terminal Services Configuration, but everything shows the computer ought to be allowing connections. Second, when I tried to use anything that required User Account Control elevation, the UAC dialog never popped up and the program that was waiting just froze. If I try to run "regedit" from the Run box, for example, it never appears. When I ran in safe mode, which does not use UAC, I was able to access everything. I didn't want to deal with it, so I turned off UAC and rebooted. Finally, in the Windows SBS Console, there are status indicators for Security, Updates, Backup, and Other Alerts. The first three get stuck saying "Querying". Looking in the computer alerts, I have events showing the following services stopped: Background Intelligent Transfer Service; KtmRm for Distributed Transaction Coordinator; Distributed Transaction Coordinator; Microsoft Exchange Information Store; Microsoft Exchange System Attendant; Microsoft Exchange Transport; Windows Remote Management; Update Services; Windows Update. I figured I must have configured something wrong accidentally, and I couldn't find anything using Google explaining what might be the case, so I just decided to format the hard drive and reinstall SBS from scratch. I did this and everything was working last night, but I just turned the machine back on and it is doing the same thing again! On my second install, I did not configure anything except the following (all from the SBS Console): connect to the Internet (set IP and router address); turn off customer feedback; set up the internet address; decline to use a Smart Host for email; add one standard user account. Since this happened again, and I was very careful the second time not to configure anything outside of the SBS Console, I feel like there's something else going on. Right now the machine is on an isolated network that does have internet access. My desktop is the only other machine plugged into this network. Any and all help is appreciated (before I tear my hair out!)

    Read the article
