Search Results

Search found 64909 results on 2597 pages for 'service application'.


  • Activity won't start a service

    - by Marko Cakic
    I'm trying to start an IntentService from the main activity of my application and it won't start. I have the service declared in the manifest file. Here's the code:

    MainActivity:

        public class Home extends Activity {
            private LinearLayout kontejner;
            IntentFilter intentFilter;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_home);
                kontejner = (LinearLayout) findViewById(R.id.kontejner);
                intentFilter = new IntentFilter();
                startService(new Intent(getBaseContext(), HomeService.class));
            }
        }

    Service:

        public class HomeService extends IntentService {
            public HomeService() {
                super("HomeService");
            }

            @Override
            protected void onHandleIntent(Intent intent) {
                Toast.makeText(getBaseContext(), "TEST", Toast.LENGTH_LONG).show();
            }
        }

    Manifest:

        <manifest xmlns:android="http://schemas.android.com/apk/res/android"
            package="com.example.salefinder"
            android:versionCode="1"
            android:versionName="1.0" >
            <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="15" />
            <application
                android:icon="@drawable/ic_launcher"
                android:label="@string/app_name"
                android:theme="@style/AppTheme" >
                <activity android:name=".Home" android:label="@string/title_activity_home" >
                    <intent-filter>
                        <action android:name="android.intent.action.MAIN" />
                        <category android:name="android.intent.category.LAUNCHER" />
                    </intent-filter>
                </activity>
                <service android:name=".HomeService" />
            </application>
            <uses-permission android:name="android.permission.INTERNET"/>
        </manifest>

    How can I make it work?
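
    A side note grounded in how IntentService works (the post doesn't confirm this is the cause): onHandleIntent() runs on a worker thread, so a Toast shown there may never appear even though the service did start. A minimal sketch that posts the Toast to the main thread instead, assuming android.os.Handler and android.os.Looper imports:

        @Override
        protected void onHandleIntent(Intent intent) {
            // onHandleIntent runs on a background thread; post the Toast to the
            // main (UI) thread, which owns a Looper, so it can actually display.
            new Handler(Looper.getMainLooper()).post(new Runnable() {
                @Override
                public void run() {
                    Toast.makeText(getApplicationContext(), "TEST", Toast.LENGTH_LONG).show();
                }
            });
        }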

  • Is there any online web application for daily activity logs [closed]

    - by user25
    I am looking for a daily calendar-like app, but one that has a text editor attached to every day. When I open the page for a day, it should open that day's editor so I can enter some notes on what I have accomplished and save them. The next day I'd open the application and go to a new page, but still have all of the previous days' entries available to look back at what I have done. Something like Google Calendar, but with an associated text editor where I can paste notes and keep daily logs.

  • Cam being used by another application

    - by w35t
    No camera works on my laptop; I get the error "Camera being used by another application". I don't know which program is using it. I tried software like SplitCam and ManyCam, and I reinstalled the newest K-Lite codec pack, but I still get this error. I get the same error when I connect my Canon video camera to the laptop using a FireWire cable. OS: Windows 7 x64. Camera: Creative (I can't find which model). Laptop: Dell Inspiron 1520. Error when I try to open SplitCam: http://f.imagehost.org/0759/Clipboard03.png SplitCam error when repairing it: http://i.imagehost.org/0607/split.png ManyCam just stops responding. Sony Vegas error: http://f.imagehost.org/0227/Clipboard05.png Thanks for any help.

  • Web application and remote storage of files

    - by Matt
    I have a web application that stores lots and lots of files on the server, i.e. users upload data to it. The files are stored below a particular storage path. The web host will be an IBM xSeries 345. However, its disks are really expensive, so we would like to put the files onto a less expensive server. Now here is the question: should I use an NFS mount on the IBM server of a path on the storage server, or should I write some scripts to upload the files to the storage server instead? Both the storage server and the web host are on the same network; only the web server is visible to the world. Is NFS performance suitable for an expected low to moderately loaded server?
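
    For reference, a minimal sketch of the NFS-mount option (hostnames and paths are made up for illustration): the web server mounts an export from the storage box over the application's usual storage path, so the application code doesn't change at all.

        # /etc/fstab on the web server -- mount the storage server's export
        # over the application's storage path (names are hypothetical)
        storage-server:/export/uploads  /var/www/app/storage  nfs  rw,hard,intr  0 0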

  • PHP + IIS Application Pool Identity Windows\Temp permissions

    - by Matt Boothman
    I am currently running PHP 5.3 on IIS 7.5 on a Windows Server 2008 R2 Web Edition server, and would like to know what problems or security vulnerabilities, if any, I may introduce into the system by assigning Read, Write, Modify & Execute permissions to either the IUSR account or the IIS_IUSRS group for %SystemRoot%\Temp. Should I be altering permissions on that folder at all (as Windows reminds me I probably shouldn't when I attempt to change them)? Should I create a temp folder somewhere else and set permissions accordingly? The problem is that when I set Anonymous Authentication (I'm guessing the more secure option?) to use the App Pool identity, PHP gets stuck in a loop when starting sessions, because it's unable to create session files in the %SystemRoot%\Temp folder due to lack of permissions for the application pool user or IIS_IUSRS group. Another problem is that ImageMagick (PHP extension) is being denied access to %SystemRoot%\Temp to write temporary files, so it throws exceptions. I have tried searching Google but have not found anything that touches upon this subject specifically. Any help greatly appreciated.
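
    One common pattern (a sketch, with hypothetical paths and pool name) is to leave %SystemRoot%\Temp alone, create a dedicated directory, and grant only the application pool's virtual identity rights on it:

        :: create a dedicated directory and grant the app pool identity modify rights
        mkdir C:\php\sessions
        icacls C:\php\sessions /grant "IIS AppPool\MyAppPool:(OI)(CI)M"

    Then point PHP at that directory in php.ini:

        ; php.ini -- direct sessions and uploads to the dedicated directory
        session.save_path = "C:\php\sessions"
        upload_tmp_dir    = "C:\php\sessions"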

  • Cannot install Oracle Application server

    - by ziftech
    After a successful installation of the Database 10g server, I cannot run the installer for the application server: it closes unexpectedly. How can the problem be investigated? The log file contains:

        Checking installer requirements...
        Checking operating system version: must be 5.0, 5.1 or 5.2. Actual 5.2    Passed
        Checking monitor: must be configured to display at least 256 colors. Actual 4294967296    Passed
        Checking swap space: must be greater than 1535 MB. Actual 4092 MB    Passed
        Checking Temp space: must be greater than 150 MB. Actual 5083 MB    Passed
        All installer requirements met.

  • application that could track amount of downloads

    - by user23950
    My ISP limits me to downloading about 25 GB a month. After exceeding the limit, my speed goes down by half for another month, and it's a real pain: I'm really addicted to downloading stuff from the internet. My questions are: Is there an application that can track the amount/size of my downloads in a month? Is there a trick I could use to fool my ISP's metering? When they say a 25 GB monthly limit, does it include web pages, manga streams, and video & audio streams, or just direct downloads and P2P?

  • Warming up an IIS Application Pool automatically?

    - by Michael Stum
    So IIS likes to shut down app pools that aren't in use. While this makes sense, I would like to have certain app pools continuously running. I don't want to just disable the automatic app pool recycling, as some of the settings (e.g., maximum memory limit) are good to have. I know that Microsoft announced the IIS Application Warmup module as an IIS 7.5 feature, only to do a bait-and-switch and pull it again so they could put it in IIS 8 instead. So I wonder: does something exist to do this on IIS 7.5/Windows 2008 R2?
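
    As a point of reference (a sketch, not the missing warmup module): IIS 7.5 lets you stop just the idle shutdown for a single pool while leaving the other recycling settings intact, e.g. via appcmd (pool name is hypothetical):

        %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.idleTimeout:00:00:00

    Pairing that with a scheduled task that periodically requests a page is a common way to keep the pool warm across the remaining recycle events.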

  • Oracle application - files missing in the Mount point in UNix server

    - by arun_V
    My Oracle application test instance is down. When I browse through the Unix server, I can't find any files in the mount points /u01, /u06 or /u10. When I run the bdf command it shows the following:

        $ bdf
        Filesystem            kbytes    used    avail %used Mounted on
        /dev/vg00/lvol3       204800   35571   158662   18% /
        /dev/vg00/lvol1       299157   38506   230735   14% /stand
        /dev/vg00/lvol8      1392640 1261068   123620   91% /var
        /dev/vg00/lvol7      1327104  825170   470631   64% /usr
        /dev/vg00/lvol4       716800  385891   310746   55% /tmp
        /dev/vg00/lvol6       872448  814943    53936   94% /opt
        /dev/vg00/lvolssh      32768   13243    18306   42% /opt/openssh
        /dev/vg00/lvol5       204800  187397    16334   92% /home
        /dev/vg00/lvolback    512000  472879    36704   93% /backup
        dg-ora04:/dgora03_u10 204800  167088    35416   83% /u10
        dg-ora04:/dgora03_u06 204800  167088    35416   83% /u06
        dg-ora04:/dgora03_u01 204800  167088    35416   83% /u01

    Why can't I see any files inside the mount points?

  • How to enable systemd instantiated service with puppet?

    - by Richard Pena
    I've got the following puppet service:

        service { "getty@ttyUSB0.service":
            provider => systemd,
            ensure   => running,
            enable   => true,
        }

    When I try to apply this configuration on my client, it throws the following error:

        err: /Stage[main]//Node[puppetclient]/Service[getty@ttyUSB0.service]/enable: change from false to true failed: Could not enable getty@ttyUSB0.service:

    The service is running fine, and I can make sure it's started on system boot by adding a symlink to getty.target.wants:

        ln -s /lib/systemd/system/getty@.service /etc/systemd/system/getty.target.wants/getty@ttyUSB0.service

    Of course, I could go ahead and remove "enable => true" from the service definition and include the symlink manually in the puppet configuration, but shouldn't puppet take care of this? Am I doing something terribly wrong?
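
    For what it's worth, the manual-symlink workaround mentioned above can itself be expressed in puppet (a sketch of the fallback, not a fix for the provider error):

        file { '/etc/systemd/system/getty.target.wants/getty@ttyUSB0.service':
            ensure => link,
            target => '/lib/systemd/system/getty@.service',
        }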

  • wpf cancel backgroundworker on application exits

    - by toni
    Hi! In my application I have a main window, and into a frame in it I load a page. This page performs a long-running task when the user presses a button. My problem is that when the long task is running and the user presses the close button of the main window, the application doesn't seem to finish: I am debugging it in VS2008 and I can see the stop button still highlighted. If I want to finish I have to press the stop button; the application doesn't stop debugging automatically on application exit. I thought .NET stopped BackgroundWorkers automatically on application exit, but after seeing this behaviour I am not sure. I have tried to force-cancel the background worker in the page's Unloaded event with something like this:

        private void Page_Unloaded(object sender, RoutedEventArgs e)
        {
            // Is the Background Worker doing some work?
            if (My_BgWorker != null && My_BgWorker.IsBusy)
            {
                // If it supports cancellation, cancel it
                if (My_BgWorker.WorkerSupportsCancellation)
                {
                    // Tell the Background Worker to stop working.
                    My_BgWorker.CancelAsync();
                }
            }
        }

    but with no success. A few minutes after calling CancelAsync(), I can see the BackgroundWorker finish and raise RunWorkerCompleted (checking the e.Cancelled argument in the event confirms the task was cancelled), but after this event executes the application continues without exiting and I have no idea what it is doing. I set WorkerSupportsCancellation to true to support cancellation from the beginning. I would appreciate all answers. Thanks.
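
    One detail worth checking (a sketch, with a hypothetical loop standing in for the real work): CancelAsync() only sets a flag, so the DoWork handler has to poll CancellationPending and return early; otherwise the worker runs to completion regardless of the cancel request.

        private void My_BgWorker_DoWork(object sender, DoWorkEventArgs e)
        {
            var worker = (BackgroundWorker)sender;
            const int workItems = 100;            // hypothetical size of the long task
            for (int i = 0; i < workItems; i++)
            {
                if (worker.CancellationPending)   // set by CancelAsync()
                {
                    e.Cancel = true;              // surfaces as e.Cancelled in RunWorkerCompleted
                    return;
                }
                // ... one slice of the long-running work goes here ...
            }
        }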

  • Deploying Application with mvc in shared hosting server

    - by ankita-13-3
    We have created an MVC web application in ASP.NET 3.5. It runs absolutely fine locally, but when we deploy it on GoDaddy's shared hosting, it shows an error related to the trust level. We contacted GoDaddy support and they say they only support medium-trust applications. So how do I convert my application to run under medium trust? Do I need to make changes to the web.config file? It shows the following error:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Request failed.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Stack Trace:
        [SecurityException: Request failed.]
        System.Security.CodeAccessSecurityEngine.ThrowSecurityException(Assembly asm, PermissionSet granted, PermissionSet refused, RuntimeMethodHandle rmh, SecurityAction action, Object demand, IPermission permThatFailed) +150
        System.Security.CodeAccessSecurityEngine.ThrowSecurityException(Object assemblyOrString, PermissionSet granted, PermissionSet refused, RuntimeMethodHandle rmh, SecurityAction action, Object demand, IPermission permThatFailed) +100
        System.Security.CodeAccessSecurityEngine.CheckSetHelper(PermissionSet grants, PermissionSet refused, PermissionSet demands, RuntimeMethodHandle rmh, Object assemblyOrString, SecurityAction action, Boolean throwException) +284
        System.Security.PermissionSetTriple.CheckSetDemand(PermissionSet demandSet, PermissionSet& alteredDemandset, RuntimeMethodHandle rmh) +69
        System.Security.PermissionListSet.CheckSetDemand(PermissionSet pset, RuntimeMethodHandle rmh) +150
        System.Security.PermissionListSet.DemandFlagsOrGrantSet(Int32 flags, PermissionSet grantSet) +30
        System.Threading.CompressedStack.DemandFlagsOrGrantSet(Int32 flags, PermissionSet grantSet) +40
        System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32 permission, PermissionSet targetGrant, CompressedStack securityContext) +123
        System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32 permission, PermissionSet targetGrant, Resolver accessContext) +41

    Look forward to your help. Regards, Ankita
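
    To reproduce the host's constraint locally, you can force medium trust in web.config while testing (a standard ASP.NET setting; the real fix is then to remove whatever code demands more than medium trust):

        <system.web>
          <!-- run the site under medium trust, as the shared host does -->
          <trust level="Medium" />
        </system.web>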

  • retrieve events where uid is the creator and application id is the admin - Facebook API

    - by Anup Parekh
    I would like to know if there is a Facebook API call to retrieve the events (eids) for all the events a user has created using my Facebook Connect application. The events are created using the following REST API call:

        https://api.facebook.com/method/events.create?event_info=' . $e_i . '&access_token=' . $cookie['access_token']

    $e_i is the event-info array, where the 'host' value is set to 'Me' as follows:

        $event_info['host'] = 'Me';

    On Facebook events, under the "Created by:" section, it lists "My user name, Application Name". I presume this is because I am the creator and the application is the admin, as stated in the REST API documentation (http://developers.facebook.com/docs/reference/rest/events.create/). Unfortunately I cannot seem to find out how (with either the REST or the Graph API) to return a list of events where I am the creator and the application is the admin, as in the above scenario. If this is possible I would really appreciate some assistance with how it is done. So far I have tried:

    REST API: events.get using uid=application_id. This only returns events created by the application, not those including the user who created them.

    Graph API: https://graph.facebook.com/me/events?fields=owner&access_token=... This returns all the events for 'me', but not those where the application is also the admin.

    It seems strange that there's no reference to the linkage between the event creator and the event admin in the API, when Facebook is able to pull both and display them on the event details.

  • Beware when using .NET's named pipes in a windows forms application

    - by FransBouma
    Yesterday a user of our .NET ORM Profiler tool reported that he couldn't get the snapshot-recording-from-code feature working in a Windows Forms application. Snapshot recording in code means you start recording profile data from within the profiled application, and after you're done you save the snapshot as a file which you can open in the profiler UI. When using a console application it worked, but when a Windows Forms application was used, the snapshot was always empty: nothing was recorded. Obviously, I wondered why that was, and debugged a little. Here's an example piece of code to record the snapshot. This piece of code works OK in a console application, but results in an empty snapshot in a Windows Forms application:

        var snapshot = new Snapshot();
        snapshot.Record();
        using(var ctx = new ORMProfilerTestDataContext())
        {
            var customers = ctx.Customers.Where(c => c.Country == "USA").ToList();
        }
        InterceptorCore.Flush();
        snapshot.Stop();
        string error = string.Empty;
        if(!snapshot.IsEmpty)
        {
            snapshot.SaveToFile(@"c:\temp\generatortest\test2\blaat.opsnapshot", out error);
        }
        if(!string.IsNullOrEmpty(error))
        {
            Console.WriteLine("Save error: {0}", error);
        }

    (The Console.WriteLine doesn't do anything in a Windows Forms application, but you get the idea.)

    ORM Profiler uses named pipes: the interceptor (referenced and initialized in your application, the application to profile) sends data over the named pipe to a listener, which, when receiving a piece of data, begins reading it asynchronously and, once it's properly read, signals observers that new data has arrived so they can store it in a repository. In this case, the snapshot is the observer and stores the data in its own repository.

    The reason the above code doesn't work in Windows Forms is that Windows Forms is a wrapper around Win32 and its WM_*-message-based system. Named pipes in .NET are wrappers around Windows named pipes, which also work with WM_* messages. Even though we use BeginRead() on the named pipe (which spawns a thread to read the data from the named pipe), nothing is received by the named pipe in the Windows Forms application, because the WM_* messages in its message queue aren't handled until after the method is over: the message pump of a Windows Forms application is run by the application's only thread, so it handles WM_* messages when the application idles.

    The fix is easy though: add Application.DoEvents(); right before snapshot.Stop(). Application.DoEvents() forces the Windows Forms application to process all WM_* messages in its message queue at that moment: all messages for the named pipe are then handled, the .NET code of the named pipe wrapper reacts to them, and the whole process completes as if nothing happened.

    It's not that simple to just say 'why didn't you use a worker thread to create the snapshot here?', because a thread doesn't get its own message pump: the messages would still be posted to the window's message pump. A hidden form would create its own message pump, so the additional thread would also have to create a window to get the WM_* messages of the named pipe posted to a message pump other than the main window's.

    This WM_* message pain is not something you want to be confronted with when using .NET and its libraries. Unfortunately, the way they're implemented, a lot of APIs are leaky abstractions: they bleed the characteristics of the OS objects they hide through to the .NET code. Be aware of that fact when using them :)
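
    Put together, the working sequence from the example above becomes (same code as before, with the one-line fix applied):

        InterceptorCore.Flush();
        Application.DoEvents();   // pump pending WM_* messages so the pipe reads complete
        snapshot.Stop();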

  • Doubts about several best practices for rest api + service layer

    - by TheBeefMightBeTough
    I'm going to be starting a project soon that exposes a restful api for business intelligence. It may not be limited to a restful api, so I plan to delegate requests to a service layer that then coordinates multiple domain objects (each of which has business logic local to the object). The api will likely have many calls, as it is a long-term project. While thinking about the design, I recalled a few best practices: 1) use command objects at the controller layer (I'm using Spring MVC); 2) use DTOs at the service layer; 3) validate in both the controller and service layer, though for different reasons. I have my doubts about these recommendations.

    1) Using command objects adds a lot of extra single-purpose classes (potentially one per request). What exactly is the benefit? Annotation-based validation can be done using this approach, sure. But what if I have two requests that take the same parameters but have different validation requirements? I would have to have two different classes with exactly the same members but different annotations? Bleh.

    2) I have heard that using DTOs is preferable to parameters because it makes for more maintainable code down the road (say, e.g., requirements change and the service parameters need to be altered). I don't quite understand this. Shouldn't an api be more-or-less set in stone? I would understand that in the early phases of a project (or, especially, an entire company) the domain itself will not be well understood, and thus core domain objects may change along with the apis that manipulate them. At that point, however, the number of api methods should be small and their dependents few, so changes to the methods could easily be tolerated from a maintainability standpoint. In a large api with many methods and a substantial domain model, I would think having a DTO for potentially each domain object would become unwieldy. Am I misunderstanding something here?

    3) I see validation in the controller and service layer as redundant in most cases. Why would I validate that parameters are not null and are in general well formed in the controller if the service is going to do exactly the same (and more)? Couldn't I just do all the validation in the service and throw a runtime exception with a list of bad parameters, then catch that in the controller to make the error messages more presentable? Better yet, couldn't I just make the error messages user-friendly in the service and let the exception trickle up to a global handler (ControllerAdvice in Spring, for example)? Is there something wrong with either of these approaches? (I do see a use case for controller validation if the input does not map one-to-one with the service input, but since the controllers are for a rest api and not forms, the api parameters will probably map directly to service parameters.)

    I also have a question about unchecked vs. checked exceptions. Namely, I'm not really sure why I'd ever want to use a checked exception. Every time I have seen them used, they just get wrapped into general exceptions (DomainException, SystemException, ApplicationException, w/e) to reduce the signature length of methods, or devs catch Exception rather than dealing with App1Exception, App2Exception, Sys1Exception, Sys2Exception. I don't see how either of these practices is very useful. Why not just use unchecked exceptions always and catch the ones you actually do care about? You could just document what unchecked exceptions the method throws.
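
    As an illustration of that last approach (unchecked exceptions plus a global handler), here's a minimal Spring sketch; the exception class and response shape are hypothetical, not from the post:

        import java.util.List;
        import org.springframework.http.ResponseEntity;
        import org.springframework.web.bind.annotation.ControllerAdvice;
        import org.springframework.web.bind.annotation.ExceptionHandler;

        // Unchecked exception thrown by the service layer, carrying
        // user-friendly messages for the bad parameters it found.
        public class ValidationException extends RuntimeException {
            private final List<String> errors;
            public ValidationException(List<String> errors) {
                super("Validation failed");
                this.errors = errors;
            }
            public List<String> getErrors() { return errors; }
        }

        // Global handler: turns the exception into a 400 for every controller,
        // so individual controllers never catch validation failures themselves.
        @ControllerAdvice
        public class GlobalExceptionHandler {
            @ExceptionHandler(ValidationException.class)
            public ResponseEntity<List<String>> handleValidation(ValidationException ex) {
                return ResponseEntity.badRequest().body(ex.getErrors());
            }
        }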

  • Why do the GNOME symbolic icons appear darker in a running application?

    - by David Planella
    I'm creating an application that uses symbolic icons from the default theme. However, there are a few icons I need that cannot be represented by those in the default theme, so I'm creating my own. What I did was simply go to /usr/share/icons/gnome/scalable/actions/, copy a few icons that could serve as a basis locally into my app's source tree, and start editing them. So far so good. But I've noticed the following: all symbolic icons are a light grey color when looking at the original .svg file, but when they are put onto a widget, they become darker. Here's an example, using the /usr/share/icons/gnome/scalable/actions/view-refresh-symbolic.svg icon from the default theme: here's what it looks like when opening the original with Inkscape, and here's what it looks like on a toolbar in a running application. Notice the icon being much darker at runtime. That happens with both the Ambiance and Radiance themes. I wouldn't mind much, but I noticed it affects my custom icon: parts of it become darker (the inner fill), whereas parts of it remain the same color as the original (the stroke). So what causes the default symbolic icons to darken, and how should I implement that for my custom icons?

  • What is the difference between running a Windows service vs. running through shell?

    - by Zack
    I am trying to troubleshoot an issue on a Windows 2008 server where attempting to connect to a "Timberline Data Source" ODBC driver crashes if the call is made in a "service" context, but succeeds if the call is initiated manually in a Remote Desktop session. I have set the service to run as my user. I'm wondering: all else being equal (user, machine, etc.), are there any fundamental security/environment differences between running a process as a service vs. manually?

    --- Implementation Details ---

    In case it is helpful for anyone: I had a system that started as an attempt to connect to a Timberline database using ODBC from a Python CGI script called via IIS 7. The script itself works fine; however, as soon as I attempt to perform the ODBC connect, the script crashes without throwing an exception. The script was able to connect fine when executed via the command line. The same thing happened when using a C#/.NET service, or when attempting to run via Apache, Windows Scheduler, or even a 3rd-party scheduling tool. With the last option (the 3rd-party scheduling tool, pycron) I set the service up to log in as my user and had the same issue (I confirmed via Task Manager that the process's user was, in fact, me). It just doesn't make sense to me why a service, which should be running as my user, appears to still be operating in a different security context or environment.

    Also, if it's important: the Timberline database is referenced by computer name on the network ("\\timberline-server\Timberline Office\Accounts\AT" or something to that effect). I also realized that, as Joel pointed out, the server DOES have a mapped drive ("Y:", which is mapped to "\\timberline-server\Timberline Office"). The DSN is set up at the "System DSN" level, which, according to the ODBC Administration Tool, means that the DSN is available to users and services.

    Since I'm not allowed to answer this question yet, I'll post the solution that I arrived at: as Joel Coel mentioned, there actually was a mapped-drive scenario. I didn't realize this because the DSN specified a path using UNC. However, it seems the actual Timberline driver referred to a mapped drive. Since services don't start with the mapped drive, I was forced to add the drive-mapping code into my service. Since it was written in Python, I used code from a Stack Overflow answer that was able to map the drive on the fly.
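
    A sketch of that kind of on-the-fly drive mapping in Python (shelling out to net use; the drive letter and share are from the post, and error handling is minimal):

        import subprocess

        def map_drive(letter, unc_path):
            """Map a network drive so code running as a service can see it."""
            subprocess.check_call(["net", "use", letter, unc_path])

        map_drive("Y:", r"\\timberline-server\Timberline Office")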

  • WCF Windows Service Monitor and process emails

    - by acadia
    Hello, I need your suggestions for solving this issue. Here is the requirement: we have a Microsoft Exchange server and a service email account, [email protected]. We have scanners all over the company; when a user scans a document, an email is sent to [email protected] with the scan as an attachment. Now I need to write a Windows service which monitors that email account, and whenever an email is received, reads the attachment and stores it in the database. My question is: is it possible to do something of this sort? Any suggestions greatly appreciated. Thanks
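
    One way to approach this (a sketch using the EWS Managed API against Exchange 2007 SP1 or later; the credentials and address are placeholders, not from the post) is to have the service poll the mailbox and pull attachments from unread messages:

        using Microsoft.Exchange.WebServices.Data;

        var service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
        service.Credentials = new WebCredentials("serviceaccount", "password", "domain");
        service.AutodiscoverUrl("scans@example.com");   // placeholder mailbox address

        // Find unread messages in the inbox.
        var unread = service.FindItems(WellKnownFolderName.Inbox,
            new SearchFilter.IsEqualTo(EmailMessageSchema.IsRead, false),
            new ItemView(50));

        foreach (Item item in unread)
        {
            var message = EmailMessage.Bind(service, item.Id,
                new PropertySet(ItemSchema.Attachments));
            foreach (Attachment att in message.Attachments)
            {
                var file = att as FileAttachment;
                if (file != null)
                {
                    file.Load();              // file.Content now holds the bytes
                    // store file.Content in the database here
                }
            }
        }

    A timer in the Windows service (or an EWS streaming/pull subscription) would drive the polling loop.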

  • Sql Server Service Broker Conversation Groups

    - by Brian Hasden
    Can someone explain conversation groups in Service Broker? Currently, I'm using Service Broker to send messages from one SQL Server to another. On the sending server, I'm trying to correlate the messages so they are processed serially on the receiving side. Based on the documentation, conversation groups seem to be a perfect fit for this, but on the receiving server the messages get assigned to a different conversation group from the one I specified when sending the message. I've searched around the web and saw that this behavior seems to be intended (http://social.msdn.microsoft.com/forums/en-US/sqlservicebroker/thread/baf48074-6804-43ab-844a-cb28a6dce02b/), but then I'm confused about the usefulness of the syntax from http://msdn.microsoft.com/en-us/library/ms178624.aspx:

        WAITFOR (
            GET CONVERSATION GROUP @conversation_group_id
            FROM [dbo].[ReceiveQueue]
        );

    If the conversation group doesn't come across with the message from the sender, and messages sent with the same conversation group id don't have the same conversation group id on the receive side, what's the point of the code above?

  • Windows Service HTTPListener Memory Issue

    - by crawshaws
    Hi all, I'm a complete novice when it comes to the "best practices" etc. of writing any code. I tend to just write it, and if it works, why fix it? Well, this way of working is landing me in some hot water. I am writing a simple Windows service to serve a single web page. (This service will be incorporated into another project which monitors the services and some folders on a group of servers.) My problem is that whenever a request is received, the memory usage jumps up by a few KB and keeps going up on every request. Now I've found that by putting GC.Collect in the mix it stops at a certain number, but I'm sure it's not meant to be used this way. I was wondering if I am missing something, or not doing something I should, to free up memory. Here is the code:

        Public Class SimpleWebService : Inherits ServiceBase
            ' Set the values for the different event log types.
            Public Const EVENT_ERROR As Integer = 1
            Public Const EVENT_WARNING As Integer = 2
            Public Const EVENT_INFORMATION As Integer = 4

            Public listenerThread As Thread
            Dim HTTPListner As HttpListener
            Dim blnKeepAlive As Boolean = True

            Shared Sub Main()
                Dim ServicesToRun As ServiceBase()
                ServicesToRun = New ServiceBase() {New SimpleWebService()}
                ServiceBase.Run(ServicesToRun)
            End Sub

            Protected Overrides Sub OnStart(ByVal args As String())
                If Not HttpListener.IsSupported Then
                    CreateEventLogEntry("Windows XP SP2, Server 2003, or higher is required to use the HttpListener class.")
                    Me.Stop()
                End If
                Try
                    listenerThread = New Thread(AddressOf ListenForConnections)
                    listenerThread.Start()
                Catch ex As Exception
                    CreateEventLogEntry(ex.Message)
                End Try
            End Sub

            Protected Overrides Sub OnStop()
                blnKeepAlive = False
            End Sub

            Private Sub CreateEventLogEntry(ByRef strEventContent As String)
                Dim sSource As String
                Dim sLog As String
                sSource = "Service1"
                sLog = "Application"
                If Not EventLog.SourceExists(sSource) Then
                    EventLog.CreateEventSource(sSource, sLog)
                End If
                Dim ELog As New EventLog(sLog, ".", sSource)
                ELog.WriteEntry(strEventContent)
            End Sub

            Public Sub ListenForConnections()
                HTTPListner = New HttpListener
                HTTPListner.Prefixes.Add("http://*:1986/")
                HTTPListner.Start()
                Do While blnKeepAlive
                    Dim ctx As HttpListenerContext = HTTPListner.GetContext()
                    Dim HandlerThread As Thread = New Thread(AddressOf ProcessRequest)
                    HandlerThread.Start(ctx)
                    HandlerThread = Nothing
                Loop
                HTTPListner.Stop()
            End Sub

            Private Sub ProcessRequest(ByVal ctx As HttpListenerContext)
                Dim sb As StringBuilder = New StringBuilder
                sb.Append("<html><body><h1>Test My Service</h1>")
                sb.Append("</body></html>")
                Dim buffer() As Byte = Encoding.UTF8.GetBytes(sb.ToString)
                ctx.Response.ContentLength64 = buffer.Length
                ctx.Response.OutputStream.Write(buffer, 0, buffer.Length)
                ctx.Response.OutputStream.Close()
                ctx.Response.Close()
                sb = Nothing
                buffer = Nothing
                ctx = Nothing
                'This line seems to keep the mem leak down
                'System.GC.Collect()
            End Sub
        End Class

    Please feel free to criticise and tear the code apart, but please BE KIND. I have admitted I don't tend to follow best practice when it comes to coding.

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

    On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but one of them stood out most: remote debug support for Windows Azure Cloud Service (a.k.a. WACS).

    Live Debug is a Nightmare for Cloud Applications

    When we are developing against the public cloud, debugging might be the most difficult task, especially after the application has been deployed. To minimize the debugging effort, Microsoft provided local emulators for cloud service and storage when the Windows Azure platform was announced. By using the local emulator, developers can run their application on the local machine with almost the same behavior as on Windows Azure, and it can be debugged easily and quickly. But once we deploy our application to Azure, we have to debug using logs and the diagnostic monitor, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug", which allows any workstation, under any user, to attach to a remote process. This is less secure compared to authenticated remote debugging, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach to our application from our local machine to Windows Azure, and it's very easy.

    How to Use the Remote Debugger

    First, let's create a new Windows Azure Cloud project in Visual Studio and select ASP.NET Web Role, then create an ASP.NET WebForms application. Next, right-click on the cloud project and select "Publish". In the publish dialog, we need to make sure the application is built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop as I will log into the virtual machine later in this post; it is NOT necessary for remote debugging. Then select the "Advanced settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if checked; currently there's no way for us to specify which role(s) and which instance(s) to enable. Finally, click the "Publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger.

    Attaching to the remote process is easy. Open the "Server Explorer" window in Visual Studio and expand the "Cloud Services" node, find the cloud service, role and instance we just published and want to debug, right-click on the instance and select "Attach Debugger". After a while (depending on how fast our connection to the Windows Azure data center is), Visual Studio switches to debug mode. Let's add a breakpoint in the default web page's form-load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint; the call stack and watch features are all available to use. Now let's hit F5 to continue, and back in the browser we find the page rendered successfully.

    What's Under the Hood

    The remote debugger is a WACS plugin. When we check "Enable Remote Debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them there.

    At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. If we go to the Azure management portal, we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop, there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger; as below, you can see two new ports opened on my application. Finally, on our WACS virtual machine, the Windows Azure system copies the remote debug component (based on which version of Visual Studio we are using) and starts it. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the Task Manager on the virtual machine of my WACS application.

    Summary

    In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the remote debugger. It allows us to attach to our application from a local machine to a Windows Azure virtual machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk, and since it's only available for debug builds the performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publish a beta version to the Azure staging slot), rather than for production.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Application.Current.Shutdown() vs. Application.Current.Dispatcher.BeginInvokeShutdown()

    - by Daniel Rose
    First a bit of background: I have a WPF application which is a GUI front-end to a legacy Win32 application. The legacy app runs as a DLL in a separate thread. The commands the user chooses in the UI are invoked on that "legacy thread". If the "legacy thread" finishes, the GUI front-end cannot do anything useful anymore, so I need to shut down the WPF application. Therefore, at the end of the thread's method, I call Application.Current.Shutdown(). Since I am not on the main thread, I need to invoke this call on the dispatcher. However, I then noticed that the Dispatcher also has BeginInvokeShutdown() to shut down the dispatcher. So my question is: what is the difference between invoking Application.Current.Shutdown(); and calling Application.Current.Dispatcher.BeginInvokeShutdown();?
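
    For concreteness, the first option as invoked from the legacy thread would look something like this (a sketch; the dispatcher marshals the call back onto the UI thread, since Shutdown() must run there):

        // From the worker ("legacy") thread: marshal Shutdown onto the UI thread.
        Application.Current.Dispatcher.BeginInvoke(
            new Action(() => Application.Current.Shutdown()));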

  • 5 minutes WIF: Make your ASP.NET application use test-STS

    - by DigiMortal
    Windows Identity Foundation (WIF) provides us with a simple dummy STS application we can use to develop our system with no actual STS in place. In this posting I will show you how to add STS support to your existing application and how to generate a dummy application that plays the role of your real STS.

    Word of caution! Although it is relatively easy to build your own STS using the WIF tools, I don't recommend you build one. Identity providers must be highly secure and stable in every sense, and this makes developing your own STS a very complex task. If it is possible, use a known STS solution.

    I suppose you have WIF and the WIF SDK installed on your development machine. If you don't, here are the links to the download pages: Windows Identity Foundation, Windows Identity Foundation SDK.

    Adding STS support to your web application

    Suppose you have a web application and you want to externalize authentication so your application is able to detect users, send unauthenticated users to a login page, and otherwise work exactly like it worked before. The WIF tools provide you with all you need.

    1. Click on your web application project and select "Add STS reference…" from the context menu to start adding or updating STS settings for the web application.
    2. Insert your application URI in the application settings window. Note that the web.config file is already selected for you. I inserted a URI that corresponds to my web application address under IIS Express. This URI must exist (later), because otherwise you cannot use the dummy STS service.
    3. Select "Create a new STS project in the current solution" and click the Next button.
    4. The summary screen gives you information about how your site will use the STS. You can run this wizard whenever you have to modify the STS parameters. Click Finish.

    If everything goes as expected, a new web site is added to your solution, named YourWebAppName_STS.

    Dummy STS application

    The image on the right shows the dummy STS web site. Yes, it is created as a web site project, not a web application, but it still works fine and you don't have to make any modifications there. It just works, but it is a dummy one.

    Why a dummy STS? Some points about the dummy STS web site: it is not a template for your own custom STS identity provider. It is a very good and simple replacement for a real STS, so you have a more flexible development environment and you don't have to authenticate yourself against a real service. Of course, you can modify the dummy STS web site to mimic some behavior of your real STS.

    Pages in the dummy STS

    The dummy STS has two pages: Login.aspx and Default.aspx. Default.aspx is the page that handles requests to the STS service. Login.aspx is the page where authentication takes place. The dummy STS authenticates users using forms-based authentication: you can insert whatever username you like and it still works. You can take a look at the code behind these pages to get some idea of how this dummy service is built up. But again, this service is there to simplify your life as a developer.

    Authenticating users using the dummy STS

    If you are using the development web server that ships with Visual Studio 2010, I suggest you switch over to IIS or IIS Express and make some more configuration changes as described in my previous posting, Making WIF local STS work with your ASP.NET application. When you are done with these little modifications, you are ready to run your application and see how authentication works. If everything is okay, you are redirected to the dummy STS login page when running your web application.

    Adam Carter is provided as the username by default. If you click the submit button, you are authenticated and redirected to the application page. In my case it looks like this.

    Conclusion

    As you saw, it is very easy to set up your own dummy STS web site for testing purposes. You coded nothing: you just ran a wizard, inserted some data, modified the configuration a little bit, and you were done. Later, when your application goes to production, you can run this STS configuration utility again and it will generate the correct settings for your real STS service automatically.

  • Just Another Web Service (JAWS) vs SOA

    Over the last few years SOA has been a hot topic, which has left it open to abuse by many who have no understanding of the concept. In my opinion, one of the largest issues facing SOA is this lack of understanding and experience implementing SOA among business and IT alike. I just recently deployed a new web service that is called by multiple service clients. Would you call this SOA because it is a web service that can be called by any requesting client? In my opinion, this is not SOA; instead it is Just Another Web Service (JAWS). Just because a company creates a web service does not mean that it is doing SOA; in fact, it only means that it is using a web service. SOA is an architectural style that focuses on the design of systems around consumers and providers through the use of contracts. With this approach, SOA needs to be applied from the top down in order for it to reach its full potential. In the case of the web service, the service is just a small part of the entire system that is reusable and has the flexibility to change. For a company in this case to move towards SOA, it needs to define business processes that can be shared through the use of reusable software and loose coupling. Once the company's thought and development processes change to address change in this manner, it can start to become more SOA.

    Read the article
