Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 312/880

  • Jboss unreachable/ slow behind apache with ajp

    - by Niels
    I have a Linux server running a JBoss instance behind Apache2, which reverse-proxies to JBoss over AJP. I found these messages in the Apache error.log: [error] (70007)The timeout specified has expired: ajp_ilink_receive() can't receive header [error] ajp_read_header: ajp_ilink_receive failed [error] (120006)APR does not understand this error code: proxy: read response failed from 8.8.8.8:8009 (hostname) [error] (111)Connection refused: proxy: AJP: attempt to connect to 8.8.8.8:8009 (hostname) failed [error] ap_proxy_connect_backend disabling worker for (hostname) [error] proxy: AJP: failed to make connection to backend: hostname [error] proxy: AJP: disabled connection for (hostname) I googled around but can't seem to find any related topics. Some people say this behaviour can be caused by a mismatch between the Apache and JBoss configuration: if the maximum number of connections Apache allows is far greater than what JBoss allows, Apache connections can time out. But I know the app isn't used by thousands, or even hundreds, of simultaneous connections at a time, so I don't believe that is the cause. Does anybody have an idea, or could you tell me how to debug this problem? I'm using these versions: Debian 4.3.5-4 64-bit, Apache 2.2.16, JBoss 4.2.3.GA. Thanks

    Read the article

  • Error Ant Build/deploy to websphere 7.0

    - by adisembiring
    Hi, I'm trying to build and deploy a WAR to WebSphere Process Server 7.0 on Windows. I'm using http://illegalargumentexception.blogspot.com/2008/08/ant-automated-deployment-to-websphere.html as my reference and http://illegalargumentexception.googlecode.com/svn/trunk/code/java/WebSphereAntFiles/ as the sample code to deploy. This is my build.properties: #build properties mywebappear=D:/data/code/WebSphereAntFiles/scripts/test/mywebappEAR.ear #WAS6 install directory was_home=C:/IBM/WID7_WTE/runtimes/bi_v7 #server name (see cell/node/server; e.g. "server1") was_server=server1 #user + password; for use when security is enabled was_user=admin was_password=admin #stops scripts on problem was_failonerror=true #virtual host was_virtualhost=default_host #Absolute path to EAR file #was_ear=fooEAR.ear #Name of the enterprise application #was_appname=fooEAR And this is my console output when I try to build with ws_ant.bat: [wsDefaultBindings] mywebapp.war [wsDefaultBindings] <virtual-host> --> default_host [wsDefaultBindings] [wsDefaultBindings] ------------------------ [wsDefaultBindings] Saving EAR File to directory [wsDefaultBindings] Saved EAR File to directory Successfully test_wsStartServer: WAS_wsStartServer: depCheck: depCheck: [startServer] ADMU0116I: Tool information is being logged in file [startServer] C:\IBM\WID7_WTE\runtimes\bi_v7\profiles\qwps\logs\server1\startServer.log [startServer] ADMU0128I: Starting tool with the qwps profile [startServer] ADMU3100I: Reading configuration for server: server1 [startServer] ADMU3028I: Conflict detected on port 8880. Likely causes: a) An instance of [startServer] the server server1 is already running b) some other process is [startServer] using port 8880 [startServer] ADMU3027E: An instance of the server may already be running: server1 [startServer] ADMU0111E: Program exiting with error: [startServer] com.ibm.websphere.management.exception.AdminException: ADMU3027E: An [startServer] instance of the server may already be running: server1 [startServer] ADMU1211I: To obtain a full trace of the failure, use the -trace option. [startServer] ADMU0211I: Error details may be seen in the file: [startServer] C:/IBM/WID7_WTE/runtimes/bi_v7/profiles/qwps\logs\server1\startServer.log BUILD FAILED D:\data\code\WebSphereAntFiles\scripts\test\build.xml:68: The following error occurred while executing this line: D:\data\code\WebSphereAntFiles\scripts\was\wsStartServer.xml:49: Java returned: -1

    Read the article

  • fork() within a fork()

    - by codingfreak
    Hi, is there any way to differentiate the child processes created by different fork() calls within a program? global variable i; SIGCHLD handler function() { i--; } handle() { fork() --> FORK2 } main() { while(1) { if(i<5) { i++; if( (fpid=fork())==0) --> FORK1 handle() else (fpid>0) ..... } } } Is there any way I can differentiate between the child processes created by FORK1 and FORK2? I am trying to decrement the global variable 'i' in the SIGCHLD handler, and it should be decremented only for processes created by FORK1. I tried using an array to save the process IDs of the child processes created by FORK1 [this code is placed in the fpid > 0 part] and then decrementing 'i' only if the process ID of the dead child is within the array, but this didn't work out, because sometimes child processes die so quickly that the array isn't updated in time and everything gets messed up. Is there a better solution for this problem?

    Read the article

  • Ruby on Rails (Redmine) on Apache - 503 Error

    - by andrewtweber
    I am running a Ruby on Rails application called Redmine. It's been working fine, but today it's giving a 503 Service Temporarily Unavailable error. (It was initially set up by an employee who is now gone.) I check the error log and it says: [Mon Nov 21 11:03:30 2011] [error] (111)Connection refused: proxy: HTTP: attempt to connect to 127.0.0.1:3000 (127.0.0.1) failed [Mon Nov 21 11:03:30 2011] [error] ap_proxy_connect_backend disabling worker for (127.0.0.1) Here's a chunk of my Apache config <VirtualHost *:80> ServerName redmine.{domain}.com RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f RewriteRule ^/(.*)$ balancer://redminecluster%{REQUEST_URI} [P,QSA,L] </VirtualHost> <Proxy balancer://redminecluster> BalancerMember http://127.0.0.1:3000 </Proxy> I found this link: http://www.redmine.org/boards/2/topics/20561 which suggests I simply need to "start the redmine server." I've tried /etc/init.d/redmine start which gives me this output => Booting Mongrel => Rails 2.3.11 application starting on http://0.0.0.0:3000 The contents of /etc/init.d/redmine: cd /var/redmine sudo ruby script/server -d -e production One thing I immediately notice is that it says 0.0.0.0 instead of 127.0.0.1. In addition, running top or ps -ef shows no record of a "mongrel" or "redmine" process. I've also tried restarting Apache before and after starting redmine. Not sure where to go from here.

    Read the article

  • ASP.NET MVC - PartialView not refreshing

    - by Billy Logan
    Hello everyone, I have a view that uses a JavaScript callback to reload a partial view. For whatever reason the contents of the partial view do not refresh, even though I can step through the entire process and see the partial being re-requested and populated. Any reason why the page would not display? Code is as follows: <div id="big_image_content"> <% Html.RenderPartial("ZoomImage", Model); %> </div> This link should reload the div above: <a href="javascript:void(0)" onclick="$('#big_image_content').load('/ShopDetai/ZoomImage');" title="<%= shape.Shape %>" alt="<%= shape.Shape %>"> <img src="http://images.rugs-direct.com/<%= shape.Image.ToLower() %>" width="40" alt="<%= shape.Shape %>"> </a> The partial view (ZoomImage.ascx), simplified for now but still not loading: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<RugsDirect.Data.ItemDetailsModel>" %> <%= Model.Category.ToLower()%> And finally the controller side of things: public ActionResult ZoomImage() { try { ItemDetailsModel model = GetMainImageContentModel(); return PartialView("ZoomImage", model); } catch (Exception ex) { //send the error email ExceptionPolicy.HandleException(ex, "Exception Policy"); //redirect to the error page return RedirectToAction("ViewError", "Shop"); } } Again, I can step through this entire process and all seems to be working except for the page not reloading. I can even break on the <%= Model.Category.ToLower() %> of the partial view, but it will not be displayed. Thanks in advance, Billy

    Read the article

  • Running job in the background from Perl WITHOUT waiting for return

    - by Rafael Almeida
    The Disclaimer First of all, I know this question (or close variations) has been asked a thousand times. I really spent a few hours looking in the obvious and the not-so-obvious places, but there may be something small I'm missing. The Context Let me define the problem more clearly: I'm writing a newsletter app in which I want the actual sending process to be async. As in, the user clicks "send", the request returns immediately, and then they can check the progress on a specific page (via AJAX, for example). It's written in your traditional LAMP stack. On the particular host I'm using, PHP's exec() and system() are disabled for security reasons, but Perl's system functions (exec, system and backticks) aren't. So my workaround was to create a "trigger" script in Perl that calls the actual sender via the PHP CLI, and then redirects to the progress page. Where I'm Stuck The very line that calls the sender is, as of now: system("php -q sender.php &"); The problem is that it's not returning immediately, but waiting for the script to finish. I want the script to run in the background while the system call itself returns right away. I also tried running a similar script in my Linux terminal, and in fact the prompt doesn't show until after the script has finished, even though my test output doesn't run, indicating it's really running in the background. What I already tried Perl's exec() function - same result as system(). Changing the command to "php -q sender.php | at now", hoping that the "at" daemon would return and that the PHP process itself wouldn't be attached to Perl. What should I try now?

    Read the article

  • Refactoring Bloated ViewModel

    - by Holy Christ
    Hi, I am writing a PRISM/MVVM/WPF application. It's a LOB application, so there are a lot of complicated rules. I've noticed the view model is starting to get bloated. There are two main issues. One is that to maintain MVVM, I'm doing a lot of things that feel hacky, like adding a bunch of properties to my VM that the view binds to just to keep track of what feels like view-specific information. For example, a boolean keeps track of the status of a long-running process in the VM so the view can disable some of its controls while the long-running process is working. I've read that this issue could be solved with attached behaviors; I'll look more into that. In the example MVVM apps you see online this isn't a big deal, because they are over-simplified. The other issue is the number of commands in my VM. Right now there are four. I'm defining the commands in the VM using Josh Smith's RelayCommand (basically the DelegateCommand in PRISM) so all the business logic lives in the VM. I considered moving each command into a separate unit of work, but I'm not sure of the best way to do this. Which patterns are you using to keep your VMs clean? I can already feel someone responding with "your view and VM are too complicated, you should break them into many views/VMs". It is certainly not too complicated from a UX perspective - there are 2 buttons, a combobox, and a listbox - and from a logical perspective it is one cohesive domain. Having said that, I'm very interested in hearing how others are dealing with this type of issue. Thanks for your input.
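    As a small illustration of the attached-behavior idea mentioned in the question, here is a rough sketch (class and property names are made up) that lets the view disable a control while a bound busy flag is true, keeping the enable/disable plumbing out of the VM's command handlers:

        using System.Windows;

        // Attached behavior: bind DisableWhenBusy to a boolean on the VM and the
        // element's IsEnabled follows it, without per-control logic in the VM.
        public static class BusyBehavior
        {
            public static readonly DependencyProperty DisableWhenBusyProperty =
                DependencyProperty.RegisterAttached(
                    "DisableWhenBusy",
                    typeof(bool),
                    typeof(BusyBehavior),
                    new PropertyMetadata(false, OnDisableWhenBusyChanged));

            public static bool GetDisableWhenBusy(DependencyObject obj)
            {
                return (bool)obj.GetValue(DisableWhenBusyProperty);
            }

            public static void SetDisableWhenBusy(DependencyObject obj, bool value)
            {
                obj.SetValue(DisableWhenBusyProperty, value);
            }

            private static void OnDisableWhenBusyChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                // Disable the element while the bound flag is true.
                var element = d as UIElement;
                if (element != null)
                {
                    element.IsEnabled = !(bool)e.NewValue;
                }
            }
        }

    In XAML it would be attached with something like local:BusyBehavior.DisableWhenBusy="{Binding IsImporting}" (the IsImporting name is hypothetical); the VM still exposes the flag, but the view-specific reaction to it lives outside the VM.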

    Read the article

  • Search XDocument with LINQ with out knowing the Namespace

    - by BarDev
    Is there a way to search an XDocument without knowing the namespace? I have a process that logs all SOAP requests and encrypts the sensitive data. I want to find elements by name - something like "give me all elements where the name is CreditCard", regardless of the namespace. My problem seems to be that LINQ requires an XML namespace. I have other processes that retrieve values from XML, but I know the namespace for those. XDocument xDocument = XDocument.Load(@"C:\temp\Packet.xml"); XNamespace xNamespace = "http://CompanyName.AppName.Service.Contracts"; var elements = xDocument.Root.DescendantsAndSelf().Elements().Where(d => d.Name == xNamespace + "CreditCardNumber"); But what I really want is the ability to search the XML without knowing the namespace, something like this: XDocument xDocument = XDocument.Load(@"C:\temp\Packet.xml"); var elements = xDocument.Root.DescendantsAndSelf().Elements().Where(d => d.Name == "CreditCardNumber"); But of course this will not work because I do not have a namespace. BarDev
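    A minimal sketch of one namespace-agnostic approach is to compare only the local part of each element name via XName.LocalName (the file path and element name below are just the placeholders from the question):

        using System;
        using System.Linq;
        using System.Xml.Linq;

        class LocalNameSearch
        {
            static void Main()
            {
                // Load the logged SOAP packet (path taken from the question).
                XDocument doc = XDocument.Load(@"C:\temp\Packet.xml");

                // Match on the local name only, ignoring whatever namespace the element is in.
                var creditCards = doc.Descendants()
                                     .Where(e => e.Name.LocalName == "CreditCardNumber");

                foreach (XElement element in creditCards)
                {
                    Console.WriteLine(element.Value);
                }
            }
        }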

    Read the article

  • dojo.io.iframe erroring when uploading a file

    - by Grant Collins
    Hi, I hit an interesting problem today when trying to upload an image file < 2MB using dojo.io.iframe. My function to process the form is called, but before the form is posted to the server I get the following error: TypeError: ifd.getElementsByTagName("textarea")[0] is undefined The function used to post the form is: function uploadnewlogo(){ var logoDiv = dojo.byId('userlogo'); var logoMsg = dojo.byId('uploadmesg'); //prep the io frame to send logo data. dojo.io.iframe.send({ url: "/users/profile/changelogo/", method: "post", handleAs: "text", form: dojo.byId('logoUploadFrm'), handle: function(data,ioArgs){ var response = dojo.fromJson(data); if(response.status == 'success'){ //first clear the image //dojo.style(logoDiv, "display", "none"); logoDiv.innerHTML = ""; //then we update the image logoDiv.innerHTML = response.image; }else if(response.status == 'error'){ logoMsg.innerHTML = data.mesg; }else{ logoMsg.innerHTML = '<div class="error">Whoops! We can not process your image.</div>'; } }, error: function(data, ioArgs){ logoMsg.innerHTML = '<div class="error">' + data + '</div>'; } }); } The form is very basic, with just a file input and a simple button that calls this bit of JavaScript and Dojo. I have very similar code elsewhere in my application that uploads Word/PDF documents and it doesn't error, but for some reason this does. Any ideas or pointers on what I should try to get this working without errors? I'm using PHP and Zend Framework for the backend if that has anything to do with it, but I doubt it, since the request doesn't even hit the server before it fails. Many thanks, Grant

    Read the article

  • Coherent access to mainframe files from Win32 application and IBM RDZ/Eclipse?

    - by Ira Baxter
    I have a suite of tools for processing IBM COBOL source code; these tools are built as Win32 applications and talk to Windows (including network) files using traditional Windows file system calls (open, close, read, write) and work just fine, thank you. I'd like to integrate these with Eclipse; we think we understand how to get Eclipse to do the UI for us. The problem is that Eclipse/RDZ users access mainframe files through some IBM magic. In "How does RDZ access mainframe files" I tried to understand how Eclipse accesses files on a mainframe. Apparently Eclipse/RDZ has a secret filesystem access backdoor not available to normal mortals. At issue is how our tools, reading some Windows-accessible file (local disk file, NFS to mainframe, ...), can associate such files with the files that Eclipse can access or is using. Ideally we'd like UI-integrated versions of our tools to take an Eclipse file-name string for a mainframe file, pass it to our Windows application to process, have the Windows application open/read/process the file, and return results associated with that file to the Eclipse UI. Is there a canonical file name path used with mainframe NFS that would be equivalent to the name or access object Eclipse RDZ uses to access the same file? Are all operations doable internally by Eclipse also doable via mainframe NFS [for instance, can NFS read/update an element in a partitioned data set? Can Eclipse RDZ? Does it matter?]? Is the mainframe file access available to custom Java code running under Eclipse RDZ (e.g., equivalents of open/close/read/write based on filename/path/something)? If so, can somebody steer me towards documentation describing the access methods? Has anybody else already solved this problem, or do you have a good suggestion?

    Read the article

  • Java OutOfMemoryError due to Linux RAM disk cache not freed

    - by Markus Jevring
    The process will run fine all day and then, bam, without warning, it will throw this error - sometimes seemingly in the middle of doing nothing, and at apparently random times during the day. I checked whether anything else was running on the machine, like scheduled backups, but found nothing. The machine has enough physical memory (2GB, with about 1GB free for a 3-500MB load), and has a sufficient -Xmx specified. According to our sysadmin, the problem is that the RAM the kernel uses as a disk cache (apparently all but 8MB) is not freed when the JVM needs to allocate memory, so the JVM process throws an OutOfMemoryError. This could be because Java asks the kernel if enough memory is available before allocating and finds that it is insufficient, resulting in a crash. I would like to think, however, that Java simply tries to allocate the memory via the kernel, and when the kernel gets such a request, it makes room for the application by throwing out some of the disk cache. Has anyone else run into this issue, and if so, what was the error and how did you solve it? We are currently using jdk1.6.0_20 on SLES 10 SP2, Linux 2.6.16.60-0.42.9-smp, in VMware ESX.

    Read the article

  • Why is Available Physical Memory (dwAvailPhys) > Available Virtual Memory (dwAvailVirtual) in call G

    - by Andrew
    I am playing with an MSDN sample to do memory stress testing (see: http://msdn.microsoft.com/en-us/magazine/cc163613.aspx) and an extension of that tool that specifically eats physical memory (see http://www.donationcoder.com/Forums/bb/index.php?topic=14895.0;prev_next=next). I am obviously confused, though, about the differences between virtual and physical memory. I thought each process has 2 GB of virtual memory (although I also read 1.5 GB because of "overhead"). My understanding was that some/all/none of this virtual memory could be physical memory, and the amount of physical memory used by a process could change over time (memory could be swapped out to disc, etc.). I further thought that, in general, when you allocate memory, the operating system could use physical memory or virtual memory. From this, I conclude that dwAvailVirtual should always be equal to or greater than dwAvailPhys in the call GlobalMemoryStatus. However, I often (always?) see the opposite. What am I missing? I apologize in advance if my question is not well formed; I'm still trying to get my head around the whole memory management system in Windows. Tutorial/explanation/book recommendations are most welcome! Andrew

    Read the article

  • C# Windows Service

    - by Goober
    Scenario I've created a Windows service, but whenever I start it, it stops immediately. The service was conceived from a console application that subscribed to an event and watched processes on a server. If anything happened to a process (i.e. it was killed), the event would trigger the process to be restarted. The reason I'm telling you this is because the original code used to look like this: Original console app code: static void Main(string[] args) { StartProcess sp = new StartProcess(); //Note the readline that means the application sits here waiting for an event! Console.ReadLine(); } Now that this code has been turned into a Windows service, it is essentially EXACTLY THE SAME. However, the service does not sit there waiting, even with the readline; it just ends..... New Windows service code: protected override void OnStart(string[] args) { ProcessMonitor pm = new ProcessMonitor(); Console.ReadLine(); } Thoughts Since the functionality is entirely encapsulated within this single class (it quite literally starts, sets up some events and waits) - how can I get the service to actually sit there and wait? It seems to be ignoring the readline. This works perfectly as a console application; it is just far more convenient to have it as a service.
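    For context, a service's OnStart is expected to return quickly; the Service Control Manager keeps the process alive, so no Console.ReadLine() is needed (and there is no console to read from). A minimal sketch, assuming the ProcessMonitor class from the question wires up its own event subscriptions:

        using System.ServiceProcess;

        public class MonitorService : ServiceBase
        {
            // Keep a field reference so the monitor stays rooted for the life of the service.
            private ProcessMonitor _monitor;

            protected override void OnStart(string[] args)
            {
                // Set up the event subscriptions and return; the SCM keeps the process running.
                _monitor = new ProcessMonitor();
            }

            protected override void OnStop()
            {
                // Hypothetical cleanup hook - depends on what ProcessMonitor exposes.
                _monitor = null;
            }
        }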

    Read the article

  • uninstall string

    - by Sakhawat Ali
    Hi experts, I am developing a desktop application in VB.NET, similar to Add/Remove Programs. Everything was working fine until I started working on the uninstall feature. What I do now is read the UninstallString for a specific application from the registry and run it with System.Diagnostics.Process: Dim proc As New Process() proc.StartInfo.FileName = UninstallString proc.Start() proc.WaitForExit() proc.Close() Later I found that this only works for plain file paths with no command-line arguments, like: C:\program files\someApp\uninstall.exe I made a list of the UninstallStrings of all applications installed on my machine and found several variations: applications installed using MSI, some using rundll32, and some using a plain file path with command-line arguments. For example: Silverlight SDK (MSI example): MsiExec.exe /X{2012098D-EEE9-4769-8DD3-B038050854D4} JetAudio (RunDll32 example): RunDll32 C:\PROGRA~1\COMMON~1\INSTAL~1\engine\6\INTEL3~1\Ctor.dll,LaunchSetup "C:\Program Files\InstallShield Installation Information{91F34319-08DE-457A-99C0-0BCDFAC145B9}\Setup.exe" -l0x9 Google Chrome (plain path with arguments): "C:\Program Files\Google\Chrome\Application\5.0.375.55\Installer\setup.exe" -uninstall The code above does not work for these, so I did some string parsing to split the UninstallString into a file name and arguments - for MSI the file name is MsiExec.exe and the arguments are the rest of the string, and likewise for RunDll32 and plain paths with arguments. The trouble is that every two or three days I discover another kind of UninstallString that doesn't parse, maybe something like abc C:\program files\someapp.exe -ddd, and I have to handle it too. Is there a better way of doing this than parsing the string?
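    One commonly used workaround (shown here as a C# sketch purely for illustration, though the question is VB.NET) is to skip the parsing entirely and hand the whole UninstallString to the command interpreter, which already knows how to split an executable from its arguments:

        using System.Diagnostics;

        static class Uninstaller
        {
            // Run a raw UninstallString much as Add/Remove Programs would,
            // letting cmd.exe do the command-line splitting.
            public static void Run(string uninstallString)
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "cmd.exe",
                    // The extra outer quotes guard paths with spaces inside the string.
                    Arguments = "/c \"" + uninstallString + "\"",
                    UseShellExecute = false
                };

                using (Process proc = Process.Start(psi))
                {
                    proc.WaitForExit();
                }
            }
        }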

    Read the article

  • java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc. I'm wondering which, if any, is appropriate for my use. First, I'm not doing web applications (most of these projects seem to be geared towards web apps). I write servers (Order Management Systems, actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination while possibly transforming some of these messages. I am looking at these products to solve the following problems: * A safe repository for the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place. * This repository should be distributed: in case one of these data stores crashes, one or two more should be up and I should be able to switch to them seamlessly. * This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. This system needs high throughput and low latency. By sending my data outside the process I am necessarily slowing performance, but I am trying to balance absolute raw speed against absolute protection of data. * This repository should be safe. Similar to the point about several online backups, this system needs to write data to disk (potentially more than one disk). I'd really like to stop writing my own 'transaction servers.' Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks

    Read the article

  • Can't write the SysWow64 value to registry with vbscript for Screensaver

    - by Valentein
    Scripts, registries, screen-savers, oh my! I'm trying to use a screen-saver on a Windows XP 64-bit machine; it uses a .NET app which makes an interop call that relies on some x86 Shockwave DLLs (some Shockwave animation). Everything should be in the %systemroot%\WINNT\SysWOW64 directory. When the timeout for the screensaver occurs, the process chain should look like this: Screensaver.scr - .NET app - Shockwave animation. During installation I want a VBScript to copy Screensaver.scr to the SysWOW64 directory and then set the proper registry key to this file so Windows will launch the screen-saver. The code is something like this: Dim sScreenSaver, tScreenSaver sScreenSaver = "C:\SourceFiles\bin\ScreenSaver.scr" 'screensaver tScreenSaver = "C:\winnt\SysWOW64\" Set WshShell = WScript.CreateObject("WScript.Shell") 'script shell to run objects Set FSO = createobject("scripting.filesystemobject") 'file system object 'copy screensaver FSO.CopyFile sScreenSaver, tScreenSaver, True 'set screen saver Dim p1 p1 = "HKEY_CURRENT_USER\Control Panel\Desktop\" WshShell.RegWrite p1 & "SCRNSAVE.EXE", (tScreenSaver & "ScreenSaver.scr") After installation, I can verify that the screensaver exists in the correct directory. (It actually seems to be in both the system32 and the SysWOW64 directories - whether that's the install script or something I did post-install I'm in the process of verifying.) However, the registry entry is not correct. In both the 32-bit and 64-bit regedit I see that HKCU\Control Panel\Desktop\SCRNSAVE.EXE is set to: C:\WINNT\system32\Screensaver.scr This isn't right; the screen-saver won't run from this directory, it only runs from SysWOW64. If I manually edit the registry with regedit to the correct SysWOW64 path, everything works fine. Is this a problem with the script, or is this a Windows registry-redirection or filesystem-redirection problem? You'd think this would be simple...

    Read the article

  • Can't debug Java Windows Services with jhat, jps, jstack

    - by Matthew McCullough
    I frequently showcase the jhat, jps, and jstack tool set to developers on Linux and Mac. However, a developer recently indicated that these are unusable on Windows if the Java app in question is running as a Windows service. A Sun-filed bug says something very similar, but was closed due to inactivity. I have tested this out for myself, and indeed it appears true, though I can hardly believe it. Here is the setup: Tomcat or similar running as a Windows service with "Log On As" == "Local System", and a user with admin privileges logged in to the same Windows machine. The admin opens Windows Task Manager and can see java.exe running. The admin opens a console, types "jps", and gets back a list of processes that does not include Tomcat's java service process. As a brute-force attempt, get the PID of the Tomcat service from Windows Task Manager and type "jstack <pid>". The reply is: <pid> no such process. This appears reproducible under Windows XP, Windows 2003 Server, and Windows 7. Java versions 1.5 and 1.6 yield the same outcome. Is there a way from the terminal, even though logged in as admin, to "sudo up" to get jps and the other tools to see the Java service?

    Read the article

  • IIS High use & Server Performance issues

    - by HaydnWVN
    Have an SBS2011 running Exchange, a database app and a few other things, serving 5 users (3 low use, 1 high). The server was never specced for the database app so it isn't as powerful as I'd like... only 12GB RAM. We have increasingly found performance problems with this server; last week it was so bad I couldn't even connect remotely. To free up some available RAM I have (over the past month or so): restricted the Exchange message store to 1GB, with (so far) no ill effects; restricted the SQL databases (including SBSMonitoring and Sharepoint/##SSEE, which isn't used). Now I am finding that IIS worker threads are using up the available memory, and I have (so far) been unable to track down much useful information about restricting them. This server is not 'serving' anything web-based apart from OWA, which I find people using because Outlook is so slow (again related to the server's performance). I am aware that Exchange on SBS2011 is designed to use up available resources (and concede when other applications ask for them), but it is not doing so (or anywhere near fast enough) for our needs. Opening the database application (using Postgres) takes 5+ minutes from client machines and regularly times out or crashes because of this. After a reboot (before the SQL/Exchange/IIS databases are very large/totally cached) we get the performance we need and expect. Previously a reboot once a month was enough... then once a week... now they have taken to rebooting it almost daily!

    Read the article

  • How to display NotifyIcon and SSDP Service running during AutoLogon

    - by Paul Farry
    I've got an application (targeting .NET Framework 2.0) that runs at system startup, and I'm trying to get a NotifyIcon to display. When my program starts because a user runs it normally, or it is started as a child process after the system has already logged on, everything is fine. If my application starts up while the system is performing an AutoLogon using POSReady 2009 (basically XP with Single User set), then the NotifyIcon never becomes active. If you subsequently check the icon's .Visible (in a timer) at any point later, it always reports that Visible = true. If you disable the SSDPSrv service and restart the computer, the icon displays correctly. I have a sneaking suspicion this is related to .NET 3.5 SP1 being installed over the top of a .NET 2 system. Is there some process that I should be following to ensure that my NotifyIcon is always available to the user? I have set up RegisterWindowMessage("TaskbarCreated") but I never receive this message, except when you forcibly kill Explorer.exe and restart it. Even so, a NotifyIcon internally registers for these notifications anyway, so it shouldn't be required. I'm happy to stall the startup of my program, but once the program starts up, I expect the icon to show correctly.
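    For reference, a minimal sketch of handling the TaskbarCreated broadcast mentioned above and re-adding the icon when the shell (re)creates the taskbar; this may not address the autologon timing itself, but it shows the plumbing:

        using System;
        using System.Runtime.InteropServices;
        using System.Windows.Forms;

        public class TrayForm : Form
        {
            [DllImport("user32.dll", CharSet = CharSet.Auto)]
            private static extern uint RegisterWindowMessage(string message);

            // Broadcast sent by the shell whenever the taskbar is created.
            private static readonly uint WM_TASKBARCREATED = RegisterWindowMessage("TaskbarCreated");

            private readonly NotifyIcon _icon;

            public TrayForm()
            {
                _icon = new NotifyIcon();
                _icon.Icon = System.Drawing.SystemIcons.Application;
                _icon.Visible = true;
            }

            protected override void WndProc(ref Message m)
            {
                if (m.Msg == WM_TASKBARCREATED)
                {
                    // Toggle visibility to force the icon to be re-added to the new taskbar.
                    _icon.Visible = false;
                    _icon.Visible = true;
                }
                base.WndProc(ref m);
            }
        }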

    Read the article

  • How can I make a server log file available via my ASP.NET MVC website?

    - by Gary McGill
    I have an ASP.NET MVC website that works in tandem with a Windows Service that processes file uploads. For easy maintenance of the site, I'd like the log file for the Windows Service to be accessible (to me, only) via the website, so that I can hit http://myserver/logs/myservice to view the contents of the log file. How can I do that? At a guess, I could either have the service write its log file in a "Logs" folder at the top level of the site, or I could leave it where it is and set up a virtual directory to point to it. Which of these is better - or is there another, better way? Wherever the file is stored, I can see that there's going to be another problem. I tried out the first option (Logs folder in my website), but when I try to access the file via HTTP I get an error: The process cannot access the file 'foo' because it is being used by another process. Now, I know from experience that my service keeps the file locked for writing while it's running, but that I can still open the file in Notepad to view the current contents. (I'm surprised that IIS insists on write access, if that's what's happening). How can I get around that? Do I really have to write a handler to read the file and serve it to the browser myself? Or can I fix this with configuration or somesuch? PS. I'm using IIS7 if that helps.

    Read the article

  • SqlBulkCopy causes Deadlock on SQL Server 2000.

    - by megatoast
    I have a customized data import executable in .NET 3.5 which the SqlBulkCopy to basically do faster inserts on large amounts of data. The app basically takes an input file, massages the data and bulk uploads it into a SQL Server 2000. It was written by a consultant who was building it with a SQL 2008 database environment. Would that env difference be causing this? SQL 2000 does have the bcp utility which is what BulkCopy is based on. So, When we ran this, it triggered a Deadlock error. Error details: Transaction (Process ID 58) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. I've tried numerous ways to try to resolve it. like temporarily setting the connection string variable MultipleActiveResultSets=true, which wasn't ideal, but it still gives a Deadlock error. I also made sure it wasn't a connection time out problem. here's the function. Any advice? /// <summary> /// Bulks the insert. /// </summary> public void BulkInsert(string destinationTableName, DataTable dataTable) { SqlBulkCopy bulkCopy; if (this.Transaction != null) { bulkCopy = new SqlBulkCopy ( this.Connection, SqlBulkCopyOptions.TableLock, this.Transaction ); } else { bulkCopy = new SqlBulkCopy ( this.Connection.ConnectionString, SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.UseInternalTransaction ); } bulkCopy.ColumnMappings.Add("FeeScheduleID", "FeeScheduleID"); bulkCopy.ColumnMappings.Add("ProcedureID", "ProcedureID"); bulkCopy.ColumnMappings.Add("AltCode", "AltCode"); bulkCopy.ColumnMappings.Add("AltDescription", "AltDescription"); bulkCopy.ColumnMappings.Add("Fee", "Fee"); bulkCopy.ColumnMappings.Add("Discount", "Discount"); bulkCopy.ColumnMappings.Add("Comment", "Comment"); bulkCopy.ColumnMappings.Add("Description", "Description"); bulkCopy.BatchSize = dataTable.Rows.Count; bulkCopy.DestinationTableName = destinationTableName; bulkCopy.WriteToServer(dataTable); bulkCopy = null; }

    Read the article

  • Some general C questions.

    - by b-gen-jack-o-neill
    Hello. I am trying to fully understand the process of going from writing code in some language to execution by the OS. In my case, the language is C and the OS is Windows. So far I have read many different articles, but I am not sure whether I understand the process correctly, and I would like to ask whether you know of good articles on the subjects I couldn't find. So, what I think I know about C (and basically other languages): The C compiler itself handles only data types, basic math operations, pointer operations, and work with functions. By work with functions I mean how to pass arguments to them and how to get output from them. During compilation, a function call is replaced by pushing the arguments onto the stack, and then, if the function is not inline, the call is replaced by a symbol for the linker. The linker then finds the function definition and replaces the symbol with a jump address to that function (and, of course, a jump back to the program afterwards). If the above is generally true and I've got it right, where in the final .exe file does the linker actually put the functions? After the main() function? And what creates the .exe header - the compiler or the linker? Now, the additional capabilities of C, today known as the C standard library, are a set of functions and declarations that other programmers wrote to extend and simplify the use of the C language. But these functions, like printf(), were (or could be?) written in a different language, or assembler. And here comes my next question: can, for example, the printf() function be written in pure C without the use of assembler? I know this is quite a big question, but I mostly just want to know whether I am right or not. And trust me, I have read a lot of articles on the web, and I would not ask you if I could find this information together in one place, in one article. Instead I must gather the information piece by piece, so I am not sure whether I am right. Thanks.

    Read the article

  • What is the relationship between Turing Machine & Modern Computer ? [closed]

    - by smwikipedia
    I have heard a lot that modern computers are based on the Turing machine, but I just cannot build a bridge between a conceptual Turing machine and a modern computer. Could someone help me build this bridge? Below is my current understanding. I think the computer is a big general-purpose Turing machine, and each program we write is a small special-purpose Turing machine. The classical Turing machine does its job based on its input and its current internal state, and so do our programs. Let's take a running program (a process) as an example. We know that in the process's address space there are areas for the stack, the heap, and code. A classical Turing machine doesn't have the ability to remember many things, so we borrow the concept of a stack from the push-down automaton. The heap and stack areas contain the state of our special-purpose Turing machine (our program), the code area represents the logic of this small Turing machine, and the various I/O devices supply input to it.

    Read the article

  • Locking a table for getting MAX in LINQ

    - by Hossein Margani
    Hi Every one! I have a query in LINQ, I want to get MAX of Code of my table and increase it and insert new record with new Code. just like the IDENTITY feature of SQL Server, but here my Code column is char(5) where can be alphabets and numeric. My problem is when inserting a new row, two concurrent processes get max and insert an equal Code to the record. my command is: var maxCode = db.Customers.Select(c=>c.Code).Max(); var anotherCustomer = db.Customers.Where(...).SingleOrDefault(); anotherCustomer.Code = GenerateNextCode(maxCode); db.SubmitChanges(); I ran this command cross 1000 threads and each updating 200 customers, and used a Transaction with IsolationLevel.Serializable, after two or three execution an error occured: using (var db = new DBModelDataContext()) { DbTransaction tran = null; try { db.Connection.Open(); tran = db.Connection.BeginTransaction(IsolationLevel.Serializable); db.Transaction = tran; . . . . tran.Commit(); } catch { tran.Rollback(); } finally { db.Connection.Close(); } } error: Transaction (Process ID 60) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. other IsolationLevels generates this error: Row not found or changed. Please help me, thank you.

    Read the article

  • Custom Database integration with MOSS 2007

    - by Bob
    Hopefully someone has been down this road before and can offer some sound advice as far as which direction I should take. I am currently involved in a project in which we will be utilizing a custom database to store data extracted from Excel files based on pre-established templates (to maintain consistency). We currently have a process (written in C#.Net 2008) that can extract the necessary data from the spreadsheets and import it into our custom database. What I am primarily interested in is figuring out the best method for integrating that process with our portal. What I would like to do is let SharePoint keep track of the metadata about the spreadsheet itself and let the custom database keep track of the data contained within the spreadsheet. So, one thing I need is a way to link spreadsheets from SharePoint to the custom database and vice versa. As these spreadsheets will be updated periodically, I need a tried-and-true way of ensuring that the data remains synchronized between SharePoint and the custom database. I am also interested in finding out how to use the data from the custom database to create reports within the SharePoint portal. Any and all information will be greatly appreciated.

    Read the article
