Search Results

Search found 21539 results on 862 pages for 'hanging process'.

Page 126/862

  • ArchBeat Link-o-Rama for November 20, 2012

    - by Bob Rhubart
    Oracle Utilities Application Framework V4.2.0.0.0 Released | Anthony Shorten Principal Product Manager Anthony Shorten shares an overview of the changes implemented in the new release. Towards Ultra-Reusability for ADF - Adaptive Bindings | Duncan Mills "The task flow mechanism embodies one of the key value propositions of the ADF Framework," says Duncan Mills. "However, what if we could do more? How could we make task flows even more re-usable than they are today?" As you might expect, Duncan has answers for those questions. Oracle BPM Process Accelerators and process excellence | Andrew Richards "Process Accelerators are ready-to-deploy solutions based on best practices to simplify process management requirements," says Capgemini's Andrew Richards. "They are considered to be 'product grade,' meaning they have been designed, engineered, documented and tested by Oracle themselves to a level that they can be deployed as-is for a solution to a problem or extended as appropriate for a particular scenario." Oracle SOA Suite 11g PS 5 introduces BPEL with conditional correlation for aggregation scenarios | Lucas Jellema An extensive, detailed technical post from Oracle ACE Director Lucas Jellema. Check Box Support in ADF Tree Table Different Levels | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis updates last year's "ADF Tree - How to Autoselect/Deselect Checkbox" post with new information. As Boom Lures App Creators, Tough Part Is Making a Living Great New York Times article about mobile app development also touches on other significant IT issues. Thought for the Day "Building large applications is still really difficult. Making them serve an organisation well for many years is almost impossible." — Malcolm P. Atkinson Source: SoftwareQuotes.com

    Read the article

  • Great Example of a Simple Cost-Benefit Analysis

    - by BuckWoody
    I saw a post the other day that you should definitely go check out. It’s a cost/benefit decision, and although the author gives it a quick treatment and doesn’t take all points in the decision into account, you should focus on the process he follows. It’s a quick and simple example of the kind of thought process we should have as data professionals when we pick a server, a process, an application, or even platform software. The key is to include more than just the price of a piece of software or hardware. You need to think about the “other” costs in the decision, and then make the right one. Sometimes the cheapest option really is the cheapest, and other times, well, it isn’t. I’ve seen this played out not only in the decision to go with a certain selection, but in the options or editions it comes in. You have to put all of the decision points in the analysis to come up with the right answer, and you have to be able to explain your logic to your team and your company. This is the way you become a data professional, not just a DBA. You can check out the post here – it deals with Azure, but the point is the process, not Azure itself: http://blogs.msdn.com/eugeniop/archive/2010/03/19/windows-azure-guidance-a-simplistic-economic-analysis-of-a-expense-migration.aspx

    Read the article

  • How to get a service to listen on port 80 on Windows Server 2003

    - by Miky D
    I've coded a custom Windows service that listens on TCP port 80, but when I try to install it on a Windows Server 2003 machine, it fails to start because some other service is already listening on that port. So far I've disabled the IIS Admin service and the HTTP SSL service, but no luck. When I run netstat -a -n -o | findstr 0.0:80 it gives me process ID 4 as the culprit, but when I look at the running processes, that process ID points to the "System" process. What can I do to get the System process to stop listening on port 80 and get my service to listen instead? P.S. I should point out that the service runs fine if I install it on my Windows XP or Windows 7 development boxes. Also, I should specify that this has nothing to do with it being a service. I've tried starting a regular application that attempts to bind to port 80 on the Windows Server 2003 machine with the same outcome - it fails because another application is already bound to that port.
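
    A hedged note on the mechanism: on Windows, a socket owned by PID 4 (the System process) is almost always held by the kernel-mode HTTP listener, http.sys, on behalf of some other component, rather than by a user-mode service you can simply stop in the Services console. A minimal diagnostic sketch using standard Windows tools (output will vary by machine):

        rem Confirm the owning PID for port 80 (PID 4 = System, i.e. a kernel component such as http.sys)
        netstat -a -n -o | findstr :80

        rem List the services that are registered as dependent on the kernel HTTP driver
        sc enumdepend http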

    Read the article

  • How to improve designer and developer workflow?

    - by mbdev
    I work in a small startup with two front end developers and one designer. Currently the process starts with the designer sending a PNG file with the whole page design and assets if needed. My task as front end developer is to convert it to an HTML/CSS page. My workflow currently looks like this: Lay out the distinct parts using HTML elements. Style each element very roughly (floats, minimal fonts and padding) so I can modify it using inspection. Using Chrome Developer Tools (inspect), add/change CSS attributes while updating the CSS file. Refresh the page after X changes. Use Pixel Perfect to refine the design more. Sit with the designer to make the last adjustments. Inferring the paddings, margins, and font sizes by trial and error takes a lot of time, and I feel the process could become more efficient, but I'm not sure how to improve it. Using PSD files is not an option since buying Photoshop for each developer is currently not considered. A design guide is also not available since the design is still evolving and new features are introduced. Ideas for improving the process above, and examples of what the process looks like in your company, would be great.

    Read the article

  • Remote Desktop Services Gateway Issue

    - by AVandelay05
    Alright fellow techies, here's the rundown. I have installed Server 2008 R2 Remote Desktop Services on a VM in my network. I installed the following RD role services: RD Session Host, Licensing, Connection Broker, Gateway, Web Access. When I set things up originally, the gateway server and RDWeb worked as they should locally. After getting things running locally (remoteserver.domainname.local) I wanted to test things externally. From the outside, I couldn't get things running (meaning I could connect to RD Web Access externally, but when I tried to run an app I would get the message "can't connect/find computer"). Here's my setup for external access: The VM has every RD Services role service installed on it, meaning it acts as gateway, RD web access, session host, licensing, the whole bit. I made a self-signed certificate on the gateway server (gateway.domainname.net is the cert name). Internally, I have a secondary forward-lookup zone called domainname.net with an A record gateway pointing to the local IP of the gateway server. On our public DNS (domainname.net) I have an A record gateway. This is to access the RDWeb externally. In IIS I have the following authentication settings: RDWeb: All disabled except for anonymous authentication; Rpc: All disabled except for basic and Windows; RpcWithCert: All disabled except for Windows authentication. I have the necessary web access config in our SonicWall TZ210 (https and rdp, external IP pointing to the local IP of the RDS server). RAP and CAP have the correct user and computer groups, authentication, and allowed devices. After all of this, here's what happens accessing externally. I can log in correctly to RDWeb Access (I've tried a bogus login, I can't log in with it, so that's working properly). I see the Apps for use. I click on an app, click connect, the credential window opens, I put in the correct user creds, it tries to connect to the gateway server, but then the cred window comes back into view. I tried to reach a limit of failed logins, but never reached one, haha. So from the same external client machine I try to connect to the gateway through a Remote Desktop connection. I put in the correct gateway settings in the RD window, try to connect and get the same results as I did in RDWeb access. I checked the event logs on the RD Services machine and saw the following event IDs around the time I tried to log in externally: ID 6037 with the message "The program svchost.exe, with the assigned process ID 2168, could not authenticate locally by using the target name host/gateway.domainname.net. The target name used is not valid. A target name should refer to one of the local computer names, for example, the DNS host name. Try a different target name." ID 10 RADWebAccess "RD Web Access was unable to access gateway.domainname.net, which is the server that is specified as running the RemoteApp and Desktop Connection Management service. Ensure that the computer account of the RD Web Access server is a member of the TS Web Access Computers security group on gateway.domainname.net" ID 4625 "An account failed to log on. Subject: Security ID: NULL SID Account Name: - Account Domain: - Logon ID: 0x0 Logon Type: 3 Account For Which Logon Failed: Security ID: NULL SID Account Name: Administrator Account Domain: gateway.domainname.net Failure Information: Failure Reason: Unknown user name or bad password.
Status: 0xc000006d Sub Status: 0xc000006a Process Information: Caller Process ID: 0x0 Caller Process Name: - Network Information: Workstation Name: USER-LAPTOP Source Network Address: External IP Source Port: 63125 Detailed Authentication Information: Logon Process: NtLmSsp Authentication Package: NTLM Transited Services: - Package Name (NTLM only): - Key Length: 0 This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. - Transited services indicate which intermediate services have participated in this logon request. - Package name indicates which sub-protocol was used among the NTLM protocols." I don't think the VM has a null SID. The SID of the VM and it's physical host have different SIDS. I can access the blank page for rpc externally using the external gateway name. It seems like authentication is a problem. Also, is it a problem that the external name of the gateway server doesn't match the local name? The external name (which the cert is based on) is gateway.domainname.net and the internal name is remoteserver.domainname.local. That's the only thing I can think of that would be the problem, but the external name has to be different from the local right? Internally, I ping gateway.domainname.net and it gives me the correct local IP of the server. Now, there isn't an actual computer name in AD, but I don't know how I would achieve that? I hope I've been clear....any help would be appreciated. I think I'm close to achieving this. :)

    Read the article

  • Cannot delete apt-fast for a clean install

    - by colby
    This is my problem:

        $ destroy apt-fast
        [sudo] password for colbyryptos:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package apt-fast is not installed, so not removed
        0 upgraded, 0 newly installed, 0 to remove and 14 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        Setting up man-db (2.6.1-2) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing man-db (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         man-db
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I have also tried sudo rm /var/lib/dpkg/lock, followed by sudo dpkg --configure -a. It then gives me this:

        $ sudo dpkg --configure -a
        [sudo] password for colbyryptos:
        Setting up man-db (2.6.1-2) ...
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing man-db (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         man-db
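
    The messages above point at /var/cache/debconf/config.dat being locked, not at apt-fast itself. A hedged diagnostic sketch, assuming a standard Ubuntu layout: identify whatever still holds the debconf lock, stop it, then let dpkg finish configuring.

        # Show which process currently holds the debconf database lock (lsof works here too)
        sudo fuser -v /var/cache/debconf/config.dat

        # After stopping the stale holder (often a hung apt, aptitude, or debconf frontend),
        # let dpkg finish whatever was interrupted
        sudo dpkg --configure -a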

    Read the article

  • Can I force the server to always use Turbo Boost?

    - by javapowered
    I'm using an HP DL360p Gen8 with 2 x Xeon E5-2640 CPUs. I do not load the CPU to 100%; I load it only ~10%, so I guess Turbo Boost is not activated. However, I'm using my server for trading, so I don't care about CPU load, but I always want to process my data ASAP. So I want the server to operate at its maximum 3 GHz. I.e., 90% of the time I don't have anything to process; 10% of the time I have data to process, but I need to process it ASAP, and every single microsecond counts. So I want the server to always operate in maximum "turboboosted" mode. Is that possible?
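
    Whether sustained Turbo Boost is possible is mostly decided by the BIOS power profile and the CPU's own thermal and power limits, but from the OS side the usual lever is the cpufreq governor. A hedged sketch for a Linux install (tool names vary: cpupower ships with newer kernels, older systems use cpufreq-set from cpufrequtils):

        # Current governor and clock for core 0
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
        grep "cpu MHz" /proc/cpuinfo

        # Request the "performance" governor so cores are held at the highest available P-state
        sudo cpupower frequency-set -g performance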

    Read the article

  • Clarification on signals (sighup), jobs, and the controlling terminal

    - by asolberg
    So I've read two different perspectives and I'm trying to figure out which one is right. 1) Some sources online say that signals sent from the controlling terminal are ONLY sent to the foreground process group. That means if you want a process to continue running in the background when you log out, it is sufficient to simply suspend the job (Ctrl-Z) and resume it in the background (bg). Then you can log out and it will continue to run because SIGHUP is only sent to the foreground job. See: http://blog.nelhage.com/2010/01/a-brief-introduction-to-termios-signaling-and-job-control/ ...In addition, if any signal-generating character is read by a terminal, it generates the appropriate signal to the foreground process group.... 2) Other sources claim you need to use the "nohup" command at the time the program is executed, or failing that, issue a "disown" command during execution to remove it from the jobs table that listens for SIGHUP. They say if you don't do this, when you log out your process will also exit, even if it's running in a background process group. For example: http://docstore.mik.ua/orelly/unix3/upt/ch23_11.htm ...If I log out anyway, the shell sends my background job a HUP signal... In my own experiments with Ubuntu Linux it seems like 1) is correct. I executed a command: "sleep 20 &", then logged out, logged back in, and did a "ps aux". Sure enough the sleep command was still running. So then why is it that so many people seem to believe number 2? And if all you have to do is place a job in the background to keep it running, why do so many people use "nohup" and "disown"?
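
    A hedged sketch of why both camps can look right: whether a background job survives logout depends on how the session ends and on the shell's huponexit option (bash only re-sends SIGHUP to its jobs on a normal exit of a login shell when huponexit is set, but it always forwards a SIGHUP it receives itself, e.g. when the terminal is closed). A small experiment to run in your own shell:

        # Is bash configured to HUP background jobs when the login shell exits?
        shopt huponexit

        # Start a job, then protect it explicitly...
        sleep 600 &
        disown -h %1    # keep it in the job table but stop SIGHUP from being forwarded to it

        # ...or start it immune from the beginning
        nohup sleep 600 > /dev/null 2>&1 &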

    Read the article

  • Can connect to Samba, but access denied to homes

    - by user893730
    I can connect to the samba server using both IP address and server name, and I can see the home folder name, but can't connect to it smb.cnf [global] workgroup = WORKGROUP server string = Venus wins support = no read only = no browsable = yes create mode = 0777 directory mode = 0777 case sensitive = no dns proxy = no interfaces = 127.0.0.1/8 eth0 bind interfaces only = yes log file = /var/log/samba/log.%m max log size = 1000 syslog = 0 security = user encrypt passwords = true passdb backend = smbpasswd obey pam restrictions = yes unix password sync = no passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = no [homes] comment = User Directories path = /data/localdevs/%u public = no browsable = yes writable = yes the /etc/samba folder has the following files in it lmhosts smb.conf smb.conf.orig smbusers The output of "sudo pdbedit -L" is user1:500: ls -abl /data/localdevs/ drwxr-xr-x. 4 user1 user1 4096 Jul 24 17:35 user1 These are what samba logs are showing when I get the access denied to user1's home directory [2012/07/24 20:27:08.599216, 3] smbd/process.c:1489(process_smb) Transaction 24 of length 90 (0 toread) [2012/07/24 20:27:08.599350, 3] smbd/process.c:1298(switch_message) switch message SMBntcreateX (pid 2440) conn 0x7f6758780c00 [2012/07/24 20:27:08.599373, 4] smbd/uid.c:257(change_to_user) change_to_user: Skipping user change - already user [2012/07/24 20:27:08.599412, 3] smbd/vfs.c:881(check_reduced_name) check_reduced_name [.] [/data/localdevs/user1] [2012/07/24 20:27:08.599485, 3] smbd/vfs.c:1038(check_reduced_name) check_reduced_name: . reduced to /data/localdevs/user1 [2012/07/24 20:27:08.599508, 3] smbd/vfs.c:881(check_reduced_name) check_reduced_name [.] [/data/localdevs/user1] [2012/07/24 20:27:08.599552, 3] smbd/vfs.c:1038(check_reduced_name) check_reduced_name: . reduced to /data/localdevs/user1 [2012/07/24 20:27:08.599581, 3] smbd/dosmode.c:166(unix_mode) unix_mode(.) returning 0766 [2012/07/24 20:27:08.599643, 3] smbd/vfs.c:881(check_reduced_name) check_reduced_name [.] [/data/localdevs/user1] [2012/07/24 20:27:08.599668, 3] smbd/vfs.c:1038(check_reduced_name) check_reduced_name: . reduced to /data/localdevs/user1 [2012/07/24 20:27:08.599707, 4] smbd/open.c:1990(open_file_ntcreate) calling open_file with flags=0x0 flags2=0x0 mode=0766, access_mask = 0x81, open_access_mask = 0x81 [2012/07/24 20:27:08.599806, 3] smbd/open.c:467(open_file) Error opening file . (NT_STATUS_ACCESS_DENIED) (local_flags=0) (flags=0) [2012/07/24 20:27:08.599838, 3] smbd/error.c:80(error_packet_set) error packet at smbd/error.c(160) cmd=162 (SMBntcreateX) NT_STATUS_ACCESS_DENIED [2012/07/24 20:27:08.604075, 3] smbd/process.c:1489(process_smb) Transaction 25 of length 90 (0 toread) [2012/07/24 20:27:08.604193, 3] smbd/process.c:1298(switch_message) switch message SMBntcreateX (pid 2440) conn 0x7f6758780c00 [2012/07/24 20:27:08.604216, 4] smbd/uid.c:257(change_to_user) change_to_user: Skipping user change - already user [2012/07/24 20:27:08.604268, 3] smbd/vfs.c:881(check_reduced_name) check_reduced_name [.] [/data/localdevs/user1] [2012/07/24 20:27:08.604336, 3] smbd/vfs.c:1038(check_reduced_name) check_reduced_name: . reduced to /data/localdevs/user1 [2012/07/24 20:27:08.604395, 3] smbd/vfs.c:881(check_reduced_name) check_reduced_name [.] [/data/localdevs/user1] [2012/07/24 20:27:08.604419, 3] smbd/vfs.c:1038(check_reduced_name) check_reduced_name: . 
reduced to /data/localdevs/user1 [2012/07/24 20:27:08.604442, 3] smbd/dosmode.c:166(unix_mode) unix_mode(.) returning 0766 [2012/07/24 20:27:08.604532, 3] smbd/vfs.c:881(check_reduced_name) check_reduced_name [.] [/data/localdevs/user1] [2012/07/24 20:27:08.604554, 3] smbd/vfs.c:1038(check_reduced_name) check_reduced_name: . reduced to /data/localdevs/user1 [2012/07/24 20:27:08.604583, 4] smbd/open.c:1990(open_file_ntcreate) calling open_file with flags=0x0 flags2=0x0 mode=0766, access_mask = 0x81, open_access_mask = 0x81 [2012/07/24 20:27:08.604679, 3] smbd/open.c:467(open_file) Error opening file . (NT_STATUS_ACCESS_DENIED) (local_flags=0) (flags=0) [2012/07/24 20:27:08.604705, 3] smbd/error.c:80(error_packet_set) error packet at smbd/error.c(160) cmd=162 (SMBntcreateX) NT_STATUS_ACCESS_DENIED [2012/07/24 20:27:08.606977, 3] smbd/process.c:1489(process_smb) Transaction 26 of length 80 (0 toread) [2012/07/24 20:27:08.607096, 3] smbd/process.c:1298(switch_message) switch message SMBtrans2 (pid 2440) conn 0x7f6758780c00 [2012/07/24 20:27:08.607119, 4] smbd/uid.c:257(change_to_user) change_to_user: Skipping user change - already user [2012/07/24 20:27:08.607139, 3] smbd/trans2.c:5100(call_trans2qfilepathinfo) call_trans2qfilepathinfo: TRANSACT2_QPATHINFO: level = 1004 [2012/07/24 20:27:08.607162, 3] smbd/vfs.c:881(check_reduced_name) check_reduced_name [.] [/data/localdevs/user1] [2012/07/24 20:27:08.607184, 3] smbd/vfs.c:1038(check_reduced_name) check_reduced_name: . reduced to /data/localdevs/user1 [2012/07/24 20:27:08.607208, 3] smbd/trans2.c:5226(call_trans2qfilepathinfo) call_trans2qfilepathinfo . (fnum = -1) level=1004 call=5 total_data=0 [2012/07/24 20:27:08.608306, 3] smbd/process.c:1489(process_smb) Transaction 27 of length 80 (0 toread) [2012/07/24 20:27:08.608362, 3] smbd/process.c:1298(switch_message) switch message SMBtrans2 (pid 2440) conn 0x7f6758780c00 [2012/07/24 20:27:08.608383, 4] smbd/uid.c:257(change_to_user) change_to_user: Skipping user change - already user [2012/07/24 20:27:08.608403, 3] smbd/trans2.c:5100(call_trans2qfilepathinfo) call_trans2qfilepathinfo: TRANSACT2_QPATHINFO: level = 1005 [2012/07/24 20:27:08.608439, 3] smbd/vfs.c:881(check_reduced_name) check_reduced_name [.] [/data/localdevs/user1] [2012/07/24 20:27:08.608461, 3] smbd/vfs.c:1038(check_reduced_name) check_reduced_name: . reduced to /data/localdevs/user1 [2012/07/24 20:27:08.608484, 3] smbd/trans2.c:5226(call_trans2qfilepathinfo) call_trans2qfilepathinfo . (fnum = -1) level=1005 call=5 total_data=0
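
    A hedged guess based on the trailing dot in the drwxr-xr-x. listing and the NT_STATUS_ACCESS_DENIED on a directory the user owns: the share path may carry an SELinux label that smbd is not allowed to serve. A sketch of checks and one common fix, assuming an SELinux-enabled distribution (paths taken from the question):

        # Is SELinux enforcing, and what context does the share path have?
        getenforce
        ls -Zd /data/localdevs/user1

        # One common remedy: label the tree for Samba, or enable the home-directories boolean
        sudo chcon -R -t samba_share_t /data/localdevs
        sudo setsebool -P samba_enable_home_dirs on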

    Read the article

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies start to incorporate a Service Oriented Architectural design approach into their existing enterprise systems, it creates the need for a standardized integration technology. One common technology used by companies is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location. With the emergence of the ESB, a new emphasis is now being placed on approaches that can be used to determine which Web services should be built, and in what order. In May 2011, SOA Magazine published an article that identified 10 common methods for identifying and defining services. SOA’s Ten Common Methods for Service Identification and Definition:

        1. Business Process Decomposition
        2. Business Functions
        3. Business Entity Objects
        4. Ownership and Responsibility
        5. Goal-Driven
        6. Component-Based
        7. Existing Supply (Bottom-Up)
        8. Front-Office Application Usage Analysis
        9. Infrastructure
        10. Non-Functional Requirements

    Each of these methods has its own pros and cons in regard to its use within the design process. I personally feel that during a design process, multiple methodologies should be used in order to accurately define a design for a system or enterprise system. Personally, I like to create a custom cocktail derived from combining these methodologies in order to ensure that my design fits the project’s and business’s needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, and Component-Based, and routinely use them in my designs.

    Works Cited:
    Hubbers, J.-W., Ligthart, A., & Terlouw, L. (2007, 12 10). Ten Ways to Identify Services. Retrieved from SOA Magazine: http://www.soamag.com/I13/1207-1.php
    Progress.com. (2011, 10 30). ESB Architecture and Lifecycle Definition. Retrieved from Progress.com: http://web.progress.com/en/esb-architecture-lifecycle-definition.html

    Read the article

  • Fixing dpkg after installation of broken packages? [closed]

    - by Amith KK
    Possible Duplicate: How to remove all associated files and configuration settings of an app installed through 'force architecture' command

    I installed a broken package, and now apt-get/aptitude is failing when trying to remove it. Each time I run an apt operation, this is the message I get:

        Removing crossplatformui ...
        ztemtvcdromd: no process found
        dpkg: error processing crossplatformui (--remove):
         subprocess installed post-removal script returned error exit status 1
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
         crossplatformui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Running sudo apt-get install -f gives me this:

        amith@amith-desktop:~$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          crossplatformui
        0 upgraded, 0 newly installed, 1 to remove and 1123 not upgraded.
        1 not fully installed or removed.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        (Reading database ... 340804 files and directories currently installed.)
        Removing crossplatformui ...
        ztemtvcdromd: no process found
        dpkg: error processing crossplatformui (--remove):
         subprocess installed post-removal script returned error exit status 1
        Errors were encountered while processing:
         crossplatformui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    How do I fix this?
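
    The failure comes from the package's own post-removal script (it tries to stop ztemtvcdromd, which is not running). A hedged sketch of the usual workaround, assuming a standard dpkg layout; editing maintainer scripts is a last resort, so keep a backup.

        # The failing maintainer script lives in dpkg's info directory
        ls /var/lib/dpkg/info/crossplatformui.*

        # Back it up, then replace it with a no-op so removal can complete
        sudo cp /var/lib/dpkg/info/crossplatformui.postrm /root/crossplatformui.postrm.bak
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/crossplatformui.postrm

        # Retry the removal and let apt repair its state
        sudo dpkg --remove crossplatformui
        sudo apt-get install -f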

    Read the article

  • Determine if the "yes" is necessary when doing an SCP

    - by glowcoder
    I'm writing a Groovy script to do an SCP. Note that I haven't run it yet, because the rest of it isn't finished. Now, if you're doing an scp to a host for the first time, you have to accept the host's fingerprint. Future times, you don't. My current solution, because I get 3 tries for the password and I really only need 1 (it's not like the script will mistype the password... if it's wrong, it's wrong!), is to pipe in "yes" as the first password attempt. This way, it will accept the fingerprint if necessary, and use the correct password as the first attempt. If it didn't need it, it puts yes as the first attempt and the correct one as the second. However, I feel this is not a very robust solution, and I know if I were a customer I would not like seeing "incorrect password" in my output. Especially if it fails for another reason, it would be an incredibly annoying misnomer. What follows is the appropriate section of the script in question. I am open to any tactics that involve using scp (or accomplishing the file transfer) in a different way. I just want to get the job done. I'm even open to shell scripting, although I'm not the best at it.

        def command = []
        command.add('scp')
        command.add(srcusername + '@' + srcrepo + ':' + srcpath)
        command.add(tarusername + '@' + tarrepo + ':' + tarpath)
        def process = command.execute()
        process.consumeOutput(out)
        process << "yes" << LS << tarpassword << LS
        process << "yes" << LS << srcpassword << LS
        process.waitfor()

    Thanks so much, glowcoder
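
    A hedged alternative that avoids piping "yes" at all: deal with the host key separately from the password, either by pre-seeding known_hosts or by telling scp how to treat unknown hosts. Host names and paths below are placeholders.

        # Pre-seed known_hosts so scp never asks about the fingerprint
        ssh-keyscan -H example-host >> ~/.ssh/known_hosts

        # Or accept unknown host keys non-interactively for this invocation only
        scp -o StrictHostKeyChecking=no user@example-host:/src/path /local/dst/path

    Key-based authentication (ssh-keygen plus an authorized_keys entry on each host) would remove the password handling entirely, which also gets rid of the "incorrect password" noise.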

    Read the article

  • SQL 2008 Memory Usage

    - by Danilo Brambilla
    I have SQL Server 2008 (ver 10.0.1600) running on a Windows Server 2008 R2 Enterprise server with 8 GB of physical RAM. If I open Task Manager I can see in the 'Physical Memory' section of the 'Performance' tab that only 340 MB are Available of 8191 Total, but I can't see any process using that amount of memory. Please note SQL Server is memory limited to 6 GB (Maximum Server Memory = 6000). If I open Sysinternals Process Explorer, I can see the sqlservr.exe process has: Private Bytes: 227.000 K, Working Set: 140.000 K, Virtual Size: 8.762.000 K. What does this mean? Is there any way to free up this memory for other processes? Why does Virtual Size show up as allocated memory? I thought that Virtual Size was 'reserved memory' only.

    Read the article

  • Data that has been deleted in P6, how is it updated in Analytics

    - by Jeffrey McDaniel
    In P6 Reporting Database 2.0 the ETL process looked to the refrdel table in the P6 PMDB to determine which projects were deleted. The refrdel table could not be cleared out between ETL runs or those deletes would be lost. After the ETL process is run, the refrdel table can be cleared out. It is important to keep any purging of the refrdel table on a consistent cycle so the ETL process can pick up these deletes and process them accordingly. In P6 Reporting Database 2.2 and higher the Extended Schema is used as the data source. In the Extended Schema, deleted data is filtered out by the views. The Extended Schema services handle any interaction with the refrdel table, so the concern about timing refrdel cleanup against ETL runs does not apply as of this release. In the Extended Schema tables (ex. TaskX) there can still be deleted data present. The Extended Schema views join on the primary PMDB tables (ex. Task) and filter out any deleted data. Any deleted data that remains in the Extended Schema tables can be cleaned out at a designated time by running the cleanup procedure documented in the P6 Extended Schema white paper. This can be run occasionally but does not need to be run often unless large amounts of data have been deleted.

    Read the article

  • How to disable System service from listening on port 80 in Windows Server 2003

    - by Miky D
    I'm trying to install a service on a Windows Server 2003 machine which is supposed to listen on port 80 but it fails to start because some other service is already listening on that port. So far I've disabled the IIS Admin service and the HTTP SSL service but no luck. When I run netstat -a -n -o | findstr 0.0:80 it gives me the process id 4 as the culprit, but when I look at the running processes that process id points to the "System" process. What can I do to get the System process to stop listening on port 80 and get my service to listen instead?
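
    A hedged sketch of one way to free the port, assuming the listener really is the kernel HTTP driver (http.sys): stopping or disabling the HTTP service releases port 80, but it also breaks anything that relies on http.sys, so treat this as a test step rather than a blanket recommendation.

        rem Stop the kernel HTTP listener for this session (it will prompt to stop dependent services)
        net stop http

        rem Optionally keep it from starting at boot (the space after "start=" is required)
        sc config http start= disabled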

    Read the article

  • Celery - minimize memory consumption

    - by Andrew
    We have ~300 celeryd processes running under Ubuntu 10.04 64-bit; when idle, every process takes ~19 MB RES and ~174 MB VIRT, so it's around 6 GB of RAM in idle for all processes. In the active state, a process takes up to 100 MB RES and ~300 MB VIRT. Every process uses minidom (XML files are < 500 KB, simple structure) and urllib. The question is: how can we decrease RAM consumption, at least for idle workers? Perhaps some Celery or Python options may help? And how do we determine which part takes most of the memory?
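
    A hedged sketch of how to measure and bound the per-worker footprint: the ps columns are standard, and recycling workers after a fixed number of tasks is a documented Celery option (verify the flag against your installed Celery version).

        # Resident and virtual size of every celeryd worker, largest first
        ps -C celeryd -o pid,rss,vsz,cmd --sort=-rss | head

        # Recycle each worker after 100 tasks so leaked or fragmented memory is returned to the OS
        celeryd --concurrency=8 --maxtasksperchild=100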

    Read the article

  • Bash child process PID - how do you get it?

    - by Jason Tan
    Can anyone tell me how to get the PID of a command executed in bash? E.g. I have a bash script that runs imapsync. When the script is killed, the imapsync process does not always get killed, so I'd like to be able to identify the PID of imapsync programmatically from my script, so that I can kill the imapsync process myself in a signal handler. So how do I programmatically get the PID of a child process from a parent bash script? Thanks, folks
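
    A hedged sketch of the standard bash idiom: $! holds the PID of the most recently started background job, which can be captured and killed from a trap. The imapsync arguments are placeholders.

        #!/bin/bash
        # Start imapsync in the background and remember its PID
        imapsync --host1 src.example.com --host2 dst.example.com &
        child_pid=$!

        # Forward termination to the child so it never outlives this script
        trap 'kill "$child_pid" 2>/dev/null' INT TERM EXIT

        # Wait for the child and pick up its exit status
        wait "$child_pid"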

    Read the article

  • remove disks or other media press any key to restart

    - by Sam I am
    I'm having an issue with a disk imaging process. I have a WinPE image that will re-partition the hard drive and put a bootable image on it. When I run the imaging process from an initial state, I get the following error: "remove disks or other media press any key to restart". If I run the process subsequent times, it works as desired, but I'm still interested in getting the process to work the first time. How do I figure out what's going on here? What should I be looking at? I'm a little bit out of my depth here; I don't know what to do next.

    Read the article

  • What's the name of this pattern?

    - by Wes
    I see this a lot in frameworks. You have a master class which other classes register with. The master class then decides which of the registered classes to delegate the request to. An example based on a passed-in class may be something like this:

        public interface Processor {
            public boolean canHandle(Object objectToHandle);
            public void handle(Object objectToHandle);
        }

        public class EvenNumberProcessor extends Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)){
                    return false
                }
                return isEven(objectToHandle);
            }
            public void handle(objectToHandle) {
                //Optionally call canHandleAgain to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        public class OddNumberProcessor extends Processor {
            public boolean canHandle(Object objectToHandle) {
                if (!isNumeric(objectToHandle)){
                    return false
                }
                return isOdd(objectToHandle);
            }
            public void handle(objectToHandle) {
                //Optionally call canHandleAgain to ensure the calling class is fulfilling its contract
                doSomething();
            }
        }

        //Can optionally implement processor interface
        public class processorDelegator {
            private List processors;
            public void addProcessor(Processor processor) {
                processors.add(processor);
            }
            public void process(Object objectToProcess) {
                //Lookup relevant processor either by keeping a list of what they can process
                //Or query each one to see if it can process the object.
                chosenProcessor=chooseProcessor(objectToProcess);
                chosenProcessor.handle(objectToProcess);
            }
        }

    Note there are a few variations I see on this. In one variation the subclasses provide a list of things they can process, which the ProcessorDelegator understands. The other variation, which is listed above in fake code, is where each is queried in turn. This is similar to chain of command, but I don't think it's the same, as chain of command means that the processor needs to pass to other processors. The other variation is where the ProcessorDelegator itself implements the interface, which means you can get trees of ProcessorDelegators which specialise further. In the above example you could have a numeric processor delegator which delegates to an even/odd processor, and a string ProcessorDelegator which delegates to different strings. My question is: does this pattern have a name?

    Read the article

  • How to keep background requests in sequence

    - by Jason Lewis
    I'm faced with implementing interfaces for some rather archaic systems, for handling online deposits to stored value accounts (think campus card accounts for students). Here's my dilemma: step one of the process involves passing the user off to a third-party site for the credit card transaction, like old-school PayPal. Step two involves using a proprietary protocol for communicating with a legacy system for conducting the actual deposit. Step two requires that each transaction have a unique sequence number, and that the requests' seqnums are in order. Since we're logging each transaction in Postgres, my first thought was to take a number from a sequence in the DB, guaranteeing uniqueness. But since we're dealing with web requests that might come in near-simultaneously, and since latency with the return from the off-site payment processor is beyond our control, there's always the chance for a race condition in the order of requests passed back to the proprietary system, and if the seqnums are out of order, the request fails silently (brilliant, right?). I thought about enqueuing the requests in Redis and using Resque workers to process them (single worker, single process, so they are processed in order), but we need to be able to give the user feedback as to whether the transaction was processed successfully, so this seems less feasible to me. I've tried to make this application handle concurrency well (as much as possible for a Ruby on Rails app), but now we're in a situation where we have to interact with a system that is designed to be single process, single threaded, and sequential. If it at least gave an "out of order" error, I could just increment (or take the next value off the sequence), but it's designed to fail silently in the event of ANY error. We are handling timeouts in a way that blocks on I/O, but since the application uses multiple workers (Unicorn), that's no guarantee. Any ideas/suggestions would be appreciated.

    Read the article

  • How would you measure the amount of atmospheric dust in a server room?

    - by Tom O'Connor
    We've been advised by our tape library vendor that one of the reasons we might be seeing lots of errors is if our server room is particularly dusty. It doesn't look dusty, but that's not to say it's not there. We've got an environment sensor cluster which measures Temperature, Airflow and Relative Humidity. I should probably point out that the low-hanging fruit solution I came up with is to use Sellotape (scotch tape) in a loop, one side stuck to the server cabinet, the other side free-hanging. I've also put a couple of other tape loops by the exit and intake fans of the hardware (not blocking airflow, naturally). How can we (electronically, ideally) measure dust levels?

    Read the article

  • Remote Program (via ssh) suspends when leaving client computer

    - by Philipp F
    I'm working with MATLAB on a remote computer, logging in via ssh -X remotepc and running MATLAB like matlab &. When I start a long-running process and leave the computer, it seems to suspend the process (after about 30 minutes of being away) such that there is nearly no progress overnight. As soon as I come back and wake up the client, the remote process continues with the calculation. I can see this from the load-average values (uptime). Why is that, and how can I change this behaviour?
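
    A hedged workaround sketch: run the job inside a terminal multiplexer, or detach it with nohup, so it no longer depends on the ssh session or on the laptop staying awake; a likely culprit is the forwarded X connection stalling while the client sleeps. The -nodisplay, -nosplash, and -r flags are standard MATLAB options; the script name is a placeholder.

        # On the remote machine, start a named screen session and launch MATLAB inside it
        screen -S matlab
        matlab -nodisplay -nosplash -r "run('myjob.m'); exit"
        # Detach with Ctrl-a d, log out, and later reattach with: screen -r matlab

        # Or, without screen, detach the job from the terminal entirely
        nohup matlab -nodisplay -nosplash -r "run('myjob.m'); exit" > matlab.log 2>&1 &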

    Read the article

  • Running out of LowMem with Ubuntu PAE Kernel and 32GB of RAM

    - by magneticMonster
    I'm running a Java data import process on a 32-bit Ubuntu 10 PAE kernel machine. After running the process for a while, the oom-killer zaps my Java process. After some Googling and digging through docs, it looks like the system is running out of LowMem. I started the process for the third time and am watching free -lm show me "Low: 464 386 77" with the free value (77 MB) slowly decreasing. Why am I running out of lowmem and how do I increase it? Some details:

        $ cat /proc/sys/vm/lowmem_reserve_ratio
        256 256 32

        $ free -lm
                     total       used       free     shared    buffers     cached
        Mem:         32086      24611       7475          0          0      24012
        Low:           464        407         57
        High:        31621      24204       7417
        -/+ buffers/cache:        598      31487
        Swap:         2047          0       2047
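
    A hedged note on the mechanism: on a 32-bit PAE kernel the kernel's own bookkeeping (including the page structures for all 32 GB) must fit in the small low-memory zone, which is why Low shows only ~464 MB here, and kernel allocations can exhaust it even while plenty of HighMem is free. A sketch of things to inspect; the sysctl values are examples only, and the durable fix is usually a 64-bit kernel.

        # Watch the low-memory zone directly
        grep -i '^low' /proc/meminfo

        # Were the recent OOM kills triggered by lowmem pressure?
        dmesg | grep -i -A 5 'oom'

        # Smaller ratios make the kernel reserve more low memory for its own allocations (example values)
        sudo sysctl -w vm.lowmem_reserve_ratio="128 128 16"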

    Read the article
