Search Results

Search found 39168 results on 1567 pages for 'running costs'.


  • Is there a way to undo deletion of registry keys while the machine is still running?

    - by Oliver Giesen
    [ also posted from a programmer's POV at http://stackoverflow.com/questions/3299230 ] I messed up big time and deleted a large portion of my registry during a programming experiment: as a result, most of the contents of HKEY_CURRENT_USER\Software\ are gone. I haven't logged off or shut down since this happened. The applications that were already running seem to be coping fine so far, but I suspect that after the next reboot there won't be much happiness left... Also, System Restore tells me there are no restore points, even though I'm pretty sure there should have been. Could this be another symptom of the purged registry? I wouldn't have expected this information to be stored under HKCU, though... Does anybody know of a technique or utility that can possibly restore some or all of the deleted entries? I'm on Windows 7 Enterprise 32-bit. I'm not really holding my breath, but you can always hope, can't you?

    Read the article

  • Two sites running same code and different config files?

    - by Gen
    I have a Windows 2008 R2 server with IIS, running one site (ASP.NET 4.5). All of the site's parameters are stored in its web.config file. I have to add a new site that will run on the same code (same root folder) as the first one, but will read parameters such as SQL connection strings from its own config file, not from the first site's web.config. How can I do that? Is it possible to run the second site in a different app pool? Both sites will run the same .NET version, of course. Thank you!
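
    A minimal sketch of the IIS side of this, using appcmd (the site, pool, hostname and path names below are made up for illustration). This only creates a second site and app pool over the same folder; note that a web.config sitting in that shared folder will still be read by both sites, so the per-site differences would have to come from settings overridden elsewhere:

        rem Hypothetical names and paths - adjust to the real setup
        %windir%\system32\inetsrv\appcmd add apppool /name:"SecondSitePool"
        %windir%\system32\inetsrv\appcmd add site /name:"SecondSite" /bindings:"http/*:80:second.example.com" /physicalPath:"C:\inetpub\SharedCode"
        %windir%\system32\inetsrv\appcmd set app "SecondSite/" /applicationPool:"SecondSitePool"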

    Read the article

  • Is there any way that I can set the 'real' memory usage value while running my Java code?

    - by vira
    I'm running code on a server to generate a 10,000x10,000 matrix and save each value into a table (MySQL). I was informed by the administrator that I can use up to 32 GB of the physical memory of our server, but I have no idea how to do it. I googled around and so far have only found information about setting the virtual memory using -Xmx. I tried it anyway, and using the top command I got this:

        PID   USER  PR  NI  VIRT   RES   SHR  S  %CPU  %MEM  TIME+     COMMAND
        3981  gv    35  15  32.4g  304m  10m  S  1     0.5   9:54.84   java

    So it shows that -Xmx set the VIRT value and not the RES value. Is there any way that I can set the RES value to 32 GB?
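
    For what it's worth, a sketch of one common approach (the jar name here is made up): -Xmx only caps the heap, which is why it shows up as reserved address space (VIRT); resident memory (RES) grows only as heap pages are actually touched. Setting -Xms equal to -Xmx and asking the JVM to pre-touch the heap makes it commit that memory up front:

        # Reserve and commit a 32 GB heap at startup (assumes a 64-bit HotSpot JVM, which supports -XX:+AlwaysPreTouch)
        java -Xms32g -Xmx32g -XX:+AlwaysPreTouch -jar matrix-generator.jar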

    Read the article

  • What must a sysadmin do to run an OS with a damaged /lib/libc.so file? / rsyslogd daemon log rotation / deny checking the list of running processes

    - by Virtual_Lotos
    What must a sysadmin do to run an OS with a damaged /lib/libc.so file? In other words, how should the command interpreter be configured so that the system can still run with a corrupted /lib/libc.so? Do I have to move it to the /var directory? Must the command interpreter be statically compiled, have the setuid attribute, be a symbolic link to /bin/sh, or be no larger than 2 MB? Also, how do I prevent a user from seeing the list of processes started by another user? And what do I have to keep in mind when I want to set up log rotation for the rsyslogd daemon?
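
    On the libc question, a quick way to see whether a given shell would even survive a broken shared libc is to check how it is linked; this is only a diagnostic sketch (the binary paths are examples):

        file /bin/sh        # reports "statically linked" or "dynamically linked"
        ldd /bin/sh         # lists required shared libraries; a static binary prints "not a dynamic executable"
        ldd /bin/busybox    # a statically built busybox is a common rescue shell for exactly this case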

    Read the article

  • Performance problems - Jira running on Ubuntu over VMware ESX 4.0 maxing out all 4 vCPUs.

    - by Jack T
    We are running Jira on a box under VMware ESX 4.0 and performance is variable, to say the least. The physical box has 12 GB of RAM and 4x Xeon 2.26 GHz CPUs. vCenter is telling us the physical CPUs are not maxed out at any time, and RAM is fine too. When we issue a request to the host it sometimes maxes out all 4 vCPUs. Sometimes it's quick, sometimes very, very slow. There doesn't seem to be a pattern. Any ideas?

    Read the article

  • Will the VMs on the ESXi cluster be running after disconnection from the network?

    - by John
    What will happen if I disconnect the ESXi cluster configured with HA from the switch (I need to change the power source on the main switch) and there is no management port redundancy? I'm going to disable host monitoring and VM monitoring within the HA settings, and reconnect to the switch after it boots up. Will the virtual machines keep running if I disconnect the hosts from the network so that they cannot see any other ESXi host? I hope everything will work fine and the hosts will rejoin the cluster after they connect to the network again, but I would like to be sure.

    Read the article

  • How do I know whether I'm running inside a Linux "screen" or not?

    - by Jun Chen
    The "screen" refers to a program mentioned in How to reconnect to a disconnected ssh session . That is a good facility. But there is a question I'd really like to know. How do I know whether I'm running inside a "screen"? The difference is: If yes, I know I can safely close current terminal window, e.g., close a PuTTY window, without losing my shell(Bash etc) session. If no, I know I have to take care of any pending works before I close the terminal window. Better, I'd like this status to be displayed in PS1 prompt so that I can see it any time automatically.

    Read the article

  • What steps are required to get DB2 working again after renaming the Windows XP system it was running on?

    - by Suppressingfire
    I think this is a fairly well known problem, but I haven't found a really solid solution to add to my toolbox. Here's the sequence of steps that leads to the problem:

        1. Install Windows (e.g., XP), naming the system XXX
        2. Install DB2 and create some databases
        3. Rename the system from XXX to YYY (via the System control panel's Computer Name tab)
        4. Reboot and find DB2 unable to start

    How can I get DB2 up and running again without having to reinstall it and without having to rename the system back to XXX? I did find a blog post that hints at some registry values to tweak, but I'm hoping the SF community can come up with a solution in which I can have more confidence.
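
    A rough sketch of the direction usually suggested for this, not a full procedure (the exact steps vary by DB2 version, so treat the commands below as an outline and check the rename documentation for your release): point DB2's registry at the new computer name, restart the instance, and check for node directory entries that still reference the old name:

        rem YYY stands for the new computer name from the question
        db2set -g DB2SYSTEM=YYY
        db2stop
        db2start
        db2 list node directory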

    Read the article

  • Why is my computer running slower after I just installed more RAM and a new HDD?

    - by hopla
    I just bought 4 GB of RAM (2x2GB) and a 1TB hard drive and installed them, upgrading from my original 1GB of RAM and 250GB HDD. I put the 2GB sticks in the 1st and 3rd slots and the 1GB stick in the 2nd. Now, with my new RAM and HDD, my computer is running MUCH slower and I don't know why. I've tried restarting just to see what happens, and I noticed that even the Windows XP startup music is lagging. If anyone could help, that would be fantastic. It's hard even to type this out.

    Read the article

  • Application running as a service is not able to create the same number of processes as when it runs from cmd

    - by Pini Reznik
    I have a Windows application which creates up to 35 processes, and it works OK when it's run from cmd. But when it is executed as a service on the same machine, it is able to create only 20 processes, and all the others are killed because of some kind of resource exhaustion problem. The problem is persistent on one Windows 2003 server but not reproducible on other servers. Could it be that the system has run out of desktop heap? http://support.microsoft.com/kb/184802 How can I check it?
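
    Along the lines of the KB article linked above, the desktop heap sizes live in the SharedSection part of the Windows value under the SubSystems registry key; the third number is the heap (in KB) for non-interactive desktops, which is what services get. A read-only way to look at the current setting (a sketch, not a recommendation to change it blindly):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows
        rem In the output, look for SharedSection=xxxx,yyyy,zzzz - the third value (zzzz)
        rem is the desktop heap available to non-interactive (service) desktops.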

    Read the article

  • How to serve PHP files on an Apache server (localhost) running ColdFusion/MySQL?

    - by frequent
    I'm still learning my way around my localhost server, which is running Apache 2.2, ColdFusion 8 and MySQL Server 5.5 (on Windows XP). I need to work on a site I inherited, which also ran some PHP scripts under the same setup. I have installed PHP 5 on my localhost, but when I open a dummy page containing: <?php phpinfo();?> I only get plain text returned, so I guess I haven't configured Apache correctly to also serve PHP (while defaulting to ColdFusion). Question: where do I need to get started if I want PHP to work on my current setup, too? Is there something I need to add to the httpd.conf file? If possible I don't want to uninstall/reinstall everything, because it took forever to get everything to work (excluding PHP). Thanks for any pointers!
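
    A minimal httpd.conf sketch for hooking PHP 5 into Apache 2.2 on Windows as a module, assuming PHP was installed to C:\php (the path and DLL name are assumptions based on a standard thread-safe PHP 5 install); the existing ColdFusion handler mappings for .cfm pages are left alone, only .php is added:

        # Added to httpd.conf, then restart Apache
        LoadModule php5_module "C:/php/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:/php"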

    Read the article

  • Best virtualization solution for running Linux under Mac OS X?

    - by grumbles
    I'd like to run a virtualized Ubuntu instance under Mac OS X (10.6). I've used VirtualBox in the past, but am looking for something that will be faster, and I don't mind paying for either Parallels Desktop or VMware Fusion. Does anyone have experience running Linux guests under either or both programs? I'm primarily interested in doing software development on the Linux guest installation, but I'm also very concerned with the performance and responsiveness of the guest OS. I have a mid-2010 15" MacBook Pro (2.66 GHz i5, 8 GB of RAM, NVIDIA GeForce GT 330M). Thanks!

    Read the article

  • We have a Solaris 9 server running Oracle 10G and have been getting memory consumption errors for a few weeks now

    - by another_netadmin
    We recently upgraded our enterprise application and everything worked OK until one weekend when we did a server reboot; ever since then we have run into memory errors. The server has 4 GB of physical memory installed and the kernel parameters are set to the following (/etc/system). I'm not an Oracle guy, so I'm not sure where to start looking, but any information is greatly appreciated. Thanks in advance. There are two databases running on this server: one is a production database and the other is a pre-production database.

        [root@bandb /]# cat /etc/system | grep seminfo
        set semsys:seminfo_semmni=100
        set semsys:seminfo_semmns=2048
        set semsys:seminfo_semmsl=400
        set semsys:seminfo_semopm=100
        set semsys:seminfo_semvmx=32767
        [root@bandb /]# cat /etc/system | grep shminfo
        set shmsys:shminfo_shmmax=4294967295
        set shmsys:shminfo_shmmin=1
        set shmsys:shminfo_shmmni=100
        set shmsys:shminfo_shmseg=10
        [root@bandb /]#
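
    A few read-only Solaris commands that are often used to see where the memory is actually going before touching /etc/system (purely a diagnostic sketch):

        swap -s          # summary of allocated, reserved and available swap/memory
        ipcs -a          # shared memory segments and semaphores, with owners and sizes
        prstat -s size   # processes sorted by memory size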

    Read the article

  • Performance impact of running Linux in a virtual machine in Windows?

    - by vovick
    Hello, I'd like to know what performance impact I could expect when running Linux in a virtual machine on Windows. The job I need Linux for is heavy, almost non-stop code compilation with GCC. Dual-boot doesn't look like a very attractive solution, so I'm counting on low VM overhead right now (10-20% would be fine for me, but 50% or more would be unacceptable). Has anyone tried to measure the performance difference? Are there any comparison tables? Which virtual machine with the lowest possible overhead would you suggest? My host OS is Win7 and I've got a modern Core i7 with VT-x present. Thanks!

    Read the article

  • Why Oracle Delivers More Value than IBM in Data Integration Solutions

    - by irem.radzik(at)oracle.com
    For data integration projects, IT organizations look for a robust but easy-to-use solution that simplifies enterprise data architecture while providing exceptional value -- not one that adds complexity and costs. This is a major challenge today for customers who are using IBM InfoSphere products like DataStage or Change Data Capture, whereas Oracle consistently delivers higher value with its data integration products, such as Oracle Data Integrator and Oracle GoldenGate. There are many differentiators for Oracle's data integration offering in comparison to IBM. Here are the top five:

        - Lower cost of ownership
        - Higher performance in both real-time and bulk data movement
        - Ease of use and flexibility
        - Reliability
        - Complete, Open, and Integrated Middleware Offering

    Architectural differences between the products contribute a great deal to these differentiators. First of all, Oracle's ETL architecture does not require a middle-tier transformation server, something IBM does require. Not only does an additional transformation server cost more to manage, including energy costs, but it adds a performance bottleneck as well. In addition, IBM's data integration products are complex and often require lengthy professional services engagements to integrate. This translates to higher costs and delayed time to market. Then there's the reliability factor. Our customers choose Oracle GoldenGate over IBM's InfoSphere Change Data Capture product because Oracle GoldenGate is designed for mission-critical systems that require guaranteed data delivery and automatic recovery in case of process interruptions. On Thursday we will discuss these key differentiators in detail and provide examples of customers that chose Oracle over IBM in data integration projects. Join us on Thursday, Feb 10th at 11am PT to learn how Oracle delivers more value than IBM in data integration solutions.

    Read the article

  • Should we choose Java over C# or we should consider using Mono?

    - by A. Karimi
    We are a small team of independent developers with an average of 7 years of experience on the C#/.NET platform. We mostly work on small to average-sized web application projects, which allows us to choose our favorite platform. I believe that our current platform (C#/.NET) allows us to be more productive than if we were working in Java, but what makes me think about choosing Java over C# is the cost and the open source community. Our projects even allow us to work with various frameworks as well as various platforms; for example, we can even use Nancy. So we are able to decrease costs by using Mono, which can be deployed on Linux servers. But I'm looking for a complete ecosystem (IDE/platform/production environment) that decreases our costs and makes us feel fully supported by the community. As an example of the issues I've experienced with MonoDevelop, I can point to its poor support for the Razor syntax. As another example, we are using "VS 2012 Express for Web" as our IDE to decrease costs, but as you know it doesn't support plugins, and I have serious problems with XML comments (I miss GhostDoc). We strongly believe in strongly-typed programming languages, so please don't suggest other languages and platforms such as Ruby, PHP, etc. Now I want to choose between:

        1. Keeping going with C#, buying some products, and being hopeful about the openness of the .NET ecosystem and its open source community
        2. Changing platforms and starting to use the Java open source ecosystem

    Read the article

  • What's the risk of running a Domain Controller so that it is accessible from the internet?

    - by Adrian Grigore
    I have three remote dedicated web servers at different web hosts. Adding them to a common domain would make a lot of administration tasks much easier. Since two of the servers are running Windows 2008 R2 Standard, I thought about promoting them to Domain Controllers in order to set up the Windows domain. There's another thread at Serverfault that recommends this. At the same time, I've read many times on different websites that this is not a good idea, because a domain controller should always be behind a firewall on a LAN. But I can't set up something like that because I don't have a LAN with a static IP accessible from the internet. In fact, I don't even have a Windows server in my LAN. What I have not found out is why exposing a DC to the Internet would be a bad idea. The only risk I can see is that if someone penetrates one of my web servers, it would be much easier to penetrate the others as well. But as far as I can see that's the worst-case scenario, since I am only going to join my web servers to that domain, not any computers from my local network. Is this the only downside, or does it also make it easier to penetrate one of my web servers in the first place? Thanks, Adrian

    Read the article

  • How to use connectify to share files between 2 laptops, one running Windows 7?

    - by p2pnode
    I have 2 laptops, one running XP and the other Windows 7. Both have a WiFi card installed. I want to be able to share files between these 2 machines and then, as a next step, also be able to share an internet connection. I have heard that Connectify on Windows 7 is a very handy and easy tool to set up a hotspot, and that other machines (even XP or Vista) can connect to this host hotspot and share files and the internet connection. I am able to find the network (set up on the host) from my client machine. But how do I share files? I don't see any such menu or anything. Also, after I installed Connectify on Windows 7, I am not able to connect to the internet using my data card. It throws the error "Error 31: A device attached to the system is not functioning". And on the client machine, if I connect to the data card as well as to the wireless network set up by the host machine, I am no longer able to access the internet, even though the data card is connected. The browser throws an error: Please help. Is there any other utility similar to Connectify?

    Read the article

  • Tomato OS: "memory exhausted" running vi .... how to solve?

    - by Sam Jones
    I have set up Tomato (Shibby) on an Asus RT-N66U router. It works great. I loaded up a few pieces, like Transmission and Optware. I can run vi, but when I run vi on a file it fails with a "memory exhausted" error, and the terminal session hangs. For reference: if I simply start vi it runs fine, but if I give vi a file to open I get the memory exhausted error, even if the file I am opening is just a couple of hundred bytes in size (like fstab). I discovered that my swap partition was not properly set up, so I fixed that. The swapon command now indicates I really do have swap:

        [root@MyRouter samba]$ swapon -s
        Filename    Type       Size      Used  Priority
        /dev/sda1   partition  32900860  0     1

    How can I get vi to work? Thanks!

    System setup reference information: Asus RT-N66U router, 2TB USB hard drive. Partitions on the hard drive:

        Disk /dev/sda: 2000.4 GB, 2000398839808 bytes
        255 heads, 63 sectors/track, 30400 cylinders
        Units = cylinders of 16065 * 4096 = 65802240 bytes
        Disk identifier: 0xfacbc8ab
           Device Boot  Start  End    Blocks      Id  System
        /dev/sda1       1      512    32900868    82  Linux swap / Solaris
        /dev/sda2       513    29000  1830638880  83  Linux

    The router is also running Samba. Memory:

        $ cat /proc/meminfo
        MemTotal:     255840 kB
        MemFree:      210980 kB
        Buffers:        5264 kB
        Cached:        22768 kB
        SwapCached:        0 kB
        Active:        20272 kB
        Inactive:      11448 kB
        HighTotal:    131072 kB
        HighFree:      99868 kB
        LowTotal:     124768 kB
        LowFree:      111112 kB
        SwapTotal:  32900860 kB
        SwapFree:   32900860 kB
        Dirty:             0 kB
        Writeback:         0 kB

    TIA!
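
    Purely as a diagnostic sketch (it may or may not be the cause here): on Tomato/Optware setups there are often two different vi binaries, BusyBox's built-in applet and a full vim from the Optware packages, and it can help to see which one the shell is actually launching before digging further:

        type vi                         # which vi the shell resolves to
        ls -l /opt/bin/vi* 2>/dev/null  # the Optware vim, if installed (path is an assumption)
        busybox vi /etc/fstab           # try BusyBox's built-in vi applet directly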

    Read the article

  • How do I get a Mac to request a new IP address from another DHCP server running in parallel while Netbooting?

    - by huyqt
    Hello, I have an interesting situation. I'm trying to use a Linux-based machine to allow Macs to Netboot (similar to PXE boot) by running a DHCP service in parallel with the "global" DHCP server. The local DHCP server hands out IPs in a private subnet, e.g., 10.168.0.10-10.168.254.254, while the "global" DHCP server hands out IPs from the range 10.0.0.1-10.0.1.254. The local DHCP range is only supposed to be used for Preboot Execution Environment and Netboot. The local DHCP server is something I have control over, but I do not have access to the global DHCP server. I have a filter to only allow members with the vendor strings "AAPLBSDPC/i386" and "PXEClient". PXE works fine, but Netboot has a quirk. The Apple systems that haven't been connected to the network yet can Netboot fine, but once a system grabs a "real" IP address from the global DHCP server, it will "save" it and request it the next time we want it to Netboot (which the local DHCP server won't give it). This is what I want:

        Mar 30 10:52:28 dev01 dhcpd: DHCPDISCOVER from 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:29 dev01 dhcpd: DHCPOFFER on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:31 dev01 dhcpd: DHCPREQUEST for 10.168.222.46 (10.168.0.1) from 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:31 dev01 dhcpd: DHCPACK on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
        Mar 30 10:52:32 dev01 in.tftpd[5890]: tftp: client does not accept options
        Mar 30 10:52:53 dev01 in.tftpd[5891]: tftp: client does not accept options
        Mar 30 10:52:53 dev01 in.tftpd[5893]: tftp: client does not accept options
        Mar 30 10:52:54 dev01 in.tftpd[5895]: tftp: client does not accept options

    This is what I get when a system already has a "stored" IP:

        Mar 30 10:51:29 dev01 dhcpd: DHCPDISCOVER from 00:25:xx:xx:xx:xx via eth1
        Mar 30 10:51:30 dev01 dhcpd: DHCPOFFER on 10.168.222.45 to 00:25:xx:xx:xx:xx via eth1
        Mar 30 10:51:31 dev01 dhcpd: DHCPREQUEST for 10.0.0.61 (10.0.0.1) from 00:25:xx:xx:xx:xx via eth1: ignored (not authoritative).

    Do you have any suggestions? It would be much appreciated.
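
    One direction that the "ignored (not authoritative)" line suggests, offered only as a sketch (the subnet, netmask and range below are guesses based on the addresses in the question, and an authoritative server on a shared segment can NAK leases belonging to the global server, so this needs care): declare the local dhcpd authoritative and restrict its pool to the NetBoot/PXE vendor classes, so that a REQUEST for a foreign lease gets a NAK and the Mac falls back to DISCOVER:

        # /etc/dhcp/dhcpd.conf (sketch)
        authoritative;

        class "netboot-clients" {
            match if substring (option vendor-class-identifier, 0, 9) = "AAPLBSDPC"
                  or substring (option vendor-class-identifier, 0, 9) = "PXEClient";
        }

        subnet 10.168.0.0 netmask 255.255.0.0 {
            pool {
                allow members of "netboot-clients";
                range 10.168.0.10 10.168.254.254;
            }
        }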

    Read the article

  • How to run VisualSvn Server on port 443 running IIS on same server?

    - by Metro Smurf
    Server 2008 R2 SP1, VisualSVN Server 2.1.6. The IIS server has about 10 sites. One of them uses https over port 443 with the following bindings:

        http   x.x.x.39:80   site.com
        http   x.x.x.39:80   www.site.com
        https  x.x.x.39:443

    VisualSVN Server properties:

        server name: svn.SomeSite.com
        server port: 443
        server binding: x.x.x.40

    No sites on IIS are listening on x.x.x.40. When starting up VisualSVN Server, the following errors are thrown:

        make_sock: could not bind to address x.x.x.40:443 (OS 10013)
        An attempt was made to access a socket in a way forbidden by its access permissions.
        no listening sockets available, shutting down

    When I stop site.com in IIS, VisualSVN Server starts up without a problem. When I bind VisualSVN Server to port 8443 and start site.com, VisualSVN Server starts without a problem. My goal is to be able to access the VisualSVN Server with a normal URL, i.e., one that doesn't use a port number in the address: https://svn.site.com vs https://svn.site.com:8443. What needs to be configured to allow VisualSVN Server to run on port 443 with IIS running on the same server?
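
    One commonly suggested angle for the OS 10013 bind error, sketched with the addresses from the question: even when IIS sites are bound to x.x.x.39, HTTP.SYS listens on 0.0.0.0:443 unless its IP listen list says otherwise, which blocks any other service from taking x.x.x.40:443. Restricting HTTP.SYS to the IIS address (and then restarting IIS and VisualSVN Server) is the usual first thing to try:

        netsh http show iplisten
        netsh http add iplisten ipaddress=x.x.x.39
        rem If 0.0.0.0 appears in the list, remove it so HTTP.SYS stops claiming every address:
        netsh http delete iplisten ipaddress=0.0.0.0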

    Read the article

  • How to check for an existing executable before running it in a post-build event in VS2008?

    - by wtaniguchi
    Hey all, I'm trying to use SubWCRev to get the current revision number of our SVN repository and put it in a file so I can show it in the UI. As I'm working with a web app, I use the following post-build command line:

        "SubWCRev.exe" "$(SolutionDir)." "$(ProjectDir)Content\js\revnumber.js.tpl" "$(ProjectDir)Content\js\revnumber.js"

    It works great, but now I want to make sure SubWCRev exists before running it, so I can skip this post-build step if a fellow developer is not running TortoiseSVN. I tried a few batch snippets here, but couldn't figure this out. Any ideas? Thanks!
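
    A sketch of one way to guard the post-build event, assuming SubWCRev.exe is on the PATH when TortoiseSVN is installed (where.exe ships with Server 2003/Vista and later; on older systems, testing an explicit install path with "if exist" works the same way). The macros are the ones from the command above:

        where SubWCRev >nul 2>&1
        if errorlevel 1 (
            echo SubWCRev not found - skipping revision stamp.
        ) else (
            "SubWCRev.exe" "$(SolutionDir)." "$(ProjectDir)Content\js\revnumber.js.tpl" "$(ProjectDir)Content\js\revnumber.js"
        )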

    Read the article

  • How to migrate a running KVM (with full disk copy) to another node?

    - by klipz
    I'm doing tests on KVM, and I'd like to see if I can do a hot migration, meaning the virtual machine won't stop running during the migration (but a few seconds of freeze is OK). I use a small cluster for my tests: kvm1, kvm2, and kvmnfs. kvm1 and kvm2 run the virtual machines; kvmnfs is an NFS server, and it's mounted on /KVM on both kvm1 and kvm2. To migrate a VM (only RAM, in fact) from kvm1 to kvm2, I run the same kvm command on kvm2 (with -incoming tcp:0:4444) as on kvm1, then I use "migrate -d tcp:kvm2:4444": it works great, since the VM file is common to both machines. Now I want to make a full migration (RAM + disk) of a local VM file (no more NFS) from kvm1 to kvm2. I tried to create an empty file with touch on kvm2 and use the same kvm command line (plus the "-incoming ..."). Then on kvm1 I used "migrate -d tcp:kvm2:4444": it copies everything, then... the VM fails (any disk I/O gives an I/O error)! And my VM file on kvm2, the one I created with touch, still has a size of 0 bytes. What am I doing wrong? What is the exact command to use on kvm2? And what is the command to launch, in the monitor, on kvm1?
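
    A sketch of the usual approach, with a made-up image name and size: the QEMU monitor's migrate command takes a -b flag for migration with full disk copy, and the destination disk image generally has to be pre-created with the same format and virtual size as the source (an empty file made with touch is not enough):

        # On kvm2: create a destination image matching the source's format and virtual size (values assumed)
        qemu-img create -f qcow2 /var/kvm/vm1.qcow2 20G
        # ...then start the same kvm command line as on kvm1, plus: -incoming tcp:0:4444

        # On kvm1, in the QEMU monitor: live-migrate the RAM and copy the disk blocks as well
        migrate -d -b tcp:kvm2:4444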

    Read the article

  • Unable to allocate new pages in table space "XXXX" ... but it's 250 megs and I'm only running DDL

    - by Sylvia
    Hello, I'm a DB2 newbie, so I'd appreciate even just some pointers on where to start looking. We have great DB2 admins, but they're swamped with other issues right now, so I'm trying to do some troubleshooting on a development database. My situation is that I have a tablespace that's giving me this error message: Unable to allocate new pages in table space "[MyTableSpace]". However, all I'm doing is running multiple (hundreds of) DDL statements, mainly creating tables but also indexes and PK scripts. So, considering that the tablespace has about 250 MB, I shouldn't be running out of space, right? Here's another thing: it appears that after I leave my script alone for a while, something "resets" and it works for a while, then I begin to have the tablespace issue again. Thanks, Sylvia
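
    A sketch of where this is often looked at first (the tablespace name below is a placeholder): a DMS tablespace with fixed-size containers stops at its allocated size no matter how much disk is free, so the common suggestions are to check the current usage from the CLP and then either turn on autoresize or extend the containers:

        db2 "LIST TABLESPACES SHOW DETAIL"
        # Either let the tablespace grow on its own (DMS / automatic storage only)...
        db2 "ALTER TABLESPACE MYTABLESPACE AUTORESIZE YES"
        # ...or extend the existing containers explicitly (the amount is in pages and is just an example)
        db2 "ALTER TABLESPACE MYTABLESPACE EXTEND (ALL 10000)"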

    Read the article

  • Would Apache running on port 8080 prevent dynamically loaded scripts in JavaScript?

    - by editor
    Had a nice PHP/HTML/JS prototype working on my personal Linode, then tried to throw it onto a work machine. The page adds a script tag dynamically with some JavaScript. It's a bunch of Google charts that update based on different timeslices. That code looks something like this:

        // jQuery $.post to send the beginning and end timestamps
        $.post("channel_functions.php", data_to_post, function(data){
            // the data that's returned is the javascript I want to load
            var head = document.getElementsByTagName('head')[0];
            var script = document.createElement('script');
            var text = document.createTextNode(data);
            script.type = 'text/javascript';
            script.id = 'chart_data';
            script.appendChild(text);
            // Adding script tag to page
            head.appendChild(script);
            // Call the functions I know were present in the script tag
            loadTheCharts();
        });

        function loadTheCharts() {
            // These are the functions that were loaded dynamically
            // By this point the script tag is supposed to be loaded, added and eval'd
            function1();
            function2();
        }

    function1() and function2() don't exist until they get added to the DOM, but I don't call loadTheCharts() until after the $.post has run, so this doesn't seem to be a problem. I'm one of those dirty PHP coders your mother warned you about, so I'm not well versed in JavaScript beyond what I've read in the typical go-to O'Reilly books. But this code worked fine on my personal dev server, so I'm wondering why it wouldn't work on this new machine. The only difference in setup, from what I can tell, is that the new machine is running on port 8080, so it's 192.168.blah.blah:8080/index.php instead of nicedomain.com/index.php. I see the code was indeed added to the DOM when I use Webmaster Tools to "view generated source", but in Firebug I get an error like "function2() is undefined", even though my understanding was that all script tags are eval'ed when added to the <head>. My question: given what I've laid out, and that the machine is running on :8080, is there a reason anyone can think of as to why a dynamically loaded function like function2() would be defined on the Linode and not on the machine running Apache on 8080?

    Read the article
