Search Results

Search found 17567 results on 703 pages for 'muon package manager'.


  • Setting up SQL Server 2005 to use all available memory in 32-bit Windows Server 2003 - and verifying

    - by Rizwan Kassim
    There are a number of questions along this line, but they either contradict each other or don't show how to properly verify that everything is actually working - hopefully this one can be comprehensive. I'm running SQL Server 2005 SP3 Standard on Windows Server 2003 R2 Standard. My server has 8 GB of memory installed and is used almost entirely as a database server - there are some services running on it, but the OS plus services can run within 1 GB of RAM.

    What I've done (please tell me if I'm doing something wrong):

    - /3GB in boot.ini (to increase the amount of user-space memory available - info).
    - /PAE in boot.ini (Windows claimed to be doing PAE even without this switch, somehow).
    - Enabled AWE in SQL Server.
    - Enabled the Lock Pages in Memory option for the SYSTEM and Local Service users (info). SQL Server Standard doesn't seem to use this until Cumulative Update 4, which isn't installed on my server (info).
    - Set min/max memory to 1024 MB / 5112 MB.

    After doing all the above, we definitely saw a level of improvement, but I'd now like to verify my settings and make sure I'm making full use of the available memory. (There appeared to be a slowdown when max = 7 GB, so I edged off from that value, but it might have been just perceptual.) To verify, I checked the following counters in PerfMon:

    - Process(sqlservr): Working Set: 76386304
    - SQL Server (Memory Manager): Total Server Memory: 3538944 (I saw a doc noting that this isn't the full memory used by SQL Server, so I'm not sure whether to trust it)

    So, my questions: Should my max be around 7 GB? If not, what should it be? Why is Total Server Memory at 3.5 GB when SQL Server has been allocated 5 GB? What is the proper metric for the amount of memory allocated to SQL Server? The Working Set seems a bit large. Am I possibly missing any steps in the setup? Any recommended resources on starting to tune the caching system now? Thanks
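
    A quick way to double-check the AWE and memory-cap settings from T-SQL itself (a verification sketch; the run_value figures to expect come from the settings described above):

        -- show the current AWE and max-memory configuration
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'awe enabled';            -- run_value should be 1
        EXEC sp_configure 'max server memory (MB)'; -- run_value should be 5112
        -- Total Server Memory as SQL Server itself reports it, in KB:
        SELECT cntr_value AS total_server_memory_kb
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Total Server Memory (KB)';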

    Read the article

  • How to permanently "renice" a process on Mac OS X (or iOS, etc)?

    - by mralexgray
    I use a nice (free) process manager called ATMonitor for Mac OS X that has a lot of cool hidden features, one of which is being able to click on a running process and set its "renice" value from +20 (lowest priority) to -20 (highest priority). The best part: it sticks between restarts. So if you want XYZ to get full attention all the time, you set it once and it's done. I want to do the same thing (renice a process) on an iPad running a particular daemon, and I don't know how to set a renice permanently. I can do it once and it works fine, but the setting is lost on a reboot. I read somewhere:

    "Now, as for permanently resetting the priority of a process, this can't be done directly. You can fake it, however, with a shell script that starts the app and then immediately renices it. Give that script a '.command' extension and it will be double-clickable in the GUI. Not very elegant, but it gets the job done."

    But as it says, that's not very elegant, and I don't think that's how ATMonitor does it. I found this thread, http://superuser.com, which gave a way to do it as a launch argument, but no apparent way to save it as a persistent value - for instance, if the program wasn't going to be started by launchd. How do I set a permanent renice level, per executable binary, independent of its PID and of when, how or why it was launched?
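
    For anything launchd does start, the launchd.plist(5) man page documents a Nice key that is applied on every launch, which covers the daemon case; a minimal sketch (the label, program path, and nice value here are hypothetical):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <!-- /Library/LaunchDaemons/com.example.mydaemon.plist (hypothetical) -->
            <key>Label</key>            <string>com.example.mydaemon</string>
            <key>ProgramArguments</key> <array><string>/usr/local/bin/mydaemon</string></array>
            <key>RunAtLoad</key>        <true/>
            <key>Nice</key>             <integer>-10</integer>
        </dict>
        </plist>

    It doesn't answer the per-binary case for GUI-launched apps, but for a daemon it survives reboots.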

    Read the article

  • Activation context generation failed for "C:\php\php-cgi.exe". Dependent Assembly

    - by Eyla
    Greetings, I have Windows Server 2008 Server Core, and I want to configure this server to host PHP websites using IIS 7. I installed and configured IIS 7 to run PHP using the steps on this website: http://blogs.msdn.com/b/philpenn/archive/2009/07/19/deploying-iis-7-5-fastcgi-php-on-server-core.aspx

    Now I'm facing a problem: when I request my PHP website I get this error:

    Server Error 500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed.

    I checked the event log and found these details:

    Activation context generation failed for "C:\php\php-cgi.exe". Dependent Assembly Microsoft.VC90.CRT,processorArchitecture="x86",publicKeyToken="1fc8b3b9a1e18e3b",type="win32",version="9.0.21022.8" could not be found. Please use sxstrace.exe for detailed diagnosis.

    I searched for this error and found a suggested solution, which is to install the Microsoft Visual C++ 2008 SP1 Redistributable Package (x86). I installed it, but I'm still getting the same error. Please help me solve this problem, and let me know if you need more info about my issue.
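
    Two things worth noting. Version 9.0.21022.8 is the RTM VC9 CRT, while the SP1 redistributable ships 9.0.30729, which may be why installing SP1 didn't clear the error (the original, non-SP1 2008 redistributable is worth a try). And, as the event suggests, sxstrace can show exactly which manifest asks for the missing assembly; a sketch of a trace session:

        sxstrace trace -logfile:C:\sxs.etl
        rem reproduce the 500 by requesting the PHP page, stop the trace,
        rem then turn the binary log into readable text:
        sxstrace parse -logfile:C:\sxs.etl -outfile:C:\sxs.txt
        notepad C:\sxs.txt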

    Read the article

  • Nginx dynamic upstream configuration / routing

    - by Dan Sosedoff
    I was experimenting with dynamic upstream configuration for nginx and can't find any good solution for driving the upstream configuration from a third-party source like Redis or MySQL. The idea behind it is to have a single configuration file on the primary server and proxy requests to various app servers based on environment conditions. Think of dynamic deployments where you have X servers running Y workers on different ports. For instance, I create a new app and deploy; the app manager selects a server, rolls out a worker (Ruby/PHP/Python), and then reports the ip:port to the central database with status "up". From that moment, when I go to the given URL, nginx should proxy all requests to the specified ip:port upstream. The whole thing is pretty similar to what Heroku does, except this proof of concept is not supposed to be production-ready - it's mostly for internal needs.

    The easiest solution I found was using a resolver with a Ruby-based DNS server. It works, and nginx gets the IP address correctly, but the one problem is that you can't define a port number for that IP. A second solution (which I haven't tried yet) is to roll something else as the proxy server, maybe written in Erlang, in which case we'd need something else to serve static content. Any ideas on how to implement this in a more flexible and stable way?

    P.S. Some research options: http://openresty.org/#DynamicRoutingBasedOnRedis https://github.com/nodejitsu/node-http-proxy
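
    For reference, the resolver approach sketched above looks roughly like this in nginx config (the DNS naming scheme and the fixed port are assumptions; it also shows exactly the limitation described - the resolver supplies only an IP, so the port stays hard-coded):

        server {
            listen 80;
            resolver 127.0.0.1 valid=5s;            # the Ruby-based DNS server
            location / {
                set $backend "$host.apps.internal"; # hypothetical per-app DNS name
                proxy_pass http://$backend:9000;    # port cannot come from DNS
            }
        }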

    Read the article

  • How do I provide dpkg configuration parameters to aptitude or apt-get?

    - by troutwine
    When installing gitolite I find that:

    # aptitude install gitolite
    The following NEW packages will be installed: gitolite
    0 packages upgraded, 1 newly installed, 0 to remove and 29 not upgraded.
    Need to get 114 kB of archives. After unpacking 348 kB will be used.
    Get:1 http://security.debian.org/ squeeze/updates/main gitolite all 1.5.4-2+squeeze1 [114 kB]
    Fetched 114 kB in 0s (202 kB/s)
    Preconfiguring packages ...
    Selecting previously deselected package gitolite.
    (Reading database ... 30593 files and directories currently installed.)
    Unpacking gitolite (from .../gitolite_1.5.4-2+squeeze1_all.deb) ...
    Setting up gitolite (1.5.4-2+squeeze1) ...
    No adminkey given - not initializing gitolite in /var/lib/gitolite.

    The last line is of interest to me. If I run dpkg-reconfigure -plow gitolite, I am presented with a dialog and can modify the system user name for gitolite, the location of the gitolite repositories, and the admin pubkey. I'd prefer to use the git system user and provide the admin pubkey at installation time, say something of the sort:

    # aptitude install gitolite --user git --admin-pubkey 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDAc7kCAi2WkvqpAL1fK1sIw6xjpatJ+Ms2nrwLJPhdovEY3MPZF7mtH+rv1CHFDn66fLGiWevOFp...'

    That, of course, doesn't work. Can something similar be done? How do I determine the configuration parameters ahead of time? This would be remarkably useful, for instance, when installing gitolite automatically via puppet or chef.
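
    The usual mechanism for this is debconf preseeding: store the answers with debconf-set-selections before installing, and dpkg's preconfiguration phase picks them up. A sketch, with the caveat that gitolite's exact template names are assumptions here - list the real ones first with debconf-get-selections (from the debconf-utils package):

        # on a machine where gitolite was once installed interactively:
        debconf-get-selections | grep ^gitolite
        # preseed the answers on the target machine (template names assumed):
        echo "gitolite gitolite/gituser string git" | debconf-set-selections
        echo "gitolite gitolite/adminkey string ssh-rsa AAAA..." | debconf-set-selections
        aptitude install gitolite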

    Read the article

  • Clone virtual machine with Server 2008 R2 and Hyper-V?

    - by bwerks
    Hi all, I've recently started working with Hyper-V, and so far it's quite nice. However, I've been running into problems with what seems like it should be the most basic of workflows. I've set up a baseline Server 2008 R2 configuration and exported it, with the intention of using the export for cloning; I entered "C:\Exports\" as the export folder. However, I run into problems when I try to import the image. From Hyper-V Manager I select "Import Virtual Machine", and in the resulting window I entered "C:\Exports\BuildServer\" as the folder, set the radio button to "Copy the virtual machine (create a new unique ID)", and checked the checkbox for "Duplicate all files so the same virtual machine can be imported again". Doing so results in the following error:

    "Import failed. Import task failed to copy file from 'H:\Exports\BuildServer\Virtual Hard Disks\BuildServer.vhd' to 'C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd': The file exists. (0x80070050)"

    Have I somehow messed something up in configuration, or is this a known issue? I've read that it should be possible to clone VMs by copying them in the filesystem, but I'd prefer to keep things in the management UI if possible.
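
    Error 0x80070050 is literal: the destination VHD already exists, typically left over from an earlier import attempt. A sketch of the usual workaround - move the stale copy aside (paths taken from the error message) and re-run the import:

        dir "C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd"
        move "C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd" "C:\Hyper-V\Virtual Hard Disks\BuildServer.vhd.bak"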

    Read the article

  • SAN with iSCSI-Target Performance Horrendous

    - by Justin
    We have a poor man's SAN set up in a 1U Ubuntu server running iSCSI-Target, with two 300 GB drives in RAID-0, and we are using it for block-level storage for virtual machines. The hypervisor is connected to the SAN via gigabit on a dedicated VLAN and interfaces. We have only a single virtual machine set up and are doing some benchmarks. If we run hdparm -t /dev/sda1 from the virtual machine, we get 'ok' performance of 75 MB/s from the virtual machine to the SAN. Then we basically compile a package with ./configure and make. Things start OK, but then all of a sudden the load average on the SAN grows to 7+ and things slow down to a crawl. When we SSH into the SAN and run top, the load is indeed 7+, but CPU usage is basically nothing, and the server has 1.5 GB of memory available. When we kill the compile on the virtual machine, the load on the SAN slowly drops back to sub-1 figures.

    What in the world is causing this? How can we diagnose it further? Here are two screenshots from the SAN during high load:

    1. Output of iotop on the SAN
    2. Output of top on the SAN
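
    A high load average with idle CPU on Linux usually means processes stuck in uninterruptible I/O wait - the load average counts them, but top's CPU figures don't. A few quick checks on the SAN to confirm that (iostat comes from the sysstat package):

        iostat -x 2                       # watch %util and await on the RAID-0 device
        vmstat 2                          # the 'b' column counts processes blocked on I/O
        ps -eo state,pid,cmd | grep '^D'  # processes in uninterruptible (D) sleep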

    Read the article

  • solr php extension fails to run on newest Debian Wheezy

    - by hijarian
    I'm trying to use the Solr PHP extension on a recently upgraded Debian Wheezy. It installs flawlessly, both from PECL and from source, but instead of the expected functionality it gives me this on every PHP run:

    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/solr.so' - /usr/lib/php5/20100525/solr.so: undefined symbol: curl_easy_getinfo in Unknown on line 0

    Scripts that use the extension also throw an error:

    PHP Error[2]: include(SolrClient.php): failed to open stream: No such file or directory in file <...path to my autoloader...>

    My main point is that this was set up before and worked like a charm. In the upgrade, among the relevant packages, only the versions of PHP and libcurl changed; the Solr instance itself was left as-is. I have all possible libcurl libraries:

    $ locate libcurl
    ...
    /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.3
    /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4
    /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.2.0
    /usr/lib/x86_64-linux-gnu/libcurl.a
    /usr/lib/x86_64-linux-gnu/libcurl.la
    /usr/lib/x86_64-linux-gnu/libcurl.so
    /usr/lib/x86_64-linux-gnu/libcurl.so.3
    /usr/lib/x86_64-linux-gnu/libcurl.so.4
    /usr/lib/x86_64-linux-gnu/libcurl.so.4.2.0
    ...
    /usr/lib32/libcurl.so.3
    /usr/lib32/libcurl.so.4
    /usr/lib32/libcurl.so.4.2.0
    ...

    I have installed the php5-curl package, version 5.4.4-2, with aptitude. I installed the Solr extension both with sudo pecl install solr (with various combinations of the -f and -n flags, and I tried solr-beta too) and with wget ... cd ... phpize ./configure make make install. I'm installing version 1.0.2 of the extension because it worked before the upgrade from Squeeze to Wheezy. As I said earlier, the extension installs without any errors, and I have already added the extension=solr.so incantation to /etc/php5/mods-available/solr.ini.

    What magic should I do to make the solr extension work? Is it true that my only solution is to downgrade libcurl to the version from before the upgrade?
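
    An undefined curl_easy_getinfo at load time suggests solr.so is not (or no longer correctly) linked against libcurl. A sketch of how to check, and of a rebuild that points configure at curl explicitly (whether this extension's configure accepts --with-curl is an assumption - confirm with ./configure --help):

        ldd /usr/lib/php5/20100525/solr.so | grep curl   # should list libcurl.so.4
        cd solr-1.0.2 && phpize
        ./configure --with-curl=/usr    # assumed flag; see ./configure --help
        make && sudo make install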

    Read the article

  • SCVMM 2008 R2 problems migrating VM from VS2005 to Hyper-V host

    - by Scott Ivey
    I have System Center Virtual Machine Manager 2008 R2 installed, with one Hyper-V R2 host and one Virtual Server 2005 host. I'm trying to migrate my machines from the VS2005 host to the Hyper-V host and keep getting the following error:

    VMM is unable to complete the requested file transfer. The connection to the HTTP server myserver.mydomain.local could not be established. (Unknown error (0x80072efd))

    Recommended Action: Ensure that the HTTP service and/or the agent on the machine myserver.mydomain.local are installed and running and that a firewall is not blocking HTTPS traffic.

    (Note: migrations between Hyper-V hosts managed by the VMM server work fine - my problem is only going from VS2005 to Hyper-V hosts.)

    I have no firewalls turned on on either of the servers, and no firewalls in the middle. I've looked all over for answers to this problem and am getting nowhere. All the articles I find when searching talk about either V2V or P2V, and I'm just trying to do a straight VM migration. I've tried rebooting the boxes, changing the BITS SSL port number, restarting services, triple-checking firewalls, etc. Does anyone have any good suggestions as to how I can resolve this problem?
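
    0x80072efd is the WinINet "cannot connect" code, so before digging deeper it's worth proving basic reachability of the agent's HTTPS/BITS listener from the VMM server (443 is assumed here as the transfer port - adjust to whatever the BITS SSL port was changed to; the service-name filter is also an assumption):

        telnet myserver.mydomain.local 443
        rem on the VS2005 host itself: is anything listening, and is the agent running?
        netstat -ano | findstr :443
        sc query state= all | findstr /i "vmm"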

    Read the article

  • Ubuntu NBR karmic boot freezes at fsck from util-linux-ng 2.16

    - by BlueBill
    Hi all, I have a netbook (eMachines e250, equivalent to an Acer Aspire One) with Ubuntu NBR 9.10 installed on it. Every other cold boot freezes at the following error message:

    "fsck from util-linux-ng 2.16"

    There is no disk activity, no activity whatsoever. I have left the machine sitting for over an hour and nothing. It takes a couple of hard resets to be able to boot properly. Once it boots, everything works great (wireless, suspend/resume, etc.)! I have spent the last couple of weeks researching the problem, and the only thing that seems to work is setting nolapic in the boot string in GRUB - with that, it boots every time. Unfortunately, nolapic disables the second core and causes problems with suspend/resume.

    At first I thought it was an fsck problem with the first partition on the hard disk, as it is a hidden NTFS partition containing the Windows XP recovery information, so in /etc/fstab I set that partition to be ignored by fsck. This didn't seem to do anything. I have these partitions:

    /dev/sda1 - an NTFS recovery partition
    /dev/sda2 - /boot
    /dev/sda3 - swap
    /dev/sda5 - /
    /dev/sda6 - /home

    I am running kernel version 2.6.31-19-generic and have all the patches (as indicated by Update Manager). I also have no splash screen, so I can see the boot progress. I have only been using NBR since January; I have been using Ubuntu on my desktop since last June (2009-06). What logs should I be looking at? Is there a log for failed boots? Thanks, Troy
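
    A sketch of the places worth reading after the next successful boot (stock Karmic paths; a hard-frozen boot may log nothing, in which case diffing dmesg between a nolapic boot and a normal one is the next-best evidence):

        less /var/log/fsck/checkroot /var/log/fsck/checkfs  # boot-time fsck output
        less /var/log/boot.log                              # present if bootlogd is enabled
        less /var/log/kern.log                              # kernel messages, incl. earlier boots
        dmesg | grep -i -e lapic -e io-apic                 # interrupt setup, with vs. without nolapic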

    Read the article

  • Hyper-V Server 2012 Remote Management Using Workgroup

    - by Chris Kolenko
    I'm trying to remotely manage Hyper-V Server 2012 from a Windows 8 PC; both client and server are in a workgroup. I've spent about 3-4 hours trying to get this working, with no luck so far, trying the following:

    - Creating a new administrator on the server with the same details as the client, i.e. username/password.
    - Adding an entry to my hosts file to point to the remote IP by server name.
    - Trying HVRemote.
    - Disabling both firewalls.

    The error that I'm getting is "RPC Service Unavailable". How can I accomplish what I'm trying to do?

    Update: Some of the operations in Hyper-V Manager work - e.g. Virtual Switch works, and I can open the New VM wizard. I run into an error when creating a new virtual hard disk, though. I've tried creating a VM without a hard disk, which works; using the New Hard Disk wizard does not work either. I still cannot see any virtual machines: "RPC server unavailable. Unable to establish communication between 'ServerName' and 'ClientName'".
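
    For comparison, a commonly cited client-side checklist for workgroup Hyper-V management (run on the Windows 8 machine; 'ServerName' stands in for the real host name). The DCOM step is usually the one that clears "RPC server unavailable", since Hyper-V Manager talks WMI/DCOM, not just WinRM:

        rem store explicit credentials for the server
        cmdkey /add:ServerName /user:ServerName\Administrator /pass
        rem trust the server for WinRM from this client
        powershell -Command "Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'ServerName' -Force"
        rem then in dcomcnfg: COM Security > Access Permissions > Edit Limits >
        rem grant ANONYMOUS LOGON "Remote Access"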

    Read the article

  • Windows Server 2008R2 - can't change or remove the default gateway

    - by disserman
    We've installed VMware Server 2.0 on Windows 2008 R2. After some time playing with it (actually only removing the host-only and NAT networks, and binding adapters to the specified vmnets) we've noticed a strange problem: if you change or remove the default gateway on the network card, the server completely loses its network connection - you can't ping it from the subnet, and it can't connect to anyone. When the gateway is removed and the server tries to connect to other machines, I can see some incoming packets using a sniffer, but I believe they are damaged in some way (I'm not a mega-guru in TCP/IP and can't find a mistake in a binary translation of the packet), because the other side doesn't respond.

    What we tried:

    - Removed VMware Server using Add/Remove Programs.
    - Deleted everything related to VMware Server and all installed network adapters in the Windows registry.
    - Double-checked for the VMware bridged-protocol driver file; it's physically absent, with no links to it in the registry.
    - Performed a TCP/IP reset with netsh, and disabled/enabled all network adapters in Device Manager to recreate their registry keys.
    - Tried another network adapter.

    And the situation is the same: as soon as you remove or change the default gateway, Windows stops working. The total absurdity of the situation is that the default gateway points to a non-existent IP - yet when it's set, you can ping the server from the subnet, and when you remove it, you can't. Any help? I'm starting to think the new build of VMware Server is some kind of malware... :)
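
    For reference, a sketch of the reset-and-inspect sequence (the netsh resets are the standard ones; the service name in the last check is an assumption about VMware's bridge component - confirm against whatever sc actually lists):

        netsh int ip reset c:\ipreset.log
        netsh winsock reset
        shutdown /r /t 0
        rem after reboot, with the gateway removed, compare what the stack believes:
        ipconfig /all
        route print
        arp -a
        sc query vmnetbridge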

    Read the article

  • Open Office crashes, recovers, crashes again

    - by Daniel R Hicks
    After completely reinstalling my laptop due to apparent registry corruption, I've encountered a problem with OpenOffice: I open a simple Calc spreadsheet, it comes up normally, but then after anywhere from 5 seconds to several minutes (without my even touching the Calc window) OO crashes, then comes up through recovery. If I let it "recover", it will do so and bring the spreadsheet up again, only to repeat the crash scenario. If I kept clicking "OK" it would apparently do this all day.

    I reinstalled OO once and the problem went away for a while, but it came back. I then attempted to "reset" my profile (i.e., rename the OO user directory in AppData), but OO crashed during the first startup after that, then resumed the original behavior. If I open the same file using Excel, it complains of errors in the file and "recovers" them, but the "error report" it generates contains no details. If I save the "recovered" file, then OO Calc will open it, but the problem returns after saving again. Any ideas? (The system is Vista SP2, running OO 3.4.1.)

    How to reproduce:

    1. Start OpenOffice Calc.
    2. Save the workspace as "CrashTest.ods".
    3. From Task Manager, kill OpenOffice (soffice.exe/bin - one of each).
    4. Double-click the saved "CrashTest.ods" in Explorer.
    5. OO puts up a message that recovery will occur - allow it.
    6. When the Calc window comes up, don't touch it - just wait about 10 seconds.
    7. The Calc window closes, OO puts up a message that recovery will occur, and from then on the sequence repeats.

    I suspect this behavior is limited to a few (recent) versions of OO, and very possibly only Calc. Reported as OpenOffice Bug 1211094. Sigh! As much as it irritates me, I'm having to switch over to Excel for several things I used to do with Calc. Excel has a miserable UI, but at least it stays up for longer than 10 seconds.
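
    For completeness, the kill-and-reset cycle scripted from the command line (the profile path for OO 3.4 under Vista is an assumption; renaming rather than deleting keeps the old profile restorable):

        taskkill /f /im soffice.exe
        taskkill /f /im soffice.bin
        ren "%APPDATA%\OpenOffice.org" OpenOffice.org.bak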

    Read the article

  • Windows Authentication behaves oddly when VPN'd

    - by Dan F
    Hi all. We've got a few apps that rely on Windows authentication - a couple of web apps with AD auth turned on - and we usually connect to our SQL servers with Windows auth. This normally runs without a hitch. It doesn't work so well when we're VPN'd to a client site, though.

    SSMS: Opening SSMS normally from the Start menu, then picking a server that normally accepts Windows auth, results in a message saying: "Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (.Net SqlClient Data Provider)". If I drop to a command prompt and use runas /user:domain\user to launch SSMS, I can successfully use Windows auth against our SQL Server instances with that SSMS process. If I look in Task Manager, both copies of ssms.exe (Start menu vs. runas) have the same user, and I can see no discernible differences between the processes in procexp.

    AD-auth websites: If I open IE and browse to any of our websites that require an authenticated Windows user, I get the "who are you" prompt, and that dialog thinks I'm whoever the VPN user is. I can click "Use another account" and authenticate that way, though.

    Outlook: Even Outlook prompts for a username when we are VPN'd!

    This affects our Win7 and Vista machines. It's been a while since we had an XP box, but I don't recall having this issue on XP, for what it's worth. The VPN connections just use the built-in Windows VPN client; they're not fancy Cisco VPNs or anything of that nature. Does anyone know how to tell Windows that I'd like to be my normal old primary-domain user, rather than the VPN user, when authenticating to resources in our domain? Heck, I'd be happy with a solution that prompted me with the "who are you" dialog whenever I try to access Windows-auth resources over the client's VPN. Thanks!

    Apologies if this is more a Super User question; I wasn't sure which site it best suited. It's about networking and infrastructure and plagues all of our developers here, so I hope it's a Server Fault Q.
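
    A workaround sketch in the spirit of the runas trick that already works, made more convenient: pin explicit domain credentials to the affected hosts so SSPI stops offering the VPN identity (runas /netonly keeps the local token but presents the given credentials for network access; the host and account names below are placeholders):

        cmdkey /add:sqlserver.mydomain.local /user:MYDOMAIN\myuser /pass
        runas /netonly /user:MYDOMAIN\myuser ssms.exe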

    Read the article

  • ASP.NET Website Administration Tool: Unable to connect to SQL Server database

    - by MedicineMan
    I am trying to get authentication and authorization working with my ASP.NET MVC project. I've run the aspnet_regsql.exe tool without any problem and can see the aspnetdb database on my server (using Management Studio). The connection string in my web.config is:

    <connectionStrings>
      <add name="ApplicationServices"
           connectionString="data source=MYSERVERNAME;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true"
           providerName="System.Data.SqlClient" />

    The error I get is:

    "There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store. The following message may help in diagnosing the problem: Unable to connect to SQL Server database."

    In the past, I have had trouble connecting to my database because I've needed to add users. Do I have to do something similar here?
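
    One thing that stands out: the string mixes a real server name with AttachDBFilename/User Instance, which tries to attach a local .mdf rather than use the aspnetdb that aspnet_regsql.exe created on the server. A sketch of the server-hosted form (assuming the default database name aspnetdb):

        <connectionStrings>
          <add name="ApplicationServices"
               connectionString="Data Source=MYSERVERNAME;Initial Catalog=aspnetdb;Integrated Security=SSPI"
               providerName="System.Data.SqlClient" />
        </connectionStrings>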

    Read the article

  • Internal disk not correctly recognised by Windows 7

    - by david
    I'm having problems configuring a disk in a brand-new, clean Windows 7 install. Here are some system specifics:

    - Disk: Western Digital VelociRaptor WD6000HLHX
    - Mobo: Gigabyte Z77X-UD3H
    - BIOS SATA mode set to AHCI (not RAID), with the disk connected to SATA0 (6 Gb/s hi-speed SATA)
    - Windows 7 Enterprise SP1 x64

    The disk is recognized by the BIOS and correctly identified (name and size OK). The disk is also recognized by Windows on a hardware level, but it won't show up in Explorer. Windows reports that the device is working correctly. Windows Disk Management shows the drive, but says it's uninitialized and has no partitions (which is incorrect). If I try to initialize the drive, Windows throws an error saying that it "cannot find the file specified" (which file???).

    Before connecting the drive to the new machine, I partitioned and formatted the disk under Windows XP SP2, giving it two partitions (MBR, not GPT) and copying over a boatload of data. Obviously none of this appears under Windows 7. Removing the disk from the new machine and placing it back in the Windows XP machine shows the disk and all its data intact and functional.

    I'd like to have Windows 7 recognize the disk without losing the data and starting over. Is this possible? If so, how would I do that? I checked this post, but even though the problem seems identical, the information didn't help. Any help appreciated. Thanks!
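
    A sketch of non-destructive inspection from an elevated prompt, to see what Windows 7's storage stack really thinks of the disk (the disk number is an assumption - take the real one from list disk; avoid diskpart's clean command here, as it erases the partition table):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> detail disk
        DISKPART> list partition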

    Read the article

  • HP ProCurve & Cisco switches interoperability

    - by Kamil Z
    I have a couple of questions regarding Cisco and HP ProCurve interoperability. Here's a link to a PDF with my network topology. Can someone help me with basic VLAN configuration in such a topology? Below are some details of my configuration:

    # m_management_2
    interface FastEthernet0/43
     switchport access vlan 250
     switchport mode access
     spanning-tree port-priority 32
     spanning-tree cost 100

    # MTA2-swmgmt1
    vlan 1
     name "DEFAULT_VLAN"
     untagged 1-48
     ip address 10.10.249.190 255.255.255.128
     exit

    # MTA2-swtr1
    vlan 1
     name "DEFAULT_VLAN"
     untagged 1-14,16-48
     no ip address
     no untagged 15
     exit
    vlan 100
     name "MTA Mgmt"
     untagged 15
     ip address 10.10.249.188 255.255.255.128
     exit

    # MTA2-swtr2
    vlan 1
     name "DEFAULT_VLAN"
     untagged 1-14,16-48
     no ip address
     no untagged 15
     exit
    vlan 100
     name "MTA Mgmt"
     untagged 15
     ip address 10.10.249.189 255.255.255.128
     exit

    I don't post the MTA2-bcsw[12] configuration, because I haven't been successful with that one yet. Every time I configure VLANs on the MTA2-bcsw[12] Fa0/24 interface, the Fa0/24 interface on m_management_2 goes down because of receiving tagged BPDUs on an access port. (There are no VLANs configured on MTA2-swmgmt1, given that only VLAN 250 is allowed on this switch - is that correct?) Can someone provide me a basic configuration for this topology?

    The second thing I want to ask about is the concept of the connection from MTA2-swmgmt1 to the MTA2-swtr[12] HP switches for the sake of management. How do I configure such ports on the HP switches (managed switch and manager switch)? Is my current configuration correct?
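
    A sketch of the usual Cisco-to-ProCurve pattern for links that must carry multiple VLANs (the interface and port numbers are assumptions for this topology). On IOS, tagged VLANs ride a dot1q trunk; the ProCurve equivalent is marking the uplink port "tagged" in each VLAN. Tagged BPDUs arriving on a Cisco access port are the classic symptom of a trunk facing an access port on exactly such a link:

        ! Cisco side (assumed uplink toward the ProCurve)
        interface FastEthernet0/24
         switchport trunk encapsulation dot1q
         switchport mode trunk
         switchport trunk allowed vlan 100,250

        ; ProCurve side: the same VLANs tagged on its uplink (port 24 assumed)
        vlan 100
         tagged 24
        vlan 250
         tagged 24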

    Read the article

  • Tomcat 7 on Ubuntu 12.04 startup issues

    - by Nico Huysamen
    I am having trouble getting Tomcat 7 to start up on my new VPS. I'm really scratching my head, since I have done this often, so I'm thinking it might be the VPS. I just got a new VPS from CINFU. After a clean install of Ubuntu 12.04 32-bit, I installed openjdk-6-jdk and set JAVA_HOME to point to /usr/lib/jvm/java-1.6.0-openjdk-i386 and JRE_HOME to /usr/lib/jvm/java-1.6.0-openjdk-i386/jre. But when I try to run ./catalina.sh run, it simply outputs:

    Using CATALINA_BASE: /opt/tomcat/apache-tomcat-7.0.29
    Using CATALINA_HOME: /opt/tomcat/apache-tomcat-7.0.29
    Using CATALINA_TMPDIR: /opt/tomcat/apache-tomcat-7.0.29/temp
    Using JRE_HOME: /usr/lib/jvm/java-1.6.0-openjdk-i386
    Using CLASSPATH: /opt/tomcat/apache-tomcat-7.0.29/bin/bootstrap.jar:/opt/tomcat/apache-tomcat-7.0.29/bin/tomcat-juli.jar

    and stops. It just hangs there doing nothing. If I run ./startup.sh && tail -f ../logs/catalina.out, it gets to:

    Aug 24, 2012 8:38:36 PM org.apache.coyote.AbstractProtocol init
    INFO: Initializing ProtocolHandler ["http-bio-8080"]
    Aug 24, 2012 8:38:36 PM org.apache.coyote.AbstractProtocol init
    INFO: Initializing ProtocolHandler ["ajp-bio-8009"]
    Aug 24, 2012 8:38:36 PM org.apache.catalina.startup.Catalina load
    INFO: Initialization processed in 495 ms
    Aug 24, 2012 8:38:36 PM org.apache.catalina.core.StandardService startInternal
    INFO: Starting service Catalina
    Aug 24, 2012 8:38:36 PM org.apache.catalina.core.StandardEngine startInternal
    INFO: Starting Servlet Engine: Apache Tomcat/7.0.29

    but I am unable to access anything; requests just hang. I have also tried a few other things, like explicitly exporting the paths in catalina.sh, and running ./startup.sh rather than catalina.sh, but the furthest I have gotten is that it finishes deploying all the WARs (the default ones that come with Tomcat, like the host-manager etc.), but then it hangs:

    Aug 24, 2012 8:47:30 PM org.apache.coyote.AbstractProtocol init
    INFO: Initializing ProtocolHandler ["http-bio-8080"]

    and does nothing. Does anyone have any pointers that might help? As I said, I must really be missing something stupid, since this has worked on all the other VPSs we have.

    UPDATE: I figured out that the problem is actually that they use OpenVZ virtualization, and that there are known compatibility problems with Java.
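
    One frequently reported cause of exactly this hang on container-style VPSs is the JVM's SecureRandom blocking on /dev/random while the connectors initialize. A workaround sketch (the JVM flag is standard; whether it is what bites on this particular OpenVZ host is an assumption to test):

        export CATALINA_OPTS="-Djava.security.egd=file:/dev/./urandom"
        ./catalina.sh run
        # to make it permanent, put the export in $CATALINA_HOME/bin/setenv.sh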

    Read the article

  • SCCM 2012 - some remote clients unable to download some applications, 401.2 error

    - by growse
    I've got a small SCCM 2012 deployment with about 35 clients attached. Most of these clients are on the same network as the single SCCM host, but three are about 1000 miles away. Oddly, these three clients have stopped being able to download some application packages over BITS. Publishing a new package works for all the other clients, but for these three it never seems to download; in the Software Center it just hangs at "0% downloaded".

    On the client, DataTransfer.log says (repeatedly):

    CDTSJob::HandleErrors: DTS Job '{2DCBBB4C-6D84-479A-9218-885B72C834B9}' BITS Job '{E78147DD-4A26-4942-B4FD-6EC3EB77EECD}' under user 'S-1-5-18' OldErrorCount 442 NewErrorCount 443 ErrorCode 0x80072EE2 DataTransferService 30/07/2012 09:27:41 2964 (0x0B94)
    CDTSJob::HandleErrors: DTS Job ID='{2DCBBB4C-6D84-479A-9218-885B72C834B9}' URL='http://sccm-host:80/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1' ProtType=1 DataTransferService 30/07/2012 09:27:41 2964 (0x0B94)

    Cas.log says (repeatedly):

    Location update from CTM for content Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1 and request {AD041FCB-03D2-4FE6-A6FA-38A6B80FB2A1} ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)
    Download location found 0 - http://lonsbrndsccm02.mcs.int.thomsonreuters.com/SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1 ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)
    Download request only, ignoring location update ContentAccess 30/07/2012 08:33:39 5048 (0x13B8)

    On the server, I've enabled failed-request log tracing. The raw IIS log says the following:

    2012-07-30 08:28:42 10.13.111.35 GET /SMS_DP_SMSPKG$/Content_3e7f6982-6346-4f27-ae00-ad5dcb391455.1/sccm /NSCP-0.4.0.172-x64.msi 80 - 10.2.27.19 Microsoft+BITS/7.5 401 2 5 293

    which is a 401.2 error, meaning access denied. The failed-request log is large, but the punchline is that it chucks out an "Unauthorized: Access is denied due to invalid credentials" message.

    All clients are members of the same domain and appear to be (otherwise) working great. I've reinstalled the SCCM client, and deleted and re-added the computer to SCCM. Some other packages work fine: the daily anti-malware delta gets downloaded and applied without issue. Why are these packages failing?
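
    Since BITS pulls from the DP's SMS_DP_SMSPKG$ virtual directory, a 401.2 there usually means its IIS authentication config has drifted. A quick way to inspect it on the server (appcmd syntax is standard IIS 7+; "Default Web Site" is an assumption about where the DP role is installed):

        %windir%\system32\inetsrv\appcmd list config "Default Web Site/SMS_DP_SMSPKG$" /section:system.webServer/security/authentication/anonymousAuthentication
        %windir%\system32\inetsrv\appcmd list config "Default Web Site/SMS_DP_SMSPKG$" /section:system.webServer/security/authentication/windowsAuthentication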

    Read the article

  • CD and DVD burning very slow with LG DVD Burner

    - by Om Nom Nom
    I have an HP m7560in desktop which came with an LG DVD burner. As far back as I can remember, it has had problems burning discs: it used to take about an hour and a half to burn 4 GB of data onto a blank DVD-R. I am using Nero and ImgBurn, and I usually select 8x speed for burning. I ran XP earlier and now run Windows 7, and I face the problem on both.

    I have already checked that DMA mode is selected in the IDE controllers' properties. On XP, uninstalling the IDE controllers and the DVD drive from Device Manager, rebooting, and letting it reinstall the drivers fixed the problem for a while, but after a couple of reboots the problem would come back. On Windows 7, even doing that does not fix the problem. In fact, on Windows 7 the burning process doesn't even begin: ImgBurn gets stuck on the first stage (writing the Lead-In) and never progresses until I turn off or reset the computer. I checked the disc info after this happened, and the disc still shows as empty and ready to be burned, so it obviously didn't even touch the disc.

    I have already had the drive replaced once by HP, but it didn't fix the problem. Do you guys have any suggestions? (Note that Windows Update doesn't offer any newer driver for the controllers, or for any other components for that matter.)
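
    The slow-then-temporarily-fixed-by-reinstalling pattern matches Windows silently downgrading the IDE channel from DMA to PIO after accumulated transfer errors; reinstalling the channel resets the error counter. A sketch of where to confirm this (the class GUID is the standard IDE/ATAPI controller class; the 0001 channel instance varies per machine):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E96A-E325-11CE-BFC1-08002BE10318}\0001"
        rem look at the *DeviceTimingMode values; a PIO-level mode here, despite DMA
        rem being selected in Device Manager, confirms the silent fallback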

    Read the article

  • Create SAMBA node trust relationship to Windows 2003 PDC server

    - by Rod Regier
    I am having problems creating a trust relationship between an OpenVMS/IA64 node running VMS/IA64 8.3-1H1, TCPIP 5.6 ECO 5, CIFS 1.1 ECO1 PS11 (Samba 3.0.28a) and a Windows 2003 server running as a PDC. I do have two other OpenVMS/Alpha nodes, running VMS/Alpha 8.3, TCPIP 5.6 ECO 4, CIFS 1.1 ECO1 PS10 (Samba 3.0.28a), with working trust relationships to the same Windows 2003 server. I'm looking for assistance in resolving the trust "handshake".

    Details from the failing node; unless otherwise noted, the corresponding files on the working nodes are similar or identical.

    SMB.CONF extract:

    [global]
    server string = Samba %v running on %h (OpenVMS)
    workgroup = WILMA
    netbios name = %h
    security = DOMAIN
    encrypt passwords = Yes
    name resolve order = lmhosts host wins bcast
    Password server = *
    log file = /samba$log/log.%m
    printcap name = /sys$manager/ucx$printcap.dat
    guest account = DYMAX
    print command = print %f/queue=%p/delete/passall/name="""""%s"""""
    lprm command = delete/entry=%j
    map archive = No
    printing = OpenVMS

    net rpc testjoin:

    [2010/08/13 16:09:28, 0] SAMBA$SRC:[SOURCE.RPC_CLIENT]CLI_PIPE.C;1:(2443) get_schannel_session_key: could not fetch trust account password for domain 'WILMA'
    [2010/08/13 16:09:28, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC_JOIN.C;1:(72) net_rpc_join_ok: failed to get schannel session key from server W2K3AD2 for domain WILMA. Error was NT_STATUS_CANT_ACCESS_DOMAIN_INFO
    Join to domain 'WILMA' is not valid

    net rpc join "-Uaccount%password":

    tdb_open_isam: error verifying status of file SAMBA$ROOT:[PRIVATE]secrets.tdb
    tdb_open_isam: errno value = 1
    [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.PASSDB]SECRETS.C;1:(72) Failed to open /SAMBA$ROOT/PRIVATE/secrets.tdb
    [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC.C;1:(322) error storing domain sid for WILMA
    tdb_open_isam: error verifying status of file SAMBA$ROOT:[PRIVATE]secrets.tdb
    tdb_open_isam: errno value = 1
    [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.PASSDB]SECRETS.C;1:(72) Failed to open /SAMBA$ROOT/PRIVATE/secrets.tdb
    [2010/08/13 16:21:13, 0] SAMBA$SRC:[SOURCE.UTILS]NET_RPC_JOIN.C;1:(409) error storing domain sid for WILMA
    Unable to join domain WILMA.

    Example from one of the working nodes:

    net rpc testjoin
    Join to 'WILMA' is OK
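
    The tdb_open_isam failure with errno 1 (EPERM) suggests the join dies locally: Samba cannot open SAMBA$ROOT:[PRIVATE]secrets.tdb to store the trust password, which would then explain the schannel/testjoin failures. A sketch of what to verify in DCL on the failing node (compare the file protections against a working Alpha node):

        $ DIRECTORY/SECURITY SAMBA$ROOT:[PRIVATE]
        $ DIRECTORY/SECURITY SAMBA$ROOT:[PRIVATE]SECRETS.TDB
        $ ! the account running SMBD needs read+write access here; adjust with
        $ ! SET SECURITY if not, then re-run:  net rpc join -Uaccount%password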

    Read the article

  • Is it possible to extract contents of a ThinApp container?

    - by rsk82
    Is it possible to extract packages made by somebody else? And if there is no general way of extracting such archives - which can be one huge exe file, or a tiny starter exe with a huge packed *.bin file containing the main files of the app that is to be run in a portable way - is there an option to set in the compilation *.ini file, or some other way, to make such a package extractable?

    I remember reading somewhere that somebody created a tiny program to sit alongside the main portable application (it was mentioned on the VMware forums, as far as I can remember; it was a crude thing coded for private use, and I never managed to download it). If the main application has an open/save file dialog, it is possible to navigate from within it to that program, which virtually sits in Program Files alongside the main app; the program would then scan all the files it can see, somehow distinguish real files from virtual ones, and save all the virtual files in a structure similar to the initial compilation folder from which the portable app was created.

    I know that is a very roundabout way of doing things, but maybe it's the only one feasible. Nevertheless, any news on this front? Can antiviruses somehow unpack these things? Maybe they have to buy a code or license for it from VMware?

    Edit: I found http://communities.vmware.com/thread/257433?tstart=600 and am still trying to make sense of it. Wrong - that was about moving an old version of ThinApp to Win7.

    Read the article

  • RDS, RDWeb, and RemoteApp: How to use public certificate for launching apps on session host?

    - by Bret Fisher
    Question: How do I tell RDWeb to launch apps from remote.domain.com rather than host.internaldomain.local?

    Environment: an existing org with an AD forest, and a new single Server 2012 machine running all Remote Desktop Services roles for the session host. I used the new 2012 wizard to set up "QuickSessionCollection" with the roles:

    - RD Session Host
    - RD Connection Broker
    - RD Gateway
    - RD Web Access
    - RD Licensing

    Everything works with the self-signed cert, but we want to prevent the warnings those cause. The users are potentially on non-domain machines, so pushing a private root cert onto their machines isn't an option; every part of the solution needs to use the public cert. I added the public remote.domain.com cert to all roles using the Server Manager GUI:

    - RD Connection Broker - Enable Single Sign On
    - RD Connection Broker - Publishing
    - RD Web Access
    - RD Gateway

    So now everything works beautifully except the last step:

    1. The user logs into https://remote.domain.com.
    2. The user clicks an app icon, which in the background downloads a .rdp file signed by remote.domain.com.
    3. The .rdp file is set to use the RD Gateway, which is remote.domain.com.
    4. The .rdp file says the app is hosted on the internal host.internaldomain.local, which doesn't match the RDP-Tcp TLS cert of remote.domain.com, and pops a warning.

    It's this last step that I'd like to fix. Is there a config option in PowerShell, WMI, or .config to tell RDWeb/RemoteApp to use remote.domain.com for all published apps, so the TLS cert for RDP matches what the session host is using?

    NOTE: This question talks about this issue, and this answer mentions how you might fix it in 2008, but that GUI doesn't exist in 2012 for RemoteApp, and I can't find a PowerShell setting for it.

    NOTE: Here's a screenshot of the setting in 2008 R2 that I need to change. It tells RemoteApp what to use for the session host server name. How can I set that in 2012?
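
    One approach worth testing: Server 2012's RemoteDesktop PowerShell module can stamp custom RDP properties onto everything a collection publishes, and the two properties below redirect the address written into the generated .rdp files (whether this fully satisfies the TLS/signing match in a given deployment is something to verify, not a given):

        Import-Module RemoteDesktop
        Set-RDSessionCollectionConfiguration -CollectionName "QuickSessionCollection" `
            -CustomRdpProperty "use redirection server name:i:1`nalternate full address:s:remote.domain.com"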

    Read the article

  • Virtual Win XP Mode stopped HP LJ Pro M1212nf MFP printing in Win 7 Pro

    - by Dee
    I am running Windows 7 Pro with Virtual Windows XP Mode. My printer is an HP LaserJet Pro M1212nf MFP attached directly to a USB port of the computer. The printer was working fine in Windows 7 until I tried to attach it to Virtual Windows XP Mode in order to load the printer driver there. At that point, the printer disappeared from the list of USB devices on the toolbar at the top of the Virtual Windows XP Mode window. After I installed the printer driver in Virtual Windows XP Mode, the printer did not work in that mode, and it also no longer worked in Windows 7.

    In both Windows 7 and Virtual Windows XP Mode, print files are sent to the print queue but are never printed. In Windows 7, the print queue states that the printer is offline. In Virtual Windows XP Mode, the printer can be toggled from "Print Offline" to "Print Online", but no print files are ever printed from the queue. The printer acts as though it is no longer connected to the computer, even though it is still physically connected to the USB port.

    How can I get the printer to work again in Windows 7? (At this point, I am no longer interested in using Virtual Windows XP Mode.) I have tried a large number of things to find and fix the problem, but have had no success. Device Manager cannot see the printer even though it is physically connected via USB (I have tried different USB ports). Restoring Windows 7 and Virtual Windows XP Mode to points before the problem does not fix it. How can I get the computer to see the printer, so that I can print again in Windows 7?
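
    Since XP Mode's USB attach re-binds the device to the virtual machine, a leftover hidden device entry can keep Windows 7 from re-enumerating it. A sketch of clearing that from an elevated command prompt (the environment variable is Device Manager's documented switch for showing non-present devices):

        set devmgr_show_nonpresent_devices=1
        start devmgmt.msc
        rem View > Show hidden devices, uninstall any greyed-out entries for the
        rem M1212nf under Printers and USB controllers, then unplug, reboot, replug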

    Read the article

  • Force dual-mono audio (L+L or R+R) in Youtube video playback for one-channel audio movies

    - by jakub.g
    Occasionally I find YouTube videos that have only one audio channel (only left or only right); for an example, see this video (left channel only). This is quite annoying, especially with headphones on, as I hear sound in one ear and nothing in the other. So I want to be able to easily force dual mono (left+left or right+right) when I find that kind of video, and switch back to normal stereo after I finish watching it. Constraints and things I've tried:

    - My headphones are plugged in properly, and I don't create audio/video content - I want this for real-time playback only.
    - In the Windows audio config, setting the balance 100% to the left or right doesn't help (moved fully left I still hear only the left ear, and moved fully right I get no sound at all).
    - I've checked all the configurations in Control Panel > Sounds and Audio Devices > Audio > Sound Playback > Advanced, as suggested in this post, in conjunction with moving the balance left/right, and none of it has any impact on the actual sound I hear in the headphones.
    - No need to mix L with R; I just want L+L or R+R.
    - I prefer software solutions to buying a stereo-to-mono adapter.
    - Free solutions please - no paid ones, trials, etc.
    - In Control Panel > Realtek HD Sound Effect Manager I can turn on various mumbo-jumbo environment effects (Concert Hall / Hangar / Bathroom / whatever), and in fact that makes the sound appear in both ears, but it's ridiculous to do this, and there is no Dual Mono option.
    - Finally, I know I can force L+L or R+R in VLC, which supports YouTube playback (with a little hack, because YouTube's internals change from time to time), but it's not very convenient to launch VLC just to play a YouTube video - I want to keep it in the browser. I generally use Firefox (but if I don't find an easier way, I will launch videos in VLC).
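
    For what it's worth, the L+L / R+R duplication itself is a one-line audio filter; a sketch using ffmpeg's standard pan filter against a downloaded copy of such a video (not the in-browser fix being asked for, just the channel math):

        ffplay -af "pan=stereo|c0=c0|c1=c0" video.mp4   # left channel into both ears (L+L)
        ffplay -af "pan=stereo|c0=c1|c1=c1" video.mp4   # right channel into both ears (R+R)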

    Read the article
