Search Results

Search found 2046 results on 82 pages for 'agent x'.


  • IBM Tivoli Network Manager IP Edition - Job does not run

    - by Thorsten Niehues
    Since our network discovery takes too long, I tried to split the biggest job into two parts. The two parts use the same Perl script but have different scopes. I copied a job (agent) as follows:
      - Copied the .agnt file
      - Copied the associated Perl script
    The problem is that one or the other job (it changes randomly) does not run, and the Disco process eventually fails. In the log of the job that does not run I see the following error message:
        Wed Jul 18 08:48:54 2012 Warning: Failed to send on transport layer found in file CRivObjSockClient.cc at line 1293 - Client My_MacTable_Cis is not connected to service Helper
    How do I fix this problem?

    Read the article

  • Can DPM 2007 back up Active Directory?

    - by rbeier
    We're installing Microsoft Data Protection Manager 2007 - we'll be using it to back up Exchange and SQL Server among other things. Does anyone know if DPM can also back up Active Directory? It sounds like the answer is "not really". You can install the DPM agent on a domain controller and make system state backups. But if your Active Directory is out of commission, there will be no way to restore the backups, since DPM depends on AD. Currently we're just using Windows Backup (ntbackup) to take system state backups on one of the DCs. Should we just continue with that? Thanks, Richard
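    For reference, the scheduled system state command we run today looks something like this (job name and target path genericized, not our real values):
        ntbackup backup systemstate /J "DC1 System State" /F "D:\Backups\dc1-systemstate.bkf"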

    Read the article

  • VPN authentication and MAC addresses

    - by zakk
    I have to set up a VPN (various clients connecting to a web service on a server, which is also the VPN server) and I want to make sure that no user shares his/her credentials with third parties. I know that this problem is not completely solvable, but I'd like to set up some additional security checks. Some ideas I have:
      1) An additional check on the MAC address, but... are MAC addresses preserved through a VPN?
      2) Some kind of extra identification of the client (user agent, open ports; I want to make sure that it is the very same client I authorized).
      3) I would like to avoid commercial solutions like a security token... I realize it would be the perfect solution, but it would be too expensive, I suppose.
    Do you feel that these options are viable? Do you have any other ideas? Thanks in advance for your replies!

    Read the article

  • zabbix 2.2.1 no graphs in Web scenario

    - by Mick
    Hello, for some time I have had a problem with graphs in web scenarios on Zabbix 2.2.1 (I put a screenshot below, omitted here); the problem appears on every web scenario graph. I installed this same scenario on a second Zabbix that runs on my local virtual machine. On my local machine all Zabbix components (server, frontend, agents) run together, but on my production Zabbix only the frontend is separated from the other components.
    Scenario for OpenERP:
        Name: OpenERP Web Checks
        Application:
        New application:
        Authentication:
        Update interval (in sec): 60
        Retries: 1
        Agent: Internet Explorer 10.0
    Steps:
        Name: OpenERP login page
        URL: http://openerp.test.com
        Post:
        Variables:
        Timeout: 15
        Required string:
        Required status codes: 200
    My Zabbix server performance: (screenshot omitted)
    Anybody have some idea how to fix it? Regards, Mick

    Read the article

  • Backup Exec 2010 throwing error trying to restore Exchange mailbox

    - by Mindflux
    Error category: Resource Errors
    Error: e000848c - Unable to attach to a resource. Make sure that all selected resources exist and are online, and then try again. If the server or resource no longer exists, remove it from the selection list. Edit the selection list properties, click the View Selection Details tab, and then remove the resource.
    For additional information regarding this error refer to link V-79-57344-33932.
    I've got the Exchange agent loaded on the Exchange server. Through talking with some other folks I've added the Exchange Management Console to the media (backup) server. None of this has helped. I can back up Exchange all day long, but I cannot restore from it. I've followed the link given (V-79-57344-33932), which goes here, and none of that has helped either.
    Server is running:
        Windows Server 2008 with SP2 (64-bit)
        Backup Exec 2010
    I am backing up to a Tandberg T24 tape library.

    Read the article

  • All ESX4 hosts not responding after rebooting vCenter server

    - by hojyokinmo
    Hello, I'm building a system with five ESX hosts plus a vCenter server, and I'm testing FT on it. Today the vCenter OS (Windows 2008) demanded a system reboot. After rebooting the vCenter server, all of the ESX (version 4) hosts show an alert icon and indicate "not responding". I tried to connect the ESX hosts to vCenter again. They came back normally at first, but after a few seconds they turned red again. All of them can ping the vCenter server. There are no red messages in Tasks and Events. Two messages appear in the cluster Summary:
      - There are insufficient resources for HA failover.
      - Cannot contact the primary HA agent.
    I'm also unable to reset the FT configuration because none of the VMs can be accessed from vCenter. What should I do?
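    One thing I haven't tried yet (and I'm not sure it's the right call) is restarting the management agents from each host's service console:
        # restart hostd, the host management agent
        service mgmt-vmware restart
        # restart vpxa, the vCenter agent running on the host
        service vmware-vpxa restart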

    Read the article

  • Exchange 2003 resource scheduling with mixed client versions

    - by Daniel Lucas
    We run Exchange 2003, but have a mix of Outlook 2003/2007/2010 in the environment. We have three rooms that need to be configured as resources. Some observations we've made with resource scheduling/booking:
      - Outlook 2010 users have trouble with the native Exchange 2003 resource scheduling method and require direct booking to be configured via the registry (see the sketch below).
      - Outlook 2007 users are unable to use direct booking (is this accurate?).
      - Outlook 2003 users can only use the native Exchange 2003 resource scheduling method (is this accurate?).
      - Direct booking cannot be combined with the auto-accept agent.
    What is the correct way to set up resource scheduling in a mixed environment like this? Thanks, Daniel
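    For reference, the registry change we applied for Outlook 2010 direct booking looks like this; I believe it is the EnableDirectBooking value under the Office 14.0 calendar key, but double-check against Microsoft's KB before rolling it out:
        Windows Registry Editor Version 5.00

        [HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Outlook\Options\Calendar]
        "EnableDirectBooking"=dword:00000001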

    Read the article

  • ISA Server Route Add Question

    - by Kip
    Hi all, I have an ISA 2006 server (on Win2k3) that has internal and externally facing NICs. All works fine, but I need to add a couple of routes for the following reasons: our monitoring software is on a different network, and our terminal server is on a different network. Currently, access to the internet through this proxy server from the terminal server fails. Also, monitoring of the ISA server, whether via a remote monitor or via the installed agent talking to the remote monitor (BMC), fails. The default enterprise rule on ISA blocks the traffic, as I believe it doesn't trust or know about those networks. Here is my routing table: (screenshot omitted). I need to add a couple of addresses, this being the main one:
        192.168.245.137 / mask 255.255.255.192 / gateway 192.168.245.129
    But I can't get it to work. Routing is not my strong point, and at the moment I have no one else available to help. Can you offer any assistance? Please ask if you need more info.
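    For what it's worth, I've since read that Windows route add expects the destination to be the network address for the given mask (192.168.245.128 for a 255.255.255.192 mask), not a host address like .137, so that may be my mistake. A sketch of the corrected command:
        route add 192.168.245.128 mask 255.255.255.192 192.168.245.129 -p
    (The -p flag makes the route persistent across reboots.)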

    Read the article

  • Send files ending in .mp4 in Apache with HTTP 206 Partial Content

    - by Pacha
    I am using Apache as the web server and the return code is always HTTP/1.1 200. I want to set up some kind of handler, or use a module, to return HTTP/1.1 206 when the extension of the requested file is .mp4, so that video seeking works. My web server is already returning some headers for seeking, but it doesn't work. Is this possible?
    The HTTP headers:
        http://*hidden*/media/movies/file/1080/d3191cd83109c593ec908f3a47efa8a2.mp4

        GET /media/movies/file/1080/d3191cd83109c593ec908f3a47efa8a2.mp4 HTTP/1.1
        Host: *hidden*
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Firefox/31.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-US,en;q=0.5
        Accept-Encoding: gzip, deflate
        Referer: http://vjs.zencdn.net/4.6/video-js.swf
        Cookie: csrftoken=zXngwwS1S827g7aAJYbHJS3ajn5BGq9M; sessionid=uj1hlj00c85aoehw0n5fye8waggb7uod
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Thu, 21 Aug 2014 15:04:46 GMT
        Server: Apache/2.2.22 (Debian)
        X-Mod-H264-Streaming: version=2.2.7
        Content-Length: 2148905782
        Last-Modified: Wed, 13 Aug 2014 11:36:46 GMT
        Etag: "8e002a-8015b345-5008133ff23c4;-2146061514"
        Accept-Ranges: bytes
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: video/mp4
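    A quick check I've been using (URL genericized, not my real host): Apache only answers 206 when the client actually sends a Range header, so requesting an explicit byte range shows whether ranges work end to end:
        curl -s -o /dev/null -D - -H "Range: bytes=0-1023" "http://example.com/media/movies/video.mp4"
    A 206 reply with a Content-Range header means range requests are honored; a plain 200 means something (possibly mod_h264_streaming, judging by the X-Mod-H264-Streaming header above) is returning the whole file.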

    Read the article

  • hiera_include equivalent for resource types

    - by quickshiftin
    I'm using the yumrepo built-in type. I can get a basic integration to Hiera working:
        yumrepo { hiera('yumrepo::name'):
          metadata_expire => hiera('yumrepo::metadata_expire'),
          descr           => hiera('yumrepo::descr'),
          gpgcheck        => hiera('yumrepo::gpgcheck'),
          http_caching    => hiera('yumrepo::http_caching'),
          baseurl         => hiera('yumrepo::baseurl'),
          enabled         => hiera('yumrepo::enabled'),
        }
    If I try to remove that definition and instead go for hiera_include('classes'), here's what I've got in the corresponding YAML backend:
        classes:
          - "yumrepo"
        yumrepo::metadata_expire: 0
        yumrepo::descr: "custom repository"
        yumrepo::gpgcheck: 0
        yumrepo::http_caching: none
        yumrepo::baseurl: "http://myserver/custom-repo/$basearch"
        yumrepo::enabled: 1
    I get this error on an agent:
        Error 400 on SERVER: Could not find class yumrepo
    I guess you can't get away from some sort of minimal node declaration with Hiera and resource types? Maybe hiera_hash is the way to go? I gave this a shot, but it produces a syntax error:
        yumrepo { 'hnav-development':
          hiera_hash('yumrepo')
        }
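    One pattern I've since run across (not sure it's the idiomatic fix) is create_resources, which instantiates a resource type from a hash, so Hiera data can drive yumrepo without declaring a class. A sketch, assuming a 'yumrepos' hash in the YAML backend:
        # Hiera data (assumed layout):
        #   yumrepos:
        #     hnav-development:
        #       descr: "custom repository"
        #       baseurl: "http://myserver/custom-repo/$basearch"
        #       enabled: 1
        # In site.pp or any manifest applied to the node:
        create_resources('yumrepo', hiera_hash('yumrepos', {}))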

    Read the article

  • On boot firmware request for intel GMA 3100 chipset timing out

    - by Yannick M.
    I am currently in the process of installing a Gentoo Linux box with a vanilla 2.6.29-r5 kernel plus the gentoo-xen-kernel patches, in order to run the Xen hypervisor. After rebooting with the new kernel, the boot process seemed to hang on:
        [    0.863005] platform microcode: firmware: requesting intel-ucode/06-0f-07
        [   60.863442] Microcode Update Driver: v2.00-xen <[email protected]>, Peter Oruba
    Apparently the firmware request times out after 60 seconds (/sys/class/firmware/timeout) and booting just continues. I have done some research and found that on RHEL 4 this problem was related to how the mount of /sys changed, so the firmware.agent hotplug script couldn't parse the line correctly. However, I am having some difficulty tracking down how to fix this on Gentoo. Any and all ideas are greatly appreciated! Thanks
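    A stopgap I'm considering (untested, and it only shortens the wait rather than fixing the failed request) is lowering the firmware-load timeout from an early init script, via the same sysfs knob mentioned above:
        # shorten the in-kernel firmware load timeout from 60s to 5s
        echo 5 > /sys/class/firmware/timeout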

    Read the article

  • puppet master REST API returns 403 when running under passenger works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the Puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file, and certificate requests I get a 403 response.
        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any
    The puppet master, however, does not seem to be following this, as I get this error on the client:
        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110
    and the server logs show:
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"
    The fileserver.conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server and then allow the file server to serve all):
        [files]
          path /apps/puppet/files
          allow *

        [private]
          path /apps/puppet/private/%H
          allow *

        [modules]
          allow *
    I am using server and client version 3. Nginx has been compiled using the following options:
        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/
    and this is the standard Nginx puppet master conf:
        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }
    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:
        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false
    I checked the firewall rules on this VM; 80, 443, 8140, and 3000 are allowed. Do I still have to tweak anything specific in auth.conf to get this to work?

    Read the article

  • HTTP Range request rejected

    - by Dan
    I am trying to understand why my production environment might be disallowing HTTP range requests. I have a pool of W2K8x64/IIS7 servers behind a pair of NetScaler 9000s. I compose the following request in Fiddler:
        http://myorigin.example.com/file.flv
        User-Agent: Fiddler
        Host: myorigin.example.com
        Range: bytes=40000-60000
    The response looks like:
        HTTP/1.1 200 OK
        Cache-Control: public
        Content-Type: video/x-flv
        Expires: Thu, 24 Jun 2010 18:23:53 GMT
        Last-Modified: Sat, 11 Apr 2009 00:16:14 GMT
        Accept-Ranges: none
        ETag: f9d5c718-e148-4225-9ca6-d1f91a2a3c08-_633749805744270000
        Server: Microsoft-IIS/7.0
        Edge-Control: max-age=2592000
        X-Powered-By: ASP.NET
        Date: Tue, 25 May 2010 18:23:53 GMT
        Content-Length: 443668
    "Accept-Ranges: none" tells me that the range request was rejected, but I am not sure where or why, as IIS7 accepts Range by default. Could the NetScalers be shooting it down? Thanks, Dan

    Read the article

  • php file downloads instead of being processed with ajax on apache

    - by eagleon
    I have a small website where some content is displayed within an HTML tag using AJAX. The content is simply taken from another page on the same website. However, sometimes instead of loading the parsed PHP file, the browser displays a download box instead. I downloaded the file, and it looks like a text file mixed with binary or gzipped data. I can't paste the binary stuff here, but here are some of the headers:
        Jul 2012 18:52:16 GMT
        Server: Apache/2
        X-Powered-By: PHP/5.3.10
        Content-Encoding: gzip
        Vary: Accept-Encoding,User-Agent
        Keep-Alive: timeout=1, max=95
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=93
        ETag: "2fc857-409-4c39691c59b40"

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=92
        ETag: "2fc854-3e5-4c39691b65900"

        HTTP/1.1 304 Not Modified
        Date: Sun, 01 Jul 2012 18:52:16 GMT
        Server: Apache/2
        Connection: Keep-Alive
        Keep-Alive: timeout=1, max=91
        ETag: "2fc847-3e3-4c3969197d480"
    and large blocks of stuff like this:
        µàl]&BaËÜk#ìÏ

    Read the article

  • SQL Alter database failed - being used by checkpoint process

    - by Manjot
    Hi, on my SQL Server 2008 I have a SQL Agent job that restores a database on a nightly basis. The procedure is:
      1. Find the latest backup on the other server
      2. Kill all connections to the destination database
      3. Restore the destination database WITH REPLACE, RECOVERY
    It failed last weekend because the database was being used by a system process (spid 11, CHECKPOINT). Since I couldn't kill the system process, I fixed it by restarting SQL Server. It failed this weekend as well with the same error (a CHECKPOINT process in this database, as shown by sp_who), and when I run:
        SELECT session_id, request_id, command, status, start_time
        FROM sys.dm_exec_requests
        WHERE session_id = 11
    it shows:
        11    0    CHECKPOINT    background    2010-04-06 10:17:49.103
    I can't restart the server every time it fails. Can anyone please help me fix this? Thanks in advance, Manjot
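    One alternative I'm considering for the kill-connections step (not sure it helps against a background system spid) is forcing single-user mode right before the restore, which rolls back other sessions and gives the restore exclusive access. A sketch; the database name and path are placeholders:
        -- roll back other sessions and take exclusive access
        ALTER DATABASE DestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        RESTORE DATABASE DestDB FROM DISK = N'\\otherserver\backups\DestDB.bak'
            WITH REPLACE, RECOVERY;
        -- reopen the database to everyone after the restore
        ALTER DATABASE DestDB SET MULTI_USER;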

    Read the article

  • Implications and benefits of removing NT AUTHORITY\SYSTEM from sysadmin role?

    - by Cade Roux
    Disclaimer: I am not a DBA; I am a database developer. A DBA just sent a report to our data stewards and is planning to remove the NT AUTHORITY\SYSTEM account from the sysadmin role on a bunch of servers. (It probably violates some audit requirement they received.) I see an MSKB article that says not to do this. From what I can tell from reading a variety of disparate information on the web, a bunch of special services/operations (Volume Copy, Full Text Indexing, MOM, Windows Update) use this account even when the SQL Server and Agent services are all running under dedicated accounts.

    Read the article

  • Could not retrieve backup settings for primary ID in Log shipping

    - by user1723139
    I am doing log shipping between two Amazon EC2 instances running Windows Server 2008 R2 with SQL Server 2008 R2 Standard Edition. Both instances are in the same domain and I can access the shared folders between them. The SQL Server and Agent service accounts are all running under a domain account. When I activate log shipping (with standby-mode restore on the secondary server), the initial backup gets restored on the secondary. After that, the backup operation fails and I get the following error message:
        *** Error: Could not retrieve backup settings for primary ID 'xxxxxx-xxxx-xxxx-xxxx-4d772cd7337e'. (Microsoft.SqlServer.Management.LogShipping) ***
        *** Error: Failed to connect to server IP-0A7653F2. (Microsoft.SqlServer.ConnectionInfo) ***
        *** Error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (.Net SqlClient Data Provider) ***
        ----- END OF TRANSACTION LOG BACKUP -----
    Any ideas?

    Read the article

  • SQL Server 2005 Default Backup Plan

    - by tylerl
    I noticed that a newly imported database on SQL Server 2005 had configured itself (without my knowledge) to perform daily backups, but it's not deleting old files and is quickly filling up the disk. I don't know how the backup job got configured (maybe that's something that gets transferred when you move a database?), but I'm having trouble modifying it. The backup runs as part of a SQL Server Agent job called "Daily Backups". This job runs a package called "(SSIS Packages)\Maintenance Plans\Backup Plan", which I can't find: the "Management\Maintenance Plans" area for my server is empty. I imagine I could delete the existing plan and re-create it manually, but I was hoping to just modify what was already there, since all that's missing is deleting old files.
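    To dig into the job definition, I've been querying msdb's helper procedures (a sketch; the job name matches what I see under SQL Server Agent):
        EXEC msdb.dbo.sp_help_job @job_name = N'Daily Backups';
        EXEC msdb.dbo.sp_help_jobstep @job_name = N'Daily Backups';
    The step command text should reveal where the SSIS package actually lives.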

    Read the article

  • Web Deploy to IIS7 fails with 401 (Unauthorized)

    - by Trex
    We have IIS7 running on Windows Web Server 2008 R2, set up to support Web Deploy. It worked fine when we used the default Administrator account. We recently disabled this account (for security reasons) and are now trying to deploy using another account which is a member of the Administrators group, but the deploy fails with 401 (Unauthorized). More specifically, it says:
        Connected to '<IP>' using Web Deployment Agent Service, but could not authorize. Make sure you are an admin on '<IP>'. The remote server returned an error: (401) Unauthorized.
    Does anybody have any idea why this is happening? Thanks, Trex
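    One lead I'm looking at (not yet verified on our server): for local administrator accounts other than the built-in Administrator, UAC's remote token filtering strips the admin token, and the documented workaround is the LocalAccountTokenFilterPolicy value on the server:
        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
        "LocalAccountTokenFilterPolicy"=dword:00000001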

    Read the article

  • Strange request - http://66.196.81.202/error/vote

    - by mplungjan
    Hi, a friend of mine is asking about the request which can, for example, be found here: http://www.geoidee.ch/geodata/geoserver-2.0.0/logs/2010_11_23.request.log
    His original message: On a couple of hundred web sites worldwide, one of the 50 most popular "File not found" 404 errors is caused by the following request:
        "GET http://66.196.81.202/error/vote HTTP/1.0"
    It originates from a user agent that purports to be an iPhone. The two requests that hit my servers appeared to originate near Frankfurt, Germany. The IP address in the request is part of Yahoo, although I doubt that Yahoo had any intentional part:
        fe1.buzz.vip.re1.yahoo.com
    The HTTP request has a Host header of 66.196.81.202 and an X-Forwarded-For of 96.6.99.16 and my IP address. I expected to be able to do a Google search and find some kind of security bulletin on it, but I found nothing. It could just be that my search skills are deficient. Thanks for any pointers to what this could be.

    Read the article

  • Starting/Stopping IBM WebSphere Application Server (WAS) 7 from the Command Line

    - by Christopher Parker
    I've written a script to automate the process of starting, stopping, and restarting WAS7 from the command line. Nothing starts automatically on one of our staging servers, so I have to start everything: deployment manager, node agent, app server, and web server. The script I wrote seems to work pretty well. A coworker of mine recommended that I structure my commands differently. I'm wondering if there's a good, valid reason for doing so. First, my variables:
        WAS_HOME="/opt/IBM/WebSphere/AppServer"
        WAS_PROFILE_NAME="AppSrv01"
        WAS_APP_SERVER="server1"
        WAS_WEB_SERVER="webserver1"
    How I had the start commands:
        "${WAS_HOME}/bin/startManager.sh"
        "${WAS_HOME}/bin/startNode.sh" -profileName $WAS_PROFILE_NAME
        "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_APP_SERVER
        "${WAS_HOME}/bin/startServer.sh" -profileName $WAS_PROFILE_NAME $WAS_WEB_SERVER
    I was told that I should do it like this instead:
        WAS_DMGR="Dmgr01" # Added variable
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startNode.sh"
        "${WAS_HOME}/profiles/${WAS_DMGR}/bin/startManager.sh"
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_APP_SERVER
        "${WAS_HOME}/profiles/${WAS_PROFILE_NAME}/bin/startServer.sh" $WAS_WEB_SERVER
    How is the second way of starting up everything for WebSphere any better or more correct than the first, original way?

    Read the article

  • Apache mod_deflate not compressing javascript and css files?

    - by user34295
    "GET /Symfony/web/app.php/app/dashboard HTTP/1.1" 4513/37979 (11%) "GET /Symfony/web/css/application.css HTTP/1.1" -/- (-%) "GET /Symfony/web/js/application.js HTTP/1.1" -/- (-%) "GET /Symfony/web/js/highcharts.js HTTP/1.1" -/- (-%) "GET /Symfony/app/Resources/public/img/logo.png HTTP/1.1" -/- (-%) Don't know if there is something wrong with my configuration, but the no compression for css and js seems strange to me. However both css and js are already minified. Here is Apache relevant section in cong/httpd.conf: # Deflate AddOutputFilterByType DEFLATE text/plain AddOutputFilterByType DEFLATE text/html AddOutputFilterByType DEFLATE text/xml AddOutputFilterByType DEFLATE text/css AddOutputFilterByType DEFLATE application/xml AddOutputFilterByType DEFLATE application/xhtml+xml AddOutputFilterByType DEFLATE application/rss+xml AddOutputFilterByType DEFLATE application/javascript AddOutputFilterByType DEFLATE application/x-javascript DeflateCompressionLevel 9 BrowserMatch ^Mozilla/4 gzip-only-text/html BrowserMatch ^Mozilla/4\.0[678] no-gzip BrowserMatch \bMSIE !no-gzip !gzip-only-text/html # IE5.x and IE6 get no gzip, but allow 7+ BrowserMatch \bMSIE\s7 !no-gzip Header append Vary User-Agent env=!dont-vary DeflateFilterNote Input instream DeflateFilterNote Output outstream DeflateFilterNote Ratio ratio LogFormat '"%r" %{outstream}n/%{instream}n (%{ratio}n%%)' deflate CustomLog logs/deflate.log deflate

    Read the article

  • Concurrent backups in SQL Server?

    - by Mikey Cee
    We currently have our backups managed by a third-party company. There are a bunch of Agent jobs created that take full backups (4 times a day) and transaction log backups (4 times an hour). We now want to manage our backups in house, but don't want to disable the third party's jobs until we are sure that we have everything configured correctly internally. So I am proposing to have a short period (say, a couple of days) where backups are being taken both by the old and the new system. I am wondering what the ramifications of having these two different systems both managing backups would be, and the potential pitfalls of having backups taken simultaneously. Is this even supported? If so, and bearing in mind that the system can cope with one backup without any noticeable performance degradation, is it fairly logical to assume that it should be able to cope with two simultaneous backups? Currently the load on the server is fairly light and it rarely struggles. Any advice is appreciated.
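    One pitfall I'm already planning around (please correct me if this is wrong): full and log backups from two systems will interleave and break each other's restore chains, so during the overlap I intend to take the new system's backups WITH COPY_ONLY. A sketch; the database name and paths are placeholders:
        -- full backup that does not reset the differential base
        BACKUP DATABASE MyDB TO DISK = N'D:\Backups\MyDB_copyonly.bak' WITH COPY_ONLY;
        -- log backup that does not truncate or advance the log chain
        BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_copyonly.trn' WITH COPY_ONLY;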

    Read the article

  • "Catch-All" access log with Apache Virtual Hosts?

    - by pix0r
    I have many virtual hosts set up on a web server, each one having its own error and access log. The relevant lines of httpd.conf are something like this: ErrorLog /var/log/httpd-error.log LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined CustomLog /var/log/httpd-access.log combined NameVirtualHost *:80 <VirtualHost *:80> ServerName myhost.com ServerAlias www.myhost.com DocumentRoot /var/www/myhost.com/htdocs ErrorLog /var/www/myhost.com/log/error.log CustomLog /var/www/myhost.com/log/access.log combined </VirtualHost> # ... many more VirtualHosts Currently, I'm getting some random errors in /var/log/httpd-error.log, but I'm getting nothing in /var/log/httpd-access.log. Is it possible to have ALL accesses and errors duplicated to a shared logfile? Is it possible to do this without adding new entries to every single VirtualHost?

    Read the article

  • Product Recommendation: Good job scheduler for windows servers?

    - by Bret Fisher
    Looking for a mostly-GUI tool that is low cost (less than $1k, though that's not a hard requirement) and allows you to create scheduled tasks and jobs without writing VBScript, batch files, or PowerShell. Something simple that speaks SMB/CIFS, SMTP, LDAP, etc., for such things as "delete some files based on a list of folders from this text file" or "disable all users with expired accounts" or "delete all disabled users not in this AD group". I've seen some of the big multi-OS enterprise task automation systems and they just look like overkill. We're a Windows-only shop, Server 2003 or newer, and there's got to be a simple non-agent-based product that is drag-and-drop for some of this basic automation. Today we use all three languages mentioned above, and the scripts are not as reliable as a workflow-based tool would be. Thanks.

    Read the article
