Search Results

Search found 35433 results on 1418 pages for 'document based'.

Page 444/1418

  • Office documents on intranet all requiring second login and can't pass auth? Disable webdav?

    - by DOTang
    I am not sure what is going on, but recently all the Office documents on our intranet prompt for a second login, and according to the error logs it looks like Office is trying to use WebDAV to open (an editable?) version of the document to save directly on the server. We have no SharePoint server set up or anything, so this shouldn't be happening. All I want is for the document to be saved or opened from a local copy in temp, like normal. Here is the log:

        Line 57499: 2011-04-12 15:57:10 (ip) OPTIONS (address) - 443 (username) (user ip) Microsoft-WebDAV-MiniRedir/6.1.7601 - 401 1 1326 1525 238 0
        Line 57500: 2011-04-12 15:57:10 (ip) OPTIONS (address) - 443 (username) (user ip) Microsoft-WebDAV-MiniRedir/6.1.7601 - 401 1 1326 1525 238 0
        Line 57501: 2011-04-12 15:57:10 (ip) OPTIONS (address) - 443 (username) (user ip) Microsoft-WebDAV-MiniRedir/6.1.7601 - 401 1 1326 1525 238 0

    The log basically contains a bunch of these. How can I disable this behavior so that Office documents that are downloaded aren't opened through WebDAV?

    Edit: I should clarify the behavior. It asks if you want to save or open the document; upon choosing Open, it asks you to re-authenticate. You put in the user information and the login box comes up 3 times, acting like you entered the wrong password. For some users, after passing the login box the third time, it still opens up; for others their browser just locks up. It also doesn't even look like WebDAV is installed on our server - I see no config options for it in IIS, as outlined on this page: http://learn.iis.net/page.aspx/350/installing-and-configuring-webdav-on-iis-7/#001
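    A hedged sketch of one common mitigation, assuming the prompts come from the Windows WebDAV Mini-Redirector (the WebClient service) on the client PCs rather than from the server itself - that is the service behind the Microsoft-WebDAV-MiniRedir user agent in the log. Disabling it on a test client stops Office/Windows from probing the site with OPTIONS requests:

        rem Stop and disable the WebDAV Mini-Redirector (WebClient) on a client machine
        sc stop WebClient
        sc config WebClient start= disabled

    Alternatively, if the goal is to refuse the probes server-side, IIS 7 request filtering can deny the OPTIONS verb for the site; that changes behavior for every client of the site, so it should be tested first.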

    Read the article

  • Apache https is slow

    - by raucous12
    Hey, I've set Apache up to use SSL with a self-signed certificate. With https (KeepAlive on), I can get over 3000 requests per second. However, with https (KeepAlive off), I can only get 13 requests per second. I know there is supposed to be a bit of overhead, but this seems abnormal. Can anyone suggest how I might go about debugging this? Here is the ab log for https:

        Server Software:        Apache/2.2.3
        Server Hostname:        127.0.0.1
        Server Port:            443
        SSL/TLS Protocol:       TLSv1/SSLv3,DHE-RSA-AES256-SHA,4096,256
        Document Path:          /hello.html
        Document Length:        29 bytes
        Concurrency Level:      5
        Time taken for tests:   30.49425 seconds
        Complete requests:      411
        Failed requests:        0
        Write errors:           0
        Total transferred:      119601 bytes
        HTML transferred:       11919 bytes
        Requests per second:    13.68 [#/sec] (mean)
        Time per request:       365.565 [ms] (mean)
        Time per request:       73.113 [ms] (mean, across all concurrent requests)
        Transfer rate:          3.86 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:      190  347   74.3    333    716
        Processing:     0   14   24.0      1    166
        Waiting:        0   11   21.6      0    165
        Total:        191  361   80.8    345    716

        Percentage of the requests served within a certain time (ms)
          50%    345
          66%    377
          75%    408
          80%    421
          90%    468
          95%    521
          98%    578
          99%    596
         100%    716 (longest request)
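    The protocol line shows a DHE-RSA key exchange with a 4096-bit group, so with KeepAlive off every request pays a full Diffie-Hellman handshake, which matches the ~350 ms mean connect time. A hedged sketch of mod_ssl settings that usually help; the cache path/size are examples and the cipher choice is an assumption about what your security policy allows:

        # Enable the SSL session cache so repeat connections can resume instead of
        # re-doing the full handshake (path and size are examples).
        SSLSessionCache        shmcb:/var/run/apache2/ssl_scache(512000)
        SSLSessionCacheTimeout 300

        # Prefer plain RSA key exchange over DHE to avoid the expensive DH computation,
        # if acceptable for your policy.
        SSLCipherSuite         AES256-SHA:RC4-SHA:HIGH:!ADH

    Note that ab itself may not resume TLS sessions between connections, so the session cache mainly benefits real browsers; the cipher change is what is most likely to move the ab number.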

    Read the article

  • Manual Http error response code in non-existent folder via routing

    - by Slytherin
    Apache server running on Ubuntu-like Linux. I am getting unexpected behaviour when I try to manually send an error response. If my .htaccess is responsible for the error response, then the appropriate error document is loaded and displayed, with the corresponding response code in the browser console. However, if my router is the origin of the response code, then I get a blank screen, but the correct response code. My .htaccess looks like this:

        RewriteEngine On
        # RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule !\.(css|js|icon|zip|rar|png|jpg|gif|pdf)$ index.php [L]
        ErrorDocument 404 /err/404.html
        ErrorDocument 403 /err/403.html
        ErrorDocument 500 /err/500.html

    The part of my router that sends the response is the following:

        header("HTTP/1.1 403 Forbidden");

    Trying this format didn't help either:

        header("HTTP/1.1 403 Forbidden", TRUE, 403);

    I also tried HTTP/1.0. Furthermore, I was thinking that a relative path to the error page might be the issue, but I discarded this idea after attempting to access a document that is forbidden via .htaccess.

    EDIT: I should also point out that this scenario happens when a URL for a non-existing article is requested. Is it possible that the server is looking for a .htaccess file in a folder based on the URL? E.g. for domain/blog/non-existent, is the server looking for a blog folder? I am specifically asking this because there is no blog folder.
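    The blank screen is the expected behaviour here: Apache's ErrorDocument only fires for errors Apache itself generates. Once the rewrite rule has handed the request to index.php, a status set with header() is passed through unchanged but no error body is substituted, so the script has to emit the page itself. A hedged sketch, reusing the paths from the .htaccess above:

        <?php
        // Send the status line, then output the same error page Apache would have used.
        header("HTTP/1.1 403 Forbidden");
        readfile($_SERVER['DOCUMENT_ROOT'] . '/err/403.html');
        exit;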

    Read the article

  • Save and restore multiple layers within a Photoshop action that flattens

    - by SuitCase
    I'm editing comic pages with layers - "background", "foreground", "lineart" and "over lineart". I have a Photoshop action that includes a Mode-Bitmap command, which requires the image to be flattened. I need this part of the action because I use the Halftone Screen method of reducing the greyscale image to bitmap on the "background" layer, creating a certain effect. I am pretty sure there is no filter or anything else that gives the same effect. After the mode is changed to bitmap, my action changes things back to greyscale for further changes.

    This poses a problem. I only want to do the bitmap mode change on the background layer, and after the change I want to restore the layer structure as it was - with the foreground, lineart and over lineart layers back above the now-halftoned background.

    My current method of saving these layers and restoring them is clumsy. My action is able to automatically save the "foreground" layer by selecting it, cutting it, then pasting it back in after the mode change is over. But for the "ink" and "over ink" layers, I have to manually cut them, paste them into a new document, and later re-cut and re-paste them after running my action. This is so clunky!

    What I would like to know is whether it's possible to set aside my layers in an automated way, and then bring them back in, also in an automated way. An ugly (but functional) solution would be to replicate my manual steps of creating new documents and pasting the layers there temporarily, but I don't think Photoshop allows you to do things outside of your current document with an action. It seems to me that the only way to do what I want is the clipboard "hack" of incorporating the clipboard into the action, but that leaves me stuck, as I have two more layers that can't fit onto that same clipboard.

    Help or suggestions would be appreciated. I can keep on doing it manually, but having a comprehensive action would save me a ton of time.
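    Actions are limited to the current document, but Photoshop's ExtendScript scripting is not, and a script can play the existing action in the middle with app.doAction(). A rough, untested sketch of the idea, assuming the layer names above and placeholder action/set names; the exact calls should be checked against the Photoshop scripting reference:

        // park-layers.jsx - sketch: park layers in a temp document, run the action, bring them back
        var src = app.activeDocument;
        var names = ["over lineart", "lineart", "foreground"];   // top-most first
        var temp = app.documents.add(src.width, src.height, src.resolution,
                                     "layer parking", NewDocumentMode.GRAYSCALE);
        app.activeDocument = src;
        for (var i = 0; i < names.length; i++) {
            src.artLayers.getByName(names[i]).duplicate(temp, ElementPlacement.PLACEATEND);
            src.artLayers.getByName(names[i]).remove();
        }

        app.doAction("Halftone background", "My Actions");       // assumed action and set names

        for (var j = names.length - 1; j >= 0; j--) {
            app.activeDocument = temp;
            temp.artLayers.getByName(names[j]).duplicate(src, ElementPlacement.PLACEATBEGINNING);
        }
        app.activeDocument = src;
        temp.close(SaveOptions.DONOTSAVECHANGES);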

    Read the article

  • Emails sent from Coldfusion using the same SMTP/Exchange server works from one machine but fails for another

    - by Peter Herdenborg
    First, apologies if this question is too vague or has too little information to really be answerable. I am not normally working with these issues, and I don't have full access to the environment. However, the hosting provider seems to have a hard time tracking down the issue, so I am hoping that someone can at least provide me with some qualified guesses about the most likely problem. Here goes:

    A client I work for has a hosted IT environment, based on virtual machines running Windows 2008 R2 Standard. Our website, based on ColdFusion 9, was recently migrated from one virtual machine to another. Though ColdFusion is configured in exactly the same way and uses the same SMTP server - the client's Exchange server, hosted in the same environment and in the same AD as both web servers - sending emails to external recipients is no longer working. It is still working fine when testing from the old machine.

    This is what I've learnt so far (all emails are sent using a valid from-address on the client's domain):

    - Emails sent to other recipients on the same domain are delivered without any problem.
    - Emails sent to external recipients on other domains are never delivered.
    - When sending emails to both internal and external recipients, no emails are delivered.
    - When receiving one of these emails at an internal address, the sender is now indicated as "[email protected]", while when sent from the old machine it used to say just "sender". This hints to me that the Exchange machine "recognizes" the old web server while the new one is a stranger to it.
    - In ColdFusion's mail log, all messages appear to be successfully delivered to the SMTP server.

    Any ideas on what settings to look at, what log entries to search for, or how to compare the old web server with the new one will be highly appreciated.
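    Those symptoms (internal delivery works, external relay fails, the new server treated as an anonymous sender) usually point at the Exchange receive connector: the old web server's IP is probably allowed to relay and the new one isn't. A hedged sketch of how this could be checked and adjusted in the Exchange 2010 Management Shell; the connector name and IP address below are placeholders:

        # List receive connectors, their allowed remote IP ranges and permission groups
        Get-ReceiveConnector | Format-List Name,RemoteIPRanges,PermissionGroups

        # Example: add the new web server's IP to an existing relay connector
        Set-ReceiveConnector "Relay Connector" -RemoteIPRanges @{Add="10.0.0.25"}

        # Or grant the relay connector the right to accept any recipient from anonymous senders
        Get-ReceiveConnector "Relay Connector" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" `
            -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"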

    Read the article

  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32-bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64-bit Win 7).

    I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches, and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image?

    I’m trying to get my head around how real the risk of infection and other adverse events is from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (e.g. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity?

    I appreciate the theory as to why policies such as standard desktop images exist; I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • Windows Server 2003 (w/Exchange) move to new machine

    - by James Booker
    I have an ageing domain controller (the only one on a 10-PC network) which needs rebooting often. I have a Dell PowerEdge 2850 server doing nothing, so I'd like to move the DC to that, but here's the catch - I don't have the Win2k Server Std install media any more, as it's been lost. I purchased "Easus Todo Backup Advanced Server", which claims to be able to recover to dissimilar metal, but it's not quite working (although I don't think it's the product's fault).

    I know the server and PERC RAID card are good, because I installed Ubuntu on the logical drive (4 x 72GB disks, RAID 5) with no problems. I've booted from the Easus Todo backup CD (which is WinPE based) and recovered to the logical disk on the RAID (after installing the driver inside the WinPE environment from a NAS drive).

    The problem is that when I boot the server, I can get the OS selection menu, but any option results in a blank screen, with no errors. I figure this is probably because the driver wasn't installed on the old machine (which is IDE-based (I know, I know!) and doesn't have a RAID controller). I've booted from the CD and copied the mraid35x.sys file to the c:\windows\system32\drivers folder on the recovered system, but it makes no difference.

    I made a boot.ini with rdisks 0-10 defined, and booting from each of these resulted in a file error (i.e. 'this isn't a real disk') - the only disk that gets any response (the blank screen) is multi(0)disk(0)rdisk(0)partition(1), which just gives me the blank black screen and no disk activity.

    Is there any way I can force the driver to be installed on the source system (so I can do a full backup again)? I've tried right-clicking the oemsetup.inf and clicking Install, but it didn't actually do anything. I attempted to force it with the 'Add new hardware' wizard, forcing with the 'have disk' option, but it still gave me no hardware to select. Also, I've got an identical machine running WinXP which uses the PERC driver successfully (which was obviously done at install time), and the boot.ini settings are the same: multi(0)disk(0)rdisk(0)partition(1). Any ideas would be appreciated.

    Read the article

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtual hosted server running CentOS. In the past month a process (Java based) that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, the PID says it is using 470mb of RAM, while the 'used' memory immediately drops by over 1GB. If I run 'top', the total RES used across all processes falls short of the 'used' listed at the top by almost 700mb.

    The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up using 'top'. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server set-up. Would you also suspect a memory-leaking Java process in this case?

    free before:

                     total       used       free     shared    buffers     cached
        Mem:       2097152     149264    1947888          0          0          0
        -/+ buffers/cache:     149264    1947888
        Swap:            0          0          0

    free after:

                     total       used       free     shared    buffers     cached
        Mem:       2097152    1094116    1003036          0          0          0
        -/+ buffers/cache:    1094116    1003036
        Swap:            0          0          0

    So it looks as though the process is using (or causing to be used) nearly 1GB of RAM. Since the process (based on top) is only using 452mb, does that mean that the kernel is all of a sudden using an additional 500mb?
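    The buffers/cached columns reading zero and a zero swap line suggest an OpenVZ/Virtuozzo-style container rather than a full VM, and in those containers "used" reflects the whole container's accounting (including kernel-side allocations), not just the sum of process RSS. A hedged sketch of commands that can help narrow down where the memory actually goes; /proc/user_beancounters only exists on OpenVZ-type hosts:

        # Largest resident processes - compare the RSS total with "used" in free
        ps aux --sort=-rss | head -15

        # Kernel-side memory detail
        cat /proc/meminfo

        # On an OpenVZ/Virtuozzo container: per-resource limits and failure counters
        cat /proc/user_beancounters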

    Read the article

  • Troubleshooting postfix -> Exchange connection issues

    - by Systemspoet
    I have three Linux-based mail routers that run Postfix and relay mail to our on-premise Exchange server as well as to outlook.com, splitting the mail based on LDAP attributes. What I've observed sporadically since upgrading this spring from Exchange 2007 to 2010 is that all three of the mail relays will, for about 20 minutes, fail to connect to Exchange. Postfix logs it as "lost connection with exchange.contosso.edu"; this problem almost always occurs on all three mail relays at the same time, and lasts for slightly under 20 minutes. If I can catch it while it's occurring, and I manually do "telnet exchange.contosso.edu 25" from one mail relay and force a message through (helo, mail from, rcpt to, data, etc.), then it clears that relay up.

    The Exchange "server" is actually two machines with the HT role on them, load balanced via Windows NLB. I've worked pretty hard to figure out what's happening from the Postfix side and I can't see any evidence of misbehavior. My question is, how do I attack the problem from the Exchange side? Is there a connection log, or a debug setting, or something I can do to log all of the inbound connections and tell me what's causing Exchange to drop them?
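    Exchange 2010 can log every inbound SMTP session on the receive connector. A hedged sketch using the Management Shell; the connector name is a placeholder, and the path shown is the default install location, so adjust to your install:

        # Turn on verbose protocol logging for the connector the relays hit
        Set-ReceiveConnector "Default HUB01" -ProtocolLoggingLevel Verbose

        # Per-session logs then appear under the transport role's log directory, e.g.
        # C:\Program Files\Microsoft\Exchange Server\V14\TransportRoles\Logs\ProtocolLog\SmtpReceive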

    Read the article

  • Change the background color of selected text in Google Docs to increase readability [migrated]

    - by gene_wood
    How can I override or change the background color of text selected in Google Docs? It is difficult for me to see the difference and I would like to increase the contrast.

    After Google restyled Google Docs last year (or earlier this year), I've been unable to see selected text. It's possible this is a visual deficiency with my eyes. In Google Docs, under both Google Chrome (17.0.963.83 (Official Build 127885) m) and Firefox (11.0), when I select text inside a Google Doc, the selected text has a background of color #d6e0f5. Compare this to the default browser selection background color of #2f65c0. (I determined the color of the selected text background by taking a screenshot and using the color picker tool in Photoshop.) I've tested this using a brand new Firefox profile as well as a new Google Chrome profile. Here's a section of a screenshot showing the selected text.

    I've tried using the "Stylish" plugin to override the CSS and go back to the default text selection color with this CSS:

        ::selection { background:#2f65c0; color:#ffffff; }
        ::-moz-selection { background:#2f65c0; color:#ffffff; }
        ::-webkit-selection { background:#2f65c0; color:#ffffff; }

    This code works on other sites, but I'm unable to get it to work on Google Docs. (I tested on other sites by applying the userstyle to a different domain and using bright yellow instead of the default dark blue #2f65c0.)

    When you use Google Docs, do you have the same background color for selected text, or something different? (To test this, browse to docs.google.com, create a document, type text into the document, select the text with the mouse by dragging over it, take a screenshot, load the screenshot up in an image editor and determine the background color of the selected text.) This color differential (between light blue #d6e0f5 and white #ffffff) may be easy to see for others, and the problem may lie with my eyes.

    Read the article

  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large re-organization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure of something like:

        /var/www/sites/vhost1, vhost2, vhost3
                   .../wordpress/blogX
                   .../mediawiki/wikiX

    Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application:

        /var/www/data/vhost1, vhost2, vhost3
                  .../wordpress/blogX/uploads
                  .../mediawiki/wikiX/images

    All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup. Once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs are straight from SVN, and updates are done by switching branches or "svn up".

    So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, c) myself 6 months from now? Obviously I can make a wiki page, Excel document or whatever and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less technical people. Ideally it would be awesome if this visual representation could then be expanded to include more technical details.
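    One low-maintenance option is to keep the diagram as text next to the configs and render it with Graphviz, so it can be versioned in the same SVN repository as everything else. A hedged sketch of what that might look like for the layout above; the names come from the question and the grouping is only illustrative:

        // webserver.dot - render with: dot -Tpng webserver.dot -o webserver.png
        digraph webserver {
            rankdir=LR;
            "Apache (vhosts.d)" -> "vhost1";
            "Apache (vhosts.d)" -> "vhost2";
            "Apache (vhosts.d)" -> "vhost3";
            "vhost1" -> "wordpress/blogX" -> "/var/www/data/wordpress/blogX/uploads";
            "vhost1" -> "mediawiki/wikiX" -> "/var/www/data/mediawiki/wikiX/images";
            "testing server" -> "Apache (vhosts.d)" [label="rsync after testing"];
        }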

    Read the article

  • SSH & SFTP: Should I assign one port to each user to facilitate bandwidth monitoring?

    - by BertS
    There is no easy way to track real-time per-user bandwidth usage for SSH and SFTP. I think assigning one port to each user may help.

    Idea of implementation

    Use case: Bob, with UID 1001, shall connect on port 31001. Alice, with UID 1002, shall connect on port 31002. John, with UID 1003, shall connect on port 31003. (I do not want to launch several sshd instances as proposed in question 247291.)

    1. Setup for SFTP. In /etc/ssh/sshd_config:

        Port 31001
        Port 31002
        Port 31003
        Subsystem sftp /usr/bin/sftp-wrapper.sh

    The file sftp-wrapper.sh starts the sftp server only if the port is the correct one:

        #!/bin/sh
        mandatory_port=3`id -u`
        current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
        if [ $mandatory_port -eq $current_port ]
        then
            exec /usr/lib/openssh/sftp-server
        fi

    2. Additional setup for SSH. A few lines in /etc/profile prevent the user from connecting on the wrong port:

        if [ -n "$SSH_CONNECTION" ]
        then
            mandatory_port=3`id -u`
            current_port=`echo $SSH_CONNECTION | awk '{print $4}'`
            if [ $mandatory_port -ne $current_port ]
            then
                echo "Please connect on port $mandatory_port."
                exit 1
            fi
        fi

    Benefits

    Now it should be easy to monitor per-user bandwidth usage. An rrdtool-based application could produce per-port traffic charts. I know this won't be a perfect calculation of the bandwidth usage: for example, if somebody launches a brute-force attack on port 31001, there will be a lot of traffic on this port although not from Bob. But this is not a problem to me: I do not need an exact computation of per-user bandwidth usage, but an indicator that is approximately correct in standard situations.

    Questions

    - Is the idea of assigning one port to each user a good one?
    - Is the proposed setup a reliable one?
    - If I have to open dozens of ports for many users, should I expect a performance drawback?
    - Do you know an rrdtool-based application which could make such a chart?
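    A hedged alternative worth noting, which avoids opening one port per user: on Linux, iptables can count outbound traffic per UID with the owner match, and rrdtool can graph the byte counters. A minimal sketch, assuming the UIDs from the use case; inbound (download) traffic cannot be attributed by UID this way, so this only covers what each user sends:

        # Counting-only rules (no -j target): matching packets just increment the counters
        iptables -A OUTPUT -m owner --uid-owner 1001
        iptables -A OUTPUT -m owner --uid-owner 1002
        iptables -A OUTPUT -m owner --uid-owner 1003

        # Read the per-rule byte counters (e.g. from cron) and feed them into rrdtool
        iptables -L OUTPUT -v -n -x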

    Read the article

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a webserver that's currently hosting two WordPress sites and some Java-based collaboration software. The server has 2G of memory and is currently using about 1.8G of the available memory. Right now what's on here is pretty much a pilot project that's getting negligible traffic, so I think it's pretty clear that I'll be needing more memory.

    I was wondering, if I was to release it, how I might anticipate my memory needs based on the traffic it gets. I've poked around on Google and what I've found has been a bit tenuous. Is there a good heuristic that one should use when calculating memory demands as a function of the base (no traffic) load on the server?

    For reference, the output of free -m can be seen below:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1832        215          0          0          0
        -/+ buffers/cache:       1832        215
        Swap:            0          0          0

    To me this looks like actual memory used and isn't an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be experimentally tested, so here's free -m without that software running:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1109        938          0          0          0
        -/+ buffers/cache:       1109        938
        Swap:            0          0          0

    My plan B to figure this out is to add a bunch of swap space to the server, give it some traffic and adjust according to the amount of swap that gets used. I was just wondering if anyone had a good rule of thumb to estimate how much memory I should plan on in advance... or if what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).
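    One rough heuristic, offered as an assumption rather than a rule: needed RAM ≈ idle footprint + (average per-worker RSS × the maximum number of concurrent web server workers you will allow) + whatever heap you give the Java collaboration software. A hedged sketch for measuring the per-worker piece; the process name apache2 is an assumption about the stack:

        # Average resident size (MB) and count of the web server's worker processes
        ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {printf "%.1f MB avg over %d procs\n", sum/n/1024, n}'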

    Read the article

  • Which free RDBMS is best for small in-house development?

    - by Nic Waller
    I am the sole sysadmin for a small firm of about 50 people, and I have been asked to develop an in-house application for tracking job completion and providing reports based on that data. I'm planning on building it as a web application. I have roughly equal experience developing for MySQL, PostgreSQL, and MSSQL. We are primarily a Windows-based shop, but I'm fairly comfortable with both Windows and Linux system administration.

    These are my two biggest concerns:

    - Ease of manageability. I don't expect to be maintaining this database forever. For the sake of the person that eventually has to take over for me, which database has the lowest barrier to entry?
    - Data integrity. This means transaction-safe, robust storage, and easy backup/recovery. Even better if the database can be easily replicated.

    There is not a lot of budget for this project, so I am restricted to working with one of the free database systems mentioned above. What would you choose?

    Read the article

  • PHP Web Server Solution (Apache/IIS)

    - by njk
    I apologize if this is too broad or belongs on Super User (please vote to move if it does). I'm in the process of creating requirements for an internal PHP web server to submit to our architecture team, and would like to get some insight on whether to use a Windows or *nix platform and what applications would be required.

    The server will host a small PHP application that will be connecting to SQL Server. The application will need to send mail. We would also like to incorporate an FTP server to allow files to be dropped in.

    From what I've read regarding a Windows platform using IIS, it seems as though IIS would only be advantageous if using a .NET or ASP application. Does IIS have mail functionality? Or how is mail traditionally configured (especially on *nix)? Also, does IIS have directory configuration functionality like Apache does with .htaccess?

    For a Windows-based solution:

    - IIS (comes with FTP)
    - Apache (has mod_ftp module)

    For a *nix-based solution:

    - Apache
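    For the mail part specifically, PHP doesn't depend much on which web server fronts it: on Windows, PHP's mail() hands messages to whatever SMTP host is configured in php.ini, while on *nix it usually shells out to a local sendmail-compatible MTA (Postfix, Exim). A hedged sketch of the relevant php.ini lines; the hostname and address are placeholders:

        ; php.ini (Windows) - where mail() relays outgoing messages
        SMTP = smtp.internal.example.com
        smtp_port = 25
        sendmail_from = webapp@example.com

        ; php.ini (*nix) - typically left pointing at the local MTA instead
        ; sendmail_path = /usr/sbin/sendmail -t -i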

    Read the article

  • Microsoft Word 2008 on the Mac sometimes "Disappears" documents, really.

    - by Ross Charette
    This happens in a computer lab environment, and has happened at least 3 times. We are running Microsoft Office 2008 for Mac on Leopard; everything is updated. Our users' home directories are on a network drive, but the /Library/Cache folder is local.

    Typically a student will have a Word file that they have been working on, saved before they even logged onto the computer that day. They log on, open the document, click the save icon (not File > Save), sometimes even save multiple times, then close Word. The document is now gone. It's not hidden, and there are no autosaves or anything in the Cache folder. It's definitely not in the Trash or Trashes folder. Word can't find it when you click on it in 'recent documents'. Searching meticulously through every folder in their home drive turns up nothing. They look using Finder; I look ssh'd in as root into their home using ls -la. I look for similar files in case they renamed it by mistake. It's gone. Disappeared. Vaporized.

    It's happened to at least 3 different users in the past year. Much whining. Any idea?

    Read the article

  • amazon ec2-medium apache requests per second terrible

    - by TheDayIsDone
    EDITED -- the test is running from localhost now, to rule out the network.

    I have a c1.medium using EBS. When I do an Apache benchmark and I'm just printing a "hello" for the test from localhost - no database hits - it's very slow. I can repeat this test many times with the same results. Any thoughts? Thanks in advance.

        ab -n 1000 -c 100 http://localhost/home/test/

        Benchmarking localhost (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:        Apache/2.2.23
        Server Hostname:        localhost
        Server Port:            80
        Document Path:          /home/test/
        Document Length:        5 bytes
        Concurrency Level:      100
        Time taken for tests:   25.300 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      816000 bytes
        HTML transferred:       5000 bytes
        Requests per second:    39.53 [#/sec] (mean)
        Time per request:       2530.037 [ms] (mean)
        Time per request:       25.300 [ms] (mean, across all concurrent requests)
        Transfer rate:          31.50 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    7   21.0      0     73
        Processing:    81 2489  665.7   2500   4057
        Waiting:       80 2443  654.0   2445   4057
        Total:         85 2496  653.5   2500   4057

        Percentage of the requests served within a certain time (ms)
          50%   2500
          66%   2651
          75%   2842
          80%   2932
          90%   3301
          95%   3506
          98%   3762
          99%   3838
         100%   4057 (longest request)
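    A ~2.5 s mean for a 5-byte response with no network involved usually means requests are queuing rather than each one being slow (connect times are ~0, processing median is 2500 ms). A hedged sketch of the first things worth checking on the instance: whether the hypervisor is stealing CPU, and whether the request actually goes through a heavy PHP/application stack rather than a static file; the document-root path below is an assumption:

        # Watch the "st" (steal) and "id" (idle) columns while the benchmark runs
        vmstat 1

        # Compare against a truly static file to isolate the application stack
        echo hello > /var/www/html/static.html
        ab -n 1000 -c 100 http://localhost/static.html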

    Read the article

  • Apache not routing to tomcat on correct Virtual host

    - by ttheobald
    We are looking at moving from Websphere to Tomcat. I'm trying to send traffic to Tomcat from Apache web server based on the virtual host directives in Apache web server. After some playing around I have it sort of working, but I'm noticing that if I have a JkMount directive in the first VirtualHost in Apache, all VirtualHosts will send to the application server. If I have the JkMount in VirtualHosts further down in the configs, then only that VirtualHost works with the request. For example, with the configs below, here are my symptoms:

        mysite.com/Webapp1/     -- I resolve to the proper application
        mysite2.com/Webapp1/    -- I resolve to the proper application (bad!)
        mysite.com/MonitorApp/  -- I resolve to the proper application
        mysite2.com/MonitorApp/ -- I resolve to the proper application (bad!)
        mysite.com/Webapp2/     -- I DO NOT get to the app (good)
        mysite2.com/Webapp2/    -- I resolve to the proper application

    Here's what my web server VirtualHosts look like:

        <VirtualHost 255.255.255.1:80>
            ServerName mysite.com
            ServerAlias aliasmysite.ca
            ##all our rewrite rules
            JkMount /Webapp1/* LoadBalanceWorker
            JkMount /MonitorApp/* LoadBalanceWorker
        </VirtualHost>

        <VirtualHost 255.255.255.2:80>
            ServerName mysite2.com
            ServerAlias aliasmysite2.ca
            ##all our rewrite rules
            JkMount /Webapp2/* LoadBalanceWorker
        </VirtualHost>

    We are running Apache web server 2.2.10 and Tomcat 7.0.29 on Solaris 10. I've posted an image of our architecture here: http://imgur.com/IFaA6Rh

    I have NOT defined VirtualHosts on Tomcat. Based on what I've read, my understanding is that it's only needed if I'm accessing Tomcat directly. Any assistance is appreciated.

    Edit: Here's my worker.properties:

        worker.list= LoadBalanceWorker,App1,App2

        worker.intApp1.port=8009
        worker.intApp1.host=10.15.8.8
        worker.intApp1.type=ajp13
        worker.intApp1.lbfactor=1
        worker.intApp1.socket_timeout=30
        worker.intApp1.socket_connect_timeout=5000
        worker.intApp1.fail_on_status=302,500,503
        worker.intApp1.recover_time=30

        worker.intApp2.port=8009
        worker.intApp2.host=10.15.8.9
        worker.intApp2.type=ajp13
        worker.intApp2.lbfactor=1
        worker.intApp2.socket_timeout=30
        worker.intApp2.socket_connect_timeout=5000
        worker.intApp2.fail_on_status=302,500,503
        worker.intApp2.recover_time=30

        worker.LoadBalanceWorker.type=lb
        worker.LoadBalanceWorker.balanced_workers=intApp1,intApp2
        worker.LoadBalanceWorker.sticky_session=1
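    Two things worth checking here, offered as a hedged guess rather than a diagnosis: mod_jk's JkMountCopy setting controls whether mounts are shared with VirtualHosts, and mounts can also appear to "leak" if requests for mysite2.com are actually being answered by the first (default) VirtualHost, e.g. when the name resolves to 255.255.255.1. A sketch that keeps the mounts explicitly per-vhost; the directives are standard mod_jk and the rest mirrors the question:

        # Global mod_jk setup (outside any VirtualHost)
        JkWorkersFile /path/to/worker.properties
        JkLogFile     /path/to/mod_jk.log

        <VirtualHost 255.255.255.1:80>
            ServerName mysite.com
            JkMountCopy Off                      # do not share mounts with other hosts
            JkMount /Webapp1/*    LoadBalanceWorker
            JkMount /MonitorApp/* LoadBalanceWorker
        </VirtualHost>

        <VirtualHost 255.255.255.2:80>
            ServerName mysite2.com
            JkMountCopy Off
            JkMount /Webapp2/* LoadBalanceWorker
        </VirtualHost>

    Requesting each URL with an explicit Host header (for example, curl -H "Host: mysite2.com" http://255.255.255.1/Webapp1/) can confirm which VirtualHost is actually answering.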

    Read the article

  • Excel 2010 - more than 1 calculation within an IF() statement

    - by Da Bajan
    I have a situation where I need to calculate shipping values based on the length of the supply chain. Easy - however, I need to handle instances where an increased amount is required based on specific date criteria. My example is as follows:

        Shipvalue = 100
        Date1 = 1/1/2013 (Jan) - ship 50% more than usual
        Date2 = 2/1/2013 (Feb) - ship 25% more than usual
        Date3 = 3/1/2013 (Mar) - ship 25% more than usual

    Supply chain length is:

        June - October     100 days
        November - March   140 days
        April - June       100 days

    The issue I have is that as the supply chain length increases, my formula:

        IF( Date1-(Supply chain length + any extra days)=today's date, shipvalue+(shipvalue X 50%),
         IF( Date2-(Supply chain length + any extra days)=today's date, shipvalue+(shipvalue x 50%),
          IF( Date2-(Supply chain length + any extra days)=today's date, shipvalue+(shipvalue x 50%),
           IF( preceding cell<>0, shipvalue, 0) ) ) )

    misses all but the 1st increase. So I thought of adding a variable that would be incremented and checked every time an increased shipping amount is made. So, how do I do both the calculation for the increased shipping value and set the variable in one part of the IF statement?
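    A worked sketch of how the pieces could be combined in a single Excel formula. The names are assumptions (ShipValue, ChainDays, ExtraDays and Date1..Date3 as defined names or cell references), the uplifts follow the description above (50%, 25%, 25%), and since an IF can return a calculation directly, no helper variable is needed:

        =IF(TODAY() = Date1 - (ChainDays + ExtraDays), ShipValue * 1.5,
          IF(TODAY() = Date2 - (ChainDays + ExtraDays), ShipValue * 1.25,
           IF(TODAY() = Date3 - (ChainDays + ExtraDays), ShipValue * 1.25,
            IF(PrecedingCell <> 0, ShipValue, 0))))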

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive).

    Adding a new customer is currently quite a bit of work:

    - Create a user that is allowed to ssh
    - Create a new MySQL database and user
    - Create a virtual host for the application
    - Log in with the new user, do a git checkout of the application (in the right location)
    - Create tables in the new database, and add some init data
    - Add some cron jobs
    - Create a first user that can log in
    - Add this new instance to capistrano

    What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally this should be usable by a sales person (so something web-based). I could write a (bash) script that does most of these tasks (see the sketch below), and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means that you need a list of all existing customers/instances.
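    As a starting point, a hedged sketch of what that bash provisioning script could look like. Every name (paths, template files, repository URL, cron entry) is a placeholder, and in practice each step needs error handling plus a matching teardown script:

        #!/bin/bash
        # provision-customer.sh <customer>  -- rough sketch with placeholder paths/URLs
        set -e
        CUSTOMER="$1"
        DB_PASS=$(openssl rand -hex 12)

        useradd -m -s /bin/bash "$CUSTOMER"                                   # 1. ssh user
        mysql -e "CREATE DATABASE \`$CUSTOMER\`;
                  GRANT ALL ON \`$CUSTOMER\`.* TO '$CUSTOMER'@'localhost'
                  IDENTIFIED BY '$DB_PASS';"                                  # 2. database + user
        cp /etc/apache2/sites-available/template.conf \
           "/etc/apache2/sites-available/$CUSTOMER.conf"                      # 3. vhost from a template
        sed -i "s/__CUSTOMER__/$CUSTOMER/g" "/etc/apache2/sites-available/$CUSTOMER.conf"
        a2ensite "$CUSTOMER.conf"
        sudo -u "$CUSTOMER" git clone git@example.com:app.git "/home/$CUSTOMER/app"   # 4. checkout
        mysql "$CUSTOMER" < "/home/$CUSTOMER/app/schema.sql"                  # 5. tables + init data
        echo "0 3 * * * /home/$CUSTOMER/app/cron.sh" | crontab -u "$CUSTOMER" -        # 6. cron
        # 7. first application user and 8. capistrano registration would go here
        service apache2 reload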

    Read the article

  • Enterprise level ticketing and inventory system recommendations [closed]

    - by TrackingSystem
    My company is sort of at a standstill when it comes to our technician ticketing and inventory system. We currently use Numara TrackIt!, which isn't cutting it, to say the least. Dell recommended KACE, but it's web based, which is what we would like to avoid. We need a good ticketing and inventory system with the following:

    - Server/client setup
    - Client supports XP/Windows 7 Ent.
    - Web based as well as client is a plus
    - Technician ticketing
    - Active Directory integration
    - Inventory system (asset tag tracking etc./PO tracking)
    - Exchange integrated - when tickets are made you have an option to send to the requester
    - Something that will scale well

    Please, if anyone is a systems admin or has knowledge regarding use of a great ticketing system, please let me know. We have a large international corporation - price honestly isn't an issue. Keep in mind this will be mainly used for technicians to create tickets, enter inventory (track PCs) and possibly even an option to track purchase orders. We want an enterprise-level ticketing system with these capabilities - please help! Thank you.

    Read the article

  • Several web applications on a single port

    - by Nevermind
    We're developing an online browser-based game. The game itself is a plugin in the web page that uses a TCP connection to a game server, and also sends HTTP requests to a "content server" web application. This makes 3 servers total: the site itself, the game server and the content server. The site and content server are IIS web applications; the game server is a custom application communicating over TCP with a proprietary protocol.

    While the game is in beta stage, all these servers are physically hosted on a single machine and distinguished by ports. For example, the website is game.example.com:80, the game server is game.example.com:34285 and the content server is game.example.com:50000. This works OK most of the time, but some of our players have ports other than 80 closed.

    Is there any way to make all these applications work through port 80, while still having them on one physical server? Maybe using different sub-domains? There's probably a way to make IIS forward requests to different web applications based on URL alone, but that doesn't help with the game server.

    Edit: The server is Windows Server 2008, IIS 7.
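    On the IIS side, one approach (a sketch with placeholder site names, host names and paths) is to give each web application its own site bound to port 80 with a different host header, so the site and the content server share the port. The TCP game server is the hard part: host headers only exist in HTTP, so it would either need its own rarely blocked well-known port (443 is the usual candidate) or the protocol would have to be tunneled over HTTP(S).

        %windir%\system32\inetsrv\appcmd add site /name:GameSite ^
            /bindings:"http/*:80:www.game.example.com" /physicalPath:"C:\inetpub\www"
        %windir%\system32\inetsrv\appcmd add site /name:ContentServer ^
            /bindings:"http/*:80:content.game.example.com" /physicalPath:"C:\inetpub\content"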

    Read the article

  • building a debian base image

    - by Michael
    Is there a preferred way to create base images for Debian-based customized installations? We are currently going with multistrap, but although it's better than hand-crafted chroot stuff, it still has a lot of rough edges and corners. Is there a more reliable and less error-prone way to produce a root filesystem of a Debian installation with some additional .debs installed? (I don't want to send out a Debian installer with a preseed file, though.)

    Addendum 1: To clarify things a bit: we are delivering a kind of software appliance to our customers. That is, a Debian operating system with some additional software packages - both our own and third-party ones - and some configuration changes. To ease the installation process, we have an installer that does nothing more than partitioning, copying files to the partitions and setting up GRUB. So it's basically an image-based installer.

    In other words, we are running the Debian installation ourselves and just distribute the already-installed operating system. The question is about the installation part. I want to have that as easy and robust as possible, and of course, it should be an automated process.
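    For comparison, a hedged sketch of the debootstrap route, which is the other common way to build such a root filesystem besides multistrap. The package list, suite, target path and mirror are placeholders, and custom .debs can be installed inside the chroot afterwards:

        # Build a minimal Debian rootfs with extra packages pre-installed
        debootstrap --include=openssh-server,rsync,ntp wheezy /target/rootfs http://deb.debian.org/debian

        # Install your own packages inside the chroot and resolve their dependencies
        cp our-app_1.0_amd64.deb /target/rootfs/tmp/
        chroot /target/rootfs dpkg -i /tmp/our-app_1.0_amd64.deb
        chroot /target/rootfs apt-get -f install -y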

    Read the article

  • Virtual host Alias not routing properly

    - by Jacob
    I apologize if this question has been asked many times in the past. I am not 100% sure of the exact cause of my issue and am out of Google magic right now. Basically I have a virtual host file set up with an Alias record that points to a directory other than the document root. It basically looks like this:

        <VirtualHost *:80>
            ServerName iBusinessCentral.com
            ServerAdmin [email protected]
            DocumentRoot /var/www/marketingsites/
            ServerAlias iBusinessCentral.com *.iBusinessCentral.com
            Alias /unsub "/var/www/unsub/site_index/"
        </VirtualHost>

    When I navigate to ibusinesscentral.com/unsub/?randomquerystring, I am directed to the correct folder. If I remove the query string and navigate to ibusinesscentral.com/unsub/, I am taken to the directory in the document root. The unsub directory is a Zend application, and I need to be able to navigate to different URL paths like ibusinesscentral.com/unsub/unenroll?querystring.

    I have tried using AliasMatch instead of Alias. I have also tried adding a slash after the unsub portion of the Alias record, and have not had any luck to this point. Thanks in advance for any assistance.
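    A hedged guess at a likely missing piece: a Zend front controller normally relies on rewrite rules in the application's public directory, and those only take effect if Apache is allowed to apply them under the aliased path (or if equivalent rules are placed in the vhost). A sketch of a Directory block for the alias above; the rewrite rules are the stock Zend Framework ones, so they are an assumption about this particular app:

        Alias /unsub "/var/www/unsub/site_index/"
        <Directory "/var/www/unsub/site_index/">
            Options FollowSymLinks
            AllowOverride All        # let the app's own .htaccess rewrite rules apply
            Order allow,deny
            Allow from all

            # Or inline the standard Zend Framework rules instead of relying on .htaccess:
            RewriteEngine On
            RewriteBase /unsub/
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^.*$ index.php [L]
        </Directory>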

    Read the article

  • How to get apache to look for files in different subfolders?

    - by prb
    I am definitely new to mod_rewrite stuff. Note: the URL is common, and all the folders and subfolders are on the same host. The URL a user uses to access their page is http://myurl.com/1234/filename.jpg. Here the name of the subfolder is an integer that is unique and generated dynamically by another application. The subfolder stores images specific to an individual user. The folder structure is as follows: main1 is the document root, and main2 is another folder within main1 (i.e. within the document root).

        /main1/1234/filename.jpg
        /main1/5678/filename.jpg
        /main1/2345/filename.jpg
        /main1/1212/filename.jpg
        /main1/main2/2367/filename.jpg
        /main1/main2/8790/filename.jpg
        /main1/main2/9966/filename.jpg

    So, I want to write a rewrite rule so that if a user types in http://myurl.com/1234/filename.jpg, the rewrite rule will look for where the file is and serve the request. For the request http://myurl.com/1234/filename.jpg the actual page is located at /main1/1234/filename.jpg, and the rule needs to serve the page from that folder. If another user makes a request such as http://myurl.com/9966/filename.jpg, it should serve the page from the following destination: /main1/main2/9966/filename.jpg.

    Please let me know if the question is still not clear. This is what I have done so far, and it does not work at all:

        RewriteCond {DOCUMENT_ROOT}/%{REQUEST_FILENAME} -f
        RewriteRule ^(.*)$ {DOCUMENT_ROOT}/$1 [L]
        RewriteCond {DOCUMENT_ROOT}/main2/%{REQUEST_FILENAME} -f
        RewriteRule ^(.*)$ {DOCUMENT_ROOT}/main2/$1 [L]

    Any help is really appreciated.
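    For what it's worth, the attempt above is close, but server variables need the % prefix ({DOCUMENT_ROOT} should be %{DOCUMENT_ROOT}), and %{REQUEST_FILENAME} already includes the document root, so combining the two doubles the path. A hedged sketch of rules that fall back to main2 when the file isn't found at the top level, assuming they live in the document root's .htaccess:

        RewriteEngine On

        # If the file exists directly under the document root, serve it as-is
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} -f
        RewriteRule ^ - [L]

        # Otherwise, if it exists under /main2, rewrite to that copy
        RewriteCond %{DOCUMENT_ROOT}/main2%{REQUEST_URI} -f
        RewriteRule ^(.*)$ /main2/$1 [L]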

    Read the article
