Search Results

Search found 1480 results on 60 pages for 'jav 000'.

Page 30 of 60

  • Nginx virtual server only partially working with Java application

    - by MFB
    Final Cut Server is a Java application (made by Apple) which launches from a web page. I have Nginx in front of this web server (amongst others) and, whilst the web server can be browsed externally, the Java app fails to launch correctly and throws the errors below. Can anyone offer clues as to what additional config I may have to add to Nginx to get this working? My existing Nginx config:

        user xxxx;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            # multi_accept on;
        }

        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            include /etc/nginx/conf/mime.types;
            default_type application/octet-stream;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            gzip on;
            gzip_disable "msie6";

            server {
                server_name _;
                return 444;
            }

            upstream fcs-site {
                server 10.10.5.20:8080;
            }

            server {
                listen 80;
                server_name example.com 10.10.5.90;
                access_log /var/log/nginx/fcs_access.log;
                error_log /var/log/nginx/fcs_error.log;
                location / {
                    proxy_set_header Host $host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header X-Forwarded-Proto $scheme;
                    client_max_body_size 10m;
                    client_body_buffer_size 128k;
                    proxy_connect_timeout 60s;
                    proxy_send_timeout 90s;
                    proxy_read_timeout 90s;
                    proxy_buffering off;
                    proxy_temp_file_write_size 64k;
                    proxy_pass http://fcs-site;
                    proxy_redirect off;
                }
            }

            upstream myapp-site {
                server 127.0.0.1:6543;
            }

            server {
                listen 80;
                server_name otherexample.com www.otherexample.com;
                rewrite ^ https://$server_name$request_uri? permanent;
            }

            server {
                listen 443;
                ssl on;
                ssl_certificate /etc/ssl/otherapp.crt;
                ssl_certificate_key /etc/ssl/otherapp.key;
                server_name otherexample.com www.otherexample.com;
                access_log /var/log/nginx/otherapp_access.log;
                error_log /var/log/nginx/other_error.log;
                location / {
                    proxy_set_header Host $host;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header X-Forwarded-Proto $scheme;
                    client_max_body_size 10m;
                    client_body_buffer_size 128k;
                    proxy_connect_timeout 60s;
                    proxy_send_timeout 90s;
                    proxy_read_timeout 90s;
                    proxy_buffering off;
                    proxy_temp_file_write_size 64k;
                    proxy_pass http://myapp-site;
                    proxy_redirect off;
                }
                location /static {
                    root /www;
                    expires 30d;
                    add_header Cache-Control public;
                    access_log off;
                }
            }

    Java errors:

        com.sun.deploy.net.FailedDownloadException: Unable to load resource: http://example.com:8080/FinalCutServer/FinalCutServer_mac.jnlp
            at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source)
            at com.sun.deploy.net.DownloadEngine._downloadCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source)
            at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source)
            at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
            at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
            at com.sun.javaws.Launcher.launch(Unknown Source)
            at com.sun.javaws.Main.launchApp(Unknown Source)
            at com.sun.javaws.Main.continueInSecureThread(Unknown Source)
            at com.sun.javaws.Main.access$000(Unknown Source)
            at com.sun.javaws.Main$1.run(Unknown Source)
            at java.lang.Thread.run(Thread.java:722)

        java.net.ConnectException: Operation timed out
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
            at java.net.Socket.connect(Socket.java:579)
            at java.net.Socket.connect(Socket.java:528)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:378)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:473)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:203)
            at sun.net.www.http.HttpClient.New(HttpClient.java:290)
            at sun.net.www.http.HttpClient.New(HttpClient.java:306)
            at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:995)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:931)
            at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:849)
            at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source)
            at com.sun.deploy.net.BasicHttpRequest.doRequest(Unknown Source)
            at com.sun.deploy.net.BasicHttpRequest.doGetRequest(Unknown Source)
            at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source)
            at com.sun.deploy.net.DownloadEngine._downloadCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResourceCacheEntry(Unknown Source)
            at com.sun.deploy.cache.ResourceProviderImpl.getResource(Unknown Source)
            at com.sun.javaws.Launcher.updateFinalLaunchDesc(Unknown Source)
            at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
            at com.sun.javaws.Launcher.prepareToLaunch(Unknown Source)
            at com.sun.javaws.Launcher.launch(Unknown Source)
            at com.sun.javaws.Main.launchApp(Unknown Source)
            at com.sun.javaws.Main.continueInSecureThread(Unknown Source)
            at com.sun.javaws.Main.access$000(Unknown Source)
            at com.sun.javaws.Main$1.run(Unknown Source)
            at java.lang.Thread.run(Thread.java:722)
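
    A hedged observation: the stack trace shows Java Web Start fetching the JNLP from http://example.com:8080/... and timing out, i.e. the client is bypassing the port-80 proxy and connecting directly to port 8080, which presumably isn't reachable from outside. If Final Cut Server can't be told to publish port-80 URLs in its JNLP, one possible workaround (a sketch, assuming the upstream speaks plain HTTP on 8080 and the firewall permits the port) is to have Nginx also listen on 8080 and proxy those requests to the same upstream:

        server {
            listen 8080;
            server_name example.com 10.10.5.90;

            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass http://fcs-site;   # same upstream as the port-80 server block
            }
        }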

    Read the article

  • Symbolic links and 7zip

    - by Fire Lancer
    I'm trying to compress a folder into a .7z archive. This folder contains symbolic links to other things outside the folder (both directories and files). Apparently 7zip just archives the link itself, which is not what I intended. Is there a way to tell 7zip that I want it to archive the things the links point to, not the links themselves? So if there is a symlink named "foo" which points to "C:\stuff\foo", I want the contents of "C:\stuff\foo" included in the archive in place of foo, not a "0 byte" symlink. Failing that, is there any reasonable way around it, apart from adding the files and folders in question individually? Counting everything reached through the symlinks there are around 10,000 files, most of them behind symlinks, so adding them all one by one would take hours. I'm thinking maybe a program that creates a staging folder with the real files in it and then passes that to 7zip (see the sketch below), or just an archiver that handles symlinks better?
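
    A possible staging approach, heavily hedged: copy the tree into a temporary folder with robocopy and archive that copy with 7-Zip. Whether robocopy dereferences symbolic links by default depends on the Windows/robocopy version (newer builds add /SL to copy links as links, implying the default copies targets), so verify the behaviour on a small test folder first. The paths below are illustrative:

        rem Copy the folder into a staging folder (confirm link handling on your
        rem robocopy version), then archive the staging copy instead.
        robocopy "C:\myfolder" "C:\staging" /E
        7z a archive.7z "C:\staging\*"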

    Read the article

  • unknown codec 0x0064 (G726 ADPCM)

    - by Colin Pickard
    I've got a number of audio recordings which I can't play back. It appears to be a case of a missing codec, but I can't locate the codec in question. VLC will not play the files. ffmpeg gives the error "Unsupported codec (id=0) for input stream 0". This is the output from MediaInfo:

        General
        CompleteName          :
        Format                : Wave
        FileSize/String       : 164 KiB
        Duration/String       : 1mn 24s
        OverallBitRate/String : 16.0 Kbps

        Audio
        Format                : ADPCM
        CodecID               : 64
        CodecID/Info          : G.726
        CodecID/Hint          : APICOM
        Duration/String       : 1mn 24s
        BitRate_Mode/String   : Constant
        BitRate/String        : 16.0 Kbps
        Channel(s)/String     : 1 channel
        SamplingRate/String   : 8 000 Hz
        BitDepth/String       : 2 bits
        StreamSize/String     : 164 KiB (100%)

    Can anyone help?
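
    A hedged suggestion: the container is a plain WAV with codec tag 0x0064 (G.726 ADPCM, "APICOM" flavour), so the missing piece is a G.726 decoder rather than anything exotic. Recent ffmpeg builds include one; if the build still won't map the tag on its own, forcing the decoder on the input is worth a try (whether your build/flavour decodes these particular files is an assumption to verify):

        # Force ffmpeg's G.726 decoder for the input and transcode to ordinary PCM
        ffmpeg -c:a g726 -i recording.wav recording-pcm.wav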

    Read the article

  • SVN does an unnecessary chmod on .svn temp files

    - by ???
    My working directory is on a TrueCrypt NTFS volume, mounted with umask 000, so I can read/write any file with no problem. But I can't run svn commands on it; for example 'svn update' shows the error:

        svn: Can't set permissions on '.svn/tempfile.8.tmp':

    strace svn up gives:

        ...
        chmod("sbin/.svn/tempfile.2.tmp", 0770) = -1 EPERM (Operation not permitted)
        fcntl64(3, F_GETFL)                     = 0x2 (flags O_RDWR)
        fcntl64(3, F_SETFL, O_RDWR|O_NONBLOCK)  = 0
        write(3, "( failure ( ( 1 76:Can't set per"..., 172) = 172
        fcntl64(3, F_GETFL)                     = 0x802 (flags O_RDWR|O_NONBLOCK)
        fcntl64(3, F_SETFL, O_RDWR)             = 0
        read(3, "( abort-edit ( ) ) ( failure ( ("..., 4096) = 191
        gettimeofday({1276661368, 382789}, NULL) = 0
        lstat64("sbin", {st_mode=S_IFDIR|0770, st_size=0, ...}) = 0
        select(0, NULL, NULL, NULL, {0, 1000})  = 0 (Timeout)
        write(2, "svn: Can't set permissions on 's"..., 82svn: Can't set permissions on 'sbin/.svn/tempfile.2.tmp': Operation not permitted) = 82
        close(3)                                = 0

    So the error occurs when svn calls chmod on some temp files. But chmod is disallowed on TrueCrypt volumes, and here it is simply unnecessary. Can I bypass the chmod library calls when running svn on TrueCrypt volumes?
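
    One common workaround, offered as a sketch rather than an SVN feature: since svn reaches chmod() through libc, an LD_PRELOAD shim that turns the call into a no-op lets the update proceed on volumes where permission changes are forbidden anyway. The file and function names below are illustrative:

        /* noop_chmod.c -- build with: gcc -shared -fPIC -o noop_chmod.so noop_chmod.c
         * run with:                   LD_PRELOAD=/path/to/noop_chmod.so svn update
         * Silently pretends every chmod()/fchmod() succeeded, so only use it for
         * commands run inside the TrueCrypt mount.
         */
        #include <sys/stat.h>
        #include <sys/types.h>

        int chmod(const char *path, mode_t mode)
        {
            (void)path; (void)mode;
            return 0;               /* report success without changing anything */
        }

        int fchmod(int fd, mode_t mode)
        {
            (void)fd; (void)mode;
            return 0;
        }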

    Read the article

  • site timing out when under heavy load

    - by naunu
    My client sends out eblasts at 8am Monday/Wednesday/Friday. Between 8:15 and 8:45 the site becomes extremely slow and many user sessions time out. My setup:

        MediaTemple VE, 2 GB dedicated RAM (burst to 3)
        Ubuntu 9.10
        Apache2-mpm-worker
        PHP5.3-fcgi
        MySQL 5

    I recently tried to remedy the problem by switching from apache2-mpm-prefork to mpm-worker, but am still having the same issues. My Apache settings are:

        Timeout 100
        KeepAlive On
        MaxKeepAliveRequests 100
        <IfModule mpm_worker_module>
            StartServers         12
            MinSpareThreads      25
            MaxSpareThreads      96
            ThreadLimit          96
            ThreadsPerChild      25
            MaxClients          225
            MaxRequestsPerChild   0
        </IfModule>

    The site is only getting ~10,000 page views during the 8am-9am hour, which I don't think should be stressing the server too badly. Maybe it is an error in the PHP settings, or bandwidth per unit time, or the site has outgrown the server? Any suggestions would be very helpful - as you can see I've given it a good go before looking for help (installed mpm-worker). Also, can anyone suggest some free load testing software, or a tutorial on mod_status? Thank you
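
    On the load-testing question, a minimal sketch: mod_status ships with Apache and ApacheBench (ab) comes with apache2-utils, so both are free and already close at hand. The location path and client IP below are placeholders (Apache 2.2 syntax, matching Ubuntu 9.10):

        # In the Apache config: expose a status page, restricted to one client
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

        # From a shell: 1,000 requests, 50 concurrent, against one representative page
        ab -n 1000 -c 50 http://www.example.org/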

    Read the article

  • .htaccess has no effect

    - by Primož Kralj
    I am losing hours on this (should-be) simple task. I want to restrict access to my website, which is on my server in /var/www/. I've created the /etc/apache2/passwords file with htpasswd successfully (user primoz). I've put .htaccess in /var/www/ and this is its content:

        AuthType Basic
        AuthName "RestrictedFiles"
        AuthBasicProvider file
        AuthUserFile /etc/apache2/passwords
        Require user primoz

    My website is still accessible. I also tried editing /etc/apache2/sites-enabled/000-default - changing the line AllowOverride None to AllowOverride All. Needless to say, it didn't make any difference. Should restricting access really be this frustrating? EDIT: /etc/apache2/httpd.conf is empty by default because I run the server on Debian, which uses apache2.conf instead. Here is the whole apache2.conf.
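
    A hedged checklist, since the directives themselves look fine: AllowOverride has to be raised in the <Directory> block that actually covers /var/www/ in the enabled vhost (not only in a <Directory /> block), and Apache must be reloaded afterwards or the change is silently ignored. A sketch of the relevant part of the Debian default vhost (Apache 2.2 syntax):

        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride AuthConfig
            Order allow,deny
            Allow from all
        </Directory>

        # then
        /etc/init.d/apache2 reload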

    Read the article

  • wget not converting links

    - by acrosman
    I am trying to mirror a fairly large site (20,000+ pages) prior to a major overhaul. Basically, I need a backup before cutting over to the new one in case we forgot something we need (we'll have about 1,000 pages at launch). The site is run on a CMS that I cannot easily extract usable data from, so I'm trying to make the copy with wget. My problem is that wget does not appear to be actually converting links, despite the presence of --convert-links or -k in the command. I've tried a couple of different combinations of flags, but I haven't been able to get the output I need. The most recent failed attempt was:

        nohup wget --mirror -k -l10 -PafscSnapshot --html-extension -R *calendar* -o wget.log http://www.example.org &

    I've also tried including --backup-converted, and --convert-links instead of -k (not that it should have mattered). I've done it with and without -P and -l; again, not that they should matter. The result is files that still have links like: http://www.example.org//ht/d/sp/i/17770
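
    Two hedged observations about how --convert-links behaves (per the wget manual): it rewrites a link only when the target document was actually downloaded, and turns everything else into an absolute URL, which matches the output above; and the conversion pass runs only after the whole crawl finishes, so an interrupted or still-running mirror has no converted links at all. It can also help to quote the -R pattern so the shell doesn't expand it. A sketch of the same command with those tweaks:

        nohup wget --mirror --convert-links --page-requisites --html-extension \
            -l 10 -P afscSnapshot -R '*calendar*' -o wget.log http://www.example.org &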

    Read the article

  • Dummy/default page for apache

    - by Ency
    I'm trying to set up a default page for my apache2, for the following cases: the user accesses http://IP_Address instead of the hostname, or the requested protocol (HTTP/HTTPS) is not available (e.g. only https://domain.com exists). Currently I've got something like this:

        <VirtualHost eserver:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/local/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
        </VirtualHost>

    I think it works well. I'm trying to do a similar thing for HTTPS, but it does not work:

        <VirtualHost eserver:443>
            SSLCertificateKeyFile /etc/apache2/ssl/dummy.key
            SSLCertificateFile /etc/apache2/ssl/dummy.crt
            SSLProtocol all
            SSLCipherSuite HIGH:MEDIUM
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            ServerSignature Off
        </VirtualHost>

    My default is placed in sites-enabled as the first one, 000-default. I don't care about certificate validity while the default page is being accessed; my goal is that the user is not shown a different HTTPS page when one of the cases above applies.
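
    One hedged observation about the pasted 443 vhost: it never turns the SSL engine on, and it has no DocumentRoot, so Apache has nothing to serve there even when the block is selected. A sketch of the missing pieces (the directives are standard mod_ssl; whether this is the whole problem is an assumption):

        <VirtualHost eserver:443>
            SSLEngine on
            SSLCertificateKeyFile /etc/apache2/ssl/dummy.key
            SSLCertificateFile /etc/apache2/ssl/dummy.crt
            SSLProtocol all
            SSLCipherSuite HIGH:MEDIUM
            DocumentRoot /var/www/local/
            ...
        </VirtualHost>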

    Read the article

  • Users can't change password through OWA for Exchange 2010

    - by Rémy Roux
    Here's our problem: users who want to change their password through OWA get the error "The password you entered doesn't meet the minimum security requirements.", even when they are respecting the minimum security requirements. With these settings, we get the error:

        Enforce password history: 1 passwords remembered
        Maximum password age: 185 days
        Minimum password age: 1 day
        Minimum password length: 7 characters
        Password must meet complexity requirements: enabled

    With these test settings, we don't get the error:

        Enforce password history: not defined
        Maximum password age: not defined
        Minimum password age: not defined
        Minimum password length: not defined
        Password must meet complexity requirements: not defined

    People can then change their password, but there is no security any more! Changing just one parameter of the GPO, for example "Enforce password history", brings back the error. Here's our server configuration:

        Windows Server 2008 R2
        Exchange Server 2010, version 14.00.0722.000

    If anybody has a clue it would be very helpful!
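
    A hedged suggestion for narrowing it down: check which password policy is actually in effect on the domain, since the change request is ultimately validated against the effective domain policy rather than whatever GPO was edited last. A quick read-out (standard built-in command, run on the Exchange/CAS box):

        net accounts /domain

    If the values it reports differ from the GPO you edited, the policy isn't being applied where the change is evaluated, and the OWA message is just a generic rejection.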

    Read the article

  • Multi Thread Rsync Transfer

    - by reefine
    For some reason, when running a single rsync command I am getting only 1-2 MB/sec, even though the two servers are both connected to 1 Gbps ports.

        rsync -v --progress -e ssh /backup/mysqldata/mysql-bin.000199 [email protected]:/secondary/mysqldata/mysqldata/mysql-bin.000199

    I have over 800 GB of data to transfer, split among 500 or so files all starting with mysql-bin.000*. I've found that running 25-30 rsyncs simultaneously from separate SSH windows gets me upwards of 25 MB/sec, but it would take hours to start them all manually. Is there any way to get 25 MB/sec from a single rsync command?
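
    A single rsync is a single stream, so per-flow limits (and ssh cipher overhead) cap it; a hedged middle ground is to keep one command but let it fan out into several parallel rsyncs with xargs. A sketch, with the destination exactly as pasted in the question and the parallelism (-P 8) as an illustrative number:

        cd /backup/mysqldata
        ls mysql-bin.000* | xargs -P 8 -I{} \
            rsync -a --progress -e ssh {} [email protected]:/secondary/mysqldata/mysqldata/

    This keeps 8 transfers in flight at a time and starts the next file as soon as one finishes.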

    Read the article

  • How to choose size for a cloud server (rackspace)

    - by Emil
    We're going to test the Rackspace cloud next week to see how it works with our web app. It's a LAMP environment with a lot of MySQL databases. How do I choose the "right" server size? On Rackspace I can choose slices with 256, 512, 1024, 2048, 4096 MB of memory, etc. Right now we don't have a lot of traffic (approx. 1,000 visitors/day), but I thought the whole "cloud" idea was to not be limited and to auto-scale. Update: What I'm looking for now is a specification of what I need. I know it's too complex. I'm looking for examples, case studies etc. It would be interesting to hear something like "Yes, we're serving 10,000 daily requests without spikes on a LAMP stack with only one 2 GB slice".

    Read the article

  • Apache2 proxypass

    - by gatsby
    I'm trying to figure out why my apache2 reverse proxy doesn't work; I hope someone can clarify. I'm using an Apache server as a gateway with ProxyPass (its IP is 10.184.1.2). These are the ProxyPass directives I inserted in the 000-default config file:

        ProxyPass / http://192.168.102.31/
        ProxyPassReverse / http://192.168.102.31/

    The host 192.168.102.31 is an internal IP on a subnet which is not reachable directly by clients, only by the Apache gateway. When I try to access an address such as http://apache_gateway_name/dir, I see the client trying to reach the 192.168.102.31 address directly, and of course a timeout occurs. Can someone help? Best regards
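
    A hedged reading of the symptom: ProxyPassReverse only rewrites Location/Content-Location headers in redirects; if the backend application builds absolute links (or redirects to its own address), the client will still be sent to 192.168.102.31. A sketch of a slightly fuller gateway config to compare against (Debian-style commands and standard mod_proxy directives; whether ProxyPreserveHost helps depends on how the backend generates its URLs):

        # make sure the proxy modules are actually loaded
        a2enmod proxy proxy_http

        # in the 000-default vhost
        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass / http://192.168.102.31/
        ProxyPassReverse / http://192.168.102.31/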

    Read the article

  • AWS EC2: How to determine whether my EC2/scalr AMI was hacked? What to do to secure it?

    - by Niro
    I received notification from Amazon that my instance tried to hack another server. There was no additional information besides a log dump. The original report:

        Destination IPs:
        Destination Ports:
        Destination URLs:
        Abuse Time: Sun May 16 10:13:00 UTC 2010
        NTP: N
        Log Extract:
        External 184.xxx.yyy.zzz, 11.842.000 packets/300s (39.473 packets/s),
        5 flows/300s (0 flows/s), 0,320 GByte/300s (8 MBit/s)

    (184.xxx.yyy.zzz is my instance's IP.) How can I tell whether someone has penetrated my instance? What are the steps I should take to make sure my instance is clean and safe to use? Is there some intrusion detection technique or log that I can use? Any information is highly appreciated.
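
    A hedged starting point: a few standard commands give a quick picture of who has logged in and what is running; they can surface obvious signs of compromise, though they cannot prove the instance is clean (and once a compromise is confirmed, the usual advice is to rebuild from a trusted AMI rather than clean in place):

        last -a                      # recent logins and source addresses
        netstat -tunap               # open/listening sockets and owning processes
        ps auxf                      # full process tree
        crontab -l; ls /etc/cron.*   # scheduled jobs that shouldn't be there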

    Read the article

  • I can not watch any videos on youtube

    - by hamed
    I cannot watch any videos on YouTube with any browser (Chrome 20, Firefox 13, IE 8). I get this error: "An error occurred. Please try again later." My OS is Windows 7 (32-bit) and the antivirus is ZoneAlarm Extreme Security 10.1.079.000. I installed Flash Player 11, but with no success. I exited ZoneAlarm to watch a video, but again no success and the error is the same. I don't know what the problem is. I searched a lot but didn't find a solution. I'd appreciate any ideas.

    Read the article

  • Mod_rewrite not working on ISPConfig 3 Server

    - by Akahadaka
    Problem: I recently migrated a Drupal site from a shared hosting server to my own VM. Everything appears to work correctly except clean URLs. My VM setup:

        Ubuntu 10.04
        LAMP
        ISPConfig 3

    From reading a number of Drupal forums I've tried the following, in this order:

        1. Checked that mod_rewrite is installed and enabled.
        2. Changed PHP from FastCGI to mod_php (though I'd prefer to use FastCGI or suPHP to avoid having tmp/files folders with 777 permissions).
        3. Changed the redirect type to L in ISPConfig (Sites > domain.com > Redirect).
        4. Changed /etc/apache2/sites-enabled/000-default:

            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                ...
            </Directory>

    I'm not sure about points 3 and 4; I do want all domains to be able to use mod_rewrite out of the box. Question: have I done something wrong, or am I missing a step? Ultimately I would like to use FastCGI and have clean URLs working on all ISPConfig 3 domains without making changes to individual domain settings. Any ideas appreciated; I'll try them all.
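
    A hedged note on where ISPConfig 3 usually needs the override: each site's document root lives under its own client directory with its own generated vhost and <Directory> block, so changing 000-default alone may not affect those sites. Enabling rewrite globally and raising AllowOverride for the site docroot (via the vhost template or the site's "Apache Directives" field) is the usual route; the path below is the typical ISPConfig 3 layout and should be adjusted to your install:

        a2enmod rewrite

        # per site (illustrative path)
        <Directory /var/www/clients/client1/web1/web>
            AllowOverride All
        </Directory>

        /etc/init.d/apache2 restart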

    Read the article

  • OpenLDAP RHEL 6

    - by AndyM
    Hi all, I've been configuring OpenLDAP on RHEL 6 and it seems you have to run the following to rebuild the config directories. I'm OK with that, but my issue is: say I want to change the server password - do I have to go through the whole process every time I change the config? Is there a way of changing the slapd config after it has been built using the RHEL 6 method? Below is the advice I've found on the net, from http://www.linuxtopia.org/online_books/rhel6/rhel_6_migration_guide/rhel_6_migration_ch07s03.html

    This example assumes that the file to convert from the old slapd configuration is located at /etc/openldap/slapd.conf and the new directory for OpenLDAP configuration is located at /etc/openldap/slapd.d/.

        # Remove the contents of the new /etc/openldap/slapd.d/ directory:
        rm -rf /etc/openldap/slapd.d/*

        # Run slaptest to check the validity of the configuration file and
        # specify the new configuration directory:
        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d

        # Configure permissions on the new directory:
        chown -R ldap:ldap /etc/openldap/slapd.d
        chmod -R 000 /etc/openldap/slapd.d
        chmod -R u+rwX /etc/openldap/slapd.d
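
    A hedged answer sketch: with the cn=config (slapd.d) backend the configuration is itself an LDAP tree, so individual settings such as the rootdn password can be changed online with ldapmodify instead of regenerating slapd.d. The database entry name ({2}bdb) and the ldapi:/// EXTERNAL bind are assumptions to adjust to your directory:

        # generate a new hash first
        slappasswd

        # rootpw.ldif
        dn: olcDatabase={2}bdb,cn=config
        changetype: modify
        replace: olcRootPW
        olcRootPW: {SSHA}paste-the-generated-hash-here

        # apply it
        ldapmodify -Y EXTERNAL -H ldapi:/// -f rootpw.ldif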

    Read the article

  • How to estimate a server specifications for this particular system? [on hold]

    - by Alvaro Fallas
    I'm working on a college project and I'm supposed to specify the server hardware to host a system. The system is a kind of social network, and it is supposed to hold around 100,000 users in the first year; it must also be able to handle 1,000 users working at the same time. This is the first time I've been asked to do something like this, so I hope you can give me a hand, because I feel a little lost. The system's database is MySQL. I found some server configurations offered by Amazon Web Services, but due to lack of experience I don't know which of them is better for my system. I hope you can help me.

    Read the article

  • HP Laserjet "maintenance interval" vs "fuser life"

    - by marienbad
    I posted a question about the LaserJet 8100DN earlier here: http://serverfault.com/questions/139043/buying-an-old-laser-printer-what-will-need-to-be-replaced and from doing some more research I have a new question. I found the "maintenance interval" - "the interval at which you should install a maintenance kit" (which is a fuser and rollers) - and it is... 350,000 pages. BUT, when I look at the specs for an HP 8100 fuser, it says the fuser has a life span of 150,000 pages. What gives? Will the fuser go bad after 150,000 pages or 350,000? ==== BTW, I hope it's OK to ask another similar question in a new thread -- I'm just following instructions from my thread on the topic at Meta.

    Read the article

  • Need Help with fixing permissions in mounted Drive

    - by Master
    I have tried a lot but my problem is still not solved. I have a partition called Server and inside it I have 5 folders, like:

        Folder 1
        Folder 2
        Folder 3

    I am mounting the drive on startup using the following fstab line, as some senior members advised, and it works, but with some problems:

        /dev/sdb1  /media/Server  ntfs  defaults,umask=006,fmask=000,dmask=007,uid=1000,gid=1001  0  0

    The problem is that with this line the same permissions are applied to all the folders (Folder 1, Folder 2, Folder 3). But I want only Folder 3 to be publicly readable and writable, while all the others should be private and no one else should have access to them. How can I achieve that?
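
    A hedged sketch of the usual approach: umask/fmask/dmask set at mount time apply to the whole volume, so per-folder differences need the ntfs-3g driver's permissions support, after which ordinary chmod works per directory. Option availability varies between ntfs-3g versions, so treat this as a starting point:

        # /etc/fstab - let ntfs-3g honour standard Unix permissions
        /dev/sdb1  /media/Server  ntfs-3g  defaults,permissions,uid=1000,gid=1001  0  0

        # after remounting, set modes per folder, e.g.:
        chmod 700 "/media/Server/Folder 1" "/media/Server/Folder 2"
        chmod 777 "/media/Server/Folder 3"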

    Read the article

  • Replacing files in a folder structure with files from an unsorted folder

    - by Andrew
    I have over 50,000 PDFs organized into subfolders under a folder called PDFACT. I needed to compress these files, so I ran them through Adobe to batch-compress them, and this worked - except Adobe could only output the files without their folder structure. So basically I started with 50,000 PDFs organized in a folder with hundreds of subfolders, and I ended up with one folder containing 50,000 compressed PDFs, just in alphabetical order. Somehow I need to replace all the original PDFs with their compressed copies. Let me give an example: in the folder PDFACT we have the following file:

        C:\PDFACT\BIG DINNER\BILL\NEWESTBILL.PDF

    ... and in the output folder that Adobe created we have just:

        C:\COMPRESSED_PDF_FOLDER\NEWESTBILL.PDF

    This copy is smaller than the one in PDFACT and has the same name, but it is just lumped in with every other PDF; the folder structure and subfolders are gone. Is there any way to replace all the larger uncompressed PDFs inside the original folder structure with their now-compressed counterparts?
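
    A hedged sketch of one way to do the swap, assuming every filename is unique across the whole tree (if two subfolders contain identically named PDFs, both would receive the same compressed copy, so check that first and test on a copy of the data). It walks the original structure and overwrites each PDF with the same-named file from the flat folder; written for a Unix-style shell (e.g. Git Bash or Cygwin on Windows), with the paths taken from the question:

        cd "/c/PDFACT"
        find . -type f -iname '*.pdf' | while read -r f; do
            name=$(basename "$f")
            if [ -f "/c/COMPRESSED_PDF_FOLDER/$name" ]; then
                cp "/c/COMPRESSED_PDF_FOLDER/$name" "$f"
            fi
        done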

    Read the article

  • Webserver: Performance impact when storing session files on /dev/shm

    - by GetFree
    I have a website running on a typical setup: Linux, Apache, PHP, MySQL. What's not typical about it is that it gets a lot of traffic (400,000+ visits a day), so efficiency is becoming more and more important to me. I'm constantly looking for things I could optimize and, right now, my attention is focused on PHP's session files. There is a huge number of session files constantly being read and created in the /tmp directory. So my question is: is it a good idea to store the session files in /dev/shm (tmpfs) in order to speed things up a little bit?
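
    If it helps, a minimal sketch of how that would be configured with PHP's standard file-based session handler (php.ini directives; the directory name is illustrative and must exist and be writable by the web server user - and note that tmpfs contents are lost on reboot, so active sessions would not survive a restart):

        session.save_handler = files
        session.save_path = "/dev/shm/php_sessions"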

    Read the article

  • SQL 2008 Memory Usage

    - by Danilo Brambilla
    I have SQL Server 2008 (version 10.0.1600) running on a Windows Server 2008 R2 Enterprise server with 8 GB of physical RAM. If I open Task Manager, I can see in the 'Physical Memory' section of the 'Performance' tab that only 340 MB are available out of 8191 total, but I can't see any process using that amount of memory. Please note SQL Server's memory is limited to 6 GB (Maximum Server Memory = 6000). If I open Sysinternals Process Explorer, I can see that the sqlservr.exe process has:

        Private Bytes: 227,000 K
        Working Set:   140,000 K
        Virtual Size:  8,762,000 K

    What does this mean? Is there any way to free up this memory for other processes? Why does Virtual Size figure as allocated memory? I thought that Virtual Size was 'reserved memory' only.
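
    A hedged explanation attempt: when SQL Server holds memory via locked pages (or AWE-style allocations), that memory does not show up in the working set that Task Manager and Process Explorer report, so the buffer pool can account for the "missing" gigabytes without any process appearing to own them. One way to see what SQL Server itself thinks it has committed (DMV available in SQL Server 2008):

        SELECT cntr_value / 1024 AS total_server_memory_mb
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Total Server Memory (KB)';

    If that figure is close to the 6,000 MB cap, the memory is simply the buffer pool doing its job; lowering "Maximum Server Memory" is the supported way to give some of it back to the OS.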

    Read the article

  • How to exclude a specific URL from basic authentication in Apache?

    - by ripper234
    Two scenarios.

    Directory: I want my entire server to be password-protected, so I included this directory config in my sites-enabled/000-default:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            AuthType Basic
            AuthName "Restricted Files"
            AuthUserFile /etc/apache2/passwords
            Require user someuser
        </Directory>

    The question is: how can I exclude a specific URL from this?

    Proxy: I found that the above password protection doesn't apply to mod_proxy, so I added this to my proxy.conf:

        <Proxy *>
            Order deny,allow
            Allow from all
            AuthType Basic
            AuthName "Restricted Files"
            AuthUserFile /etc/apache2/passwords
            Require user someuser
        </Proxy>

    How do I exclude a specific proxied URL from the password protection? I tried adding a new segment:

        <Proxy http://myspecific.url/>
            AuthType None
        </Proxy>

    but that didn't quite do the trick.
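
    A hedged sketch of the usual Apache 2.2 answer: rather than AuthType None, mark the URL as satisfiable by host-based access alone with Satisfy Any plus Allow from all, which makes the Basic auth requirement optional for that path. The /public path is a placeholder; the proxied URL is the one from the question:

        # exclude a normal path
        <Location /public>
            Order allow,deny
            Allow from all
            Satisfy Any
        </Location>

        # exclude a proxied URL
        <Proxy http://myspecific.url/>
            Order allow,deny
            Allow from all
            Satisfy Any
        </Proxy>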

    Read the article
