Search Results

Search found 36645 results on 1466 pages for 'local content'.


  • Why can't I compare two Texture2Ds?

    - by Fiona
    I am trying to use an accessor, as that seems to be the only way to accomplish what I want to do. Here is my code (Game1.cs):

        public class GroundTexture
        {
            private Texture2D dirt;

            public Texture2D Dirt
            {
                get { return dirt; }
                set { dirt = value; }
            }
        }

        public class Main : Game
        {
            public static Texture2D texture = tile.Texture;
            GroundTexture groundTexture = new GroundTexture();
            public static Texture2D dirt;

            protected override void LoadContent()
            {
                Tile tile = (Tile)currentLevel.GetTile(20, 20);
                dirt = Content.Load<Texture2D>("Dirt");
                groundTexture.Dirt = dirt;
                Texture2D texture = tile.Texture;
            }

            protected override void Update(GameTime gameTime)
            {
                if (texture == groundTexture.Dirt)
                {
                    player.TileCollision(groundBounds);
                }
                base.Update(gameTime);
            }
        }

    I removed irrelevant information from the LoadContent and Update functions. On the line

        if (texture == groundTexture.Dirt)

    I am getting the error:

        Operator '==' cannot be applied to operands of type 'Microsoft.Xna.Framework.Graphics.Texture2D' and 'Game1.GroundTexture'

    Am I using the accessor correctly? And why do I get this error? Dirt is a Texture2D, so the two should be comparable. This uses a few functions from a program called Realm Factory, which is a tile editor. The numbers "20, 20" are just a sample from the level I made; tile.Texture returns the sprite, which here is the content item Dirt.png. Thank you very much! (I posted this on the main Stack Overflow site, but after several days didn't get a response. Since it has to do mainly with Texture2D, I figured I'd ask here.)
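    Worth noting when reading the error: the compiler reports the operands as Texture2D and Game1.GroundTexture, so the comparison it actually saw was against the wrapper object itself, not its Dirt property. A minimal sketch of the distinction (class names follow the post; everything else is illustrative, not from the original code):

        using Microsoft.Xna.Framework.Graphics;

        public class GroundTexture
        {
            // Auto-property form of the accessor from the post.
            public Texture2D Dirt { get; set; }
        }

        public static class ComparisonSketch
        {
            // Texture2D is a reference type, so == compiles and tests
            // reference equality: true only when both sides refer to the
            // same loaded texture object.
            public static bool SameTexture(Texture2D a, Texture2D b)
            {
                return a == b;
            }

            // Writing (t == g) here reproduces the reported error, because
            // no == operator exists between Texture2D and GroundTexture;
            // comparing against g.Dirt compiles.
            public static bool SameAsDirt(Texture2D t, GroundTexture g)
            {
                return t == g.Dirt;
            }
        }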


  • keymapping when ssh-ing from mac to linux

    - by Yair
    I'm using Lion to ssh -X to a Linux machine and work on some code that's located on it. I open up an editor on the remote machine (usually MATLAB) and program in it. My problem is that in Linux there is no concept of the Command key. So if I want to copy some text from a local window to the editor that runs on the remote machine, I need to do Command-C to copy and then Control-V to paste. This obviously drives me nuts. I was wondering if there is a way to change the keymapping such that the Command key is recognized as a Control key by the remote processes. Or is this something I need to change in my local (Mac) X configuration?
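    One commonly used approach, sketched here under the assumption that the remote editors are X11 clients displayed through the local X server: have X treat the keysyms it sees for the Command key as additional Control keys via xmodmap. The actual keysym on a given setup should be confirmed with xev first.

        ! ~/.Xmodmap - load with: xmodmap ~/.Xmodmap
        ! If xev reports the Command key as Meta_L/Meta_R instead of
        ! Super_L/Super_R, substitute those names below.
        remove mod4 = Super_L Super_R
        add control = Super_L Super_R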


  • How to remove Microsoft account settings from Windows 8?

    - by Stevie G
    When installing Windows 8 for the first time, I did not create a Microsoft account and just installed as a local user. However, I recently updated to Windows 8.1, and it forces you to use a Microsoft account. I did not want to create an account, so one of my friends used his and I logged in. After logging in, all of the friend's details carried over: apps, wallpaper, lock screen, search behavior; when I use Search I see the friend's Facebook friends popping up. It is really annoying. How can I remove all of this excess? I have logged out of the Microsoft account and am just using the local user, but these problems have persisted. Thanks


  • PowerShell - how to set multiple actions on a Get-ADUser "dataset"

    - by Patrick Pellegrino
    I'm trying to run a script that modifies the password for multiple AD user accounts, enables the accounts, and forces a password change at next logon. I use this code, but it doesn't work:

        Get-ADUser -Filter * -SearchScope Subtree -SearchBase "OU=myou,OU=otherou,DC=mydc,DC=local" |
            Set-ADAccountPassword -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "NewPassord" -Force) |
            Enable-ADAccount |
            Set-ADUser -ChangePasswordAtLogon $true

    If I run the Get-ADUser line with ONLY one of the other lines, it runs fine, e.g.:

        Get-ADUser -Filter * -SearchScope Subtree -SearchBase "OU=myou,OU=otherou,DC=mydc,DC=local" | Enable-ADAccount

    Where am I wrong? I'm new to PowerShell, so I'm probably misunderstanding something.
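    A likely culprit, sketched under the assumption the cmdlets behave as in the stock ActiveDirectory module: Set-ADAccountPassword and Enable-ADAccount do not emit the user object to the pipeline by default, so every stage after the first pipe receives nothing. Processing each user explicitly sidesteps that:

        # Hedged sketch: do all three steps per user inside ForEach-Object,
        # so nothing depends on the cmdlets re-emitting the object.
        # (-PassThru on the middle cmdlets is another way to keep it flowing.)
        $newPassword = ConvertTo-SecureString -AsPlainText "NewPassord" -Force

        Get-ADUser -Filter * -SearchScope Subtree -SearchBase "OU=myou,OU=otherou,DC=mydc,DC=local" |
            ForEach-Object {
                Set-ADAccountPassword -Identity $_ -Reset -NewPassword $newPassword
                Enable-ADAccount -Identity $_
                Set-ADUser -Identity $_ -ChangePasswordAtLogon $true
            }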


  • Test Tomcat for SSL renegotiation vulnerability

    - by Jim
    How can I test if my server is vulnerable to SSL renegotiation? I tried the following (using OpenSSL 0.9.8j-fips 07 Jan 2009):

        openssl s_client -connect 10.2.10.54:443

    I see it connects, it brings the certificate chain, it shows the server certificate, and last:

        SSL handshake has read 2275 bytes and written 465 bytes
        ---
        New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
        Server public key is 1024 bit
        Secure Renegotiation IS supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : DHE-RSA-AES256-SHA
            Session-ID: 50B4839724D2A1E7C515EB056FF4C0E57211B1D35253412053534C4A20202020
            Session-ID-ctx:
            Master-Key: 7BC673D771D05599272E120D66477D44A2AF4CC83490CB3FDDCF62CB3FE67ECD051D6A3E9F143AE7C1BA39D0BF3510D4
            Key-Arg   : None
            Start Time: 1354008417
            Timeout   : 300 (sec)
            Verify return code: 21 (unable to verify the first certificate)

    What does "Secure Renegotiation IS supported" mean? That SSL renegotiation is allowed? Then I did but did not get an exception or get the certificate again:

        verify error:num=20:unable to get local issuer certificate
        verify return:1
        verify error:num=27:certificate not trusted
        verify return:1
        verify error:num=21:unable to verify the first certificate
        verify return:1
        HTTP/1.1 200 OK
        Server: Apache-Coyote/1.1
        Content-Type: text/html;charset=ISO-8859-1
        Content-Length: 174
        Date: Tue, 27 Nov 2012 09:13:14 GMT
        Connection: close

    So is the server vulnerable to SSL renegotiation or not?
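    For context, hedged to what the s_client output can actually show: "Secure Renegotiation IS supported" means the server negotiated the RFC 5746 secure-renegotiation extension, which is the fix for the 2009 renegotiation flaw (CVE-2009-3555), so that line by itself is good news rather than bad. A quick manual probe of client-initiated renegotiation:

        # After s_client connects, a line containing only a capital R asks
        # OpenSSL to renegotiate. A patched server typically refuses an
        # insecure renegotiation (handshake failure or dropped connection).
        openssl s_client -connect 10.2.10.54:443
        R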


  • Windows Server 2003 - passwordless access to \\myhost\ but not \\myhost.mydomain.net\

    - by Charles Duffy
    I have a Windows Server 2003 system on which passwordless access to local UNC paths is possible using the server's unqualified hostname or its IP address, but not via its FQDN -- even when the hosts file is used to map that FQDN directly to 127.0.0.1. That is:

        \\127.0.0.1\            - passwordless
        \\myhost\               - passwordless
        \\myhost.mydomain.com\  - brings up an authentication dialog

    Unfortunately, I have a local application trying to resolve UNC paths including the host's FQDN. I've tried resolving myhost.mydomain.com to 127.0.0.1 in both hosts and lmhosts, and calling ping myhost.mydomain.com at the command prompt gives the appearance that this resolution has taken effect; even so, attempting to open \\myhost.mydomain.com\ from Windows Explorer brings up a password prompt, while \\127.0.0.1\ does not. The system is using an OpenDirectory server (Apple's Kerberos+LDAP directory service) for authentication.
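    The symptom pattern (short name and loopback IP fine, FQDN challenged) matches the NTLM loopback-check behavior described in Microsoft KB896861, so one hedged avenue is the documented workaround, sketched below; whether it applies alongside the OpenDirectory setup is an assumption.

        :: Either whitelist the specific name (the narrower fix)...
        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 ^
            /v BackConnectionHostNames /t REG_MULTI_SZ /d myhost.mydomain.com
        :: ...or disable the loopback check entirely, then reboot.
        reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa ^
            /v DisableLoopbackCheck /t REG_DWORD /d 1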


  • Running .NET app from network share in Win 7?

    - by mlangille
    We have a .NET 1.1 application that we keep on a network share. We install the .NET Framework on the local PCs and also set full trust via the following:

        %windir%\Microsoft.NET\Framework\v1.1.4322\caspol -pp off -cg LocalIntranet_Zone FullTrust

    This has worked fine on all PCs to date; however, now we have a few new PCs with Win7 and the process no longer works. The app will run fine from a local drive in Win7, but running the networked copy results in a general exception error. Any ideas on how to get this to work under Win7?
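    One hedged thing to rule out (assuming the caspol command itself succeeds on the Win7 machines): the policy must be set for the runtime the app actually loads under on Win7, and on 64-bit Windows the 32-bit and 64-bit CLRs keep separate security policy under Framework and Framework64. A sketch granting trust to just the share; paths, share name, and version directories are illustrative:

        :: Adjust the version directory to whichever runtime actually
        :: hosts the 1.1 app on the Win7 machines.
        %windir%\Microsoft.NET\Framework\v1.1.4322\caspol -q -machine ^
            -addgroup 1 -url "file://fileserver/apps/*" FullTrust -name "NetApps"
        %windir%\Microsoft.NET\Framework64\v2.0.50727\caspol -q -machine ^
            -addgroup 1 -url "file://fileserver/apps/*" FullTrust -name "NetApps"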


  • Routing tables don't show ppp0 after 12.04 kernel upgrade to 3.5.0: Haier CE682 modem configuration

    - by ubunsteve
    I'm trying to get my Haier CE682 EVDO modem, model number 201e:1022, to work in Ubuntu 12.04, kernel 3.5.0-030500-generic #201207211835. I had it working in a previous 12.04 kernel, using compat-wireless and these instructions: http://zulkhamsyahmh.blogspot.com/2012/05/install-smartfren-haier-ce682-on-ubuntu.html. To get it working I had to edit the routing tables so that a ppp0 entry showed up, as suggested at http://www.linuxquestions.org/questions/slackware-14/wvdial-is-connecting-but-im-unable-to-do-anything-714861/. Network Manager doesn't work with this modem, so I use either wvdial or gpppon to connect to it, both of which work (after I run the command sudo modprobe usbserial vendor=0x201e product=0x1022). This is the output when I connect to the modem with gpppon:

        Using interface ppp0
        Connect: ppp0 <-- /dev/ttyUSB0
        sent [LCP ConfReq id=0x1 ]
        rcvd [LCP ConfAck id=0x1 ]
        rcvd [LCP ConfReq id=0x2 ]
        sent [LCP ConfAck id=0x2 ]
        sent [LCP EchoReq id=0x0 magic=0x819c86db]
        rcvd [CHAP Challenge id=0x1 <1ac8f12799e953967a3cc222c9254690, name = ""]
        sent [CHAP Response id=0x1 <6f12a903dc40915ca2761c17b87f8fbd, name = "smart"]
        rcvd [LCP EchoRep id=0x0 magic=0x0]
        rcvd [CHAP Success id=0x1 ""]
        CHAP authentication succeeded
        CHAP authentication succeeded
        sent [CCP ConfReq id=0x1 ]
        sent [IPCP ConfReq id=0x1 ]
        rcvd [IPCP ConfReq id=0x1 ]
        sent [IPCP ConfAck id=0x1 ]
        rcvd [CCP ConfReq id=0x1]
        sent [CCP ConfAck id=0x1]
        rcvd [CCP ConfRej id=0x1 ]
        sent [CCP ConfReq id=0x2]
        rcvd [IPCP ConfRej id=0x1 ]
        sent [IPCP ConfReq id=0x2 ]
        rcvd [CCP ConfAck id=0x2]
        rcvd [IPCP ConfNak id=0x2 ]
        sent [IPCP ConfReq id=0x3 ]
        rcvd [IPCP ConfAck id=0x3 ]
        not replacing existing default route via 192.168.3.1
        local IP address 10.191.248.154
        remote IP address 10.17.95.25
        primary DNS address 10.17.3.244
        secondary DNS address 10.17.3.245

    As you can see, there is a problem: "not replacing existing default route via 192.168.3.1". This is the output of route:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         192.168.3.1     0.0.0.0         UG    0      0        0 wlan0
        link-local      *               255.255.0.0     U     1000   0        0 wlan0
        192.168.3.0     *               255.255.255.0   U     2      0        0 wlan0

    I had tried these commands, which had previously worked in the earlier kernel:

        route del default
        route add default ppp0

    but that broke my wireless internet connection. I then added the default route back as shown above with:

        sudo route add default gw 192.168.3.1 wlan0

    So it seems I need to add or change the routing to show a ppp0 connection, but I don't know how to do that.
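    A hedged sketch of the two usual ways out, assuming the goal is simply that default traffic leave via ppp0 while wlan0 stays usable for the local LAN (addresses taken from the question):

        # Manual: swap the default route after pppd comes up. The wlan0
        # subnet route (192.168.3.0/24) remains in the table, so local
        # wireless traffic keeps working; only the default changes.
        sudo route del default gw 192.168.3.1 wlan0
        sudo route add default dev ppp0

        # Or let pppd do it: "defaultroute" plus the Debian/Ubuntu-specific
        # "replacedefaultroute" option in the peers file tells pppd to
        # displace the existing default route for the life of the connection.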


  • Large keepalive_requests values are severely slowing-down Nginx

    - by Gil
    When running a beacon (43-byte transparent pixel) load test on Nginx, we have tried several keepalive_requests values (from 10 to 100,000), and the optimal value seems to be 10. Here are the server HTTP headers of this tiny reply:

        HTTP/1.1 200 OK
        Server: nginx/1.5.6
        Date: Wed, 23 Oct 2013 12:39:45 GMT
        Content-Type: image/gif
        Content-Length: 43
        Last-Modified: Mon, 28 Sep 1970 06:00:00 GMT
        Connection: keep-alive

    Nginx is twice as slow with keepalive_requests 100000 as with keepalive_requests 10. Can you help us understand that result, or tell us what we are doing wrong? For reference, here is the nginx.conf file.
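    For readers unfamiliar with the directive, a hedged sketch of where it sits (values illustrative): keepalive_requests caps how many requests nginx serves over one keep-alive connection before closing it, and nginx's documentation notes that periodically closing connections is needed to free per-connection memory, which is one plausible reason very large caps can degrade rather than help.

        http {
            keepalive_timeout  65;    # idle time before nginx closes the connection
            keepalive_requests 10;    # requests per connection before a forced close
        }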


  • ISA caching with no cache-related info in response header

    - by Mike M. Lin
    From the documentation, I can't figure out what criteria an ISA server uses to decide whether a cached file is valid when no cache-related info is in the response header. Let's say I got this header in my response on Thu, 13 Jan 2011 18:43:35 GMT:

        HTTP/1.1 200 OK
        Date: Thu, 13 Jan 2011 18:43:35 GMT
        Server: Apache/2.2.3 (Red Hat)
        Content-Language: en
        X-Powered-By: Servlet/2.5 JSP/2.1
        Keep-Alive: timeout=15
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=ISO-8859-1

    There's no cache directive, no Last-Modified field, no Expires field. How will the ISA server decide for how long to cache this response?


  • How can I take an HDMI TV broadcast and overlay text in real time?

    - by ObligatoryMoniker
    Our company wants to have LCD TVs displaying TV with the ability to add an overlay, like a stock ticker at the bottom of the screen, where human resources can add content to be displayed. I have been trying to nail down the correct terminology for this and have come across terms like keying, compositing, live broadcast graphics presentation, and hardware overlay, but I don't know which of these truly refers to what I am trying to do. Blackmagic offers a product that seems like it can do what I am looking for, but it seems geared toward a totally different purpose than what I would be using it for. Compix also seems to have a product that would do what I need, but again it seems like killing a fly with a sledgehammer. How can I take an HDMI TV broadcast and overlay arbitrary content in real time?


  • Internet Explorer cannot display page from Apache with single SSL virtual host

    - by P.scheit
    I have a question that has come up in different forms elsewhere, but I still can't find the solution. My problem is that I'm hosting a site on Apache 2.4 on Debian with SSL, and Internet Explorer 7 on Windows XP shows "Internet Explorer cannot display the webpage". I have only ONE virtual host that uses SSL, but DIFFERENT virtual hosts that use HTTP. Here is my config for the site with SSL enabled (etc/sites-available/default-ssl is NOT linked):

        <VirtualHost xx.yyy.86.193:443>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            DocumentRoot "/var/local/www/my-certified-domain.de/current/www"
            Alias /files "/var/local/www/my-certified-domain.de/current/files"
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            <Directory "/var/local/www/my-certified-domain.de/current/www">
                AllowOverride All
            </Directory>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/www.my-certified-domain.de.crt
            SSLCertificateKeyFile /etc/ssl/private/www.my-certified-domain.de.key
            SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
            SSLCertificateChainFile /etc/apache2/ssl.crt/www.my-certified-domain.de.ca
            BrowserMatch "MSIE [2-8]" nokeepalive downgrade-1.0 force-response-1.0
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.my-certified-domain.de
            ServerAlias my-certified-domain.de
            CustomLog /var/log/apache2/access.my-certified-domain.de.log combined
            Redirect permanent / https://www.my-certified-domain.de/
        </VirtualHost>

    My ports.conf looks like this:

        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    The output from apache2ctl -S is like this:

        xx.yyy.86.193:443      www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:1)
        wildcard NameVirtualHosts and _default_ servers:
        *:80                   is a NameVirtualHost
                default server phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
                port 80 namevhost phpmyadmin.my-certified-domain.de (/etc/apache2/conf.d/phpmyadmin.conf:3)
                port 80 namevhost staging.my-certified-domain.de (/etc/apache2/sites-enabled/010-staging.my-certified-domain.de:1)
                port 80 namevhost testing.my-certified-domain.de (/etc/apache2/sites-enabled/015-testing.my-certified-domain.de:1)
                port 80 namevhost www.my-certified-domain.de (/etc/apache2/sites-enabled/020-my-certified-domain.de:31)

    I included the solution from this question: "Internet explorer cannot display the page, other browsers can, possibly htaccess / server error". And I understand the answer to this question: "How to setup Apache NameVirtualHost on SSL?". In fact, I only have one SSL certificate for the domain, and I only want to run ONE virtual host with SSL, so I just want to use the one IP for the SSL virtual host. But still (after rebooting / restarting / testing) Internet Explorer will not show the page. When I interpret the apache2ctl -S output, I already have only one SSL host, and this should respond to the initial SSL handshake, shouldn't it? What is wrong in this setup? Thank you so much. Philipp
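    One way to take the browser out of the loop, sketched with standard OpenSSL flags: check whether a handshake restricted to what IE7-on-XP can offer (TLS 1.0 at best, no SNI) actually completes against the vhost's IP.

        # XP-era IE sends no SNI; s_client without -servername mimics that.
        openssl s_client -connect xx.yyy.86.193:443 -tls1
        # Also try forcing a legacy cipher XP supports, in case the
        # HIGH:MEDIUM suite string negotiates something the client lacks:
        openssl s_client -connect xx.yyy.86.193:443 -tls1 -cipher DES-CBC3-SHA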


  • mongodb : Can't create new thread on FreeBSD?

    - by user197739
    We experienced something strange on our MongoDB GridFS platform. The platform is a dual Xeon E5 (two quad-cores) with 128GB of memory, running FreeBSD 9 with a ZFS pool dedicated to MongoDB.

        [root@mongofile1 ~]# uname -sr
        FreeBSD 9.1-RELEASE

    Our /boot/loader.conf:

        vfs.zfs.arc_min="2048M"
        vfs.zfs.arc_max="7680M"
        vm.kmem_size_max="16G"
        vm.kmem_size="12G"
        vfs.zfs.prefetch_disable="1"
        kern.ipc.nmbclusters="32768"

    /etc/sysctl.conf:

        net.inet.tcp.msl=15000
        net.inet.tcp.keepidle=300000
        kern.ipc.nmbclusters=32768
        kern.ipc.maxsockbuf=2097152
        kern.ipc.somaxconn=8192
        kern.maxfiles=65536
        kern.maxfilesperproc=32768
        net.inet.tcp.delayed_ack=0
        net.inet.tcp.sendspace=65535
        net.inet.udp.recvspace=65535
        net.inet.udp.maxdgram=57344
        net.local.stream.recvspace=65535
        net.local.stream.sendspace=65535

    We follow the recommendations for ulimit:

        [root@mongofile1 ~]# su - mongodb
        $ ulimit -a
        cpu time             (seconds, -t)  unlimited
        file size         (512-blocks, -f)  unlimited
        data seg size         (kbytes, -d)  33554432
        stack size            (kbytes, -s)  524288
        core file size    (512-blocks, -c)  unlimited
        max memory size       (kbytes, -m)  unlimited
        locked memory         (kbytes, -l)  unlimited
        max user processes            (-u)  5547
        open files                    (-n)  32768
        virtual mem size      (kbytes, -v)  unlimited
        swap limit            (kbytes, -w)  unlimited
        sbsize                 (bytes, -b)  unlimited
        pseudo-terminals              (-p)  unlimited

    This server has a twin (exactly the same config) for a replica set in another data center, and we have a virtualized arbiter. From time to time, after almost 3 days, the mongod process exits. The problem begins with:

        Fri Nov 8 11:27:31.741 [conn774697] end connection 192.168.10.162:47963 (23 connections now open)
        Fri Nov 8 11:27:31.770 [initandlisten] can't create new thread, closing connection
        Fri Nov 8 11:27:31.771 [rsHealthPoll] replSet member mongofile2:27017 is now in state DOWN
        Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.162:47968 #774702 (20 connections now open)
        Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.161:28522 #774703 (21 connections now open)
        Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.164:15406 #774704 (22 connections now open)
        Fri Nov 8 11:27:31.774 [initandlisten] connection accepted from 192.168.10.163:25750 #774705 (23 connections now open)
        Fri Nov 8 11:27:31.810 [initandlisten] connection accepted from 192.168.10.182:20779 #774706 (24 connections now open)
        Fri Nov 8 11:27:31.855 [initandlisten] connection accepted from 192.168.10.161:28524 #774707 (25 connections now open)
        Fri Nov 8 11:27:31.869 [initandlisten] connection accepted from 192.168.10.182:20786 #774708 (26 connections now open)

    It continues with many more "can't create new thread" lines:

        [root@mongofile1 /usr/mongodb]# tail -n 15000 mongod.log.old | grep "create new thread" | wc
            5020   55220  421680

    and finishes with a magnificent:

        Fri Nov 8 11:30:22.333 [rsMgr] replSet warning caught unexpected exception in electSelf()
        pure virtual method called
        Fri Nov 8 11:30:22.333 Got signal: 6 (Abort trap: 6).
        Fri Nov 8 11:30:22.337 Backtrace:
        0x599efc 0x8035cb516
        0x599efc <_ZN5mongo10abruptQuitEi+988> at /usr/local/bin/mongod
        0x8035cb516 <_pthread_sigmask+918> at /lib/libthr.so.3

    Extract for mongod from top:

        78126 mongodb    77  20    0  1253G  1449M sbwait  0   0:20  0.00% mongod

    If I restart the process when it crashes, the problem is fixed for almost 3 days. Has anyone seen this before, or does anyone know of a fix?
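    A hedged place to look, given that the failure is thread creation rather than memory exhaustion: FreeBSD caps threads per process independently of the ulimits shown above, and a daemon that runs roughly one thread per connection can hit that cap. Checking and raising it is a sysctl away (the value below is illustrative):

        # Current per-process thread limit:
        sysctl kern.threads.max_threads_per_proc
        # Raise it, restart mongod, and watch whether the ~3-day
        # crash window stretches:
        sysctl kern.threads.max_threads_per_proc=30000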


  • How do I set zone priority in Microsoft DNS?

    - by Justin
    I have a standard small network setup (20 users) on Active Directory. All Windows machines have the AD domain controller as their primary DNS server and Google Public DNS as secondary. I want to set up a DNS entry that exists in real (public) DNS, but define it on our DC so that local requests for this public domain resolve to a local development machine on the network. I set up the zone in DNS, which results in the clients resolving the public FQDN to our internal IP. However, sometimes it still resolves to the "real" value (I check by pinging it). Is there some way to give the zone definition in my DC's DNS higher priority? Or will a client that has a public secondary DNS server always sometimes get a competing answer for this zone?


  • Automatically triggering standard spaceship controls to stop its motion

    - by Garan
    I have been working on a 2D top-down space strategy/shooting game. Right now it is only in the prototyping stage (I have gotten basic movement), but now I am trying to write a function that will stop the ship based on its velocity. This is being written in Lua, using the Love2D engine. My code is as follows (note: object.dx is the x-velocity, object.dy is the y-velocity, object.acc is the acceleration, and object.r is the rotation in radians):

        function stopMoving(object, dt)
            local targetr = math.atan2(object.dy, object.dx)
            if targetr == object.r + math.pi then
                local currentspeed = math.sqrt(object.dx*object.dx+object.dy*object.dy)
                if currentspeed ~= 0 then
                    object.dx = object.dx + object.acc*dt*math.cos(object.r)
                    object.dy = object.dy + object.acc*dt*math.sin(object.r)
                end
            else
                if (targetr - object.r) >= math.pi then
                    object.r = object.r - object.turnspeed*dt
                else
                    object.r = object.r + object.turnspeed*dt
                end
            end
        end

    It is implemented in the update function as:

        if love.keyboard.isDown("backspace") then
            stopMoving(player, dt)
        end

    The problem is that when I am holding down backspace, it spins the player clockwise (though I am trying to have it go in whichever direction reaches the required angle most efficiently), and it never starts to accelerate the player in the direction opposite to its velocity. What should I change in this code to get that to work? EDIT: I'm not trying to just stop the player in place; I'm trying to get it to use its normal commands to neutralize its existing velocity. I also changed math.atan to math.atan2, apparently it's better. I noticed no difference when running it, though.
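    Two things stand out in the posted code, so here is a hedged sketch of an alternative (field names follow the post; the thresholds are illustrative): targetr == object.r + math.pi compares floats for exact equality, which essentially never holds, so the thrust branch is unreachable; and the target heading should be the exact opposite of the velocity vector, which atan2(-dy, -dx) gives directly, with the turn decision made on a wrapped signed difference rather than a raw subtraction.

        -- Wrap (a - b) into (-pi, pi]: the signed shortest rotation from b to a.
        local function angleDiff(a, b)
            return math.atan2(math.sin(a - b), math.cos(a - b))
        end

        function stopMoving(object, dt)
            local speed = math.sqrt(object.dx * object.dx + object.dy * object.dy)
            if speed < 1 then return end            -- close enough to stopped
            local targetr = math.atan2(-object.dy, -object.dx)  -- face against velocity
            local diff = angleDiff(targetr, object.r)
            if math.abs(diff) > 0.05 then
                -- Turn the short way toward targetr, without overshooting.
                local step = math.min(object.turnspeed * dt, math.abs(diff))
                object.r = object.r + (diff > 0 and step or -step)
            else
                -- Roughly facing against the velocity: thrust to brake.
                object.dx = object.dx + object.acc * dt * math.cos(object.r)
                object.dy = object.dy + object.acc * dt * math.sin(object.r)
            end
        end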


  • Mapping an sFTP connection to a Windows drive?

    - by Nicolas
    I'm looking for a way to map an sFTP connection to a Windows (Vista) drive. In other words, a tool that would add a new drive (let's say N:) to my computer that points directly to my remote server via sFTP. That way, "N:\my_dir\file.txt" would actually be something like "/home/user/my_dir/file.txt" on the remote server. Reading the file on Windows would download it, and writing content to it would upload it, with network transfers made via sFTP. I'm aware of Novell NetDrive, but it has various issues with long filenames and seems to corrupt the content of UTF-8 files depending on the BOM. Do you know of any reliable alternative? Thanks! Edit: I have complete control of the remote server, except that it's remote enough that I can't physically access it.


  • Sendmail Alias for Nonlocal Email Account

    - by Mark Roddy
    I admin a server which runs a number of web applications for a software dev team (source control, bug tracking, etc.). The server has sendmail running solely as a transport to the departmental email server, over which I have no control. We have someone who is still in the department but no longer on the dev team, so I need to configure the transport agent to redirect all outgoing email (which would be coming from these applications) to the person that has taken their place. I added an entry in /etc/aliases like such:

        [email protected]: [email protected]

    But when I run /etc/init.d/sendmail newaliases I get the following error:

        /etc/mail/aliases: line 32: [email protected]... cannot alias non-local names

    So clearly I'm doing something I shouldn't. Is there a way to get aliases to work with non-local names, or alternatively is there a way to accomplish my goal of redirecting outgoing mail for this user to another one? Technical specs, if they matter: Ubuntu 6.06, sendmail 8.13 (Ubuntu-provided package).
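    For background, hedged to standard sendmail behavior: the left-hand side of an aliases(5) entry must be a bare local name (no @domain part), while the right-hand side may be fully qualified and remote, so a plain local-to-remote alias compiles fine. The names below are hypothetical:

        # /etc/aliases - rebuild the database with: newaliases
        olddev: newdev@dept.example.com

    When the applications address mail to a fully qualified recipient rather than a local name, the usual sendmail mechanism for rewriting it is the virtusertable feature rather than aliases.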


  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed and YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy? EDIT: Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:

    - If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that the content is compressible.
    - Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators, and since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    - If you are using Apache on more than one server to host the same content, remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size") unless you are genuinely using the same live filesystem.
    - Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
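    Pulled together as one hedged Apache sketch (standard core and mod_headers directives; the file patterns and max-age values are illustrative), since the rules above otherwise stay abstract:

        # Stable ETags across multiple servers: no inode component.
        FileETag MTime Size

        # Long-lived public caching for static assets.
        <FilesMatch "\.(css|js|png|gif)$">
            Header set Cache-Control "public, max-age=31536000"
        </FilesMatch>

        # Responses that get compressed: private, and flag the encoding axis.
        <FilesMatch "\.(html)$">
            Header set Cache-Control "private"
            Header append Vary "Accept-Encoding"
        </FilesMatch>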


  • Set up a proxy (not a reverse proxy) using Varnish/Squid

    - by shabda
    I need to set up a proxy server where we can request remote URLs and get them served locally. Basically what I need is:

        myserver:8000/varnish/serverfault.com  ->  serverfault.com served from my local Varnish
        myserver:8080/squid/serverfault.com    ->  serverfault.com served from my local Squid

    (Both should cache the site for 24 hours.) I am evaluating whether Varnish or Squid will be a good choice for this. Which one will be a better fit, and how do I do it? Links to tutorials would be good.
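    One hedged observation plus a sketch: encoding the target host in the URL path is really a rewriting reverse proxy, which neither tool does dynamically out of the box; running Squid as an ordinary caching forward proxy (clients point at it explicitly) gets the "served and cached locally" part with stock configuration. Values below are illustrative, and refresh_pattern option behavior varies between Squid versions.

        # squid.conf sketch: listen on 8080, cache to disk, and hold objects
        # for ~24 hours (1440 minutes) even past their stated freshness.
        http_port 8080
        cache_dir ufs /var/spool/squid 1024 16 256
        refresh_pattern . 1440 100% 1440 override-expire override-lastmod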


  • iptables: built-in INPUT chain in nat table?

    - by ughmandaem
    I have a Gentoo Linux system running Linux 2.6.38-rc8, a machine running Ubuntu with Linux 2.6.35-27, and a virtual machine running Debian unstable with Linux 2.6.37-2. On the Gentoo and Debian systems I have an INPUT chain built into my nat table in addition to PREROUTING, OUTPUT, and POSTROUTING. On Ubuntu, I only have PREROUTING, OUTPUT, and POSTROUTING. I am able to use this INPUT chain with SNAT to modify the source of a packet that is destined for the local machine (imagine simulating an incoming spoofed IP to a local application, or just testing a virtual host configuration). This is possible with 2 firewall rules on Gentoo and Debian, but seemingly not on Ubuntu. I looked around for documentation on changes to the SNAT target and the INPUT chain of the nat table and couldn't find anything. Does anyone know if this is a configuration issue, or is it something that was just added in more recent versions of Linux?
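    The kernel versions listed line up with an upstream change: the nat table's INPUT chain (and with it SNAT for locally delivered packets) was added to netfilter around Linux 2.6.36, which would explain 2.6.35 lacking it while 2.6.37 and 2.6.38 have it. A hedged sketch of the kind of rule described, with illustrative addresses:

        # Make traffic arriving from 198.51.100.7 appear to come from
        # 192.0.2.1 as seen by a local application listening on port 80.
        iptables -t nat -A INPUT -s 198.51.100.7 -p tcp --dport 80 \
            -j SNAT --to-source 192.0.2.1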


  • Where does the temporary flash file get stored when I am viewing from Firefox

    - by Nishant
    I am watching a lecture and it seems to be Adobe Flash. I want to save the video that I am viewing. The website I am checking is http://cs75.tv/2009/fall/ and I am using Firefox. Don't know if this info helps, but my about:cache result is this:

        Memory cache device
        Number of entries:    212
        Maximum storage size: 13312 KiB
        Storage in use:       8087 KiB
        Inactive storage:     6819 KiB

        Disk cache device
        Number of entries:    3224
        Maximum storage size: 500000 KiB
        Storage in use:       26066 KiB
        Cache Directory:      C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\Cache

        Offline cache device
        Number of entries:    0
        Maximum storage size: 512000 KiB
        Storage in use:       0 KiB
        Cache Directory:      C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\OfflineCache


  • "Bad response to Storage command" when scheduling job with Bacula

    - by Joril
    I have a Bacula setup with 9 clients, and it's working happily. Today I had to add another client, so I went and copied+adapted the existing configuration files from another client, but when I schedule a job for the new client I get these errors:

        20-Mar 17:50 tools-dir JobId 39: Start Backup JobId 39, Job=BackupPresenze2.2012-03-20_17.50.49_04
        20-Mar 17:50 tools-dir JobId 39: Using Device "FileStorage"
        20-Mar 17:50 presenze2-fd JobId 39: Fatal error: Failed to connect to Storage daemon: bacula.mylan.local:9103
        20-Mar 17:50 tools-dir JobId 39: Fatal error: Bad response to Storage command: wanted 2000 OK storage , got 2902 Bad storage

    From the client I can telnet to bacula.mylan.local:9103 just fine, and jobs for other clients work successfully... What could I check? (Server and client run Ubuntu 10.04, if it's relevant.)


  • Who is a CMS really for?

    - by Eirc man
    I have lately started discovering content management systems, and I was wondering: who is a CMS really for? What I mean by that: is it only for companies, small businesses, or individuals that pay a contractor to make a website whose users can then upload content through an easy interface? Or is it also used by programmers to build their own websites and projects? Would Facebook, Twitter, or Stack Exchange ever have started by using a CMS, even a very powerful one? Would you as a programmer build your own "fancy" website on top of a CMS, for example Typo3, or would you build it from scratch? P.S. To be more clear, a summary: would I as a developer, having chosen a CMS to develop a website, be stuck if the site needs to scale to a big base of users? What if I build a website using a CMS and the website explodes in popularity, and then I want to add much more functionality than I had planned - is it possible that the CMS will limit the growth, because it might not have been built for that kind of scale?


  • Get Unlimited Oracle Training for Your Team for an Entire Year

    - by KJones
    Written by Amit Kumar, Senior Director, Oracle University Digital Training. Oracle University has been in the training business for a long time (over 30 years!) and has worked with many Oracle customers over the years. We understand that getting your teams trained on the latest Oracle technologies is not always easy. Training becomes more challenging when you have remote teams, team members with different skill levels, or experienced team members who just need the content that covers the latest product features. It can also be challenging to predict your training needs for the year, making it all the more difficult to provide training in a timely manner. Oracle Unlimited Learning Subscription is the answer. We’ve listened to our customers and we’ve worked hard to put together a flexible training solution that enables team members to get the training that addresses their individual needs, right when they need it. This new Oracle Unlimited Learning Subscription provides teams with one year of unlimited access to:

    - All of Oracle's Training On Demand video courses, for in-depth product training
    - All of Oracle's Learning Streams, which provide fresh product content from Oracle experts for continuous learning
    - Live connections with Oracle's top instructors
    - Dedicated labs for hands-on practice

    The Oracle Unlimited Learning Subscription is 100% digital, giving you maximum flexibility. It simplifies how you plan and budget for your team training. Learning Oracle and staying connected with Oracle really has never been easier. Take a tour and contact your Oracle University representative today to learn more and request a demo.

