Search Results

Search found 7960 results on 319 pages for 'distributed cache'.

Page 46/319 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • Hibernate database integrity with multiple Java applications

    - by Austen
    We have 2 Java web apps (both read/write) and 3 standalone Java read/write applications (one loads questions via email, one processes an XML feed, one sends email to subscribers); all use Hibernate and share a common code base. The problem we have recently come across is that questions loaded via email sometimes overwrite questions created in one of the web apps. We originally thought this was a caching issue. We've tried turning off the second-level cache, but this doesn't make a difference. We are not explicitly opening and closing sessions, but rather let Hibernate manage them via Util.getSessionFactory().getCurrentSession(), which, thinking about it, may actually be the issue. We'd rather not set up a clustered second-level cache at this stage, as that creates another layer of complexity and we're more than happy with the level of performance we get from the app as a whole. So does implementing an open-session-in-view pattern in the web apps and manually managing the sessions in the standalone apps sound like it would fix this? Or any other suggestions/ideas? Our relevant configuration:

        <property name="hibernate.transaction.factory_class">org.hibernate.transaction.JDBCTransactionFactory</property>
        <property name="hibernate.current_session_context_class">thread</property>
        <property name="hibernate.cache.use_second_level_cache">false</property>
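
    If the lost updates come from two JVMs writing the same row, rather than from caching, optimistic locking is the standard Hibernate remedy regardless of how sessions are managed. A minimal sketch, assuming a hypothetical annotated Question entity (the class and fields here are illustrative, not the poster's actual model):

        import javax.persistence.Entity;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Version;

        @Entity
        public class Question {
            @Id @GeneratedValue
            private Long id;

            // Hibernate increments this column on every update and throws
            // StaleObjectStateException when a write is based on a stale copy,
            // instead of silently overwriting the other application's change.
            @Version
            private int version;

            private String body;

            // getters and setters omitted
        }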

    Read the article

  • What should the name of this class be?

    - by Tim Murphy
    Naming classes is sometimes hard. What do you think the name of this class should be? I originally created the class to use as a cache, but I can see it may have other uses. Example code using the class:

        Dim cache = New NamePendingDictionary(Of String, Sample)
        Dim value = cache("a", Function() New Sample())

    And here is the class that needs a name:

        ''' <summary>
        ''' Enhancement of <see cref="System.Collections.Generic.Dictionary"/>. See the Item property
        ''' for more details.
        ''' </summary>
        ''' <typeparam name="TKey">The type of the keys in the dictionary.</typeparam>
        ''' <typeparam name="TValue">The type of the values in the dictionary.</typeparam>
        Public Class NamePendingDictionary(Of TKey, TValue)
            Inherits Dictionary(Of TKey, TValue)

            Delegate Function DefaultValue() As TValue

            ''' <summary>
            ''' Gets the value associated with the specified key. If the specified key does not exist
            ''' then <paramref name="createDefaultValue"/> is invoked and the result added to the
            ''' dictionary. The created value is then returned.
            ''' </summary>
            ''' <param name="key">The key of the value to get.</param>
            ''' <param name="createDefaultValue">
            ''' The delegate to invoke if <paramref name="key"/> does not exist in the dictionary.
            ''' </param>
            ''' <exception cref="T:System.ArgumentNullException"><paramref name="key" /> is null.</exception>
            Default Public Overloads ReadOnly Property Item(ByVal key As TKey, ByVal createDefaultValue As DefaultValue) As TValue
                Get
                    Dim value As TValue
                    If createDefaultValue Is Nothing Then
                        Throw New ArgumentNullException("createDefaultValue")
                    End If
                    If Not Me.TryGetValue(key, value) Then
                        value = createDefaultValue.Invoke()
                        Me.Add(key, value)
                    End If
                    Return value
                End Get
            End Property
        End Class

    Read the article

  • Making IE "forget" information entered in a form when using the back button

    - by typoknig
    I have a page with a form where many of the fields are populated from variables passed in the URL. Those fields are disabled (NON-EDITABLE) and are only there for the user to view. The remaining fields require user input and are NOT disabled (EDITABLE). When the form is submitted, a confirmation page comes up. It may be the case that the user needs to submit several of these forms where the NON-EDITABLE information is identical from form to form, so being able to go back to the form page from the confirmation page would save a lot of time. The way I want this to work is that when a user presses the back button, all the NON-EDITABLE fields are populated, but the EDITABLE fields are blank. This is what Firefox does, but IE8 does not "forget" what has been entered in the EDITABLE fields. To disable the cache, the following appears at the beginning AND at the end of my page:

        <head>
        <meta http-equiv="Pragma" content="no-cache"/>
        <meta http-equiv="Cache-Control" content="no-store"/>
        </head>

    What more must I do to make IE forget what was entered in the EDITABLE fields when the back button is pressed? All of my pages are generated with PHP, if that matters. EDIT: It appears to me that this is a problem of IE caching my page even though I have told it not to. Are my meta tags correct? Do I need to do something else to prevent IE from caching my page?
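
    Two things usually matter more to IE than <meta> tags: real HTTP response headers and the form's autocomplete setting. A sketch of both, in PHP since the pages are already PHP-generated (the form action is hypothetical):

        <?php
        // Real HTTP headers are honoured far more reliably than <meta>
        // equivalents, including on back/forward navigation.
        header('Cache-Control: no-cache, no-store, must-revalidate');
        header('Pragma: no-cache');
        header('Expires: 0');
        ?>
        <!-- autocomplete="off" also stops IE restoring previously typed values -->
        <form method="post" action="submit.php" autocomplete="off">
            ...
        </form>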

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times, and I'm just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (precompiled headers, ccache, local gigabit connections between the hosts, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay EDIT: Although any advice IS welcome, please refrain from "do this first" posts, as we're not planning on skimping on things like SSDs or maxed-out RAM. My personal system is an iMac quad-core i5 with 8GB of RAM. When I build our project locally, my processor floats around 99-100% a majority of the time, which makes me assume it is the bottleneck even if everything else were made faster. My RAM, on the other hand, doesn't even get close to maxing out. It's also worth noting that I did research this; however, every discussion I could find was primarily about gaming machines, which is obviously a different beast in usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin' fast (hopefully).
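
    Whatever CPUs are chosen, distcc only saturates them if the hosts file and the -j level match the total slot count; a minimal sketch, with hypothetical hostnames and core counts:

        # ~/.distcc/hosts -- one entry per build host; /N caps jobs per host
        localhost/4 buildbox1/8 buildbox2/8

        # run roughly (total remote slots + a few local) parallel jobs;
        # assumes the Makefiles honour the CC/CXX variables
        make -j20 CC="distcc gcc" CXX="distcc g++"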

    Read the article

  • Clustering filesystem for small files

    - by viraptor
    Hi, I'm looking for a distributed filesystem which I could use for storing lots of small files (<1MB usually). What I want to get is:

    - 2 servers which have the fs mounted themselves and mirror the data
    - locking support (among reachable nodes)
    - some kind of best-effort automatic resynchronisation after one node goes down and comes back again

    What I mean by the resync is that I'm ok with both servers doing read/write operations even if they split-brain. I'm also ok if a local process obtains a lock when the other host is not reachable. From the resync I expect only a file-level consistent view after a while - that is, if file x is modified on both nodes during a split-brain, I don't really care which one is available after they join again, as long as it's a full file, not one block coming from node1 and another block from node2. Is there a solution like that out there? I see that Gluster has some problems with file locks (even in 3.1). I also noticed that OCFS2 will panic if both nodes split-brain. What other filesystem would allow me to do what I want?
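
    For reference, the two-node mirrored layout described above maps onto a GlusterFS replica volume; a sketch with hypothetical hostnames and brick paths (locking and split-brain behaviour would still need testing against the requirements listed):

        # on server1, after both nodes can reach each other
        gluster peer probe server2
        gluster volume create gv0 replica 2 server1:/export/brick1 server2:/export/brick1
        gluster volume start gv0

        # each server mounts its own local daemon
        mount -t glusterfs localhost:/gv0 /mnt/shared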

    Read the article

  • Distributing processing for an application that wasn't designed with that in mind

    - by Tim
    We've got an application at work that just sits and does a whole bunch of iterative processing on some data files to perform some simulations. This is done by an "old" Win32 application that isn't multi-processor aware, so new(ish) computers and workstations are mostly sitting idle running this application. However, since it's installed by a typical Windows InstallShield installer, I can't seem to install and run multiple copies of the application. The work can be split up manually before processing, enabling the work to be distributed across multiple machines, but we still can't take advantage of multi-core CPUs. The results can be joined back together after processing to make a complete simulation. Is there a product out there that would let me "compartmentalize" an installation (or 4) so I can take advantage of a multi-core CPU? I had thought of using MS SoftGrid, but I believe that still depends on a remote server to do the heavy lifting (though please correct me if I'm wrong). Furthermore, is there a way I can distribute the workload off the one machine? So an input could be split into 50 chunks, handed out to 50 machines, and worked on, all without really changing the initial application? In a perfect world, I'd get the application to take advantage of a DesktopGrid (BOINC), but like most "mission critical corporate applications", the need is there, but the money is not. Thank you in advance (and sorry if this isn't appropriate for serverfault).
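
    If the chunks can be prepared up front, even plain Sysinternals tooling can fan them out without touching the application; a rough batch-file sketch (machine names, paths, and the pre-splitting itself are hypothetical):

        :: fan out one pre-split chunk per worker machine;
        :: -c copies the exe across, -d returns without waiting for each run
        for /L %%i in (1,1,50) do (
            psexec \\worker%%i -d -c sim.exe \\fileserver\share\chunk%%i.dat
        )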

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at all the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes; that's fine. What happens if the main DNX server fails, though? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failure of the AZ where the monitoring server lives, and essentially for a second instance to pick up the checking load (active/passive, active/active - so long as it doesn't fail completely). The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not the NRPE checks, as they're pretty self-explanatory, but things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can often report bad/no ping/timeout as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where a worker reports a service check as critical, and have a second worker node (positioned in another datacenter/AZ) also report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), but surely someone must have thought of this scenario when developing DNX?
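
    On the "two workers must agree" point: stock Nagios ships a check_cluster plugin that can aggregate the same check run from two vantage points into one alert. A sketch along the lines of the official examples (host and service names hypothetical):

        # go critical only when 2 or more member results are non-OK
        define command{
            command_name  check_ping_cluster
            command_line  $USER1$/check_cluster --service -l "ping cluster" \
                -w @1: -c @2: -d $SERVICESTATEID:probe-east:ping$,$SERVICESTATEID:probe-west:ping$
        }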

    Read the article

  • How to distribute multiple executions of an app across many machines

    - by Salec
    I've got a simulation app (64-bit Windows) that runs without any user interaction. This app gathers information and pushes it to a remote MS SQL Server. What I'd like to do is execute this simulation as many times as I can on multiple machines after our nightly build has finished and passed the test suite. If possible, I'd love to have the ability to configure it to stop after x total runs, or if the entire batch has taken over y hours. I've tried using Visual Studio's built-in test framework, since we already have a test lab set up with multiple agents. I created a single unit test that simply runs the simulation; then I created an ordered test and added that single test multiple times (from what I gather, this is the only way to execute the same unit test more than once). I found that ordered tests are only run on a single agent and not distributed, which is very limiting. We use TeamCity to perform our nightly builds, and I suspect it's possible to implement this on top of that, but I'm fairly new to TeamCity. We also have Jenkins and Bamboo available, and I'm open to any other software that would get the job done, presuming it runs on a 64-bit Windows OS. Any suggestions?
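
    Since Jenkins is on the table: in current Jenkins, a scripted pipeline can fan a fixed number of runs out across any agents matching a label, which covers the "stop after x total runs" case directly. A sketch with a hypothetical label, executable, and run count:

        // Jenkinsfile sketch: 50 simulation runs spread over agents labelled 'sim'
        def runs = [:]
        for (int i = 1; i <= 50; i++) {
            def n = i                          // capture the loop variable per closure
            runs["run-${n}"] = {
                node('sim') {                  // any free 64-bit Windows agent with this label
                    bat "sim.exe --run ${n}"
                }
            }
        }
        parallel runs                          // execute all branches concurrently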

    Read the article

  • Cannot ping host: stale ARP cache?

    - by gkchicago
    I am having a strange issue with a Debian box (Lenny/Linux 2.6.26-2-amd64) that has been driving me nuts. From some machines within my network I can ping the host in question just fine; at other times I have to manually hard-code the ARP ethernet address for the IP in order to establish connectivity. I've finally narrowed it down to something involving ARP. I just found a fix that works, but I'm looking for help explaining this issue, and I don't trust my fix to be permanent. My thought process has been the following, but I just can't make any sense of it:

    - Could it be the card? (Intel 82555 rev 4)
    - Could it be because there are two network cards? (Default route is eth0)
    - Could it be because of the network aliases?
    - Lenny? AMD x86_64?

    Argh. Thank you for any insight you might have.

        // Ping doesn't go thru
        [gordon@ubuntu ~]$ ping 192.168.135.101
        PING 192.168.135.101 (192.168.135.101) 56(84) bytes of data.
        --- 192.168.135.101 ping statistics ---
        4 packets transmitted, 0 received, 100% packet loss, time 3014ms

        // Here's the ARP table. Sometimes the .151 address is good; sometimes it
        // also matches the gateway's MAC, like .101 is doing right here.
        [gordon@ubuntu ~]$ cat /proc/net/arp
        IP address       HW type  Flags  HW address         Mask  Device
        192.168.135.15   0x1      0x2    00:0B:DB:2B:24:89  *     eth0
        192.168.135.151  0x1      0x2    00:0B:6A:3A:30:A6  *     eth0
        192.168.135.1    0x1      0x2    00:1A:A2:2D:2A:04  *     eth0
        192.168.135.101  0x1      0x2    00:1A:A2:2D:2A:04  *     eth0

        // Drop the bad ARP entry and set it manually based on /sbin/ifconfig
        [gordon@ubuntu ~]$ sudo arp -d 192.168.135.101
        [gordon@ubuntu ~]$ sudo arp -s 192.168.135.101 00:0B:6A:3A:30:A6

        // Ping starts going thru..?!?
        [gordon@ubuntu ~]$ ping 192.168.135.101
        PING 192.168.135.101 (192.168.135.101) 56(84) bytes of data.
        64 bytes from 192.168.135.101: icmp_seq=1 ttl=64 time=15.8 ms
        64 bytes from 192.168.135.101: icmp_seq=2 ttl=64 time=15.9 ms
        64 bytes from 192.168.135.101: icmp_seq=3 ttl=64 time=16.0 ms
        64 bytes from 192.168.135.101: icmp_seq=4 ttl=64 time=15.9 ms
        --- 192.168.135.101 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3012ms
        rtt min/avg/max/mdev = 15.836/15.943/16.064/0.121 ms

    The following is my network config on this host:

        gordon@db01:~$ /sbin/ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:0b:6a:3a:30:a6
                  inet addr:192.168.135.151  Bcast:192.168.135.255  Mask:255.255.255.0
                  inet6 addr: fe80::20b:6aff:fe3a:30a6/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:15476725 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:10030036 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:18565307359 (17.2 GiB)  TX bytes:3412098075 (3.1 GiB)

        eth0:0    Link encap:Ethernet  HWaddr 00:0b:6a:3a:30:a6
                  inet addr:192.168.135.150  Bcast:192.168.135.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

        eth0:1    Link encap:Ethernet  HWaddr 00:0b:6a:3a:30:a6
                  inet addr:192.168.135.101  Bcast:192.168.135.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

        eth1      Link encap:Ethernet  HWaddr 00:e0:81:2a:6e:d0
                  inet addr:10.10.62.1  Bcast:10.10.62.255  Mask:255.255.255.0
                  inet6 addr: fe80::2e0:81ff:fe2a:6ed0/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:10233315 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:19400286 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1112500658 (1.0 GiB)  TX bytes:27952809020 (26.0 GiB)
                  Interrupt:24

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:387 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:387 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:41314 (40.3 KiB)  TX bytes:41314 (40.3 KiB)

        gordon@db01:~$ sudo mii-tool -v eth0
        eth0: negotiated 100baseTx-FD, link ok
          product info: Intel 82555 rev 4
          basic mode:   autonegotiation enabled
          basic status: autonegotiation complete, link ok
          capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
          advertising:  100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
          link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD

        gordon@db01:~$ sudo route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        localnet        *               255.255.255.0   U     0      0        0 eth0
        10.10.62.0      *               255.255.255.0   U     0      0        0 eth1
        default         192.168.135.1   0.0.0.0         UG    0      0        0 eth0
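
    For what it's worth: when a Linux box with multiple interfaces and aliases ends up with the wrong MAC in its neighbours' ARP caches ("ARP flux"), the conventional knobs are arp_ignore/arp_announce. A sketch of the usual settings - whether they apply here depends on where the bogus replies actually originate:

        # /etc/sysctl.conf
        net.ipv4.conf.all.arp_ignore = 1    # reply only for addresses on the receiving interface
        net.ipv4.conf.all.arp_announce = 2  # use the best-matching local address in ARP requests

        # apply without a reboot
        sysctl -p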

    Read the article

  • Does MS Forefront TMG cache authentication?

    - by SnOrfus
    I'm testing a client machine that makes requests to a BizTalk server, using a Forefront machine as a web proxy. Upon the first test I put an invalid name/password into the receive port and received the correct error message (407). Then I set the correct name/password and everything worked correctly. From there, I kept the correct information in the receive port but put an invalid name/password into the send adapter, yet the process completed successfully (it should have failed with 407). I've ensured that both the receive and send ports are not bypassing the proxy for local addresses. So the only thing that seems to make sense is that TMG is caching the authentication request coming from the machine I'm working on. Is this thinking correct, and if so, does anyone know how to disable it in TMG?

    Read the article

  • When DNS doesn't cache

    - by John Francis
    We've had some odd DNS problems over the past couple of days that I don't fully understand. Some of our DNS names stopped resolving for some of our customers due to some 'unknown' server reconfiguration at our DNS provider. The problem seemed to be intermittent, i.e. it stopped working and started working again within a few minutes, over a couple of days. I'm no expert on DNS, but I'd have expected DNS caches to prevent this sort of thing from happening - when we need to change an IP address for a DNS record, it can take 24 hours to propagate, so how can our DNS provider break name resolution intermittently for our customers so easily? Shouldn't the DNS caches kick in here? We had a similar problem about a month ago when one of their nameservers 'decided to reload the DNS database from scratch' - this broke our name resolution too. Again, why didn't the caches satisfy the name resolution requests? Any guesses would be appreciated. John
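
    One way to check whether resolvers are actually caching the records, and what TTL the provider publishes, is to query the same resolver twice and watch the TTL column count down. A sketch with a hypothetical domain and answer:

        $ dig @8.8.8.8 www.example.com +noall +answer
        www.example.com.  3600  IN  A  93.184.216.34
        $ dig @8.8.8.8 www.example.com +noall +answer
        www.example.com.  3571  IN  A  93.184.216.34    <- TTL dropping: served from cache

    If the authoritative records carry a very short TTL, caches expire within minutes and provider-side breakage becomes visible almost immediately, which would fit the symptoms described.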

    Read the article

  • CentOS swap cache memory leak

    - by user30008
    We have an image server that keeps running out of memory and crashing. We thought there was a hardware issue with the machine, because the code base has not changed and this is a new issue. We brought a new machine online with a newer kernel and a fresh CentOS 5.4 install, brought just one subdomain online, and the exact same error is occurring on the new machine. How should I try to troubleshoot this issue?
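
    A first step is separating reclaimable page cache from memory actually held by processes, then watching which process grows over time; all standard commands on CentOS 5 (nothing here is specific to the box described):

        free -m                       # the '-/+ buffers/cache' row is the real headroom
        cat /proc/meminfo             # Cached, SwapFree, Committed_AS, ...
        ps aux --sort=-rss | head     # biggest resident processes right now
        vmstat 5                      # watch the si/so columns for swap thrashing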

    Read the article

  • Varnish: Non-Cache/Data Fetch + Load-Balance

    - by xperator
    Someone commented on my previous question and said it's possible to do this with Varnish. Instead of:

        Client Request -> Varnish LB -> Backend -> Varnish LB -> Client

    I want to have (a direct reply from the backend to the client, instead of going back through the LB):

        Client Request -> Varnish LB -> Backend -> Client

    This is not working:

        sub vcl_pass {
            if (req.http.host ~ "^(www.)?example.com$") {
                set req.backend = baz;
                return (pass);
            }
        }

    Read the article

  • Disqus cache of unposted posts

    - by user129107
    Some webpages implement Disqus and also have the rather bad policy of auto-refreshing the page. This results in, for example, writing a long answer in a debate, then a refresh comes along and everything is gone. Are comments that have been written but not yet posted cached somewhere? Is it possible to retrieve them? I have experienced this on various pages. In the current case, the debate page was reloaded and a rather lengthy post with a lot of references and long thought-out sentences vanished. This page closes the debate during night time and auto-refreshes the page when it passes midnight, so I'm not able to retrieve the debate for another 8 hours. Other pages auto-refresh after, for example, 20 minutes. Linux, Google Chrome.

    Read the article

  • No recent books on MPI: is it dying?

    - by Jono
    I've never used the Message Passing Interface (MPI), but I've heard its name thrown about, most recently with Windows HPC Server. I had a quick look on Amazon to see if there were any books on it, but they're all dated around 7 or more years ago. Is MPI still a valid technology choice for new applications, or has it been largely superseded by other distributed programming alternatives (e.g. DataSynapse GridServer)? As it's not really an implementation, but rather a standard, what is the likelihood (assuming it's not dead) that learning it will result in better design of distributed programming systems? Is there something else I should be looking at instead?
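
    For context, the standard's surface area starts very small; the canonical first MPI program in C, buildable with any implementation's mpicc and launched with mpirun, looks like this:

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);               /* start the MPI runtime */

            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
            MPI_Comm_size(MPI_COMM_WORLD, &size); /* total processes in the job */

            printf("hello from rank %d of %d\n", rank, size);

            MPI_Finalize();                       /* shut the runtime down */
            return 0;
        }

    Compile and run with, e.g., "mpicc hello.c -o hello && mpirun -np 4 ./hello".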

    Read the article

  • Distributing CPU-bound compression jobs to multiple computers?

    - by barnaby
    The other day I needed to archive a lot of data on our network, and I was frustrated that I had no immediate way to harness the power of multiple machines to speed up the process. I understand that creating a distributed job management system is a leap from a command-line archiving tool. I'm now wondering what the simplest solution to this type of distributed performance scenario could be. Would a custom tool always be a requirement, or are there ways to use standard utilities and somehow distribute their load transparently at a higher level? Thanks for any suggestions.
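
    On the "standard utilities" question: GNU parallel can already do this transparently, shipping each input file to a remote host over ssh and fetching the result back. A sketch with hypothetical hostnames (':' in the server list means "also use the local machine"):

        # compress every *.log across two remote machines plus the local one;
        # --transfer ships the input, --return fetches the output, --cleanup tidies up
        parallel -S server1,server2,: --transfer --return {}.gz --cleanup gzip {} ::: *.log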

    Read the article

  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all - wasn't really sure if this should go here or on Stack Overflow; admins, please move it if I'm mistaken (and sorry). Today we have our web layer exposed to the world. We would like to add Varnish in front of our web layer to accelerate the site and reduce calls to the backend. However, we have some concerns, and I was wondering how most people approach them:

    1. A/B testing - how do you test two "versions" of each page and compare? I mean, how does Varnish know which page to serve up? If and how do you save separate versions of each page?
    2. Feature rollout - how would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic, and then later increase that to 20%.
    3. Code deployments - do you purge your entire Varnish cache on every deployment? (We deploy on a daily basis.) Or do you just let it slowly expire (using TTLs)?

    Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.
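
    For (1) and (2), one common pattern is to bucket each client with a cookie (assigned by the backend) and make the bucket part of Varnish's cache key, so each variant is cached independently. A sketch in Varnish 3.x-style VCL with a hypothetical cookie name:

        sub vcl_hash {
            # cache variant B separately from the default; everyone without
            # the cookie, or with ab_bucket=A, shares the default object
            if (req.http.Cookie ~ "ab_bucket=B") {
                hash_data("variant-B");
            }
        }

    Rollout percentage then becomes a question of how the backend assigns the cookie (say, bucket B for 10% of new visitors), and (3) can be handled with a blanket ban/purge from varnishadm at deploy time rather than waiting for TTLs.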

    Read the article

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (OpenSolaris 2009.06) on an older fileserver to evaluate its use for our needs. Our current setup is as follows:

    - Dual core (2.4 GHz) with 4 GB RAM
    - 3x SATA controller with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB)

    We want to evaluate the use of an L2ARC, so the current zpool is:

        $ zpool status
          pool: tank
         state: ONLINE
         scrub: none requested
        config:

                NAME         STATE     READ WRITE CKSUM
                afstank      ONLINE       0     0     0
                  raidz1     ONLINE       0     0     0
                    c11t0d0  ONLINE       0     0     0
                    c11t1d0  ONLINE       0     0     0
                    c11t2d0  ONLINE       0     0     0
                    c11t3d0  ONLINE       0     0     0
                  raidz1     ONLINE       0     0     0
                    c13t0d0  ONLINE       0     0     0
                    c13t1d0  ONLINE       0     0     0
                    c13t2d0  ONLINE       0     0     0
                    c13t3d0  ONLINE       0     0     0
                cache
                  c14t3d0    ONLINE       0     0     0

    where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d; size is set to 200 GB (-s 200g) so that the test sample will never be completely in ARC/L2ARC. The results without the SSD (average values over several runs, which show no differences):

        write_chr     write_blk     rewrite      read_chr     read_blk      random seeks
        101.998 kB/s  214.258 kB/s  96.673 kB/s  77.702 kB/s  254.695 kB/s  900 /s

    With the SSD it becomes interesting. My assumption was that the results should in the worst case be at least the same. While the write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far); the average is 548 +- 200 /s, so below the value without the SSD. So, my questions are mainly:

    1. Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption). So even if the SSD were impairing the performance, it should be the same in each bonnie++ run.
    2. Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC, and random seeks on this data perform better, while random seeks on other data perform similarly to before.
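
    One thing worth checking during the runs is whether the cache device is being hit at all - the L2ARC fills slowly by design, which alone can explain noisy early random-seek numbers. On OpenSolaris, using the pool name from above:

        # per-vdev I/O every 5 seconds; the 'cache' line shows L2ARC reads/writes
        zpool iostat -v tank 5

        # ARC kstats: l2_hits, l2_misses, l2_size show how warm the L2ARC is
        kstat -n arcstats | grep l2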

    Read the article

  • iptables secure squid proxy

    - by Lytithwyn
    I have a setup where my incoming internet connection feeds into a squid proxy/caching server, and from there into my local wireless router. On the WAN side of the proxy server, I have eth0 with address 208.78.∗∗∗.∗∗∗. On the LAN side of the proxy server, I have eth1 with address 192.168.2.1. Traffic from my LAN gets forwarded through the proxy transparently to the internet via the following rules. Note that traffic from the squid server itself is also routed through the proxy/cache, and this is on purpose:

        # iptables forwarding
        iptables -A FORWARD -i eth1 -o eth0 -s 192.168.2.0/24 -m state --state NEW -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A POSTROUTING -t nat -j MASQUERADE

        # iptables for squid transparent proxy
        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.2.1:3128
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

    How can I set up iptables to block any connections made to my server from the outside, while not blocking anything initiated from the inside? I have tried doing:

        iptables -A INPUT -i eth0 -s 192.168.2.0/24 -j ACCEPT
        iptables -A INPUT -i eth0 -j REJECT

    But this blocks everything. I have also tried reversing the order of those commands in case I got that part wrong, but that didn't help. I guess I don't fully understand everything about iptables. Any ideas?
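
    For reference, the usual stateful pattern accepts replies to connections the box itself initiated before rejecting new inbound traffic; a sketch (extend the ACCEPT lines for any services that must stay reachable from outside):

        iptables -A INPUT -i lo -j ACCEPT                                  # loopback
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies to outbound traffic
        iptables -A INPUT -i eth1 -s 192.168.2.0/24 -j ACCEPT              # anything from the LAN side
        iptables -A INPUT -i eth0 -j REJECT                                # everything else from the WAN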

    Read the article

  • Caching/preloading files on Linux into RAM

    - by Andrioid
    I have a rather old server that has 4 GB of RAM and is pretty much serving the same files all day, but it is doing so from the hard drive while 3 GB of RAM are "free". Anyone who has ever tried running a RAM drive can witness that it's awesome in terms of speed. The memory usage of this system is usually never higher than 1 GB/4 GB, so I want to know if there is a way to use that extra memory for something good. Is it possible to tell the filesystem to always serve certain files out of RAM? Are there any other methods I can use to improve file-reading capabilities by use of RAM? More specifically, I am not looking for a 'hack' here. I want filesystem calls to serve the files from RAM without needing to create a RAM drive and copy the files there manually - or at least a script that does this for me. Possible applications here are:

    - web servers with static files that get read a lot
    - application servers with large libraries
    - desktop computers with too much RAM

    Any ideas? Edit: I found this very informative: The Linux Page Cache and pdflush. As Zan pointed out, the memory isn't actually free. What I mean is that it's not being used by applications, and I want to control what should be cached in memory.
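
    The page cache can be pre-warmed without any RAM drive; vmtouch (a small third-party utility) exists for exactly this, and a plain read has the same effect. A sketch with hypothetical paths:

        # pull a tree into the page cache (add -l to lock it there with mlock)
        vmtouch -t /var/www/static

        # report how much of the tree is currently resident
        vmtouch /var/www/static

        # poor man's version using only standard tools
        find /var/www/static -type f -exec cat {} + > /dev/null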

    Read the article

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with writable disk storage needed. We've profiled the application, and the main bottleneck appears to come from loading the application, not from the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the IO benefits they offer, and was wondering if they would be well suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
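
    For the experiment itself, tmpfs makes the setup nearly trivial (paths and size hypothetical); note that a PHP opcode cache such as APC attacks the same "loading the application" bottleneck more directly and is usually tried first:

        # /etc/fstab - a RAM-backed filesystem for the docroot
        tmpfs  /var/www/ramdisk  tmpfs  size=512m,mode=0755  0  0

        # populate at boot (e.g. from an init script), then point
        # Apache's DocumentRoot at /var/www/ramdisk
        cp -a /var/www/app/. /var/www/ramdisk/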

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >