Search Results

Search found 49 results on 2 pages for 'filemaker'.

  • My father is a doctor. He is insisting on writing a database to store non-critical patient information, despite having no programming background

    - by Dominic Bou-Samra
    So, my father is currently in the process of "hacking" together a database using FileMaker Pro, a GUI-based database tool, for his small (4-doctor) practice. The database will be used to help ease the burden of reporting from medical machines, streamlining quite a clumsy process. He's got no programming background, and seems to be doing everything in his power to not learn things correctly. He's got duplicate data types, no database-enforced relationships (foreign/primary key constraints) and a dozen other issues. He's doing it all by hand via the GUI tool, using YouTube videos. My issue is that whilst I want him to succeed 100%, I don't think it's appropriate for him to be handling these types of decisions. How do I convince him that without some sort of education in these topics, a hacked-together solution is a bad idea? He can be quite stubborn, and I think he sees these types of jobs as "child's play". How should I approach this? Is it even that bad an idea - or am I correct in thinking he should hire a proper DBA/developer to handle this so that it doesn't become a maintenance nightmare? NB: I am a developer consultant of 4 years and I've seen my share of painful customer implementations.

    Read the article

  • Visual Studio 2010 Settings Page - Collection Settings

    - by Ed Eichman
    I have a list of FileMaker database names. Each DB name has a list of field/attribute pairs associated with it. I have a Windows Forms application in C# 4.0 (VS2010) that wants to use the above data. I would like to maintain the list either in the Visual Studio settings page, or in one of the standard Visual Studio settings files using the standard .NET settings calls. I would like to avoid writing my own custom settings, XML, or XSD (to avoid the "Could not find schema information for the element/attribute" errors). I just have a slightly complicated INI file! I don't want to complicate my life! Do any easy solutions exist? Unless someone has a brighter idea, I am simply going to write string settings with names that indicate it's a FM DB (e.g. "fmdbAddresses"), and values that concatenate my field/attribute pairs (e.g. "gUserResult=skipField|gAddressID=convertToInt|gAddressID=uniqueIx")
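
    For what it's worth, a sketch of the fallback idea from the last sentence, written in Python purely to illustrate the split logic (the setting name and value come from the question; in C# the same two Split calls on '|' and '=' apply):

      def parse_fmdb_setting(value):
          # one "field=attribute" chunk per '|' separator
          pairs = []
          for chunk in value.split("|"):
              field, attribute = chunk.split("=", 1)
              pairs.append((field, attribute))
          return pairs

      print(parse_fmdb_setting("gUserResult=skipField|gAddressID=convertToInt|gAddressID=uniqueIx"))
      # [('gUserResult', 'skipField'), ('gAddressID', 'convertToInt'), ('gAddressID', 'uniqueIx')]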

    Read the article

  • Is Alpha Five Version 10 really all that it's reported to be?

    - by Gary B2312321321
    I came across this RDBMS via the advert on Stack Overflow. It seems to be in the vein of MS Access / FileMaker / Apex database development tools, but focused on web-based applications. It quotes rave reviews from eWeek and a favourable mention from Dr. Dobb's regarding its ability to create AJAX web applications without coding. The eWeek review, apparently written by an ASP.NET programmer, goes on to proclaim the ease with which apps can be extended using the inbuilt XBasic language and how custom JavaScript can easily be added without wading through code. Has anyone here built a web app with Alpha Five? Does anyone have comments on the development process, the speed of it, or limitations they encountered along the way? To me it seems Oracle APEX comes closest to the feature set; has anyone programmed in both, and do you have any comments?

    Read the article

  • PHP e-commerce site talking to internal database for stock / ordering?

    - by CitrusTree
    Hi. I'm working on an e-commerce site (either bespoke with PHP, or using Drupal/Ubercart), and I'd like to investigate the site interacting with an internal (FileMaker) database we use to manage stock and orders. Currently we manually transfer orders from the web site to our own database, and the site does not check or record changes in stock. My plan to allow the two to interact is as follows: make the internal database available externally on a machine with a fixed IP; allow external access from the site only; connect to the internal database using ODBC (or similar); use simple queries to check stock / record stock changes / record order details. Am I missing something here, as this sounds quite straightforward? Is there another solution I should be taking a look at? A sketch of the query step follows below. Thanks in advance for any help or comments.
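
    A minimal sketch of the query step, written in Python with pyodbc as a stand-in for whichever server-side language ends up talking to the DSN (the DSN, credentials, table and column names are all hypothetical; FileMaker's ODBC interface accepts this style of plain SQL, but the exact grammar should be checked against its documentation):

      import pyodbc

      # fixed-IP machine exposing the internal FileMaker database over ODBC
      conn = pyodbc.connect("DSN=InternalStock;UID=weborders;PWD=secret")
      cur = conn.cursor()

      def stock_level(sku):
          # simple query to check stock for one product
          cur.execute("SELECT qty_on_hand FROM stock WHERE sku = ?", sku)
          row = cur.fetchone()
          return row[0] if row else 0

      def record_order(sku, qty):
          # record the stock change and the order details
          cur.execute("UPDATE stock SET qty_on_hand = qty_on_hand - ? WHERE sku = ?", qty, sku)
          cur.execute("INSERT INTO orders (sku, qty) VALUES (?, ?)", sku, qty)
          conn.commit()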

    Read the article

  • Recursive algorithm for coalescing / collapsing a list of dates into ranges.

    - by Dycey
    Given a list of dates 12/07/2010, 13/07/2010, 14/07/2010, 15/07/2010, 12/08/2010, 13/08/2010, 14/08/2010, 15/08/2010, 19/08/2010, 20/08/2010, 21/08/2010, I'm looking for pointers towards a recursive pseudocode algorithm (which I can translate into a FileMaker custom function) for producing a list of ranges, i.e. 12/07/2010 to 15/07/2010, 12/08/2010 to 15/08/2010, 19/08/2010 to 21/08/2010. The list is presorted and de-duplicated. I've tried starting from both the first value and working forwards, and the last value and working backwards, but I just can't seem to get it to work. Having one of those frustrating days... It would be nice if the signature was something like CollapseDateList( dateList, separator, ellipsis ) :-)
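
    A recursive sketch of the collapsing logic, written in Python only to pin down the shape of the recursion (the asker ultimately wants a FileMaker custom function; dd/mm/yyyy parsing is omitted and the separator/ellipsis defaults are illustrative):

      from datetime import date, timedelta

      def collapse_date_list(dates, separator=", ", ellipsis=" to "):
          # dates: presorted, de-duplicated list of datetime.date values
          if not dates:
              return ""
          # extend the current run while consecutive entries differ by one day
          end = 0
          while end + 1 < len(dates) and dates[end + 1] - dates[end] == timedelta(days=1):
              end += 1
          fmt = lambda d: d.strftime("%d/%m/%Y")
          head = fmt(dates[0]) if end == 0 else fmt(dates[0]) + ellipsis + fmt(dates[end])
          rest = collapse_date_list(dates[end + 1:], separator, ellipsis)  # recurse on the remainder
          return head if not rest else head + separator + rest

      # collapse_date_list([date(2010, 7, 12), date(2010, 7, 13), date(2010, 7, 14), date(2010, 7, 15)])
      # -> '12/07/2010 to 15/07/2010'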

    Read the article

  • How to display a border-bottom only if table cells are not empty (CSS)

    - by Polarpro
    Hey there, I've got a FileMaker calculation that generates an HTML page with several tables. If the calculation results in values for certain fields, the result would be <table> <tr><td>Example value 1</td></tr> <tr><td>Example value 2</td></tr> ... </table> If the calculation finds no values to be displayed, the result would simply be <table> </table> In the first case, I want the table to display a border at the bottom (or any other horizontal line); in the second case, I don't want to display a border at the bottom. I cannot find a way to get this done using CSS... Thanks in advance :-)
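
    A sketch of one workaround, on the assumption that the FileMaker calculation can be made to emit a marker class when it finds values; pure CSS of this era cannot reliably tell the two cases apart, because the whitespace left between <table> and </table> defeats the :empty selector:

      /* emit class="has-rows" on the <table> tag only when the calculation produced rows */
      table { border-bottom: none; }
      table.has-rows { border-bottom: 1px solid #000; }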

    Read the article

  • Squid: caching *.swf with variables

    - by stfn
    I'd recently upgraded my Ubuntu 11.10 x64 server to 12.04. In this process Squid was updated from 2.7 to 3.1. Squid 3.1 has many different options which broke my setup. So I completely removed squid 2.7 and 3.1 and started from scratch. Everything is now working as before except for one thing: caching of .swf files with ?/variables. Squid 3 sees a ? as dynamic content and does not cache it. For example, Squid 2.7 was caching the .swf file at http://ninjakiwi.com/Games/Tower-Defense/Play/Bloons-Tower-Defense-5.html and 3.1 is not. <object id="mov" name="movn" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="800" height="620"> <param name="movie" value="http://www.ninjakiwifiles.com/Games/gameswfs/btd5.swf?v=160512-2"> <param name="allowscriptaccess" value="always"> <param name="bgcolor" value="#000000"> <param name="flashvars" value="file=http://www.ninjakiwifiles.com/Games/gameswfs/btd5-dat.swf?v=280512"> <p>Get Flash play Ninja Kiwi games.</p> </object> It is because of the "?v=160512-2" and "?v=280512" part. This line should be responsible for that: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 But disabling it still doesn't cache the .swf files. How do I configure Squid 3.1 to cache those files? My current config is: acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT acl localnet src 192.168.2.0-192.168.2.255 acl localnet src 192.168.3.0-192.168.3.255 http_access allow manager localhost http_access deny manager http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access allow localnet http_access deny all http_port 3128 cache_dir ufs /var/spool/squid 10240 16 256 maximum_object_size 100 MB coredump_dir /var/spool/squid3 refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200 override-expire ignore-no-cache ignore-no-store ignore-private refresh_pattern -i \.index.(html|htm)$ 0 40% 10080 refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320 refresh_pattern Packages\.bz2$ 0 20% 4320 refresh-ims refresh_pattern Sources\.bz2$ 0 20% 4320 refresh-ims refresh_pattern Release\.gpg$ 0 20% 4320 refresh-ims refresh_pattern Release$ 0 20% 4320 refresh-ims refresh_pattern . 0 40% 40320 cache_effective_user proxy cache_effective_group proxy
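
    A hedged sketch of one fix: the media rule above anchors $ immediately after the extension, so btd5.swf?v=160512-2 never matches it and falls through to the catch-all. A pattern that tolerates a trailing query string (reusing the question's own options) would be the following, placed above the catch-all and any (/cgi-bin/|\?) line, since refresh_pattern rules are matched top-down:

      refresh_pattern -i \.swf(\?.*)?$ 43200 90% 432000 override-expire ignore-no-cache ignore-no-store ignore-private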

    Read the article

  • Squid configuration for proxy server

    - by Ian Rob
    I have a server with 10 IPs that I want to give some friends access to via authentication, but I'm stuck on squid's config file. Let's say I have these IPs available on my server: 212.77.23.10 212.77.1.10 68.44.82.112 And I want to allocate each one of them to a different user like so: 212.77.23.10 goes to user manilodisan using password 123456 212.77.1.10 goes to user manilodisan1 using password 123456 68.44.82.112 goes to user manilodisan2 using password 123456 I managed to add the passwords and authentication works OK, but how do I restrict each user to one of the available IPs? I have a basic setup from different bits I found over the internet, but nothing seems to work. Here's my squid.conf (all comments are removed to make it lighter): acl ip1 myip 212.77.23.10 acl ip2 myip 212.77.1.10 tcp_outgoing_address 212.77.23.10 ip1 tcp_outgoing_address 212.77.1.10 ip2 http_port 8888 visible_hostname weezie auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/squid-passwd acl ncsa_users proxy_auth REQUIRED http_access allow ncsa_users acl all src 0.0.0.0/0.0.0.0 acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 # https acl SSL_ports port 563 # snews acl SSL_ports port 873 # rsync acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access deny all icp_access allow all hierarchy_stoplist cgi-bin ? access_log /var/log/squid/access.log squid acl QUERY urlpath_regex cgi-bin \? cache deny QUERY refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 0 20% 4320 acl apache rep_header Server ^Apache broken_vary_encoding allow apache extension_methods REPORT MERGE MKACTIVITY CHECKOUT hosts_file /etc/hosts forwarded_for off coredump_dir /var/spool/squid
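
    A sketch of one way to pair each login with an address, with a caveat stated as an assumption: tcp_outgoing_address takes ACLs, but proxy_auth is a "slow" ACL, so this pattern relies on the credentials having already been validated by the http_access allow ncsa_users line (usernames and IPs taken from the question):

      acl user1 proxy_auth manilodisan
      acl user2 proxy_auth manilodisan1
      acl user3 proxy_auth manilodisan2
      tcp_outgoing_address 212.77.23.10 user1
      tcp_outgoing_address 212.77.1.10 user2
      tcp_outgoing_address 68.44.82.112 user3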

    Read the article

  • configure Squid3 proxy server on Ubuntu with caching and logging

    - by Panshul
    I have an Ubuntu 11.10 machine with Squid3 installed. When I configure squid with http_access allow all, everything works fine. My current configuration, mostly default, is as follows: 2012/09/10 13:19:57| Processing Configuration File: /etc/squid3/squid.conf (depth 0) 2012/09/10 13:19:57| Processing: acl manager proto cache_object 2012/09/10 13:19:57| Processing: acl localhost src 127.0.0.1/32 ::1 2012/09/10 13:19:57| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 2012/09/10 13:19:57| Processing: acl SSL_ports port 443 2012/09/10 13:19:57| Processing: acl Safe_ports port 80 # http 2012/09/10 13:19:57| Processing: acl Safe_ports port 21 # ftp 2012/09/10 13:19:57| Processing: acl Safe_ports port 443 # https 2012/09/10 13:19:57| Processing: acl Safe_ports port 70 # gopher 2012/09/10 13:19:57| Processing: acl Safe_ports port 210 # wais 2012/09/10 13:19:57| Processing: acl Safe_ports port 1025-65535 # unregistered ports 2012/09/10 13:19:57| Processing: acl Safe_ports port 280 # http-mgmt 2012/09/10 13:19:57| Processing: acl Safe_ports port 488 # gss-http 2012/09/10 13:19:57| Processing: acl Safe_ports port 591 # filemaker 2012/09/10 13:19:57| Processing: acl Safe_ports port 777 # multiling http 2012/09/10 13:19:57| Processing: acl CONNECT method CONNECT 2012/09/10 13:19:57| Processing: http_access allow manager localhost 2012/09/10 13:19:57| Processing: http_access deny manager 2012/09/10 13:19:57| Processing: http_access deny !Safe_ports 2012/09/10 13:19:57| Processing: http_access deny CONNECT !SSL_ports 2012/09/10 13:19:57| Processing: http_access allow localhost 2012/09/10 13:19:57| Processing: http_access deny all 2012/09/10 13:19:57| Processing: http_port 3128 2012/09/10 13:19:57| Processing: coredump_dir /var/spool/squid3 2012/09/10 13:19:57| Processing: refresh_pattern ^ftp: 1440 20% 10080 2012/09/10 13:19:57| Processing: refresh_pattern ^gopher: 1440 0% 1440 2012/09/10 13:19:57| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 2012/09/10 13:19:57| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880 2012/09/10 13:19:57| Processing: refresh_pattern . 0 20% 4320 2012/09/10 13:19:57| Processing: http_access allow all 2012/09/10 13:19:57| Processing: cache_mem 512 MB 2012/09/10 13:19:57| Processing: logformat squid3 %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru 2012/09/10 13:19:57| Processing: access_log /home/panshul/squidCache/log/access.log squid3 The problem starts when I enable the following line: access_log /home/panshul/squidCache/log/access.log I start getting a "proxy server is refusing connections" error in the browser. On commenting out the above line in my config, things go back to normal. The second problem starts when I add the following line to my config: cache_dir ufs /home/panshul/squidCache/cache 100 16 256 The squid server fails to start. Any suggestions on what I am missing in the config? Please help!
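
    Both symptoms are consistent with one guess, offered as an assumption rather than a diagnosis: on Ubuntu, squid3 drops privileges to the proxy user, which normally cannot write under /home/panshul, and a new cache_dir must also be initialized before first use:

      sudo chown -R proxy:proxy /home/panshul/squidCache
      sudo squid3 -z                  # build the cache_dir swap directories
      sudo service squid3 restart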

    Read the article

  • Unable to get squid working for remote users

    - by Sean
    I am trying to set up squid 3.2.4, but I have not been able to get it working for remote users. Works fine locally. Unable to figure out what I am doing wrong... http_port 3128 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/share/ssl-cert/myCA.pem refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network acl localnet src 172.16.0.0/12 # RFC 1918 possible internal network acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access deny !Safe_ports http_access allow localhost http_access allow localnet http_access allow all cache deny all via off forwarded_for off header_access From deny all header_access Server deny all header_access WWW-Authenticate deny all header_access Link deny all header_access Cache-Control deny all header_access Proxy-Connection deny all header_access X-Cache deny all header_access X-Cache-Lookup deny all header_access Via deny all header_access Forwarded-For deny all header_access X-Forwarded-For deny all header_access Pragma deny all header_access Keep-Alive deny all acl ip1 localip 1.1.1.90 acl ip2 localip 1.1.1.91 acl ip3 localip 1.1.1.92 acl ip4 localip 1.1.1.93 acl ip5 localip 1.1.1.94 tcp_outgoing_address 1.1.1.90 ip1 tcp_outgoing_address 1.1.1.91 ip2 tcp_outgoing_address 1.1.1.92 ip3 tcp_outgoing_address 1.1.1.93 ip4 tcp_outgoing_address 1.1.1.94 ip5 tcp_outgoing_address 1.1.1.90
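
    One hedged reading of the config above: the only listening port carries the transparent flag, and an interception port in Squid 3.2 rejects the ordinary forward-proxy requests that remote browsers send. A sketch of splitting the two roles onto separate ports (the second port number is arbitrary):

      # intercepted LAN traffic (arriving via a firewall redirect) keeps the existing port
      http_port 3128 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/share/ssl-cert/myCA.pem
      # remote users point their browsers at a plain proxy port
      http_port 3129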

    Read the article

  • How do I properly configure a ZipInstaller .zic file?

    - by Iszi Rory or Isznti
    As of version 1.20, ZipInstaller is supposed to support the use of a configuration file to customize its installation options. Generally, all the options I want to use are available through the dialog, so I really haven't bothered with the configuration file until now. The problem now is that certain tools, such as PsTools from Sysinternals, do not properly show their Product Name to ZipInstaller. ZipInstaller's dialog will let you customize the Start Menu folder and Program Files folder, but that still doesn't change the Product Name that it sees for the software. So, instead of having "PsTools" in my Add/Remove Programs, I get "Sysinternals Software". For some things, the situation is even more confusing. For example, the NIST SP 800-53 Reference Database Application gets installed as "FileMaker Pro Runtime". To rectify this, I've tried to use the aforementioned .zic configuration file. As I understand it, it's a basic INI file you create and put in the root of the ZIP file. ZipInstaller is supposed to read that file, and adjust its parameters accordingly. Mine looks like this: [install] ProductName=NIST_SP_800-53 ProductVersion=1.4.1 CompanyName=NIST Description=NIST_SP_800-53 InstallFolder=%zi.ProgramFiles%\%zi.ProductName% StartMenuFolder=%zi.CompanyName%\%zi.ProductName% I've named it ~zipinst~.zic and placed it in the root of the ZIP file, but when I run ZipInstaller it doesn't seem to recognize any of the information I've given it in the .zic file. What might I be doing wrong here?

    Read the article

  • squid bypass for a domain

    - by krisdigitx
    I am using squid with adzap. Is it possible to have squid/adzap not cache a particular domain, e.g. cnn.com? This is my squid.conf file: # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 #acl localhost src ::1/128 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 #acl to_localhost dst ::1/128 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl localnet src 192.168.1.0/24 acl localnet src 192.168.2.0/24 acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user #http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localnet http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port xxx.xxx.xxx.yyy:3128 transparent visible_hostname proxyserver.local # We recommend you to use at least the following line. hierarchy_stoplist cgi-bin ? # Uncomment and adjust the following to add a disk cache directory. cache_dir ufs /var/spool/squid 1024 16 256 # Leave coredumps in the first cache dir coredump_dir /var/spool/squid # Add any of your own refresh_pattern entries above these. refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 access_log /var/log/squid/squid.log squid access_log syslog squid redirect_program /usr/local/adzap/scripts/wrapzap Fixed using: acl allow_domains dstdomain www.cnn.com always_direct allow allow_domains
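
    For reference, a sketch of the caching side of the same idea, hedged in that always_direct (as in the fix noted above) only forces direct fetches past any peers, while keeping a domain out of the local cache takes a cache deny rule:

      acl nocache dstdomain .cnn.com
      cache deny nocache           # do not store objects from this domain
      always_direct allow nocache  # and fetch them directly, not via any peer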

    Read the article

  • What is the worst programming language you ever worked with? [closed]

    - by Ludwig Weinzierl
    If you have an interesting story to share, please post an answer, but do not abuse this question for bashing a language. We are programmers, and our primary tool is the programming language we use. While there is a lot of discussion about the best one, I'd like to hear your stories about the worst programming languages you ever worked with, and I'd like to know exactly what annoyed you. I'd like to collect these stories partly to avoid common pitfalls while designing a language (especially a DSL) and partly to avoid quirky languages in the future in general. This question is not subjective. If a language supports only single-character identifiers (see my own answer) this is bad in a non-debatable way. EDIT Some people have raised concerns that this question attracts trolls. Wading through all your answers made one thing clear. The large majority of answers are appropriate, useful and well written. UPDATE 2009-07-01 19:15 GMT The language overview is now complete, covering 103 different languages from 102 answers. I decided to be lax about what counts as a programming language and included anything reasonable. Thank you David for your comments on this. Here are all programming languages covered so far (alphabetical order, linked with answer, new entries in bold): ABAP, all 20th century languages, all drag and drop languages, all proprietary languages, APF, APL (1), AS400, Authorware, Autohotkey, BancaStar, BASIC, Bourne Shell, Brainfuck, C++, Centura Team Developer, Cobol (1), Cold Fusion, Coldfusion, CRM114, Crystal Syntax, CSS, Dataflex 2.3, DB/c DX, dbase II, DCL, Delphi IDE, Doors DXL, DOS batch (1), Excel Macro language, FileMaker, FOCUS, Forth, FORTRAN, FORTRAN 77, HTML, Illustra web blade, Informix 4th Generation Language, Informix Universal Server web blade, INTERCAL, Java, JavaScript (1), JCL (1), karol, LabTalk, Labview, Lingo, LISP, Logo, LOLCODE, LotusScript, m4, Magic II, Makefiles, MapBasic, MaxScript, Meditech Magic, MEL, mIRC Script, MS Access, MUMPS, Oberon, object extensions to C, Objective-C, OPS5, Oz, Perl (1), PHP, PL/SQL, PowerDynamo, PROGRESS 4GL, prova, PS-FOCUS, Python, Regular Expressions, RPG, RPG II, Scheme, ScriptMaker, sendmail.conf, Smalltalk, Smalltalk, SNOBOL, SpeedScript, Sybase PowerBuilder, Symbian C++, System RPL, TCL, TECO, The Visual Software Environment, Tiny praat, TransCAD, troff, uBasic, VB6 (1), VBScript (1), VDF4, Vimscript, Visual Basic (1), Visual C++, Visual Foxpro, VSE, Webspeed, XSLT The answers covering 80386 assembler, VB6 and VBScript have been removed.

    Read the article

  • Apache2 CGIs crash on ODBC db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a CGI from apache2. Ruby CGIs that don't try to access the ODBC datasource work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So, the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these CGI scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail of any aspect if that will help. Details: Mac OS X Server 10.5.8 Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB) Apache 2.2.13 ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0] ruby-odbc 0.9997 dbd-odbc (0.2.5) dbi (0.4.3) mod_ruby 1.3.0 Perl -- 5.8.8 DBI -- 1.609 DBD::ODBC -- 1.23 odbc driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib) odbc datasource: FileMaker Server 10 (v10.0.2.206) A minimal version of a script (anonymized) that will crash in apache but run successfully from a shell: #!/usr/bin/ruby require 'cgi' require 'odbc' cgi = CGI.new("html3") aConnection = ODBC::connect('DBFile', "username", 'password') aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044") aQuery.execute aRecord = aQuery.fetch_hash.inspect aQuery.drop aConnection.disconnect # aRecord = '{"zzz_kP_ID"=>81044.0}' cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } } Example of running this from a shell: gamma% ./minimal.rb (offline mode: enter name=value pairs on standard input) Content-Type: text/html Content-Length: 134 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>% gamma% Typical crash log lines: Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236] Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0 Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253] Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
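
    One avenue consistent with "runs in a shell, dies under Apache", offered strictly as a guess: Apache starts CGIs with a nearly empty environment, so ODBC driver-manager settings that interactive shells inherit may be absent. A sketch of pinning them in the Apache config (the ini paths are assumptions based on the /Library/ODBC install mentioned above):

      # httpd.conf -- make the ODBC configuration visible to CGI children
      SetEnv ODBCINI /Library/ODBC/odbc.ini
      SetEnv ODBCINSTINI /Library/ODBC/odbcinst.ini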

    Read the article

  • tproxy squid bridge very slow when cache is full

    - by Roberto
    I have installed a bridging tproxy proxy on a fast server with 8 GB of RAM. The traffic is around 60Mb/s. When I start the proxy for the first time (with the cache empty) the proxy works very well, but when the cache becomes full (a few hours later) the bridge goes very slow, the traffic drops below 10Mb/s and the proxy server becomes unusable. Any hints as to what may be happening? I'm using: linux-2.6.30.10 iptables-1.4.3.2 squid-3.1.1 compiled with these options: ./configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --datadir=/usr/share --localstatedir=/var/lib --sysconfdir=/etc/squid --libexecdir=/usr/libexec/squid --localstatedir=/var --datadir=/usr/share/squid --enable-removal-policies=lru,heap --enable-icmp --disable-ident-lookups --enable-cache-digests --enable-delay-pools --enable-arp-acl --with-pthreads --with-large-files --enable-htcp --enable-carp --enable-follow-x-forwarded-for --enable-snmp --enable-ssl --enable-async-io=32 --enable-linux-netfilter --enable-epoll --disable-poll --with-maxfd=16384 --enable-err-languages=Spanish --enable-default-err-language=Spanish My squid.conf: cache_mem 100 MB memory_pools off acl manager proto cache_object acl localhost src 127.0.0.1/32 acl localhost src ::1/128 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 acl to_localhost dst ::1/128 acl localnet src 10.0.0.0/8 # RFC1918 possible internal network acl localnet src 172.16.0.0/12 # RFC1918 possible internal network acl localnet src 192.168.0.0/16 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl net-g1 src xxx.xxx.xxx.xxx/24 acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow net-g1 http_access allow localnet http_access allow localhost http_access deny all http_port 3128 http_port 3129 tproxy hierarchy_stoplist cgi-bin ? cache_dir ufs /var/spool/squid 8000 16 256 access_log none cache_log /var/log/squid/cache.log coredump_dir /var/spool/squid refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . I have this issue when the cache is full, but do not really know if it is because of that. Thanks in advance, and sorry for my English. roberto
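
    One commonly suggested change, framed as an assumption since nothing above proves disk I/O is the culprit: the ufs store type does its disk work in Squid's main loop and can stall the whole proxy once the cache fills and replacement churn begins; the build already enables async I/O, which the threaded aufs store type uses:

      cache_dir aufs /var/spool/squid 8000 16 256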

    Read the article

  • configure squid3 to set up a web proxy in Ubuntu 12.04

    - by Gnijuohz
    I am on a LAN and have to use the proxy I've been given, which allows only very limited web access. I can't even use Google, github.com or SE sites. However, I can use ssh to log into a server on which I have root access, so basically I can do anything I want with it. So I was thinking that maybe I could use that server as a proxy so I can visit sites through it. I tested it using ssh -vT [email protected], which gave a proper response; from my own computer I can't do this. Also, I tried downloading something from gnu.org using wget, which can't be done from my computer either, and it succeeded on that server. I don't know if that's enough to say that this server has full access to the Internet, but I assumed so and installed squid3 on it. After trying for a while, I failed to get it working. This is what I got after running squid3 -k parse: 2012/07/06 21:45:18| Processing Configuration File: /etc/squid3/squid.conf (depth 0) 2012/07/06 21:45:18| Processing: acl manager proto cache_object 2012/07/06 21:45:18| Processing: acl localhost src 127.0.0.1/32 ::1 2012/07/06 21:45:18| Processing: acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 2012/07/06 21:45:18| Processing: acl localnet src 10.1.0.0/16 # RFC1918 possible internal network 2012/07/06 21:45:18| Processing: acl SSL_ports port 443 2012/07/06 21:45:18| Processing: acl Safe_ports port 80 # http 2012/07/06 21:45:18| Processing: acl Safe_ports port 21 # ftp 2012/07/06 21:45:18| Processing: acl Safe_ports port 443 # https 2012/07/06 21:45:18| Processing: acl Safe_ports port 70 # gopher 2012/07/06 21:45:18| Processing: acl Safe_ports port 210 # wais 2012/07/06 21:45:18| Processing: acl Safe_ports port 1025-65535 # unregistered ports 2012/07/06 21:45:18| Processing: acl Safe_ports port 280 # http-mgmt 2012/07/06 21:45:18| Processing: acl Safe_ports port 488 # gss-http 2012/07/06 21:45:18| Processing: acl Safe_ports port 591 # filemaker 2012/07/06 21:45:18| Processing: acl Safe_ports port 777 # multiling http 2012/07/06 21:45:18| Processing: acl CONNECT method CONNECT 2012/07/06 21:45:18| Processing: http_port 3128 transparent vhost vport 2012/07/06 21:45:18| Starting Authentication on port [::]:3128 2012/07/06 21:45:18| Disabling Authentication on port [::]:3128 (interception enabled) 2012/07/06 21:45:18| Disabling IPv6 on port [::]:3128 (interception enabled) 2012/07/06 21:45:18| Processing: cache_mem 1000 MB 2012/07/06 21:45:18| Processing: cache_swap_low 90 2012/07/06 21:45:18| Processing: coredump_dir /var/spool/squid3 2012/07/06 21:45:18| Processing: refresh_pattern ^ftp: 1440 20% 10080 2012/07/06 21:45:18| Processing: refresh_pattern ^gopher: 1440 0% 1440 2012/07/06 21:45:18| Processing: refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 2012/07/06 21:45:18| Processing: refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880 2012/07/06 21:45:18| Processing: refresh_pattern . 0 20% 4320 2012/07/06 21:45:18| Processing: ipcache_high 95 2012/07/06 21:45:18| Processing: http_access allow all I deleted some allow and deny rules and added http_access allow all so that all requests would be allowed. After configuring my computer, I got this error: Access control configuration prevents your request from being allowed at this time. Please contact your service provider if you feel this is incorrect. And the log on the server showed that my TCP requests had all been denied. So, first of all, is what I am trying to do achievable? If so, how do I configure squid on the server so that I can use it as a proxy to surf the Internet? My computer and the server both run Ubuntu 11.04. Thanks for any help~
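
    One hedged observation from the parse output above: http_port 3128 transparent vhost vport declares an interception port (the log even disables authentication and IPv6 because of it), and a remote browser configured to use the machine as an ordinary proxy will be refused on such a port regardless of the http_access rules. A sketch for an explicit forward proxy, with the allow kept ahead of any deny because http_access rules match top-down:

      http_port 3128                 # no 'transparent vhost vport' for an explicit proxy
      http_access allow localhost
      http_access allow all          # better: an ACL restricted to your own address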

    Read the article

  • Squid Proxy: url_regex acl is not working?

    - by bharathi
    I am using squid proxy 3.1 on an Ubuntu machine. I want to allow only URLs matching our pattern through our proxy server. I configured the ACLs as below. The ACL for dstdomain is working fine: if I access any URL besides .zmedia.com, I get proxy connection refused. But the url_regex is not working. What I am trying to do here is allow only requests to the ".zmedia.com" domain where the request URL is in the "/blog" context. # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 ::1 acl urlwhitelist url_regex -i ^http(s)://([a-zA-Z]+).zmedia.com/blog/.*$ acl allowdomain dstdomain .zmedia.com acl Safe_ports port 80 8080 8500 7272 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl SSL_ports port 7272 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager http_access deny !allowdomain http_access allow urlwhitelist http_access allow CONNECT SSL_ports http_access deny CONNECT !SSL_ports # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user #http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 3128 # We recommend you to use at least the following line. hierarchy_stoplist cgi-bin ? # Uncomment and adjust the following to add a disk cache directory. #cache_dir ufs /var/spool/squid 100 16 256 # Leave coredumps in the first cache dir coredump_dir /var/spool/squid append_domain .zmedia.com # Add any of your own refresh_pattern entries above these. refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 Please correct me if I did anything wrong.
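
    A sketch of the pattern fix; the regex critique is definite, the rest is left to testing: in the ACL above, (s) is a required group, so ^http(s):// matches only https URLs, and the unescaped dots match any character. Rule order between the deny !allowdomain and allow urlwhitelist lines may also need attention:

      acl urlwhitelist url_regex -i ^https?://[a-zA-Z]+\.zmedia\.com/blog/.*$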

    Read the article

  • Ubuntu 9.10 and Squid 2.7 Transparent Proxy TCP_DENIED

    - by user38400
    Hi, We've spent the last two days trying to get squid 2.7 to work with Ubuntu 9.10. The computer running Ubuntu has two network interfaces, eth0 and eth1, with dhcp running on eth1. Both interfaces have static IPs; eth0 is connected to the Internet and eth1 is connected to our LAN. We have followed literally dozens of different tutorials with no success. The tutorial here was the last one we did that actually got us some sort of results: http://www.basicconfig.com/linuxnetwork/setup_ubuntu_squid_proxy_server_beginner_guide. When we try to access a site like seriouswheels.com from the LAN we get the following message on the client machine: ERROR The requested URL could not be retrieved Invalid Request error was encountered while trying to process the request: GET / HTTP/1.1 Host: www.seriouswheels.com Connection: keep-alive User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.307.11 Safari/532.9 Cache-Control: max-age=0 Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,/;q=0.5 Accept-Encoding: gzip,deflate,sdch Cookie: __utmz=88947353.1269218405.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none); __qca=P0-1052556952-1269218405250; __utma=88947353.1027590811.1269218405.1269218405.1269218405.1; __qseg=Q_D Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Some possible problems are: Missing or unknown request method. Missing URL. Missing HTTP Identifier (HTTP/1.0). Request is too large. Content-Length missing for POST or PUT requests. Illegal character in hostname; underscores are not allowed. Your cache administrator is webmaster. Below are all the configuration files: /etc/squid/squid.conf, /etc/network/if-up.d/00-firewall, /etc/network/interfaces, /var/log/squid/access.log. Something somewhere is wrong but we cannot figure out where. Our end goal for all of this is to superimpose content onto every page that a client requests on the LAN. We've been told that squid is the way to do this, but at this point in the game we are just trying to get squid set up correctly as our proxy. Thanks in advance. squid.conf acl all src all acl manager proto cache_object acl localhost src 127.0.0.1/32 acl to_localhost dst 127.0.0.0/8 acl localnet src 192.168.0.0/24 acl SSL_ports port 443 # https acl SSL_ports port 563 # snews acl SSL_ports port 873 # rsync acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl Safe_ports port 631 # cups acl Safe_ports port 873 # rsync acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access deny !Safe_ports http_access deny CONNECT !SSL_ports http_access allow localhost http_access allow localnet http_access deny all icp_access allow localnet icp_access deny all http_port 3128 hierarchy_stoplist cgi-bin ? cache_dir ufs /var/spool/squid/cache1 1000 16 256 access_log /var/log/squid/access.log squid refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880 refresh_pattern . 
0 20% 4320 acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9] upgrade_http0.9 deny shoutcast acl apache rep_header Server ^Apache broken_vary_encoding allow apache extension_methods REPORT MERGE MKACTIVITY CHECKOUT cache_mgr webmaster cache_effective_user proxy cache_effective_group proxy hosts_file /etc/hosts coredump_dir /var/spool/squid access.log 1269243042.740 0 192.168.1.11 TCP_DENIED/400 2576 GET NONE:// - NONE/- text/html 00-firewall iptables -F iptables -t nat -F iptables -t mangle -F iptables -X echo 1 | tee /proc/sys/net/ipv4/ip_forward iptables -t nat -A POSTROUTING -j MASQUERADE iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128 networking auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 142.104.109.179 netmask 255.255.224.0 gateway 142.104.127.254 auto eth1 iface eth1 inet static address 192.168.1.100 netmask 255.255.255.0
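
    A hedged pointer from the files above: access.log records GET NONE://, the classic signature of intercepted traffic arriving on a port Squid treats as an ordinary proxy port, and this squid.conf declares http_port 3128 without the transparent flag even though iptables redirects port 80 to it. For Squid 2.7 the sketch is one line:

      http_port 3128 transparent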

    Read the article

  • Why does Process Explorer cause highly targeted failure of some applications / basic UI functions in a high-power EC2 Windows instance?

    - by Dan Nissenbaum
    Update: I have determined that Process Explorer itself - the program I am using to debug a performance issue - seems to be the cause of the issue. See note, with updated question, at end. I am running a high-power (cc2.8xlarge) Amazon AWS EC2 Windows instance off of a boot EBS volume, provisioned at 2500 PIOPS, which was created from a snapshot of a previous boot volume. My purpose with the instance is to use it as a development workstation with many developer tools installed, such as Visual Studio, a local XAMPP stack, etc. I have upwards of 40 programs installed on the machine. The usability of the instance as a development machine often works quite well. The RDP lag is adequately small. I have used it for hours on end without problems for some of my most intense development tasks. As a result, I have just purchased a reserved instance, and I opted to rebuild my development machine starting from scratch with a Windows Server 2012 AMI. After having installed all of my desired/required applications for development over this past week, again the machine seems to often work well and I have worked for up to an hour at a time without problems doing heavy development work. However, I continue to run into catastrophic OS usability issues that may prevent me from being able to rely on this machine as a development machine. I would like to track down the source of the problem, if there is an easily identifiable source. (Update: I have tracked down the source to be Process Explorer, the very program I was using to debug the problem. See update at end.) The issues are as follows. (These are some primary examples) Some applications, after a period of adequate responsiveness, suddenly begin to respond very, very slowly to basic user interface actions such as clicking on menus and pressing Ctrl-Tab to switch between open documents. Two examples are UltraEdit and PhpEd. It typically takes ~2 seconds for a menu to appear, and ~4 seconds to switch between open documents. Additionally, insertion point motion in the editor is lagged by upwards of ~2 seconds. Process Explorer, which I am using to help debug the problem, seems to run acceptably for a couple of minutes, but on multiple occasions Process Explorer itself hangs completely. It hangs at the same time as the problems noted above. When it hangs, it is 100% unresponsive. Clicking on its taskbar icon neither causes it to come to the top or go behind, and its viewable area is filled with nothing but a region partially containing pure white and partially containing incomplete windows widgets that are unreadable, and that never change. Waiting 10 minutes does not clear the problem. Attempting to force-quit Process Explorer by right-clicking on its taskbar icon and choosing "Close Window" takes about 5 full minutes to exit (Process Explorer itself can't be used to exit Process Explorer, and it is registered as a Task Manager substitute). Other programs work just fine during this time. For example, Chrome tabs flip very quickly back and forth, menus pop open instantly, web pages load quickly, and typing in forms/web applications inside the browser works promptly. Another example of an application that works crisply is Filemaker - its menus open instantly, and switching views in this application occurs promptly. Other applications also work without issue. Also, switching between applications occurs promptly as well. It is only a handful of applications that exhibit the problem, with some primary examples given above. 
At first I thought that EBS IOPS might be a problem. Therefore, I ran Performance Monitor, and watched the "Disk Transfers/sec" monitor in real time. At no point did this measure come anywhere close to hitting the 2500 PIOPS provisioned for the EBS volume. The RAM was also well under the limit (~10 GB used out of 60 GB). I did notice that one CPU core (out of 32 logical cores) was fully thrashing at 100% (i.e., ~3.1%) during the problematic periods. This seems to indicate that a single CPU core is handling the menus / flipping between open documents (for some applications only) / managing the Process Explorer user interface, and that this single core was hosed for some reason during the problematic periods. Also note that I have a desktop workstation (Windows 7) that I also use as a development machine, via a remote connection, with a nearly identical set of programs installed, and this desktop workstation does not exhibit any of the problems I've discussed above. I have been using it heavily for well over a year now. Any suggestions regarding either the source of the problem, or steps I might take to investigate the source of the problem, would be appreciated. Thanks. Note: After extensive testing & investigation, I have noticed that when I quit Process Explorer, the problem vanishes and the system performance returns to normal, and then reappears quickly when I run Process Explorer again (note: again, the performance problems only appear for a subset of applications - other applications work perfectly fine during the same period). My question is therefore (thankfully) more specific: Why does Process Explorer cause highly targeted failure of some applications (including itself) and basic UI functions, in a high-power EC2 Windows instance?

    Read the article

  • FreeBSD high load loopback interface

    - by user1740915
    I have a problem with a FreeBSD server. It is a FreeBSD 9.0 amd64 box with two network cards, em1 (Internet) and em0 (local network), configured with an ipfw firewall, natd and squid (not transparent); the server acts as a gateway for access to the Internet. The problem: upload speed via squid is very low. At this moment I see the following: natd and dhcpd load the CPU whenever something uploads through squid, and there is a lot of traffic through the loopback interface. ipfw show output: 0100 655389684 36707144666 allow ip from any to any via lo0 00200 0 0 deny ip from any to 127.0.0.0/8 00300 0 0 deny ip from 127.0.0.0/8 to any 00400 0 0 deny ip from any to ::1 00500 0 0 deny ip from ::1 to any 00600 4 292 allow ipv6-icmp from :: to ff02::/16 00700 0 0 allow ipv6-icmp from fe80::/10 to fe80::/10 00800 1 76 allow ipv6-icmp from fe80::/10 to ff02::/16 00900 0 0 allow ipv6-icmp from any to any ip6 icmp6types 1 01000 0 0 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136 01100 1615 76160 deny ip from 192.168.1.1 to any in via em1 01200 0 0 deny ip from 199.69.99.11 to any in via em0 01300 46652 3705426 deny ip from any to 172.16.0.0/12 via em1 01400 3936404 345618870 deny ip from any to 192.168.0.0/16 via em1 01500 4 336 deny ip from any to 0.0.0.0/8 via em1 01600 4129 387621 deny ip from any to 169.254.0.0/16 via em1 01700 0 0 deny ip from any to 192.0.2.0/24 via em1 01800 917566 33777571 deny ip from any to 224.0.0.0/4 via em1 01900 147872 22029252 deny ip from any to 240.0.0.0/4 via em1 02000 1132194739 1190981955947 divert 8668 ip4 from any to any via em1 02100 3 248 deny ip from 172.16.0.0/12 to any via em1 02200 35925 2281289 deny ip from 192.168.0.0/16 to any via em1 02300 1808 122494 deny ip from 0.0.0.0/8 to any via em1 02400 3 174 deny ip from 169.254.0.0/16 to any via em1 02500 0 0 deny ip from 192.0.2.0/24 to any via em1 02600 0 0 deny ip from 224.0.0.0/4 to any via em1 02700 0 0 deny ip from 240.0.0.0/4 to any via em1 02800 960156249 1095316736582 allow tcp from any to any established 02900 64236062 8243196577 allow ip from any to any frag 03000 34 1756 allow tcp from any to me dst-port 25 setup 03100 193 11580 allow tcp from any to me dst-port 53 setup 03200 63 4222 allow udp from any to me dst-port 53 03300 64 8350 allow udp from me 53 to any 03400 417 24140 allow tcp from any to me dst-port 80 setup 03500 211 10472 allow ip from any to me dst-port 3389 setup 05300 77 4488 allow ip from any to me dst-port 1723 setup 05400 3 156 allow ip from any to me dst-port 8443 setup 05500 9882 590596 allow tcp from any to me dst-port 22 setup 05600 1 60 allow ip from any to me dst-port 2000 setup 05700 0 0 allow ip from any to me dst-port 2201 setup 07400 4241779 216690096 deny log logamount 1000 ip4 from any to any in via em1 setup proto tcp 07500 21135656 1048824936 allow tcp from any to any setup 07600 474447 35298081 allow udp from me to any dst-port 53 keep-state 07700 532 40612 allow udp from me to any dst-port 123 keep-state 65535 1990638432 1122305322718 allow ip from any to any systat -ifstat when uploading via squid: Load Average ||| Interface Traffic Peak Total tun0 in 79.507 KB/s 232.479 KB/s 42.314 GB out 2.022 MB/s 2.424 MB/s 59.662 GB lo0 in 4.450 MB/s 4.450 MB/s 43.723 GB out 4.450 MB/s 4.450 MB/s 43.723 GB em1 in 2.629 MB/s 2.982 MB/s 464.533 GB out 2.493 MB/s 2.875 MB/s 484.673 GB em0 in 240.458 KB/s 296.941 KB/s 442.368 GB out 512.508 KB/s 850.857 KB/s 416.122 GB top output: PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 66885 root 1 92 0 26672K 2784K CPU3 3 528:43 65.48% natd 9160
dhcpd 1 45 0 31032K 9280K CPU1 1 7:40 32.96% dhcpd 66455 root 1 20 0 18344K 2856K select 1 119:27 1.37% openvpn 16043 squid 1 20 0 44404K 17884K kqread 2 0:22 0.29% squid squid.conf cat /usr/local/etc/squid/squid.conf # # Recommended minimum configuration: # acl manager proto cache_object acl localhost src 127.0.0.1/32 ::1 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1 # Example rule allowing access from your local networks. # Adapt to list your (internal) IP networks from where browsing # should be allowed acl localnet src 10.0.0.0/8 # RFC1918 possible internal network acl localnet src 172.16.0.0/12 # RFC1918 possible internal network acl localnet src 192.168.0.0/16 # RFC1918 possible internal network acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Only allow cachemgr access from localhost http_access allow manager localhost http_access deny manager # Deny requests to certain unsafe ports http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports http_access deny CONNECT !SSL_ports # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on "localhost" is a local user http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localnet http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 192.168.1.1:3128 # Uncomment and adjust the following to add a disk cache directory. #cache_dir ufs /var/squid/cache 100 16 256 # Leave coredumps in the first cache dir coredump_dir /var/squid/cache I understand that the traffic passes through the SQUID several times. But can not find why.
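
    One hedged note on the numbers above: userland natd eating 65% of a core is often the bottleneck on a busy FreeBSD gateway; FreeBSD 9 can perform the same translation in-kernel with ipfw nat, sketched here (rule number arbitrary; this would replace the divert 8668 rule):

      ipfw nat 1 config if em1
      ipfw add 02000 nat 1 ip4 from any to any via em1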

    Read the article

  • How to export SSIS to Microsoft Excel without additional software?

    - by Dr. Zim
    This question is long-winded because I have been updating it over a very long time while trying to get SSIS to export Excel data properly. I managed to work around the issue, although not cleanly; short of someone providing a correct answer, the solution described here is not terrible. The only answer I found was to create a single-row named range wide enough for my columns. In the named range, put sample data and hide the row. SSIS appends the data and reads metadata from the single row (which is close enough for it to drop data in), and the appended data takes the format of the hidden row. This allows headers, etc. WOW, what a pain in the butt. It will take over 450 days of exports to recover the time lost. However, I still love SSIS and will continue to use it, because it is still far better than FileMaker, LOL. My next attempt will be doing the same thing in the report server.

    Original question notes: If you are in the SQL Server Integration Services designer and want to export data to an Excel file starting on something other than the first line, let's say the fourth line, how do you specify this? I tried going into the Excel Destination of the Data Flow, changed the AccessMode to OpenRowSet from Variable, then set the variable to "YPlatters$A4:I20000". This fails, saying it cannot find the sheet. The sheet is called YPlatters. I thought you could specify (Sheet$)(Starting Cell):(Ending Cell)?

    Update: Apparently in Excel you can select a set of cells and name them with the name box. This lets you select the name instead of the sheet, without the $ dollar sign. Oddly enough, whatever range you specify, SSIS appends the data to the next row after the range, and as you add data, it increases the named selection's row count. Another odd thing is that the data takes the format of the last line of the range specified. My header rows are bold: if I specify a range that ends with the header row, the data appends to the row below and makes all the entries bold; if I specify one row lower, there is a blank line between the header row and the data, but the data is not bold.

    Another update: No matter what I try, SSIS samples the "first row" of the file and sets the metadata according to what it finds. However, if your sample data has a value of zero formatted as text in that first row, it treats the column as text and inserts numeric values with a single quote in front ('123.34). I also tried headers that do not reflect the data types of the columns. I tried changing the metadata of the Excel destination, but it always changes it back when I run the project, then fails saying it will truncate data. If I tell it to ignore errors, it imports everything except that column. Several days of several hours apiece later...

    Another update: I tried every combination. A mostly working example is to create the named range starting with the column headers, formatted as you want the data to look, since the data takes on this format. In my example these exist from A4 to E4, which is my defined range. SSIS appends to the row after the defined range, so defining A4 to E68 appends the rows starting at A69. You define the connection as having field names in the first row. It takes on the metadata of the header row (oddly, not the second row) and guesses at the data type rather than the formatted data type of the column; i.e., headers are text, so all my metadata is text. If your headers are bold, so is all of your data. I even tried making a sample data row, without success...
    I don't think anyone actually uses Excel with the default MS SSIS export. If you could define the "insert range" (A5 to E5) with no header row and format those columns (currency, not bold, etc.) without it skipping a row in Excel, this would be very helpful. From what I gather, no one uses SSIS to export to Excel without a third-party connection manager. Any ideas on how to set this up properly, so that the metadata read from Excel matches the real data and formatting is inherited from the first row of data rather than the headers?

    One last update (July 17, 2009): I got this to work very well. One thing I added to the Excel connection string was IMEX=1: "Excel 8.0;HDR=Yes;IMEX=1". This forces Excel (I think) to look at all rows to see what kind of data is in them. Generally this prevents dropped information: say, for instance, you have a zip code, and about 9 rows down you have a zip+4; without this, Excel blanks that field entirely, without error. With IMEX=1, it recognizes that Zip is actually a character field instead of numeric.

    And of course, one more update (August 27, 2009): IMEX=1 will succeed at importing data with missing contents in the first 8 rows, but it will fail at exporting data where no data exists. So, have it on your import connection string, but not your export Excel connection string. I have to say, after so much fiddling, it works pretty well.
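    For reference, here is a sketch of the two connection strings the updates imply. The Jet provider name is the usual one for the "Excel 8.0" format, and the file path is a made-up placeholder; only the Extended Properties values come from the question itself:

        Import:  Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\exports\YPlatters.xls;Extended Properties="Excel 8.0;HDR=Yes;IMEX=1"
        Export:  Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\exports\YPlatters.xls;Extended Properties="Excel 8.0;HDR=Yes"

    IMEX=1 stays on the import side only: it makes the driver scan rows and keep mixed-type columns as text, but, per the August 27 update, it breaks exports to ranges where no data exists yet.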

    Read the article

  • XSLT ... I can't find a (working) way to find the minimum value in XML and set a variable

    - by Bob
    I've searched for hours and not found an example that allows the very first position to be the lowest. I'm getting 'False' instead of the value returned.

    EDIT: Oddly enough, if I run a second instance as MAX_Landed with ascending order, it returns a value just fine. If I switch the order in the XSLT, the first instance will return 'False' and the second will work. Hope I'm making sense.

    Example XML, which I can't get formatted to show correctly; I'm in a hurry, so you get the gist, I hope:

        <?xml version="1.0"?>
        <GetLowestOfferListingsForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01">
          <GetLowestOfferListingsForASINResult ASIN="0470067802" status="Success">
            <AllOfferListingsConsidered>false</AllOfferListingsConsidered>
            <Product xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01"
                     xmlns:ns2="http://mws.amazonservices.com/schema/Products/2011-10-01/default.xsd">
              <LowestOfferListings>
                <LowestOfferListing>
                  <Qualifiers>
                    <ItemCondition>Used</ItemCondition>
                    <ItemSubcondition>Good</ItemSubcondition>
                  </Qualifiers>
                  <Price>
                    <LandedPrice>
                      <Amount>15.71</Amount>
                    </LandedPrice>
                  </Price>
                </LowestOfferListing>
                <LowestOfferListing>
                  <Qualifiers>
                    <ItemCondition>Used</ItemCondition>
                    <ItemSubcondition>Good</ItemSubcondition>
                  </Qualifiers>
                  <Price>
                    <LandedPrice>
                      <Amount>16.71</Amount>
                    </LandedPrice>
                  </Price>
                </LowestOfferListing>
                <LowestOfferListing>
                  <Qualifiers>
                    <ItemCondition>Used</ItemCondition>
                    <ItemSubcondition>Good</ItemSubcondition>
                  </Qualifiers>
                  <Price>
                    <LandedPrice>
                      <Amount>18.71</Amount>
                    </LandedPrice>
                  </Price>
                </LowestOfferListing>
              </LowestOfferListings>
            </Product>
          </GetLowestOfferListingsForASINResult>
        </GetLowestOfferListingsForASINResponse>

    Example XSLT:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
                        xmlns:amz="http://mws.amazonservices.com/schema/Products/2011-10-01"
                        exclude-result-prefixes="amz">
          <xsl:output method="xml" version="1.0" encoding="utf-8" indent="yes"/>
          <xsl:template match="/">
            <xsl:variable name="MIN_Landed">
              <xsl:for-each select="//Amount">
                <xsl:sort data-type="number" order="ascending"/>
                <xsl:if test="position()=1"><xsl:value-of select="."/></xsl:if>
              </xsl:for-each>
            </xsl:variable>
            <FMPXMLRESULT xmlns="http://www.filemaker.com/fmpxmlresult">
              <ERRORCODE>0</ERRORCODE>
              <PRODUCT BUILD="" NAME="" VERSION=""/>
              <DATABASE DATEFORMAT="M/d/yyyy" LAYOUT="" NAME="" RECORDS="1" TIMEFORMAT="h:mm:ss a"/>
              <METADATA>
                <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="DATA" TYPE="TEXT"/>
                <FIELD EMPTYOK="YES" MAXREPEAT="1" NAME="Min_Landed" TYPE="TEXT"/>
              </METADATA>
              <RESULTSET>
                <xsl:attribute name="FOUND">1</xsl:attribute>
                <xsl:for-each select="amz:GetLowestOfferListingsForASINResponse/amz:GetLowestOfferListingsForASINResult/amz:Product/amz:LowestOfferListings/amz:LowestOfferListing">
                  <ROW>
                    <xsl:attribute name="MODID">0</xsl:attribute>
                    <xsl:attribute name="RECORDID">1</xsl:attribute>
                    <COL>
                      <DATA><xsl:value-of select="amz:Qualifiers/amz:ItemCondition"/></DATA>
                    </COL>
                    <COL>
                      <DATA><xsl:value-of select="$MIN_Landed"/></DATA>
                    </COL>
                  </ROW>
                </xsl:for-each>
              </RESULTSET>
            </FMPXMLRESULT>
          </xsl:template>
        </xsl:stylesheet>

    HELP PLEASE! I really didn't want to post so much Amazon code, but here it is, stripped down to a bare-bones response.
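    One likely culprit, offered as a guess rather than a verified fix: in XPath 1.0 an unprefixed step like Amount only matches elements in no namespace, but every element in the Amazon response lives in the amz namespace (note the for-each lower down uses amz: prefixes throughout), so //Amount selects nothing and the variable comes out empty. A minimal sketch of the variable with the prefix added:

        <xsl:variable name="MIN_Landed">
          <!-- assumption: the amz prefix is needed here, matching its use below -->
          <xsl:for-each select="//amz:Amount">
            <xsl:sort data-type="number" order="ascending"/>
            <xsl:if test="position()=1"><xsl:value-of select="."/></xsl:if>
          </xsl:for-each>
        </xsl:variable>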

    Read the article

  • video and file caching with squid lusca?

    - by moon
    Hello all. I have configured squid lusca on Ubuntu 11.04 and set up video caching, but the problem is that squid will not cache videos longer than about 2 minutes or files larger than about 5 MB. Here is my config; please guide me on how I can cache longer videos and bigger files with squid:

        # PORT and Transparent Option
        http_port 8080 transparent
        server_http11 on
        icp_port 0

        # Cache Directory, modify it according to your system.
        # But first create the directory in root by: mkdir /cache1
        # and then issue this command: chown proxy:proxy /cache1
        # [for Ubuntu the user is proxy, in Fedora the user is squid]
        # I have set 500 MB reserved just for caching;
        # adjust it according to your need.
        # My recommendation is to have one cache_dir per drive.
        #store_dir_select_algorithm round-robin
        cache_dir aufs /cache1 500 16 256
        cache_replacement_policy heap LFUDA
        memory_replacement_policy heap LFUDA

        # If you want to enable date/time in squid logs, use the following
        emulate_httpd_log on
        logformat squid %tl %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
        log_fqdn off

        # How many days to keep users' access web logs.
        # You need to rotate your log files with a cron job. For example:
        # 0 0 * * * /usr/local/squid/bin/squid -k rotate
        logfile_rotate 14
        debug_options ALL,1
        cache_access_log /var/log/squid/access.log
        cache_log /var/log/squid/cache.log
        cache_store_log /var/log/squid/store.log

        # I used the dnsmasq service for fast DNS resolving,
        # so install it first with "apt-get install dnsmasq"
        dns_nameservers 127.0.0.1 101.11.11.5
        ftp_user anonymous@
        ftp_list_width 32
        ftp_passive on
        ftp_sanitycheck on

        # ACL Section
        acl all src 0.0.0.0/0.0.0.0
        acl manager proto cache_object
        acl localhost src 127.0.0.1/255.255.255.255
        acl to_localhost dst 127.0.0.0/8
        acl SSL_ports port 443 563      # https, snews
        acl SSL_ports port 873          # rsync
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443 563     # https, snews
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl Safe_ports port 631         # cups
        acl Safe_ports port 873         # rsync
        acl Safe_ports port 901         # SWAT
        acl purge method PURGE
        acl CONNECT method CONNECT
        http_access allow manager localhost
        http_access deny manager
        http_access allow purge localhost
        http_access deny purge
        http_access deny !Safe_ports
        http_access deny CONNECT !SSL_ports
        http_access allow localhost
        http_access allow all
        http_reply_access allow all
        icp_access allow all

        #==========================
        # Administrative Parameters
        #==========================
        # I used Ubuntu, so the user is proxy; in Fedora you may use squid
        cache_effective_user proxy
        cache_effective_group proxy
        cache_mgr [email protected]
        visible_hostname proxy.aacable.net
        unique_hostname [email protected]

        #=============
        # ACCELERATOR
        #=============
        half_closed_clients off
        quick_abort_min 0 KB
        quick_abort_max 0 KB
        vary_ignore_expire on
        reload_into_ims on
        log_fqdn off
        memory_pools off

        # If you want to hide your proxy machine from being detected at various sites, use the following
        via off

        #============================================
        # OPTIONS WHICH AFFECT THE CACHE SIZE / zaib
        #============================================
        # If you have 4 GB memory in the squid box, we will use a formula of 1/3.
        # You can adjust it according to your need. If squid is taking too much RAM,
        # then decrease it to 128 MB or even less.
        cache_mem 256 MB
        minimum_object_size 512 bytes
        maximum_object_size 500 MB
        maximum_object_size_in_memory 128 KB

        #============================================================
        # SNMP, if you want to generate graphs for squid via MRTG
        #============================================================
        #acl snmppublic snmp_community gl
        #snmp_port 3401
        #snmp_access allow snmppublic all
        #snmp_access allow all

        #============================================================
        # ZPH, to enable cache content to be delivered at full LAN speed,
        # to bypass the queue at MT.
        #============================================================
        tcp_outgoing_tos 0x30 all
        zph_mode tos
        zph_local 0x30
        zph_parent 0
        zph_option 136

        # Caching YouTube
        acl videocache_allow_url url_regex -i \.youtube\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.youtube\.com\/videoplayback \.youtube\.com\/videoplay \.youtube\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.youtube\.[a-z][a-z]\/videoplayback \.youtube\.[a-z][a-z]\/videoplay \.youtube\.[a-z][a-z]\/get_video\?
        acl videocache_allow_url url_regex -i \.googlevideo\.com\/videoplayback \.googlevideo\.com\/videoplay \.googlevideo\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.google\.com\/videoplayback \.google\.com\/videoplay \.google\.com\/get_video\?
        acl videocache_allow_url url_regex -i \.google\.[a-z][a-z]\/videoplayback \.google\.[a-z][a-z]\/videoplay \.google\.[a-z][a-z]\/get_video\?
        acl videocache_allow_url url_regex -i proxy[a-z0-9\-][a-z0-9][a-z0-9][a-z0-9]?\.dailymotion\.com\/
        acl videocache_allow_url url_regex -i vid\.akm\.dailymotion\.com\/
        acl videocache_allow_url url_regex -i [a-z0-9][0-9a-z][0-9a-z]?[0-9a-z]?[0-9a-z]?\.xtube\.com\/(.*)flv
        acl videocache_allow_url url_regex -i \.vimeo\.com\/(.*)\.(flv|mp4)
        acl videocache_allow_url url_regex -i va\.wrzuta\.pl\/wa[0-9][0-9][0-9][0-9]?
        acl videocache_allow_url url_regex -i \.youporn\.com\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.msn\.com\.edgesuite\.net\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.tube8\.com\/(.*)\.(flv|3gp)
        acl videocache_allow_url url_regex -i \.mais\.uol\.com\.br\/(.*)\.flv
        acl videocache_allow_url url_regex -i \.blip\.tv\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v)
        acl videocache_allow_url url_regex -i \.apniisp\.com\/(.*)\.(flv|avi|mov|mp3|m4v|mp4|wmv|rm|ram|m4v)
        acl videocache_allow_url url_regex -i \.break\.com\/(.*)\.(flv|mp4)
        acl videocache_allow_url url_regex -i redtube\.com\/(.*)\.flv
        acl videocache_allow_dom dstdomain .mccont.com .metacafe.com .cdn.dailymotion.com
        acl videocache_deny_dom dstdomain .download.youporn.com .static.blip.tv
        acl dontrewrite url_regex redbot\.org \.php
        acl getmethod method GET

        storeurl_access deny dontrewrite
        storeurl_access deny !getmethod
        storeurl_access deny videocache_deny_dom
        storeurl_access allow videocache_allow_url
        storeurl_access allow videocache_allow_dom
        storeurl_access deny all

        storeurl_rewrite_program /etc/squid/storeurl.pl
        storeurl_rewrite_children 7
        storeurl_rewrite_concurrency 10

        acl store_rewrite_list urlpath_regex -i \/(get_video\?|videodownload\?|videoplayback.*id)
        acl store_rewrite_list urlpath_regex -i \.flv$ \.mp3$ \.mp4$ \.swf$
        storeurl_access allow store_rewrite_list
        storeurl_access deny all

        refresh_pattern -i \.flv$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.mp3$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.mp4$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.swf$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.gif$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.jpg$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.jpeg$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth
        refresh_pattern -i \.exe$ 10080 80% 10080 override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth

        # 1 year = 525600 mins, 1 month = 10080 mins, 1 day = 1440
        refresh_pattern (get_video\?|videoplayback\?|videodownload\?|\.flv?) 10080 80% 10080 ignore-no-cache ignore-private override-expire override-lastmod reload-into-ims
        refresh_pattern (get_video\?|videoplayback\?id|videoplayback.*id|videodownload\?|\.flv?) 10080 80% 10080 ignore-no-cache ignore-private override-expire override-lastmod reload-into-ims
        refresh_pattern \.(ico|video-stats) 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth override-lastmod negative-ttl=10080
        refresh_pattern \.etology\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern galleries\.video(\?|sz) 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern brazzers\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern \.adtology\? 10080 80% 10080 override-expire ignore-reload ignore-no-cache
        refresh_pattern ^.*(utm\.gif|ads\?|rmxads\.com|ad\.z5x\.net|bh\.contextweb\.com|bstats\.adbrite\.com|a1\.interclick\.com|ad\.trafficmp\.com|ads\.cubics\.com|ad\.xtendmedia\.com|\.googlesyndication\.com|advertising\.com|yieldmanager|game-advertising\.com|pixel\.quantserve\.com|adperium\.com|doubleclick\.net|adserving\.cpxinteractive\.com|syndication\.com|media.fastclick.net).* 10080 20% 10080 ignore-no-cache ignore-private override-expire ignore-reload ignore-auth negative-ttl=40320 max-stale=10
        refresh_pattern ^.*safebrowsing.*google 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth negative-ttl=10080
        refresh_pattern ^http://((cbk|mt|khm|mlt)[0-9]?)\.google\.co(m|\.uk) 10080 80% 10080 override-expire ignore-reload ignore-private negative-ttl=10080
        refresh_pattern ytimg\.com.*\.jpg 10080 80% 10080 override-expire ignore-reload
        refresh_pattern images\.friendster\.com.*\.(png|gif) 10080 80% 10080 override-expire ignore-reload
        refresh_pattern garena\.com 10080 80% 10080 override-expire reload-into-ims
        refresh_pattern photobucket.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% 10080 override-expire ignore-reload
        refresh_pattern vid\.akm\.dailymotion\.com.*\.on2\? 10080 80% 10080 ignore-no-cache override-expire override-lastmod
        refresh_pattern mediafire.com\/images.*\.(jp(e?g|e|2)|tiff?|bmp|gif|png) 10080 80% 10080 reload-into-ims override-expire ignore-private
        refresh_pattern ^http:\/\/images|pics|thumbs[0-9]\. 10080 80% 10080 reload-into-ims ignore-no-cache ignore-reload override-expire
        refresh_pattern ^http:\/\/www.onemanga.com.*\/ 10080 80% 10080 reload-into-ims ignore-no-cache ignore-reload override-expire
        refresh_pattern ^http://v\.okezone\.com/get_video\/([a-zA-Z0-9]) 10080 80% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-auth override-lastmod negative-ttl=10080

        # Facebook images
        refresh_pattern -i \.facebook.com.*\.(jpg|png|gif) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern -i \.fbcdn.net.*\.(jpg|gif|png|swf|mp3) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern static\.ak\.fbcdn\.net*\.(jpg|gif|png) 10080 80% 10080 ignore-reload override-expire ignore-no-cache
        refresh_pattern ^http:\/\/profile\.ak\.fbcdn.net*\.(jpg|gif|png) 10080 80% 10080 ignore-reload override-expire ignore-no-cache

        # All files
        refresh_pattern -i \.(3gp|7z|ace|asx|bin|deb|divx|dvr-ms|ram|rpm|exe|inc|cab|qt) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(rar|jar|gz|tgz|bz2|iso|m1v|m2(v|p)|mo(d|v)|arj|lha|lzh|zip|tar) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(jp(e?g|e|2)|gif|pn[pg]|bm?|tiff?|ico|swf|dat|ad|txt|dll) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(avi|ac4|mp(e?g|a|e|1|2|3|4)|mk(a|v)|ms(i|u|p)|og(x|v|a|g)|rm|r(a|p)m|snd|vob) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims
        refresh_pattern -i \.(pp(t?x)|s|t)|pdf|rtf|wax|wm(a|v)|wmx|wpl|cb(r|z|t)|xl(s?x)|do(c?x)|flv|x-flv) 10080 80% 10080 ignore-no-cache override-expire override-lastmod reload-into-ims

        refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
        refresh_pattern ^gopher: 1440 0% 1440
        refresh_pattern ^ftp: 10080 95% 10080 override-lastmod reload-into-ims
        refresh_pattern . 1440 95% 10080 override-lastmod reload-into-ims
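    Two things stand out, offered as educated guesses rather than verified fixes. First, maximum_object_size 500 MB appears well below the cache_dir line; in squid 2.x-based builds such as lusca, cache_dir is commonly reported to take the object-size limit in effect at the point it is parsed, which would leave the directory at the small default and match the roughly 5 MB ceiling described above. Second, the whole cache is only 500 MB, which fills very quickly with video. A minimal reordering sketch:

        # Sketch (assumption: size limits must precede cache_dir to apply to it)
        minimum_object_size 512 bytes
        maximum_object_size 500 MB
        cache_dir aufs /cache1 500 16 256   # consider a much larger size for video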

    Read the article
