Search Results

Search found 7217 results on 289 pages for 'jboss cache'.


  • Send multidimensional array to php with jQuery

    - by robertdd
    Hi, I have a small but tricky problem. After I upload some images I get a list of all the images, and I have jQuery functions to rotate, duplicate, delete, and shuffle them. When I select an image and hit delete, I send a POST to PHP with the alt="" value of the image, identify the picture, and edit it. I want to make a save button: instead of sending a POST every time I rotate an image, wouldn't it be better to send one POST after editing the list of images, with an array that contains all the data? My PHP array after upload looks like this:

        [files] => Array
            (
                [lcxkijgr] => lcxkijgr.jpg
                [xcewxpfv] => xcewxpfv.jpg
                [rtiurwxf] => rtiurwxf.jpg
                [gsbxdsdc] => gsbxdsdc.jpg
            )

    Say I uploaded four pictures: the first I rotate 90 degrees, the second I want to duplicate, the third I rotate 270 degrees, and the fourth I delete. I can do all this with jQuery alone, but on the server the images stay the same, so after a refresh the images are unchanged. This is the list with the images:

        <div class="upimage">
            <ul id="upimagesQueue">
                <li id="upimagesHPVEJM">
                    <a href="javascript:jQuery('#upimagesHPVEJM').showlargeimage('HPVEJM')">
                        <img alt="lcxkijgr" src="uploads/s6id9r9icnp8q9102h8md9kfd7/lcxkijgr.jpg?1272087830477" id="HPVEJM" style="display: block;">
                    </a>
                </li>
                <li id="upimagesSTCSAV">
                    <a href="javascript:jQuery('#upimagesSTCSAV').showlargeimage('STCSAV')">
                        <img alt="xcewxpfv" src="uploads/s6id9r9icnp8q9102h8md9kfd7/xcewxpfv.jpg?1272087831360" id="STCSAV" style="display: block;">
                    </a>
                </li>
                <li id="upimagesBFPUEQ">
                    <a href="javascript:jQuery('#upimagesBFPUEQ').showlargeimage('BFPUEQ')">
                        <img alt="rtiurwxf" src="uploads/s6id9r9icnp8q9102h8md9kfd7/rtiurwxf.jpg?1272087832162" id="BFPUEQ" style="display: block;">
                    </a>
                </li>
                <li id="upimagesRKXNSV">
                    <a href="javascript:jQuery('#upimagesRKXNSV').showlargeimage('RKXNSV')">
                        <img alt="gsbxdsdc" src="uploads/s6id9r9icnp8q9102h8md9kfd7/gsbxdsdc.jpg?1272087832957" id="RKXNSV" style="display: block;">
                    </a>
                </li>
            </ul>
        </div>

    Is it OK if I make one array like this:

        array{
            imgFromLi = array(img1, img2, img3, img4, img5, img6)
            rotate    = array{img1=90, img2=270, img3=90}
            delete    = array{img4, img5, img6}
            duplicate = array{img2, img3}
        }

    How can I make/send/cache this array? Sorry for my very bad English.
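
    A minimal sketch of how the batched save could be sent, assuming a hypothetical save.php endpoint and that the pending edits have been collected client-side (the endpoint and the exact keys are illustrative, not from the original post):

        // Collect one object describing every pending edit, keyed by action.
        var actions = {
            imgFromLi: jQuery('#upimagesQueue img').map(function () {
                return this.alt;                  // current order of the images
            }).get(),
            rotate:    { lcxkijgr: 90, rtiurwxf: 270 },   // alt => degrees
            duplicate: ['xcewxpfv'],
            'delete':  ['gsbxdsdc']
        };

        // One POST when the user hits "save"; jQuery serializes the nested
        // structure so PHP receives it as $_POST['rotate'], $_POST['delete'], etc.
        jQuery.post('save.php', actions, function (response) {
            // the server has applied all edits; refresh the list here if needed
        });

    On the PHP side, the same code that already handles a single rotate/delete POST can then loop over each action array.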

    Read the article

  • No more memory available in Mathematica, Fit the parameters of system of differential equation

    - by user1058051
    I encountered a memory problem in Mathematica when I tried to process my experimental data. It is a system of two differential equations, and I need to find the most suitable parameters. Unfortunately I am not a pro in Mathematica, and the program uses a lot of memory when the parameter epsilon is more than 0.4; when it is less than 0.4, the program works properly. Setting $HistoryLength = 0 and attempts to reduce AccuracyGoal and WorkingPrecision didn't help. I can't use ClearSystemCache[], because there isn't a loop. I'm trying to understand what mistakes I made and how I can limit the memory usage. I have already bought extra RAM (it's now 4 GB), and I have no free memory slots left on the motherboard.

        Remove["Global`*"];
        T = 13200; L = 0.085; e = 0.41; v = 0.000557197; q = 0.1618; C0 = 0.0256; R = 0.00075;

        data = {{L,600,0.141124587},{L,1200,0.254134509},{L,1800,0.342888644},
                {L,2400,0.424476295},{L,3600,0.562844542},{L,4800,0.657111356},
                {L,6000,0.75137817},{L,7200,0.815876516},{L,8430,0.879823594},
                {L,9000,0.900771775},{L,13200,1}};

        model[(De_)?NumberQ, (Kf_)?NumberQ, (Y_)?NumberQ] :=
          model[De, Kf, Y] = yeld /. Last[Last[
            NDSolve[{
              v (Ci^(1,0))[z,t] + (Ci^(0,1))[z,t] ==
                -((3 (1-e) Kf (Ci[z,t]-C0)) / (R e (1-(R Kf (1-R/r[z,t]))/De))),
              (r^(0,1))[z,t] ==
                (R^2 Kf (Ci[z,t]-C0)) / (q r[z,t]^2 (1-(R Kf (1-R/r[z,t]))/De)),
              (yeld^(0,1))[z,t] == Y*(v e Ci[z,t])/(L q (1-e)),
              r[z,0]==R, Ci[z,0]==0, Ci[0,t]==0, yeld[z,0]==0},
              {r[z,t], Ci[z,t], yeld}, {z,0,L}, {t,0,T}]]]

        fit = FindFit[data,
          {model[De, Kf, Y][z, t],
           {Y > 0.97, Y < 1.03, Kf > 10^-6, Kf < 10^-4, De > 10^-13, De < 10^-9}},
          {{De, 7*10^-13}, {Kf, 10^-5}, {Y, 1}}, {z, t}, Method -> NMinimize]

        data = {{600,0.141124587},{1200,0.254134509},{1800,0.342888644},
                {2400,0.424476295},{3600,0.562844542},{4800,0.657111356},
                {6000,0.75137817},{7200,0.815876516},{8430,0.879823594},
                {9000,0.900771775},{13200,1}};

        YYY = model[De /. fit[[1]], Kf /. fit[[2]], Y /. fit[[3]]];

        Show[Plot[Evaluate[YYY[L,t]], {t,0,T}, PlotRange->All],
             ListPlot[data, PlotStyle->Directive[PointSize[Medium], Red]]]

    The link to the .nb file: http://www.4shared.com/folder/249TSjlz/_online.html

    Read the article

  • Is my dns server being attacked? And what should I do about it?

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server, where certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up. At the same time, queries to OpenDNS for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems to my site. In trying to figure this out, I've been looking at my logs to see if there are any errors I should know about. I found thousands of the following messages in my logs, from different IPs, but all requesting similar DNS records:

        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#36141: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#29075: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#47924: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#4727: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#16153: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#40267: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63507: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63721: query (cache) 'burningpianos.org/MX/IN' denied
        May 12 11:43:36 localhost named[26399]: client 82.209.240.241#3537: query (cache) 'burningpianos.com/MX/IN' denied

    I've read of Dan Kaminsky's DNS cache-poisoning vulnerability, and I'm wondering if these log records are an attempt by some evildoer to attack my DNS server. There are thousands of records in my logs, all requesting "burningpianos", some for com and some for org, most looking for an MX record. There are requests from multiple IPs, but each IP will request hundreds of times per day. So this smells to me like an attack. What is the defense against this?
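
    For reference, a minimal named.conf sketch of the usual hardening against being used as a reflector: refuse recursion for outside clients and rate-limit repeated identical answers. The rate-limit block assumes BIND 9.9.4+ with RRL support, and the numbers are illustrative, not taken from the post:

        options {
            recursion no;                  # authoritative-only: don't recurse for anyone
            allow-query { any; };          # still answer for our own zones
            rate-limit {
                responses-per-second 5;    # cap identical responses per client per second
                window 5;
            };
        };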

    Read the article

  • Winetricks fails to find program files directory

    - by EgyptLovesUbuntu
    I installed a fresh copy of Ubuntu 12 desktop, then installed Wine from the Ubuntu Software Center, followed by winetricks from the Ubuntu Software Center. When I type the following command in the terminal:

        sudo winetricks dotnet40

    I get this error message:

        wine cmd.exe /c echo '%ProgramFiles%' returned empty string

    If I try the command without sudo:

        winetricks dotnet40

    the output is as follows:

        Executing w_do_call dotnet40
        Executing load_dotnet40
        ------------------------------------------------------
        dotnet40 does not yet fully work or install on wine. Caveat emptor.
        ------------------------------------------------------
        Executing mkdir -p /home/vectoruser/.cache/winetricks/dotnet40
        mkdir: cannot create directory `/home/vectoruser/.cache/winetricks/dotnet40': Permission denied
        ------------------------------------------------------
        Note: command 'mkdir -p /home/vectoruser/.cache/winetricks/dotnet40' returned status 1. Aborting.
        ------------------------------------------------------

    My current user is vectoruser, which I use to log on to Ubuntu. The output of:

        ls -ld /home/vectoruser /home/vectoruser/.cache /home/vectoruser/.cache/winetricks

    gives:

        drwxr-xr-x 32 vectoruser vectoruser 4096 Aug 2 19:26 /home/vectoruser
        drwx------ 19 vectoruser vectoruser 4096 Aug 2 19:25 /home/vectoruser/.cache
        drwxr-xr-x  2 root       root       4096 Aug 2 18:09 /home/vectoruser/.cache/winetricks
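
    The ls output above shows the likely culprit: /home/vectoruser/.cache/winetricks is owned by root, presumably created by the earlier sudo run, so the unprivileged mkdir fails. Assuming that diagnosis, a sketch of the fix:

        # hand the root-owned cache directory back to the login user,
        # then retry without sudo
        sudo chown -R vectoruser:vectoruser /home/vectoruser/.cache/winetricks
        winetricks dotnet40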

    Read the article

  • How can I properly implement inetcpl.cpl as an external dll?

    - by Kyt
    I have the following two sets of code, both of which produce the same results:

        using System;
        using System.Diagnostics;
        using System.Linq;
        using System.Runtime.InteropServices;
        using System.Text;
        using System.Threading.Tasks;
        using System.Windows.Forms;

        namespace ResetIE
        {
            class Program
            {
                [DllImport("InetCpl.cpl", SetLastError = true, CharSet = CharSet.Unicode, EntryPoint = "ClearMyTracksByProcessW")]
                public static extern long ClearMyTracksByProcess(IntPtr hwnd, IntPtr hinst, ref TargetHistory lpszCmdLine, FormWindowState nCmdShow);

                static void Main(string[] args)
                {
                    TargetHistory th = TargetHistory.CLEAR_TEMPORARY_INTERNET_FILES;
                    ClearMyTracksByProcess(Process.GetCurrentProcess().Handle, Marshal.GetHINSTANCE(typeof(Program).Module), ref th, FormWindowState.Maximized);
                    Console.WriteLine("Done.");
                }
            }
        }

    and ...

        static class NativeMethods
        {
            [DllImport("kernel32.dll")]
            public static extern IntPtr LoadLibrary(string dllToLoad);

            [DllImport("kernel32.dll")]
            public static extern IntPtr GetProcAddress(IntPtr hModule, string procedureName);

            [DllImport("kernel32.dll")]
            public static extern bool FreeLibrary(IntPtr hModule);
        }

        public class CallExternalDLL
        {
            [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
            public delegate long ClearMyTracksByProcessW(IntPtr hwnd, IntPtr hinst, ref TargetHistory lpszCmdLine, FormWindowState nCmdShow);

            public static void Clear_IE_Cache()
            {
                IntPtr pDll = NativeMethods.LoadLibrary(@"C:\Windows\System32\inetcpl.cpl");
                if (pDll == IntPtr.Zero)
                {
                    Console.WriteLine("An Error has Occurred.");
                }
                IntPtr pAddressOfFunctionToCall = NativeMethods.GetProcAddress(pDll, "ClearMyTracksByProcessW");
                if (pAddressOfFunctionToCall == IntPtr.Zero)
                {
                    Console.WriteLine("Function Not Found.");
                }
                ClearMyTracksByProcessW cmtbp = (ClearMyTracksByProcessW)Marshal.GetDelegateForFunctionPointer(pAddressOfFunctionToCall, typeof(ClearMyTracksByProcessW));
                TargetHistory q = TargetHistory.CLEAR_TEMPORARY_INTERNET_FILES;
                long result = cmtbp(Process.GetCurrentProcess().Handle, Marshal.GetHINSTANCE(typeof(ClearMyTracksByProcessW).Module), ref q, FormWindowState.Normal);
            }
        }

    Both use the following enum:

        public enum TargetHistory
        {
            CLEAR_ALL = 0xFF,
            CLEAR_ALL_WITH_ADDONS = 0x10FF,
            CLEAR_HISTORY = 0x1,
            CLEAR_COOKIES = 0x2,
            CLEAR_TEMPORARY_INTERNET_FILES = 0x8,
            CLEAR_FORM_DATA = 0x10,
            CLEAR_PASSWORDS = 0x20
        }

    Both methods compile and run just fine, offering no errors, but both churn endlessly, never returning from their work. The P/Invoke code was ported from the following VB, which was fairly difficult to track down:

        Option Explicit

        Private Enum TargetHistory
            CLEAR_ALL = &HFF&
            CLEAR_ALL_WITH_ADDONS = &H10FF&
            CLEAR_HISTORY = &H1&
            CLEAR_COOKIES = &H2&
            CLEAR_TEMPORARY_INTERNET_FILES = &H8&
            CLEAR_FORM_DATA = &H10&
            CLEAR_PASSWORDS = &H20&
        End Enum

        Private Declare Function ClearMyTracksByProcessW Lib "InetCpl.cpl" _
            (ByVal hwnd As OLE_HANDLE, _
             ByVal hinst As OLE_HANDLE, _
             ByRef lpszCmdLine As Byte, _
             ByVal nCmdShow As VbAppWinStyle) As Long

        Private Sub Command1_Click()
            Dim b() As Byte
            Dim o As OptionButton
            For Each o In Option1
                If o.Value Then
                    b = o.Tag
                    ClearMyTracksByProcessW Me.hwnd, App.hInstance, b(0), vbNormalFocus
                    Exit For
                End If
            Next
        End Sub

        Private Sub Form_Load()
            Command1.Caption = "??"
            Option1(0).Caption = "?????????????"
            Option1(0).Tag = CStr(CLEAR_TEMPORARY_INTERNET_FILES)
            Option1(1).Caption = "Cookie"
            Option1(1).Tag = CStr(CLEAR_COOKIES)
            Option1(2).Caption = "??"
            Option1(2).Tag = CStr(CLEAR_HISTORY)
            Option1(3).Caption = "???? ???"
            Option1(3).Tag = CStr(CLEAR_HISTORY)
            Option1(4).Caption = "?????"
            Option1(4).Tag = CStr(CLEAR_PASSWORDS)
            Option1(5).Caption = "?????"
            Option1(5).Tag = CStr(CLEAR_ALL)
            Option1(2).Value = True
        End Sub

    The question is simply: what am I doing wrong? I need to clear the internet cache, and would prefer to use this method, as I know it does what I want when it works (rundll32 inetcpl.cpl,ClearMyTracksByProcess 8 works fine). I've tried running both as a normal user and as admin, to no avail. This project is written in C# in VS2012 and compiled against .NET 3.5 (it must remain at 3.5 due to client restrictions).
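
    Since the post notes that rundll32 inetcpl.cpl,ClearMyTracksByProcess 8 already does the right thing, one pragmatic sketch (sidestepping the P/Invoke question rather than answering it) is to launch that same entry point as a child process:

        using System.Diagnostics;

        // Launch the entry point the working command line uses;
        // "8" is CLEAR_TEMPORARY_INTERNET_FILES from the enum above.
        var psi = new ProcessStartInfo("rundll32.exe",
            "inetcpl.cpl,ClearMyTracksByProcess 8")
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var p = Process.Start(psi))
        {
            p.WaitForExit();   // block until the cache has been cleared
        }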

    Read the article

  • Android application Database Framework

    - by Marek Sebera
    When creating mobile (especially Android) applications, I keep running into a similar pattern of working with data. Usually I need to fetch some remote data (behind an authorization process) into a local cache, and on each subsequent request:

        1. Check networking.
        2. Check for the presence of the cache file.
        3. Check the version of the cache file (if networking is available).
        4. Get the new version and save it to the cache (if networking is available and the file is not in the cache, or is outdated).

    The data store is no-SQL, JSON document-based (and yes, I know about the Android version of CouchDB, but it doesn't fit my needs yet). The process of authorizing against the data source and the code that checks the version of the local cache are adapted per application, but the other code (handling the network, saving the cache, handling exceptions, ...) is always the same. Is there any data-store helper I can use that provides the functions described above (something like the flow sketched below)?
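
    A minimal plain-Java sketch of the generic part of that flow (the class and hook names are hypothetical; a real Android implementation would also move the network work off the main thread):

        import java.io.IOException;

        /** Generic fetch-through-cache flow; only the two hooks vary per app. */
        abstract class CachedFetcher {
            // App-specific hooks: authorized fetch and version check.
            protected abstract String fetchRemote() throws IOException;
            protected abstract boolean isCacheCurrent();

            private String cached;   // stand-in for the local JSON cache file

            public String get(boolean online) throws IOException {
                if (cached != null && (!online || isCacheCurrent())) {
                    return cached;   // serve the cache when offline or up to date
                }
                if (!online) {
                    throw new IOException("no network and nothing cached");
                }
                cached = fetchRemote();   // refresh and persist the cache
                return cached;
            }
        }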

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming).

    Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather of what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1 ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise.

    So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?" It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application.

    Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on its current state. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry -- a pessimistic approach.

    The optimistic approach relies on what is sometimes called a "fuzzy read": the application assumes that the read should be correct, but it also acknowledges that it might not be. (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective.) If the read is not correct, it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting one is that it will cause the encompassing transaction to roll back if it tries to update that value.

    The pessimistic approach relies on a "coherent read": a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system).

    In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks).

    The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if it is critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
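
    For illustration, a sketch of the optimistic pattern described above in plain Java, using java.util.concurrent's compare-and-set style replace(); Coherence's NamedCache supports the equivalent pattern, but this deliberately sticks to the JDK so the example is self-contained:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class OptimisticUpdate {
            public static void main(String[] args) {
                ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<>();
                cache.put("counter", 41);

                // Optimistic loop: state the assumption (the old value) explicitly;
                // if another writer got in first, replace() fails and we re-read.
                while (true) {
                    Integer old = cache.get("counter");   // possibly-stale read
                    Integer updated = old + 1;            // compute from assumed state
                    if (cache.replace("counter", old, updated)) {
                        break;                            // assumption held
                    }
                    // stale read detected: loop and retry with a fresh value
                }
                System.out.println(cache.get("counter")); // 42
            }
        }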

    Read the article

  • WordPress not resizing images with Nginx + php-fpm and other issues

    - by Julian Fernandes
    Recently I set up an Ubuntu 12.04 VPS (512 MB RAM / 1 GHz CPU) with nginx + php-fpm + Varnish + APC + Percona's MySQL server + CloudFlare Pro for our Ubuntu LoCo team's WordPress blog. The blog gets about 3-4k daily hits and uses about 180 MB of RAM and 8-20% CPU. Everything seems to be working insanely fast; page load is really good and about 16x faster than any of our competitors. But there is one problem: when we upload an image, WordPress doesn't resize it, so all we can do is insert the full image in the post. If the image is, let's say, 30 kB, it resizes fine, but if the image is 100 kB+, it won't. In the nginx error log I see this:

        upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "POST /wp-admin/async-upload.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/wp-admin/media-upload.php?post_id=2668&"

    It seems to be related to the issue, but I don't know. Once that timeout happens, I start getting it when trying to view a post too:

        upstream timed out (110: Connection timed out) while reading response header from upstream, client: 150.162.216.64, server: www.ubuntubrsc.com, request: "GET /tutoriais-gimp-6-adicionando-aplicando-novos-pinceis.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "www.ubuntubrsc.com", referrer: "http://www.ubuntubrsc.com/"

    Only a restart of php5-fpm fixes it. I tried increasing some timeouts and such, but it didn't work, so I guess it's some kind of limit I haven't figured out yet. Could someone help me with it, please?

    /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 1;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            use epoll;
            multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 15;
            keepalive_requests 2000;
            types_hash_max_size 2048;
            server_tokens off;
            server_name_in_redirect off;
            open_file_cache max=1000 inactive=300s;
            open_file_cache_valid 360s;
            open_file_cache_min_uses 2;
            open_file_cache_errors off;
            server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            client_body_buffer_size 128K;
            client_header_buffer_size 1k;
            client_max_body_size 2m;
            large_client_header_buffers 4 8k;
            client_body_timeout 10m;
            client_header_timeout 10m;
            send_timeout 10m;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            error_log /var/log/nginx/error.log;
            access_log off;

            ##
            # CloudFlare's IPs (uncomment when site goes live)
            ##
            set_real_ip_from 204.93.240.0/24;
            set_real_ip_from 204.93.177.0/24;
            set_real_ip_from 199.27.128.0/21;
            set_real_ip_from 173.245.48.0/20;
            set_real_ip_from 103.22.200.0/22;
            set_real_ip_from 141.101.64.0/18;
            set_real_ip_from 108.162.192.0/18;
            set_real_ip_from 190.93.240.0/20;
            real_ip_header CF-Connecting-IP;
            set_real_ip_from 127.0.0.1/32;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            gzip_vary on;
            gzip_proxied any;
            gzip_comp_level 9;
            gzip_min_length 1000;
            gzip_proxied expired no-cache no-store private auth;
            gzip_buffers 32 8k;
            # gzip_http_version 1.1;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    /etc/nginx/fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param HTTPS $https;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 256 4k;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    /etc/nginx/sites-available/default:

        ##
        # DEFAULT HANDLER
        # ubuntubrsc.com
        ##
        server {
            listen 8080;
            # Make site available from main domain
            server_name www.ubuntubrsc.com;
            # Root directory
            root /var/www;
            index index.php index.html index.htm;
            include /var/www/nginx.conf;
            access_log off;
            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }
            location = /favicon.ico { log_not_found off; access_log off; }
            location = /robots.txt { allow all; log_not_found off; access_log off; }
            location ~ /\. { deny all; access_log off; log_not_found off; }
            location ~* ^/wp-content/uploads/.*.php$ { deny all; access_log off; log_not_found off; }
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;
            error_page 404 = @wordpress;
            log_not_found off;
            location @wordpress {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_NAME /index.php;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            }
            location ~ \.php$ {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                if (-f $request_filename) {
                    fastcgi_pass unix:/var/run/php5-fpm.sock;
                }
            }
        }
        server {
            listen 8080;
            server_name ubuntubrsc.* www.ubuntubrsc.net www.ubuntubrsc.org www.ubuntubrsc.com.br www.ubuntubrsc.info www.ubuntubrsc.in;
            return 301 $scheme://www.ubuntubrsc.com$request_uri;
        }

    /var/www/nginx.conf:

        # BEGIN W3TC Minify cache
        location ~ /wp-content/w3tc/min.*\.js$ {
            types {}
            default_type application/x-javascript;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }
        location ~ /wp-content/w3tc/min.*\.css$ {
            types {}
            default_type text/css;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
        }
        location ~ /wp-content/w3tc/min.*js\.gzip$ {
            gzip off;
            types {}
            default_type application/x-javascript;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header Content-Encoding gzip;
        }
        location ~ /wp-content/w3tc/min.*css\.gzip$ {
            gzip off;
            types {}
            default_type text/css;
            expires modified 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            add_header Vary "Accept-Encoding";
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header Content-Encoding gzip;
        }
        # END W3TC Minify cache

        # BEGIN W3TC Browser Cache
        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
        }
        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=3600, public, must-revalidate, proxy-revalidate";
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
            try_files $uri $uri/ $uri.html /index.php?$args;
        }
        location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ {
            expires 31536000s;
            add_header Pragma "public";
            add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
            add_header X-Powered-By "W3 Total Cache/0.9.2.5b";
        }
        # END W3TC Browser Cache

        # BEGIN W3TC Minify core
        rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last;
        set $w3tc_enc "";
        if ($http_accept_encoding ~ gzip) { set $w3tc_enc .gzip; }
        if (-f $request_filename$w3tc_enc) { rewrite (.*) $1$w3tc_enc break; }
        rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
        # END W3TC Minify core

        # BEGIN W3TC Skip 404 error handling by WordPress for static files
        if (-f $request_filename) { break; }
        if (-d $request_filename) { break; }
        if ($request_uri ~ "(robots\.txt|sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { break; }
        if ($request_uri ~* \.(css|js|htc|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml|asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$) { return 404; }
        # END W3TC Skip 404 error handling by WordPress for static files

        # BEGIN Better WP Security
        location ~ /\.ht { deny all; }
        location ~ wp-config.php { deny all; }
        location ~ readme.html { deny all; }
        location ~ readme.txt { deny all; }
        location ~ /install.php { deny all; }
        set $susquery 0;
        set $rule_2 0;
        set $rule_3 0;
        rewrite ^wp-includes/(.*).php /not_found last;
        rewrite ^/wp-admin/includes(.*)$ /not_found last;
        if ($request_method ~* "^(TRACE|DELETE|TRACK)"){ return 403; }
        set $rule_0 0;
        if ($request_method ~ "POST"){ set $rule_0 1; }
        if ($uri ~ "^(.*)wp-comments-post.php*"){ set $rule_0 2$rule_0; }
        if ($http_user_agent ~ "^$"){ set $rule_0 4$rule_0; }
        if ($rule_0 = "421"){ return 403; }
        if ($args ~* "\.\./") { set $susquery 1; }
        if ($args ~* "boot.ini") { set $susquery 1; }
        if ($args ~* "tag=") { set $susquery 1; }
        if ($args ~* "ftp:") { set $susquery 1; }
        if ($args ~* "http:") { set $susquery 1; }
        if ($args ~* "https:") { set $susquery 1; }
        if ($args ~* "(<|%3C).*script.*(>|%3E)") { set $susquery 1; }
        if ($args ~* "mosConfig_[a-zA-Z_]{1,21}(=|%3D)") { set $susquery 1; }
        if ($args ~* "base64_encode") { set $susquery 1; }
        if ($args ~* "(%24&x)") { set $susquery 1; }
        if ($args ~* "(\[|\]|\(|\)|<|>|ê|\"|;|\?|\*|=$)"){ set $susquery 1; }
        if ($args ~* "(&#x22;|&#x27;|&#x3C;|&#x3E;|&#x5C;|&#x7B;|&#x7C;|%24&x)"){ set $susquery 1; }
        if ($args ~* "(%0|%A|%B|%C|%D|%E|%F|127.0)") { set $susquery 1; }
        if ($args ~* "(globals|encode|localhost|loopback)") { set $susquery 1; }
        if ($args ~* "(request|select|insert|concat|union|declare)") { set $susquery 1; }
        if ($http_cookie !~* "wordpress_logged_in_" ) { set $susquery "${susquery}2"; set $rule_2 1; set $rule_3 1; }
        if ($susquery = 12) { return 403; }
        # END Better WP Security

    /etc/php5/fpm/php-fpm.conf:

        pid = /var/run/php5-fpm.pid
        error_log = /var/log/php5-fpm.log
        emergency_restart_threshold = 3
        emergency_restart_interval = 1m
        process_control_timeout = 10s
        events.mechanism = epoll

    /etc/php5/fpm/php.ini (only the options I changed):

        open_basedir = "/var/www/"
        disable_functions = pcntl_alarm,pcntl_fork,pcntl_waitpid,pcntl_wait,pcntl_wifexited,pcntl_wifstopped,pcntl_wifsignaled,pcntl_wexitstatus,pcntl_wtermsig,pcntl_wstopsig,pcntl_signal,pcntl_signal_dispatch,pcntl_get_last_error,pcntl_strerror,pcntl_sigprocmask,pcntl_sigwaitinfo,pcntl_sigtimedwait,pcntl_exec,pcntl_getpriority,pcntl_setpriority,dl,system,shell_exec,fsockopen,parse_ini_file,passthru,popen,proc_open,proc_close,shell_exec,show_source,symlink,proc_close,proc_get_status,proc_nice,proc_open,proc_terminate,shell_exec,highlight_file,escapeshellcmd,define_syslog_variables,posix_uname,posix_getpwuid,apache_child_terminate,posix_kill,posix_mkfifo,posix_setpgid,posix_setsid,posix_setuid,escapeshellarg,posix_uname,ftp_exec,ftp_connect,ftp_login,ftp_get,ftp_put,ftp_nb_fput,ftp_raw,ftp_rawlist,ini_alter,ini_restore,inject_code,syslog,openlog,define_syslog_variables,apache_setenv,mysql_pconnect,eval,phpAds_XmlRpc,phpAds_remoteInfo,phpAds_xmlrpcEncode,phpAds_xmlrpcDecode,xmlrpc_entity_decode,fp,fput,virtual,show_source,pclose,readfile,wget
        expose_php = off
        max_execution_time = 30
        max_input_time = 60
        memory_limit = 128M
        display_errors = Off
        post_max_size = 2M
        allow_url_fopen = off
        default_socket_timeout = 60

    APC settings:

        [APC]
        apc.enabled = 1
        apc.shm_segments = 1
        apc.shm_size = 64M
        apc.optimization = 0
        apc.num_files_hint = 4096
        apc.ttl = 60
        apc.user_ttl = 7200
        apc.gc_ttl = 0
        apc.cache_by_default = 1
        apc.filters = ""
        apc.mmap_file_mask = "/tmp/apc.XXXXXX"
        apc.slam_defense = 0
        apc.file_update_protection = 2
        apc.enable_cli = 0
        apc.max_file_size = 10M
        apc.stat = 1
        apc.write_lock = 1
        apc.report_autofilter = 0
        apc.include_once_override = 0
        apc.localcache = 0
        apc.localcache.size = 512
        apc.coredump_unmap = 0
        apc.stat_ctime = 0

    /etc/php5/fpm/pool.d/www.conf:

        user = www-data
        group = www-data
        listen = /var/run/php5-fpm.sock
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0666
        pm = ondemand
        pm.max_children = 5
        pm.process_idle_timeout = 3s;
        pm.max_requests = 50

    I also started to get 404 errors on the front page when using W3 Total Cache's Page Cache (Disk Enhanced). It worked fine until some days ago and then, out of nowhere, it started happening. Tonight I will disable my mobile plugin and activate only W3 Total Cache to see if they conflict. And to finish all this off, I have been getting this error:

        PHP Warning: apc_store(): Unable to allocate memory for pool. in /var/www/wp-content/plugins/w3-total-cache/lib/W3/Cache/Apc.php on line 41

    I already modified my APC settings, but no success. So... could anyone help me with those issues, please? Oh, and if it helps, I installed PHP like this:

        sudo apt-get install php5-fpm php5-suhosin php-apc php5-gd php5-imagick php5-curl

    and nginx from the official PPA. Sorry for my bad English and thanks for your time, people! (:

    Read the article

  • Once an HTML document has a manifest (cache.manifest), how can you remove it?

    - by Michael F
    It seems that once you have a manifest entry, a la:

        <html manifest="cache.manifest">

    then that page (the master entry in the cache) will always be cached (at least by Safari) until the user does something to remove the cache, even if you later remove the manifest attribute from the html tag and update the manifest (by changing something within it), forcing the master entry to be reloaded along with everything else. In other words, if you have:

        index.html (with manifest defined)
        file1.js (referenced in manifest)
        file2.js (referenced in manifest)
        cache.manifest (lists the two js files)

    removing the manifest entry from index.html and modifying the manifest (so it gets expired by the browser and all content reloaded) will not stop this page from behaving as if it's still fully cached. If you view source on index.html you won't see the manifest listed anymore, but the browser will still request only the cache.manifest file, and unless that file's content is changed, no other changes to any files will be shown to the user. It seems like a pretty glaring bug, and it's present on iOS as well as Mac versions of Safari. Has anyone found a way of resetting the page and getting rid of the cache without requiring user intervention?

    Read the article

  • Question about how AppFabric's cache feature can be used.

    - by Kevin Buchan
    I apologize for asking a question that I should be able to answer from the documentation, but I have read and read and searched and cannot answer it, which leads me to believe that I have a fundamentally flawed understanding of what AppFabric's caching capabilities are intended for.

    I work for a geographically dispersed company. We have a particular application that was originally written as a client/server application. It's so massive and business-critical that we want to baby-step converting it to a better-architected solution. One of the ideas we had was to convert the app to read its data using WCF calls to a co-located web server that would cache communication with the database in the United States. The nature of the application is such that everyone tends to view the same 2000 records or so, with only occasional updates, and those updates are made by a limited set of users.

    I was hoping that AppFabric's cache mechanism would allow me to set up one global cache, so that when a user in Asia, for example, requested data that was not in the cache or was stale, the web server would read from the database in the USA, provide the data to the user, then update the cache, which would propagate that data to the other web servers so that they would know not to go back to the database themselves. Can AppFabric work this way, or should I just have the servers retrieve their own data from the database?
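
    A sketch of the read-through flow described above, written against the AppFabric client API (the cache name, the Record type and the DbFetch helper are placeholders; whether Put propagates between geographically separated servers the way the post hopes is exactly the open question):

        using Microsoft.ApplicationServer.Caching;

        public class RecordService
        {
            private static readonly DataCacheFactory Factory = new DataCacheFactory();
            private readonly DataCache cache = Factory.GetCache("default");

            public Record GetRecord(string key)
            {
                // Serve from the shared cache when possible...
                var cached = (Record)cache.Get(key);
                if (cached != null) return cached;

                // ...otherwise fall back to the US database and repopulate.
                var fresh = DbFetch(key);    // hypothetical database/WCF call
                cache.Put(key, fresh);
                return fresh;
            }

            private Record DbFetch(string key) { return new Record(); }
        }

        public class Record { }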

    Read the article

  • eAccelerator settings for PHP/Centos/Apache

    - by bobbyh
    I have eAccelerator installed on a server running WordPress with PHP/Apache on CentOS. I am occasionally getting persistent "white pages", which presumably are PHP fatal errors (although these errors don't appear in my error_log). These "white pages" are sprinkled here and there throughout the site. They persist until I go to my eAccelerator control.php page and clear/clean/purge my caches, which suggests to me that I've configured eAccelerator improperly. Here are my current /etc/php.ini settings (explanations quoted from http://eaccelerator.net/wiki/Settings):

        memory_limit = 128M
        eaccelerator.shm_size = "64"          ; "the amount of shared memory eAccelerator should allocate to cache PHP scripts"
        eaccelerator.shm_max = "0"            ; "the maximum size a user can put in shared memory with functions like eaccelerator_put ... The default value is '0' which disables the limit"
        eaccelerator.shm_ttl = "0"            ; "When eAccelerator doesn't have enough free shared memory to cache a new script it will remove all scripts from shared memory cache that haven't been accessed in at least shm_ttl seconds. By default this value is set to '0' which means that eAccelerator won't try to remove any old scripts from shared memory."
        eaccelerator.shm_prune_period = "0"   ; "When eAccelerator doesn't have enough free shared memory to cache a script it tries to remove old scripts if the previous try was made more then 'shm_prune_period' seconds ago. Default value is '0' which means that eAccelerator won't try to remove any old script from shared memory."
        eaccelerator.keys = "shm_only"        ; "These settings control the places eAccelerator may cache user content. ... 'shm_only' cache[s] data in shared memory"

    On my phpinfo page it says:

        memory_limit    128M
        Version         0.9.5.3
        Caching Enabled true

    On my eAccelerator control.php page it says:

        64 MB of total RAM available
        Memory usage 77.70% (49.73MB / 64.00MB)

    Of that, 27.6 MB is used by cached scripts in the PHP opcode cache (I added up the file sizes myself) and 22.1 MB is used by the cache keys, which are populated by the WordPress object cache.

    My questions are:

        1. Is it true that there is only 36.4 MB of room in the eAccelerator cache for total "cache keys" (64 MB of total RAM minus whatever is taken by cached scripts, which is 27.6 MB at the moment)?
        2. What happens if my app tries to write more than 22.1 MB of cache keys to the eAccelerator memory cache? Does this cause eAccelerator to go crazy, like I've seen?
        3. If I change eaccelerator.shm_max to be equal to (say) 32 MB, would that avoid this problem? Do I also need to change shm_ttl and shm_prune_period to make eAccelerator respect the MB limit set by shm_max?

    Thanks! :-)

    Read the article

  • View a PDF with quick web view through an Apache proxy

    - by Musa
    I have a site (IIS) that is accessed via a proxy in Apache (on an IBM i). This site serves PDFs that have quick web view, and if I access a PDF directly from the IIS server it starts to display immediately, but if I go through the proxy I have to wait until the entire PDF downloads before I can view it. In the Apache config file I use:

        ProxyPass /path/ http://xxx.xxx.xxx.xxx/
        <LocationMatch "/path/">
            Header set Cache-Control "no-cache"
        </LocationMatch>

    I tried adding SetEnv proxy-sendcl to the LocationMatch directive; this had no effect. The PDFs that view quickly make a lot of partial requests. This is the initial request and response:

        GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1
        Host: xxx.xxx.xxx.xxx
        Proxy-Connection: keep-alive
        Cache-Control: no-cache
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cookie: chocolatechip

        HTTP/1.1 200 OK
        Via: 1.1 xxxxxxxx
        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Content-Length: 15330238
        Date: Mon, 25 Aug 2014 12:48:31 GMT
        Content-Type: application/pdf
        ETag: "b6262940bbecf1:0"
        Server: Microsoft-IIS/7.5
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET

    This is a partial request and response:

        GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1
        Host: xxx.xxx.xxx.xxx
        Proxy-Connection: keep-alive
        Cache-Control: no-cache
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept: */*
        Referer: http://xxx.xxx.xxx.xxx/xxxx.PDF
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cookie: chocolatechip
        Range: bytes=0-32767

        HTTP/1.1 206 Partial Content
        Via: 1.1 xxxxxxxx
        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Content-Length: 32768
        Date: Mon, 25 Aug 2014 12:48:31 GMT
        Content-Range: bytes 0-32767/15330238
        Content-Type: application/pdf
        ETag: "b6262940bbecf1:0"
        Server: Microsoft-IIS/7.5
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET

    These are the headers I get if I go through the proxy:

        GET /path/xxx.PDF HTTP/1.1
        Host: domain:xxxx
        Connection: keep-alive
        Cache-Control: no-cache
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8

        HTTP/1.1 200 OK
        Date: Mon, 25 Aug 2014 13:28:42 GMT
        Server: Microsoft-IIS/7.5
        Content-Type: application/pdf
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        ETag: "b6262940bbecf1:0"-gzip
        X-Powered-By: ASP.NET
        Cache-Control: no-cache
        Expires: Thu, 24 Aug 2017 13:28:42 GMT
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Keep-Alive: timeout=300, max=100
        Connection: Keep-Alive
        Transfer-Encoding: chunked

    I'm guessing it's because the proxy uses Transfer-Encoding: chunked, but I'm not sure and wasn't able to turn it off to check. Browser: Chrome 36.0.1985.143 m, using the native PDF viewer. Any help getting PDF quick web view working through the proxy would be appreciated.
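
    One way to test the chunking guess is to exempt PDFs from compression at the proxy, since a gzipped response loses its Content-Length (forcing chunked transfer) and hides the byte-range behavior that quick web view depends on. A sketch, assuming mod_deflate and mod_setenvif are what is compressing here:

        # Don't compress PDFs at the proxy: byte-range requests need
        # Content-Length/Accept-Ranges, which gzip + chunked encoding defeats.
        SetEnvIfNoCase Request_URI "\.pdf$" no-gzip dont-vary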

    Read the article

  • Hibernate - EhCache - Which region to cache associations/sets/collections?

    - by lifeisnotfair
    Hi all, I am a newcomer to Hibernate. It would be great if someone could comment on the following query I have. Say I have a parent class, and each parent has multiple children, so the mapping file of the parent class would be something like this:

    parent.hbm.xml:

        <hibernate-mapping>
            <class name="org.demo.parent" table="parent" lazy="true">
                <cache usage="read-write" region="org.demo.parent"/>
                <id name="id" column="id" type="integer" length="10">
                    <generator class="native"/>
                </id>
                <property name="name" column="name" type="string" length="50"/>
                <set name="children" lazy="true">
                    <cache usage="read-write" region="org.demo.parent.children"/>
                    <key column="parent_id"/>
                    <one-to-many class="org.demo.children"/>
                </set>
            </class>
        </hibernate-mapping>

    children.hbm.xml:

        <hibernate-mapping>
            <class name="org.demo.children" table="children" lazy="true">
                <cache usage="read-write" region="org.demo.children"/>
                <id name="id" column="id" type="integer" length="10">
                    <generator class="native"/>
                </id>
                <property name="name" column="name" type="string" length="50"/>
                <many-to-one name="parent_id" column="parent_id" type="integer" length="10" not-null="true"/>
            </class>
        </hibernate-mapping>

    So for the set children, should we specify the region org.demo.parent.children, where the association would be cached, or should we use the cache region org.demo.children, where the children themselves get cached? I am using EhCache as the second-level cache provider. I tried to search for the answer to this question but couldn't find anything in this direction. It makes more sense to me to use org.demo.children, but I don't know in which scenarios one should use a separate cache region for associations/sets/collections, as in the above case. Kindly provide your inputs, and let me know if I am not clear in my question. Thanks all.
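
    For reference, a hedged ehcache.xml sketch defining both regions named above (sizes and TTLs are illustrative). In Hibernate the collection region stores only the lists of child identifiers per parent, while the entity region stores the children themselves, which is why the two regions coexist rather than compete:

        <ehcache>
            <!-- entity region: the child rows themselves -->
            <cache name="org.demo.children"
                   maxElementsInMemory="10000" eternal="false"
                   timeToLiveSeconds="3600" overflowToDisk="false"/>

            <!-- collection region: per-parent lists of child ids only -->
            <cache name="org.demo.parent.children"
                   maxElementsInMemory="1000" eternal="false"
                   timeToLiveSeconds="3600" overflowToDisk="false"/>
        </ehcache>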

    Read the article

  • ORDER BY job failed in the Pig script while running EmbeddedPig using Java

    - by C.c. Huang
    I have the following Pig script, which works perfectly from the grunt shell (it stores the results to HDFS without any issues); however, the last job (ORDER BY) fails if I run the same script using Java EmbeddedPig. If I replace the ORDER BY job with something else, such as GROUP or FOREACH GENERATE, the whole script succeeds under Java EmbeddedPig. So I think it's the ORDER BY that causes the issue. Does anyone have any experience with this? Any help would be appreciated!

    The Pig script:

        REGISTER pig-udf-0.0.1-SNAPSHOT.jar;
        user_similarity = LOAD '/tmp/sample-sim-score-results-31/part-r-00000' USING PigStorage('\t') AS (user_id: chararray, sim_user_id: chararray, basic_sim_score: float, alt_sim_score: float);
        simplified_user_similarity = FOREACH user_similarity GENERATE $0 AS user_id, $1 AS sim_user_id, $2 AS sim_score;
        grouped_user_similarity = GROUP simplified_user_similarity BY user_id;
        ordered_user_similarity = FOREACH grouped_user_similarity {
            sorted = ORDER simplified_user_similarity BY sim_score DESC;
            top = LIMIT sorted 10;
            GENERATE group, top;
        };
        top_influencers = FOREACH ordered_user_similarity GENERATE com.aol.grapevine.similarity.pig.udf.AssignPointsToTopInfluencer($1, 10);
        all_influence_scores = FOREACH top_influencers GENERATE FLATTEN($0);
        grouped_influence_scores = GROUP all_influence_scores BY bag_of_topSimUserTuples::user_id;
        influence_scores = FOREACH grouped_influence_scores GENERATE group AS user_id, SUM(all_influence_scores.bag_of_topSimUserTuples::points) AS influence_score;
        ordered_influence_scores = ORDER influence_scores BY influence_score DESC;
        STORE ordered_influence_scores INTO '/tmp/cc-test-results-1' USING PigStorage();

    The error log from Pig:

        12/04/05 10:00:56 INFO pigstats.ScriptState: Pig script settings are added to the job
        12/04/05 10:00:56 INFO mapReduceLayer.JobControlCompiler: mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
        12/04/05 10:00:58 INFO mapReduceLayer.JobControlCompiler: Setting up single store job
        12/04/05 10:00:58 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
        12/04/05 10:00:58 INFO mapReduceLayer.MapReduceLauncher: 1 map-reduce job(s) waiting for submission.
        12/04/05 10:00:58 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
        12/04/05 10:00:58 INFO input.FileInputFormat: Total input paths to process : 1
        12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths to process : 1
        12/04/05 10:00:58 INFO util.MapRedUtil: Total input paths (combined) to process : 1
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating tmp-1546565755 in /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134-work-6955502337234509704 with rwxr-xr-x
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Cached hdfs://localhost/tmp/temp1725960134/tmp-1546565755#pigsample_854728855_1333645258470 as /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:58 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir.
        12/04/05 10:00:58 INFO mapred.TaskRunner: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755 <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/pigsample_854728855_1333645258470
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.jar.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.jar.crc
        12/04/05 10:00:58 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.split.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.split.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.splitmetainfo.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.splitmetainfo.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/.job.xml.crc <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/.job.xml.crc
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.jar <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.jar
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.split <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.split
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.splitmetainfo <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.splitmetainfo
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Creating symlink: /var/lib/hadoop-0.20/cache/cchuang/mapred/staging/cchuang402164468/.staging/job_local_0004/job.xml <- /var/lib/hadoop-0.20/cache/cchuang/mapred/local/localRunner/job.xml
        12/04/05 10:00:59 INFO mapred.Task: Using ResourceCalculatorPlugin : null
        12/04/05 10:00:59 INFO mapred.MapTask: io.sort.mb = 100
        12/04/05 10:00:59 INFO mapred.MapTask: data buffer = 79691776/99614720
        12/04/05 10:00:59 INFO mapred.MapTask: record buffer = 262144/327680
        12/04/05 10:00:59 WARN mapred.LocalJobRunner: job_local_0004
        java.lang.RuntimeException: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:139)
            at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
            at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
            at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:560)
            at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:639)
            at org.apache.hadoop.mapred.MapTask.run(MapTask.java:323)
            at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:210)
        Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/Users/cchuang/workspace/grapevine-rec/pigsample_854728855_1333645258470
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:231)
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigFileInputFormat.listStatus(PigFileInputFormat.java:37)
            at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:248)
            at org.apache.pig.impl.io.ReadToEndLoader.init(ReadToEndLoader.java:153)
            at org.apache.pig.impl.io.ReadToEndLoader.<init>(ReadToEndLoader.java:115)
            at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.partitioners.WeightedRangePartitioner.setConf(WeightedRangePartitioner.java:112)
            ... 6 more
        12/04/05 10:00:59 INFO filecache.TrackerDistributedCacheManager: Deleted path /var/lib/hadoop-0.20/cache/cchuang/mapred/local/archive/4334795313006396107_361978491_57907159/localhost/tmp/temp1725960134/tmp-1546565755
        12/04/05 10:00:59 INFO mapReduceLayer.MapReduceLauncher: HadoopJobId: job_local_0004
        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: job job_local_0004 has failed! Stop running all dependent jobs
        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: 100% complete
        12/04/05 10:01:04 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed!
        12/04/05 10:01:04 INFO pigstats.PigStats: Script Statistics:

        HadoopVersion  PigVersion    UserId  StartedAt            FinishedAt           Features
        0.20.2-cdh3u3  0.8.1-cdh3u3  cchuang 2012-04-05 10:00:34  2012-04-05 10:01:04  GROUP_BY,ORDER_BY

        Some jobs have failed! Stop running all dependent jobs

        Job Stats (time in seconds):
        JobId           Maps  Reduces  MaxMapTime  MinMapTIme  AvgMapTime  MaxReduceTime  MinReduceTime  AvgReduceTime  Alias  Feature  Outputs
        job_local_0001  0     0        0           0           0           0              0              0              all_influence_scores,grouped_user_similarity,simplified_user_similarity,user_similarity  GROUP_BY
        job_local_0002  0     0        0           0           0           0              0              0              grouped_influence_scores,influence_scores  GROUP_BY,COMBINER
        job_local_0003  0     0        0           0           0           0              0              0              ordered_influence_scores  SAMPLER

        Failed Jobs:
        JobId           Alias                     Feature   Message              Outputs
        job_local_0004  ordered_influence_scores  ORDER_BY  Message: Job failed! Error - NA  /tmp/cc-test-results-1,

        Input(s):
        Successfully read 0 records from: "/tmp/sample-sim-score-results-31/part-r-00000"

        Output(s):
        Failed to produce result in "/tmp/cc-test-results-1"

        Counters:
        Total records written : 0
        Total bytes written : 0
        Spillable Memory Manager spill count : 0
        Total bags proactively spilled: 0
        Total records proactively spilled: 0

        Job DAG:
        job_local_0001 -> job_local_0002,
        job_local_0002 -> job_local_0003,
        job_local_0003 -> job_local_0004,
        job_local_0004

        12/04/05 10:01:04 INFO mapReduceLayer.MapReduceLauncher: Some jobs have failed! Stop running all dependent jobs
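
    For comparison, this is roughly the minimal embedded runner (the script filename is assumed). Note that the log shows the ORDER BY sampler job running under LocalJobRunner and looking for the pigsample_* file on the local filesystem (file:/Users/...), which suggests the embedded run is not picking up the same cluster configuration that grunt uses; putting the Hadoop conf directory on the runner's classpath is worth checking before blaming ORDER BY itself:

        import java.io.IOException;

        import org.apache.pig.ExecType;
        import org.apache.pig.PigServer;

        public class RunSimilarityScript {
            public static void main(String[] args) throws IOException {
                // With the Hadoop conf dir on the classpath, MAPREDUCE mode submits
                // to the real cluster instead of falling back to LocalJobRunner.
                PigServer pig = new PigServer(ExecType.MAPREDUCE);
                pig.registerScript("similarity.pig");  // executes the script above (name assumed)
                pig.shutdown();
            }
        }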

    Read the article

  • JBoss 5.0 datasource configuration in an EAR file. How can I run Oracle 10g and 11g on the same server?

    - by Joe
    Currently my setup is: in my EAR, META-INF/jboss-app.xml:

        <jboss-app>
            <module>
                <service>datasource-ds.xml</service>
            </module>
        </jboss-app>

    and datasource-ds.xml:

        <datasources>
            <local-tx-datasource>
                <jndi-name>jdbc/mydeployment</jndi-name>
                <connection-url>jdbc:oracle:thin:@eir:myport:mydbname</connection-url>
                <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
                <user-name>myuser</user-name>
                <password>mypassword</password>
                <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.OracleExceptionSorter</exception-sorter-class-name>
                <metadata>
                    <type-mapping>Oracle9i</type-mapping>
                </metadata>
            </local-tx-datasource>
        </datasources>

    This works when ojdbc5.jar is in my servername/lib. How can I configure my Oracle driver information in my .ear file so that I can have two different EAR deployments, one using Oracle 10g and one using Oracle 11g?

    Read the article

  • Meet the New Windows Azure

    - by ScottGu
    Today we are releasing a major set of improvements to Windows Azure.  Below is a short-summary of just a few of them: New Admin Portal and Command Line Tools Today’s release comes with a new Windows Azure portal that will enable you to manage all features and services offered on Windows Azure in a seamless, integrated way.  It is very fast and fluid, supports filtering and sorting (making it much easier to use for large deployments), works on all browsers, and offers a lot of great new features – including built-in VM, Web site, Storage, and Cloud Service monitoring support. The new portal is built on top of a REST-based management API within Windows Azure – and everything you can do through the portal can also be programmed directly against this Web API. We are also today releasing command-line tools (which like the portal call the REST Management APIs) to make it even easier to script and automate your administration tasks.  We are offering both a Powershell (for Windows) and Bash (for Mac and Linux) set of tools to download.  Like our SDKs, the code for these tools is hosted on GitHub under an Apache 2 license. Virtual Machines Windows Azure now supports the ability to deploy and run durable VMs in the cloud.  You can easily create these VMs using a new Image Gallery built-into the new Windows Azure Portal, or alternatively upload and run your own custom-built VHD images. Virtual Machines are durable (meaning anything you install within them persists across reboots) and you can use any OS with them.  Our built-in image gallery includes both Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions).  Once you create a VM instance you can easily Terminal Server or SSH into it in order to configure and customize the VM however you want (and optionally capture your own image snapshot of it to use when creating new VM instances).  This provides you with the flexibility to run pretty much any workload within Windows Azure.   The new Windows Azure Portal provides a rich set of management features for Virtual Machines – including the ability to monitor and track resource utilization within them.  Our new Virtual Machine support also enables the ability to easily attach multiple data-disks to VMs (which you can then mount and format as drives).  You can optionally enable geo-replication support on these – which will cause Windows Azure to continuously replicate your storage to a secondary data-center at least 400 miles away from your primary data-center as a backup. We use the same VHD format that is supported with Windows virtualization today (and which we’ve released as an open spec), which enables you to easily migrate existing workloads you might already have virtualized into Windows Azure.  We also make it easy to download VHDs from Windows Azure, which also provides the flexibility to easily migrate cloud-based VM workloads to an on-premise environment.  All you need to do is download the VHD file and boot it up locally, no import/export steps required. Web Sites Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web-sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale up as your traffic grows.  
You can create a new web site in Azure and have it ready to deploy to in under 10 seconds: The new Windows Azure Portal provides built-in administration support for Web sites – including the ability to monitor and track resource utilization in real-time: You can deploy to web-sites in seconds using FTP, Git, TFS and Web Deploy.  We are also releasing tooling updates today for both Visual Studio and Web Matrix that enable developers to seamlessly deploy ASP.NET applications to this new offering.  The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of web site deployment – as well as the ability to incrementally update database schema with a later deployment. You can integrate web application publishing with source control by selecting the “Set up TFS publishing” or “Set up Git publishing” links on a web-site’s dashboard: Doing so will enable integration with our new TFS online service (which enables a full TFS workflow – including elastic build and testing support), or create a Git repository that you can reference as a remote and push deployments to.  Once you push a deployment using TFS or Git, the deployments tab will keep track of the deployments you make, and enable you to select an older (or newer) deployment and quickly redeploy your site to that snapshot of the code.  This provides a very powerful DevOps workflow experience.  Windows Azure now allows you to deploy up to 10 web-sites into a free, shared/multi-tenant hosting environment (where a site you deploy will be one of multiple sites running on a shared set of server resources).  This provides an easy way to get started on projects at no cost. You can then optionally upgrade your sites to run in a “reserved mode” that isolates them so that you are the only customer within a virtual machine: And you can elastically scale the amount of resources your sites use – allowing you to increase your reserved instance capacity as your traffic scales: Windows Azure automatically handles load balancing traffic across VM instances, and you get the same, super fast, deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis – which allows you to scale up and down your resources to match only what you need. Cloud Services and Distributed Caching Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scale to extremely large deployments.  Previously we referred to this capability as “hosted services” – with this week’s release we are now referring to this capability as “cloud services”.  We are also enabling a bunch of new features with them. Distributed Cache One of the really cool new features being enabled with cloud services is a new distributed cache capability that enables you to use and set up a low-latency, in-memory distributed cache within your applications.  This cache is isolated for use just by your applications, and does not have any throttling limits. This cache can dynamically grow and shrink elastically (without you having to redeploy your app or make code changes), and supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more).  
In addition to supporting the AppFabric Cache Server API, it also now supports the Memcached protocol – allowing you to point code written against Memcached at it (no code changes required). The new distributed cache can be set up to run in one of two ways: 1) Using a co-located approach.  In this option you allocate a percentage of memory in your existing web and worker roles to be used by the cache, and then the cache joins the memory into one large distributed cache.  Any data put into the cache by one role instance can be accessed by other role instances in your application – regardless of whether the cached data is stored on it or another role.  The big benefit with the “co-located” option is that it is free (you don’t have to pay anything to enable it) and it allows you to use what might have been otherwise unused memory within your application VMs. 2) Alternatively, you can add “cache worker roles” to your cloud service that are used solely for caching.  These will also be joined into one large distributed cache ring that other roles within your application can access.  You can use these roles to cache 10s or 100s of GBs of data in-memory very effectively – and the cache can be elastically increased or decreased at runtime within your application: New SDKs and Tooling Support We have updated all of the Windows Azure SDKs with today’s release to include new features and capabilities.  Our SDKs are now available for multiple languages, and all of the source in them is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure has in particular seen a bunch of great improvements with today’s release, and now includes tooling support for both VS 2010 and the VS 2012 RC. We are also now shipping Windows, Mac and Linux SDK downloads for languages that are offered on all of these systems – allowing developers to develop Windows Azure applications using any development operating system. Much, Much More The above is just a short list of some of the improvements that are shipping in either preview or final form today – there is a LOT more in today’s release.  These include new Virtual Private Networking capabilities, new Service Bus runtime and tooling support, the public preview of the new Azure Media Services, new Data Centers, significantly upgraded network and storage hardware, SQL Reporting Services, new Identity features, support within 40+ new countries and territories, and much, much more. You can learn more about Windows Azure and sign-up to try it for free at http://windowsazure.com.  You can also watch a live keynote I’m giving at 1pm June 7th (later today) where I’ll walk through all of the new features.  We will be opening up the new features I discussed above for public usage a few hours after the keynote concludes.  We are really excited to see the great applications you build with them. Hope this helps, Scott
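As a quick illustration of the Git publishing flow described above (a sketch only: the remote URL is a placeholder, and the portal displays the real one after you click “Set up Git publishing”):

    git init
    git add .
    git commit -m "initial deployment"
    # hypothetical URL; copy the actual remote from the portal
    git remote add azure https://user@yoursite.scm.example.net/yoursite.git
    git push azure master

Each push then appears in the site's deployments tab, and any earlier push can be redeployed from there.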

    Read the article

  • Varnish default.vcl grace period

    - by Vladimir
These are my settings for a grace period (/etc/varnish/default.vcl) sub vcl_recv { .... set req.grace = 360000s; ... } sub vcl_fetch { ... set beresp.grace = 360000s; ... } I tested Varnish using localhost and nodejs as a server. I started localhost, the site was up. Then I disconnected the server and the site got disconnected in less than 2 min. It says: Error 503 Service Unavailable Service Unavailable Guru Meditation: XID: 1890127100 Varnish cache server Could you tell me what could be the problem? sub vcl_fetch { if (beresp.ttl < 120s) { ##std.log("Adjusting TTL"); set beresp.ttl = 36000s; ##120s; } # Do not cache the object if the backend application does not want us to. if (beresp.http.Cache-Control ~ "(no-cache|no-store|private|must-revalidate)") { return(hit_for_pass); } # Do not cache the object if the status is not in the 200s if (beresp.status >= 300) { # Remove the Set-Cookie header #remove beresp.http.Set-Cookie; return(hit_for_pass); } # # Everything below here should be cached # # Remove the Set-Cookie header ####remove beresp.http.Set-Cookie; # Set the grace time ## set beresp.grace = 1s; //change this to minutes in case of app shutdown set beresp.grace = 360000s; ## 10 hour - reduce if it has negative impact # Static assets - browser caches them for a long time. if (req.url ~ "\.(css|js|.js|jpg|jpeg|gif|ico|png)\??\d*$") { /* Remove Expires from backend, it's not long enough */ unset beresp.http.expires; /* Set the clients TTL on this object */ set beresp.http.cache-control = "public, max-age=31536000"; /* marker for vcl_deliver to reset Age: */ set beresp.http.magicmarker = "1"; } else { set beresp.http.Cache-Control = "private, max-age=0, must-revalidate"; set beresp.http.Pragma = "no-cache"; } if (req.url ~ "\.(css|js|min|)\??\d*$") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ##do not duplicate these settings if (req.url ~ ".css") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".js") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".min") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ## If the request to the backend returns a code other than 200, restart the loop ## If the number of restarts reaches the value of the parameter max_restarts, ## the request will be error'ed. max_restarts defaults to 4. This prevents ## an eternal loop in the event that, e.g., the object does not exist at all. 
if (beresp.status != 200 && beresp.status != 403 && beresp.status != 404) { return(restart); } if (beresp.status == 302) { return(deliver); } # Never cache posts if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(hit_for_pass); } ##check this setting to ensure that it does not cause issues for browsers with no gzip if (beresp.http.content-type ~ "text") { set beresp.do_gzip = true; } if (beresp.http.Set-Cookie) { return(deliver); } ##if (req.url == "/index.html") { set beresp.do_esi = true; ##} ## check if this is needed or should be used # return(deliver); the object return(deliver); } sub vcl_recv { ##avoid leeching of images call hot_link; set req.grace = 360000s; ##2m ## if one backend is down - use another if (req.restarts == 0) { set req.backend = cache_director; ##we can specify individual VMs } else if (req.restarts == 1) { set req.backend = cache_director; } ## post calls should not be cached - add cookie for these requests if using micro-caching # Pass requests that are not GET or HEAD if (req.request != "GET" && req.request != "HEAD") { return(pass); ## return(pass) goes to backend - not cache } # Don't cache the result of a redirect if (req.http.Referer ~ "redir" || req.http.Origin ~ "jumpto") { return(pass); } # Don't cache the result of a redirect (asking for logon) if (req.http.Referer ~ "post" || req.http.Referer ~ "submit" || req.http.Referer ~ "add" || req.http.Referer ~ "ask") { return(pass); } # Never cache posts - ensure that we do not use these strings in our URLs that need to be cached if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(pass); } ## if (req.http.Authorization || req.http.Cookie) { if (req.http.Authorization) { /* Not cacheable by default */ return (pass); } # Handle compression correctly. Different browsers send different # "Accept-Encoding" headers, even though they mostly all support the same # compression mechanisms. By consolidating these compression headers into # a consistent format, we can reduce the size of the cache and get more hits. # @see: http://varnish.projects.linpro.no/wiki/FAQ/Compression if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|ico)$") { # No point in compressing these remove req.http.Accept-Encoding; } else if (req.http.Accept-Encoding ~ "gzip") { # If the browser supports it, we'll use gzip. set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { # Next, try deflate if it is supported. set req.http.Accept-Encoding = "deflate"; } else { # Unknown algorithm. Remove it and send unencoded. unset req.http.Accept-Encoding; } } # lookup graphics, css, js & ico files in the cache if (req.url ~ "\.(png|gif|jpg|jpeg|css|.js|ico)$") { return(lookup); } ##added on 0918 - check if it causes issues with user specific content if (req.request == "GET" && req.http.cookie) { return(lookup); } # Pipe requests that are non-RFC2616 or CONNECT which is weird. if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { ##closing connection and calling pipe return(pipe); } ##purge content via localhost only if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } ## do we need this? ## return(lookup); }
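One likely culprit (an educated guess, not a certain diagnosis): grace can only serve stale content if Varnish knows the backend is sick, and Varnish only knows that if the backend definition carries a health probe. A sketch for Varnish 3, with host and port as placeholders for your nodejs server:

    backend default {
        .host = "127.0.0.1";
        .port = "3000";
        .probe = {
            .url = "/";
            .interval = 5s;
            .timeout = 1s;
            .window = 5;
            .threshold = 3;
        }
    }

    sub vcl_recv {
        if (req.backend.healthy) {
            set req.grace = 30s;       # short grace while the backend is up
        } else {
            set req.grace = 360000s;   # serve stale objects while it is down
        }
    }

Without a probe the backend is never marked sick, so when you kill the nodejs server Varnish keeps attempting fetches and returns the 503 guru meditation instead of a graced object.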

    Read the article

  • Error code: /var/cache/apt/archives/linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb

    - by user286682
    $ sudo apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following packages will be upgraded: linux-image-3.8.0-19-generic 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 6 not fully installed or removed. Need to get 0 B/47.8 MB of archives. After this operation, 142 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 164064 files and directories currently installed.) Preparing to replace linux-image-3.8.0-19-generic 3.8.0-19.29 (using .../linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb) ... Done. Unpacking replacement linux-image-3.8.0-19-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb (--unpack): trying to overwrite '/lib/modules/3.8.0-19-generic/kernel/arch/x86/kvm/kvm-intel.ko', which is also in package linux-image-extra-3.8.0-19-generic 3.8.0-19.29 dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.8.0-19-generic /boot/vmlinuz-3.8.0-19-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.8.0-19-generic /boot/vmlinuz-3.8.0-19-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) How can I get this update to work?
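A common workaround for this particular "trying to overwrite ... which is also in package ..." dpkg failure (a sketch; --force-overwrite is a blunt instrument, so use it only because the conflicting file belongs to the matching linux-image-extra package for the same kernel) is to let dpkg overwrite the file and then finish the upgrade:

    sudo dpkg -i --force-overwrite /var/cache/apt/archives/linux-image-3.8.0-19-generic_3.8.0-19.30~precise1_amd64.deb
    sudo apt-get -f install
    sudo apt-get dist-upgrade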

    Read the article

  • Caching queries in Django

    - by dolma33
In a Django project I only need to cache a few queries, using, because of server limitations, a cache table instead of memcached. One of those queries looks like this: Let's say I have a Parent object, which has a lot of Child objects. I need to store the result of the simple query parent.children.all(). I have no problem with that, and everything works as expected with some code like key = "%s_children" %(parent.name) value = cache.get(key) if value is None: cache.set(key, parent.children.all(), CACHE_TIMEOUT) value = cache.get(key) But sometimes, just sometimes, cache.set does nothing, and, after executing cache.set, cache.get(key) keeps returning None. After some tests, I've noticed that cache.set is not working when parent.children.all().count() returns higher values. That means that if I'm storing inside the key (for example) 600 children objects, it works fine, but it won't work with 1200 children. So my question is: is there a limit to the data that a key can store? How can I override it? Second question: which way is "better", the above code, or the following one? key = "%s_children" %(parent.name) value = cache.get(key) if value is None: value = parent.children.all() cache.set(key, value, CACHE_TIMEOUT) The second version won't cause errors if cache.set doesn't work, so it could be a workaround to my issue, but obviously not a solution. In general, let's forget about my issue, which version would you consider "better"?
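One plausible explanation (an assumption worth testing, not a confirmed diagnosis): cache.set pickles whatever it is given, and the database cache backend stores the pickled, encoded value in a text column, so a sufficiently large result set can exceed what that column (or the database's packet limit) will accept, and the set fails silently. A sketch of two mitigations: force evaluation once and cache plain objects, or cache only the primary keys to keep the payload small:

    key = "%s_children" % parent.name
    children = cache.get(key)
    if children is None:
        # evaluate once; cache concrete objects, not a lazy queryset
        children = list(parent.children.all())
        cache.set(key, children, CACHE_TIMEOUT)

    # smaller payload: cache only the pks and refetch rows on demand
    ids_key = "%s_children_ids" % parent.name
    child_ids = cache.get(ids_key)
    if child_ids is None:
        child_ids = list(parent.children.values_list('pk', flat=True))
        cache.set(ids_key, child_ids, CACHE_TIMEOUT)
    children = parent.children.filter(pk__in=child_ids)

As for the second question: the second pattern in the question is generally preferable, since it avoids an extra cache.get round-trip and never hands back None just because the set failed.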

    Read the article

  • Should I use /etc/bind/zones/ or /var/cache/bind/?

    - by nbolton
Each tutorial seems to have a different opinion on this. For my ISC BIND zones, should I use /etc/bind/zones/ or /var/cache/bind/? In the last install, I used /var/cache/bind/ but only because I was guided to do so; however I just spotted a pid file in there for this new Debian install, so I figured that using the "working directory" to store zone files probably wasn't the best idea. It seems that many admins use this so they don't have to type the full path when declaring a new zone. For example: file "db.foobar.com"; instead of: file "/etc/bind/zones/db.foobar.com"; The short form is obviously easier to type, but is it good or bad practice? Some may also suggest setting the working directory to /etc/bind/zones: options { // directory "/var/cache/bind"; directory "/etc/bind/zones"; } ... but something tells me this isn't good practice, since the pid file would be created there I assume (unless it's just in /var/cache/bind by coincidence). I took a look at the manpage but it didn't seem to say what the directory option was for; any idea exactly what it was designed for?
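For what it's worth, a common Debian-style compromise (a sketch; the zone names and the slave's master IP are placeholders) is to leave the working directory as /var/cache/bind, so journals, slave zone transfers and other runtime files land somewhere the daemon can write, while pointing master zones at /etc/bind/zones with absolute paths:

    options {
        directory "/var/cache/bind";
    };

    zone "foobar.com" {
        type master;
        file "/etc/bind/zones/db.foobar.com";  # absolute: lives with the config
    };

    zone "example.net" {
        type slave;
        masters { 192.0.2.1; };
        file "db.example.net";  # relative: resolved in /var/cache/bind
    };

The directory option simply sets named's working directory: relative file paths are resolved there and runtime artifacts are written there, which is consistent with the pid file you spotted.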

    Read the article

  • What's with the accesses to $random_existing_file/cache/df.php?

    - by Bernd Jendrissek
    Occasionally I eyeball Apache's access_log and lately I've been noticing these accesses to URLs that I don't serve. They're correctly 404'ed, but I'd like to know just who and what is involved here. "Obviously" it's some sort of vulnerability probing; I'd like to know which. (Not that it affects me, but I like to know the score.) Here's an example: 69.89.31.206 - - [28/Nov/2012:17:36:34 +0200] "GET /cvfull.pdf/cache/df.php HTTP/1.1" 404 489 "-" "-" Oddly, all 26 attempts are to either /cache/df.php, or to /cvfull.pdf/cache/df.php - they come in pairs. A few weeks ago it was zx.php, now it's df.php - I'm assuming the target is the same. Perhaps I should be flattered that a script is thinking of hiring me. Seriously, my CV is one of only two PDF files on my site, so I can only guess that non-PDF URLs aren't interesting? I've tried Googling for "cache df php", but my Google-fu is weak at the best of times, so I can only find a few reports of other script attacks. What's the vulnerability being scanned for here?

    Read the article

  • Can you share offline files cache with two user accounts?

    - by Joel Coehoorn
I have a new laptop that I use for both home and work. It runs Windows 7 Ultimate, and is joined to the domain at work. It is okay to use this laptop for both work and personal activities, and I even have an account set up on the local machine in addition to the work domain account specifically to help keep the two separate. At home, I have a file server that I use to share files and printers with my wife's laptop, this new laptop, and my old desktop, which will now become the family machine. My mp3 library is on there, among other things. What I want to do is use the Windows Offline Files feature to keep a synced copy of my music library on the laptop. That part is easy. What's tricky is that I want to share this offline cache between both the local account on the laptop and my work domain account. I could do them both separately, but then I have two copies of a very large music library stored locally. This also means twice the sync burden, when the domain account is rarely connected to the file share. I really want to be able to sync from the local machine account only, and have the domain account be able to use the synced files. I know where the offline file cache is kept (\Windows\CSC) and I can find the cached files (not encrypted), but permissions on the cache are set up oddly, so using that cache directly is not trivial. Any ideas appreciated.

    Read the article

  • How can I configure apache to cache the images that it is serving? Right now it is giving headers t

    - by Tchalvak
Serving up images that don't seem to cache: there's a LAPP stack (PostgreSQL instead of MySQL) running over on http://ninjawars.net. I just recently noticed that images don't seem to be caching with any kind of good frequency as I was reloading a page with a few images on it here: http://www.ninjawars.net/attack_player.php Here is an example image (they're probably all being served exactly the same): http://www.ninjawars.net/images/characters/fighter.png Checking the header, it seems that the caching is set to: Cache-Control:max-age=0 (the full header for this image, likely the same for all of them, is... Request URL:http://www.ninjawars.net/images/characters/fighter.png Request Method:GET Status Code:200 OK Request Headers Accept:application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5 Cache-Control:max-age=0 Referer:http://www.ninjawars.net/images/characters/fighter.png User-Agent:Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.3 Safari/533.4 Response Headers Accept-Ranges:bytes Content-Length:938 Content-Type:image/png Date:Thu, 13 May 2010 21:24:07 GMT ETag:"ffd4d-3aa-4837efc120540" Last-Modified:Mon, 05 Apr 2010 15:28:45 GMT Server:Apache ) So what modules or config or htaccess or whatever do I change to have it cache images, e.g. for 24 hours?
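A minimal sketch using mod_expires (assuming the module is available; on Debian-style layouts you enable it with a2enmod expires), placed in the server config or an .htaccess file:

    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png "access plus 24 hours"
        ExpiresByType image/gif "access plus 24 hours"
        ExpiresByType image/jpeg "access plus 24 hours"
    </IfModule>

mod_expires emits both an Expires header and a matching Cache-Control: max-age. Also note that the Cache-Control:max-age=0 shown above sits in the request headers (the browser revalidating on reload); check the response side with curl -I http://www.ninjawars.net/images/characters/fighter.png to see what Apache is actually sending.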

    Read the article

  • Installing Lubuntu 14.04.1 forcepae fails

    - by Rantanplan
    I tried to install Lubuntu 14.04.1 from a CD. First, I chose Try Lubuntu without installing which gave: ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ... Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried Try Lubuntu without installing again. That worked fine. dmesg | grep -i pae showed: [ 0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae [ 0.008118] PAE forced! On the live-CD session, I tried installing Lubuntu double clicking on the install button on the desktop. Here, the CD starts running but then stops running and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen using forcepae again. After a while, I receive the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. For what errors should I search? And how? Finally, I rebooted once more and tried Check disc for defects with forcepae option; no errors have been found. Now, I am wondering how to find the error or whether it would be better to follow advice c in https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back." Thanks for some hints! Perhaps some of the following can help: On Lubuntu 12.04: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2 bogomips : 1284.76 clflush size : 64 cache_alignment : 64 address sizes : 32 bits physical, 32 bits virtual power management: uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux lsb_release -a No LSB modules are available. 
Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise cpuid eax in eax ebx ecx edx 00000000 00000002 756e6547 6c65746e 49656e69 00000001 000006d6 00000816 00000180 afe9f9bf 00000002 02b3b001 000000f0 00000000 2c04307d 80000000 80000004 00000000 00000000 00000000 80000001 00000000 00000000 00000000 00000000 80000002 20202020 20202020 65746e49 2952286c 80000003 6e655020 6d756974 20295228 7270204d 80000004 7365636f 20726f73 30352e31 007a4847 Vendor ID: "GenuineIntel"; CPUID level 2 Intel-specific functions: Version 000006d6: Type 0 - Original OEM Family 6 - Pentium Pro Model 13 - Stepping 6 Reserved 0 Brand index: 22 [not in table] Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz" CLFLUSH instruction cache line size: 8 Feature flags afe9f9bf: FPU Floating Point Unit VME Virtual 8086 Mode Enhancements DE Debugging Extensions PSE Page Size Extensions TSC Time Stamp Counter MSR Model Specific Registers MCE Machine Check Exception CX8 COMPXCHG8B Instruction SEP Fast System Call MTRR Memory Type Range Registers PGE PTE Global Flag MCA Machine Check Architecture CMOV Conditional Move and Compare Instructions FGPAT Page Attribute Table CLFSH CFLUSH instruction DS Debug store ACPI Thermal Monitor and Clock Ctrl MMX MMX instruction set FXSR Fast FP/MMX Streaming SIMD Extensions save/restore SSE Streaming SIMD Extensions instruction set SSE2 SSE2 extensions SS Self Snoop TM Thermal monitor 31 reserved TLB and cache info: b0: unknown TLB/cache descriptor b3: unknown TLB/cache descriptor 02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries f0: unknown TLB/cache descriptor 7d: unknown TLB/cache descriptor 30: unknown TLB/cache descriptor 04: Data TLB: 4MB pages, 4-way set assoc, 8 entries 2c: unknown TLB/cache descriptor On Lubuntu 14.04.1 live-CD with forcepae: cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2 bogomips : 1284.68 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 32 bits virtual power management: uname -a Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.1 LTS Release: 14.04 Codename: trusty cpuid CPU 0: vendor_id = "GenuineIntel" version information (1/eax): processor type = primary processor (0) family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6) model = 0xd (13) stepping id = 0x6 (6) extended family = 0x0 (0) extended model = 0x0 (0) (simple synth) = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm miscellaneous (1/ebx): process local APIC physical ID = 0x0 (0) cpu count = 0x0 (0) CLFLUSH line size = 0x8 (8) brand index = 0x16 (22) brand id = 0x16 (22): Intel Pentium M, .13um feature information (1/edx): x87 FPU on chip = true virtual-8086 mode enhancement = true debugging extensions = true page size extensions = true time stamp counter = true RDMSR and WRMSR support = true physical address extensions = false machine check exception = true CMPXCHG8B inst. 
= true APIC on chip = false SYSENTER and SYSEXIT = true memory type range registers = true PTE global bit = true machine check architecture = true conditional move/compare instruction = true page attribute table = true page size extension = false processor serial number = false CLFLUSH instruction = true debug store = true thermal monitor and clock ctrl = true MMX Technology = true FXSAVE/FXRSTOR = true SSE extensions = true SSE2 extensions = true self snoop = true hyper-threading / multi-core supported = false therm. monitor = true IA64 = false pending break event = true feature information (1/ecx): PNI/SSE3: Prescott New Instructions = false PCLMULDQ instruction = false 64-bit debug store = false MONITOR/MWAIT = false CPL-qualified debug store = false VMX: virtual machine extensions = false SMX: safer mode extensions = false Enhanced Intel SpeedStep Technology = true thermal monitor 2 = true SSSE3 extensions = false context ID: adaptive or shared L1 data = false FMA instruction = false CMPXCHG16B instruction = false xTPR disable = false perfmon and debug = false process context identifiers = false direct cache access = false SSE4.1 extensions = false SSE4.2 extensions = false extended xAPIC support = false MOVBE instruction = false POPCNT instruction = false time stamp counter deadline = false AES instruction = false XSAVE/XSTOR states = false OS-enabled XSAVE/XSTOR = false AVX: advanced vector extensions = false F16C half-precision convert instruction = false RDRAND instruction = false hypervisor guest status = false cache and TLB information (2): 0xb0: instruction TLB: 4K, 4-way, 128 entries 0xb3: data TLB: 4K, 4-way, 128 entries 0x02: instruction TLB: 4M pages, 4-way, 2 entries 0xf0: 64 byte prefetching 0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines 0x30: L1 cache: 32K, 8-way, 64 byte lines 0x04: data TLB: 4M pages, 4-way, 8 entries 0x2c: L1 data cache: 32K, 8-way, 64 byte lines extended feature flags (0x80000001/edx): SYSCALL and SYSRET instructions = false execution disable = false 1-GB large page support = false RDTSCP = false 64-bit extensions technology available = false Intel feature flags (0x80000001/ecx): LAHF/SAHF supported in 64-bit mode = false LZCNT advanced bit manipulation = false 3DNow! PREFETCH/PREFETCHW instructions = false brand = " Intel(R) Pentium(R) M processor 1.50GHz" (multi-processing synth): none (multi-processing method): Intel leaf 1 (synth) = Intel Pentium M (Dothan B1), 90nm

    Read the article

  • NUMA-aware placement of communication variables

    - by Dave
For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss by locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonably well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node. But when thinking about accesses to heavily-written communication variables we normally consider what caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses and cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables have very little impact on performance. The variables are frequently accessed and so likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought. Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox, and having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case and that NUMA placement of the communication variables was actually quite important. For background, an X4800 system consists of 8 Intel X7560 Xeons. Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's no central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant. 
Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip-time and throughput rate between the client and server. In this benchmark the server acts as a simple transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads. The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies. Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their response not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts and ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needs to transfer the line to A, it does so by cache-to-cache transfer and lets H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, since the home doesn't need to respond over a QPI link when the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path. Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their response to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping, snoop probes go directly to the home node and are not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd party owner of the line, if any, can respond either to the home or the original requester (or even to both) according to the protocol policies. 
There are myriad variations that have been implemented, and unfortunately terminology doesn't always agree between vendors or with the academic taxonomy papers. The key is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it also may require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping. While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring. (Actually there are multiple contra-rotating rings). And the last-level on-chip cache (LLC) is partitioned into banks or slices, with each slice associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are not nearly the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
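For readers who want to experiment with this themselves, below is a rough C sketch of the kind of client/server mailbox ping-pong described above. It is illustrative only, not the benchmark harness used here: thread and node placement are left to numactl or taskset, it relies on the default first-touch policy (each mailbox page is first touched by its writer, so it should be homed on the writer's node), and it polls volatile words rather than using proper atomics, which is tolerable on TSO machines like x86 but not portable.

    /* pingpong.c -- hypothetical mailbox ping-pong sketch.
       Compile: gcc -O2 -pthread pingpong.c -o pingpong -lrt
       Run, e.g.: numactl --cpunodebind=0,1 ./pingpong and pin threads as desired. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <time.h>

    #define ITERS 1000000UL

    static volatile uint64_t *req;   /* written by client, read by server */
    static volatile uint64_t *resp;  /* written by server, read by client */

    static void *server(void *arg) {
        uint64_t last = 0;
        *resp = 0;                          /* first touch: home resp on the server's node */
        for (;;) {
            uint64_t r;
            while ((r = *req) == last) ;    /* poll the request mailbox */
            if (r == UINT64_MAX) break;     /* shutdown sentinel */
            last = r;
            *resp = r + 1;                  /* transponder: reply = request + 1 */
        }
        return NULL;
    }

    int main(void) {
        /* separate pages so each mailbox can be homed independently via first touch */
        req  = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        resp = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        *req = 0;                           /* first touch: home req on the client's node */

        pthread_t t;
        pthread_create(&t, NULL, server, NULL);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (uint64_t i = 1; i <= ITERS; i++) {
            *req = i;                       /* post request */
            while (*resp != i + 1) ;        /* busy-wait on the response */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        *req = UINT64_MAX;                  /* ask the server to exit */
        pthread_join(t, NULL);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("avg round trip: %.1f ns\n", ns / ITERS);
        return 0;
    }

On a machine like the X4800 you would bind the client and server threads to cores on different nodes (pthread_setaffinity_np or taskset) and compare round-trip times as the homes of the two mailbox pages are varied.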

    Read the article

< Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >