Search Results

Search found 10693 results on 428 pages for 'max requests'.


  • Increasing file descriptor limit on Debian does not work! Help!

    - by Aco
    I am running Debian 6 and I am trying to increase the file descriptor limit, but it does not want to work. This is what I have done: I edited /etc/sysctl.conf by adding fs.file-max = 64000 at the end and applied the changes using sysctl -p. I then edited /etc/security/limits.conf and added the following lines:

        * soft nofile 64000
        * hard nofile 64000

    Now when I execute ulimit -Hn and ulimit -Sn I still see 1024. I rebooted the server and I still get the same result. What have I failed to do?
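
    A quick way to double-check what a process actually sees is to read the limits from inside one (a minimal sketch using Python's standard resource module; note that limits.conf is applied by PAM at login, so the values should be checked from a fresh login session):

        import resource

        # Soft and hard open-file limits for the current process,
        # corresponding to ulimit -Sn and ulimit -Hn
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        print("soft:", soft)
        print("hard:", hard)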

  • What reasons are there to reduce the max-age of a logo to just 8 days? [closed]

    - by callum
    Most websites set max-age=31536000 (1 year) in the Cache-Control headers of static assets such as logo images. Examples: YouTube, Yahoo, Twitter, the BBC. But there is a notable exception: Google's logo has max-age=691200 (8 days). I've checked the headers on the Google logo in the past, and it definitely used to be 1 year. (Also, it used to be part of a sprite, and now it is a standalone logo image, but that's probably another question...)

    What could be valid technical reasons why they would want to reduce its cache lifetime to just 8 days? Google's homepage is one of the most carefully optimised pages in the world, so I imagine there's a good reason. Edit: Please make sure you understand these points before answering:

    Nobody uses short max-age lifetimes to allow modifying a static asset in future. When you modify it, you just serve it at a different URL. So no, it's nothing to do with Google doodles. Think about it: even if Google didn't understand this basic trick of HTTP, 8 days still wouldn't be appropriate, as only those users who don't have the original logo cached would see the doodle on doodle-day – and then that group of users would go on seeing the doodle for the following 8 days after Google changed it back :)

    Web servers do not worry about "filling up" the caches of clients (or proxies). The client manages this by itself – when it hits its own storage limit, it just starts dropping the lowest-priority items to make space for new items. The priority score is based on the question "How likely am I to benefit from having cached this URL?", which is nothing to do with what max-age value the server sent when the URL was originally requested; it's a heuristic based on the "frecency" of requests for that URL. The max-age simply lets the server set a cut-off point – the time at which the client is supposed to discard the item regardless of how often it's being re-used. It would be very nice and trusting of a downstream client/proxy to rely on all origin servers "holding back" from filling up their caches, but I don't think we live in that world ;)
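
    For reference, the cut-off behaviour described above is just arithmetic on the response's age; a simplified sketch of the freshness check a client performs (ignoring Age headers and heuristic freshness):

        import time

        def is_fresh(stored_at, max_age, now=None):
            # Fresh while the response's age is below max-age (simplified RFC 2616 model)
            now = time.time() if now is None else now
            return (now - stored_at) < max_age

        # A logo fetched 9 days ago under max-age=691200 (8 days) must be revalidated:
        print(is_fresh(time.time() - 9 * 86400, 691200))  # False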

  • Rails App hangs after few requests

    - by Paddy
    I have the Bitnami Rails stack installed on my Mac. To better explain my problem, I created a simple scaffold-based Rails app with MySQL as the backend. I can perform simple POST and GET requests for a while, and after a few requests the app just hangs indefinitely. No exception is caught, and there is nothing worthwhile in the development log to explain this strange behaviour. This is the last bit from the development log before the app froze:

        Processing WritedatasController#index (for 127.0.0.1 at 2010-03-30 20:38:51) [GET]
          Writedata Load (0.7ms)   SELECT * FROM `writedatas`
        Rendering template within layouts/application
        Rendering writedatas/index
          Writedata Columns (2.9ms)   SHOW FIELDS FROM `writedatas`
        Completed in 99ms (View: 88, DB: 4) | 200 OK [http://localhost/writedatas]
          SQL (0.2ms)   SET NAMES 'utf8'
          SQL (0.1ms)   SET SQL_AUTO_IS_NULL=0
        Processing WritedatasController#new (for 127.0.0.1 at 2010-03-30 20:38:52) [GET]
          Writedata Columns (2.0ms)   SHOW FIELDS FROM `writedatas`
        Rendering template within layouts/application
        Rendering writedatas/new
        Rendered writedatas/_form (5.9ms)
        Completed in 34ms (View: 25, DB: 2) | 200 OK [http://localhost/writedatas/new]
          SQL (0.4ms)   SET NAMES 'utf8'
          SQL (0.1ms)   SET SQL_AUTO_IS_NULL=0
        Processing WritedatasController#index (for 127.0.0.1 at 2010-03-30 20:39:17) [GET]
          Writedata Load (0.7ms)   SELECT * FROM `writedatas`
        Rendering template within layouts/application
        Rendering writedatas/index
          Writedata Columns (2.6ms)   SHOW FIELDS FROM `writedatas`
        Completed in 101ms (View: 90, DB: 4) | 200 OK [http://localhost/writedatas]

    It just hung at this point. After this happens I have to restart the server, only for it to hang again after a few requests. This is the weirdest problem I have faced and I am truly stumped.

  • Locking a table for getting MAX in LINQ

    - by Hossein Margani
    Hi everyone! I have a LINQ query where I want to get the MAX of Code in my table, increase it, and insert a new record with the new Code, just like the IDENTITY feature of SQL Server, except that here my Code column is char(5) and can contain alphabetic as well as numeric characters. My problem is that when inserting a new row, two concurrent processes can read the same max and insert records with an equal Code. My command is:

        var maxCode = db.Customers.Select(c => c.Code).Max();
        var anotherCustomer = db.Customers.Where(...).SingleOrDefault();
        anotherCustomer.Code = GenerateNextCode(maxCode);
        db.SubmitChanges();

    I ran this command across 1000 threads, each updating 200 customers, and used a transaction with IsolationLevel.Serializable; after two or three executions an error occurred:

        using (var db = new DBModelDataContext())
        {
            DbTransaction tran = null;
            try
            {
                db.Connection.Open();
                tran = db.Connection.BeginTransaction(IsolationLevel.Serializable);
                db.Transaction = tran;
                . . . .
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
            }
            finally
            {
                db.Connection.Close();
            }
        }

    Error:

        Transaction (Process ID 60) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

    Other isolation levels generate this error: "Row not found or changed." Please help me, thank you.

  • Problem calling std::max

    - by Eric
    I compiled my bison-generated files in Visual Studio and got these errors:

        ...\position.hh(83): error C2589: '(' : illegal token on right side of '::'
        ...\position.hh(83): error C2059: syntax error : '::'
        ...\position.hh(83): error C2589: '(' : illegal token on right side of '::'
        ...\position.hh(83): error C2059: syntax error : '::'

    The corresponding code is:

        inline void columns (int count = 1)
        {
          column = std::max (1u, column + count);
        }

    I think the problem is with std::max; if I change std::max to equivalent code then the problem goes away, but is there a better solution than changing the generated code? Here is the bison file I wrote:

        //
        // bison.yy
        //
        %skeleton "lalr1.cc"
        %require "2.4.2"
        %defines
        %define parser_class_name "cmd_parser"
        %locations
        %debug
        %error-verbose

        %code requires { class ParserDriver; }
        %parse-param { ParserDriver& driver }
        %lex-param   { ParserDriver& driver }

        %union {
            struct ast *a;
            double d;
            struct symbol *s;
            struct symlist *sl;
            int fn;
        }

        %code {
            #include "helper_func.h"
            #include "ParserDriver.h"
            std::string error_msg = "";
        }

        %token <d> NUMBER
        %token <s> NAME
        %token <fn> FUNC
        %token EOL
        %token IF THEN ELSE WHILE DO LET
        %token SYM_TABLE_OVERFLOW
        %token UNKNOWN_CHARACTER

        %nonassoc <fn> CMP
        %right '='
        %left '+' '-'
        %left '*' '/'
        %nonassoc '|' UMINUS

        %type <a> exp stmt list explist
        %type <sl> symlist

        %{
        extern int yylex(yy::cmd_parser::semantic_type *yylval,
                         yy::cmd_parser::location_type* yylloc);
        %}

        %start calclist
        %%
        ... grammar rules ...

  • HttpWebRequest Timeouts After Ten Consecutive Requests

    - by Bob Mc
    I'm writing a web crawler for a specific site. The application is a VB.Net Windows Forms application that is not using multiple threads; each web request is consecutive. However, after ten successful page retrievals, every subsequent request times out. I have reviewed the similar questions already posted here on SO, and have implemented the recommended techniques in my GetPage routine, shown below:

        Public Function GetPage(ByVal url As String) As String
            Dim result As String = String.Empty
            Dim uri As New Uri(url)
            Dim sp As ServicePoint = ServicePointManager.FindServicePoint(uri)
            sp.ConnectionLimit = 100

            Dim request As HttpWebRequest = WebRequest.Create(uri)
            request.KeepAlive = False
            request.Timeout = 15000

            Try
                Using response As HttpWebResponse = DirectCast(request.GetResponse, HttpWebResponse)
                    Using dataStream As Stream = response.GetResponseStream()
                        Using reader As New StreamReader(dataStream)
                            If response.StatusCode <> HttpStatusCode.OK Then
                                Throw New Exception("Got response status code: " + response.StatusCode)
                            End If
                            result = reader.ReadToEnd()
                        End Using
                    End Using
                    response.Close()
                End Using
            Catch ex As Exception
                Dim msg As String = "Error reading page """ & url & """. " & ex.Message
                Logger.LogMessage(msg, LogOutputLevel.Diagnostics)
            End Try

            Return result
        End Function

    Have I missed something? Am I not closing or disposing of an object that should be? It seems strange that it always happens after ten consecutive requests.

    Notes: In the constructor for the class in which this method resides, I have the following:

        ServicePointManager.DefaultConnectionLimit = 100

    If I set KeepAlive to true, the timeouts begin after five requests. All the requests are for pages in the same domain.

    EDIT: I added a delay of between two and seven seconds between web requests, so that I do not appear to be "hammering" the site or attempting a DoS attack. However, the problem still occurs.

  • Does mod_php honor HEAD requests properly?

    - by rkulla
    The HTTP/1.1 RFC stipulates "The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response." I know Apache honors the RFC, but modules don't have to. My question is: does mod_php5 honor this? The reason I ask is that I just came across an article saying that PHP developers should check this themselves with:

        if (stripos($_SERVER['REQUEST_METHOD'], 'HEAD') !== FALSE) {
            exit();
        }

    but seeing as browsers send HEAD requests for cache checking, it seems unlikely to me that no book, docs, etc., advise PHP developers to do this check. I googled for a second and not much turned up, other than some people saying they try strange things like mod_rewrite/redirect after getting HEAD requests, and some old bug ticket from around 2002 claiming that mod_php still executed the rest of the script by default. So I ran a quick test using PECL::HTTP to call:

        http_head('http://mysite.com/test-head-request.php');

    while having:

        <?php error_log('REST OF SCRIPT STILL RAN'); ?>

    in test-head-request.php to see if the rest of the script still executed, and it didn't. I figure that should be enough to settle it, but I want to get more feedback and maybe help clear up confusion for anyone else who has wondered about this. So if anyone knows off the top of their head (no pun intended), or has any conventions for receiving HEAD requests, that'd be great. Otherwise, I'll grep the C source later and respond in a comment with my findings. Thanks.
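
    For anyone who wants to reproduce the test without PECL::HTTP, a roughly equivalent check (a Python sketch; the host and script path are the same hypothetical ones as above):

        import http.client

        conn = http.client.HTTPConnection("mysite.com")
        conn.request("HEAD", "/test-head-request.php")
        resp = conn.getresponse()
        print(resp.status, resp.reason)
        print(resp.getheaders())
        # Per the RFC, a compliant HEAD response carries no message body
        print(len(resp.read()))  # should print 0
        conn.close()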

  • Servlet requests are executed sequentially for no apparent reason in Glassfish v3

    - by Fabien Benoit
    Hi, I'm using the GlassFish 3 web profile and can't get HTTP workers to execute requests on a servlet concurrently. This is how I observed the problem. I made a very simple servlet that writes the current thread name to standard output and sleeps for 10 seconds:

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            System.out.println(Thread.currentThread().getName());
            try {
                Thread.sleep(10000); // 10 sec
            } catch (InterruptedException ex) {}
        }

    And when I run several simultaneous requests, I clearly see in the logs that the requests are executed sequentially (one trace every 10 seconds):

        INFO: http-thread-pool-8080-(2)
        (10 seconds later...)
        INFO: http-thread-pool-8080-(1)
        (10 seconds later...)
        INFO: http-thread-pool-8080-(2)
        etc.

    All my GlassFish settings are untouched; it's the out-of-the-box config (the default thread pool is 2 threads min, 5 max, if I recall correctly). I really don't understand why the sleep() blocks all the other worker threads. Any insight would be greatly appreciated! Thanks, Fabien
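
    For what it's worth, a quick way to fire genuinely simultaneous requests at the servlet is a small client-side script (a Python sketch; the URL is a placeholder for wherever the servlet is mapped):

        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://localhost:8080/test"  # placeholder URL

        def hit(i):
            with urllib.request.urlopen(URL) as r:
                return i, r.status

        # Against a truly concurrent container, 5 requests to a 10-second
        # handler should all finish in about 10 seconds, not about 50.
        with ThreadPoolExecutor(max_workers=5) as ex:
            for i, status in ex.map(hit, range(5)):
                print(i, status)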

  • How to write to a varchar(max) column using ODBC

    - by andyjohnson
    Summary: I'm trying to write a text string to a column of type varchar(max) using ODBC and SQL Server 2005. It fails if the length of the string is greater than 8000. Help!

    I have some C++ code that uses ODBC (SQL Native Client) to write a text string to a table. If I change the column from, say, varchar(100) to varchar(max) and try to write a string with length greater than 8000, the write fails with the following error:

        [Microsoft][ODBC SQL Server Driver]String data, right truncation

    So, can anyone advise me on whether this can be done, and how? Some example (not production) code that shows what I'm trying to do:

        SQLHENV hEnv = NULL;
        SQLRETURN iError = SQLAllocEnv(&hEnv);

        HDBC hDbc = NULL;
        SQLAllocConnect(hEnv, &hDbc);
        const char* pszConnStr = "Driver={SQL Server};Server=127.0.0.1;Database=MyTestDB";
        UCHAR szConnectOut[SQL_MAX_MESSAGE_LENGTH];
        SWORD iConnectOutLen = 0;
        iError = SQLDriverConnect(hDbc, NULL, (unsigned char*)pszConnStr, SQL_NTS,
                                  szConnectOut, (SQL_MAX_MESSAGE_LENGTH-1),
                                  &iConnectOutLen, SQL_DRIVER_COMPLETE);

        HSTMT hStmt = NULL;
        iError = SQLAllocStmt(hDbc, &hStmt);
        const char* pszSQL = "INSERT INTO MyTestTable (LongStr) VALUES (?)";
        iError = SQLPrepare(hStmt, (SQLCHAR*)pszSQL, SQL_NTS);
        char* pszBigString = AllocBigString(8001);
        iError = SQLSetParam(hStmt, 1, SQL_C_CHAR, SQL_VARCHAR, 0, 0,
                             (SQLPOINTER)pszBigString, NULL);
        iError = SQLExecute(hStmt); // Returns SQL_ERROR if pszBigString len > 8000

    The table MyTestTable contains a single column defined as varchar(max). The function AllocBigString (not shown) creates a string of arbitrary length. I understand that previous versions of SQL Server had an 8000-character limit for varchars, but why is this happening in SQL 2005? Thanks, Andy

  • Subquery max sequence number

    - by Andy Levesque
    I'm hesitant to ask because I'm sure the answer is out there, but I just can't seem to come up with the keywords to find it. I'm stepping outside my comfort zone by starting with subqueries (I'm normally an Access user). I have a query with TECH_ID, SEQ_NBR, and PELL_FT_AWD_AMT:

        SELECT ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID,
               ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR,
               ISRS_V_NEED_ANAL_RESULT_PARENT.PELL_FT_AWD_AMT,
               ISRS_V_NEED_ANAL_RESULT_PARENT.SEQ_NBR
        FROM ISRS_V_NEED_ANAL_RESULT_PARENT
        GROUP BY ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID,
                 ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR,
                 ISRS_V_NEED_ANAL_RESULT_PARENT.PELL_FT_AWD_AMT,
                 ISRS_V_NEED_ANAL_RESULT_PARENT.SEQ_NBR
        HAVING (((ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR)="2013"))
        ORDER BY ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID;

    What I want is to add a subquery that selects only the max SEQ_NBR for each record, but I can't seem to get the syntax right. In the past I would cheat and have a separate query that first gave me the TECH_ID and max SEQ_NBR, and then a second query that joined the original table to the first query to get the rest. How can I do this in one query? Example:

        TECH_ID  SEQ_NBR  PELL
        1        1        4000
        1        2        4000
        1        3        5000

    Using just the max of the sequence number still returns both (1; 2; 4000) and (1; 3; 5000), when I only want the latter.
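
    The usual shape for "only the row with the max SEQ_NBR per TECH_ID" is a correlated subquery in the WHERE clause; a minimal runnable sketch of the pattern (Python with SQLite purely for illustration, using simplified column names from the example):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE results (tech_id INTEGER, seq_nbr INTEGER, pell INTEGER);
            INSERT INTO results VALUES (1, 1, 4000), (1, 2, 4000), (1, 3, 5000);
        """)
        rows = db.execute("""
            SELECT tech_id, seq_nbr, pell
            FROM results r
            WHERE seq_nbr = (SELECT MAX(seq_nbr)
                             FROM results
                             WHERE tech_id = r.tech_id)
        """).fetchall()
        print(rows)  # [(1, 3, 5000)] -- only the highest sequence number per tech_id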

  • merging two tables, while applying aggregates on the duplicates (max,min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among the fields I have Id, Count, FirstHit, LastHit:

        Id       - the record id
        Count    - number of times this Id has been reported
        FirstHit - earliest timestamp with which this Id was reported
        LastHit  - latest timestamp with which this Id was reported

    This table has only one record for any given Id. Every day I get another table (let's call it feed) with around half a million records and these fields among many others:

        Id
        Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is update log in the following way:

        Count    - log's Count plus the count of records for that Id found in feed
        FirstHit - the earlier of the current value in log and the minimum value in feed for that Id
        LastHit  - the later of the current value in log and the maximum value in feed for that Id

    It should be noted that many of the Ids in feed are already in log. The simple thing that worked is to create a temporary table and insert into it the union of both, as in:

        SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit, COUNT(*) AS Count
        FROM feed GROUP BY Id
        UNION ALL
        SELECT Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I do a select that aggregates MIN(FirstHit), MAX(LastHit) and SUM(Count):

        SELECT Id, MIN(FirstHit), MAX(LastHit), SUM(Count) FROM @temp GROUP BY Id;

    and that gives me the end result. I could then delete everything from log and replace it with everything in temp, or craft an update for the common records and an insert for the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this, perhaps updating log in place?
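
    One way to fold the whole thing into a single statement is an aggregate upsert; a minimal sketch of the idea (Python with SQLite purely for illustration, using SQLite 3.24+ upsert syntax; SQL Server would express the same thing with MERGE):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE log  (id INTEGER PRIMARY KEY, cnt INTEGER, firsthit INTEGER, lasthit INTEGER);
            CREATE TABLE feed (id INTEGER, ts INTEGER);
            INSERT INTO log  VALUES (1, 5, 100, 200);
            INSERT INTO feed VALUES (1, 50), (1, 300), (2, 400);
        """)
        # Aggregate feed per id, then insert-or-update log in one statement
        db.execute("""
            INSERT INTO log (id, cnt, firsthit, lasthit)
            SELECT id, COUNT(*), MIN(ts), MAX(ts) FROM feed GROUP BY id
            ON CONFLICT(id) DO UPDATE SET
                cnt      = cnt + excluded.cnt,
                firsthit = MIN(firsthit, excluded.firsthit),
                lasthit  = MAX(lasthit, excluded.lasthit)
        """)
        print(db.execute("SELECT * FROM log ORDER BY id").fetchall())
        # [(1, 7, 50, 300), (2, 1, 400, 400)]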

  • JVM process resident set size "equals" max heap size, not current heap size

    - by Volune
    After a bit of reading about JVM memory (here, here, here, others I forgot...), I was expecting the resident set size of my Java process to be roughly equal to the current heap capacity. That's not what the numbers are saying; it seems to be roughly equal to the max heap capacity.

    Resident set size:

        # echo 0 $(cat /proc/1/smaps | grep Rss | awk '{print $2}' | sed 's#^#+#') | bc
        11507912
        # ps -C java -O rss | gawk '{ count ++; sum += $2 }; END {count --; print "Number of processes =",count; print "Memory usage per process =",sum/1024/count, "MB"; print "Total memory usage =", sum/1024, "MB" ;};'
        Number of processes = 1
        Memory usage per process = 11237.8 MB
        Total memory usage = 11237.8 MB

    Java heap:

        # jmap -heap 1
        Attaching to process ID 1, please wait...
        Debugger attached successfully.
        Server compiler detected.
        JVM version is 24.55-b03

        using thread-local object allocation.
        Garbage-First (G1) GC with 18 thread(s)

        Heap Configuration:
           MinHeapFreeRatio = 10
           MaxHeapFreeRatio = 20
           MaxHeapSize      = 10737418240 (10240.0MB)
           NewSize          = 1363144 (1.2999954223632812MB)
           MaxNewSize       = 17592186044415 MB
           OldSize          = 5452592 (5.1999969482421875MB)
           NewRatio         = 2
           SurvivorRatio    = 8
           PermSize         = 20971520 (20.0MB)
           MaxPermSize      = 85983232 (82.0MB)
           G1HeapRegionSize = 2097152 (2.0MB)

        Heap Usage:
        G1 Heap:
           regions  = 2560
           capacity = 5368709120 (5120.0MB)
           used     = 1672045416 (1594.586769104004MB)
           free     = 3696663704 (3525.413230895996MB)
           31.144272834062576% used
        G1 Young Generation:
        Eden Space:
           regions  = 627
           capacity = 3279945728 (3128.0MB)
           used     = 1314914304 (1254.0MB)
           free     = 1965031424 (1874.0MB)
           40.089514066496164% used
        Survivor Space:
           regions  = 49
           capacity = 102760448 (98.0MB)
           used     = 102760448 (98.0MB)
           free     = 0 (0.0MB)
           100.0% used
        G1 Old Generation:
           regions  = 147
           capacity = 1986002944 (1894.0MB)
           used     = 252273512 (240.5867691040039MB)
           free     = 1733729432 (1653.413230895996MB)
           12.702574926293766% used
        Perm Generation:
           capacity = 39845888 (38.0MB)
           used     = 38884120 (37.082786560058594MB)
           free     = 961768 (0.9172134399414062MB)
           97.58628042120682% used

        14654 interned Strings occupying 2188928 bytes.

    Are my expectations wrong? What should I expect? I need the heap to be able to grow during spikes (to avoid very slow full GCs), but I would like the resident set size to be as low as possible the rest of the time, to benefit the other processes running on the server. Is there a better way to achieve that?

    Environment: Linux 3.13.0-32-generic x86_64, java version "1.7.0_55", running in Docker version 1.1.2. Java is running Elasticsearch 1.2.0:

        /usr/bin/java -Xms5g -Xmx10g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=20 -Xss256k -Djava.awt.headless=true -XX:+UseG1GC -XX:MaxGCPauseMillis=350 -XX:InitiatingHeapOccupancyPercent=45 -XX:+AggressiveOpts -XX:+UseCompressedOops -XX:-OmitStackTraceInFastThrow -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -Xloggc:/opt/elasticsearch/logs/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/elasticsearch/logs/heapdump.hprof -XX:ErrorFile=/opt/elasticsearch/logs/hs_err.log -Des.logger.port=99999 -Des.logger.host=999.999.999.999 -Delasticsearch -Des.foreground=yes -Des.path.home=/opt/elasticsearch -cp :/opt/elasticsearch/lib/elasticsearch-1.2.0.jar:/opt/elasticsearch/lib/*:/opt/elasticsearch/lib/sigar/* org.elasticsearch.bootstrap.Elasticsearch

    There are actually 5 Elasticsearch nodes, each in a different Docker container. All have about the same memory usage. Some stats about the index: size 9.71Gi (19.4Gi), docs 3,925,398 (4,052,694).

  • All internet requests in Windows time out

    - by Brandon
    So, I've run into a very strange problem with my home wireless network. Previously, at seemingly random times, the router seemed to disconnect all wireless hosts and cause all of the wired hosts to have a "limited connection" according to Windows. To fix this, I had to unplug all of the wired hosts from the router, unplug the modem from the router, and power-cycle the router. This seemed to solve the problem for a while, until the exact same thing happened a day later and I had to go through the same process again.

    That's where I noticed something weird happening. There was one wireless host (a Windows Vista laptop) that seemed to cause the router to disconnect the other hosts whenever it connected. When this happened, only that laptop was able to use the router's wireless. I disconnected it from the wireless (by disabling the wireless adapter), then reconnected it (by re-enabling it), and now it, like the other hosts, couldn't connect. I've never seen anything this strange happen on our network before. So I restored the router to factory settings, and the problem seems to have vanished, except for one crucial issue.

    There's another host (a Windows 7 laptop) that was perfectly able to connect before all of the router issues, and even in between the crashing and power-cycling events, but it now says it's connected and able to reach the Internet, yet all requests time out. In any browser I've tried, the tab says "connecting to [site]..." for a solid minute and then tells me the request timed out. When I try to ping google.com in cmd it also says "request timed out". In frustration, I booted into a dual-boot Ubuntu installation on the Windows 7 host and, to my surprise, the connection works fine; Ubuntu is where I am now typing this rather long question.

    I haven't looked through the event log in Windows, but I will post anything I find in an edit. I haven't tried connecting (in Windows 7) to any other wireless network. The fact that it works in Ubuntu suggests it's Windows and not the router, but I didn't change any wireless settings in Windows between being able to reach the Internet and not.

    Does anyone have any clue what could have happened? I'm open to buying another router, as this one is almost a year old :) but I would like to know what's going on here. Thanks in advance!

    P.S. Sorry for how long my question is; I'm a little anxious (:

  • Crazy python behaviour

    - by Martin
    I have a little piece of Python code in the server script for my website which looks a little bit like this:

        console.append([str(x) for x in data])
        console.append(str(max(data)))

    Quite simple, you might think; however, the output is this:

        ['3', '12', '3']
        3

    For some reason Python thinks 3 is the max of [3, 12, 3]! So am I doing something wrong, or is this misbehaviour on the part of Python?
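
    For reference, this is string comparison at work: the list holds strings, and max compares them lexicographically, so '3' > '12'. A minimal sketch of the behaviour and the usual fixes:

        data = ['3', '12', '3']
        print(max(data))                  # '3'  -- lexicographic comparison of strings
        print(max(data, key=int))         # '12' -- compare numerically, keep the string
        print(max(int(x) for x in data))  # 12   -- or convert to ints outright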

  • Apache doesn't run multiple requests

    - by Reinderien
    I'm currently running this simple Python CGI script to test rudimentary IPC:

        #!/usr/bin/python -u
        import cgi, errno, fcntl, os, os.path, sys, time

        print("""Content-Type: text/html; charset=utf-8

        <!doctype html>
        <html lang="en">
        <head>
            <meta charset="utf-8" />
            <title>IPC test</title>
        </head>
        <body>
        """)

        ftempname = '/tmp/ipc-messages'
        master = not os.path.exists(ftempname)
        if master:
            fmode = 'w'
        else:
            fmode = 'r'

        print('<p>Opening file</p>')
        sys.stdout.flush()
        ftemp = open(ftempname, fmode)
        print('<p>File opened</p>')

        if master:
            print('<p>Operating as master</p>')
            sys.stdout.flush()
            for i in range(10):
                print('<p>' + str(i) + '</p>')
                sys.stdout.flush()
                time.sleep(1)
            ftemp.close()
            os.remove(ftempname)
        else:
            print('<p>Operating as a slave</p>')
            ftemp.close()

        print("""
        </body>
        </html>""")

    The 'server-push' portion works; that is, for the first request I do see piecewise updates. However, while the first request is being serviced, subsequent requests are not started; they only start after the first request has finished. Any ideas on why, and how to fix it?

    Edit: I see the same non-concurrent behaviour with vanilla PHP, running this:

        <!doctype html>
        <html lang="en"> <!-- $Id: $-->
        <head>
            <meta charset="utf-8" />
            <title>IPC test</title>
        </head>
        <body>
        <p>
        <?php
        function echofl($str)
        {
            echo $str . "</b>\n";
            ob_flush();
            flush();
        }

        define('tempfn', '/tmp/emailsync');

        if (file_exists(tempfn))
            $perms = 'r+';
        else
            $perms = 'w';

        assert($fsync = fopen(tempfn, $perms));
        assert(chmod(tempfn, 0600));

        if (!flock($fsync, LOCK_EX | LOCK_NB, $wouldblock)) {
            assert($wouldblock);
            $master = false;
        }
        else
            $master = true;

        if ($master) {
            echofl('Running as master.');
            assert(fwrite($fsync, 'content') != false);
            assert(sleep(5) == 0);
            assert(flock($fsync, LOCK_UN));
        }
        else {
            echofl('Running as slave.');
            echofl(fgets($fsync));
        }

        assert(fclose($fsync));
        echofl('Done.');
        ?>
        </p>
        </body>
        </html>

  • Incorrect gzipping of http requests, can't find who's doing it

    - by Ned Batchelder
    We're seeing some very strange mangling of HTTP responses, and we can't figure out what is doing it. We have an app server handling JSON requests. Occasionally the response is returned gzipped, but with incorrect headers that prevent the browser from interpreting it correctly. The problem is intermittent and changes behaviour over time. Yesterday morning it seemed to fail 50% of the time and, in fact, seemed tied to one of our two load-balanced servers. Later in the afternoon it was failing only 20 times out of 1000, and didn't correlate with an app server.

    The two app servers are running Apache 2.2 with mod_wsgi and a Django app stack. They have identical Apache configs and source trees, and even identical packages installed on Red Hat. There's a hardware load balancer in front; I don't know the make or model. Akamai is also part of the food chain, though we removed Akamai and still had the problem.

    Here's a good request and response:

        * Connected to example.com (97.7.79.129) port 80 (#0)
        > POST /claim/ HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        > Referer: http://example.com/apps/
        > Accept-Encoding: gzip,deflate
        > Content-Length: 29
        > Content-Type: application/x-www-form-urlencoded
        } [data not shown]
        < HTTP/1.1 200 OK
        < Server: Apache/2
        < Content-Language: en-us
        < Content-Encoding: identity
        < Content-Length: 47
        < Content-Type: application/x-javascript
        < Connection: keep-alive
        < Vary: Accept-Encoding
        { [data not shown]
        * Connection #0 to host example.com left intact
        * Closing connection #0
        {"msg": "", "status": "OK", "printer_name": ""}

    And here's a bad one:

        * Connected to example.com (97.7.79.129) port 80 (#0)
        > POST /claim/ HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        > Referer: http://example.com/apps/
        > Accept-Encoding: gzip,deflate
        > Content-Length: 29
        > Content-Type: application/x-www-form-urlencoded
        } [data not shown]
        < HTTP/1.1 200 OK
        < Server: Apache/2
        < Content-Language: en-us
        < Content-Encoding: identity
        < Content-Type: application/x-javascript
        < Content-Encoding: gzip
        < Content-Length: 59
        < Connection: keep-alive
        < Vary: Accept-Encoding
        < X-N: S
        { [data not shown]
        * Connection #0 to host example.com left intact
        * Closing connection #0
        ?V?-NW?RPR?QP*.I,)-???A??????????T??Z? ??/

    There are two things to notice about the bad response: it has two Content-Encoding headers, and the browsers seem to use the first. So they see an identity encoding header and gzipped content, and they can't interpret the response. The bad response also has an extra "X-N: S" header. Perhaps if I could find out what intermediary adds "X-N: S" headers to responses, I could track down the culprit...
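
    A small repro loop can make an intermittent failure like this easier to pin down; a Python sketch (the host, path and body are stand-ins for the real ones) that flags responses carrying duplicate Content-Encoding headers or the mystery X-N header:

        import http.client

        for i in range(100):
            conn = http.client.HTTPConnection("example.com")  # stand-in host
            conn.request("POST", "/claim/", body="data=test",
                         headers={"Accept-Encoding": "gzip,deflate",
                                  "Content-Type": "application/x-www-form-urlencoded"})
            resp = conn.getresponse()
            resp.read()  # drain the body so the connection closes cleanly
            headers = resp.getheaders()  # preserves duplicate header lines
            encodings = [v for k, v in headers if k.lower() == "content-encoding"]
            if len(encodings) > 1 or any(k.lower() == "x-n" for k, _ in headers):
                print(i, headers)
            conn.close()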

  • iptables - quick safety eval & limit max conns over time

    - by Peter Hanneman
    Working on locking down a *nix server box with some fancy iptables (v1.4.4) rules. I'm approaching the matter with a "paranoid, everyone's out to get me" style, not necessarily because I expect the box to be a hacker magnet, but rather for the sake of learning iptables and *nix security more thoroughly. Everything is well commented, so if anyone sees something I missed please let me know! The *nat table's "--to-ports" point to the only ports with actively listening services (aside from pings). Layer 2 apps listen exclusively on chmod'ed sockets bridged by one of the layer 1 daemons. Layers 3+ inherit from layer 2 in a similar fashion. The two lines giving me grief are commented out at the very bottom of the *filter rules. The first line runs fine, but it's all or nothing. :) Many thanks, Peter H.

        *nat
        #Flush previous rules, chains and counters for the 'nat' table
        -F
        -X
        -Z
        #Redirect traffic to alternate internal ports
        -I PREROUTING --src 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 8080
        -I PREROUTING --src 0/0 -p tcp --dport 443 -j REDIRECT --to-ports 8443
        -I PREROUTING --src 0/0 -p udp --dport 53 -j REDIRECT --to-ports 8053
        -I PREROUTING --src 0/0 -p tcp --dport 9022 -j REDIRECT --to-ports 8022
        COMMIT

        *filter
        #Flush previous settings, chains and counters for the 'filter' table
        -F
        -X
        -Z
        #Set default behavior for all connections and protocols
        -P INPUT DROP
        -P OUTPUT DROP
        -A FORWARD -j DROP
        #Only accept loopback traffic originating from the local NIC
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j DROP
        #Accept all outgoing non-fragmented traffic having a valid state
        -A OUTPUT ! -f -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        #Drop fragmented incoming packets (Not always malicious - acceptable for use now)
        -A INPUT -f -j DROP
        #Allow ping requests rate limited to one per second (burst ensures reliable results for high latency connections)
        -A INPUT -p icmp --icmp-type 8 -m limit --limit 1/sec --limit-burst 2 -j ACCEPT
        #Declaration of custom chains
        -N INSPECT_TCP_FLAGS
        -N INSPECT_STATE
        -N INSPECT
        #Drop incoming tcp connections with invalid tcp-flags
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL ALL -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL NONE -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,FIN FIN -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,PSH PSH -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,URG URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
        #Accept incoming traffic having either an established or related state
        -A INSPECT_STATE -m state --state ESTABLISHED,RELATED -j ACCEPT
        #Drop new incoming tcp connections if they aren't SYN packets
        -A INSPECT_STATE -m state --state NEW -p tcp ! --syn -j DROP
        #Drop incoming traffic with invalid states
        -A INSPECT_STATE -m state --state INVALID -j DROP
        #INSPECT chain definition
        -A INSPECT -p tcp -j INSPECT_TCP_FLAGS
        -A INSPECT -j INSPECT_STATE
        #Route incoming traffic through the INSPECT chain
        -A INPUT -j INSPECT
        #Accept redirected HTTP traffic via HA reverse proxy
        -A INPUT -p tcp --dport 8080 -j ACCEPT
        #Accept redirected HTTPS traffic via STUNNEL SSH gateway (As well as tunneled HTTPS traffic destine for other services)
        -A INPUT -p tcp --dport 8443 -j ACCEPT
        #Accept redirected DNS traffic for NSD authoritative nameserver
        -A INPUT -p udp --dport 8053 -j ACCEPT
        #Accept redirected SSH traffic for OpenSSH server
        #Temp solution:
        -A INPUT -p tcp --dport 8022 -j ACCEPT
        #Ideal solution:
        #Limit new ssh connections to max 10 per 10 minutes while allowing an "unlimited" (or better reasonably limited?) number of established connections.
        #-A INPUT -p tcp --dport 8022 --state NEW,ESTABLISHED -m recent --set -j ACCEPT
        #-A INPUT -p tcp --dport 8022 --state NEW -m recent --update --seconds 600 --hitcount 11 -j DROP
        COMMIT

        *mangle
        #Flush previous rules, chains and counters in the 'mangle' table
        -F
        -X
        -Z
        COMMIT

  • Excessive denied requests for port 58322 in syslog

    - by Nathan C.
    My iptables setup blocks all unneeded ports as it should, but I'm checking my syslog because of these random but all-too-frequent apache2 crashes, and I noticed a lot of requests such as the ones below. In all the archived syslogs that I have, they are present from different IP addresses. There is a similar question with an accepted answer here: What service uses UDP port 60059?

        Jun 4 06:49:27 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=218.7.74.50 DST=MY.SERVER.IP.HERE LEN=129 TOS=0x00 PREC=0x00 TTL=115 ID=27636 PROTO=UDP SPT=9520 DPT=58322 LEN=109
        Jun 4 06:49:31 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=95.160.226.177 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=116 ID=31468 PROTO=UDP SPT=47642 DPT=58322 LEN=111
        Jun 4 06:49:54 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=78.137.36.10 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=118 ID=21872 PROTO=UDP SPT=57872 DPT=58322 LEN=111
        Jun 4 06:50:14 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=111.253.217.11 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=116 ID=28882 PROTO=UDP SPT=51826 DPT=58322 LEN=111
        Jun 4 06:51:02 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=189.45.114.173 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x16 PREC=0x00 TTL=113 ID=19985 PROTO=UDP SPT=41087 DPT=58322 LEN=111
        Jun 4 06:51:09 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=87.89.202.28 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=116 ID=7874 PROTO=UDP SPT=17524 DPT=58322 LEN=111
        Jun 4 06:51:20 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=24.44.124.35 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=118 ID=12978 PROTO=UDP SPT=45596 DPT=58322 LEN=111
        Jun 4 06:51:22 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=81.174.48.236 DST=MY.SERVER.IP.HERE LEN=93 TOS=0x00 PREC=0x00 TTL=48 ID=0 DF PROTO=UDP SPT=21352 DPT=58322 LEN=73
        Jun 4 06:51:23 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=124.107.61.84 DST=MY.SERVER.IP.HERE LEN=131 TOS=0x00 PREC=0x00 TTL=114 ID=13038 PROTO=UDP SPT=14357 DPT=58322 LEN=111
        Jun 4 06:51:30 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=88.8.23.200 DST=MY.SERVER.IP.HERE LEN=123 TOS=0x00 PREC=0x00 TTL=117 ID=21062 PROTO=UDP SPT=4291 DPT=58322 LEN=103
        Jun 4 06:51:54 HOSTNAME kernel: iptables denied: IN=eth0 OUT= MAC=fe:fd:ad:ff:dd:95:c8:4c:75:f5:d6:3f:08:00 SRC=80.202.244.234 DST=MY.SERVER.IP.HERE LEN=129 TOS=0x00 PREC=0x00 TTL=114 ID=339 PROTO=UDP SPT=14020 DPT=58322 LEN=109

    I'm not overly experienced with server configuration and debugging, so I only just installed logcheck after reading that previous question. I guess my question is: what steps should I take after reading this log info to 1) further protect myself, 2) understand whether this could be causing any other problems with my VPS, and 3) use this data to help others?
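
    As a first step, tallying the denied lines gives a quick overview of who is knocking and where; a minimal Python sketch (the log path is an assumption) counting source addresses and destination ports:

        import re
        from collections import Counter

        pat = re.compile(r"iptables denied: .*?SRC=(\S+) .*?DPT=(\d+)")
        sources, ports = Counter(), Counter()

        with open("/var/log/syslog") as f:  # path is an assumption
            for line in f:
                m = pat.search(line)
                if m:
                    sources[m.group(1)] += 1
                    ports[m.group(2)] += 1

        print(sources.most_common(10))
        print(ports.most_common(10))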

  • BIND no longer responds to AXFR Requests

    - by djsumdog
    Recently we moved our primary external DNS server. It has three caching DNS slaves in front of it, provided by our ISP. They've told us they've started getting "access denied" responses when doing zone transfers (AXFR). If I add my own IPs to the allow-transfer list, I also get a failed transfer when using dig with the AXFR argument. Here is what my BIND configuration looks like:

        options {
            directory "/var/lib/named";
            dump-file "/var/log/named_dump.db";
            zone-statistics yes;
            statistics-file "/var/log/named.stats";
            listen-on-v6 { any; };
            notify-source 10.19.0.68 port 53;
            querylog yes;
            notify yes;
            allow-transfer {
                127.0.0.1; //localhost
                1.1.1.1;   //public dns slave 1
                2.2.2.2;   //public dns slave 2
                3.3.3.3;   //public dns slave 3
            };
            also-notify {
                1.1.1.1; //public dns slave 1
                2.2.2.2; //public dns slave 2
                3.3.3.3; //public dns slave 3
            };
            include "/etc/named.d/forwarders.conf";
        };

        logging {
            channel simple_log {
                file "/var/log/bind.log" versions 10 size 3m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default { simple_log; };
            channel log_zone_transfers {
                file "/var/log/axfr.log" versions 10 size 3m;
                print-time yes;
                print-category yes;
                print-severity yes;
            };
            category xfer-out { log_zone_transfers; };
            channel log_notify {
                file "/var/log/notify.log" versions 10 size 3m;
                print-time yes;
                print-category yes;
                print-severity yes;
            };
            category notify { log_notify; };
            channel queries {
                file "/var/log/queries.log" versions 10 size 30m;
                print-time yes;
                severity info;
                print-category yes;
                print-severity yes;
            };
            category queries { queries; };
        };

        zone "." in {
            type hint;
            file "root.hint";
        };
        zone "localhost" in {
            type master;
            file "localhost.zone";
        };
        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "127.0.0.zone";
        };
        include "/etc/named.conf.include";

        zone "example.net " {
            type master;
            file "/var/lib/named/master/example.net.hosts";
        };
        zone "example.com " {
            type master;
            file "/var/lib/named/master/example.com.hosts";
        };
        ## -- other master files --

    And the errors in the xfer log look like the following:

        29-Oct-2012 14:20:02.806 xfer-out: info: client 1.1.1.1#59069: bad zone transfer request: 'example.com./IN': non-authoritative zone (NOTAUTH)

    I've tried adding allow-transfer parameters directly on the zone files and still get failed transfers. Any idea what I'm doing wrong?

  • guvcview recording video and audio out of synchronisation in Ubuntu 10.10

    - by SIJAR
    I finally got Guvcview, a great piece of software for Logitech webcams, and it does all the stuff one wants out of it. But I'm not satisfied with the video recording: video and audio are out of synchronisation, and the video seems to be in slow motion. Please help me tweak it so I can get a good video recording with the webcam. Below is the log of Guvcview:
    ------------------------------------------------------------------------------- guvcview 1.4.1 video_device: /dev/video0 vid_sleep: 0 cap_meth: 1 resolution: 640 x 480 windowsize: 1024 x 715 vert pane: 578 spin behavior: 0 mode: mjpg fps: 1/25 Display Fps: 0 bpp: 0 hwaccel: 1 avi_format: 4 sound: 1 sound Device: 4 sound samp rate: 0 sound Channels: 0 Sound delay: 0 nanosec Sound Format: 85 Pan Step: 2 degrees Tilt Step: 2 degrees Video Filter Flags: 0 image inc: 0 profile(default):/home/sijar/default.gpfl starting portaudio... bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) bt_audio_service_open: connect() failed: Connection refused (111) Cannot connect to server socket err = No such file or directory Cannot connect to server socket jack server is not running or cannot be started language catalog= dir:/usr/share/locale type:UTF-8 lang:en_US.utf8 cat:guvcview.mo mjpg: setting format to 1196444237 capture method = 1 video device: /dev/video0 libv4lconvert: warning more framesizes then I can handle! libv4lconvert: warning more framesizes then I can handle! /dev/video0 - device 1 libv4lconvert: warning more framesizes then I can handle! libv4lconvert: warning more framesizes then I can handle! Init. UVC Camera (046d:0825) (location: usb-0000:00:1d.7-5) { pixelformat = 'YUYV', description = 'YUV 4:2:2 (YUYV)' } { discrete: width = 640, height = 480 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 160, height = 120 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 176, height = 144 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 176 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 320, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 352, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 432, height = 240 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 544, height = 288 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, { discrete: width = 640, height = 360 } Time interval between frame: 1/30, 1/25, 1/20, 1/15, 1/10, 1/5, ... repeats a couple of times ...
vid:046d pid:0825 driver:uvcvideo Adding control for Pan (relative) UVCIOC_CTRL_ADD - Error: Operation not permitted checking format: 1196444237 VIDIOC_G_COMP:: Invalid argument compression control not supported fps is set to 1/25 drawing controls control[0]: 0x980900 Brightness, 0:255:1, default 128 control[0]: 0x980901 Contrast, 0:255:1, default 32 control[0]: 0x980902 Saturation, 0:255:1, default 32 control[0]: 0x98090c White Balance Temperature, Auto, 0:1:1, default 1 control[0]: 0x980913 Gain, 0:255:1, default 0 control[0]: 0x980918 Power Line Frequency, 0:2:1, default 2 control[0]: 0x98091a White Balance Temperature, 0:10000:10, default 4000 control[0]: 0x98091b Sharpness, 0:255:1, default 24 control[0]: 0x98091c Backlight Compensation, 0:1:1, default 1 control[0]: 0x9a0901 Exposure, Auto, 0:3:1, default 3 control[0]: 0x9a0902 Exposure (Absolute), 1:10000:1, default 166 control[0]: 0x9a0903 Exposure, Auto Priority, 0:1:1, default 0 resolutions of format(2) = 19 frame rates of 1º resolution=6 Def. Res: 0 numb. fps:6 --------------------------------------- device #0 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 (hw:0,0) Host API = ALSA Max inputs = 2, Max outputs = 2 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #1 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - MIC ADC (hw:0,1) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #2 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - MIC2 ADC (hw:0,2) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #3 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - ADC2 (hw:0,3) Host API = ALSA Max inputs = 2, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #4 Name = Intel 82801DB-ICH4: Intel 82801DB-ICH4 - IEC958 (hw:0,4) Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #5 Name = USB Device 0x46d:0x825: USB Audio (hw:1,0) Host API = ALSA Max inputs = 1, Max outputs = 0 Def. low input latency = 0.011 Def. low output latency = -1.000 Def. high input latency = 0.043 Def. high output latency = -1.000 Def. sample rate = 48000.00 --------------------------------------- device #6 Name = front Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.012 Def. high input latency = -1.000 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #7 Name = iec958 Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. 
sample rate = 48000.00 --------------------------------------- device #8 Name = spdif Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.011 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #9 Name = pulse Host API = ALSA Max inputs = 32, Max outputs = 32 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 --------------------------------------- device #10 Name = dmix Host API = ALSA Max inputs = 0, Max outputs = 2 Def. low input latency = -1.000 Def. low output latency = 0.043 Def. high input latency = -1.000 Def. high output latency = 0.043 Def. sample rate = 48000.00 --------------------------------------- device #11 [ Default Input, Default Output ] Name = default Host API = ALSA Max inputs = 32, Max outputs = 32 Def. low input latency = 0.012 Def. low output latency = 0.012 Def. high input latency = 0.046 Def. high output latency = 0.046 Def. sample rate = 44100.00 ---------------------------------------------- SampleRate:0 Channels:0 Video driver: x11 A window manager is available VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_CTRL for user class controls control(0x0098091a) "White Balance Temperature" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_EXT_CTRLS on single controls for class: 0x009a0000 control(0x009a0902) "Exposure (Absolute)" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_CTRL for user class controls control(0x0098091a) "White Balance Temperature" failed to set (error -1) VIDIOC_S_EXT_CTRLS for multiple controls failed (error -1) using VIDIOC_S_EXT_CTRLS on single controls for class: 0x009a0000 control(0x009a0902) "Exposure (Absolute)" failed to set (error -1) Cap Video toggled: 1 (/home/sijar/Videos/Webcam) 25371756K bytes free on a total of 39908968K (used: 36 %) treshold=51200K using audio codec: 0x0055 Audio frame size is 1152 samples for selected codec IO thread started...OK [libx264 @ 0x8cbd8b0]using cpu capabilities: MMX2 SSE2 Cache64 [libx264 @ 0x8cbd8b0]profile Baseline, level 3.0 [libx264 @ 0x8cbd8b0]non-strictly-monotonic PTS shift sound by -9 ms shift sound by -9 ms shift sound by -9 ms AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... AUDIO: droping audio data (/home/sijar/Videos/Webcam) 25371748K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... Cap Video toggled: 0 Shuting Down IO Thread AUDIO: droping audio data stop= 4426644744000 start=4416533023000 VIDEO: 146 frames in 10111.000000 ms = 14.439719 fps Stoping audio stream Closing audio stream... 
close avi Last message repeated 145 times [libx264 @ 0x8cbd8b0]frame I:2 Avg QP:14.10 size: 24492 [libx264 @ 0x8cbd8b0]frame P:103 Avg QP:16.06 size: 20715 [libx264 @ 0x8cbd8b0]mb I I16..4: 48.4% 0.0% 51.6% [libx264 @ 0x8cbd8b0]mb P I16..4: 57.5% 0.0% 0.0% P16..4: 40.2% 0.0% 0.0% 0.0% 0.0% skip: 2.3% [libx264 @ 0x8cbd8b0]final ratefactor: 62.05 [libx264 @ 0x8cbd8b0]coded y,uvDC,uvAC intra: 79.7% 92.2% 68.4% inter: 62.4% 87.5% 48.0% [libx264 @ 0x8cbd8b0]i16 v,h,dc,p: 23% 17% 41% 19% [libx264 @ 0x8cbd8b0]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 30% 24% 26% 2% 5% 3% 3% 3% 4% [libx264 @ 0x8cbd8b0]i8c dc,h,v,p: 53% 20% 23% 4% [libx264 @ 0x8cbd8b0]ref P L0: 63.0% 37.0% [libx264 @ 0x8cbd8b0]kb/s:-0.00 total frames encoded: 0 total audio frames encoded: 0 IO thread finished...OK IO Thread finished enabling controls Cap Video toggled: 1 (/home/sijar/Videos/Webcam) 25379744K bytes free on a total of 39908968K (used: 36 %) treshold=51200K using audio codec: 0x0055 Audio frame size is 1152 samples for selected codec IO thread started...OK [libx264 @ 0x8cfba20]using cpu capabilities: MMX2 SSE2 Cache64 [libx264 @ 0x8cfba20]profile Baseline, level 3.0 [libx264 @ 0x8cfba20]non-strictly-monotonic PTS shift sound by -236 ms shift sound by -236 ms shift sound by -236 ms (/home/sijar/Videos/Webcam) 25377044K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25373408K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... (/home/sijar/Videos/Webcam) 25370696K bytes free on a total of 39908968K (used: 36 %) treshold=51200K AUDIO: droping audio data AUDIO: droping audio data AUDIO: droping audio data ... repeats a couple of times ... (/home/sijar/Videos/Webcam) 25367680K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25364052K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25360312K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25356628K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25352908K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25349316K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25345552K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25341828K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25338092K bytes free on a total of 39908968K (used: 36 %) treshold=51200K (/home/sijar/Videos/Webcam) 25334412K bytes free on a total of 39908968K (used: 36 %) treshold=51200K Cap Video toggled: 0 Shuting Down IO Thread stop= 4708817235000 start=4578624714000 VIDEO: 1604 frames in 130192.000000 ms = 12.320265 fps Stoping audio stream Closing audio stream... 
close avi Last message repeated 1603 times [libx264 @ 0x8cfba20]frame I:16 Avg QP:14.78 size: 42627 [libx264 @ 0x8cfba20]frame P:1547 Avg QP:16.44 size: 28599 [libx264 @ 0x8cfba20]mb I I16..4: 21.6% 0.0% 78.4% [libx264 @ 0x8cfba20]mb P I16..4: 28.1% 0.0% 0.0% P16..4: 70.5% 0.0% 0.0% 0.0% 0.0% skip: 1.4% [libx264 @ 0x8cfba20]final ratefactor: 88.17 [libx264 @ 0x8cfba20]coded y,uvDC,uvAC intra: 74.4% 95.8% 83.2% inter: 75.2% 94.6% 69.2% [libx264 @ 0x8cfba20]i16 v,h,dc,p: 27% 17% 40% 16% [libx264 @ 0x8cfba20]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 25% 25% 21% 3% 6% 4% 5% 4% 7% [libx264 @ 0x8cfba20]i8c dc,h,v,p: 61% 18% 18% 4% [libx264 @ 0x8cfba20]ref P L0: 64.0% 36.0% [libx264 @ 0x8cfba20]kb/s:-0.00 total frames encoded: 0 total audio frames encoded: 0 IO thread finished...OK IO Thread finished enabling controls Shuting Down Thread Thread terminated... cleaning Thread allocations: 100% SDL Quit Video Thread finished write /home/sijar/.guvcviewrc OK free audio mutex closed v4l2 strutures free controls free controls - vidState cleaned allocations - 100% Closing portaudio ...OK Closing GTK... OK

  • Good bug tracking with Sharepoint?

    - by torbengb
    At my place of work, it has been decided to move many processes to SharePoint. I'm now looking into how SharePoint can be used for bug tracking (à la Mantis, FogBugz, etc., but within SharePoint). Specifically, we're using a collaboration room and the solution must work inside that. I know that I can create lists using an "Issue tracker" template, but it lacks workflow, integrated correspondence (like FogBugz), and an audit log (any user can edit any field at any time, without it being noted anywhere). That's not sufficient, so I am looking for "bigger" solutions but haven't yet found anything at all.

    This question is similar but aims at helpdesk use; we aim at bug tracking and change requests to a system. I'm open to suggestions! As I'm not an administrator, I can't just grab a SharePoint component and install it for testing. I'm looking for experiences, documentation, white papers, screen shots; the actual downloadable will be relevant later. Ideally, some of these matters should be covered:

    Support for different ticket types (bug, feature, inquiry, internal task).
    Configurable workflow per ticket type, no fixed number of steps.
    Configurable read/write permissions per field and per workflow status.
    Configurable dashboard for managers with nice charts.
    Configurable email notifications.
    Correspondence à la FogBugz. (Challenge: we use Notes, not Exchange.)

  • ISA 2006 refuses VPN DHCP requests as spoofing

    - by Daniel
    I'm running ISA 2006 with PPTP VPN for my AD-controlled network. DHCP is located on the ISA server itself, and authentication is done by RADIUS (NPS) located on the DC. Right now my VPN clients can connect, access local DNS, and ping ISA, the DC, and other clients. Here's where it gets weird. I noticed that despite all this, ipconfig shows the following:

        PPP adapter North Horizon VPN:

           Connection-specific DNS Suffix  . :
           Description . . . . . . . . . . . : North Horizon VPN
           Physical Address. . . . . . . . . :
           DHCP Enabled. . . . . . . . . . . : No
           Autoconfiguration Enabled . . . . : Yes
           IPv4 Address. . . . . . . . . . . : 10.42.4.7(Preferred)
           Subnet Mask . . . . . . . . . . . : 255.255.255.255
           Default Gateway . . . . . . . . . : 0.0.0.0
           DNS Servers . . . . . . . . . . . : 10.42.1.10
           NetBIOS over Tcpip. . . . . . . . : Enabled

    So I went over and checked my ISA logs for both DHCP requests and replies, only to find out that my VPN clients are being denied because ISA thinks it's a spoof. Here's some relevant information from the log (the VPN subnet is 10.42.4.0/24):

        Client IP: 10.42.4.6
        Destination: 255.255.255.255:67
        Client Username: (blank)
        Protocol: DHCP (request)
        Action: Denied
        Connection Rule: (blank)
        Source Network: VPN Clients
        Destination Network: Local Host
        Result Code: 0xc0040014 FWX_E_FWE_SPOOFING_PACKET_DROPPED
        Network Interface: 10.42.4.11
        ---------------------------------------------------------
        Original Client IP: 10.42.4.6
        Destination: 10.42.1.1
        Client Username: (valid user)
        Protocol: PING
        Action: Initiated
        Connection Rule: Allow PING to ISA
        Source Network: VPN Clients
        Destination Network: Local Host
        Result Code: 0x0 ERROR_SUCCESS
        Network Interface: (blank)

    I wasn't sure what this 10.42.4.11 network interface was (it certainly wasn't something I had set up) until I saw it in Routing and Remote Access, under IP Routing, General, as an interface called "Internal" bound to the same IP address. I also noticed that since ISA takes blocks of 10 IP addresses from DHCP for VPN, it had reserved 10.42.4.2-11. I'm not sure if that means anything, though. Thanks for your help.

  • MySQL: SELECT highest column value when WHERE finds similar entries

    - by Ike
    My question is comparable to this one, but not quite the same. I have a database with a huge number of books, with different editions of some of the same book titles. I'm looking for an SQL statement giving me the highest edition number of each of the titles I'm selecting with a WHERE clause (to find specific book series). Here's what the table looks like:

        |id|title                    |edition|year|
        |--|-------------------------|-------|----|
        |01|Serie One Title One      |1      |2007|
        |02|Serie One Title One      |2      |2008|
        |03|Serie One Title One      |3      |2009|
        |04|Serie One Title Two      |1      |2001|
        |05|Serie One Title Three    |1      |2008|
        |06|Serie One Title Three    |2      |2009|
        |07|Serie One Title Three    |3      |2010|
        |08|Serie One Title Three    |4      |2011|
        |--|-------------------------|-------|----|

    The result I'm looking for is this:

        |id|title                    |edition|year|
        |--|-------------------------|-------|----|
        |03|Serie One Title One      |3      |2009|
        |04|Serie One Title Two      |1      |2001|
        |08|Serie One Title Three    |4      |2011|
        |--|-------------------------|-------|----|

    The closest I got was using this statement:

        select id, title, max(edition), max(year)
        from books
        where title like "serie one%"
        group by name;

    but it returns the highest edition and year and includes the first id it finds:

        |--|-----------------------|-------|----|
        |01|Serie One Title One    |3      |2009|
        |04|Serie One Title Two    |1      |2001|
        |05|Serie One Title Three  |4      |2011|
        |--|-----------------------|-------|----|

    This fancy join also comes close, but doesn't give the right result:

        select b.id, b.title, b.edition, b.year
        from books b
        inner join (select name, max(edition) as maxedition
                    from books
                    group by title) g
                on b.edition = g.maxedition
        where b.title like "serie one%"
        group by title;

    Using this I'm getting unique titles, but mostly old editions.
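
    For comparison, the join above only matches on edition, so any row whose edition happens to equal some other title's maximum slips through; joining on the title together with its grouped maximum is one common fix. A runnable sketch of that pattern (Python with SQLite purely for illustration):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE books (id INTEGER, title TEXT, edition INTEGER, year INTEGER);
            INSERT INTO books VALUES
                (1, 'Serie One Title One',   1, 2007),
                (2, 'Serie One Title One',   2, 2008),
                (3, 'Serie One Title One',   3, 2009),
                (4, 'Serie One Title Two',   1, 2001),
                (5, 'Serie One Title Three', 1, 2008),
                (6, 'Serie One Title Three', 2, 2009),
                (7, 'Serie One Title Three', 3, 2010),
                (8, 'Serie One Title Three', 4, 2011);
        """)
        rows = db.execute("""
            SELECT b.id, b.title, b.edition, b.year
            FROM books b
            JOIN (SELECT title, MAX(edition) AS maxedition
                  FROM books
                  WHERE title LIKE 'Serie One%'
                  GROUP BY title) g
              ON b.title = g.title AND b.edition = g.maxedition
        """).fetchall()
        for row in rows:
            print(row)  # ids 3, 4 and 8: the highest edition of each title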

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous, unrelated request has finished. The following code is only broken in this way on Nginx; it runs perfectly on Apache. This call starts a background process and waits for it to complete so it can display the final result:

        $.ajax({
            type: 'GET',
            async: true,
            url: $(this).data('route'),
            data: $('input[name=data]').val(),
            dataType: 'json',
            success: function (data) { /* do stuff */ },
            error: function (data) { /* handle errors */ }
        });

    The below is called after the above; on Apache it takes about 100ms to execute and repeats itself, showing progress for the data being written in the background:

        checkStatusInterval = setInterval(function () {
            $.ajax({
                type: 'GET',
                async: false,
                cache: false,
                url: '/process-status?process=' + currentElement.attr('id'),
                dataType: 'json',
                success: function (data) { /* update progress bar and status message */ }
            });
        }, 1000);

    Unfortunately, when this script is run against nginx, the progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change async to true in the polling call, one executes every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file:

        #user nobody;
        worker_processes 1;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        #error_log logs/error.log info;
        #pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 64;

            # configure temporary paths
            # nginx is started with param -p, setting nginx path to serverpack installdir
            fastcgi_temp_path temp/fastcgi;
            uwsgi_temp_path temp/uwsgi;
            scgi_temp_path temp/scgi;
            client_body_temp_path temp/client-body 1 2;
            proxy_temp_path temp/proxy;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            #access_log logs/access.log main;

            # Sendfile copies data between one FD and another from within the kernel.
            # More efficient than read() + write(), since that requires transferring data to and from user space.
            sendfile on;

            # tcp_nopush causes nginx to attempt to send its HTTP response head in one packet,
            # instead of using partial frames. This is useful for prepending headers before calling sendfile,
            # or for throughput optimization.
            tcp_nopush on;

            # Don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time.
            tcp_nodelay on;

            types_hash_max_size 2048;

            # Timeout for keep-alive connections. Server will close connections after this time.
            keepalive_timeout 90;

            # Number of requests a client can make over the keep-alive connection. This is set high for testing.
            keepalive_requests 100000;

            # Allow the server to close the connection after a client stops responding. Frees up socket-associated memory.
            reset_timedout_connection on;

            # Send the client a "request timed out" if the body is not loaded by this time. Default 60.
            client_header_timeout 20;
            client_body_timeout 60;

            # If the client stops reading data, free up the stale client connection after this much time. Default 60.
            send_timeout 60;

            # Size limits
            client_body_buffer_size 64k;
            client_header_buffer_size 4k;
            client_max_body_size 8M;

            # FastCGI
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 120;
            fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value
            fastcgi_buffer_size 64k;
            fastcgi_buffers 4 64k;
            fastcgi_busy_buffers_size 128k;
            fastcgi_temp_file_write_size 128k;

            # Caches information about open FDs, frequently accessed files.
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # Turn on gzip output compression to save bandwidth.
            # http://wiki.nginx.org/HttpGzipModule
            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_proxied any;
            #gzip_proxied expired no-cache no-store private auth;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

            # show all files and folders
            autoindex on;

            server {
                # access from localhost only
                listen 127.0.0.1:80;
                server_name localhost;
                root www;

                # the following default "catch-all" configuration allows access to the server from outside.
                # please ensure your firewall allows access to tcp/port 80. check your "skype" config.
                # listen 80;
                # server_name _;

                log_not_found off;
                charset utf-8;
                access_log logs/access.log main;

                # handle files in the root path /www
                location / {
                    index index.php index.html index.htm;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                # error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root www;
                }

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
                location ~ \.php$ {
                    try_files $uri =404;
                    fastcgi_pass 127.0.0.1:9100;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }

                # add expire headers
                location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ {
                    expires 30d;
                }

                # deny access to .htaccess files (if Apache's document root concurs with nginx's one)
                # deny access to git & svn repositories
                location ~ /(\.ht|\.git|\.svn) {
                    deny all;
                }
            }

            # include config files of "enabled" domains
            include domains-enabled/*.conf;
        }

    Here is the enabled domain conf file:

        access_log off;
        access_log C:/server/www/test.dev/logs/access.log;
        error_log C:/server/www/test.dev/logs/error.log;

        # HTTP server
        server {
            listen 127.0.0.1:80;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
        }

        # HTTPS server
        server {
            listen 443 ssl;
            server_name test.dev;
            root C:/server/www/test.dev/public;
            index index.php;
            rewrite_log on;
            default_type application/octet-stream;
            #include /etc/nginx/mime.types;

            # Include common configurations.
            include domains-common/location.conf;
            include domains-common/ssl.conf;
        }

    Contents of ssl.conf:

        # OpenSSL for HTTPS connections.
        ssl on;
        ssl_certificate C:/server/bin/openssl/certs/cert.pem;
        ssl_certificate_key C:/server/bin/openssl/certs/cert.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_param HTTPS on;
            fastcgi_pass 127.0.0.1:9100;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    Contents of location.conf:

        # Remove trailing slash to please Laravel routing system.
        if (!-d $request_filename) {
            rewrite ^/(.+)/$ /$1 permanent;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

        # We don't need .ht files with nginx.
        location ~ /(\.ht|\.git|\.svn) {
            deny all;
        }

        # Added cache headers for images.
        location ~* \.(png|jpg|jpeg|gif)$ {
            expires 30d;
            log_not_found off;
        }

        # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks.
        location ~* \.(js|css)$ {
            expires 3h;
            log_not_found off;
        }

        # Add expire headers.
        location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ {
            expires 30d;
        }

        # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100
        location ~ \.php$ {
            try_files $uri /index.php =404;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9100;
        }

    Any ideas where this is going wrong?
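    One possibility worth ruling out - an assumption on my part, not something the question establishes: with every PHP request funnelled to the single FastCGI backend at 127.0.0.1:9100, requests serialize whenever only one PHP process is listening there, and even with several workers PHP's default file-based session handler takes an exclusive lock per session, so a long-running request blocks every poll that calls session_start() with the same cookie. Apache setups with per-request PHP processes often mask the first problem. A minimal PHP sketch of the session-lock fix (session_write_close() is a real PHP function; the endpoint and the sleep() stand-in are hypothetical):

        <?php
        // Hypothetical long-running endpoint behind $(this).data('route').
        session_start();   // acquires an exclusive lock on the session file

        $process = isset($_GET['process']) ? $_GET['process'] : 'demo';
        $_SESSION['process'] = $process;

        // Release the session lock BEFORE the slow work; otherwise every
        // /process-status poll sharing this session blocks right here.
        session_write_close();

        sleep(30);   // stand-in for the real background job

        header('Content-Type: application/json');
        echo json_encode(array('done' => true, 'process' => $process));

    If the status endpoint never touches the session, the next thing to check is how many PHP FastCGI workers are actually bound to port 9100.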

    Read the article

  • NSMutableURLRequest returns null on real device, while returning image on simulator

    - by Yanchi
    I was testing my app that I've been working on for the past 2 months. Basically it requests JSON that contains info about items. One field of the JSON is image_url. When I want to display this image, I need to download it from another server that needs additional credentials. So it goes like this - in my cellForRowAtIndexPath I'm doing:

        NSDictionary *aucdict = [jsonAukResults objectAtIndex:indexPath.row];
        NSURL *imageURL = [NSURL URLWithString:[aucdict objectForKey:@"img_url"]];
        NSString *authPString = [[[NSString stringWithFormat:@"login:password"]
                                  dataUsingEncoding:NSUTF8StringEncoding] base64EncodedString];
        NSString *verifPString = [NSString stringWithFormat:@"Image %@", authPString];
        NSMutableURLRequest *Prequest = [[NSMutableURLRequest alloc] initWithURL:imageURL];
        [Prequest setValue:verifPString forHTTPHeaderField:@"Authorization"];
        NSError *error = nil;
        NSURLResponse *resp = nil;
        NSData *picresult = [NSURLConnection sendSynchronousRequest:Prequest
                                                  returningResponse:&resp
                                                              error:&error];
        UIImage *imageLoad = [[UIImage alloc] initWithData:picresult];

    Now, I just obscured the credentials (they are not login:password :)). My problem is that right now I get 3 items, all 3 with images on the same server. I can get two of them with this code, no problem. The third one is always problematic: imageLoad is always (null). On the simulator everything works fine and I get all 3 pictures; on a real device I get an error. I passed the error and response pointers to NSURLConnection so I could debug better, and this is what I got in the error:

        Printing description of error:
        Error Domain=NSURLErrorDomain Code=-1202 "The certificate for this server is invalid.
        You might be connecting to a server that is pretending to be “server name” which could
        put your confidential information at risk." UserInfo=0x1e5a3080
        {NSErrorFailingURLStringKey=pictureLink.jpg,
         NSLocalizedRecoverySuggestion=Would you like to connect to the server anyway?,
         NSErrorFailingURLKey=pictureLink.jpg,
         NSLocalizedDescription=The certificate for this server is invalid. You might be
         connecting to a server that is pretending to be “server name” which could put your
         confidential information at risk.,
         NSUnderlyingError=0x1e5a30e0 "The certificate for this server is invalid. You might be
         connecting to a server that is pretending to be “server name” which could put your
         confidential information at risk.",
         NSURLErrorFailingURLPeerTrustErrorKey=}

    I don't use SSL, so I'm really confused about what could cause this error. Btw, everything worked fine until now (this is my initial screen, so it's been done for a good month and a half). Now that I've started to do the graphics, this problem popped up :(
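    Not something the post establishes, but a reading of the error: code -1202 is NSURLErrorServerCertificateUntrusted, which is only raised for an https URL (or a redirect to one), so the third img_url is most likely served over TLS with a self-signed or mismatched certificate that the simulator's trust store happens to accept and the device's does not. A debug-only sketch of how to accept such a certificate with the delegate-based NSURLConnection API (willSendRequestForAuthenticationChallenge: is the real delegate method; the host string is a placeholder, and shipping this would defeat TLS):

        // Debug only: trust the presented certificate for ONE known host.
        // Requires a delegate-based connection instead of sendSynchronousRequest:.
        - (void)connection:(NSURLConnection *)connection
                willSendRequestForAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
        {
            BOOL isServerTrust = [challenge.protectionSpace.authenticationMethod
                                     isEqualToString:NSURLAuthenticationMethodServerTrust];
            // Placeholder host - replace with the image server being debugged.
            BOOL isKnownHost = [challenge.protectionSpace.host
                                   isEqualToString:@"images.example.com"];
            if (isServerTrust && isKnownHost) {
                NSURLCredential *cred = [NSURLCredential
                    credentialForTrust:challenge.protectionSpace.serverTrust];
                [challenge.sender useCredential:cred forAuthenticationChallenge:challenge];
            } else {
                [challenge.sender continueWithoutCredentialForAuthenticationChallenge:challenge];
            }
        }

    The proper fix is a certificate that matches the real hostname (or installing the signing CA on the device), at which point the synchronous request above should start working unchanged.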

    Read the article
