Search Results

Search found 1098 results on 44 pages for 'precise'.

  • Problem with waitable timers in Windows (timeSetEvent and CreateTimerQueueTimer)

    - by MusiGenesis
    I need high-resolution (better than 1 millisecond) timing in my application. The waitable timers in Windows are (or can be made) accurate to the millisecond, but if I need a precise periodicity of, say, 35.7142857143 milliseconds, even a waitable timer with a 36 ms period will drift out of sync quickly.

    My "solution" to this problem (in ironic quotes because it's not working quite right) is to use a series of one-shot timers, where the expiration of each timer starts the next one. Normally a process like this would be subject to cumulative error over time, but in each timer callback I check the current time (with System.Diagnostics.Stopwatch) and use it to calculate what the period of the next timer needs to be (so if a timer happens to expire a little late, the next timer automagically gets a shorter period to compensate).

    This works as expected, except that after maybe 10-15 seconds the timer system seems to get bogged down, and a few timer callbacks here and there arrive anywhere from 25 to 100 milliseconds late. After a couple of seconds the problem goes away and everything runs smoothly again for 10-15 seconds, and then the stuttering returns.

    Since I'm using Stopwatch to set each timer period, I'm also using it to monitor the arrival times of the callbacks. During the smooth-running periods, most (maybe 95%) of the intervals are either 35 or 36 milliseconds, and no interval is ever more than 5 milliseconds away from the expected 35.7142857143. During the "glitchy" stretches, the distribution of intervals is very nearly identical, except that a very small number are unusually large (a couple more than 60 ms and one or two longer than 100 ms during maybe a 3-second stretch). This stuttering is very noticeable, and it's what I'm trying to fix, if possible.

    For the high-resolution timer, I was using the extremely antique timeSetEvent() multimedia timer from winmm.dll. In pursuit of this problem, I switched to CreateTimerQueueTimer (along with timeBeginPeriod to set the high resolution), but I'm seeing the same problem with both timer mechanisms. I've tried experimenting with the various flags for CreateTimerQueueTimer that determine which thread the timer runs on, but the stuttering appears no matter what.

    Is this just a fundamental problem with using timers in this way (i.e. using each one-shot timer to start the next)? If so, do I have any alternatives? One thing I was considering was to determine how many consecutive 1-millisecond-accuracy ticks would keep me within some arbitrary precision limit before I need to reset the timer. So, for example, if I wanted a 35.71428 ms period, I could let a 36 ms timer elapse 15 times before it was off by 5 milliseconds, then kill it and start a new one.
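
    A minimal sketch of the correction idea described above, in C#. StartOneShot is a placeholder for whichever one-shot timer API is in use (timeSetEvent, CreateTimerQueueTimer, ...), not a real function. Scheduling against an absolute ideal deadline, rather than against the measured length of the previous interval, keeps rounding errors from accumulating:

        using System;
        using System.Diagnostics;

        class DriftCompensatedTimer
        {
            const double PeriodMs = 250.0 / 7.0;           // 35.7142857143 ms
            readonly Stopwatch clock = Stopwatch.StartNew();
            readonly Action<double, Action> startOneShot;  // hypothetical one-shot timer wrapper
            readonly Action onTick;
            double nextDeadlineMs;                         // ideal time of the next tick

            public DriftCompensatedTimer(Action<double, Action> startOneShot, Action onTick)
            {
                this.startOneShot = startOneShot;
                this.onTick = onTick;
            }

            public void Start()
            {
                nextDeadlineMs = clock.Elapsed.TotalMilliseconds + PeriodMs;
                startOneShot(PeriodMs, Tick);
            }

            void Tick()
            {
                onTick();
                // Advance the ideal deadline; a callback that arrived late
                // automatically gets a shorter due time for the next shot.
                nextDeadlineMs += PeriodMs;
                double due = Math.Max(1.0, nextDeadlineMs - clock.Elapsed.TotalMilliseconds);
                startOneShot(due, Tick);
            }
        }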

  • Problem with waveOutWrite and waveOutGetPosition deadlock

    - by MusiGenesis
    I'm working on an app that plays audio continuously using the waveOut... API from winmm.dll. The app uses "leapfrog" buffers, which are basically a bunch of arrays of samples that you dump into the audio queue. Windows plays them seamlessly in sequence, and as each buffer completes, Windows calls a callback function. Inside this function, I load the next set of samples into the buffer, process them however, and then dump the buffer back into the audio queue. In this way, the audio plays indefinitely.

    For animation purposes, I'm trying to incorporate waveOutGetPosition into the application (since the "buffer done" callbacks are irregular enough to cause jerky animation). waveOutGetPosition returns the current position of playback, so it's hyper-precise. The problem is that in my application, making calls to waveOutGetPosition eventually causes the application to lock up - the sound stops and the call never returns.

    I've boiled things down to a simple app that demonstrates the problem. You can run the app here: http://www.musigenesis.com/SO/waveOut%20demo.exe If you just hear a tiny bit of piano over and over, it's working; it's just meant to demonstrate the problem. The source code for this project is here: http://www.musigenesis.com/SO/WaveOutDemo.zip

    The first button runs the app in leapfrog mode without making the calls to waveOutGetPosition. If you click this, the app will play forever without breaking (the X button will close it and shut it off). The second button starts the leapfrogger and also starts a forms timer that calls waveOutGetPosition and displays the current position. Click this and the app will run for a short while and then lock up. On my laptop, it usually locks up in 15-30 seconds; at most it's taken a minute.

    I have no idea how to fix this, so any help or suggestions would be most welcome. I've found very few posts on this issue, but it seems that there is a potential deadlock, either from multiple calls to waveOutGetPosition or from calls to that and waveOutWrite occurring at the same time. It's possible that I'm calling it too frequently for the system to handle.
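
    If the cause really is concurrent entry into winmm from waveOutWrite (on the callback thread) and waveOutGetPosition (on the UI timer thread), one hedged mitigation is to funnel every waveOut call through a single lock so the two can never overlap. A sketch; the two delegates stand in for the real P/Invoke wrappers, which are not shown here:

        using System;

        // Sketch: serialize all waveOut* calls so waveOutGetPosition can never
        // run while waveOutWrite is executing on another thread.
        class SerializedWaveOut
        {
            readonly object gate = new object();
            readonly Action writeBuffer;        // wraps waveOutWrite(...)   (hypothetical)
            readonly Func<uint> getPositionMs;  // wraps waveOutGetPosition(...) (hypothetical)

            public SerializedWaveOut(Action writeBuffer, Func<uint> getPositionMs)
            {
                this.writeBuffer = writeBuffer;
                this.getPositionMs = getPositionMs;
            }

            public void Write()                 // called from the buffer-done path
            {
                lock (gate) writeBuffer();
            }

            public uint PositionMs()            // called from the forms timer
            {
                lock (gate) return getPositionMs();
            }
        }

    The other common mitigation is to stop calling waveOutWrite from inside the callback itself - MSDN warns that calling other wave functions from within the wave callback can deadlock - and instead have the callback signal a worker thread that performs the write.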

  • Personal Project - Next practical language/tech to learn

    - by Paul Nathan
    I'm working on a personal project doing some finance analysis. It's a totally new field for me, and I'm really having fun with it so far; plus, working in the high-level language arena is a great break from my embedded systems daytime work. I have a MySQL backend on a non-local server with a pile of stock data. My task now is to do some analysis of the stocks and produce something approximating a useful result. There are a couple of technical difficulties:

    (1) I have a lot of records. To be precise, I believe I'm near 100K records right now, and this number grows by 6.1K each weekday. I need to create a way to rummage through these fields and do data analysis - based on a given computation, go look at this other set. Fine and dandy, nothing too outré. But this means I could really use a straightforward API for talking to MySQL.

    (2) Ideally, it runs on OS X 10.4.11. No Windows/Linux machine at home.

    (3) I can use PHP, C++, Perl, etc. I even have an R installation. I'm pretty flexible with stuff, so long as it runs on OS X. (Lots of options here - pick water, H2O, or dihydrogen monoxide ;-) )

    (4) Lack of hassle. While I like clever and fun ways of doing things, I'm trying to get some analysis done, not spend ten hours doing installation work and scratching my head figuring out a theoretical syntax question needed to spout out "hello world".

    So what's the question? I'd like to dig into something different than my usual PHP/C++/C toolset. I'm looking for recommendations for languages/technologies that will assist me and meet the above requirements. In particular, I've heard a lot of buzz about F# and Python on SO. I've used CLISP for small problems before, and kinda liked it. I'm seeking opinions about those in particular.

    Edit: since I rent the DB server and have a limited amount of CPU time online, I'm trying to do the analysis on a local machine.

  • How to make efficient code emerge through unit testing

    - by Jean
    Hi, I participate in a TDD Coding Dojo, where we try to practice pure TDD on simple problems. It occurred to me, however, that the code which emerges from the unit tests isn't the most efficient. Now, this is fine most of the time, but what if usage of the code grows so that efficiency becomes a problem?

    I love the way the code emerges from unit testing, but is it possible to make the efficiency property emerge through further tests?

    Here is a trivial example in Ruby: prime factorization. I followed a pure TDD approach, making the tests pass one after the other, validating my original acceptance test (commented at the bottom). What further steps could I take, if I wanted to make one of the generic prime factorization algorithms emerge? To reduce the problem domain, let's say I want to get a quadratic sieve implementation... Now, in this precise case I know the "optimal" algorithm, but in most cases the client will simply add a requirement that the feature run in less than "x" time for a given environment.

        require 'shoulda'
        require 'lib/prime'

        class MathTest < Test::Unit::TestCase
          context "The math module" do
            should "have a method to get primes" do
              assert Math.respond_to? 'primes'
            end
          end

          context "The primes method of Math" do
            should "return [] for 0" do
              assert_equal [], Math.primes(0)
            end
            should "return [1] for 1" do
              assert_equal [1], Math.primes(1)
            end
            should "return [1,2] for 2" do
              assert_equal [1,2], Math.primes(2)
            end
            should "return [1,3] for 3" do
              assert_equal [1,3], Math.primes(3)
            end
            should "return [1,2,2] for 4" do
              assert_equal [1,2,2], Math.primes(4)
            end
            should "return [1,5] for 5" do
              assert_equal [1,5], Math.primes(5)
            end
            should "return [1,2,3] for 6" do
              assert_equal [1,2,3], Math.primes(6)
            end
            should "return [1,3,3] for 9" do
              assert_equal [1,3,3], Math.primes(9)
            end
            should "return [1,2,5] for 10" do
              assert_equal [1,2,5], Math.primes(10)
            end
          end

          # context "Functional Acceptance test 1" do
          #   context "the prime factors of 14101980 are 1,2,2,3,5,61,3853" do
          #     should "return [1,2,2,3,5,61,3853] for ${14101980*14101980}" do
          #       assert_equal [1,2,2,3,5,61,3853], Math.primes(14101980*14101980)
          #     end
          #   end
          # end
        end

    And the naive algorithm I created by this approach:

        module Math
          def self.primes(n)
            if n == 0
              return []
            else
              primes = [1]
              for i in 2..n do
                if n % i == 0
                  while n % i == 0
                    primes << i
                    n = n / i
                  end
                end
              end
              primes
            end
          end
        end
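
    One pragmatic way to let efficiency emerge is to encode the client's "runs in less than x time" requirement as a test of its own, so the naive algorithm fails the suite and a better one has to emerge. A hedged sketch of the idea, in C# for illustration (the method name and the 100 ms budget are made up, not from the post):

        using System;
        using System.Diagnostics;

        // Sketch: a performance acceptance test. The algorithm under test is
        // passed in as a delegate; the budget is an assumed requirement.
        static class PerformanceAcceptanceTest
        {
            public static void LargeInputFactorsWithinBudget(Func<long, long[]> primeFactors)
            {
                long n = 14101980L * 14101980L;   // the commented acceptance input
                var sw = Stopwatch.StartNew();
                long[] factors = primeFactors(n);
                sw.Stop();

                if (sw.ElapsedMilliseconds > 100)
                    throw new Exception("Requirement violated: took " + sw.ElapsedMilliseconds + " ms");
                if (factors.Length == 0)
                    throw new Exception("Wrong answer: no factors returned");
            }
        }

    Such a test is driven by the environment it runs on, so in practice the budget needs a generous margin; but it does make the efficiency requirement executable, which is what pure TDD asks for.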

  • Visual Studio Express 2012 debug mode doesn't work

    - by user2350086
    I have a project in Visual Studio that I have been working on for a while, and I have used the debugger extensively. Recently I changed some settings, and I have lost the ability to stop the program and step through code. I can't figure out what I changed that might have affected this.

    If I put a breakpoint in my code and try to have the program stop there, it doesn't. The breakpoint shows up white with a red outline. If I hover the mouse over it, it says: "The breakpoint will not currently be hit. No executable code of the debugger's target code type is associated with this line. Possible causes include: conditional compilation, compiler optimizations, or the target architecture of this line is not supported by the current debugger code type."

    I know for a fact that the program executes the code where the breakpoint is, because I put the breakpoint at the beginning of the InitializeComponent method. The program displays the window fine, but does not stop at the breakpoint. Yes, I am running in debug mode. It seems as though there is a disconnect between the compiled code and the source code displayed. Does anyone know what that would be, or know which compiler settings I should check to re-enable debugging?

    Here are the compiler options:

        /GS /analyze- /W3 /Zc:wchar_t /I"D:\dev\libcurl-7.19.3-win32-ssl-msvc\include" /Zi /Od /sdl /Fd"Debug\vc110.pdb" /fp:precise /D "WIN32" /D "_DEBUG" /D "_UNICODE" /D "UNICODE" /errorReport:prompt /WX- /Zc:forScope /Oy- /clr /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\mscorlib.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Data.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Drawing.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.DataVisualization.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Xml.dll" /MDd /Fa"Debug\" /EHa /nologo /Fo"Debug\" /Fp"Debug\Prog.pch"

    The linker options are:

        /OUT:"D:\dev\Prog\Debug\Prog.exe" /MANIFEST /NXCOMPAT /PDB:"D:\dev\Prog\Debug\Prog.pdb" /DYNAMICBASE "curllib.lib" "winmm.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /FIXED:NO /DEBUG /MACHINE:X86 /ENTRY:"Main" /INCREMENTAL /PGD:"D:\dev\Prog\Debug\Prog.pgd" /SUBSYSTEM:WINDOWS /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /ManifestFile:"Debug\Prog.exe.intermediate.manifest" /ERRORREPORT:PROMPT /NOLOGO /LIBPATH:"D:\dev\libcurl-7.19.3-win32-ssl-msvc\lib\Debug" /ASSEMBLYDEBUG /TLBID:1

  • What was your most impressive technical programming achievement performed to impress a romantic interest?

    - by DVK
    OK, so the archetypal human story is for a guy to go out and impress the girl with some wonderful achievement, like slaying a dragon, building a monument, or conquering the neighboring tribe. This being the enlightened 21st century on SO, let's morph this into: a StackOverflower performing a feat of programming to impress a romantic interest.

    There are two ways to do this:

    Technical achievement: impressing a person with a suitable background/understanding of programming with the actual coding prowess you displayed. A dumb movie example would be the kid in "Hackers" showing off his hacking skills in front of Angelina Jolie.

    Artistic achievement: impressing a person with the result of running said code, whether or not they understand just how incredible the code itself is. An example is the animated ANSI rose (for a guy who actually wrote the ANSI code).

    This question is only about the first kind (technical achievements) - i.e. the person of interest was presented with impressive code/design that (s)he was able to properly appreciate.

    Rules (what doesn't qualify):

    - The target audience must have been a person of romantic interest (prospective or present significant other, or random hook-up). E.g. showing your program to your sister who's also a software developer doesn't count.

    - The achievement must have been done specifically with the goal of impressing such a person. However, it is OK if the achievement was done to impress a generic qualifying person, not someone specific. Although... if you write code to impress girls in general, I'd say "get a better idea of the opposite sex".

    - The achievement must have been done with the goal of impressing the person. In other words, if you would have done it anyway without the romantic interest's knowledge, it doesn't count. As examples, the following do not count: programming for your job, programming for a coding contest, an open-source program you'd have written anyway.

    - The precise nature of the awesomeness of the achievement is somewhat irrelevant - from learning all of J2EE in 2 days, to writing a fancy game engine, to implementing a Python compiler in LOGO. As long as it's programming/software-development related.

    - The achievement should preferably be something other people would rank highly as well. If your date was impressed with your skill at calculating the Fibonacci sequence without recursive function calls, it doesn't mean most developers will be. But it does mean you need to start finding better things to do on dates ;)

  • VS2008: File creation fails randomly in unit testing?

    - by Tim
    I'm working on implementing a reasonably simple XML serializer/deserializer (log file parser) application in C# .NET with VS 2008. I have about 50 unit tests right now for various parts of the code (mostly for the various serialization operations), and some of them seem to fail mostly at random when they deal with file I/O.

    The way the tests are structured is that in the test setup method, I create a new empty file at a certain predetermined location, and close the stream I get back. Then I run some basic tests on the file (varying by what exactly is under test). In the cleanup method, I delete the file again.

    A large portion (usually 30 or more, though the number varies run to run) of my unit tests will fail at the initialize method, claiming they can't access the file I'm trying to create. I can't pin down the exact reason, since a test that works one run fails the next; they all succeed when run individually. What's the problem here? Why can't I access this file across multiple unit tests?

    Relevant methods for a unit test that will fail some of the time:

        [TestInitialize()]
        public void LogFileTestInitialize()
        {
            this.testFolder = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.LocalApplicationData);
            this.testPath = this.testFolder + "\\empty.lfp";
            System.IO.File.Create(this.testPath);
        }

        [TestMethod()]
        public void LogFileConstructorTest()
        {
            string filePath = this.testPath;
            LogFile target = new LogFile(filePath);
            Assert.AreNotEqual(null, target);
            Assert.AreEqual(this.testPath, target.filePath);
            Assert.AreEqual("empty.lfp", target.fileName);
            Assert.AreEqual(this.testFolder + "\\empty.lfp.lfpdat", target.metaPath);
        }

        [TestCleanup()]
        public void LogFileTestCleanup()
        {
            System.IO.File.Delete(this.testPath);
        }

    And the LogFile() constructor:

        public LogFile(String filePath)
        {
            this.entries = new List<Entry>();
            this.filePath = filePath;
            this.metaPath = filePath + ".lfpdat";
            this.fileName = filePath.Substring(filePath.LastIndexOf("\\") + 1);
        }

    The precise error message:

        Initialization method LogFileParserTester.LogFileTest.LogFileTestInitialize threw exception.
        System.IO.IOException: System.IO.IOException: The process cannot access the file
        'C:\Users\<user>\AppData\Local\empty.lfp' because it is being used by another process..
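
    One likely culprit worth checking: the description above says the stream is closed, but the initializer shown never closes it. System.IO.File.Create returns an open FileStream, and an undisposed stream keeps the file handle alive until the finalizer happens to run, which would produce exactly this kind of intermittent "being used by another process" failure. A hedged fix sketch:

        [TestInitialize()]
        public void LogFileTestInitialize()
        {
            this.testFolder = System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.LocalApplicationData);
            this.testPath = this.testFolder + "\\empty.lfp";
            // Dispose the FileStream that File.Create returns, so the new file
            // isn't still locked when this test (or the next one) touches it.
            using (System.IO.File.Create(this.testPath)) { }
        }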

  • Segmentation fault in std function std::_Rb_tree_rebalance_for_erase ()

    - by Sarah
    I'm somewhat new to programming and am unsure how to deal with a segmentation fault that appears to be coming from a std function. I hope I'm doing something stupid (i.e., misusing a container), because I have no idea how to fix it. The precise error is:

        Program received signal EXC_BAD_ACCESS, Could not access memory.
        Reason: KERN_INVALID_ADDRESS at address: 0x000000000000000c
        0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase ()
        (gdb) backtrace
        #0 0x00007fff8062b144 in std::_Rb_tree_rebalance_for_erase ()
        #1 0x000000010000e593 in Simulation::runEpidSim (this=0x7fff5fbfcb20) at stl_tree.h:1263
        #2 0x0000000100016078 in main () at main.cpp:43

    The function that exits successfully just before the segmentation fault updates the contents of two containers. One is a boost::unordered_multimap called carriage; it contains one or more struct Infection objects that contain two doubles. The other, called ce, is of type std::multiset<Event, std::less<Event> > (typedef'd as EventPQ) and is full of Event structs.

        void Host::recover( int s, double recoverTime, EventPQ & ce ) {
            // Clearing all serotypes in carriage
            // and their associated recovery events in ce
            // and then updating susceptibility to each serotype
            double oldRecTime;
            int z;
            for ( InfectionMap::iterator itr = carriage.begin(); itr != carriage.end(); itr++ ) {
                z = itr->first;
                oldRecTime = (itr->second).recT;
                EventPQ::iterator epqItr = ce.find( Event(oldRecTime) );
                assert( epqItr != ce.end() );
                ce.erase( epqItr );
                immune[ z ]++;
            }
            carriage.clear();
            calcSusc(); // a function that edits an array
            cout << "Done with sync_recovery event." << endl;
        }

    The last cout << line appears immediately before the seg fault. I hope this is enough (but not too much) information. My idea so far is that the rebalancing is being attempted on ce after this function, but I am unsure why it would fail. (It's unfortunately very hard for me to test this code by removing particular lines, since that would create logical inconsistencies and further problems, but if experienced programmers still think this is the way to go, I'll try.)

  • msysgit git-am can't apply its own git format-patch sequence

    - by Andrian Nord
    I'm using msysgit on Windows to operate on a central svn repository. I'm using git because I want its awesome little local branches for everything, with rebasing on each other. I also need to update from the central repo often, so using separate svn/git checkouts is not an option.

    The problem is that git svn --help (the man page) says it is not a good idea to use git merge into the master branch (which is set to track svn's trunk) from local branches, as this will ruin the party and git svn dcommit will not work anymore. I know that it's not exactly true, and that you may use git merge if you are merging from a branch that was properly rebased on master prior to the merge, but I'm trying to make it safer, so I actually use git format-patch and git am. We are using code review, so I'm making patches anyway. I also knew about git cherry-pick, but I want to just git am /reviewed/patches/dir/* without actually recalling which commits correspond to which patches (without reading the patches, that is).

    So, what's wrong with git svn and git am? It's simple - git am does CRLF-to-LF conversion on the patches it is fed (git-mailsplit is doing this, to be precise, and for a few very hard reasons), even when not rebasing. git format-patch also produces proper (LF-ended) patches. As my repo is mostly CRLF (and it should remain so), the patches obviously fail due to the wrong EOLs.

    Converting the diffs to CRLF and somehow hacking git am to prevent its conversion doesn't work either. It fails if any file was removed or deleted - git apply will complain about an expected /dev/null (but it got /dev/null^M). And if I apply the patch with git am --ignore-space-change --ignore-whitespace, it commits LF endings straight to the index, which is also weird. I don't know whether that will survive committing into svn (via git svn dcommit) and checking it out again, and I don't want to try it out.

    Of course, it's still possible to try hacking the patches to convert only the actual diffs, but this is too much hacking for such a simple task. So, I wonder: is there really no established way to produce patches and apply them to the same repo on the same system? It just feels weird that msysgit can't apply its own patches.

  • PostgreSQL: Implicit lock acquisition from foreign-key constraint evaluation

    - by fennec
    So, I'm confused about foreign-key constraint handling in PostgreSQL (version 8.4.4, for what it's worth). We've got a couple of tables, mildly anonymized below:

        device: (id, blah, blah, blah, blah, blah x 50)
            primary key on id
            whooooole bunch of other junk
        device_foo: (id, device_id, left, right)
            foreign key (device_id) references device(id) on delete cascade
            primary key on id
            btree index on 'left' and 'right'

    So I set out with two database windows to run some queries:

        db1> begin; lock table device in exclusive mode;
        db2> begin; update device_foo set left = left + 1;

    The db2 connection blocks. It seems odd to me that an update of the 'left' column on device_foo should be affected by activity on the device table. But it is. In fact, if I go back to db1:

        db1> select * from device_foo for update;
        *** deadlock occurs ***

    The pgsql log has the following:

        blah blah blah deadlock blah.
        CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."device" x WHERE "id" OPERATOR(pg_catalog.=) $1 FOR SHARE OF x"
        update device_foo set left = left + 1;

    I suppose I've got two issues. The first is that I don't understand the precise mechanism by which this sort of locking occurs. I have a couple of useful queries against pg_locks to see what sort of locks a statement takes, but I haven't been able to observe this particular sort of locking when I run the update device_foo command in isolation. (Perhaps I'm doing something wrong, though.) I also can't find any documentation on the lock-acquisition behavior of foreign-key constraint checks. All I have is a log message. Am I to infer from this that any change to a row will acquire an update lock on all the tables it's foreign-keyed against?

    The second issue is that I'd like to find some way to make it not happen like that. I'm ending up with occasional deadlocks in the actual application. I'd like to be able to run big update statements that impact all rows of device_foo without acquiring a big lock on the device table. (There's a lot of access going on in the device table, and it's kind of an expensive lock to get.)

  • Bracketing algorithm for root finding: single real root of a "quadratic" function

    - by Ander Biguri
    I am trying to implement a root-finding algorithm. I am using the hybrid Newton-Raphson algorithm found in Numerical Recipes, which works pretty nicely. But I have a problem bracketing the root.

    While implementing the root-finding algorithm, I realised that in several cases my functions have 1 real root, while all the others are imaginary (several of them, usually 6 or 9). The only root I am interested in is the real one, so the problem is not there. The thing is that the function approaches the root like a cubic function, just touching the y=0 axis with that point... The Newton-Raphson method needs brackets of different sign, and all the bracketing methods I found don't work for this specific case. What can I do? It is pretty important to find that root in my program...

    EDIT: More problems: sometimes, due to really small numerical errors - say a variation of 1e-6 in some value - the "cubic" function does NOT have that real root; it is just imaginary with a negligible imaginary part... (checked with Matlab)

    EDIT 2: Much more information about the problem. OK, I need a root-finding algorithm. Info I have:

    - The root I need to find is between [0-1]; if there are more roots outside that range, I am not interested in them.
    - The root is real; there may be imaginary roots, but I don't want them. Probably all the rest of the roots will be imaginary.
    - The root may be a double root at that point, but I think that actually doesn't matter in numerical analysis problems.
    - I need to use the root-finding algorithm several times during the overall calculations, but the function will always be a polynomial.
    - In one of the particular cases of the root finding, my polynomial will be similar to a quadratic function that touches y=0 with a single point.

    Example of a real case: the coefficients may not be 100% precise, and that really slight imprecision may make the function not touch the y=0 axis. I cannot solve for this specific case only, because in other cases the polynomial may be pretty normal and not do any "strange" thing.

    The method I am actually using is the Newton-Raphson hybrid, where if the derivative is really small it does a bisection instead of Newton-Raphson (found in Numerical Recipes).

    Matlab's answer for the function in the image:

        roots:
        0.853553390593276 + 0.353553390593278i
        0.853553390593276 - 0.353553390593278i
        0.146446609406726 + 0.353553390593273i
        0.146446609406726 - 0.353553390593273i
        0.499999999999996 + 0.000000040142134i
        0.499999999999996 - 0.000000040142134i

    The function is a real example I prepared, where I know that the answer I want is 0.5.

    Note: I still haven't checked completely some of the answers you people have given me (thank you!); I am just trying to give all the information I already have to complete the question.
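
    Since the curve only touches y=0, the wanted root coincides with a stationary point, so one trick (not from the post, just a common workaround for tangential roots) is to bracket and bisect the derivative p'(x) instead - it does change sign at a touching root - and then accept the result only if |p(x)| is small. A sketch in C#, assuming coefficients a[0] + a[1]x + ... and that [lo, hi] brackets exactly one sign change of p':

        using System;

        static class TangentRoot
        {
            // Evaluate a polynomial by Horner's rule.
            static double Eval(double[] a, double x)
            {
                double r = 0;
                for (int i = a.Length - 1; i >= 0; i--) r = r * x + a[i];
                return r;
            }

            // Coefficients of the derivative p'(x).
            static double[] Derivative(double[] a)
            {
                var d = new double[a.Length - 1];
                for (int i = 1; i < a.Length; i++) d[i - 1] = i * a[i];
                return d;
            }

            // Bisect p' on [lo, hi]; returns the touching root of p, or null if
            // p' has no sign change or p only comes near zero without touching.
            public static double? FindTouchingRoot(double[] a, double lo, double hi, double tol)
            {
                double[] d = Derivative(a);
                if (Eval(d, lo) * Eval(d, hi) > 0) return null;   // no bracket on p'
                for (int i = 0; i < 200; i++)
                {
                    double mid = 0.5 * (lo + hi);
                    if (Eval(d, lo) * Eval(d, mid) <= 0) hi = mid; else lo = mid;
                }
                double x = 0.5 * (lo + hi);
                return Math.Abs(Eval(a, x)) < tol ? x : (double?)null;
            }
        }

    A useful side effect: when tiny coefficient errors lift the curve just off the axis (the 1e-6 case above), this still lands on the near-root at the minimum, and the |p(x)| check lets you decide how much lift-off to tolerate.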

  • Android CouchDB - libcouch and IPC AIDL services

    - by dirtySanchez
    I am working on a native CouchDB app with Android. Just this week, CouchOne released libcouch, described as "Library files needed to interact with CouchDB on Android": couchone_libcouch@Github

    It is a basic app that installs CouchDB if the CouchDB service (which comes with CouchDB if it was installed previously) can't be bound to. To be more precise, as I understand it: libcouch detects CouchDB's presence on the device by trying to bind to an IPC service from CouchDB, and through that service it wants to communicate with CouchDB. Please see the method attemptLaunch() in CouchAppLauncher.class for reviewing this:

        public void attemptLaunch() {
            Log.i(TAG, "1.) called attemptLaunch");
            Intent intent = new Intent(ICouchService.class.getName());
            Log.i(TAG, "1.a) setup Intent");
            Boolean canStart = bindService(intent, couchServiceConn, Context.BIND_AUTO_CREATE);
            Log.i(TAG, "1.b bound service. canStart: " + Boolean.toString(canStart));
            if (!canStart) {
                setContentView(R.layout.install_couchdb);
                TextView label = (TextView) findViewById(R.id.install_couchdb_text);
                Button btn = (Button) this.findViewById(R.id.install_couchdb_btn);
                String text = getString(R.string.app_name) + " requires Apache CouchDB to be installed.";
                label.setText(text);
                // Launching the market will fail on emulators
                btn.setOnClickListener(new View.OnClickListener() {
                    public void onClick(View v) {
                        launchMarket();
                        finish();
                    }
                });
            }
        }

    The questions I have about this are: libcouch is never able to "find" a previously installed CouchDB; it always attempts to install CouchDB from the market. This is because it never actually manages to bind to the CouchDBService.

    As I understand the purpose of AIDL-generated service interfaces, the actual service that intends to offer its IPC to other applications should make use of the AIDL. In this case, the AIDL has been moved into the application that is trying to bind to the remote service, which is libcouch. Reviewing the commits, the AIDL files have just been moved out of that repository to libcouch.

    For complete linkage, here's the link to the Android CouchDB sources: github.com/couchone/libcouch-android

    Now, I could be completely wrong in my findings - it could also be libcouch's manifest that's missing something - but I am really looking forward to some answers!

  • Moving a unit precisely along a path in x,y coordinates

    - by Adam Eberbach
    I am playing around with a strategy game where squads move around a map. Each turn a certain amount of movement is allocated to a squad, and if the squad has a destination, the points are applied each turn until the destination is reached. Actual distance is used, so if a squad moves one position in the x or y direction it uses one point, but moving diagonally takes ~1.4 points. The squad maintains its actual position as floats, which are then rounded to draw the position on the map.

    The path is described by touching the squad and dragging to the end position, then lifting the pen or finger. (I'm doing this on an iPhone now, but Android/Qt/Windows Mobile would work the same.) As the pointer moves, x,y points are recorded, so the squad gains a list of intermediate destinations on the way to the final destination. I'm finding that the destinations are not evenly spaced, but can be further apart depending on the speed of the pointer movement. Following the path is important because obstacles and terrain matter in this game. I'm not trying to remake Flight Control, but that's a similar mechanic.

    Here's what I've been doing; it just seems too complicated (pseudocode):

        getDestination() {
            - self.nextDestination = remove_from_array(destinations)
            - self.gradient = delta y to destination / delta x to destination
            - self.angle = atan(self.gradient)
            - self.cosAngle = cos(self.angle)
            - self.sinAngle = sin(self.angle)
        }

        move() {
            - get movement allocation for this turn
            - if self.nextDestination not valid
            -   getNextDestination()
            - while (nextDestination valid) && (movement allocation remains) {
            -   find xStep and yStep using movement allocation and sinAngle/cosAngle
                calculated for current self.nextDestination
            -   if current position + xStep crosses the destination
            -     find x movement remaining after self.nextDestination reached
            -     calculate remaining direct path movement allocation (xStep remaining / cosAngle)
            -     make self.position equal to self.nextDestination
            -   else
            -     apply xStep and yStep to current position
            - }
            - round squad's float coordinates to integer screen coordinates
            - draw squad image on map
        }

    That's simplified, of course; stuff like sign needs to be tweaked to ensure movement is in the right direction. If trig is the best way to do it, then lookup tables can be used, or maybe it doesn't matter on modern devices like it used to. Suggestions for a better way to do it?

    An update: the iPhone has zero issues with trig while tracking tens of positions and tracks implemented as described above, and it draws in floats anyway. The Bresenham method is more efficient; trig is more precise. If I were to use integer Bresenham, I would want to multiply by ten or so to maintain a little more positional accuracy for collision/terrain detection.
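
    For what it's worth, the trig can be avoided entirely by stepping along the normalized direction vector - one square root per segment instead of atan/sin/cos, no angle or sign handling, and the "crossed the destination" test reduces to comparing the segment length with the remaining allocation. A sketch in C# (all names are illustrative, not from the original code):

        using System;
        using System.Collections.Generic;

        // Sketch: follow recorded waypoints using vector math instead of trig.
        class Squad
        {
            public float X, Y;   // true position kept as floats, rounded only for drawing
            public Queue<(float X, float Y)> Waypoints = new Queue<(float, float)>();

            public void Move(float allowance)   // movement points for this turn
            {
                while (allowance > 0 && Waypoints.Count > 0)
                {
                    var (tx, ty) = Waypoints.Peek();
                    float dx = tx - X, dy = ty - Y;
                    float dist = (float)Math.Sqrt(dx * dx + dy * dy);

                    if (dist <= allowance)       // reach this waypoint, continue to the next
                    {
                        X = tx; Y = ty;
                        allowance -= dist;
                        Waypoints.Dequeue();
                    }
                    else                         // step part-way along the segment
                    {
                        X += dx / dist * allowance;   // (dx/dist, dy/dist) is the unit direction
                        Y += dy / dist * allowance;
                        allowance = 0;
                    }
                }
                // Round X, Y to integer screen coordinates only when drawing.
            }
        }

    Diagonal cost falls out automatically, since dist is the true Euclidean length of each step.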

  • Symfony: Ajax call causes server to queue subsequent queries

    - by Remiz
    Hello, I have a problem with my application: when an Ajax call to the server takes too much time, it queues all the other queries from the user until it's done server-side (I realized that canceling the call client-side has no effect, and the user still has to wait). Here is my test case:

        <script type="text/javascript" src="jquery-1.4.1.min.js"></script>
        <a href="another-page.php">Go to another page on the same server</a>
        <script type="text/javascript">
        url = 'http://localserver/some-very-long-complex-query';
        $.get(url);
        </script>

    So when the GET is fired and I then click on the link, the server finishes serving the first call before bringing me to the other page. My problem is that I want to avoid this behavior. I'm on a LAMP server and I'm looking for a way to inform the server that the user aborted the query, with a function like connection_aborted() - do you think that's the way to go?

    Also, I know that the longest part of this PHP script is a MySQL query, so even if connection_aborted() can detect that the user canceled the call, I would still need to check this during the MySQL query... I'm not really sure that PHP can handle this kind of "event". So if you have any better idea, I can't wait to hear it. Thank you.

    Update: After further investigation, I found that the problem happens only with the Symfony framework (which I omitted to mention, my bad). It seems that an Ajax call locks out any other future call. It may be related to the controller or the routing system; I'm looking into it.

    Also, for those interested in the problem, here is my new test case:

    - a new project with Symfony 1.4.3, default configuration; I just created an app and a default module
    - jQuery 1.4 for the Ajax query

    Here is my actions.class.php (in my unique module):

        class defaultActions extends sfActions
        {
          public function executeIndex(sfWebRequest $request)
          {
            //Do nothing
          }

          public function executeNewpage()
          {
            //Do also nothing
          }

          public function executeWaitingaction()
          {
            // Wait
            sleep(30);
            return false;
          }
        }

    Here is my indexSuccess.php template file:

        <script type="text/javascript" src="jquery-1.4.1.min.js"></script>
        <a href="<?php echo url_for('default/newpage');?>">Go to another symfony action</a>
        <script type="text/javascript">
        url = '<?php echo url_for('default/waitingaction');?>';
        $.get(url);
        </script>

    The new page template is not very relevant... But with this, I'm able to reproduce the lock problem I have in my real application. Is somebody else having the same issue? Thanks.

  • Memory corruption in System.Move due to changed 8087CW mode (PNG + StretchBlt)

    - by André Mussche
    I have a strange memory corruption problem. After many hours of debugging and trying, I think I have found something. For example, I do a simple string assignment:

        sTest := 'SET LOCK_TIMEOUT ';

    However, the result sometimes becomes:

        sTest = 'SET LOCK'#0'TIMEOUT '

    So the _ gets replaced by a 0 byte. I have seen this happen once (reproducing is tricky, dependent on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copying (in the case of 9 to 32 bytes to move):

        ...
        @@SmallMove: {9..32 Byte Move}
          fild qword ptr [eax+ecx] {Load Last 8}
          fild qword ptr [eax] {Load First 8}
          cmp ecx, 8
          jle @@Small16
          fild qword ptr [eax+8] {Load Second 8}
          cmp ecx, 16
          jle @@Small24
          fild qword ptr [eax+16] {Load Third 8}
          fistp qword ptr [edx+16] {Save Third 8}
        ...

    Using the FPU view and two memory debug views (Delphi - View - Debug - CPU - Memory) I saw it going wrong... once... I could not reproduce it, however.

    This morning I read something about the 8087CW mode, and yes, if this is changed to $27F I get memory corruption! Normally it is $133F. The difference between $133F and $027F is that $027F sets up the FPU to do less precise calculations (limiting to Double instead of Extended) and different infinity handling (which was used for older FPUs but is not used any more).

    OK, so now I found the why, but not the when! I changed the working of my AsmProfiler with a simple check (so all functions are checked at enter and leave):

        if Get8087CW = $27F then //normally $1372?
          if MainThreadID = GetCurrentThreadId then //only check mainthread
            DebugBreak;

    I "profiled" some units and DLLs and bingo (see stack):

        Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376)
        pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345)))
        pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345)))
        Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0)
        ExtCtrls.TImage.Paint
        Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0))

    So it is happening in StretchBlt... What to do now? Is it a fault in Windows, or a bug in the PNG support (included in D2007)? Or is the System.Move function not fail-safe?

  • How to refresh the TextBox text when tabs are changed in WPF

    - by StonedJesus
    Well, in my WPF application I am using a TabControl which has around 5 tabs. The view of each tab is a user control which I add via the toolbox.

    Main XAML file:

        <Grid>
          <TabControl Height="Auto" HorizontalAlignment="Stretch" Margin="0" Name="tabControl1"
                      VerticalAlignment="Stretch" Width="Auto">
            <TabItem Header="Device Control" Name="Connect">
              <ScrollViewer Height="Auto" Name="scrollViewer1" Width="Auto">
                <my:ConnectView Name="connectView1" />
              </ScrollViewer>
            </TabItem>
            <TabItem Header="I2C">
              <ScrollViewer Height="Auto" Name="scrollViewer2" Width="Auto">
                <my1:I2CControlView Name="i2CControlView1" />
              </ScrollViewer>
            </TabItem>
            <TabItem Header="Voltage">
              <ScrollViewer Height="Auto" Name="scrollViewer3" Width="Auto">
                <my2:VoltageView Name="voltageView1" />
              </ScrollViewer>
            </TabItem>
          </TabControl>
        </Grid>

    If you notice, each view (i.e. Connect, I2C and Voltage) is a user control which has a view, view-model and model class :) Each of these views has a set of text boxes in its respective XAML file.

    Connect.xaml:

        <Grid>
          <TextBox Text="{Binding Box}" Name="hello" />
          <!-- Some more textboxes -->
        </Grid>

    I2c.xaml:

        <Grid>
          <TextBox Text="{Binding I2CBox}" Name="helI2c" />
          <!-- Some more textboxes -->
        </Grid>

    Voltage.xaml:

        <Grid>
          <TextBox Text="{Binding VoltBox}" Name="heVoltllo" />
          <!-- Some more textboxes -->
        </Grid>

    By default I have set the text of these text boxes to some value in my view-model classes - let's say "12", "13" and "14" respectively.

    My main requirement is for the text of the text boxes in each user control to be refreshed when I change tabs. Description: let's say the Connect view is displayed and the value of its text box is 12, and I edit it and change it to 16. Now I click on the I2C tab and then go back to the Connect tab; I want the text box value to be refreshed back to the initial value, i.e. 12.

    To be precise: is there a method like visibilityChanged() which I could write in all my user control classes, where I can reset the values of these UI components whenever tabs are changed? Please help :)
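
    There is no built-in visibilityChanged() on a UserControl, but the TabControl's SelectionChanged event can stand in for it. A hedged sketch: give each tab's view-model a Reset() that restores its defaults, and call it when the selection changes. IResettable and the field wiring are assumptions, not part of the original code:

        using System.Windows.Controls;

        public interface IResettable { void Reset(); }   // e.g. Box = "12", I2CBox = "13", ...

        public partial class MainWindow : System.Windows.Window
        {
            private readonly IResettable[] tabViewModels;   // one per tab, set up at startup

            public MainWindow(IResettable[] viewModels)
            {
                tabViewModels = viewModels;
            }

            // Wire this up via <TabControl SelectionChanged="TabControl_SelectionChanged" ...>.
            public void TabControl_SelectionChanged(object sender, SelectionChangedEventArgs e)
            {
                // SelectionChanged bubbles up from children (e.g. ComboBoxes),
                // so only react when the TabControl itself raised it.
                if (e.Source is TabControl)
                    foreach (var vm in tabViewModels)
                        vm.Reset();
            }
        }

    For the TextBox on screen to actually update, the Box/I2CBox/VoltBox properties still need to raise INotifyPropertyChanged.PropertyChanged in their setters, as usual for WPF bindings.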

  • Controlling USB from Windows

    - by b-gen-jack-o-neill
    Hi, I know this probably is not the easiest thing to do, but I am trying to connect a microcontroller and a PC using USB. I don't want to use the microcontroller's internal USART or a USB-to-RS232 converter; this is a project intended to help me understand various principles.

    Getting the communication done from the microcontroller side is a piece of cake - I mean, once I know the protocol, it's relatively easy to implement it on the micro, because I am in direct control of everything, even precise timing.

    But this is not the case with the PC. I am not very familiar with the concept of Windows handling the connected devices. In one of my previous questions I asked about how Windows works with devices through drivers. I understood that for internal use by Windows, drivers must make a default set of functions available to the OS. I mean, when the OS wants to access the HDD, it calls the HDD driver (which is probably internal to the OS) with specific "questions", so the HDD driver has to be written to cooperate with Windows, to have its write function in the proper place to be called by the OS. Something similar holds for the GPU - even DirectX; I mean, DirectX must call specific functions from the drivers, so the drivers must be written to work with DX. I know many functions from WinAPI work on their own, but even a "simple" window must in the end be written into the framebuffer, using MMIO at addresses specified by the drivers. Am I right?

    So, I expected that Windows has internal functions, parts of the WinAPI, designed to work with certain commonly used things by calling the manufacturer-written drivers. But this seems not to be entirely true, because Windows has no way to communicate through the parallel port. I mean, there is no function in the WinAPI to work with the serial port, but there are functions to work with the HDD, GPU and so on.

    But now comes the part where I am getting very lost. I think Windows must have some built-in functions to communicate through USB, because it handles USB flash memory, for example. So, is there any WinAPI function designed to let the user operate USB through it, or when I want to use USB myself, do I have to call the desired USB driver function myself? Because all you need to send to the USB controller is the device address and the information, right?

    I mean, I don't have to write any new drivers, am I right? Just call a WinAPI function if there is one, or directly call the original USB driver. Does any of this make sense?

  • Installing Lubuntu 14.04.1 with forcepae fails

    - by Rantanplan
    I tried to install Lubuntu 14.04.1 from a CD. First, I chose Try Lubuntu without installing, which gave:

        ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ...

    Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried Try Lubuntu without installing again. That worked fine. dmesg | grep -i pae showed:

        [ 0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae
        [ 0.008118] PAE forced!

    On the live-CD session, I tried installing Lubuntu by double-clicking the install button on the desktop. Here, the CD starts running, but then it stops and nothing happens.

    Next, I rebooted and tried installing Lubuntu directly from the boot menu screen, again using forcepae. After a while, I received the following error message:

        The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again.

    Hitting Enter brings me to the desktop. For what errors should I search? And how?

    Finally, I rebooted once more and tried Check disc for defects with the forcepae option; no errors were found.

    Now I am wondering how to find the error, or whether it would be better to follow advice c in https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back."

    Thanks for some hints! Perhaps some of the following can help.

    On Lubuntu 12.04:

        cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 13
        model name      : Intel(R) Pentium(R) M processor 1.50GHz
        stepping        : 6
        microcode       : 0x17
        cpu MHz         : 600.000
        cache size      : 2048 KB
        fdiv_bug        : no
        hlt_bug         : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 2
        wp              : yes
        flags           : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2
        bogomips        : 1284.76
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 32 bits physical, 32 bits virtual
        power management:

        uname -a
        Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux

        lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04.5 LTS
        Release:        12.04
        Codename:       precise

        cpuid
        eax in    eax      ebx      ecx      edx
        00000000  00000002 756e6547 6c65746e 49656e69
        00000001  000006d6 00000816 00000180 afe9f9bf
        00000002  02b3b001 000000f0 00000000 2c04307d
        80000000  80000004 00000000 00000000 00000000
        80000001  00000000 00000000 00000000 00000000
        80000002  20202020 20202020 65746e49 2952286c
        80000003  6e655020 6d756974 20295228 7270204d
        80000004  7365636f 20726f73 30352e31 007a4847

        Vendor ID: "GenuineIntel"; CPUID level 2

        Intel-specific functions:
        Version 000006d6:
        Type 0 - Original OEM
        Family 6 - Pentium Pro
        Model 13 -
        Stepping 6
        Reserved 0

        Brand index: 22 [not in table]
        Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz"
        CLFLUSH instruction cache line size: 8

        Feature flags afe9f9bf:
        FPU    Floating Point Unit
        VME    Virtual 8086 Mode Enhancements
        DE     Debugging Extensions
        PSE    Page Size Extensions
        TSC    Time Stamp Counter
        MSR    Model Specific Registers
        MCE    Machine Check Exception
        CX8    COMPXCHG8B Instruction
        SEP    Fast System Call
        MTRR   Memory Type Range Registers
        PGE    PTE Global Flag
        MCA    Machine Check Architecture
        CMOV   Conditional Move and Compare Instructions
        FGPAT  Page Attribute Table
        CLFSH  CFLUSH instruction
        DS     Debug store
        ACPI   Thermal Monitor and Clock Ctrl
        MMX    MMX instruction set
        FXSR   Fast FP/MMX Streaming SIMD Extensions save/restore
        SSE    Streaming SIMD Extensions instruction set
        SSE2   SSE2 extensions
        SS     Self Snoop
        TM     Thermal monitor
        31     reserved

        TLB and cache info:
        b0: unknown TLB/cache descriptor
        b3: unknown TLB/cache descriptor
        02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries
        f0: unknown TLB/cache descriptor
        7d: unknown TLB/cache descriptor
        30: unknown TLB/cache descriptor
        04: Data TLB: 4MB pages, 4-way set assoc, 8 entries
        2c: unknown TLB/cache descriptor

    On Lubuntu 14.04.1 live-CD with forcepae:

        cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 13
        model name      : Intel(R) Pentium(R) M processor 1.50GHz
        stepping        : 6
        microcode       : 0x17
        cpu MHz         : 600.000
        cache size      : 2048 KB
        physical id     : 0
        siblings        : 1
        core id         : 0
        cpu cores       : 1
        apicid          : 0
        initial apicid  : 0
        fdiv_bug        : no
        f00f_bug        : no
        coma_bug        : no
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 2
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2
        bogomips        : 1284.68
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 32 bits virtual
        power management:

        uname -a
        Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux

        lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 14.04.1 LTS
        Release:        14.04
        Codename:       trusty

        cpuid
        CPU 0:
        vendor_id = "GenuineIntel"
        version information (1/eax):
           processor type  = primary processor (0)
           family          = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6)
           model           = 0xd (13)
           stepping id     = 0x6 (6)
           extended family = 0x0 (0)
           extended model  = 0x0 (0)
           (simple synth)  = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm
        miscellaneous (1/ebx):
           process local APIC physical ID = 0x0 (0)
           cpu count                      = 0x0 (0)
           CLFLUSH line size              = 0x8 (8)
           brand index                    = 0x16 (22)
        brand id = 0x16 (22): Intel Pentium M, .13um
        feature information (1/edx):
           x87 FPU on chip                        = true
           virtual-8086 mode enhancement          = true
           debugging extensions                   = true
           page size extensions                   = true
           time stamp counter                     = true
           RDMSR and WRMSR support                = true
           physical address extensions            = false
           machine check exception                = true
           CMPXCHG8B inst.                        = true
           APIC on chip                           = false
           SYSENTER and SYSEXIT                   = true
           memory type range registers            = true
           PTE global bit                         = true
           machine check architecture             = true
           conditional move/compare instruction   = true
           page attribute table                   = true
           page size extension                    = false
           processor serial number                = false
           CLFLUSH instruction                    = true
           debug store                            = true
           thermal monitor and clock ctrl         = true
           MMX Technology                         = true
           FXSAVE/FXRSTOR                         = true
           SSE extensions                         = true
           SSE2 extensions                        = true
           self snoop                             = true
           hyper-threading / multi-core supported = false
           therm. monitor                         = true
           IA64                                   = false
           pending break event                    = true
        feature information (1/ecx):
           PNI/SSE3: Prescott New Instructions     = false
           PCLMULDQ instruction                    = false
           64-bit debug store                      = false
           MONITOR/MWAIT                           = false
           CPL-qualified debug store               = false
           VMX: virtual machine extensions         = false
           SMX: safer mode extensions              = false
           Enhanced Intel SpeedStep Technology     = true
           thermal monitor 2                       = true
           SSSE3 extensions                        = false
           context ID: adaptive or shared L1 data  = false
           FMA instruction                         = false
           CMPXCHG16B instruction                  = false
           xTPR disable                            = false
           perfmon and debug                       = false
           process context identifiers             = false
           direct cache access                     = false
           SSE4.1 extensions                       = false
           SSE4.2 extensions                       = false
           extended xAPIC support                  = false
           MOVBE instruction                       = false
           POPCNT instruction                      = false
           time stamp counter deadline             = false
           AES instruction                         = false
           XSAVE/XSTOR states                      = false
           OS-enabled XSAVE/XSTOR                  = false
           AVX: advanced vector extensions         = false
           F16C half-precision convert instruction = false
           RDRAND instruction                      = false
           hypervisor guest status                 = false
        cache and TLB information (2):
           0xb0: instruction TLB: 4K, 4-way, 128 entries
           0xb3: data TLB: 4K, 4-way, 128 entries
           0x02: instruction TLB: 4M pages, 4-way, 2 entries
           0xf0: 64 byte prefetching
           0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines
           0x30: L1 cache: 32K, 8-way, 64 byte lines
           0x04: data TLB: 4M pages, 4-way, 8 entries
           0x2c: L1 data cache: 32K, 8-way, 64 byte lines
        extended feature flags (0x80000001/edx):
           SYSCALL and SYSRET instructions        = false
           execution disable                      = false
           1-GB large page support                = false
           RDTSCP                                 = false
           64-bit extensions technology available = false
        Intel feature flags (0x80000001/ecx):
           LAHF/SAHF supported in 64-bit mode     = false
           LZCNT advanced bit manipulation        = false
           3DNow! PREFETCH/PREFETCHW instructions = false
        brand = " Intel(R) Pentium(R) M processor 1.50GHz"
        (multi-processing synth): none
        (multi-processing method): Intel leaf 1
        (synth) = Intel Pentium M (Dothan B1), 90nm

  • Create a Bootable Ubuntu 9.10 USB Flash Drive

    - by Trevor Bekolay
    The Ubuntu Live CD isn't just useful for trying out Ubuntu before you install it; you can also use it to maintain and repair your Windows PC. Even if you have no intention of installing Linux, every Windows user should have a bootable Ubuntu USB drive on hand in case something goes wrong in Windows. Creating a bootable USB flash drive is surprisingly easy with a small self-contained application called UNetbootin. It will even download Ubuntu for you!

    Note: Ubuntu will take up approximately 700 MB on your flash drive, so choose a flash drive with at least 1 GB of free space, formatted as FAT32. This process should not remove any existing files on the flash drive, but to be safe you should back up the files on your flash drive.

    Put Ubuntu on your flash drive

    UNetbootin doesn't require installation; just download the application and run it. Select Ubuntu from the Distribution drop-down box, then 9.10_Live from the Version drop-down box. If you have a 64-bit machine, select 9.10_Live_x64 for the Version. At the bottom of the screen, select the drive letter that corresponds to the USB drive that you want to put Ubuntu on. If you select USB Drive in the Type drop-down box, the only drive letters available will be USB flash drives.

    Click OK and UNetbootin will start doing its thing. First it will download the Ubuntu Live CD. Then it will copy the files from the Ubuntu Live CD to your flash drive. The amount of time it takes will vary depending on your Internet speed, and when it's done, click on Exit. You're not planning on installing Ubuntu right now, so there's no need to reboot. If you look at the USB drive now, you should see a bunch of new files and folders. If you had files on the drive before, they should still be present. You're now ready to boot your computer into Ubuntu 9.10!

    How to boot into Ubuntu

    When the time comes that you have to boot into Ubuntu, or if you just want to test and make sure that your flash drive works properly, you will have to set your computer to boot off of the flash drive. The steps to do this will vary depending on your BIOS - which varies depending on your motherboard. To get detailed instructions on changing how your computer boots, search for your motherboard's manual (or your laptop's manual for a laptop). For general instructions, which will suffice for 99% of you, read on.

    Find the important keyboard keys

    When your computer boots up, a bunch of words and numbers flash across the screen, usually to be ignored. This time, you need to scan the boot-up screen for a few key words with some associated keys: Boot menu and Setup. Typically, these will show up at the bottom of the screen. If your BIOS has a Boot Menu, then read on. Otherwise, skip to the Hard: Using Setup section.

    Easy: Using the Boot Menu

    If your BIOS offers a Boot Menu, then during the boot-up process, press the button associated with the Boot Menu. In our case, this is ESC. Our example Boot Menu doesn't have the ability to boot from USB, but your Boot Menu should have some options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others. Try the options that start with USB until you find one that works. Don't worry if it doesn't work - you can just restart and try again. Using the Boot Menu does not change the normal boot order on your system, so the next time you start up your computer it will boot from the hard drive as normal.

    Hard: Using Setup

    If your BIOS doesn't offer a Boot Menu, then you will have to change the boot order in Setup. Note: There are some options in BIOS Setup that can affect the stability of your machine. Take care to change only the boot order options. Press the button associated with Setup. In our case, this is F2. If your BIOS Setup has a Boot tab, then switch to it and change the order such that one of the USB options occurs first. There may be several USB options, such as USB-CDROM, USB-HDD, USB-FLOPPY, and others; try them out to see which one works for you. If your BIOS does not have a Boot tab, boot order is commonly found in Advanced CMOS Options. Note that this changes the boot order permanently until you change it back. If you plan on plugging in a bootable flash drive only when you want to boot from it, then you could leave the boot order as it is, but you may find it easier to switch the order back to the previous order when you reboot from Ubuntu.

    Booting into Ubuntu

    If you set the right boot option, then you should be greeted with the UNetbootin screen. Press Enter to start Ubuntu with the default options, or wait 10 seconds for this to happen automatically. Ubuntu will start loading. It should go straight to the desktop with no need for a username or password. And that's it! From this live desktop session, you can try out Ubuntu, and even install software that is not included in the live CD. Installed software will only last for the duration of your session - the next time you start up the live CD it will be back to its original state.

    Download UNetbootin from sourceforge.net

  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx Software development is an economic endeavor. A customer is only willing to pay for value. Whatever makes a software valuable must become a trait of the software. We as software developers thus need to understand and then find a way to implement requirements. Whether, or to what extent, a customer really can know beforehand what’s going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she’s certain to want something - but then is not happy when that’s delivered. Nevertheless requirements exist. And developers will only be paid if they deliver value. So we had better focus on doing that. Although it might sound trivial, I think it’s important to state the corollary: We need to be able to trace anything we do as developers back to some requirement. You decide to use Go as the implementation language? Well, what’s the customer’s requirement this decision is linked to? You decide to use WPF as the GUI technology? What’s the customer’s requirement? You decide in favor of a layered architecture? What’s the customer’s requirement? You decide to put code in three classes instead of just one? What’s the customer’s requirement behind that? You decide to use MongoDB over MySql? What’s the customer’s requirement behind that? etc. I’m not saying any of these decisions are wrong. I’m just saying: whatever you decide, be clear about the requirement that’s driving your decision. You have to be able to answer the question: Why do you think X will deliver more value to the customer than the alternatives? Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don’t care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That’s all. Fundamental aspects of requirements: If you’re like me, you’re probably not used to such scrutiny. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling. Or by relying on “established practices”. That’s ok in general and most of the time - but still… I think we should be more conscious about our decisions. That would make us more responsible, even more professional. But without further guidance it’s hard to reason about many of the myriad decisions we have to make over the course of a software project. What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements there are then smaller blobs. With them it’s easier to check whether a decision falls within their scope. Sure, every project has its very own requirements. But all of them belong to just three major categories, I think. Any requirement pertains either to functionality, non-functional aspects, or sustainability. For short I call those aspects: Functionality, because such requirements describe which transformations a software should offer. For example: A calculator software should be able to add and multiply real numbers. An auction website should enable you to set up an auction anytime or to find auctions to bid for. Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. 
For example: A calculator should be able to calculate the sine of a value much faster than you could in your head. An auction website should accept bids from millions of users. Security of Investment, because functionality and quality need not just be delivered in any way. It’s important to the customer to get them quickly - and not only today, but over the course of several years. This aspect introduces time into the “requirements equation”. Security of Investment (SoI) certainly is a non-functional requirement. But I think it’s important not to subsume it under the Quality (Q) aspect. That’s because SoI has quite special properties. For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair dryer) you find it a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like) “unchangeability” (in the face of usage) is desirable. With software you want the contrary. Software that cannot be changed is a waste. SoI for software means “changeability”. You want to be sure that the software you buy/order today can be changed, adapted, and improved over an unforeseeable number of years so as to fit changes in its usage environment. But that’s not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code or Cyclomatic Complexity or Afferent Coupling. They may give a hint… but they are far, far from precise. That’s because of the nature of changeability. It’s different from performance or scalability. Also, a customer cannot tell upfront “how much” evolvability he/she wants. Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely. A calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, etc. That’s all very, or at least comparatively, easy to determine. But changeability… That’s a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to check the development of the frequency of “WTF”s from developers ;-) F and Q are “timeless” requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not. In closing: A customer’s requirements are not monolithic. They are not all made the same. Rather, they fall into different categories. We as developers need to recognize these categories when confronted with a requirement - and take them into account. Only then can we make true professional decisions, i.e. conscious and responsible ones. [1] I call this fundamental trait of software “changeability” and not “flexibility” to distinguish to whom it’s a concern. “Flexibility” to me means that software, as is, can easily be adapted to a change in its environment, e.g. 
by tweaking some config data or adding a library which gets picked up by a plug-in engine. “Flexibility” thus is a matter for some user. “Changeability”, on the other hand, to me means that software can easily be changed in its structure to adapt it to new requirements. That’s a matter for the software developer.

    Read the article

  • Interview with Lenz Grimmer about MySQL Connect

    - by Keith Larson
    Keith Larson: Thank you for allowing me to do this interview with you. I have been talking with a few different Oracle ACEs about the MySQL Connect Conference. I figured the MySQL community might be missing you as well. You have been very busy with Oracle Linux, but I know you still have an eye on the MySQL community. How have things been? Lenz Grimmer: Thanks for including me in this series of interviews, I feel honored! I've read the other interviews and really liked them. I still try to follow what's going on over in the MySQL community, and it's good to see that many of the familiar faces are still around. Over the course of the 9 years that I was involved with MySQL, many colleagues and contacts turned into good friends, and we still maintain close relationships. It's been almost 1.5 years since I moved into my new role here in the Linux team at Oracle, and I really enjoy working on a Linux distribution again (I worked for SUSE before I joined MySQL AB in 2002). I'm still learning a lot - Linux in the data center has greatly evolved in so many ways, and there are a lot of new and exciting technologies to explore. Keith Larson: What were your thoughts when you heard that Oracle was going to deliver the MySQL Connect conference to the MySQL community? Lenz Grimmer: I think it's a testament to the fact that Oracle deeply cares about MySQL, despite what many skeptics may say. What started as "MySQL Sunday" two years ago has now evolved into a full-blown sub-conference, with 80 sessions at one of the largest corporate IT events in the world. I find this quite telling; not many products at Oracle enjoy this level of exposure! So it certainly makes me feel proud to see how far MySQL has come. Keith Larson: Have you had a chance to look over the sessions? What are your thoughts on them? Lenz Grimmer: I did indeed look at the final schedule. The content committee did a great job selecting these sessions. I'm glad to see that the selection was influenced by involving well-known and respected members of the MySQL community. The sessions cover a broad range of topics and technologies, covering both established topics and recent developments. Keith Larson: When you get a chance, what sessions do you plan on attending? Lenz Grimmer: I will actually be manning the Oracle booth in the exhibition area on one of these days, so I'm not sure if I'll have a lot of time to attend sessions. But if I do, I'd love to see the keynotes and catch some of the sessions that talk about recent developments and new features in MySQL, High Availability and Clustering. Quite a lot has happened, and it's hard to keep up with this constant flow of new MySQL releases. In particular, the following sessions caught my attention: "MySQL Connect Keynote: The State of the Dolphin"; "Evaluating MySQL High-Availability Alternatives"; "CERN’s MySQL “as a Service” Deployment with Oracle VM: Empowering Users"; "MySQL 5.6 Replication: Taking Scalability and High Availability to the Next Level"; "What’s New in MySQL Server 5.6?"; "MySQL Security: Past and Present"; "MySQL at Twitter: Development and Deployment"; "MySQL Community BOF"; "MySQL Connect Keynote: MySQL Perspectives". Keith Larson: So I will ask you, just as I have asked the others I have interviewed: any tips that you would give to people for handling the long hours at conferences? Lenz Grimmer: Wear comfortable shoes and make sure to drink a lot! 
Also prepare a plan of the sessions you would like to attend beforehand, and familiarize yourself with the venue, so you can get to the next talk in time without scrambling to find the location. The good thing about piggybacking on such a large conference as Oracle OpenWorld is that you benefit from the whole infrastructure. For example, there is a nice schedule builder that helps you keep track of your sessions of interest. Other than that, bring enough business cards and talk to people; build up your network among your peers and other MySQL professionals! Keith Larson: What features of the MySQL 5.6 release do you look forward to the most? Lenz Grimmer: There has been solid progress in so many areas like the InnoDB storage engine, the optimizer, replication, and Performance Schema that it's hard for me to highlight anything in particular. All in all, MySQL 5.6 sounds like a very promising release. I'm confident it will follow the tradition that Oracle already established with MySQL 5.5, which received a lot of praise even from very critical members of the MySQL community. If I had to name a single feature, I'm particularly and personally happy that the precise GIS functions have finally made it into a GA release - that was long overdue. Keith Larson: In your opinion, what is the best reason for someone to attend this event? Lenz Grimmer: This conference is an excellent opportunity to get in touch with the key people in the MySQL community and ecosystem, and to get facts and information from the domain experts and developers that work on MySQL. The broad range of topics should attract people in a variety of roles and relations to MySQL, from developers and DBAs to CIOs considering MySQL as a viable solution for their requirements. Keith Larson: You will be attending MySQL Connect and have some Oracle Linux demos. Do you see a growing demand for MySQL on Oracle Linux? Lenz Grimmer: Yes! Oracle Linux is our recommended Linux distribution, and we have a good relationship with the MySQL engineering group. They use Oracle Linux as a base Linux platform for development and QA, so we make sure that MySQL and Oracle Linux are well tested together. Setting up a MySQL server on Oracle Linux can be done very quickly, and many customers recognize the benefits of using them both in combination. Because Oracle Linux is available for free (including free bug fixes and errata), it's an ideal choice for running MySQL in your data center. You can run the same Linux distribution on both your development/staging systems and the production machines, and you decide which of these should be covered by a support subscription and at which level of support. This gives you flexibility and provides some really attractive cost-saving opportunities. Keith Larson: Since I am a Linux user and fan, what is on the horizon for Oracle Linux? Lenz Grimmer: We're working hard on broadening the ecosystem around Oracle Linux, building up partnerships with ISVs and IHVs to certify Oracle Linux as a fully supported platform for their products. We also continue to collaborate closely with the Linux kernel community on various projects, to make sure that Linux scales and performs well on large systems and meets the demands of today's data centers. These improvements and enhancements will then be rolled into the Unbreakable Enterprise Kernel, which is the key ingredient that sets Oracle Linux apart from other distributions. 
We also have a number of ongoing projects which are making good progress, and I'm sure you'll hear more about this at the upcoming OpenWorld conference :) Keith Larson: What is something that more people should be aware of when it comes to Oracle Linux and MySQL? Lenz Grimmer: Many people assume that Oracle Linux is just tuned for Oracle products, such as the Oracle Database or our Engineered Systems. While it's of course true that we do a lot of testing and optimization for these workloads, Oracle Linux is and will remain a general-purpose Linux distribution that is a very good foundation for setting up a LAMP stack, for example. We also provide MySQL RPM packages for Oracle Linux, so you can easily stay up to date if you need something newer than what's included in the stock distribution. One more thing that is really unique to Oracle Linux is Ksplice, which allows you to apply security patches to the running Linux kernel without having to reboot. This ensures that your MySQL database server keeps up and running and is not affected by any downtime. Keith Larson: What else would you like to add? Lenz Grimmer: Thanks again for getting in touch with me, I appreciated the opportunity. I'm looking forward to MySQL Connect and Oracle OpenWorld, and to meeting you and many other people from the MySQL community that I haven't seen for quite some time! Keith Larson: Thank you Lenz!

    Read the article

  • Big Data – Is Big Data Relevant to me? – Big Data Questionnaires – Guest Post by Vinod Kumar

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions of SQL Server since 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod’s own voice. I think the series from Pinal is a good one for anyone planning to start the Big Data journey from the basics. In my daily customer interactions this buzz of “Big Data” always comes up, and I generally react by saying – “Sir, do you really have a ‘Big Data’ problem or do you have a big Data problem?” Generally, there is silence in the air when I ask this question. Data is everywhere in organizations – be it big data, small data, all data, and for a few it is bad data, which is the same as no data :). Wow, don’t discount me as someone who opposes “Big Data”; I am as much a supporter as I am a critic of the abuse of this term. In this post, I wanted to let my mind flow so that you can also think along the lines in which I see these concepts. In any case, this is not an exhaustive dump of what is in my mind – but you will surely get the drift of how I am going to question Big Data claims from customers!!! Is Big Data Relevant to me? Many of my customers talk to me like a blank whiteboard, with no idea of “why Big Data”. They want to jump on the technology bandwagon and decipher insights from their unexplored data, a.k.a. unstructured data, together with structured data. So what are the industry scenarios that come to mind? Here are some of them: Financials – Fraud detection: banks and credit card issuers monitor your spending habits on a real-time basis. Customer Segmentation: applies in every industry from banking to retail to aviation to utilities and others that deal with an end customer who consumes their products and services. Customer Sentiment Analysis: responding to negative brand perception on social media, or amplifying the positive perception. Sales and Marketing Campaigns: understand the impact and get closer to customer delight. Call Center Analysis: attempts to take unstructured voice recordings and analyze them for content and sentiment. Medical – Reduce Re-admissions: how to build proactive follow-up engagements with patients. Patient Monitoring: how to track inpatient, outpatient, emergency visits, intensive care units, etc. Preventive Care: disease identification and risk stratification is a very crucial business function for medical providers. Claims fraud detection: there is no precise dollar figure one can put here, but this is a big thing for the medical field. Retail – Customer Sentiment Analysis, Customer Care Centers, Campaign Management. Supply Chain Analysis: sensor and RFID data can be tracked for warehouse space optimization. Location-based marketing: based on where a check-in happens, retail stores can optimize their marketing. Telecom – Price optimization and Plans, Finding Customer churn, Customer loyalty programs, Call Detail Record (CDR) Analysis, Network optimizations, User Location analysis, Customer Behavior Analysis. Insurance – Fraud Detection & Analysis, Pricing based on Customer Sentiment Analysis, Loyalty Management, Agents Analysis, Customer Value Management. This list can go on to other areas like utilities, manufacturing, travel, ITES, etc. So as you can see, there are obviously interesting use cases for each of these industry verticals. This is just a representative list. Where to start? 
A lot of times I try to quiz customers on a number of dimensions before starting a Big Data conversation. Are you getting the data you need, the way you want it, and in a timely manner? Can you get in and analyze the data you need? How quick is IT to respond to your BI requests? How easily can you get at the data that you need to run your business/department/project? How are you currently measuring your business? Can you get the data you need to react WITHIN THE QUARTER to impact behaviors and meet your numbers, or is it always “rear-view mirror”? How are you measuring: the brand, customer sentiment, your competition, your pricing, your performance, supply chain efficiencies, predictive product/service positioning? What are your key challenges in driving collaboration across your global business? What are the challenges in innovation? What challenges are you facing in getting more information out of your data? Note: garbage in is garbage out; this holds good for all reporting/analytics requirements. Big Data POCs? A number of customers get into the realm of setting up a small team to work on Big Data – well, it is a great start from an understanding point of view, but I tend to ask a number of other questions of such customers. Some of these common questions are: To what degree is your advanced analytics (natural language processing, sentiment analysis, predictive analytics and classification) paired with your Big Data efforts? Do you have dedicated resources exploring the possibilities of advanced analytics in Big Data for your business line? Do you plan to employ machine learning technology while doing advanced analytics? How is social media being monitored in your organization? What is your ability to scale in terms of storage and processing power? Do you have a system in place to sort incoming data in near real time by potential value, data quality, and use frequency? Do you use event-driven architecture to manage incoming data? Do you have specialized data services that can accommodate different formats, security, and the management requirements of multiple data sources? Is your organization currently using or considering in-memory analytics? To what degree are you able to correlate data from your Big Data infrastructure with that from your enterprise data warehouse? Have you extended the role of Data Stewards to include ownership of big data components? Do you prioritize data quality based on the source system (that is, Facebook/Twitter data has lower quality thresholds than radio frequency identification (RFID) data from a tracking system)? Do your retention policies consider the different legal responsibilities for storing Big Data for a specific amount of time? Do Data Scientists work in close collaboration with Data Stewards to ensure data quality? How is access to attributes of Big Data granted in the organization? Are roles related to Big Data (Advanced Analyst, Data Scientist) clearly defined? How involved is risk management in the Big Data governance process? Is there a set of documented policies regarding Big Data governance? Is there an enforcement mechanism or approach to ensure that policies are followed? Who is the key sponsor for your Big Data governance program? (The CIO is best.) Do you have defined policies surrounding the use of social media data for potential employees and customers, as well as the use of customer geo-location data? How accessible are complex analytic routines to your user base? 
What is the level of involvement of outside vendors and third parties in the planning and execution of Big Data projects? What programming technologies are utilized by your data warehouse/BI staff when working with Big Data? These are some of the important questions I ask each customer who is actively evaluating Big Data trends for their organization. These questions give you a sense of direction on where to start, what to use, how to secure, how to analyze, and more. Sign off: Any Big Data analysis is incomplete without a compelling story. The best way to understand this is to watch the Hans Rosling – Gapminder videos (2:17 to 6:06) about third-world myths. Don’t get overwhelmed by the Big Data buzzword; the destination your data points to is what matters. In this blog post we did not look at any particular Big Data technologies. This is a set of questions to keep in mind as you embark on your Big Data journey. I did write some of the basics in my blog: Big Data – Big Hype yet Big Opportunity. Do let me know if these questions make sense. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – SSMS Automatically Generates TOP (100) PERCENT in Query Designer

    - by pinaldave
    Earlier this week, I was surfing various SQL forums to see what kind of help developers need in the SQL Server world. One of the questions indeed caught my attention. I am reproducing the complete question and scenario here to illustrate the point in a precise manner. Additionally, I have added a second part to the question for completeness. Question: I am trying to create a view in Query Designer (not in the New Query Window). Every time I try to create a view, it adds TOP (100) PERCENT automatically to the T-SQL script. No matter what I do, it always automatically adds the TOP (100) PERCENT to the script. I have attempted to copy-paste from Notepad, build a query, and a few other things – with no success. I am really not sure what I am doing wrong with Query Designer. Here is my query script: (I use AdventureWorks as a sample database) SELECT Person.Address.AddressID FROM Person.Address INNER JOIN Person.AddressType ON Person.Address.AddressID = Person.AddressType.AddressTypeID ORDER BY Person.Address.AddressID This script is automatically replaced by the following query: SELECT TOP (100) PERCENT Person.Address.AddressID FROM Person.Address INNER JOIN Person.AddressType ON Person.Address.AddressID = Person.AddressType.AddressTypeID ORDER BY Person.Address.AddressID However, when I try to do the same from the New Query Window it works totally fine. However, when I attempt to create a view of the same query it gives the following error: Msg 1033, Level 15, State 1, Procedure myView, Line 6 The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified. It is pretty clear to me now that the script which I have written seems to need TOP (100) PERCENT, so Query Designer adds it. Why do I need it? Is there any workaround to this issue? I find this question particularly interesting as it really touches the fundamentals of T-SQL query writing. Please note that the query which is automatically changed is not in the New Query Editor but opened from SSMS in the following way: Database >> Views >> Right Click >> New View. Answer: The answer to the above question can be very long, but I will keep it simple and to the point. There are three things to discuss regarding the above script: 1) the reason for the error, 2) the reason for the auto-generated TOP (100) PERCENT, and 3) potential solutions to the above error. Let us quickly see them in detail. 1) Reason for the Error The reason is already given in the error message itself: ORDER BY is invalid in views and a few other objects; one has to use TOP or other keywords along with it. The semantics of the query are such that the optimizer only follows (honors) an ORDER BY in the same scope, i.e. in the same SELECT/UPDATE/DELETE statement. Since one can always order again outside the scope of the view, the effort spent ordering inside the view would be wasted. The final resultset of a query always follows the outermost query’s ORDER BY, and for that reason the optimizer honors the final order of the query and not that of the view (as the view will be used in another query for further processing, e.g. in a SELECT statement). For the same reason, ORDER BY is not allowed in a view. For further accuracy and clear guidance I suggest you read this blog post by the Query Optimizer Team; they have explained the same subject in a very clear manner. 
2) Reason for the Auto-Generated TOP (100) PERCENT One of the most popular workarounds to the above error is to use TOP (100) PERCENT in the view. TOP (100) PERCENT allows the user to use ORDER BY in the query and thereby overcome the error we discussed. This gives the user the impression that they have resolved the error and can successfully use ORDER BY in the view. Well, this is incorrect as well. When TOP (100) PERCENT is used, the order of the result is not guaranteed; it is ignored in the query where the view is used. Here is the blog post on this subject: Interesting Observation – TOP 100 PERCENT and ORDER BY. So, when you create a new view in SSMS and build a query with ORDER BY, SSMS automatically adds TOP (100) PERCENT to avoid the error. Here is the Connect item for the same issue. I am sure there are more Connect items as well, but I could not find them. 3) Potential Solutions If you have been reading this post from the beginning, it should be clear by now that ORDER BY should not be used in a view, as it does not serve any purpose unless there is a specific need for it. If you are going to use TOP (100) PERCENT with ORDER BY, there is absolutely no need to use ORDER BY; rather, avoid it altogether. Here is another blog post of mine which describes the same subject: ORDER BY Does Not Work – Limitation of the Views Part 1. It is valid to use ORDER BY in a view if there is a clear business need for using TOP with a percentage lower than 100 (for example TOP 10 PERCENT or TOP 50 PERCENT, etc.). In most cases, ORDER BY is not needed in the view; it should be used in the outermost query to present the result in the desired order (see the sketch at the end of this post). You can remove TOP (100) PERCENT and ORDER BY from the view before using the view in any query or procedure, and put an ORDER BY in the outermost query as per the business need. I think this sums up the concept in a few words. This is a very long topic and not easy to illustrate in a single blog post. I welcome your comments and suggestions.
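To make the third point concrete, here is a minimal T-SQL sketch of the recommended pattern, reusing the AdventureWorks query and the view name myView from the question (the join condition is kept exactly as the asker wrote it):

    -- Define the view without TOP or ORDER BY: a view is an unordered set of rows.
    CREATE VIEW dbo.myView
    AS
    SELECT Person.Address.AddressID
    FROM Person.Address
    INNER JOIN Person.AddressType
    ON Person.Address.AddressID = Person.AddressType.AddressTypeID;
    GO
    -- Apply ORDER BY in the outermost query, the only place where it is honored.
    SELECT AddressID
    FROM dbo.myView
    ORDER BY AddressID;
    GO

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, SQL View, T SQL, Technology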

    Read the article

  • Varnish POST problem "9 FetchError c backend write error: 11" for application/x-www-form-urlencoded content

    - by ompap
    Cutting a longish story short, we have managed to get a more precise error out of varnishlog. Varnishlog tells us that we are sending 31 TxRequest - POST 31 TxHeader - Content-Type: application/x-www-form-urlencoded but we are getting 9 FetchError c backend write error: 11 31 BackendClose - [backend name] 9 VCL_call c error 9 VCL_return c deliver 9 Length c 488 9 VCL_call c deliver 9 VCL_return c deliver 9 TxProtocol c HTTP/1.1 9 TxStatus c 503 We still do not know exactly what this is, but apparently Content-Type: application/x-www-form-urlencoded is not getting through as it should. Help is still needed, please! The original message is below. The title was "Varnish not letting Joomla users log in - 503 guru meditation error", but I changed it to draw attention to the problem rather than the symptoms. Hello, we have a production site for a local newspaper which is currently behind an Apache reverse proxy: basically, the site is on one server and the other is reserved as a reverse proxy only (well, there is more, but that has no relevance here). Apache as a reverse proxy works, but could be faster. We want to change the reverse proxy to use Varnish instead of Apache on an Ubuntu 10.04 Server. Varnish is version 2.1.0, installed directly from the Ubuntu repos. Ubuntu 10.04 uses PHP 5.3.2. For anonymous surfers the site works wonderfully with Varnish. So far we can get very good speed out of Varnish; we just have a few problems with logging in or out. The big one is that users cannot log in: they get a Varnish 503 error page every time. The logs do not reveal the cause. It feels as if the request never leaves Varnish. So we are merely guessing - not a strong starting point. We have gone through what has been suggested in various places on the web. We have increased the timeouts to backend xxx { .host = "xxx.xx"; .port = "http"; .connect_timeout = 60s; .first_byte_timeout = 60s; .between_bytes_timeout = 60s; } but we seem to get the 503 guru error page much faster than that, in approximately 5 seconds. We have increased the Varnish header count to 128 in the daemon settings. In vcl_recv we have if (req.http.Authenticate || req.http.Authorization) { return(pass); } and in vcl_fetch ## authentication handling if (req.http.Authenticate || req.http.Authorization) { return(pass); } We do not strip cookies. We have tried to make sure that error pages are not cached. As said above, we cannot see anything in the backend Apache logs; apparently it never gets asked for Joomla user authentication. Varnish does not seem to get much mention in connection with Joomla. (We cannot dump Joomla; that selection has been made and we just have to live with what we have been given.) Does anyone have a working Varnish–Joomla combination? Thanks for reading. Please help. We need some hints - desperately. Any suggestions? ompap
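    For readers hitting the same symptom, one minimal experiment worth trying – a sketch under the assumption that the failure involves cache handling of request bodies, not a confirmed fix from this thread – is to hand every POST (such as a Joomla login submission) straight to the backend with a pass rule at the top of vcl_recv (Varnish 2.x syntax):

        sub vcl_recv {
            # POST bodies (e.g. Joomla login forms) must never be served from cache;
            # pass them to the backend untouched.
            if (req.request == "POST") {
                return(pass);
            }
        }

    If logins succeed with this rule in place, the problem lies in how Varnish handles the POST on the cache path rather than in the backend itself.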

    Read the article

  • Getting ready for Oracle Open World 2012: mark your calendars

    - by Eric Bezille
    Now less than a month away from Oracle's major event, held as every year in San Francisco from late September to early October, speculation is running high about the announcements that will be unveiled there... Without lifting the veil, I encourage you to look at the topics of the keynotes that will be given by Larry Ellison, Mark Hurd, Thomas Kurian (head of software development) and John Fowler (head of systems development), to get a foretaste. Oracle strategy and roadmaps: Of course, beyond the plenary sessions, which will give you a precise view of the strategy, and for those of you who will be on site, I urge you not to miss the deep-dive sessions taking place during the week. Here are a few selected highlights: "Accelerate your Business with the Oracle Hardware Advantage" with John Fowler, Monday October 1, 3:15pm-4:15pm; "Why Oracle Software Runs Best on Oracle Hardware", with Bradley Carlile, head of benchmarks, Monday October 1, 12:15pm-1:15pm; "Engineered Systems - from Vision to Game-changing Results", with Robert Shimp, Monday October 1, 1:45pm-2:45pm; "Database and Application Consolidation on SPARC Supercluster", with Hugo Rivero, a manager in the hardware and software integration teams, Monday October 1, 4:45pm-5:45pm; "Oracle’s SPARC Server Strategy Update", with Masood Heydari, head of SPARC server development, Tuesday October 2, 10:15am-11:15am; "Oracle Solaris 11 Strategy, Engineering Insights, and Roadmap", with Markus Flier, head of Solaris development, Wednesday October 3, 10:15am-11:15am; "Oracle Virtualization Strategy and Roadmap", with Wim Coekaerts, head of Oracle VM and Oracle Linux development, Monday October 1, 12:15pm-1:15pm; "Big Data: The Big Story", with Jean-Pierre Dijcks, head of Big Data product development, Monday October 1, 3:15pm-4:15pm; "Scaling with the Cloud: Strategies for Storage in Cloud Deployments", with Christine Rogers, Principal Product Manager, and Chris Wood, Senior Product Specialist, Storage, Monday October 1, 10:45am-11:45am. Customer experiences and testimonials: While Oracle Open World is an opportunity to talk directly with Oracle's development teams, it is also an opportunity to exchange with customers and experts who have deployed our technologies, and to benefit from their experience, for example: "Oracle Optimized Solution for Siebel CRM at ACCOR", with testimonials from Eric Wyttynck, IT Director Multichannel & CRM, and Pascal Massenet, VP Loyalty & CRM Systems, on the business as well as the project and IT benefits, Wednesday October 3, 1:15pm-2:15pm; "Tips from AT&T: Oracle E-Business Suite, Oracle Database, and SPARC Enterprise", with feedback from Oracle experts, Tuesday October 2, 11:45am-12:45pm; "Creating a Maximum Availability Architecture with SPARC SuperCluster", with a testimonial from Carte Wright, Database Engineer at CKI, Wednesday October 3, 11:45am-12:45pm; "Multitenancy: Everybody Talks It, Oracle Walks It with Pillar Axiom Storage", with a testimonial from Stephen Schleiger, Manager Systems Engineering at Navis, Monday October 1, 1:45pm-2:45pm; "Oracle Exadata for Database Consolidation: Best Practices", with feedback from the Oracle experts who helped deploy for a major banking customer, Monday October 1, 4:45pm-5:45pm; "Oracle Exadata Customer 
Panel: Packaged Applications with Oracle Exadata", moderated by Tim Shetler, VP Product Management, Tuesday October 2, 1:15pm-2:15pm; "Big Data: Improving Nearline Data Throughput with the StorageTek SL8500 Modular Library System", with a testimonial from Alan Powers, CTO of CSC, Thursday October 4, 12:45pm-1:45pm; "Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC", with testimonials from Syed Qadri, Lead DBA, and Michael Arnold, System Architect, of US Cellular, Tuesday October 2, 10:15am-11:15am; "Transform Data Center TCO with Oracle Optimized Servers: A Customer Panel", with testimonials from AT&T and Liberty Global among others, Tuesday October 2, 11:45am-12:45pm; "Data Warehouse and Big Data Customers’ View of the Future", with The Nielsen Company US, Turkcell, GE Retail Finance, and Allianz Managed Operations and Services SE, Monday October 1, 4:45pm-5:45pm; "Extreme Storage Scale and Efficiency: Lessons from a 100,000-Person Organization", the story of Oracle's internal IT on the transformation and migration of our entire storage infrastructure, Tuesday October 2, 1:15pm-2:15pm. Meet the user groups and the Oracle development teams: If you plan to arrive early enough, you can also talk with the user groups starting on Sunday, or every evening with the Oracle development teams, on topics such as: "To Exalogic or Not to Exalogic: An Architectural Journey", with Todd Sheetz, Manager of DBA and Enterprise Architecture, Veolia Environmental Services, Sunday September 30, 2:30pm-3:30pm; "Oracle Exalytics and Oracle TimesTen for Exalytics Best Practices", with Mark Rittman of Rittman Mead Consulting Ltd, Sunday September 30, 10:30am-11:30am; "Introduction of Oracle Exadata at Telenet: Bringing BI to Warp Speed", with Rudy Verlinden & Eric Bartholomeus, IT infrastructure managers at Telenet, Sunday September 30, 1:15pm-2:00pm; "The Perfect Marriage: Sun ZFS Storage Appliance with Oracle Exadata", with Melanie Polston, Director, Data Management, at Novation, and Charles Kim, Managing Director of Viscosity, Sunday September 30, 9:00am-10:00am; "Oracle’s Big Data Solutions: NoSQL, Connectors, R, and Appliance Technologies", with Jean-Pierre Dijcks and the Oracle development teams, Monday October 1, 6:15pm-7:00pm. Test and evaluate the solutions: And finally, you can even try out the technologies at the Oracle DemoGrounds (1133 Moscone South for the Oracle Systems, OS, and Virtualization part) and in the hands-on labs, such as: "Deploying an IaaS Environment with Oracle VM", Tuesday October 2, 10:15am-11:15am; "Virtualize and Deploy Oracle Applications in Minutes with Oracle VM: Hands-on Lab", Tuesday October 2, 11:45am-12:45pm (it is strongly recommended to take the previous hands-on lab before this one); 
"x86 Enterprise Cloud Infrastructure with Oracle VM 3.x and Sun ZFS Storage Appliance", le mercredi 3 Octobre, 5:00pm-6:00pm "StorageTek Tape Analytics: Managing Tape Has Never Been So Simple", le mercredi 3 Octobre, 1:15pm-2:15pm "Oracle’s Pillar Axiom 600 Storage System: Power and Ease", le lundi 1er Octobre, 12:15pm-1:15pm "Enterprise Cloud Infrastructure for SPARC with Oracle Enterprise Manager Ops Center 12c", le lundi 1er Octobre, 1:45pm-2:45pm "Managing Storage in the Cloud", le mardi 2 Octobre, 5:00pm-6:00pm "Learn How to Write MapReduce on Oracle’s Big Data Platform", le lundi 1er Octobre, 12:15pm-1:15pm "Oracle Big Data Analytics and R", le mardi 2 Octobre, 1:15pm-2:15pm "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications", le lundi 1er Octobre, 10:45am-11:45am "Managing Your Data with Built-In Oracle Solaris ZFS Data Services in Release 11", le lundi 1er Octobre, 4:45pm-5:45pm "Virtualizing Your Oracle Solaris 11 Environment", le mardi 2 Octobre, 1:15pm-2:15pm "Large-Scale Installation and Deployment of Oracle Solaris 11", le mercredi 3 Octobre, 3:30pm-4:30pm En conclusion, une semaine très riche en perspective, et qui vous permettra de balayer l'ensemble des sujets au coeur de vos préoccupations, de la stratégie à l'implémentation... Cette semaine doit se préparer, pour tailler votre agenda sur mesure, à travers les plus de 2000 sessions dont je ne vous ai fait qu'un extrait, et dont vous pouvez retrouver l'ensemble en ligne.

    Read the article

< Previous Page | 38 39 40 41 42 43 44  | Next Page >