Search Results

Search found 66801 results on 2673 pages for 'near real time'.


  • Natural language processing - Ideas for beginner's projects

    - by Microkernel
    Hi guys, I am a beginner in NLP and NLTK. I am very interested in NLP, so I joined a weekend course on AI at a local institution, which requires me to do a project to complete the course, and I decided to do mine in NLP. The problem is that the instructor is not good at all for this course (in my opinion she is just a charlatan, or perhaps she is not very interested in teaching, as this is her last batch here, after which the institute is going to send her out). So I am stuck in a situation where I have to finish this project within a month to a month and a half, but as someone new to the field I am finding it very difficult to grasp even what is needed to decide on a project. (Also, as I am working full time, I am not finding enough time to dedicate to it.) I am considering the NLTK toolkit in Python for the project, for the following reasons: (1) Python is famous for ease of use, rapid prototyping and a very active community (considering the very short span of time I have, and since I am a C programmer by profession, I need a language that I can learn fast and that is simple to use). (2) NLTK has good reviews, extensive documentation and a very active community. So the problem is what project I should take up, so that I can learn something and still be able to finish in time. (I know almost nothing in NLP; I don't even know what exactly a corpus is... :( ) So, please suggest some topics that I should consider for the project. Regards, MicroKernel :)
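
    For a sense of scale (an editorial sketch, assuming NLTK is installed and its data packages downloaded): a corpus is just a body of text NLTK can load and query, and a beginner-sized project such as a word-frequency and part-of-speech report is only a few lines:

        import nltk

        # one-time downloads; these are standard NLTK data package names
        nltk.download('punkt')
        nltk.download('averaged_perceptron_tagger')

        text = "NLTK makes small NLP projects very approachable."
        tokens = nltk.word_tokenize(text)            # split text into words
        tagged = nltk.pos_tag(tokens)                # label each word with a part of speech
        freq = nltk.FreqDist(t.lower() for t in tokens)

        print(tagged)
        print(freq.most_common(3))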


  • Prepopulating inlines based on the parent model in the Django Admin

    - by Alasdair
    I have two models, Event and Series, where each Event belongs to a Series. Most of the time, an Event's start_time is the same as its Series' default_time. Here's a stripped-down version of the models:

        #models.py
        class Series(models.Model):
            name = models.CharField(max_length=50)
            default_time = models.TimeField()

        class Event(models.Model):
            name = models.CharField(max_length=50)
            date = models.DateField()
            start_time = models.TimeField()
            series = models.ForeignKey(Series)

    I use inlines in the admin application, so that I can edit all the Events for a Series at once. If a series has already been created, I want to prepopulate the start_time for each inline Event with the Series' default_time. So far, I have created a model admin form for Event, and used the initial option to prepopulate the start_time field with a fixed time:

        #admin.py
        import datetime

        class EventInlineAdminForm(forms.ModelForm):
            start_time = forms.TimeField(initial=datetime.time(18, 30, 00))

            class Meta:
                model = Event

        class EventInline(admin.TabularInline):
            form = EventInlineAdminForm
            model = Event

        class SeriesAdmin(admin.ModelAdmin):
            inlines = [EventInline,]

    I am not sure how to proceed from here. Is it possible to extend the code, so that the initial value for the start_time field is the Series' default_time?
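
    One possible way forward (a sketch, not tested against the poster's setup; the BaseInlineFormSet hook is standard Django, but the wiring here is an assumption): give the inline a custom formset, which already knows the parent Series as self.instance, and seed each empty form's initial value from it:

        # admin.py (sketch)
        from django.contrib import admin
        from django.forms.models import BaseInlineFormSet

        class EventInlineFormSet(BaseInlineFormSet):
            def __init__(self, *args, **kwargs):
                super(EventInlineFormSet, self).__init__(*args, **kwargs)
                # self.instance is the parent Series being edited
                if self.instance is not None and self.instance.pk:
                    for form in self.forms:
                        if not form.instance.pk:  # only seed the empty extra forms
                            form.initial.setdefault('start_time', self.instance.default_time)

        class EventInline(admin.TabularInline):
            model = Event
            formset = EventInlineFormSet

        class SeriesAdmin(admin.ModelAdmin):
            inlines = [EventInline]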


  • WPF 3D extrude "a bitmap"

    - by Bgnt44
    Hi, I'm looking for a way to simulate a projector in WPF 3D. I have these "in" parameters: beam shape (a black and white bitmap file), beam size (e.g. 30°), beam color, beam intensity (dimmer), projector position (x, y, z), and beam position (pan (x), tilt (y), relative to the projector). First I was thinking of using a light object, but it seems that WPF can't do that. So now I think I could build, for each projector, a polygon from my bitmap. First I need to convert the black and white bitmap to vectors; only simple shapes (bubble, line, dot, cross...). Is there any WPF way to do that? Or maybe an external program (freeware)? Then I need to build the polygon from the shape of the converted bitmap, with color, size and orientation as parameters. I don't know how to define the length of the beam, or whether it can be infinite... To show the beam's result, I'm thinking of making a room (floor, walls...) and the beam will end at those walls. I don't care about a physically real light render (dispersion...), but the scene render has to be real time, at least 15 frames per second (with probably from one to 100 projectors at the same time); information about position, angle, shape and color will be sent for each render. So I need samples for all of this; I guess these things could be useful to other people too. If you have sample code for any of these: convert a bitmap to vectors; extrude vectors from one point with an angle parameter until they collide with a wall; set the x, y position of the beam depending on the projector position; set the alpha intensity and color of the beam. Maybe I'm totally wrong and WPF is not ready for that, so advise me about another way (XNA, D3D), with samples of course ;-) Thank you
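
    For the bitmap-to-vector step on its own, the usual approach is to trace the white regions of the mask into polygon outlines. A sketch of that step with OpenCV in Python (the file name is a placeholder; the traced points would then be handed to whatever builds the WPF/XNA geometry):

        import cv2

        mask = cv2.imread('beam_shape.bmp', cv2.IMREAD_GRAYSCALE)   # black & white beam shape
        _, binary = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
        # OpenCV 4.x return signature; 3.x returns an extra leading value
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # simplify each outline to a few vertices; these become the beam polygon
        polygons = [cv2.approxPolyDP(c, 2.0, True) for c in contours]
        print(len(polygons), 'shape(s) traced')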


  • Bitwise Interval Arithmetic

    - by KennyTM
    I've recently read an interesting thread on the D newsgroup, which basically asks: given two signed integers a ∈ [amin, amax], b ∈ [bmin, bmax], what is the tightest interval of a | b? I'm wondering whether interval arithmetic can be applied to the general bitwise operators (assuming infinite bits). Bitwise-NOT and the shifts are trivial, since they just correspond to -1 − x and 2^n·x. But bitwise-AND/OR are a lot trickier, due to the mix of bitwise and arithmetic properties. Is there a polynomial-time algorithm to compute the intervals of bitwise-AND/OR? (Note: assume all bitwise operations run in linear time in the number of bits, and that testing/setting a bit is constant time. The brute-force algorithm runs in exponential time.) Because ~(a | b) = ~a & ~b and a ^ b = (a | b) & ~(a & b), solving the bitwise-AND and -NOT problem implies bitwise-OR and -XOR are done too. Although the content of that thread suggests min{a | b} = max(amin, bmin), it is not the tightest bound. Just consider [2, 3] | [8, 9] = [10, 11].
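
    There is in fact a linear-time (in the bit count) answer for the unsigned case in Hacker's Delight (Warren, section 4-3); signed intervals can be reduced to it by splitting each interval at its sign boundary. A Python transcription of the minOR/maxOR procedures as I recall them; treat it as a sketch to verify against the book rather than a vetted implementation:

        W = 32
        MASK = (1 << W) - 1

        def min_or(a, b, c, d):
            # tightest lower bound of {x | y : a <= x <= b, c <= y <= d}
            m = 1 << (W - 1)
            while m:
                if ~a & c & m:
                    t = (a | m) & (-m & MASK)
                    if t <= b:
                        a = t
                        break
                elif a & ~c & m:
                    t = (c | m) & (-m & MASK)
                    if t <= d:
                        c = t
                        break
                m >>= 1
            return a | c

        def max_or(a, b, c, d):
            # tightest upper bound of {x | y : a <= x <= b, c <= y <= d}
            m = 1 << (W - 1)
            while m:
                if b & d & m:
                    t = ((b - m) | (m - 1)) & MASK
                    if t >= a:
                        b = t
                        break
                    t = ((d - m) | (m - 1)) & MASK
                    if t >= c:
                        d = t
                        break
                m >>= 1
            return b | d

        print(min_or(2, 3, 8, 9), max_or(2, 3, 8, 9))   # 10 11, matching the example above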


  • Fast screen capture and lost Vsync

    - by user338759
    Hi, I'd like to generate a movie in real time with a self-made application, doing fast screen captures with part of the screen occupied by a running 3D application. I'm aware that several applications already exist for this (like FRAPS or Taksi), and even dedicated DirectShow filters (like UScreenCapture), but I really need to do this from my own external application. When correctly set up (UScreenCapture + ffdshow), capturing and compressing a full screen does not consume as much CPU as you would expect (about 15%), and does not impair the performance of the 3D app. The problem with capturing from an external application is that the 3D application loses its Vsync and becomes a jerky, difficult-to-use 3D application (the 3D app is only presented on a small part of the screen; the rest is GDI and DirectX). FRAPS solves this problem by allowing you to capture only one application at a time (the one with focus). Depending on the technology used (OpenGL, DirectX, GDI), it hooks the Vsync and does its capture (with glReadPixels, ...) without perturbing it. That does not solve my problem, since I want the full composed screen image (including the 3D and the rest) AND a smooth 3D app. UScreenCapture seems to use a fast DirectX call to capture the whole screen, but the OpenGL 3D app still loses sync. Doing a BitBlt is too slow and too CPU-consuming for real-time 30 fps acquisition (at least under Windows XP; not sure about 7). My question is whether there is a way to achieve my goal with Windows 7 and its brand-new DirectX compositing engine. Windows 7 manages to show live, Vsynced, duplicated previews of every app (in the taskbar), so there must be a way to access the currently displayed screen buffer without perturbing the rendering of the 3D OpenGL app? Any other suggestion or technology? Thank you
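
    Whether BitBlt really falls short of 30 fps on a given machine is cheap to measure before reaching for anything more exotic. A rough ctypes sketch of that measurement (an editorial example, Windows only; GDI object cleanup and pixel readback are omitted, so treat the numbers as optimistic):

        import ctypes
        import time
        from ctypes import wintypes

        user32 = ctypes.windll.user32
        gdi32 = ctypes.windll.gdi32
        user32.GetDC.restype = wintypes.HDC
        gdi32.CreateCompatibleDC.restype = wintypes.HDC
        gdi32.CreateCompatibleBitmap.restype = wintypes.HBITMAP
        SRCCOPY = 0x00CC0020

        w = user32.GetSystemMetrics(0)   # SM_CXSCREEN
        h = user32.GetSystemMetrics(1)   # SM_CYSCREEN
        screen_dc = user32.GetDC(None)
        mem_dc = gdi32.CreateCompatibleDC(screen_dc)
        bmp = gdi32.CreateCompatibleBitmap(screen_dc, w, h)
        gdi32.SelectObject(mem_dc, bmp)

        frames = 100
        start = time.time()
        for _ in range(frames):
            gdi32.BitBlt(mem_dc, 0, 0, w, h, screen_dc, 0, 0, SRCCOPY)
        print('%.1f captures/sec' % (frames / (time.time() - start)))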


  • Actionscript: NetStream stutters after buffering.

    - by meandmycode
    Using NetStream to stream content over HTTP, I've noticed that, especially with certain exported H.264s, if the player encounters an empty buffer, it stops and buffers to the requested length (as expected). However, once the buffer is full, playback doesn't resume normally but instead jumps ahead, instantly playing the buffered duration in a brief moment, and thus triggering an empty buffer again. This then repeats over and over. Presumably, when the NetStream pauses to buffer, the playhead position keeps advancing, and the player attempts to snap to that position on resume; given that it can take 5 seconds to build a 2-second buffer, it ends up with a useless buffer again (this is an assumption). I've attempted to work around this by listening for an empty-buffer NetStatus event, pausing the stream, and at the same time setting up a loop to check the current buffer length vs. the requested buffer length, resuming once the buffer length is greater than or equal to the requested buffer. However, this causes problems when there isn't enough of the video remaining: for example, with a 10-second buffer and only 5 seconds remaining, the loop just sits there waiting for a buffer length of 10 seconds when there are only 5 left. You would think you could simply check which was smaller, the time left or the requested buffer length; however, the times Flash gives are not accurate. If you add the NetStream's current time index plus the buffered time, the total is not the entire duration of the movie (when at the end); it is close, but not the same. This brings me back to the original problem. If there is another way to fix this (clearly Flash knows when the buffer is ready), how can I get Flash to pause when it buffers and resume once the buffer is ready? Currently it doesn't: it pauses and then, once the buffer is full, plays the entire buffered content in about 0.1 of a second. Thanks in advance, Stephen.
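
    The resume test the poster is circling around can tolerate the inexact times if it targets the smaller of the two quantities and allows a little slack. Sketched in Python for clarity (the real thing would live in the ActionScript NetStatus handler; the half-second slack is a guess, not a measured value):

        def should_resume(buffer_length, buffer_time, play_head, duration, slack=0.5):
            # wait for the requested buffer OR the playable time actually left,
            # whichever is smaller; slack absorbs Flash's inexact time reporting
            remaining = max(duration - play_head, 0.0)
            target = min(buffer_time, remaining)
            return buffer_length + slack >= target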


  • python - tkinter - update label from variable

    - by Tom
    I wrote a Python script that does some stuff to generate, and then keep changing, some text stored as a string variable. This works, and I can print the string each time it gets changed. Problems have arisen while trying to display that output in a GUI (just as a basic label) using tkinter. I can get the label to display the string for the first time, but it never updates. This is really the first time I have tried to use tkinter, so it's likely I'm making a foolish error. What I've got looks logical to me, but I'm evidently going wrong somewhere!

        from tkinter import *

        outputText = 'Ready'
        counter = int(0)

        root = Tk()
        root.maxsize(400, 400)
        var = StringVar()

        l = Label(root, textvariable=var, anchor=NW, justify=LEFT, wraplength=398)
        l.pack()
        var.set(outputText)

        while True:
            counter = counter + 1
            #do some stuff that generates string as variable 'result'
            outputText = result
            #do some more stuff that generates new string as variable 'result'
            outputText = result
            #do some more stuff that generates new string as variable 'result'
            outputText = result
            if counter == 5:
                break

        root.mainloop()

    I also tried:

        from tkinter import *

        outputText = 'Ready'
        counter = int(0)

        root = Tk()
        root.maxsize(400, 400)
        var = StringVar()

        l = Label(root, textvariable=var, anchor=NW, justify=LEFT, wraplength=398)
        l.pack()
        var.set(outputText)

        while True:
            counter = counter + 1
            #do some stuff that generates string as variable 'result'
            outputText = result
            var.set(outputText)
            #do some more stuff that generates new string as variable 'result'
            outputText = result
            var.set(outputText)
            #do some more stuff that generates new string as variable 'result'
            outputText = result
            var.set(outputText)
            if counter == 5:
                break

        root.mainloop()

    In both cases, the label shows 'Ready' but won't update to show the strings as they're generated later. After a fair bit of googling and looking through answers on this site, I thought the solution might be to use update_idletasks; I tried putting that in after each time the variable was changed, but it didn't help. It also seems possible I am meant to be using trace and callback somehow to make all this work, but I can't get my head around how that works (I tried, but didn't manage to make anything that even looked like it would do something, let alone actually worked). I'm still very new to both Python and especially tkinter, so any help is much appreciated, but please spell it out for me if you can :)
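
    One working rearrangement (a sketch of the standard fix, with a stand-in for the poster's real work): never run a blocking loop before mainloop(); let Tk drive the updates with root.after(), so the label repaints between steps:

        from tkinter import *

        root = Tk()
        root.maxsize(400, 400)
        var = StringVar(value='Ready')
        Label(root, textvariable=var, anchor=NW, justify=LEFT, wraplength=398).pack()

        counter = 0

        def do_step():
            global counter
            counter += 1
            result = 'step %d done' % counter   # stand-in for the real work
            var.set(result)                     # the label updates on screen
            if counter < 5:
                root.after(1000, do_step)       # run again in 1 s; the GUI stays live

        root.after(1000, do_step)
        root.mainloop()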


  • Limit the number of rows returned on the server side (forced limit)

    - by evolve
    So we have a piece of software with a poorly written SQL statement which causes every row in a table to be returned. There are several million rows in the table, so this is causing serious memory issues and crashes on our client's machine. The vendor is in the process of creating a patch for the issue; however, it is still a few weeks out. In the meantime, we were trying to figure out a way of limiting the number of results returned on the server side, just as a temporary fix. I have no real hope of there being a solution; I've looked around and don't really see any way of doing this, but I'm hoping someone might have an idea. Thank you in advance. EDIT: I forgot an important piece of information: we have no access to the source code, so we cannot change this on the client side where the SQL statement is formed. There is no real server-side component; the client just accesses the database directly. Any solution would basically require a procedure, trigger, or some sort of SQL Server 2008 setting/command.


  • Browser Based Streaming Video/Audio (not progressive download)

    - by Josh
    Hello, I am trying to understand, conceptually, the best way to deliver real streaming audio and video content. I want it to be consumed with a web browser, using the least amount of proprietary technology. I wouldn't be serving static files and using progressive download; this would be real audio streams being captured live. How does one broadcast a stream that will be reasonably in sync with the source? What kind of protocol is suitable? Edit: In research I've found that there are a few protocols: RTSP, HTTP streaming, RTMP, and RTP. HTTP streaming is somewhat unsuitable if you are streaming a live performance/communication of some kind, because it relies on TCP (as it's HTTP-based) and you don't lose packets; in a low-bandwidth situation, the client can get significantly behind in playback. ref RTMP is a proprietary technology requiring Flash Media Server. Crap on that. The reason I looked at Flash is that it is extremely flexible as far as user experience goes; SoundManager2 provides an excellent JavaScript interface for playing media with Flash, and this is what I would look for in a client application. RTSP/RTP is what Microsoft switched to, deprecating their MMS protocol. RTSP is the control protocol; it's similar to HTTP with a few distinct differences: the server can also talk to the client, and there are additional commands, like PAUSE. It's also a stateful protocol, with state maintained via a session id. RTP is the protocol for delivering the payload (encoded audio or video). There are a few open-source projects, one of them supported by Apple here. It seems like this might do what I want it to, and it looks like quite a few players support it. It sounds like it would be suitable for a "live" broadcast, from this page here. Thanks, Josh


  • How to include multiple tables programmaticaly into a Sweave document using R

    - by PaulHurleyuk
    Hello, I want to have a Sweave document that will include a variable number of tables. I thought the example below would work, but it doesn't. I want to loop over the list foo and print each element as its own table:

        \documentclass[a4paper]{article}
        \usepackage[OT1]{fontenc}
        \usepackage{longtable}
        \usepackage{geometry}
        \usepackage{Sweave}
        \geometry{left=1.25in, right=1.25in, top=1in, bottom=1in}
        \listfiles

        \begin{document}

        <<label=start, echo=FALSE, include=FALSE>>=
        startt<-proc.time()[3]
        library(RODBC)
        library(psych)
        library(xtable)
        library(plyr)
        library(ggplot2)
        options(width=80)

        #Produce some example data: create some dummy data frames and put them in a list
        foo<-list()
        foo[[1]]<-data.frame(GRP=c(rep("AA",10), rep("Aa",10), rep("aa",10)), X1=rnorm(30), X2=rnorm(30,5,2))
        foo[[2]]<-data.frame(GRP=c(rep("BB",10), rep("bB",10), rep("BB",10)), X1=rnorm(30), X2=rnorm(30,5,2))
        foo[[3]]<-data.frame(GRP=c(rep("CC",12), rep("cc",18)), X1=rnorm(30), X2=rnorm(30,5,2))
        foo[[4]]<-data.frame(GRP=c(rep("DD",10), rep("Dd",10), rep("dd",10)), X1=rnorm(30), X2=rnorm(30,5,2))
        @

        \title{Document to test putting a variable number of tables into a Sweave document}
        \author{"Paul Hurley"}
        \maketitle

        \section{Text}
        This document was created on \today, with \Sexpr{print(version$version.string)}
        running on a \Sexpr{print(version$platform)} platform.
        It took approx \input{time} sec to process.

        <<label=test, echo=FALSE, results=tex>>=
        cat("Foo")
        @

        that was a test, so is this

        <<label=table1test, echo=FALSE, results=tex>>=
        print(xtable(foo[[1]]))
        @

        \newpage
        \subsection{Tables}
        <<label=Tables, echo=FALSE, results=tex>>=
        for(i in seq(foo)){
            cat("\n")
            cat(paste("Table_",i,sep=""))
            cat("\n")
            print(xtable(foo[[i]]))
            cat("\n")
        }
        #cat("<<label=endofTables>>= ")
        @

        <<label=bye, include=FALSE, echo=FALSE>>=
        endt<-proc.time()[3]
        elapsedtime<-as.numeric(endt-startt)
        @

        <<label=elapsed, include=FALSE, echo=FALSE>>=
        fileConn<-file("time.tex", "wt")
        writeLines(as.character(elapsedtime), fileConn)
        close(fileConn)
        @

        \end{document}

    Here, the table1test chunk works as expected and produces a table based on the data frame in foo[[1]]; however, the loop only produces Table_1... Any ideas what I'm doing wrong?


  • Integrating a Custom Compiler with the Visual Studio IDE

    - by M.A. Hanin
    Background: I want to create a custom VB compiler, extending the "original" compiler, to handle my custom compile-time attributes. Question: after I've created my custom compiler and I've got an executable file capable of compiling VB code via the standard command-line interface, how do I integrate this compiler with the Visual Studio IDE? (Such that pressing "compile" or "build" will use my compiler instead of the default compiler.) EDIT (correct me if I'm wrong): From the reactions here, I see this question is a bit shocking, so I shall further explain my needs and background. .NET provides us with a great mechanism called attributes. As far as I understand, for attributes to apply their intended behavior to the attributed element (assembly, module, class, method, etc.), they must be reflected upon; so the real trick here is reflecting, and applying behavior, at the right spot. Let's take serialization for example: we decorate a class with the Serializable attribute, then pass an instance of the class to the formatter's Serialize method. The formatter reflects upon the instance, checking whether it has the Serializable attribute, and acts accordingly. Now, if we examine the Synchronization, Flags, Obsolete and CLSCompliant attributes, the real question is: who reflects upon them? At least in some cases, it has to be the compiler (and/or the IDE). Therefore, it seems that if I wish to create custom attributes that change an element's behavior regardless of any specific consumer, I must extend the compiler to reflect upon them at compilation. Of course, these are not my personal insights: the book "Applied .NET Attributes" provides a complete example of creating a custom attribute and a custom C# compiler to reflect upon that attribute at compilation (the example is used to implement "Java-style checked exceptions").
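
    The Serializable example can be sketched concretely. In Python, with decorators standing in for .NET attributes (purely an analogy, not the .NET mechanism itself): the consumer reflects on the marker at run time, which is exactly why attributes that no runtime consumer ever inspects need the compiler itself to be that consumer:

        def serializable(cls):
            cls._serializable = True    # the "attribute": inert metadata on the class
            return cls

        @serializable
        class Point:
            def __init__(self, x, y):
                self.x, self.y = x, y

        def serialize(obj):
            # the formatter is the consumer: it reflects on the marker and acts
            if not getattr(type(obj), '_serializable', False):
                raise TypeError('not marked serializable')
            return vars(obj)

        print(serialize(Point(1, 2)))   # {'x': 1, 'y': 2}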


  • Iteration speed of int vs long

    - by jqno
    I have the following two programs:

        long startTime = System.currentTimeMillis();
        for (int i = 0; i < N; i++);
        long endTime = System.currentTimeMillis();
        System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");

    and

        long startTime = System.currentTimeMillis();
        for (long i = 0; i < N; i++);
        long endTime = System.currentTimeMillis();
        System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");

    Note: the only difference is the type of the loop variable (int and long). When I run this, the first program consistently prints between 0 and 16 msecs, regardless of the value of N. The second takes a lot longer. For N == Integer.MAX_VALUE, it runs in about 1800 msecs on my machine. The run time appears to be more or less linear in N. So why is this? I suppose the JIT compiler optimizes the int loop to death. And for good reason, because obviously it doesn't do anything. But why doesn't it do so for the long loop as well? A colleague thought we might be measuring the JIT compiler doing its work in the long loop, but since the run time seems to be linear in N, this probably isn't the case.


  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time, so performance is important. Right now, I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons. All this is done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler indeed generates SSE instructions.) Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, but that might take days, so I'd like to know the odds of success beforehand. Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays pumped into floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?
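
    For anyone weighing the same trade-off, the core of fixed-point is cheap to prototype. A Q16.16 sketch (Python for illustration; in C these would be int32 operations, with a 64-bit intermediate in the multiply):

        FRAC = 16
        ONE = 1 << FRAC

        def to_fix(x):
            return int(round(x * ONE))

        def to_float(f):
            return f / ONE

        def fmul(a, b):
            return (a * b) >> FRAC      # the extra shift every multiply costs

        def fdiv(a, b):
            return (a << FRAC) // b     # and every divide

        a, b = to_fix(3.25), to_fix(0.5)
        print(to_float(fmul(a, b)), to_float(fdiv(a, b)))   # 1.625 6.5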


  • Date since 1600 to NSDate?

    - by Steven Fisher
    I have a date that's stored as a number of days since January 1, 1600 that I need to deal with. This is a legacy date format that I need to read many, many times in my application. Previously, I'd been creating a calendar, empty date components and root date like this:

        self.gregorian = [[[NSCalendar alloc]
            initWithCalendarIdentifier: NSGregorianCalendar] autorelease];

        id rootComponents = [[[NSDateComponents alloc] init] autorelease];
        [rootComponents setYear: 1600];
        [rootComponents setMonth: 1];
        [rootComponents setDay: 1];
        self.rootDate = [gregorian dateFromComponents: rootComponents];

        self.offset = [[[NSDateComponents alloc] init] autorelease];

    Then, to convert the integer later to a date, I use this:

        [offset setDay: theLegacyDate];
        id eventDate = [gregorian dateByAddingComponents: offset
                                                  toDate: rootDate
                                                 options: 0];

    (I never change any values in offset anywhere else.) The problem is I'm getting a different time for rootDate on iOS vs. Mac OS X. On Mac OS X, I'm getting midnight. On iOS, I'm getting 8:12:28. (So far, it seems to be consistent about this.) When I add my number of days later, the weird time stays.

        OS       | legacyDate | rootDate                  | eventDate
        ======== | ========== | ========================= | =========================
        Mac OS X | 143671     | 1600-01-01 00:00:00 -0800 | 1993-05-11 00:00:00 -0700
        iOS      | 143671     | 1600-01-01 08:12:28 +0000 | 1993-05-11 07:12:28 +0000

    In the previous release of my product, I didn't care about the time; now I do. Why the weird time on iOS, and what should I do about it? (I'm assuming the hour difference is DST.) I've tried setting the hour, minute and second of rootComponents to 0; this has no impact. If I set them to something other than 0, it adds them to 8:12:28. I've been wondering if this has something to do with leap seconds or other cumulative clock changes. Or is this entirely the wrong approach to use on iOS?


  • How are you taking advantage of Multicore?

    - by tgamblin
    As someone in the world of HPC who came from the world of enterprise web development, I'm always curious to see how developers back in the "real world" are taking advantage of parallel computing. This is much more relevant now that all chips are going multicore, and it'll be even more relevant when there are thousands of cores on a chip instead of just a few. My questions are:

    1. How does this affect your software roadmap?
    2. I'm particularly interested in real stories about how multicore is affecting different software domains, so specify what kind of development you do in your answer (e.g. server side, client-side apps, scientific computing, etc).
    3. What are you doing with your existing code to take advantage of multicore machines, and what challenges have you faced? Are you using OpenMP, Erlang, Haskell, CUDA, TBB, UPC or something else?
    4. What do you plan to do as concurrency levels continue to increase, and how will you deal with hundreds or thousands of cores?
    5. If your domain doesn't easily benefit from parallel computation, then explaining why is interesting, too.

    Finally, I've framed this as a multicore question, but feel free to talk about other types of parallel computing. If you're porting part of your app to use MapReduce, or if MPI on large clusters is the paradigm for you, then definitely mention that, too.

    Update: If you do answer #5, mention whether you think things will change if there get to be more cores (100, 1000, etc.) than you can feed with available memory bandwidth (seeing as how bandwidth is getting smaller and smaller per core). Can you still use the remaining cores for your application?


  • how to export bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from Blender into an engine of sorts that I'm working on, but I'm getting the animations wrong. I can tell the basic motion paths are being followed, but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation/skinning, or in the way I am exporting the appropriate joint matrices from Blender in my exporter script. I'll explain the theory, the engine animation system and my Blender export script, hoping someone might catch the error in one or all of these.

    The theory (I'm using column-major ordering, since that's what I use in the engine because it's OpenGL-based):

    Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I were to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v.

    Now assume I have a skeleton with a single joint j in bind/rest pose. j is actually another matrix: a transform from j's local space to its parent space, which I'll denote Bj. If j were part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is, to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v.

    Now further assume I have a set of frames, each with a second transform Cj, which works the same as Bj, only for a different, arbitrary spatial configuration of joint j. Cj still takes vertices from j space to world space, but j is rotated and/or translated and/or scaled.

    Given the above, in order to skin vertex v at keyframe n, I need to:

    1. take v from world space to joint j space
    2. modify j (while v stays fixed in j space and is thus taken along in the transformation)
    3. take v back from the modified j space to world space

    So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v.

    Actually, I have one doubt here. I said the mesh to which v belongs has a transform M which takes from model space to world space, and I've also read in a couple of textbooks that it needs to be transformed from model space to joint space; but I also said in step 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiplies v' by M and not v, but I've tried changing this and it just screws things up in a different way, because there's something else wrong.

    Finally, if we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But since skinning is defined as v' = Cj * Bj^-1 * v, Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v.

    Now on to the implementation (Blender side):

    Assume the following mesh, made up of one cube whose vertices are bound to a single joint in a single-joint skeleton. Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is:

    - keyframe 0: the joint is in bind/rest pose (the way you see it in the image).
    - keyframe 30: the joint translates up (+z in Blender) some amount and at the same time rotates pi/4 rad clockwise.
    - keyframe 59: the joint goes back to the same configuration it was in at keyframe 0.

    My first source of confusion on the Blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the Python API. Right now, this is what my export script does to translate Blender's coordinate system to OpenGL's standard system:

        # World transform: Blender -> OpenGL
        worldTransform = Matrix().Identity(4)
        worldTransform *= Matrix.Scale(-1, 4, (0,0,1))
        worldTransform *= Matrix.Rotation(radians(90), 4, "X")

        # Mesh (local) transform matrix
        file.write('Mesh Transform:\n')
        localTransform = mesh.matrix_local.copy()
        localTransform = worldTransform * localTransform
        for col in localTransform.col:
            file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3]))
        file.write('\n')

    So if you will, my "world" matrix is basically the act of changing Blender's coordinate system to the default GL one, with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post- or pre- in terms of matrix multiplication order) the mesh matrix M, so that I don't need to multiply it again once per draw call in the engine.

    About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following. For joint bind poses:

        def DFSJointTraversal(file, skeleton, jointList):
            for joint in jointList:
                bindPoseJoint = skeleton.data.bones[joint.name]
                bindPoseTransform = bindPoseJoint.matrix_local.inverted()
                file.write('Joint ' + joint.name + ' Transform {\n')
                translationV = bindPoseTransform.to_translation()
                rotationQ = bindPoseTransform.to_3x3().to_quaternion()
                scaleV = bindPoseTransform.to_scale()
                file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
                file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
                file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
                DFSJointTraversal(file, skeleton, joint.children)
                file.write('}\n')

    Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same, only not homogeneous.

    For joint current/keyframe poses:

        for kfIndex in keyframes:
            bpy.context.scene.frame_set(kfIndex)
            file.write('keyframe: {:d}\n'.format(int(kfIndex)))
            for i in range(0, len(skeleton.data.bones)):
                file.write('joint: {:d}\n'.format(i))
                currentPoseJoint = skeleton.pose.bones[i]
                currentPoseTransform = currentPoseJoint.matrix
                translationV = currentPoseTransform.to_translation()
                rotationQ = currentPoseTransform.to_3x3().to_quaternion()
                scaleV = currentPoseTransform.to_scale()
                file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
                file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
                file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
            file.write('\n')

    Note that here I go for skeleton.pose.bones instead of data.bones, and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. From the descriptions in the Python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case.

    The implementation (engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting the parts of the update loop where it's figured out which objects need updating; time is hardcoded here for simplicity):

        static double time = 0;
        time = fmod((time + elapsedTime), 1.);
        uint16_t LERPKeyframeNumber = 60 * time;
        uint16_t lkeyframeNumber = 0;
        uint16_t lkeyframeIndex = 0;
        uint16_t rkeyframeNumber = 0;
        uint16_t rkeyframeIndex = 0;

        for (int i = 0; i < aClip.keyframesCount; i++) {
            uint16_t keyframeNumber = aClip.keyframes[i].number;
            if (keyframeNumber <= LERPKeyframeNumber) {
                lkeyframeIndex = i;
                lkeyframeNumber = keyframeNumber;
            }
            else {
                rkeyframeIndex = i;
                rkeyframeNumber = keyframeNumber;
                break;
            }
        }

        double lTime = lkeyframeNumber / 60.;
        double rTime = rkeyframeNumber / 60.;
        double blendFactor = (time - lTime) / (rTime - lTime);

        GLKMatrix4 bindPosePalette[aSkeleton.jointsCount];
        GLKMatrix4 currentPosePalette[aSkeleton.jointsCount];

        for (int i = 0; i < aSkeleton.jointsCount; i++) {
            F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i];
            F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i];

            GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor);
            GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor);
            GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor);

            GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation);
            currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation);
            currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling);

            GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q);
            inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t);
            inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s);

            if (aSkeleton.joints[i].parentIndex == -1) {
                bindPosePalette[i] = inverseBindTransform;
                currentPosePalette[i] = currentTransform;
            }
            else {
                bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]);
                currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform);
            }

            aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]);
        }

    Finally, this is my vertex shader:

        #version 100

        uniform mat4 modelMatrix;
        uniform mat3 normalMatrix;
        uniform mat4 projectionMatrix;
        uniform mat4 skinningPalette[6];
        uniform lowp float skinningEnabled;

        attribute vec4 position;
        attribute vec3 normal;
        attribute vec2 tCoordinates;
        attribute vec4 jointsWeights;
        attribute vec4 jointsIndices;

        varying highp vec2 tCoordinatesVarying;
        varying highp float lIntensity;

        void main() {
            tCoordinatesVarying = tCoordinates;

            vec4 skinnedVertexPosition = vec4(0.);
            for (int i = 0; i < 4; i++) {
                skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position;
            }

            vec4 skinnedNormal = vec4(0.);
            for (int i = 0; i < 4; i++) {
                skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.);
            }

            vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled);
            vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, skinningEnabled);

            vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz);
            vec3 lightPosition = vec3(0., 0., 2.);
            lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition)));

            gl_Position = projectionMatrix * modelMatrix * finalPosition;
        }

    The result is that the animation displays wrong in terms of orientation: instead of bobbing up and down, it bobs in and out (along what I think is the Z axis, according to my transform in the export script), and the rotation angle is counterclockwise instead of clockwise. If I try with more than one joint, then it's almost as if the second joint rotates in its own different coordinate space and does not follow 100% its parent's transform, which I assume it should, from my animation subsystem, which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?
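
    One cheap way to test the palette math outside the engine (an editorial numpy sketch, not the poster's code): build the global bind and current chains the same way the engine loop above does, and check that the skinning matrix collapses to identity whenever the current pose equals the bind pose:

        import numpy as np

        def palette(bind, current, parent):
            # bind[j], current[j]: local joint-to-parent 4x4s; parent[j] = -1 for the root
            n = len(bind)
            B = [None] * n
            C = [None] * n
            skin = [None] * n
            for j in range(n):
                p = parent[j]
                B[j] = bind[j] if p < 0 else B[p] @ bind[j]         # global bind pose
                C[j] = current[j] if p < 0 else C[p] @ current[j]   # global current pose
                skin[j] = C[j] @ np.linalg.inv(B[j])                # Cj * Bj^-1
            return skin

        I = np.eye(4)
        print(palette([I, I], [I, I], [-1, 0])[1])   # identity, since current == bind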


  • Best fit curve for trend line

    - by Dave Jarvis
    Problem Constraints

    - The size of the data set, but not the data itself, is known.
    - The data set grows by one data point at a time.
    - The trend line is graphed one data point at a time (using a spline/Bezier curve).

    Graphs

    The collage below shows data sets with reasonably accurate trend lines. The graphs are:

    - Upper-left: by hour, with ~24 data points.
    - Upper-right: by day for one year, with ~365 data points.
    - Lower-left: by week for one year, with ~52 data points.
    - Lower-right: by month for one year, with ~12 data points.

    User Inputs

    The user can select: the type of time series (hourly, daily, monthly, quarterly, annual); and the start and end dates for the time series. For example, the user could select a daily report for 30 days in June.

    Trend Weight

    To calculate the window size (i.e., the number of data points to average when calculating the trend line), the following expression is used:

        data points / trend weight

    where data points is derived from the user inputs and trend weight is 6.4. Even though a trend weight of 6.4 produces good fits, it is rather arbitrary, and might not be appropriate for different user inputs.

    Question

    How should trend weight be calculated, given the constraints of this problem?
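
    Whatever replaces the 6.4 constant, the window rule needs guards that the fixed weight hides. A sketch (the clamping bounds are an assumption, not from the post):

        def window_size(n_points, trend_weight=6.4):
            # number of data points averaged per trend-line sample,
            # clamped so it is never zero and never exceeds the data
            return max(1, min(n_points, round(n_points / trend_weight)))

        print(window_size(24), window_size(365), window_size(12))   # 4 57 2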


  • video streaming over http in blackberry

    - by ysnky
    hi all, while I was searching for a video player over HTTP, I found the article located at this URL: http://www.blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/1089414/Streaming_media_-_Start_to_finish.html?nodeid=2456737&vernum=0 — I can run it by adding ";deviceside=true" at the end of the URL. It works fine in the JDE 4.5 simulator; it gets 3gp videos from my local server. I tested with 580 KB files and it works fine. But when I get the same file from my server (not local, a real server) I have problems with big files (e.g. 580 KB). It plays 180 KB files (though sometimes it does not play those either) but not the 580 KB file. I also deployed my application to my 9000 device: it sometimes plays the small file (180 KB) but never plays the big file (580 KB). Why does it play when the file is on my local server, but not in the real world? I've been stuck for days; I hope you can help me. Also, the code at the URL given below does not work; the only code I've found is the one above: blackberry.com/knowledgecenterpublic/livelink.exe/fetch/2000/348583/800332/1089414/How_To_-_Play_video_within_a_BlackBerry_smartphone_application.html?nodeid=1383173&vernum=0. By the way, there is no method such as resize(long param) on the CircularByteBuffer class, so I commented out the relevant line (buffer.resize(buffer.getSize() + (buffer.getSize() * percent / 100));), as shown below:

        public void increaseBufferCapacity(int percent) {
            if (percent < 0) {
                log(0, "FAILED! SP.setBufferCapacity() - " + percent);
                throw new IllegalArgumentException("Increase factor must be positive..");
            }
            synchronized (readLock) {
                synchronized (connectionLock) {
                    synchronized (userSeekLock) {
                        synchronized (mediaIStream) {
                            log(0, "SP.setBufferCapacity() - " + percent);
                            //buffer.resize(buffer.getSize() + (buffer.getSize() * percent / 100));
                            this.bufferCapacity = buffer.getSize();
                        }
                    }
                }
            }
        }

    Thanks in advance.


  • PHP set timeout for script with system call, set_time_limit not working

    - by tehalive
    I have a command-line PHP script that runs a wget request for each member of an array, using foreach. This wget request can sometimes take a long time, so I want to be able to set a timeout that kills the script if it runs past, say, 15 seconds. I have PHP safe mode disabled and tried set_time_limit(15) early in the script; however, it continues indefinitely. Update: Thanks to Dor for pointing out that this is because set_time_limit() does not count time spent in system() calls. So I am trying to find other ways to kill the script after 15 seconds of execution. However, I'm not sure if it's possible to check the time a script has been running while it's in the middle of a wget request at the same time (a do-while loop did not work). Maybe fork a process with a timer and set it to kill the parent after a set amount of time? Thanks for any tips! Update: Below is my relevant code. $url is passed from the command line and is an array of multiple URLs (sorry for not posting this initially):

        foreach ($url as $key => $value) {
            $wget = "wget -r -H -nd -l 999 $value";
            system($wget);
        }
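
    The pattern the poster is after, sketched in Python for clarity (the post is PHP): run the fetch as a child process with a hard per-request limit, rather than relying on the interpreter's own script timer, which system()-style calls do not count against:

        import subprocess

        urls = ['http://example.com/']   # stands in for the post's $url array
        for value in urls:
            try:
                # same wget flags as the post; the child is killed on timeout
                subprocess.run(['wget', '-r', '-H', '-nd', '-l', '999', value],
                               timeout=15)
            except subprocess.TimeoutExpired:
                print('killed after 15 seconds:', value)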


  • SIMPLE PHP MVC Framework!

    - by Allen
    I need a simple and basic MVC example to get me started. I don't want to use any of the available packaged frameworks. I need a simple example of a simple PHP MVC framework that would allow, at most, the basic creation of a simple multi-page site. I am asking for a simple example because I learn best from simple real-world examples. Big popular frameworks (such as CodeIgniter) are too much for me to even try to understand, and any other "simple" examples I have found are not well explained or seem a little sketchy in general. I should add that most examples of simple MVC frameworks I see use mod_rewrite (for URL routing) or some other Apache-only method; I run PHP on IIS. I need to be able to understand a basic MVC framework, so that I could develop my own that would allow me to easily extend functionality with classes. I am at the point where I understand basic design patterns and MVC pretty well; I understand them in theory, but when it comes down to actually building a real-world, simple, well-designed MVC framework in PHP, I'm stuck. I would really appreciate some help! Edit: I just want to note that I am looking for a simple example that an experienced programmer could whip up in under an hour. I mean simple as in bare-bones simple. I don't want to use any huge frameworks; I am trying to roll my own. I need a decent SIMPLE example to get me going.
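
    For what the poster calls bare-bones simple, the whole pattern fits in a screenful. Sketched in Python rather than PHP, since only the shape matters: one front controller, a route table instead of mod_rewrite (so it is web-server agnostic, IIS included), controllers that fetch data, views that render it:

        # minimal front-controller MVC sketch (illustrative, not a framework)
        def article_view(model):                      # view: render data as HTML
            return '<h1>%(title)s</h1><p>%(body)s</p>' % model

        def articles_controller(params):              # controller: gather the model
            model = {'title': 'Hello', 'body': 'World'}
            return article_view(model)

        routes = {'/articles/show': articles_controller}   # router: URL -> controller

        def dispatch(path, params=None):               # front controller entry point
            handler = routes.get(path)
            return handler(params) if handler else '404 Not Found'

        print(dispatch('/articles/show'))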


  • Templates vs. coded HTML

    - by Alan Harris-Reid
    I have a web app consisting of some HTML forms for maintaining some tables (SQLite, with CherryPy for the web-server stuff). First I did it entirely 'the Python way', and generated HTML strings via code, with common headers, footers, etc. defined as functions in a separate module. I also like the idea of templates, so I tried Jinja2, which I find quite developer-friendly. In the beginning I thought templates were the way to go, but that was when pages were simple. Once .css and .js files were introduced (not necessarily in the same folder as the .html files), and an ever-increasing number of {{...}} variables and {%...%} commands were introduced, things started getting messy at design time, even though they looked great at run time. Things got even more difficult when I needed additional JavaScript in the <head> or <body> sections. As far as I can see, the main advantages of using templates are: non-dynamic elements of the page can easily be viewed in a browser during design; except for the {} placeholders, HTML is kept separate from Python code; and if your company has a web-page designer, they can still design without knowing Python. Some disadvantages are: the {{}} delimiters are visible when viewed at design time in a browser; associated .css and .js files have to be in the same folder to see their effects at design time; and data, variables, lists, etc. must be prepared in advance and either declared globally or passed as parameters to the render() function. So, when should one use 'hard-coded' HTML, and when templates? I am not sure of the best way to go, so I would be interested to hear other developers' views. TIA, Alan
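
    The two styles in miniature (a sketch assuming Jinja2, as used in the post):

        from jinja2 import Template

        # template style: layout lives in markup, data is passed at render time
        tpl = Template('<html><head><title>{{ title }}</title></head>'
                       '<body><h1>{{ title }}</h1>{{ body }}</body></html>')
        page = tpl.render(title='Contacts', body='<p>...</p>')

        # hand-coded style: layout lives in Python string-building functions
        def render_page(title, body):
            return ('<html><head><title>%s</title></head>'
                    '<body><h1>%s</h1>%s</body></html>' % (title, title, body))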


  • Jquery button.click bug?

    - by Chris
    I have the following home-grown jQuery plugin:

        (function($) {
            $.fn.defaultButton = function(button) {
                var field = $(this);
                var target = $(button);
                if (field.attr('type').toLowerCase() != 'text') return;
                field.keydown(function (e) {
                    if ((e.which || e.keyCode) == 13) {
                        console.log('enter');
                        target.click();
                        return false;
                    }
                });
            }
        })(jQuery);

    I'm using it like so:

        $('#SignUpForm input').defaultButton('#SignUpButton');

        $('#SignUpButton').click(function(e) {
            console.log('click');
            $.ajax({
                type: 'post',
                url: '<%=ResolveUrl("~/WebServices/ForumService.asmx/SignUp")%>',
                contentType: 'application/json; charset=utf-8',
                dataType: 'json',
                data: JSON.stringify({
                    email: $('#SignUpEmail').val(),
                    password: $('#SignUpPassword').val()
                }),
                success: function(msg) {
                    $.modal.close();
                }
            });
        });

    The first time, it works. The second time, nothing happens. I see enter and click the first time in the Firebug log, but the second time I only see the enter message. It's almost like the button's click handler is being unregistered somehow. Any thoughts?


  • How can I effectively test against the Windows API?

    - by Billy ONeal
    I'm still having issues justifying TDD to myself. As I have mentioned in other questions, 90% of the code I write does absolutely nothing but (1) call some Windows API functions and (2) print out the data returned from said functions. The time spent coming up with the fake data that the code needs to process under TDD is incredible: I literally spend five times as much time coming up with the example data as I would spend just writing application code. Part of the problem is that I'm often programming against APIs with which I have little experience, which forces me to write small applications that show me how the real API behaves, so that I can write effective fakes/mocks on top of that API. Writing implementation first is the opposite of TDD, but in this case it is unavoidable: I do not know how the real API behaves, so how on earth am I going to be able to create a fake implementation of the API without playing with it? I have read several books on the subject, including Kent Beck's Test Driven Development, By Example, and Michael Feathers' Working Effectively with Legacy Code, which seem to be gospel for TDD fanatics. Feathers' book comes close in the way it describes breaking out dependencies, but even then, the examples provided have one thing in common: the program under test obtains input from other parts of the program under test. My programs do not follow that pattern. Instead, the only input to the program itself is the system upon which it runs. How can one effectively employ TDD on such a project?
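
    One pattern that keeps the fake-data burden down (an editorial sketch, not from the post): put each API behind the thinnest seam that returns plain data, unit-test only the formatting logic against hand-made values, and cover the seam itself with a few integration tests on a real machine. Sketched in Python for brevity; the names and data shapes are invented for illustration:

        class Win32ProcessSource:
            def list_processes(self):
                ...  # the real API calls live here, exercised by integration tests only

        class FakeProcessSource:
            def __init__(self, rows):
                self.rows = rows
            def list_processes(self):
                return self.rows

        def format_report(source):
            # the 90% "print out the data" logic, now testable without the API
            return '\n'.join('%5d  %s' % (pid, name)
                             for pid, name in sorted(source.list_processes()))

        assert format_report(FakeProcessSource([(4, 'System'), (0, 'Idle')])) == \
            '    0  Idle\n    4  System'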


  • problems with infinite loop

    - by Tom
        function addAds($n) {
            for ($i = 0; $i <= $n; $i++) {
                while ($row = mysql_fetch_array(mysql_query("SELECT * FROM users"))) {
                    $aut[] = $row['name'];
                }
                $author = $aut[rand(0, mysql_num_rows(mysql_query("SELECT * FROM users")))];
                $name = "pavadinimas" . rand(0, 3600);
                $rnd = rand(0, 1);
                if ($rnd == 0) {
                    $type = "siulo";
                } else {
                    $type = "iesko";
                }
                $text = "tekstas" . md5("tekstas" . rand(0, 8000));
                $time = time() - rand(3600, 86400);
                $catid = rand(1, 9);
                switch ($catid) {
                    case 1: $subid = rand(1, 8); break;
                    case 2: $subid = rand(9, 16); break;
                    case 3: $subid = rand(17, 24); break;
                    case 4: $subid = rand(25, 32); break;
                    case 5: $subid = rand(33, 41); break;
                    case 6: $subid = rand(42, 49); break;
                    case 7: $subid = rand(50, 56); break;
                    case 8: $subid = rand(57, 64); break;
                    case 9: $subid = rand(65, 70); break;
                }
                mysql_query("INSERT INTO advert(author,name,type,text,time,catid,subid)
                             VALUES('$author','$name','$type','$text','$time','$catid','$subid')")
                    or die(mysql_error());
            }
            echo "$n adverts successfully added.";
        }

    The problem with this function is that it never finishes loading. As I noticed, my while loop causes it: if I comment it out, everything is OK. It has to get a random user from my DB and assign it to the variable $author.
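
    An editorial note for readers skimming past: the query sits inside the while condition, so mysql_query() re-executes on every iteration and mysql_fetch_array() keeps returning the first row of a fresh result set, which never terminates. The conventional shape is query once, then iterate the saved result, sketched here in Python with sqlite3 (the table and column names are taken from the post):

        import random
        import sqlite3

        conn = sqlite3.connect('example.db')   # stand-in for the MySQL connection
        conn.row_factory = sqlite3.Row

        rows = conn.execute('SELECT name FROM users').fetchall()   # run the query once
        names = [row['name'] for row in rows]                      # iterate the saved result
        author = random.choice(names)                              # random user, no off-by-one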


  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve any deeper into the abyss of Microsoft documentation, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional audit-trail setup: an audit-trail table that is a copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and a DML action field), inserted into by triggers, where the trigger populates the audit-trail table (which is all manual work). The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit-trail tables we've made so often. Can someone with any experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when they happened, and who made them. These changes are commonly provided to an end user, chronologically, via an audit-trail report. Which raises another question: if Change Data Capture or Change Tracking is the solution, I'd assume this data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.

