Search Results

Search found 14154 results on 567 pages for 'spooky note'.

Page 63/567 | < Previous Page | 59 60 61 62 63 64 65 66 67 68 69 70  | Next Page >

  • javascript removeChild(this) from input[type="submit"] onclick breaks future use of form.submit() under Firefox

    - by maximumduncan
    I have come across some strange behaviour, and I'm assuming a bug in Firefox, when removing an input submit element from the DOM from within the click event. The following code reproduces the issue: <form name="test_form"> <input type="submit" value="remove me" onclick="this.parentNode.removeChild(this);" /> <input type="submit" value="submit normally" /> <input type="button" value="submit via js" onclick="document.test_form.submit();" /> </form> To reproduce: click "remove me", then click "submit via js". Note that the form does not get submitted; this is the problem. Click "submit normally". Note that the form still gets submitted normally. It appears that, under Firefox, if you remove a submit button from within the click event it puts the form in an invalid state so that any future calls to form.submit() are simply ignored. But it is a JavaScript-specific issue, as normal submit buttons within this form still function fine. To be honest, this is such a simple example of this issue that I was expecting the internet to be awash with other people experiencing it, but so far searching has yielded nothing useful. Has anyone else experienced this and if so, did you get to the bottom of it? Many thanks
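
    A possible workaround (only a sketch; not verified against the Firefox behaviour described) is to avoid removing the button synchronously from inside its own click handler, either by hiding it or by deferring the removal until the event has finished dispatching:

        <!-- hide instead of remove, so the form's submit machinery is never disturbed -->
        <input type="submit" value="remove me" onclick="this.style.display = 'none';" />

        <!-- or defer the removal until after the click event has finished dispatching -->
        <input type="submit" value="remove me"
               onclick="var btn = this; setTimeout(function () { btn.parentNode.removeChild(btn); }, 0);" />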

    Read the article

  • Regex to extract this semi formatted data

    - by Codygman
    Alright, I can't quite figure out how to do this. Given the following text: Roland AX-1: /start Roland's AX-1 strap-on remote MIDI controller has a very impressive 45-note velocity sensitive keyboard, and has switchable velocity curves, goes octave up/down, transpose, split/layering zones, and has fun tempo control for sequencers and more. Roland's AX-1 comes with a built-in GS control for total MIDI control of GM/GS synths. Its "Expression Bar" can control pitch and mod via an almost ribbon-like controller. It's also the newest and most advanced remote controller for your synths or midi modules. /end Roland AX-7: /start Roland's AX-7 builds on the infamous Roland AX-1 design. You just strap it on and put it to the front of the stage. Offering several controllers, such as: a D-Beam, then you can open the door to amazing live performance. 7-segment LED display, larger patch memory (Around 128 patches with MIDI data backup), and comes with GM2/GS compatibility make it extra easy to use. The 45-note, velocity-sensitive keyboard. 5 realtime controllers including a data entry knob, touch controller knob, opression bar, a hold button, and D-Beam. 128 patches with MIDI data backup. 2 MIDI zones. /end I'm trying to use the following: /^([\w\d \-]*):\s\s\s\s^\/start([^\:]*)\/end$/im You can see on rubular here: http://rubular.com/r/BVRRHsnWdp Thanks for any help. I guess I'm trying to match blocks of text until I hit the next title, which always ends with a :$
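
    One hedged way to slice this in Ruby (since rubular suggests that's the target) is to keep the title on a single line and capture lazily up to /end; text below stands for the posted sample:

        # Sketch: titles are single lines ending in ':', bodies sit between /start and /end.
        # The /m flag lets '.' span newlines inside the body capture.
        pairs = text.scan(/^([^\n]+?):\s*\/start\s*(.*?)\s*\/end/m)
        pairs.each do |title, body|
          puts "#{title.strip}: #{body.strip[0, 40]}..."
        end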

    Read the article

  • Environment variable names with parentheses, like %ProgramFiles(x86)%, in PowerShell?

    - by jwfearn
    How does one get the value of an environment variable whose name contains parentheses in a PowerShell script? To complicate matters, some variable names contain parentheses while others have similar names without parentheses. For example (using cmd.exe): C:\>set | find "ProgramFiles" CommonProgramFiles=C:\Program Files\Common Files CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files ProgramFiles=C:\Program Files ProgramFiles(x86)=C:\Program Files (x86) We see that %ProgramFiles% is not the same as %ProgramFiles(x86)%. My PowerShell code is failing in a weird way because it's ignoring the part of the environment variable name after the parentheses. Since this happens to match the name of a different, but existing, environment variable, my code doesn't fail; I just get the right value of the wrong variable. Here's a test function in the PowerShell scripting language to illustrate my problem: function Do-Test { $ok = "C:\Program Files (x86)" # note space between 's' and '( $bad = "$Env:ProgramFiles" + "(x86)" # uses %ProgramFiles% $bin32 = "$Env:ProgramFiles(x86)" # LINE 6, I want to use %ProgramFiles(x86)% if ( $bin32 -eq $ok ) { Write-Output "Pass" } elseif ( $bin32 -eq $bad ) { Write-Output "Fail: %ProgramFiles% used instead of %ProgramFiles(x86)%" } else { Write-Output "Fail: some other reason" } } And here's the output: PS> Do-Test Fail: %ProgramFiles% used instead of %ProgramFiles(x86)% Is there a simple change I can make to line 6 above to get the correct value of %ProgramFiles(x86)%? *NOTE: In the text of this post I am using batch file syntax for environment variables as a convenient shorthand. For example %SOME_VARIABLE% means "the value of the environment variable whose name is SOME_VARIABLE". If I knew the properly escaped syntax in PowerShell, I wouldn't need to ask this question.*
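
    For reference, PowerShell's ${...} syntax lets the whole variable name, parentheses included, sit inside braces; a one-line sketch of what line 6 could become:

        # Braces delimit the variable name, so "(x86)" stays part of the name instead of ending it
        $bin32 = ${Env:ProgramFiles(x86)}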

    Read the article

  • Relative path from an ASP.NET user control NavigateUrl

    - by Daniel Ballinger
    I have a user control that contains a GridView. The GridView has both a HyperLinkField column and a template column that contains a HyperLink control. The ASP.NET project is structured as follows, with the Default.aspx page in each case using the user control. Application Root Controls UserControl with GridView SystemAdminFolder Default.aspx Edit.aspx OrganisationAdminFolder Default.aspx Edit.aspx StandardUserFolder Default.aspx Edit.aspx Note: The folders are being used to ensure the user has the correct role. I need to be able to set the DataNavigateUrlFormatString for the HyperLinkField and the NavigateUrl for the HyperLink to resolve to the Edit.aspx page in the corresponding folder. If I set the navigate URL to "Edit.aspx" the URL in the browser appears as 'http://Application Root/Controls/Edit.aspx' regardless of the originating directory. I can't use the Web application root operator (~/) as the path needs to be relative to the current page, not the application root. How can I use the same user control in multiple folders and resolve the URL to another page in the same folder? Note: The question is strongly based off a similar question by azhar2000s on the asp.net forums that matches my problem.
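
    One hedged option (the names below are illustrative and this is untested) is to build the link in the user control's code-behind from the folder of the page currently executing, so each copy of Default.aspx points at its sibling Edit.aspx:

        // Sketch: derive the hosting page's folder at runtime, then point the link there.
        string folder  = VirtualPathUtility.GetDirectory(Request.AppRelativeCurrentExecutionFilePath);
        string editUrl = ResolveUrl(folder + "Edit.aspx");   // e.g. "~/SystemAdminFolder/" + "Edit.aspx"
        editLink.NavigateUrl = editUrl;                       // 'editLink' is a hypothetical HyperLink in the grid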

    Read the article

  • Which itch does a gravatar scratch?

    - by WizardOfOdds
    This is a very serious question: I've seen lots of threads here about gravatars but I couldn't find an answer to this question: what computer identification/authentication (?) problem, if any, are gravatars supposed to solve? Neither the Wikipedia entry nor the official website is very useful. The official website mentions a "globally unique" picture. Unique in what sense? As far as I can see it's only the hash that is unique: two persons can have two pictures looking very similar if not identical. Note that this question is not about which problems gravatars unarguably cause (like leaking 10% of the stackoverflow.com accounts' email addresses, as discussed here: "gravatars can leak email adresses") but about which authentication (?) problems, if any, are gravatars supposed to solve? Is the goal just to have a cool/funny/cute icon and save bandwidth by having it stored on a remote website or is there more to it, like serving a real authentication purpose which I'd be completely missing? Note that I've got nothing against them and find them rather cool, but I'm just having a hard time figuring out what their purpose is and if I should care or not about them in the webapps I'm developing.

    Read the article

  • Returning ifstream in a function

    - by wrongusername
    Here's probably a very noobish question for you: How (if at all possible) can I return an ifstream from a function? Basically, I need to obtain the filename of a database from the user, and if the database with that filename does not exist, then I need to create that file for the user. I know how to do that, but only by asking the user to restart the program after creating the file. I wanted to avoid that inconvenience for the user if possible, but the function below does not compile in gcc: ifstream getFile() { string fileName; cout << "Please enter in the name of the file you'd like to open: "; cin >> fileName; ifstream first(fileName.c_str()); if(first.fail()) { cout << "File " << fileName << " not found.\n"; first.close(); ofstream second(fileName.c_str()); cout << "File created.\n"; second.close(); ifstream third(fileName.c_str()); return third; //compiler error here } else return first; } EDIT: sorry, forgot to tell you where and what the compiler error was: main.cpp:45: note: synthesized method ‘std::basic_ifstream<char, std::char_traits<char> >::basic_ifstream(const std::basic_ifstream<char, std::char_traits<char> >&)’ first required here EDIT: I changed the function to return a pointer instead as Remus suggested, and changed the line in main() to "ifstream database = *getFile()"; now I get this error again, but this time in the line in main(): main.cpp:27: note: synthesized method ‘std::basic_ifstream<char, std::char_traits<char> >::basic_ifstream(const std::basic_ifstream<char, std::char_traits<char> >&)’ first required here
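
    The error is the stream copy: pre-C++11 iostreams have no copy constructor, which is exactly what the 'synthesized method ... basic_ifstream(const basic_ifstream&)' note is pointing at. One common workaround (a sketch, not the only option) is to let the caller own the stream and have the function open it through a reference:

        #include <fstream>
        #include <iostream>
        #include <string>
        using namespace std;

        // Sketch: the caller constructs the ifstream; this function only opens it.
        void getFile(ifstream &in) {
            string fileName;
            cout << "Please enter in the name of the file you'd like to open: ";
            cin >> fileName;
            in.open(fileName.c_str());
            if (in.fail()) {
                cout << "File " << fileName << " not found.\n";
                in.clear();                          // reset failbit before reusing the same stream
                ofstream create(fileName.c_str());   // create the missing file
                create.close();
                in.open(fileName.c_str());
                cout << "File created.\n";
            }
        }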

    Read the article

  • Help with boost::lambda expression

    - by Venkat Shivanandan
    I tried to write a function that calculates a hamming distance between two codewords using the boost lambda library. I have the following code: #include <iostream> #include <numeric> #include <boost/function.hpp> #include <boost/lambda/lambda.hpp> #include <boost/lambda/if.hpp> #include <boost/bind.hpp> #include <boost/array.hpp> template<typename Container> int hammingDistance(Container & a, Container & b) { return std::inner_product( a.begin(), a.end(), b.begin(), (_1 + _2), boost::lambda::if_then_else_return(_1 != _2, 1, 0) ); } int main() { boost::array<int, 3> a = {1, 0, 1}, b = {0, 1, 1}; std::cout << hammingDistance(a, b) << std::endl; } And the error I am getting is: HammingDistance.cpp: In function ‘int hammingDistance(Container&, Container&)’: HammingDistance.cpp:15: error: no match for ‘operator+’ in ‘<unnamed>::_1 + <unnamed>::_2’ HammingDistance.cpp:17: error: no match for ‘operator!=’ in ‘<unnamed>::_1 != <unnamed>::_2’ /usr/include/c++/4.3/boost/function/function_base.hpp:757: note: candidates are: bool boost::operator!=(boost::detail::function::useless_clear_type*, const boost::function_base&) /usr/include/c++/4.3/boost/function/function_base.hpp:745: note: bool boost::operator!=(const boost::function_base&, boost::detail::function::useless_clear_type*) This is the first time I am playing with boost lambda. Please tell me where I am going wrong. Thanks.
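
    Two details stand out (hedged, without compiling it here): the unqualified _1/_2 resolve to boost::bind's placeholders, which define neither operator+ nor operator!= (hence the '<unnamed>::_1' in the errors), and the call is missing inner_product's initial-value argument. Qualified placeholders plus a 0 start value would look roughly like:

        namespace bl = boost::lambda;

        template <typename Container>
        int hammingDistance(Container &a, Container &b) {
            // initial value 0, then the "sum" functor, then the per-element "product" functor
            return std::inner_product(a.begin(), a.end(), b.begin(), 0,
                                      bl::_1 + bl::_2,
                                      bl::if_then_else_return(bl::_1 != bl::_2, 1, 0));
        }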

    Read the article

  • How to decode a JSON String

    - by ilnur777
    Hi, everybody! Could I ask you to help me decode this JSON string: $json = '{"inbox":[{"from":"55512351","date":"29\/03\/2010","time":"21:24:10","utcOffsetSeconds":3600,"recipients":[{"address":"55512351","name":"55512351","deliveryStatus":"notRequested"}],"body":"This is message text."},{"from":"55512351","date":"29\/03\/2010","time":"21:24:12","utcOffsetSeconds":3600,"recipients":[{"address":"55512351","name":"55512351","deliveryStatus":"notRequested"}],"body":"This is message text."},{"from":"55512351","date":"29\/03\/2010","time":"21:24:13","utcOffsetSeconds":3600,"recipients":[{"address":"55512351","name":"55512351","deliveryStatus":"notRequested"}],"body":"This is message text."},{"from":"55512351","date":"29\/03\/2010","time":"21:24:13","utcOffsetSeconds":3600,"recipients":[{"address":"55512351","name":"55512351","deliveryStatus":"notRequested"}],"body":"This is message text."}]}'; I would like to organize the above structure like this: Note 1: Folder: inbox From (from): ... Date (date): ... Time (time): ... utcOffsetSeconds: ... Recipient (address): ... Recipient (name): ... Status (deliveryStatus): ... Text (body): ... Note 2: ... Thank you in advance!
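
    A rough sketch of the PHP side (assuming the string is valid JSON): passing true as the second argument to json_decode() yields nested arrays that can be walked directly into that layout:

        $data = json_decode($json, true);
        foreach ($data['inbox'] as $i => $message) {
            echo 'Note ' . ($i + 1) . ":\n";
            echo "Folder: inbox\n";
            echo "From (from): {$message['from']}\n";
            echo "Date (date): {$message['date']}\n";
            echo "Time (time): {$message['time']}\n";
            echo "utcOffsetSeconds: {$message['utcOffsetSeconds']}\n";
            foreach ($message['recipients'] as $recipient) {
                echo "Recipient (address): {$recipient['address']}\n";
                echo "Recipient (name): {$recipient['name']}\n";
                echo "Status (deliveryStatus): {$recipient['deliveryStatus']}\n";
            }
            echo "Text (body): {$message['body']}\n\n";
        }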

    Read the article

  • How to change disclosure style when user enters in edit mode of a UITableView?

    - by R31n4ld0
    Hello, guys. I have a UITableView that in 'normal' mode shows a UITableViewCellAccessoryDisclosureIndicator, meaning that if the user taps the row, another list is shown, as the HIG says: "Disclosure indicator. When this element is present, users know they can tap anywhere in the row to see the next level in the hierarchy or the choices associated with the list item. Use a disclosure indicator in a row when selecting the row results in the display of another list. Don’t use a disclosure indicator to reveal detailed information about the list item; instead, use a detail disclosure button for this purpose." When the user taps the Edit button in the top bar of the UITableView, I think I have to change the disclosure because if the user taps it, a view for changing the information of the current row is shown (see the bold line above), again, as the HIG says: "Detail disclosure button. Users tap this element to see detailed information about the list item. (Note that you can use this element in views other than table views, to reveal additional details about something; see “Detail Disclosure Buttons” for more information.) In a table view, use a detail disclosure button in a row to display details about the list item. Note that the detail disclosure button, unlike the disclosure indicator, can perform an action that is separate from the selection of the row. For example, in Phone Favorites, tapping the row initiates a call to the contact; tapping the detail disclosure button in the row reveals more information about the contact." Have I misunderstood the HIG, or do I really have to change the disclosure style in the UITableView's edit mode? If so, how can I intercept the edit mode when the user taps the Edit button? Thanks in advance.
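
    One approach worth trying (a sketch, assuming a UIKit version where cells carry a separate editing accessory): give each cell both accessory types up front and the table swaps them when it enters edit mode, while detail-button taps arrive through their own delegate callback:

        // In tableView:cellForRowAtIndexPath: (sketch)
        cell.accessoryType        = UITableViewCellAccessoryDisclosureIndicator;    // normal mode
        cell.editingAccessoryType = UITableViewCellAccessoryDetailDisclosureButton; // while editing

        // The detail button reports separately from row selection:
        - (void)tableView:(UITableView *)tableView accessoryButtonTappedForRowWithIndexPath:(NSIndexPath *)indexPath {
            // present the editing view for the item at indexPath
        }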

    Read the article

  • Rails 3 - Category Filter Using Select - Load Partial Via Ajax

    - by fourfour
    Hey. I am trying to filter Client Comments by using a select and rendering it in a partial. Right now the partial loads @client.comments. I have a Category model with a Categorizations join. This all works; I just need to know how to get the select to call the filter action and load the partial with ajax. Thanks for your help. Categories controller: def filter_category @categories = Category.all respond_to do |format| format.js # filter.rjs end end filter.js.erb: page.replace_html 'client-not-inner', :partial => 'comments', :locals => { :com => Category.first.comments } show.html.erb (clients) <% form_tag(filter_category_path(:id), :method => :put, :class => 'categories', :remote => true, :controller => 'categoires', :action => 'filter') do %> <label>Categories</label> <%= select_tag(:category, options_for_select(Category.all.map {|category| [category.name, category.id]}, @category_id)) %> <% end %> <div class="client-note-inner"> <%= render :partial => 'comments', :locals => { :com => @comments } %> </div><!--end client-note-inner--> Hope that makes sense. Cheers.
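
    Assuming jQuery plus the Rails 3 UJS adapter is on the page (the selector below is a guess, not taken from the markup), one rough way to fire the filter is to submit the surrounding remote form whenever the select changes:

        // application.js (sketch): submit the :remote => true form when the category changes
        $(document).ready(function () {
          $('#category').change(function () {
            $(this).closest('form').submit();   // UJS intercepts this and performs the AJAX submit
          });
        });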

    Read the article

  • Entity Framework - Merging 2 physical tables into one "virtual" table problems...

    - by Keith Barrows
    I have been reading up on porting ASP.NET Membership Provider into .NET 3.5 using LINQ & Entities. However, the DB model that every single sample shows is the newer model while I've inherited a rather old model. Differences: The User Table is split into a pair of User & Membership Tables. All of the tables in the DB are prepended with aspnet_ I have Lowered versions of some columns (UserName, Email, etc) To work with this I have copied the properties from the Membership table into the User table (in the DB this is a 1<-1 relationship, not a 1<-0,1), renamed aspnet_Applications to Application, aspnet_Profiles to Profile, aspnet_Users to User and aspnet_Roles to Role. (See image) Link to full size image of model Now, I am running into one of 2 problems when I try to compile. Using the model in the image I get this error: Problem in Mapping Fragment starting at line 464: EntitySets 'UserSet' and 'aspnet_Membership' are both mapped to table 'aspnet_Membership'. Their Primary Keys may collide. If I delete the aspnet_Membership table from my model (to handle the above error) I then get: Problem in Mapping Fragment starting at line 384: Column aspnet_Membership.ApplicationId in table aspnet_Membership must be mapped: It has no default value and is not nullable. My ability to hand edit the backing stores is not the best and I don't want to just hack something in that may break other things. I am looking for suggestions, best practices, etc to handle this. Note: Moving the data tables themselves is not an option as I cannot replace all the logic in the existing apps. I am building this EF Provider for a new App. Over the next 6 months the old app(s) will migrate bit-by-bit to the new structures. Note: I added a link just under the image to the full size image for better viewing.

    Read the article

  • Invoking WCF service from Javascript

    - by KhanS
    I have an ASP.NET web application with some JavaScript code in it. While calling the service I am getting the error 'Service1 is undefined'. Below is my code. Service: namespace WebApplication2 { // NOTE: You can use the "Rename" command on the "Refactor" menu to change the interface name "IService1" in both code and config file together. [ServiceContract(Namespace="WCFServices")] public interface IService1 { [OperationContract] string HelloWorld(); } } Implementation: namespace WebApplication2 { // NOTE: You can use the "Rename" command on the "Refactor" menu to change the class name "Service1" in code, svc and config file together. [ServiceBehavior(IncludeExceptionDetailInFaults = true)] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class Service1 : IService1 { public string HelloWorld() { return "Hello world from service"; } } } ASPX page: <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> <asp:ScriptManager ID="QNAScriptManager" runat="server"> <Services> <asp:ServiceReference Path="~/Service1.svc" /> </Services> <Scripts> <asp:ScriptReference Path="~/Scripts/Questions.js" /> </Scripts> </asp:ScriptManager> </asp:Content> JavaScript: var ServiceProxy; function pageLoad() { ServiceProxy = new Service1(); ServiceProxy.set_defaultSucceededCallback(SucceededCallback); } function GetString() { ServiceProxy.HelloWorld(); } function SucceededCallback(result, userContext, methodName) { var RsltElem = document.getElementById("Results"); RsltElem.innerHTML = result + " from " + methodName + "."; alert("Msg received from service"); }

    Read the article

  • Archive log transfer from Oracle 9i to Oracle 10g

    - by Jamie Love
    Hi all, I have a situation where I need to transfer Oracle 9i archive logs to an Oracle 10g database, from where they are to be mined by a log-miner and then used by an Oracle streams capture/apply processes. (Oracle 9 archive logs can be read by the Oracle 10 logminer - I can manually copy the archive logs across, manually register them and have them mined, captured then applied). The difficulty is that the way Oracle does archive log transfer changed quite a bit between 9i and 10g and setting up the 9i database to transfer to the remote machine like so: log_archive_dest_state_2 = enable log_archive_dest_2 = "service=OTHERMACHINE arch optional" no longer works. I get this in the 9i logs: *** 2009-05-22 04:03:44.149 RFS network connection lost at host 'OTHERMACHINE' Error 3113 attaching RFS server to standby instance at host 'OTHERMACHINE' Error 3113 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'OTHERMACHINE' Heartbeat failed to connect to standby 'OTHERMACHINE'. Error is 3113. *** 2009-05-22 04:03:44.150 kcrrfail: dest:2 err:3113 force:0 ORA-03113: end-of-file on communication channel And in the 10g log I get: Fri May 22 04:07:42 2009 WARNING: inbound connection timed out (ORA-3136) My question is: Does anyone know how I could configure my 9i or 10g server such that the 10g server will accept the 9i connection in such a way that I can transfer the 9i archive logs to the 10g server. It would be a bonus if the archive logs would be automatically registered in the 10g server. Note I have not set up a full DataGuard configuration here and the 10g database is not a secondary server. Thanks for any suggestions. Edit Note that I can log on to the 10g server from the 9i server via sqlplus, so connectivity is not the problem Edit 2 After a large amount of time searching for a solution, I've finally decided that such a mechanism doesn't work, and that a non-Oracle method of transferring archive logs from 9i to 10g will need to be used (e.g. rsync).

    Read the article

  • Drag-n-Drop on contentEditable elements

    - by Raybiez
    There are numerous WYSIWYG editors available on the internet, but I have yet to find one that implements some form of drag-and-drop support. It is easy to create one's own editor, but I want the user to be able to drag elements (i.e. tokens) from outside the editable area and drop them at a location of their choice inside the editable area. It is easy to inject HTML at a specific location of an editable element, but how does one determine where the caret should be when the user is dragging a DIV over some element in the editable area? To better illustrate what I'm trying to explain, see the following scenario. The editable area (either an IFRAME in edit mode or a DIV with its contentEditable attribute set to true) already contains the following text: "Dear , please take note of ...." The user now drags an element representing some token from a list of elements, over the editable area, moving the cursor over the text until the caret appears just before the comma (,) in the text as shown above. When the user releases the mouse button at that location, HTML will be injected which could result in something like this: "Dear {UserFirstName}, please take note of ...". I do not know if anyone has ever done anything similar to this, or at least knows how one would go about doing this using JavaScript. Any help will be greatly appreciated.
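
    For the caret-placement part, some engines expose hit-testing helpers that map drop coordinates to a text position; a hedged sketch (availability varies by browser and version, and editableArea stands for the editable DIV):

        editableArea.addEventListener('drop', function (e) {
          e.preventDefault();
          var range = null;
          if (document.caretRangeFromPoint) {              // WebKit-based browsers
            range = document.caretRangeFromPoint(e.clientX, e.clientY);
          } else if (document.caretPositionFromPoint) {    // Firefox
            var pos = document.caretPositionFromPoint(e.clientX, e.clientY);
            range = document.createRange();
            range.setStart(pos.offsetNode, pos.offset);
            range.collapse(true);
          }
          if (range) {
            range.insertNode(document.createTextNode('{UserFirstName}'));
          }
        }, false);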

    Read the article

  • Optimal Serialization of Primitive Types

    - by Greg Dean
    We are beginning to roll out more and more WAN deployments of our product (.Net fat client w/ IIS hosted Remoting backend). Because of this we are trying to reduce the size of the data on the wire. We have overridden the default serialization by implementing ISerializable (similar to this), we are seeing anywhere from 12% to 50% gains. Most of our efforts focus on optimizing arrays of primitive types. I would like to know if anyone knows of any fancy way of serializing primitive types, beyond the obvious? For example today we serialize an array of ints as follows: [4-bytes (array length)][4-bytes][4-bytes] Can anyone do significantly better? The most obvious example of a significant improvement, for boolean arrays, is putting 8 bools in each byte, which we already do. Note: Saving 7 bits per bool may seem like a waste of time, but when you are dealing with large magnitudes of data (which we are), it adds up very fast. Note: We want to avoid general compression algorithms because of the latency associated with it. Remoting only supports buffered requests/responses(no chunked encoding). I realize there is a fine line between compression and optimal serialization, but our tests indicate we can afford very specific serialization optimizations at very little cost in latency. Whereas reprocessing the entire buffered response into new compressed buffer is too expensive.
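
    For what it's worth, one scheme that goes beyond fixed 4-byte slots without being a general-purpose compressor is 7-bits-per-byte variable-length encoding, optionally combined with delta encoding when the ints are sorted or clustered; a small self-contained sketch of the idea:

        using System.IO;

        static class VarIntWriter
        {
            // Writes an unsigned int using 1-5 bytes: 7 payload bits per byte, high bit = "more follows".
            public static void WriteVarUInt(Stream s, uint value)
            {
                while (value >= 0x80)
                {
                    s.WriteByte((byte)(value | 0x80));   // low 7 bits plus continuation flag
                    value >>= 7;
                }
                s.WriteByte((byte)value);
            }

            // Delta-encodes a sorted int array: small gaps become single bytes on the wire.
            public static void WriteSortedArray(Stream s, int[] sorted)
            {
                WriteVarUInt(s, (uint)sorted.Length);
                int previous = 0;
                foreach (int v in sorted)
                {
                    WriteVarUInt(s, (uint)(v - previous));
                    previous = v;
                }
            }
        }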

    Read the article

  • Sizers... - wxPython

    - by Francisco Aleixo
    Ok, so I'm learning about sizers in wxPython and I was wondering if it was possible to do something like:
        ==============================================
        |WINDOW TITLE                          _ [] X|
        |============================================|
        |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
        |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
        |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
        |xxxxxxxxxxxxxxxxxxNOTEBOOKxxxxxxxxxxxxxxxxxx|
        |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
        |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
        |xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx|
        |________                         ___________|
        |IMAGE   |                       |LoginForm  |
        |________|                       |___________|
        ==============================================
    NOTE: Yeah, I literally got this from http://stackoverflow.com/questions/1892110/wxpython-picking-the-right-sizer-to-use-in-an-application With NOTEBOOK expanded to the left and bottom, IMAGE aligned to the left and bottom, and LoginForm aligned to the right and bottom. I managed to do almost everything, but now I have a problem. The problem is that I can't align LoginForm and Image separately (I'm using Box Sizers), and I would like to. This is the code I'm using that is causing the problem at the moment; any help is appreciated. NOTE: The code might be (HUGELY) sloppy as I'm still learning box sizers.
        sizer = wx.BoxSizer(wx.VERTICAL)
        sizer1 = wx.BoxSizer(wx.HORIZONTAL)
        sizer1.Add(self.nb,1, wx.EXPAND)
        sizer.Add(sizer1,1, wx.LEFT | wx.RIGHT | wx.EXPAND, 10)
        sizer.Add((-1, 25))
        sizer2 = wx.BoxSizer(wx.VERTICAL)
        sizer2.Add(self.userLabel, 0)
        sizer2.Add(self.userText, 0)
        sizer2.Add(self.pwdLabel, 0)
        sizer2.Add(self.pwdText, 0)
        sizer2.Add(self.rem, 0)
        sizer3 = wx.BoxSizer(wx.HORIZONTAL)
        sizer3.Add(self.login, 0)
        sizer3.Add(self.sair,0, wx.LEFT, 5)
        sizer2.Add(sizer3, 0)
        sizer4 = wx.BoxSizer(wx.HORIZONTAL)
        sizer4.Add(image, 1, wx.LEFT | wx.BOTTOM)
        sizer4.Add(sizer2,0, wx.RIGHT | wx.BOTTOM , 5)
        sizer.Add(sizer4,0, wx.ALIGN_RIGHT | wx.RIGHT, 10)
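
    One way to pin the image to the left and the login form to the right (a sketch against the code above) is to put a stretchable spacer between them and let the bottom row expand to the full width:

        sizer4 = wx.BoxSizer(wx.HORIZONTAL)
        sizer4.Add(image, 0, wx.ALIGN_BOTTOM | wx.LEFT, 10)
        sizer4.AddStretchSpacer()                      # absorbs the free space between the two items
        sizer4.Add(sizer2, 0, wx.ALIGN_BOTTOM | wx.RIGHT, 10)
        sizer.Add(sizer4, 0, wx.EXPAND)                # the row must expand for the spacer to have any effect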

    Read the article

  • Thumbnail Provider not working

    - by Dan
    I'm trying to write a Windows Explorer thumbnail handler for our custom file type. I've got this working fine for the preview pane, but am having trouble getting it to work for the thumbnails. Windows doesn't even seem to be trying to call the DllGetClassObject entry point. Before I continue, note that I'm using Windows 7 and unmanaged C++. I've registered the following values in the registry: HKCR\CLSID\<my guid> HKCR\CLSID\<my guid>\InprocServer32 (default value being path to my DLL) HKCR\CLSID\<my guid>\InprocServer32\ThreadingModel (value = "Apartment") HKCR\.<my ext>\shellex\{E357FCCD-A995-4576-B01F-234630154E96} (value = my guid) I've also tried using the Win SDK sample, and that doesn't work. And also the sample project in this article (http://www.codemonkeycodes.com/2010/01/11/ithumbnailprovider-re-visited/), and that doesn't work. I'm new to shell programming, so not really sure the best way of debugging this. I've tried attaching the debugger to explorer.exe, but that doesn't seem to work (breakpoints get disabled, and none of my OutputDebugStrings get displayed in the output window). Note that I tried setting the "DesktopProcess" in the registry as described in the WinSDK docs for debugging the shell, but I'm still only seeing one explorer.exe in the task manager - so that "may" be why I can't debug it?? Any help with this would be greatly appreciated! Regards, Dan.

    Read the article

  • how to export bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from blender in my exporter script. I'll explain the theory, the engine animation system and my blender export script, hoping someone might catch the error in either or all of these. The theory: (I'm using column-major ordering since that's what I use in the engine cause it's OpenGL-based) Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I was to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix. A transform from j's local space to its parent space which I'll denote Bj. if j was part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v. Now further assume I have a a set of frames, each with a second transform Cj, which works the same as Bj only that for a different, arbitrary spatial configuration of join j. Cj still takes vertices from j space to world space but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n. I need to: take v from world space to joint j space modify j (while v stays fixed in j space and is thus taken along in the transformation) take v back from the modified j space to world space So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here.. I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple textbooks that it needs to be transformed from model space to joint space. But I also said in 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiples v' by M and not v. But I've tried changing this and it just screws things up in a different way cause there's something else wrong. Finally, If we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But Since skinning is defined as v' = Cj * Bj^-1 * v , Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v Now on to the implementation (Blender side): Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton: Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is: keyframe 0: the joint is in bind / rest pose (the way you see it in the image). 
keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise. keyframe 59: the joint goes back to the same configuration it was in keyframe 0. My first source of confusion on the blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the python api. Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system: # World transform: Blender -> OpenGL worldTransform = Matrix().Identity(4) worldTransform *= Matrix.Scale(-1, 4, (0,0,1)) worldTransform *= Matrix.Rotation(radians(90), 4, "X") # Mesh (local) transform matrix file.write('Mesh Transform:\n') localTransform = mesh.matrix_local.copy() localTransform = worldTransform * localTransform for col in localTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) file.write('\n') So if you will, my "world" matrix is basically the act of changing blenders coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following: For joint bind poses: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: bindPoseJoint = skeleton.data.bones[joint.name] bindPoseTransform = bindPoseJoint.matrix_local.inverted() file.write('Joint ' + joint.name + ' Transform {\n') translationV = bindPoseTransform.to_translation() rotationQ = bindPoseTransform.to_3x3().to_quaternion() scaleV = bindPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same only that not homogeneous. For joint current / keyframe poses: for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) currentPoseJoint = skeleton.pose.bones[i] currentPoseTransform = currentPoseJoint.matrix translationV = currentPoseTransform.to_translation() rotationQ = currentPoseTransform.to_3x3().to_quaternion() scaleV = currentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') Note that here I go for skeleton.pose.bones instead of data.bones and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. 
From the descriptions in the python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case. The implementation (Engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity): static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation); currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling); GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q); inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t); inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } Finally, this is my vertex shader: #version 100 uniform mat4 modelMatrix; uniform mat3 normalMatrix; uniform mat4 projectionMatrix; uniform mat4 skinningPalette[6]; uniform lowp float skinningEnabled; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } vec4 skinnedNormal = vec4(0.); for (int i = 0; i < 4; i++) { skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.); } vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled); vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, 
skinningEnabled); vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); gl_Position = projectionMatrix * modelMatrix * finalPosition; } The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis according to my transform in the export clip). And the rotation angle is counterclockwise instead of clockwise. If I try with a more than one joint, then it's almost as if the second joint rotates in it's own different coordinate space and does not follow 100% its parent's transform. Which I assume it should from my animation subsystem which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?

    Read the article

  • Selectively intercepting methods using autofac and dynamicproxy2

    - by Mark Simpson
    I'm currently doing a bit of experimenting using Autofac-1.4.5.676, autofac contrib and castle DynamicProxy2. The goal is to create a coarse-grained profiler that can intercept calls to specific methods of a particular interface. The problem: I have everything working perfectly apart from the selective part. I gather that I need to marry up my interceptor with an IProxyGenerationHook implementation, but I can't figure out how to do this. My code looks something like this: The interface that is to be intercepted & profiled (note that I only care about profiling the Update() method) public interface ISomeSystemToMonitor { void Update(); // this is the one I want to profile void SomeOtherMethodWeDontCareAboutProfiling(); } Now, when I register my systems with the container, I do the following: // Register interceptor gubbins builder.RegisterModule(new FlexibleInterceptionModule()); builder.Register<PerformanceInterceptor>(); // Register systems (just one in this example) builder.Register<AudioSystem>() .As<ISomeSystemToMonitor>) .InterceptedBy(typeof(PerformanceInterceptor)); All ISomeSystemToMonitor instances pulled out of the container are intercepted and profiled as desired, other than the fact that it will intercept all of its methods, not just the Update method. Now, how can I extend this to exclude all methods other than Update()? As I said, I don't understand how I'm meant to say "for the ProfileInterceptor, use this implementation of IProxyHookGenerator". All help appreciated, cheers! Also, please note that I can't upgrade to autofac2.x right now; I'm stuck with 1.
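
    For the selective part, the hook itself is usually a small IProxyGenerationHook implementation along these lines (a sketch: the member names drift slightly between DynamicProxy releases, and how AutofacContrib expects the hook to be registered is a separate wiring question not verified here):

        using System;
        using System.Reflection;
        // IProxyGenerationHook lives in Castle.Core.Interceptor or Castle.DynamicProxy,
        // depending on the DynamicProxy2 version in use.

        public class UpdateOnlyHook : IProxyGenerationHook
        {
            // Only Update() calls are routed through the interceptors.
            public bool ShouldInterceptMethod(Type type, MethodInfo methodInfo)
            {
                return methodInfo.Name == "Update";
            }

            public void NonVirtualMemberNotification(Type type, MemberInfo memberInfo) { }

            public void MethodsInspected() { }
        }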

    Read the article

  • Perf4j Not Logging to Separate File

    - by Jehud
    I set up some stop watch calls in my code to measure some code blocks, but all the messages are going into my primary log and not into the timing log. The perfStats.log file gets created just fine, but all the messages go to the root log, which I didn't think was supposed to happen according to the docs I've read. Is there something obvious I'm missing here? Example log4j.xml <!-- This file appender is used to output aggregated performance statistics --> <appender name="fileAppender" class="org.apache.log4j.FileAppender"> <param name="File" value="perfStats.log"/> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%m%n"/> </layout> </appender> <!-- Loggers --> <!-- The Perf4J logger. Note that org.perf4j.TimingLogger is the value of the org.perf4j.StopWatch.DEFAULT_LOGGER_NAME constant. Also, note that additivity is set to false, which is usually what is desired - this means that timing statements will only be sent to this logger and NOT to upstream loggers. --> <logger name="org.perf4j.TimingLogger" additivity="false"> <level value="INFO"/> <appender-ref ref="CoalescingStatistics"/> </logger> <root> <priority value="info"/> <appender-ref ref="STDOUT-DEBUG"/> </root>
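
    One thing that stands out: the logger references an appender named CoalescingStatistics, but only fileAppender is defined above, so the timing logger has nowhere to write. The Perf4J docs pair that logger with an AsyncCoalescingStatisticsAppender feeding the file appender, roughly as sketched below (the TimeSlice value is illustrative); the stop watches also need to be the log4j flavour (org.perf4j.log4j.Log4JStopWatch) for messages to reach org.perf4j.TimingLogger at all:

        <appender name="CoalescingStatistics" class="org.perf4j.log4j.AsyncCoalescingStatisticsAppender">
            <!-- aggregate timing messages into statistics over 10-second windows -->
            <param name="TimeSlice" value="10000"/>
            <appender-ref ref="fileAppender"/>
        </appender>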

    Read the article

  • Write magic bytes to the stack to monitor its usage

    - by tkarls
    I have a problem on an embedded device that I think might be related to a stack overflow. In order to test this I was planning to fill the stack with magic bytes and then periodically check if the stack has overflowed by examining how many of my magic bytes are left intact. But I can't get the routine for marking the stack to work. The application keeps crashing instantly. This is what I have done just at the entry point of the program. //fill most of stack with magic bytes int stackvar = 0; int stackAddr = int(&stackvar); int stackAddrEnd = stackAddr - 25000; BYTE* stackEnd = (BYTE*) stackAddrEnd; for(int i = 0; i < 25000; ++i) { *(stackEnd + i) = 0xFA; } Please note that the allocated stack is larger than 25k. So I'm counting on some stack space to already be used at this point. Also note that the stack grows from higher to lower addresses, which is why I'm trying to fill from the bottom up. But as I said, this will crash. I must be missing something here.
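
    For what it's worth, a variation that avoids hard-coding the 25000 (a sketch, under the assumption that the crash comes from writing past the real bottom of the allocated stack) is to derive the fill range from the stack's lowest address and stop short of the live frame; __stack_bottom__ is an assumed linker symbol and BYTE is taken from the snippet above:

        // Sketch only: on a real target __stack_bottom__ would come from the linker
        // script or the RTOS task definition, not from this declaration.
        extern char __stack_bottom__;

        void paintStack()
        {
            BYTE marker;                                      // lives near the current top of use
            volatile BYTE* cursor = (volatile BYTE*)&__stack_bottom__;
            const unsigned margin = 64;                       // keep clear of this frame itself
            while (cursor < (volatile BYTE*)(&marker - margin))
                *cursor++ = 0xFA;                             // fill upward toward the live frame
        }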

    Read the article

  • AES Encryption Java Invalid Key length

    - by wuntee
    I am trying to create an AES encryption method, but for some reason I keep getting a 'java.security.InvalidKeyException: Key length not 128/192/256 bits'. Here is the code: public static SecretKey getSecretKey(char[] password, byte[] salt) throws NoSuchAlgorithmException, InvalidKeySpecException{ SecretKeyFactory factory = SecretKeyFactory.getInstance("PBEWithMD5AndDES"); // NOTE: last argument is the key length, and it is 256 KeySpec spec = new PBEKeySpec(password, salt, 1024, 256); SecretKey tmp = factory.generateSecret(spec); SecretKey secret = new SecretKeySpec(tmp.getEncoded(), "AES"); return(secret); } public static byte[] encrypt(char[] password, byte[] salt, String text) throws NoSuchAlgorithmException, InvalidKeySpecException, NoSuchPaddingException, InvalidKeyException, InvalidParameterSpecException, IllegalBlockSizeException, BadPaddingException, UnsupportedEncodingException{ SecretKey secret = getSecretKey(password, salt); Cipher cipher = Cipher.getInstance("AES"); // NOTE: This is where the Exception is being thrown cipher.init(Cipher.ENCRYPT_MODE, secret); byte[] ciphertext = cipher.doFinal(text.getBytes("UTF-8")); return(ciphertext); } Can anyone see what I am doing wrong? I am thinking it may have something to do with the SecretKeyFactory algorithm, but that is the only one I can find that is supported on the end system I am developing against. Any help would be appreciated. Thanks.
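
    The DES-based factory is the likely culprit: PBEWithMD5AndDES derives a DES-sized key, so the 256 requested in the PBEKeySpec never takes effect. If the target system offers it (recent standard JREs ship it), PBKDF2 honours the requested length; a hedged sketch of the derivation part:

        // PBKDF2 respects the requested key length, so the derived key is a valid 256-bit AES key
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        KeySpec spec = new PBEKeySpec(password, salt, 1024, 256);
        SecretKey tmp = factory.generateSecret(spec);
        SecretKey secret = new SecretKeySpec(tmp.getEncoded(), "AES");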

    Read the article

  • Persisting Joda DateTime instead of Java Date in hibernate

    - by Tauren
    My entities currently contain java Date properties. I'm starting to use Joda Time for date manipulation and calculations quite frequently. This means that I'm constantly having to convert my Dates into Joda DateTime objects and back again. So I was wondering, is there any reason I shouldn't just change my entities to store Joda DateTime objects instead of Java Date objects? Please note that these entities are persisted via Hibernate. I found the jodatime-hibernate project, but I also was reading on the Joda mailing list that it wasn't compatible with newer versions of hibernate. And it seems like it isn't very well maintained. So I'm wondering if it would be best to just continue converting between Date and DateTime, or if it would be wise to start persisting DateTime objects. My concern is being reliant on a poorly maintained library. Edit: Note that one of my objectives is to be better able to store timezone information. Storing just a Date appears to save the date in the local timezone. As my application can be used globally, I need to know the timezone as well. Joda Time Hibernate seems to address this as well in the user guide.
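
    If the poorly maintained mapping library is the main worry, a low-tech middle ground (sketched below with made-up field names) is to keep persisting java.util.Date and expose Joda-flavoured accessors on the entity, so the conversion lives in one place instead of being scattered through the code:

        // Sketch: Hibernate keeps mapping java.util.Date, callers work with Joda DateTime.
        private Date createdAt;                               // mapped exactly as before

        public DateTime getCreatedAtAsDateTime() {
            return createdAt == null ? null : new DateTime(createdAt);
        }

        public void setCreatedAtFromDateTime(DateTime value) {
            this.createdAt = (value == null) ? null : value.toDate();
        }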

    Read the article

  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory mapped file. Despite all my efforts to eliminate per iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and CreateFileView() and it still happens. I'm beginning to wonder if it's just the way memory mapped files work. If anyone there knows the O/S implementation details behind memory mapped files I would appreciate comments on the following theory: If two processes share a memory mapped file and one process writes to it while another reads it, then the O/S marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. Also, the number of soft page faults is therefore directly proportional to the size of the data write. My experiments seem to bear out the above theory. When I share data I write one contiguous block of data. In other words, the entire shared memory area is overwritten each time. If I make the block bigger the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory mapped files because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose to use a memory mapped file instead of a TCP socket connection because I thought it would be more efficient. Note, if the soft page faults are harmless please note that. I've heard that at some point if the number is excessive, the system's performance can be marred. If soft page faults intrinsically are not significantly harmful then if anyone has any guidelines as to what number per second is "excessive" I'd like to hear that. Thanks.

    Read the article

  • When can a == b be false and a.Equals(b) true?

    - by alastairs
    I ran into this situation today. I have an object which I'm testing for equality; the Create() method returns a subclass implementation of MyObject. MyObject a = MyObject.Create(); MyObject b = MyObject.Create(); a == b; // is false a.Equals(b); // is true Note I have also overridden Equals() in the subclass implementation, which does a very basic check that the passed-in object is not null and is of the subclass's type. If both those conditions are met, the objects are deemed to be equal. The other slightly odd thing is that my unit test suite does some tests similar to Assert.AreEqual(MyObject.Create(), MyObject.Create()); // Green bar and the expected result is observed. Therefore I guess that NUnit uses a.Equals(b) under the covers, rather than a == b as I had assumed. Side note: I program in a mixture of .NET and Java, so I might be mixing up my expectations/assumptions here. I thought, however, that a == b worked more consistently in .NET than it did in Java, where you often have to use equals() to test equality.
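
    For what it's worth, this is the documented C# behaviour when a class overrides Equals but leaves operator == alone: on reference types, == falls back to reference comparison. A minimal sketch:

        class MyObject
        {
            // Mirrors the described check: non-null and of this type means "equal"
            public override bool Equals(object obj) { return obj is MyObject; }
            public override int GetHashCode() { return 0; }   // all instances equal, so hashes must agree
        }

        class Demo
        {
            static void Main()
            {
                MyObject a = new MyObject(), b = new MyObject();
                System.Console.WriteLine(a == b);       // False: == is not overloaded, compares references
                System.Console.WriteLine(a.Equals(b));  // True: the overridden Equals runs
            }
        }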

    Read the article
