Search Results

Search found 76 results on 4 pages for 'cj behalfof'.

Page 2/4 | < Previous Page | 1 2 3 4  | Next Page >

  • how to export bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from blender in my exporter script. I'll explain the theory, the engine animation system and my blender export script, hoping someone might catch the error in either or all of these. The theory: (I'm using column-major ordering since that's what I use in the engine cause it's OpenGL-based) Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I was to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix. A transform from j's local space to its parent space which I'll denote Bj. if j was part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v. Now further assume I have a a set of frames, each with a second transform Cj, which works the same as Bj only that for a different, arbitrary spatial configuration of join j. Cj still takes vertices from j space to world space but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n. I need to: take v from world space to joint j space modify j (while v stays fixed in j space and is thus taken along in the transformation) take v back from the modified j space to world space So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here.. I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple textbooks that it needs to be transformed from model space to joint space. But I also said in 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiples v' by M and not v. But I've tried changing this and it just screws things up in a different way cause there's something else wrong. Finally, If we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But Since skinning is defined as v' = Cj * Bj^-1 * v , Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v Now on to the implementation (Blender side): Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton: Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is: keyframe 0: the joint is in bind / rest pose (the way you see it in the image). 
keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise. keyframe 59: the joint goes back to the same configuration it was in keyframe 0. My first source of confusion on the blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the python api. Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system: # World transform: Blender -> OpenGL worldTransform = Matrix().Identity(4) worldTransform *= Matrix.Scale(-1, 4, (0,0,1)) worldTransform *= Matrix.Rotation(radians(90), 4, "X") # Mesh (local) transform matrix file.write('Mesh Transform:\n') localTransform = mesh.matrix_local.copy() localTransform = worldTransform * localTransform for col in localTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) file.write('\n') So if you will, my "world" matrix is basically the act of changing blenders coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following: For joint bind poses: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: bindPoseJoint = skeleton.data.bones[joint.name] bindPoseTransform = bindPoseJoint.matrix_local.inverted() file.write('Joint ' + joint.name + ' Transform {\n') translationV = bindPoseTransform.to_translation() rotationQ = bindPoseTransform.to_3x3().to_quaternion() scaleV = bindPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same only that not homogeneous. For joint current / keyframe poses: for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) currentPoseJoint = skeleton.pose.bones[i] currentPoseTransform = currentPoseJoint.matrix translationV = currentPoseTransform.to_translation() rotationQ = currentPoseTransform.to_3x3().to_quaternion() scaleV = currentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') Note that here I go for skeleton.pose.bones instead of data.bones and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. 
From the descriptions in the python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case. The implementation (Engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity): static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation); currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling); GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q); inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t); inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } Finally, this is my vertex shader: #version 100 uniform mat4 modelMatrix; uniform mat3 normalMatrix; uniform mat4 projectionMatrix; uniform mat4 skinningPalette[6]; uniform lowp float skinningEnabled; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } vec4 skinnedNormal = vec4(0.); for (int i = 0; i < 4; i++) { skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.); } vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled); vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, 
skinningEnabled); vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); gl_Position = projectionMatrix * modelMatrix * finalPosition; } The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis, according to my transform in the export clip). And the rotation angle is counterclockwise instead of clockwise. If I try with more than one joint, then it's almost as if the second joint rotates in its own, different coordinate space and does not follow its parent's transform 100%, which I assume it should, given that my animation subsystem in turn follows the theory I explained above for the case of more than one joint. Any thoughts?
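    The palette math described above can be sanity-checked outside the engine. Here is a minimal NumPy sketch of composing bind and current poses down a joint hierarchy and building the skinning matrices as Cworld * Bworld^-1; the joint values and names are illustrative only, not taken from the scene above.

    # Minimal NumPy sketch of the skinning-palette math described in the question:
    # compose local joint matrices down the hierarchy, then skin with C_world * B_world^-1.
    # Column-vector convention; joint values here are made up for illustration.
    import numpy as np

    def world_matrices(local_mats, parents):
        # Assumes every parent appears in the list before its children; -1 marks the root.
        world = [None] * len(local_mats)
        for i, local in enumerate(local_mats):
            p = parents[i]
            world[i] = local if p == -1 else world[p] @ local
        return world

    def skinning_palette(bind_local, current_local, parents):
        bind_world = world_matrices(bind_local, parents)
        current_world = world_matrices(current_local, parents)
        # v' = C_world * B_world^-1 * v for each joint
        return [c @ np.linalg.inv(b) for c, b in zip(current_world, bind_world)]

    if __name__ == "__main__":
        identity = np.eye(4)
        lift = np.eye(4)
        lift[2, 3] = 1.0                    # translate +1 along Z (column-vector convention)
        parents = [-1, 0]                   # joint 1 is a child of joint 0
        palette = skinning_palette([identity, lift], [lift, lift], parents)
        v = np.array([0.0, 0.0, 0.0, 1.0])
        print(palette[0] @ v)               # a bind-pose vertex carried along by the root joint

    If a two-joint toy case like this behaves as expected, the remaining suspects are the matrices coming out of Blender and the axis-conversion step.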

    Read the article

  • how to get the source code as a registered user.

    - by nir143
    Hi. I downloaded the source code of a site, but when I downloaded it I saw that it identified my program as a guest. I searched on Google and figured out that I can send a cookie when I "ask" for the source code. That is what I have managed to do, and it still doesn't identify me as a registered user: CookieContainer cj = new CookieContainer(); string all = ""; HttpWebRequest req = (HttpWebRequest)WebRequest.Create(Url); req.CookieContainer = cj; HttpWebResponse res = (HttpWebResponse)req.GetResponse(); CookieCollection cs=cj.GetCookies(req.RequestUri); CookieContainer cc = new CookieContainer(); cc.Add(cs); req.CookieContainer = cc; StreamReader read = new StreamReader(res.GetResponseStream()); all = read.ReadToEnd(); read.Close(); return all; What is wrong here? Thanks very much for any help. :) (If it helps, I have the details of a registered user of the site.)

    Read the article

  • SQL Server 2008 need just like crosstab query on XML column?

    - by user1332896
    <abc id="abc1">
      <def id="def1">
        <ghi att='ghi1'>
          <mn id="0742d2ea" name="RF" dt="0" df="3" ty="0" />
          <mn id="64d9a11b" name="CJ" dt="0" df="3" ty="0" />
          <mn id="db72d154" name="FJ" dt="2" df="4" ty="0" />
          <mn id="39af9fa1" name="BS" dt="0" df="2" ty="0" />
        </ghi>
        <jkl att='jkl1'>
          <mn id="0742d2ea" name="RF" dt="1" gl="19" />
          <mn id="64d9a11b" name="CJ" dt="0" gl="6" />
          <mn id="db72d154" name="FJ" dt="0" gl="0" />
          <mn id="39af9fa1" name="BS" dt="0" gl="12" />
          <mn id="ac4f566f" name="DJ" dt="0" gl="9" />
          <mn id="4bf3ba2f" name="RP" dt="0" gl="16" />
          <mn id="db1af021" name="SC" dt="1" gl="10" />
          <mn id="c4c93a2d" name="DN" dt="1" gl="15" />
        </jkl>
      </def>
    </abc>

    I need this output. Is this possible in SQL Server 2008?

    id        name  ghiDT  ghiDF  ghiTY  jklDT  jklGL
    0742d2ea  RF    0      3      0      1      19
    64d9a11b  CJ    0      3      0      0      6
    db72d154  FJ    2      4      0      0      0
    39af9fa1  BS    0      2      0      0      12
    ac4f566f  DJ    0      0      0      0      9
    4bf3ba2f  RP    0      0      0      0      16
    db1af021  SC    0      0      0      1      10
    c4c93a2d  DN    0      0      0      1      15
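    This is not the SQL Server XQuery the question asks for, but the pivot itself can be pinned down with a rough Python/ElementTree sketch of the same shaping, defaulting missing attributes to 0 as in the sample output (the XML below is a trimmed copy of the one above):

    # Rough sketch of the desired crosstab, done with ElementTree rather than
    # SQL Server XQuery, purely to pin down the output shape. Missing attributes
    # default to 0, matching the sample output above. XML is trimmed for brevity.
    import xml.etree.ElementTree as ET

    xml_doc = """<abc id="abc1"><def id="def1">
      <ghi att="ghi1"><mn id="0742d2ea" name="RF" dt="0" df="3" ty="0"/></ghi>
      <jkl att="jkl1"><mn id="0742d2ea" name="RF" dt="1" gl="19"/>
                      <mn id="ac4f566f" name="DJ" dt="0" gl="9"/></jkl>
    </def></abc>"""

    root = ET.fromstring(xml_doc)
    rows = {}
    for section, attrs in (("ghi", ("dt", "df", "ty")), ("jkl", ("dt", "gl"))):
        for sec in root.iter(section):
            for mn in sec.iter("mn"):
                row = rows.setdefault(mn.get("id"), {"name": mn.get("name")})
                for a in attrs:
                    row[section + a.upper()] = int(mn.get(a, 0))

    columns = ("ghiDT", "ghiDF", "ghiTY", "jklDT", "jklGL")
    for rid, row in rows.items():
        print(rid, row["name"], *(row.get(c, 0) for c in columns))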

    Read the article

  • Disabling standby mode on Gear4 Blackbox speakers

    - by cj
    I have a set of Bluetooth Gear4 Blackbox speakers. The sound quality and the design are great, but they switch to standby mode automatically every now and then, both when I am listening to music and when it's silent. In that situation, the only way to get them working again is to turn them off and on, since they do not react to the remote or to the buttons on top. I emailed Gear4 about this problem and they recommended reflashing the firmware, but the problem persists after having done that. It has nothing to do with the device that is streaming the music. So, the question is: is there a way to disable standby mode permanently, so the speakers stay active all the time even when I am not using them? To switch them off I would just turn the power switch off. If anyone is curious about which device I am talking about, here's a link to Amazon, but I wouldn't recommend anyone buy them because of this problem. http://www.amazon.co.uk/Gear-4-PG142-GEAR4-BlackBox/dp/B000QEFZI2 Thanks in advance.

    Read the article

  • SharePoint 2010 and Excel Calculation Services

    - by Cj Anderson
    I'm curious what the requirements are for Excel Calculation Services in SharePoint 2010. I found an architecture document, but it doesn't list out the specific requirements (architecture). I understand that you can install all the services on one server, but it isn't recommended. It then talks about how you can scale up the application servers and web front ends. What should the hardware look like for both the application server and the web front ends? Do I need to set up a standalone box for each application server?

    Read the article

  • Mouse Error Code 24. Windows 7

    - by Cj.
    I've had the same mouse for a while, and it had been working fine until one day it started giving me a message about a device not working properly. I tried updating the drivers and re-installing; I even deleted old drivers in case my computer was a little confused. It never made a difference, and my mouse seemed to be working just fine despite the permanent error in my Device Manager. I looked it up several times online, but I never found anything I could actually use; when I go to official websites, I always get the same response: "plug such-and-such into a different place - drivers - install Silverlight before you can watch this tutorial - try it on a different machine". So I gave up on that. But now I have a real problem: lately, my little strange error evolved into a full-blown Error 24, and my mouse has started turning on and off randomly, especially when it is being used, though I do hear it go "badum..dadum" when I'm off doing something else. When I looked up error code 24, I really didn't find much other than what it means: Code 24 - This device is not present, is not working properly, or does not have all its drivers installed. (Code 24) Cause: The device is installed incorrectly. The problem could be a hardware failure, or a new driver might be needed. Devices stay in this state if they have been prepared for removal. After you remove the device, this error disappears. But I have tried uninstalling the device entirely several times, and it goes right back to its previous state with Error 24 and turning on and off randomly. What do I do? I cannot afford to take it to a repair place, and I can't really afford a new mouse either; I refuse to buy cheap ones, as I am a gamer in need of more than 3 buttons, and a good grip is important. Could there possibly be some confusion in the registry? I do remember having had some early problems after I upgraded from Vista to Windows 7. But I hardly dare go in there unless I'm 100% certain of what I'm going for, and I can honestly say I am at a loss here. Edit: it is a USB mouse we're talking about here, an MX™518 Optical Gaming Mouse (Logitech). Edit 2: I see no rupture, so it must be on the inside of my mouse, or inside the rubber protecting the cable, which would be really inconvenient to search for.

    Read the article

  • Bacula windows client could not connect to Bacula director

    - by pr0f-r00t
    I have a Bacula server on my Linux Debian squeeze host (Bacula version 5.0.2) and a Bacula client on Windows XP SP3. On my network each client can see each other, can share files and can ping. On my local server I could run bconsole and the server responds but when I run bconsole or bat on my windows client the server does not respond. Here are my configuration files: bacula-dir.conf: # # Default Bacula Director Configuration file # # The only thing that MUST be changed is to add one or more # file or directory names in the Include directive of the # FileSet resource. # # For Bacula release 5.0.2 (28 April 2010) -- debian squeeze/sid # # You might also want to change the default email address # from root to your address. See the "mail" and "operator" # directives in the Messages resource. # Director { # define myself Name = nima-desktop-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 1 Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L" # Console password Messages = Daemon DirAddress = 127.0.0.1 # DirAddress = 72.16.208.1 } JobDefs { Name = "DefaultJob" Type = Backup Level = Incremental Client = nima-desktop-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultJob" } #Job { # Name = "BackupClient2" # Client = nima-desktop2-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=nima-desktop-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /nonexistant/path/to/file/archive/dir/bacula-restores } # job for vmware windows host Job { Name = "nimaxp-fd" Type = Backup Client = nimaxp-fd FileSet = "nimaxp-fs" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = Default Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr" #Change this } # job for vmware windows host Job { Name = "arg-michael-fd" Type = Backup Client = nimaxp-fd FileSet = "arg-michael-fs" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = Default Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr" #Change this } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } # # Put your list of files here, preceded by 'File =', one per line # or include an external list with: # # File = <file-name # # Note: / backs up everything on the root partition. # if you have other partitions such as /usr or /home # you will probably want to add them too. 
# # By default this is defined to point to the Bacula binary # directory to give a reasonable FileSet to backup to # disk storage during initial testing. # File = /usr/sbin } # # If you backup the root directory, the following two excluded # files can be useful # Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "nimaxp-fs" Enable VSS = yes Include { Options { signature = MD5 } File = "C:\softwares" File = C:/softwares File = "C:/softwares" } } # List of files to be backed up FileSet { Name = "arg-michael-fs" Enable VSS = yes Include { Options { signature = MD5 } File = "C:\softwares" File = C:/softwares File = "C:/softwares" } } # # When to do the backups, full backup on first sunday of the month, # differential (i.e. incremental since full) every other sunday, # and incremental backups other days Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = nima-desktop-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V6" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # Client file service for vmware windows host Client { Name = nimaxp-fd Address = nimaxp FDPort = 9102 Catalog = MyCatalog Password = "Ku8F1YAhDz5EMUQjiC9CcSw95Aho9XbXailUmjOaAXJP" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # Client file service for vmware windows host Client { Name = arg-michael-fd Address = 192.168.0.61 FDPort = 9102 Catalog = MyCatalog Password = "b4E9FU6s/9Zm4BVFFnbXVKhlyd/zWxj0oWITKK6CALR/" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = nima-desktop2-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V62" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" Device = FileStorage Media Type = File } # Definition of DDS tape storage device #Storage { # Name = DDS-4 # Do not use "localhost" here # Address = localhost # N.B. 
Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # password for Storage daemon # Device = DDS-4 # must be same as Device in Storage daemon # Media Type = DDS-4 # must be same as MediaType in Storage daemon # Autochanger = yes # enable for autochanger device #} # Definition of 8mm tape storage device #Storage { # Name = "8mmDrive" # Do not use "localhost" here # Address = localhost # N.B. Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # Device = "Exabyte 8mm" # MediaType = "8mm" #} # Definition of DVD storage device #Storage { # Name = "DVD" # Do not use "localhost" here # Address = localhost # N.B. Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # Device = "DVD Writer" # MediaType = "DVD" #} # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; dbuser = ""; dbpassword = "" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard # # NOTE! If you send to two email or more email addresses, you will need # to replace the %r in the from field (-f part) with a single valid # email address in both the mailcommand and the operatorcommand. # What this does is, it sets the email address that emails would display # in the FROM field, which is by default the same email as they're being # sent to. However, if you send email to more than one address, then # you'll have to set the FROM address manually, to a single address. # for example, a '[email protected]', is better since that tends to # tell (most) people that its coming from an automated source. # mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = nima-desktop-mon Password = "-T0h6HCXWYNy0wWqOomysMvRGflQ_TA6c" CommandACL = status, .status } bacula-fd.conf on client: # # Default Bacula File Daemon Configuration file # # For Bacula release 5.0.3 (08/05/10) -- Windows MinGW32 # # There is not much to change here except perhaps the # File daemon Name # # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = nimaxp-fd FDport = 9102 # where we listen for the director WorkingDirectory = "C:\\Program Files\\Bacula\\working" Pid Directory = "C:\\Program Files\\Bacula\\working" # Plugin Directory = "C:\\Program Files\\Bacula\\plugins" Maximum Concurrent Jobs = 10 } # # List Directors who are permitted to contact this File daemon # Director { Name = Nima-desktop-dir Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = nimaxp-mon Password = "q5b5g+LkzDXorMViFwOn1/TUnjUyDlg+gRTBp236GrU3" Monitor = yes } # Send all messages except skipped files back to Director Messages { Name = Standard director = Nima-desktop = all, !skipped, !restored } I have checked my firewall and disabled the firewall but it doesn't work.

    Read the article

  • Invalid receiver type 'NSUInteger'

    - by CJ
    I have a Core Data entity whose header file looks like this: @interface MyEntity : NSManagedObject { } @property (nonatomic, retain) NSNumber * index; @end And its implementation file looks like this: @implementation MyEntity @dynamic index; @end Now, I have a piece of code that looks like this: NSArray* selectedObects = [myEntityArrayController selectedObjects]; NSUInteger theIndex = [[[selectedObects objectAtIndex:0] index] unsignedIntegerValue]; The 'myEntityArrayController' object is an NSArrayController which manages all entities of MyEntity. This code executes correctly; however, Xcode always gives the warning "Invalid receiver type 'NSUInteger'" for the last line of code. For some reason, Xcode thinks that the index method returns an NSUInteger. I'm not sure why it thinks this, because 'objectAtIndex' returns an object of type 'id'. I've cleaned the project several times, and these warnings have hung around for a while. Any suggestions are appreciated.

    Read the article

  • Rails 3 Beta 2, Haml, Nested Layouts and LocalJumpError

    - by CJ Bryan
    Alright, I'm trying to create an app with nested templates. I'm using Rails 3 Beta 2 and Haml. I've poked around and I've decided to take the clearest approach and have structured my templates like so: # application.html.haml !!! %body %h1 Outermost Template = yield(:foobar) # inner.html.haml - content_for :foobar do %h2 Inner Template = yield = render :file => 'layouts/application' # foo_controller.rb layout 'inner' With all of this, I get a LocalJumpError with the message no block given. The stack traces are blank and pretty unhelpful. Any ideas? Are these known issues?

    Read the article

  • Linq query - multiple where, with extension method

    - by Cj Anderson
    My code works for ORs as I've listed it below but I want to use AND instead of OR and it fails. I'm not quite sure what I'm doing wrong. Basically I have a Linq query that searches on multiple fields in an XML file. The search fields might not all have information. Each element runs the extension method, and tests the equality. Any advice would be appreciated. refinedresult = From x In theresult _ Where x.<thelastname>.Value.TestPhoneElement(LastName) Or _ x.<thefirstname>.Value.TestPhoneElement(FirstName) Or _ x.<id>.Value.TestPhoneElement(Id) Or _ x.<number>.Value.TestPhoneElement(Telephone) Or _ x.<location>.Value.TestPhoneElement(Location) Or _ x.<building>.Value.TestPhoneElement(building) Or _ x.<department>.Value.TestPhoneElement(Department) _ Select x Public Function TestPhoneElement(ByVal parent As String, ByVal value2compare As String) As Boolean 'find out if a value is null, if not then compare the passed value to see if it starts with Dim ret As Boolean = False If String.IsNullOrEmpty(parent) Then Return False End If If String.IsNullOrEmpty(value2compare) Then Return ret Else ret = parent.ToLower.StartsWith(value2compare.ToLower.Trim) End If Return ret End Function

    Read the article

  • python cookie, request another page

    - by polovinamozga
    #!/usr/bin/python # -*- coding: utf-8 -*- import urllib2 import urllib import httplib import Cookie import cookielib Login = 'user' Password = 'password' Domain = 'inbox.ru' Auth = 'https://auth.mail.ru/cgi-bin/auth' cj = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) login_data = urllib.urlencode({'Login' : Login, 'Domain' :Domain, 'Password' : Password }) opener.open('https://auth.mail.ru/cgi-bin/auth', login_data) resp = opener.open('https://auth.mail.ru/cgi-bin/auth').read() print resp.decode('cp1251') #output page in cp1251 When the script executes successfully, print resp.decode('cp1251') shows my page, authenticated. But when I try to request another page, for example http://my.mail.ru, I see an authorization request. How can I use the cookie with another page?
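    A small point that may help: the CookieJar attached to the opener keeps the session cookies between calls, so the usual pattern is to log in once and then fetch every later page through the same opener. A minimal sketch along those lines (URLs from the post, credentials are placeholders):

    # Minimal sketch: authenticate once through the cookie-aware opener, then
    # reuse that same opener so the session cookies travel with every request.
    # URLs are the ones from the post; the credentials are placeholders.
    import urllib
    import urllib2
    import cookielib

    cj = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))

    login_data = urllib.urlencode({'Login': 'user', 'Domain': 'inbox.ru',
                                   'Password': 'password'})
    opener.open('https://auth.mail.ru/cgi-bin/auth', login_data)  # sets the session cookies

    # Any further page fetched through the *same* opener sends those cookies back.
    page = opener.open('http://my.mail.ru/').read()
    print page[:200]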

    Read the article

  • Jquery getJSON Not Working Cross Site

    - by CJ
    I have a piece of javascript that grabs JSON data. When executed locally everything seems to work fine. However, when I try accessing it from a different site, it doesn't work. Here's the script. $(function(){ var aT = new AjaxTest(); aT.getJson(); }); var AjaxTest = function() { this.ajaxUrl = "http://mydeveloperpage.com/sandbox/ajax_json_test/client_reciever.php"; this.getJson = function(){ $.getJSON(this.ajaxUrl, function(data){ $.each(data, function(i, piece){ alert(piece); }); }); } } You can find a copy of the exact same file at "http://mydeveloperpage.com/sandbox/ajax_json_test/". Any help would be greatly appreciated. Thanks!

    Read the article

  • Uncaught OAuthException

    - by Christopher 'Cj' Jesudas
    Guys please help me out with this...the error which my program gives is "Fatal error: Uncaught OAuthException: Error validating application"... My program code is : require ("src/facebook.php"); $appapikey = '132347710178087'; $appsecret = 'e27456d003801ae145a77ab28904fca0'; $facebook = new Facebook($appapikey, $appsecret); $user_id = $facebook->getUser(); $friends = $facebook->api('friends.get'); echo "<p>Hello <fb:name uid=\"$user_id\" useyou=\"false\" linked=\"false\" firstnameonly=\"true\"></fb:name>, you have ".count($friends)." friends"; foreach($friends as $friend){ $infos.=$friend.","; } $infos = substr($infos,0,strlen($infos)-1); $gender=$facebook->api_client->users_getInfo($infos,'sex'); $gender_array = array(); foreach($gender as $gendervalue){ $gender_array[$gendervalue[sex]]++; } $male = round($gender_array[male]*100/($gender_array[male]+$gender_array[female]),2); $female = 100-$male; echo "<ul><li>Males: $male%</li><li>Females: $female%</li></ul>";

    Read the article

  • NSUndoManager with Core Data - Redo not working

    - by CJ
    I have a Core Data document-based app which support undo/redo via the built-in NSUndoManager associated with the NSManagedObjectContext. I have a few actions set up which perform numerous tasks within Core Data, wrap all these tasks into an undo group via beginUndoGrouping/endUndoGrouping, and are processed by the NSUndoManager. Undo works fine. I can perform several successive actions, and each then undo each one of them successively and my app's state is maintained correctly. However, the "Redo" menu item is never enabled. This means that the NSUndoManager is telling the menu that there are no items to redo. I am wondering why the NSUndoManager is seemingly forgetting about items once they are undone, and not allowing redos to occur? One thing I should mention is that I'm disabling undo registration after a document is opened/created. When I perform an action, I call enableUndoRegistration, beginUndoGrouping, perform the action, then call processPendingChanges, setActionName:, endUndoGrouping, and finally disableUndoRegistration. This makes sure that only specific actions are undoable, and any other data changes I make outside of these go unnoticed to the NSUndoManager. This may be a part of the issue, but if so I'm wondering why it's affecting redo? Thanks in advance.

    Read the article

  • Adding custom methods to a subclassed NSManagedObject

    - by CJ
    I have a Core Data model where I have an entity A, which is abstract. Entities B, C, and D inherit from entity A. There are several properties defined in entity A which are used by B, C, and D. I would like to leverage this inheritance in my model code. In addition to properties, I am wondering if I can add methods to entity A which are implemented in its sub-entities. For example: I add a method to the interface for entity A which returns a value and takes one argument; I add implementations of this method to A, B, C, D; then I call executeFetchRequest: to retrieve all instances of B; I call the method on the objects retrieved, which should call the implementation of the method contained in B's implementation. I have tried this, but when calling the method, I receive: [NSManagedObject methodName:]: unrecognized selector sent to instance. I presume this is because the objects returned by executeFetchRequest: are proxy objects of some sort. Is there any way to leverage inheritance using subclassed NSManagedObjects? I would really like to be able to do this, otherwise my model code would be responsible for determining what type of NSManagedObject it's dealing with and performing special logic according to the type, which is undesirable. Any help is appreciated, thanks in advance.

    Read the article

  • Searching a set of data with multiple terms using Linq

    - by Cj Anderson
    I'm in the process of moving from ADO.NET to Linq. The application is a directory search program to look people up. The users are allowed to type the search criteria into a single textbox. They can separate each term with a space, or wrap a phrase in quotes such as "park place" to indicate that it is one term. Behind the scenes the data comes from a XML file that has about 90,000 records in it and is about 65 megs. I load the data into a DataTable and then use the .Select method with a SQL query to perform the searches. The query I pass is built from the search terms the user passed. I split the string from the textbox into an array using a regular expression that will split everything into a separate element that has a space in it. However if there are quotes around a phrase, that becomes it's own element in the array. I then end up with a single dimension array with x number of elements, which I iterate over to build a long query. I then build the search expression below: query = query & _ "((userid LIKE '" & tempstr & "%') OR " & _ "(nickname LIKE '" & tempstr & "%') OR " & _ "(lastname LIKE '" & tempstr & "%') OR " & _ "(firstname LIKE '" & tempstr & "%') OR " & _ "(department LIKE '" & tempstr & "%') OR " & _ "(telephoneNumber LIKE '" & tempstr & "%') OR " & _ "(email LIKE '" & tempstr & "%') OR " & _ "(Office LIKE '" & tempstr & "%'))" Each term will have a set of the above query. If there is more than one term, I put an AND in between, and build another query like above with the next term. I'm not sure how to do this in Linq. So far, I've got the XML file loading correctly. I'm able to search it with specific criteria, but I'm not sure how to best implement the search over multiple terms. 'this works but far too simple to get the job done Dim results = From c In m_DataSet...<Users> _ Where c.<userid>.Value = "XXXX" _ Select c The above code also doesn't use the LIKE operator either. So partial matches don't work. It looks like what I'd want to use is the .Startswith but that appears to be only in Linq2SQL. Any guidance would be appreciated. I'm new to Linq, so I might be missing a simple way to do this. The XML file looks like so: <?xml version="1.0" standalone="yes"?> <theusers> <Users> <userid>person1</userid> <nickname></nickname> <lastname></lastname> <firstname></firstname> <department></department> <telephoneNumber></telephoneNumber> <email></email> </Users> <Users> <userid>person2</userid> <nickname></nickname> <lastname></lastname> <firstname></firstname> <department></department> <telephoneNumber></telephoneNumber> <email></email> </Users>
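    The AND combination the question describes is easy to see in a small sketch: split the box contents into terms (quoted phrases staying whole), then keep a record only if every term prefixes at least one field. A rough Python illustration of just that logic, with field names taken loosely from the XML above and made-up sample data:

    # Rough sketch of the AND logic: a record survives only if *every* search
    # term is a prefix of at least one of its fields. Field names follow the
    # XML above loosely; the sample data is made up.
    import shlex

    FIELDS = ("userid", "nickname", "lastname", "firstname",
              "department", "telephoneNumber", "email")

    def matches(record, criteria):
        terms = shlex.split(criteria.lower())     # '"park place" bob' -> ['park place', 'bob']
        return all(
            any((record.get(f) or "").lower().startswith(term) for f in FIELDS)
            for term in terms
        )

    people = [{"userid": "bbarker", "firstname": "Bob", "lastname": "Barker",
               "department": "Park Place"}]
    print([p for p in people if matches(p, '"park place" bob')])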

    Read the article

  • .NET Regular Expression to split multiple words or phrases

    - by Cj Anderson
    I'm using the code below to take a string and split it up into an array. It will take: Disney Land and make it two separate elements. If the string contains "Disney Land" then it is one element in the array. Works great, however it adds some empty elements to the array each time. So I just iterate over the elements and remove them if they are empty. Is there a tweak to the code below that will prevent those empty elements from occurring? Private m_Reg As Regex m_Reg = New Regex("([^""^\s]+)\s*|""([^""]+)""\s*") Dim rezsplit = m_Reg.Split(criteria)
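    The empty entries are a known side effect of splitting with a pattern that matches at the string boundaries and uses capturing groups; matching the tokens directly instead of splitting sidesteps it. Here is the idea sketched in Python with an equivalent pattern (a .NET Regex.Matches approach would be analogous):

    # Sketch of tokenizing by matching tokens rather than splitting, which avoids
    # the empty pieces a split produces around and between adjacent matches.
    # Same idea as the VB.NET snippet above, illustrated in Python.
    import re

    TOKEN = re.compile(r'"([^"]+)"|(\S+)')        # a quoted phrase OR a bare word

    def tokens(criteria):
        return [quoted or bare for quoted, bare in TOKEN.findall(criteria)]

    print(tokens('Disney Land'))                  # ['Disney', 'Land']
    print(tokens('"Disney Land" tickets'))        # ['Disney Land', 'tickets']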

    Read the article

  • VB.NET switching from ADO.NET to LINQ

    - by Cj Anderson
    I'm VERY new to Linq. I have an application I wrote that is in VB.NET 2.0. Works great, but I'd like to switch this application to Linq. I use ADO.NET to load XML into a datatable. The XML file has about 90,000 records in it. I then use the Datatable.Select to perform searches against that Datatable. The search control is a free form textbox. So if the user types in terms it searches instantly. Any further terms that are typed in continue to restrict the results. So you can type in Bob, or type in Bob Barker. Or type in Bob Barker Price is Right. The more criteria typed in the more narrowed your result. I bind the results to a gridview. Moving forward what all do I need to do? From a high level, I assume I need to: 1) Go to Project Properties -- Advanced Compiler Settings and change the Target framework to 3.5 from 2.0. 2) Add the reference to System.XML.Linq, Add the Imports statement to the classes. So I'm not sure what the best approach is going forward after that. I assume I use XDocument.Load, then my search subroutine runs against the XDocument. Do I just do the standard Linq query for this sort of repeated search? Like so: var people = from phonebook in doc.Root.Elements("phonebook") where phonebook.Element("userid") = "whatever" select phonebook; Any tips on how to best implement?

    Read the article

  • Storing cookielib cookies in a database

    - by Mridang Agarwalla
    Hi, I'm using the cookielib module to handle HTTP cookies when using the urllib2 module in Python 2.6, in a way similar to this snippet: import cookielib, urllib2 cj = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) r = opener.open("http://example.com/") I'd like to store the cookies in a database. I don't know what's better - serializing the CookieJar object and storing it, or extracting the cookies from the CookieJar and storing those - or how to implement either of them. I should also be able to recreate the CookieJar object. Could someone help me out with the above? Thanks in advance.
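    Either route can work; below is a small sketch of the "extract the cookies and store those" option, pickling the cookielib.Cookie objects into SQLite and putting them back with set_cookie() later. The table name and schema are only for illustration.

    # Small sketch of persisting a CookieJar's cookies in SQLite and rebuilding
    # the jar later. cookielib.Cookie objects are plain data holders, so pickling
    # the list of them is enough; the table name/schema here is illustrative.
    import sqlite3
    import pickle
    import cookielib

    def save_cookies(cj, db_path):
        blob = pickle.dumps(list(cj), pickle.HIGHEST_PROTOCOL)  # a CookieJar is iterable
        db = sqlite3.connect(db_path)
        db.execute("CREATE TABLE IF NOT EXISTS cookies (id INTEGER PRIMARY KEY, data BLOB)")
        db.execute("INSERT INTO cookies (data) VALUES (?)", (sqlite3.Binary(blob),))
        db.commit()
        db.close()

    def load_cookies(db_path):
        db = sqlite3.connect(db_path)
        row = db.execute("SELECT data FROM cookies ORDER BY id DESC LIMIT 1").fetchone()
        db.close()
        cj = cookielib.CookieJar()
        if row is not None:
            for cookie in pickle.loads(str(row[0])):            # str() handles the BLOB buffer
                cj.set_cookie(cookie)
        return cj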

    Read the article

  • Python urllib3 and how to handle cookie support?

    - by bigredbob
    So I'm looking into urllib3 because it has connection pooling and is thread safe (so performance is better, especially for crawling), but the documentation is... minimal to say the least. urllib2 has build_opener so something like: #!/usr/bin/python import cookielib, urllib2 cj = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) r = opener.open("http://example.com/") But urllib3 has no build_opener method, so the only way I have figured out so far is to manually put it in the header: #!/usr/bin/python import urllib3 http_pool = urllib3.connection_from_url("http://example.com") myheaders = {'Cookie':'some cookie data'} r = http_pool.get_url("http://example.org/", headers=myheaders) But I am hoping there is a better way and that one of you can tell me what it is. Also can someone tag this with "urllib3" please.

    Read the article

  • .NET --- Textbox control - wait till user is done typing

    - by Cj Anderson
    Greetings all, Is there a built-in way to know when a user is done typing into a textbox? (Before hitting Tab or moving the mouse.) I have a database query that occurs on the TextChanged event and everything works perfectly. However, I noticed that there is a bit of lag, of course, because if a user is quickly typing into the textbox the program is busy doing a query for each character. So what I was hoping for was a way to see if the user has finished typing. So if they type "a" and stop, then an event fires. However, if they type "all the way" the event fires after the "y" keyup. I have some ideas floating around my head, but I'm sure they aren't the most efficient. Like measuring the time since the last TextChanged event and, if it was more than a certain value, then proceeding to run the rest of my procedures. Let me know what you think. Language: VB.NET Framework: .NET 2.0 --Edited to clarify "done typing"
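    The usual shape of this is a short one-shot timer that is restarted on every keystroke and only fires once no new text has arrived for, say, 300 ms. Sketched here in Python rather than VB.NET, since the mechanics are the same (in .NET you would restart a Timer from the TextChanged handler in the same way):

    # Sketch of the debounce pattern: restart a short one-shot timer on every
    # keystroke and run the query only when no new text arrives before it fires.
    # Pure-Python illustration of the idea, not a drop-in VB.NET solution.
    import threading
    import time

    class Debouncer(object):
        def __init__(self, delay, callback):
            self.delay = delay              # seconds to wait after the last change
            self.callback = callback
            self._timer = None

        def text_changed(self, text):
            if self._timer is not None:
                self._timer.cancel()        # a newer keystroke arrived; start over
            self._timer = threading.Timer(self.delay, self.callback, args=(text,))
            self._timer.start()

    def run_query(text):
        print("query for: " + text)

    if __name__ == "__main__":
        d = Debouncer(0.3, run_query)
        for chunk in ("a", "al", "all the way"):
            d.text_changed(chunk)           # only the last value triggers run_query
            time.sleep(0.05)
        time.sleep(0.5)                     # give the final timer a chance to fire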

    Read the article

  • Why is floating point byte swapping different from integer byte swapping?

    - by CJ
    I have a binary file of doubles that I need to load using C++. However, my problem is that it was written in big-endian format but the fstream operator will then read the number wrong because my machine is little-endian. It seems like a simple problem to resolve for integers, but for doubles and floats the solutions I have found won't work. How can I (or should I) fix this? I read this as a reference for integer byte swapping: http://stackoverflow.com/questions/105252/how-do-i-convert-between-big-endian-and-little-endian-values-in-c
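    The byte-level fix is the same as for integers: reverse the 8 bytes and only then reinterpret them as a double; never do the swap on a value that has already been read as a double. A quick Python sketch of reading such a file, just to show the idea (in C++ the equivalent is swapping into a uint64_t and memcpy-ing that into a double); the file name is a placeholder.

    # Quick byte-level illustration: a big-endian double is the same 8 bytes in
    # reverse order, so either decode with '>' directly or reverse each group
    # and decode with '<'. "doubles.bin" is a placeholder file name.
    import struct

    with open("doubles.bin", "rb") as f:
        raw = f.read()

    count = len(raw) // 8
    raw = raw[:8 * count]                   # ignore any trailing partial record

    # Interpret directly as big-endian doubles...
    big_endian = struct.unpack(">%dd" % count, raw)

    # ...which is equivalent to reversing each 8-byte group and reading little-endian.
    swapped = b"".join(raw[i:i + 8][::-1] for i in range(0, len(raw), 8))
    little_endian = struct.unpack("<%dd" % count, swapped)

    assert big_endian == little_endian
    print(big_endian[:5])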

    Read the article

  • Returning more than one result

    - by Hairr
    I'm using the following code: def recentchanges(bot=False,rclimit=20): """ @description: Gets the last 20 pages edited on the recent changes and who the user who edited it """ recent_changes_data = { 'action':'query', 'list':'recentchanges', 'rcprop':'user|title', 'rclimit':rclimit, 'format':'json' } if bot is False: recent_changes_data['rcshow'] = '!bot' else: pass data = urllib.urlencode(recent_changes_data) response = opener.open('http://runescape.wikia.com/api.php',data) content = json.load(response) pages = tuple(content['query']['recentchanges']) for title in pages: return title['title'] When I do recentchanges() I only get one result. If I print it though, all the pages are printed. Am I just misunderstanding or is this something relating to python? Also, opener is: cj = CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
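    One thing worth noting about the loop at the end: return exits the function on the first pass, which is why only one title comes back while the print version shows them all. Collecting into a list (or yielding) returns everything; a minimal sketch with dummy data:

    # 'return' inside the for loop ends the function on the first title; build a
    # list (or yield) to get them all. The dummy data stands in for the API result.
    def first_title_only(pages):
        for page in pages:
            return page['title']            # exits here on the very first item

    def all_titles(pages):
        return [page['title'] for page in pages]

    def iter_titles(pages):
        for page in pages:
            yield page['title']             # lazily produces every title

    pages = [{'title': 'RuneScape', 'user': 'a'}, {'title': 'Grand Exchange', 'user': 'b'}]
    print(first_title_only(pages))          # RuneScape
    print(all_titles(pages))                # ['RuneScape', 'Grand Exchange']
    print(list(iter_titles(pages)))         # ['RuneScape', 'Grand Exchange']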

    Read the article

  • Creating a Ruby method that pads an Array

    - by CJ Johnson
    I'm working on creating a method that pads an array, and accepts 1. a desired value and 2. an optional string/integer value. Desired_size reflects the desired number of elements in the array. If a string/integer is passed in as the second value, this value is used to pad the array with extra elements. I understand there is a 'fill' method that can shortcut this - but that would be cheating for the homework I'm doing. The issue: no matter what I do, only the original array is returned. I started here: class Array def pad(desired_size, value = nil) desired_size >= self.length ? return self : (desired_size - self.length).times.do { |x| self << value } end end test_array = [1, 2, 3] test_array.pad(5) From what I researched the issue seemed to be around trying to alter self's array, so I learned about .inject and gave that a whirl: class Array def pad(desired_size, value = nil) if desired_size >= self.length return self else (desired_size - self.length).times.inject { |array, x| array << value } return array end end end test_array = [1, 2, 3] test_array.pad(5) The interwebs tell me the problem might be with any reference to self so I wiped that out altogether: class Array def pad(desired_size, value = nil) array = [] self.each { |x| array << x } if desired_size >= array.length return array else (desired_size - array.length).times.inject { |array, x| array << value } return array end end end test_array = [1, 2, 3] test_array.pad(5) I'm very new to classes and still trying to learn about them. Maybe I'm not even testing them the right way with my test_array? Otherwise, I think the issue is I get the method to recognize the desired_size value that's being passed in. I don't know where to go next. Any advice would be appreciated. Thanks in advance for your time.

    Read the article

  • How can I limit the number of connections to an MSSQL server from my Tomcat-deployed Java application

    - by CJ
    Hi, I have an application that is deployed on Tomcat on server A and sends queries to a huge variety of MSSQL databases on server B. I am concerned that my application could overload this MSSQL database server and would like some way of preventing it from making requests to connect to any database on that server if some arbitrary number of connections were already in existence and unclosed. I am looking at using connection pooling, but am under the impression that this will only pool connections to a specific database on the MSSQL server, whereas I want to control the total of these combined connections that will occur across many different databases (incidentally, I can only find out the names of individual DBs dynamically, as they change day to day). Will connection pooling take care of this for me, or am I looking at this from the wrong perspective? I have no access to the configuration of the MSSQL server. Links to tutorials or working examples of your suggested solution are most welcome! Thanks, Caroline

    Read the article
