Search Results

Search found 23082 results on 924 pages for 'address space'.


  • Why does everybody have the same MAC address as me?

    - by iblue
    I just bought some consumer-grade McCheap PCI-E NICs (and this was a bad idea). They both have the same MAC address. When I google it, it seems like every card of that company has the same address: 00:50:43:00:45:3e. Shouldn't they be unique? According to lspci it's a Marvell Technology Group Ltd. 88E8053 PCI-E Gigabit Ethernet Controller (rev 20). Is there a way to permanently flash a new address?
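
    Whether the EEPROM can be rewritten at all depends on the card and on vendor tooling, but on Linux you can at least override the MAC at runtime. A sketch (eth0 and the address are placeholders):

      sudo ip link set dev eth0 down
      sudo ip link set dev eth0 address 00:16:3e:00:45:01
      sudo ip link set dev eth0 up

    Making the override survive reboots is then a matter of putting it in your distribution's network configuration rather than flashing the NIC.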

    Read the article

  • How do I get a static IP address for my teapot?

    - by Joe
    I maintain this teapot: it returns a 418 error when it is pinged, and it is pinged regularly by people arriving from the relevant Wikipedia page. (For those interested, and there appear to be a few of you, the relevant story is here.) It sits on a shelf in my office in the Computer Science department of a university, and the support guys were kind enough to give it a dedicated IP address some years ago. My contract is coming to an end in the next few weeks, and it's occurred to me that I'm going to have to do something with the teapot. I'd like to take it home, but I have no clue how to explain to my home broadband supplier that I want a dedicated IP address coming to my house so that people can ping a teapot. Is there a reasonable way of having a server on a shelf in my house that people can ping via an IP address? What search terms can I use to find a solution to this problem?

    Read the article

  • Is it possible to use the same MAC address for an entire subnet?

    - by Bruce
    I wish to add static entries to the ARP table of my machine so that it uses a dummy MAC 00:11:22:33:44:55 for any IP address within the subnet 10.0.0.0/8. Using arp -s 10.0.0.0/8 00:11:22:33:44:55 does not work. What can I do? PS - I know it might sound strange that anyone would want to do this, but kindly bear with me here. EDIT: I am using this so that the hosts do not send broadcast ARP messages. I route the packets to the appropriate last-hop router, which changes the dst MAC from the fake MAC to the MAC of the dst IP address. I can get everything working, except that I have to manually enter the fake MAC for each IP address in the subnet.
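
    Since arp -s takes a single host address rather than a CIDR block, one blunt workaround is to script the entries. A sketch (assuming the eth0 interface, and a /24 rather than the full /8, which would mean roughly 16 million entries):

      # add a permanent neighbour entry for every host in 10.0.0.0/24
      for i in $(seq 1 254); do
          sudo ip neigh add 10.0.0.$i lladdr 00:11:22:33:44:55 dev eth0 nud permanent
      done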

    Read the article

  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third-party providers that are hosted in the Azure Datamarket marketplace. In this review, I leverage SQL Server 2012 Data Quality Services and proceed to subscribe to a third-party data cleansing product through the Datamarket to showcase this unique capability.

    Crucial Questions

    For the purposes of the review, I used a dataset I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others are missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third-party knowledge base by the acknowledged USPS-certified address accuracy experts at Melissa Data.

    Reference Data Services

    Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization. These capabilities extend the data quality that DQS has natively by quite a bit.

    For my current address correction review, I needed to first sign up to a reference data provider on the Azure Data Market site. For this example, I used Melissa Data's Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys page within My Account to view the generated account key, which I then inserted into the DQS Client - Configuration under the Administration area.

    Step-by-Step Guide

    That was all it took to hook the subscribed provider - Melissa Data - directly into my DQS Client. The next step was for me to attach and map in my reference data from the newly acquired reference data provider to a domain in my knowledge base. On the DQS Client home screen, I selected "New Knowledge Base" under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a name and description for my new knowledge base, then proceeded to the Domain Management screen.
    Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name, Address, Suite, City, State, Zip. I added a Suite column in my domain because Melissa Data has the ability to return missing suites based on last name or company. And that's a great benefit of using these third-party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third-party data providers, I can actually improve my data.

    Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and, under the Reference Data tab, added my third-party reference data provider - Melissa Data's Address Check - and mapped each domain that I had to the provider's schema. Now that my composite domain has been married to the reference data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data.

    My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes, with some progress indicators showing that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested, new, invalid, corrected (automatically), and correct. Accepting the suggestions provided by Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simple for any IT professional.

    Final Note

    As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it's easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle's acquisition of Datanomic and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field. To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration!

    As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich, out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box.

    Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack - a ready set of standard processes with standard interfaces, to provide integrated:

    - Address verification and cleansing
    - Individual matching
    - Organization matching

    The services are suitable for either batch or real-time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total across all locales, CDS includes well over a million dictionary entries.

    [Image: Excerpt from EDQ's CDS Individual Name Standardization Dictionary]

    CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a 'best of both worlds' situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate 'Identity Resolution' and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products.

    Here is a brief guide to the main services provided in the pack.

    Address Verification and Standardization

    [Image: EDQ's CDS Address Cleaning Process]

    The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch.
    The Address Verification processor is wrapped in an EDQ process - this adds significant capabilities over calling the underlying Address Verification API directly, specifically:

    - Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
    - Optimization of address verification by pre-standardizing data where required
    - Formatting of output addresses into the input address fields normally used by applications
    - Adding descriptions of the address verification and geocoding return codes

    The process can then be used to provide real-time and batch address cleansing in any application; for example, a simple web page calling address cleaning and geocoding as part of a check on individual data.

    Duplicate Prevention

    Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records ('candidates') that may match in the application data (which has been pre-seeded with keys using the same service). The 'driving record' (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below:

    [Image: EDQ Duplicate Prevention Architecture]

    Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit).

    Batch Matching

    In order to allow customers to use different match rules in batch than in real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by data stewards. In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel's Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel's 'Full Match' (match all records against each other) and 'Incremental Match' (match a subset of records against all of their selected candidates) modes.
    The Future

    The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:

    - An out-of-the-box Data Quality Dashboard
    - Even more comprehensive international data handling
    - Address search (suggesting multiple results)
    - Integrated address matching

    The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • How do I set up a virtual network interface with its own IP address?

    - by Stefano Palazzo
    I vaguely remember that it's possible to set up virtual network interfaces with their own IP addresses, using only one physical network connection. I can find a few guides on the internet that recommend setting these up in /etc/network/interfaces, but Ubuntu doesn't use this file. Therefore my question: What's the correct way of setting these up in recent versions of Ubuntu? As this is a laptop, and I need it to connect to all kinds of different networks, I want to keep the network manager and all its configuration. To be more clear: at the end of this, I want to have a new network interface (e.g. "eth42") with its own IP address, but using whatever is connected in network manager to send the actual packets. In NM, it should appear as if I just had a second ethernet adapter installed in my system.
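
    A sketch of one way to do this with iproute2 (interface names and addresses are placeholders): a macvlan device rides on top of the physical NIC but gets its own MAC and IP.

      sudo ip link add link eth0 name eth42 type macvlan
      sudo ip addr add 192.168.1.42/24 dev eth42
      sudo ip link set eth42 up

    Note this is a sketch only: NetworkManager will not manage eth42 automatically, and a macvlan interface is tied to one physical parent, so it will not follow whatever connection is active in NM the way the question hopes.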

    Read the article

  • How do I set a static DNS nameserver address on Ubuntu Server?

    - by Aleks
    I am trying to statically set DNS server addresses on my Ubuntu server running as a virtual machine. I followed all the recommendations on the official Ubuntu support pages, but I simply cannot get rid of my ISP's DNS servers set by DHCP. I assigned the br0 interface on my host machine a static IP address, and set eth0 on the VM to use Google DNS and my own local DNS running on the second VM, by setting it in /etc/network/interfaces. I tried fiddling with the head, base and tail files in /etc/resolvconf/resolv.conf.d/ and tried to shuffle /etc/resolvconf/interface-order, but when I restarted the network service I got the ISP's DNS addresses back every time. Is there a way that I can disable resolvconf and set up my resolv.conf file manually, as I always did on Red Hat? Or can you at least tell me which hook script keeps putting the ISP's DNS servers in resolv.conf? My ISP doesn't allow me to change the DHCP settings on my router, so I cannot do it that way. Why has such a simple thing as setting DNS servers become so complicated?
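
    For reference, a sketch of the interfaces-file approach (addresses are placeholders); with the resolvconf package installed, the dns-nameservers line is what feeds resolv.conf:

      # /etc/network/interfaces (sketch)
      auto eth0
      iface eth0 inet static
          address 192.168.1.10
          netmask 255.255.255.0
          gateway 192.168.1.1
          dns-nameservers 8.8.8.8 192.168.1.53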

    Read the article

  • How to get the IP address of Google's search/crawler bot to add to our whitelist of IP addresses

    - by Jayapal Chandran
    Hi, Google Webmaster Tools says "network unreachable". When I contacted my hosting provider, they said that they have installed a firewall which can block frequent incoming IP addresses, and they don't know Google's IP addresses to unblock. So they requested me to find the Google search/crawler bot's IP addresses so that they can add them to their whitelist. How do I find the IP address of the Google search bot or crawler bot? My site stopped appearing in Google search, and my hits have gone too low. What should I do? Any kind of reply would be helpful.
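
    Google does not publish a stable list of crawler IPs; its documented advice is to verify Googlebot with a reverse DNS lookup followed by a forward lookup. A sketch (the IP shown is just an example):

      $ host 66.249.66.1
      1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com.
      $ host crawl-66-249-66-1.googlebot.com
      crawl-66-249-66-1.googlebot.com has address 66.249.66.1

    Any address whose PTR record ends in googlebot.com and forward-resolves back to the same IP can reasonably be whitelisted.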

    Read the article

  • Why Is Another Domain Resolving To My IP Address?

    - by Andrew
    I'm not really sure if this is something that I should worry about... I'm currently renting a dedicated server which is hosting a website I've created. The domain of the website was registered with GoDaddy. After submitting a sitemap to Google several months ago, I've noticed that another domain name is resolving to my IP address. This means that every page on my website is actually accessible from another domain. As far as I can tell, the other domain name is meaningless to me, so I'm not sure if this is something I should worry about or not. Is this a residual DNS record from another site that is probably no longer in use? Is it important from the standpoint of either security or SEO? My website is a .com which will later serve e-commerce purposes. The other domain has a top-level domain of .st; it's the first one of those that I've encountered. Many thanks in advance!
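
    If the stray domain bothers you, one common approach is to have the catch-all server block refuse unknown Host headers so only your own domain serves the site. A sketch in nginx syntax (the domain and path are placeholders):

      server {
          listen 80 default_server;
          server_name _;
          return 444;   # close the connection for any other Host header
      }
      server {
          listen 80;
          server_name example.com;   # your real domain
          root /var/www/mysite;
      }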

    Read the article

  • Do you have an address column somewhere in your database?

    - by shay.shmeltzer
    Do you have an address column somewhere in your database? If the answer is yes, then the web seminar I'm going to run together with the guys from Navteq on May 26th might be of interest to you. You see, we all have geography-related information in our databases, but many of us don't actually use it for any geographical operations or representations. Well, once you attend the "Add Maps to Your Java Applications - the Easy Way" seminar, this might change. In the seminar we'll give you a quick overview of the spatial capabilities of the Oracle DB, middleware and tools, and a demo showing you how easy it is to actually get data to show up on a map in your application and to interact with it. So register today, and mark your calendar.

    Read the article

  • Why can't I ping the server? VMware set to 'Bridged' loses IP address on 10.04

    - by Dave
    I have installed a fresh 10.04 server onto a laptop on a home network as a VMware machine, set the network connection to 'Bridged: connect directly to the physical network' from within VMware, and rebooted the server. It then loses its IP address. dhclient eth0 says "No working leases in persistent database - sleeping". DHCP is working fine on the wi-fi router. The laptop is wired to a wireless router, and from there wirelessly to a desktop. Desktop and laptop can ping each other from Windows. I can ping the VM from Windows on the same laptop, but not from the desktop. Strangely, ping has started to resolve hostnames to IPv6 addresses and not IPv4; I don't know whether that's connected. A kick in the right direction would be greatly appreciated. I've been an Ubuntu desktop user for a few years, but I'm new to Ubuntu servers.

    Read the article

  • SharePoint Search Problem: The start address sps3://server cannot be crawled.

    - by Clara Oscura
    With this post, I'm going to start a series on problems I have encountered with SharePoint search.

    Error: The start address sps3://luapp105 cannot be crawled.
    Context: Application 'Search_Service_Application', Catalog 'Portal_Content'
    Details: Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled. (0x80041205) (Event ID: 14, Task Category: Gatherer)

    Solution: give appropriate permissions to the User Profile Synchronisation Service.

    http://social.technet.microsoft.com/Forums/en-US/sharepoint2010setup/thread/64cdf879-f01e-4595-bc52-15975fefd18d
    http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/03/29/how-to-set-up-people-search-in-sharepoint-2010.aspx
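
    The Details text above points at the usual fix. In scripted form (a sketch only, run from the SharePoint 2010 Management Shell; the URL and account name are placeholders), granting the content access account "Full Read" on the web application looks roughly like:

      $wa = Get-SPWebApplication "http://server"
      $policy = $wa.Policies.Add("DOMAIN\sp_crawl", "Search Crawl Account")
      $role = $wa.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead)
      $policy.PolicyRoleBindings.Add($role)
      $wa.Update()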

    Read the article

  • Code Golf: Code 39 Bar Code

    - by gwell
    The challenge

    The shortest code by character count to draw an ASCII representation of a Code 39 bar code.

    Wikipedia article about Code 39: http://en.wikipedia.org/wiki/Code_39

    Input

    The input will be a string of legal characters for Code 39 bar codes. This means 43 characters are valid: 0-9, A-Z, (space) and -.$/+%. The * character will not appear in the input, as it is used as the start and stop character.

    Output

    Each character encoded in Code 39 has nine elements: five bars and four spaces. Bars will be represented with # characters, and spaces will be represented with the space character. Three of the nine elements will be wide. The narrow elements will be one character wide, and the wide elements will be three characters wide. An inter-character space of a single space should be added between each character pattern. The pattern should be repeated so that the height of the bar code is eight characters. The start/stop character * (bWbwBwBwb) would be represented like this:

                           #   # ### ### #
                           #   # ### ### #
                           #   # ### ### #
                           #   # ### ### #
                           #   # ### ### #
                           #   # ### ### #
                           #   # ### ### #
                           #   # ### ### #
                           ^ ^ ^^ ^ ^ ^ ^^^
                           | | || | | | |||
               narrow bar -+ | || | | | |||
               wide space ---+ || | | | |||
               narrow bar -----+| | | | |||
             narrow space ------+ | | | |||
                 wide bar --------+ | | |||
             narrow space ----------+ | |||
                 wide bar ------------+ |||
             narrow space --------------+||
               narrow bar ---------------+|
    inter-character space ----------------+

    The start and stop character * will need to be output at the start and end of the bar code. No quiet space will need to be included before or after the bar code. No check digit will need to be calculated. Full ASCII Code 39 encoding is not required, just the standard 43 characters. No text needs to be printed below the ASCII bar code representation to identify the output contents. The character # can be replaced with another character of higher density if wanted: using the full block character U+2588 would allow the bar code to actually scan when printed.
    Test cases

    Input: ABC
    Output:
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #
    #   # ### ### # ### # #   # ### # ### #   # ### ### ### #   # # #   # ### ### #

    Input: 1/3
    Output:
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #
    #   # ### ### # ### #   # # ### #   #   # #   # ### ###   # # # #   # ### ### #

    Input: - $ (minus space dollar)
    Output:
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #
    #   # ### ### # #   # # # ### ### #   ### # ### # #   #   #   # # #   # ### ### #

    Code count includes input/output (full program).
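
    For readers puzzling over the spec rather than golfing it, here is an ungolfed Python sketch. The encoding table is partial (only the characters the test cases use, transcribed from the patterns above); a real entry would extend it to all 43 characters.

      # Each entry lists the 9 elements in bar/space alternation, starting and
      # ending with a bar; '1' marks a wide element.
      CODES = {
          '*': '010010100', 'A': '100001001', 'B': '001001001', 'C': '101001000',
          '1': '100100001', '3': '101100000', '/': '010100010', '-': '010000101',
          ' ': '011000100', '$': '010101000',
      }

      def pattern(ch):
          # even positions are bars ('#'), odd positions are spaces (' ');
          # wide elements are three characters, narrow ones a single character
          return ''.join(('#' if i % 2 == 0 else ' ') * (3 if w == '1' else 1)
                         for i, w in enumerate(CODES[ch]))

      def code39(text):
          row = ' '.join(pattern(c) for c in '*' + text + '*')  # single inter-character space
          return '\n'.join([row] * 8)                           # eight rows high

      print(code39('ABC'))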

    Read the article

  • How to export bind and keyframe bone poses from Blender for use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this.

    Basically, I'm exporting meshes, skeletons and actions from Blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed, but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning, or the way I am exporting the appropriate joint matrices from Blender in my exporter script. I'll explain the theory, the engine animation system and my Blender export script, hoping someone might catch the error in either or all of these.

    The theory (I'm using column-major ordering since that's what I use in the engine, because it's OpenGL-based):

    Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I were to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v.

    Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix: a transform from j's local space to its parent space, which I'll denote Bj. If j were part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is, to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v.

    Now further assume I have a set of frames, each with a second transform Cj, which works the same as Bj, only for a different, arbitrary spatial configuration of joint j. Cj still takes vertices from j space to world space, but j is rotated and/or translated and/or scaled.

    Given the above, in order to skin vertex v at keyframe n, I need to:

    1. take v from world space to joint j space
    2. modify j (while v stays fixed in j space and is thus taken along in the transformation)
    3. take v back from the modified j space to world space

    So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v.

    Actually, I have one doubt here. I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple of textbooks that it needs to be transformed from model space to joint space. But I also said in step 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiplies v' by M and not v, but I've tried changing this and it just screws things up in a different way, because there's something else wrong.

    Finally, if we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But since skinning is defined as v' = Cj * Bj^-1 * v, Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v.

    Now on to the implementation (Blender side). Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton. Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is:

    keyframe 0: the joint is in bind / rest pose (the way you see it in the image).
    keyframe 30: the joint translates up (+z in Blender) some amount and at the same time rotates pi/4 rad clockwise.
    keyframe 59: the joint goes back to the same configuration it was in at keyframe 0.

    My first source of confusion on the Blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the Python API. Right now, this is what my export script does about translating Blender's coordinate system to OpenGL's standard system:

      # World transform: Blender -> OpenGL
      worldTransform = Matrix().Identity(4)
      worldTransform *= Matrix.Scale(-1, 4, (0,0,1))
      worldTransform *= Matrix.Rotation(radians(90), 4, "X")

      # Mesh (local) transform matrix
      file.write('Mesh Transform:\n')
      localTransform = mesh.matrix_local.copy()
      localTransform = worldTransform * localTransform
      for col in localTransform.col:
          file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3]))
      file.write('\n')

    So if you will, my "world" matrix is basically the act of changing Blender's coordinate system to the default GL one, with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M, so that I don't need to multiply it again once per draw call in the engine.

    About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following. For joint bind poses:

      def DFSJointTraversal(file, skeleton, jointList):
          for joint in jointList:
              bindPoseJoint = skeleton.data.bones[joint.name]
              bindPoseTransform = bindPoseJoint.matrix_local.inverted()
              file.write('Joint ' + joint.name + ' Transform {\n')
              translationV = bindPoseTransform.to_translation()
              rotationQ = bindPoseTransform.to_3x3().to_quaternion()
              scaleV = bindPoseTransform.to_scale()
              file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
              file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
              file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
              DFSJointTraversal(file, skeleton, joint.children)
              file.write('}\n')

    Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same, only not homogeneous.

    For joint current / keyframe poses:

      for kfIndex in keyframes:
          bpy.context.scene.frame_set(kfIndex)
          file.write('keyframe: {:d}\n'.format(int(kfIndex)))
          for i in range(0, len(skeleton.data.bones)):
              file.write('joint: {:d}\n'.format(i))
              currentPoseJoint = skeleton.pose.bones[i]
              currentPoseTransform = currentPoseJoint.matrix
              translationV = currentPoseTransform.to_translation()
              rotationQ = currentPoseTransform.to_3x3().to_quaternion()
              scaleV = currentPoseTransform.to_scale()
              file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
              file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
              file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))
          file.write('\n')

    Note that here I go for skeleton.pose.bones instead of data.bones, and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel.
    From the descriptions in the Python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case.

    The implementation (engine / OpenGL side): my animation subsystem does the following on each update (I'm omitting the parts of the update loop where it's figured out which objects need updating, and time is hardcoded here for simplicity):

      static double time = 0;
      time = fmod((time + elapsedTime), 1.);
      uint16_t LERPKeyframeNumber = 60 * time;
      uint16_t lkeyframeNumber = 0;
      uint16_t lkeyframeIndex = 0;
      uint16_t rkeyframeNumber = 0;
      uint16_t rkeyframeIndex = 0;
      for (int i = 0; i < aClip.keyframesCount; i++) {
          uint16_t keyframeNumber = aClip.keyframes[i].number;
          if (keyframeNumber <= LERPKeyframeNumber) {
              lkeyframeIndex = i;
              lkeyframeNumber = keyframeNumber;
          } else {
              rkeyframeIndex = i;
              rkeyframeNumber = keyframeNumber;
              break;
          }
      }
      double lTime = lkeyframeNumber / 60.;
      double rTime = rkeyframeNumber / 60.;
      double blendFactor = (time - lTime) / (rTime - lTime);
      GLKMatrix4 bindPosePalette[aSkeleton.jointsCount];
      GLKMatrix4 currentPosePalette[aSkeleton.jointsCount];
      for (int i = 0; i < aSkeleton.jointsCount; i++) {
          F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i];
          F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i];
          GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor);
          GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor);
          GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor);
          GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation);
          currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation);
          currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling);
          GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q);
          inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t);
          inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s);
          if (aSkeleton.joints[i].parentIndex == -1) {
              bindPosePalette[i] = inverseBindTransform;
              currentPosePalette[i] = currentTransform;
          } else {
              bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]);
              currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform);
          }
          aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]);
      }

    Finally, this is my vertex shader:

      #version 100
      uniform mat4 modelMatrix;
      uniform mat3 normalMatrix;
      uniform mat4 projectionMatrix;
      uniform mat4 skinningPalette[6];
      uniform lowp float skinningEnabled;
      attribute vec4 position;
      attribute vec3 normal;
      attribute vec2 tCoordinates;
      attribute vec4 jointsWeights;
      attribute vec4 jointsIndices;
      varying highp vec2 tCoordinatesVarying;
      varying highp float lIntensity;
      void main() {
          tCoordinatesVarying = tCoordinates;
          vec4 skinnedVertexPosition = vec4(0.);
          for (int i = 0; i < 4; i++) {
              skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position;
          }
          vec4 skinnedNormal = vec4(0.);
          for (int i = 0; i < 4; i++) {
              skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.);
          }
          vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled);
          vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, skinningEnabled);
          vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz);
          vec3 lightPosition = vec3(0., 0., 2.);
          lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition)));
          gl_Position = projectionMatrix * modelMatrix * finalPosition;
      }

    The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis according to my transform in the export script). And the rotation angle is counterclockwise instead of clockwise. If I try with more than one joint, then it's almost as if the second joint rotates in its own different coordinate space and does not follow its parent's transform 100%, which I assume it should, given my animation subsystem, which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?

    Read the article

  • XP deployment issues due to msvcr90.dll trying to load FlsAlloc

    - by Sorin Sbarnea
    I have an application built with VS2008 SP1a (9.0.30729.4148) on Windows 7 x64 that does not want to start under XP. The message is: "The application failed to initialize properly (0x80000003). Click on OK to terminate the application." I checked with depends.exe and found that msvcr90.dll does try to load FlsAlloc from KERNEL32.dll - and FlsAlloc is available only starting with Vista. I'm sure it is not used by the application. How do I solve the issue?

    - The SxS package is already installed on the target machine. In fact I have all 3 versions of the 9.0 SxS (initial release, SP1, and SP1 + security patch).
    - The application is compiled with _BIND_TO_CURRENT_VCLIBS_VERSION=1.
    - I also defined the right target Windows version in stdafx.h:

      #define WINVER 0x0500
      #define _WIN32_WINNT 0x0500

    Manifest file:

      <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
        <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
          <security>
            <requestedPrivileges>
              <requestedExecutionLevel level="asInvoker" uiAccess="false" />
            </requestedPrivileges>
          </security>
        </trustInfo>
        <dependency>
          <dependentAssembly>
            <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.30729.4148" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" />
          </dependentAssembly>
        </dependency>
        <dependency>
          <dependentAssembly>
            <assemblyIdentity type="win32" name="Microsoft.VC90.MFC" version="9.0.30729.4148" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" />
          </dependentAssembly>
        </dependency>
      </assembly>

    Result from depends:

      Started "c:\program files\app\app.EXE" (process 0xA0) at address 0x00400000. Successfully hooked module.
      Loaded "c:\windows\system32\NTDLL.DLL" at address 0x7C900000. Successfully hooked module.
      Loaded "c:\windows\system32\KERNEL32.DLL" at address 0x7C800000. Successfully hooked module.
      Loaded "c:\program files\app\MFC90.DLL" at address 0x785E0000. Successfully hooked module.
      Loaded "c:\program files\app\MSVCR90.DLL" at address 0x78520000. Successfully hooked module.
      Loaded "c:\windows\system32\USER32.DLL" at address 0x7E410000. Successfully hooked module.
      Loaded "c:\windows\system32\GDI32.DLL" at address 0x77F10000. Successfully hooked module.
      Loaded "c:\windows\system32\SHLWAPI.DLL" at address 0x77F60000. Successfully hooked module.
      Loaded "c:\windows\system32\ADVAPI32.DLL" at address 0x77DD0000. Successfully hooked module.
      Loaded "c:\windows\system32\RPCRT4.DLL" at address 0x77E70000. Successfully hooked module.
      Loaded "c:\windows\system32\SECUR32.DLL" at address 0x77FE0000. Successfully hooked module.
      Loaded "c:\windows\system32\MSVCRT.DLL" at address 0x77C10000. Successfully hooked module.
      Loaded "c:\windows\system32\COMCTL32.DLL" at address 0x5D090000. Successfully hooked module.
      Loaded "c:\windows\system32\MSIMG32.DLL" at address 0x76380000. Successfully hooked module.
      Loaded "c:\windows\system32\SHELL32.DLL" at address 0x7C9C0000. Successfully hooked module.
      Loaded "c:\windows\system32\OLEAUT32.DLL" at address 0x77120000. Successfully hooked module.
      Loaded "c:\windows\system32\OLE32.DLL" at address 0x774E0000. Successfully hooked module.
      Entrypoint reached. All implicit modules have been loaded.
      DllMain(0x78520000, DLL_PROCESS_ATTACH, 0x0012FD30) in "c:\program files\app\MSVCR90.DLL" called.
      GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsAlloc") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543ACC and returned NULL. Error: The specified procedure could not be found (127).
      GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsGetValue") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AD9 and returned NULL. Error: The specified procedure could not be found (127).
      GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsSetValue") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AE6 and returned NULL. Error: The specified procedure could not be found (127).
      GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsFree") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AF3 and returned NULL. Error: The specified procedure could not be found (127).

    Read the article

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery

    In particular I have been reading:

    - SSD and NTFS Compression Speed Increase?
    - Does NTFS compression slow SSD/flash performance?
    - Will somebody benchmark whole disk compression (HD, SSD) please? (may have to scroll up)

    The first link is particularly dreamy, but maybe has its head a little too far in the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): "Using highly compressible data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]." Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore

    It so happens that thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. This means that, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily only occur when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing

    The beginning of this post has me a bit timid; it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.
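
    For what it's worth, the compact command can both apply and undo NTFS compression recursively. A sketch (run from an elevated prompt; the path is the usual default):

      rem compress Program Files in place, recursing into subfolders
      compact /c /s:"C:\Program Files" /i /q

      rem undo it later; /u rewrites the files decompressed
      compact /u /s:"C:\Program Files" /i /q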

    Read the article

  • Cloning and renaming form elements with jQuery

    - by Micor
    I am looking for an effective way to either clone/rename or re-create address fields to offer the ability to submit multiple addresses on the same page. So, with a form example like this:

    <div id="addresses">
        <div class="address">
            <input type="text" name="address[0].street">
            <input type="text" name="address[0].city">
            <input type="text" name="address[0].zip">
            <input type="text" name="address[0].state">
        </div>
    </div>
    <a href="" id="add_address">Add address form</a>

    From what I can understand, there are two options to do this:

    1. Recreate the form field by field and increment the index, which is kind of verbose:

    var index = $(".address").length;
    $('<input>').attr({ name: 'address[' + index + '].street', type: 'text' }).appendTo(...);
    $('<input>').attr({ name: 'address[' + index + '].city', type: 'text' }).appendTo(...);
    $('<input>').attr({ name: 'address[' + index + '].zip', type: 'text' }).appendTo(...);
    $('<input>').attr({ name: 'address[' + index + '].state', type: 'text' }).appendTo(...);

    2. Clone the existing layer and replace the name in the clone:

    $("div.address").clone().appendTo($("#addresses"));

    Which one do you recommend in terms of being more efficient, and if it's #2, can you please suggest how I would go about searching and replacing all occurrences of [0] with [1] ([n])? Thank you.
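
    A sketch of option 2 (selector and id names follow the markup above) that clones the first block and bumps the index in each field name with a regex:

      $('#add_address').click(function (e) {
          e.preventDefault();
          var index = $('#addresses .address').length;          // next index
          var clone = $('#addresses .address:first').clone();
          clone.find('input').each(function () {
              this.name = this.name.replace(/\[\d+\]/, '[' + index + ']');
              this.value = '';                                  // start the new block empty
          });
          clone.appendTo('#addresses');
      });

    Cloning is usually the less brittle choice: the markup stays the single source of truth, so adding a field later doesn't require touching the script.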

    Read the article

  • Invoice & Invoice lines: How do you store customer address information?

    - by elviejo
    Hi, I'm developing an invoicing application. The general idea is to have two tables:

    Invoice (ID, Date, CustomerAddress, CustomerState, CustomerCountry, VAT, Total);
    InvoiceLine (Invoice_ID, ID, Concept, Units, PricePerUnit, Total);

    As you can see, this basic design leads to a lot of repetition of records where the client will have the same address, state and country. So the alternative is to have an address table and then make a relationship Address <- Invoice. However, I think that an invoice is an immutable document and should be stored just the way it was first made. Sometimes customers change their addresses or states, and if the address comes from an address catalog, that change will affect all the previously made invoices. So what is your experience? How is the customer address stored in an invoice? In the Invoice table? An Address table? Or something else? Can you provide pointers to a book, article or document where this is discussed in further detail?
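
    One common pattern (a sketch, not the only answer): keep a normalized address table for the customer's current address, but copy the fields onto the invoice at creation time so the printed document never changes.

      -- hypothetical schema sketch: the BillTo* columns are a snapshot taken
      -- when the invoice is issued and are never updated afterwards
      CREATE TABLE Invoice (
          ID            INTEGER PRIMARY KEY,
          InvoiceDate   DATE NOT NULL,
          CustomerID    INTEGER NOT NULL REFERENCES Customer(ID),
          BillToAddress VARCHAR(200) NOT NULL,
          BillToState   VARCHAR(50),
          BillToCountry VARCHAR(50),
          VAT           DECIMAL(10,2),
          Total         DECIMAL(12,2)
      );

    The CustomerID still links back to the live customer record for reporting, while the snapshot preserves the invoice as issued.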

    Read the article

  • How to get a function name for a function address by reading the coclass's vtable?

    - by Usman
    Hello, I need to call a coclass function by reading its address from the vtable of the COM-exposed interface methods. I need some generic way to read the addresses. I then need to call the function, whose arguments (parameters) I have collected from the TLB, and whose name I know as well. How does that address correspond to the function name which I am going to call? For this I need to traverse the vtable, which holds the function addresses, and lastly correspond each function address with the NAME of that function. This is what I don't know how to do. Moreover, functions with the same name may appear in the vtable (the overloading case). In that case we need to distinguish the function names by their addresses. How to tackle this? Regards Usman
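
    For what it's worth: the vtable itself contains no names, only function addresses. The name-to-slot mapping has to come from the type library, where ITypeInfo::GetFuncDesc exposes each method's vtable byte offset (FUNCDESC::oVft), so its slot index is oVft / sizeof(void*), and ITypeInfo::GetNames gives the name for the same FUNCDESC. A minimal sketch of reading a slot (hypothetical helper, not production code):

      #include <windows.h>

      // A COM interface pointer is a pointer to a vtable: an array of raw
      // function pointers. Given a slot index derived from the type library
      // (slot = FUNCDESC::oVft / sizeof(void*)), return that method's address.
      void* GetVtableSlot(IUnknown* itf, size_t slot) {
          void** vtable = *reinterpret_cast<void***>(itf);
          return vtable[slot];
      }

    Matching addresses back to names is then just walking the type info and pairing each name with the address in its slot. Note that COM interfaces don't really overload names within one vtable; apparent overloads are distinguished precisely by their distinct slots.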

    Read the article

  • ChannelFactory doesn't have an address on the endpoint, why?

    - by Maxim
    When I create a new instance of a ChannelFactory:

    var factory = new ChannelFactory<IMyService>();

    and then create a new channel, I get an exception saying that the address of the endpoint is null. My configuration inside my web.config is as mentioned, and everything is as it is supposed to be (especially the address of the endpoint). If I create a new MyServiceClientBase, its channel factory loads all the configuration:

    var factoryWithClientBase = new MyServiceClientBase().ChannelFactory;
    Console.WriteLine(factoryWithClientBase.Endpoint.Address); // outputs the address configured in the web.config

    var factoryWithChannelFactory = new ChannelFactory<IMyService>();
    Console.WriteLine(factoryWithChannelFactory.Endpoint.Address); // outputs nothing (null)

    Why?
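
    For what it's worth, this matches the documented behavior: the parameterless ChannelFactory<T>() constructor does not read the endpoint from config; you opt in by passing an endpoint configuration name. A sketch ("*" selects the single matching endpoint; the literal endpoint name from web.config works too):

      // loads binding and address from the matching <endpoint> in web.config
      var factory = new ChannelFactory<IMyService>("*");
      Console.WriteLine(factory.Endpoint.Address);  // no longer null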

    Read the article

  • Getting the start address of the current process's heap?

    - by beta
    Hey, I am exploring the lower-level workings of the system and was wondering how malloc determines the start address of the heap. Is the heap at a constant offset, or is there a call of some sort to get the start address? Does the stack affect the start address of the heap? Thanks, Braden McDorman
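
    A sketch for poking at this on Linux/Unix (hedged: behavior is allocator-specific, glibc satisfies large requests with mmap instead of the break, and ASLR typically randomizes where the break starts, so it is neither a constant offset nor directly tied to the stack):

      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>

      int main(void) {
          /* sbrk(0) returns the current "program break": the end of the
             data/BSS segment, where the classic Unix heap begins. */
          void *before = sbrk(0);
          void *p = malloc(1);   /* may extend the break, or use mmap internally */
          void *after = sbrk(0);
          printf("break before malloc: %p\n", before);
          printf("allocated block at:  %p\n", p);
          printf("break after malloc:  %p\n", after);
          free(p);
          return 0;
      }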

    Read the article

  • Usernames are evil. How can I make Restful Authentication only require an email address and password?

    - by Koning WWWWWWWWWWWWWWWWWWWWWWW
    As the title says: how can I use the Restful Authentication plugin with Ruby on Rails? When I want to create a new user, it requires me to set the (wrongly named, confusing) login field (= username), an email address and a password. However, like Facebook does, I want to require the user to enter only an email address and password, not a username. People will also log in with this email address. Can anyone help me?
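
    A sketch of the usual approach (hedged: assumes the stock User model that the restful_authentication generator produces): drop the login validations and authenticate against email instead.

      class User < ActiveRecord::Base
        include Authentication
        include Authentication::ByPassword

        # keep the plugin's email rules, drop the login ones
        validates_presence_of   :email
        validates_uniqueness_of :email, :case_sensitive => false

        # was: find by login; authenticated? comes from the plugin
        def self.authenticate(email, password)
          u = find_by_email(email.to_s.downcase)
          u && u.authenticated?(password) ? u : nil
        end
      end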

    Read the article
