Search Results

Search found 22587 results on 904 pages for 'google translate'.


  • Plain Text email support: Is it still needed in 2011?

    - by murdoch
    For many years I have been building the emails sent out by my webapps as multi-part messages with a text part and an HTML part, so that users of plain-text-only email clients fall back to the text version. However, I have recently been developing a rather complex email that doesn't translate well to text. So, in 2011, is there really any need to provide a textual alternative? How many people out there are actually still only able to see plain-text emails?
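    For reference, a minimal sketch of building such a multipart/alternative message with Python's standard email library (subject and addresses are placeholders):

        from email.mime.multipart import MIMEMultipart
        from email.mime.text import MIMEText

        # multipart/alternative: clients render the richest part they support,
        # so the plain-text part comes first and the HTML part last
        msg = MIMEMultipart("alternative")
        msg["Subject"] = "Example notification"
        msg["From"] = "app@example.com"
        msg["To"] = "user@example.com"

        msg.attach(MIMEText("Hi,\nyour report is ready.", "plain"))
        msg.attach(MIMEText("<p>Hi,<br>your <b>report</b> is ready.</p>", "html"))

        raw = msg.as_string()  # hand this to smtplib or your framework's mailer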

    Read the article

  • Programming and Ubiquitous Language (DDD) in a non-English domain

    - by Sandor Drieënhuizen
    I know there are some questions already here that are closely related to this subject, but none of them take Ubiquitous Language as the starting point, so I think that justifies this question. For those who don't know: Ubiquitous Language is the concept of defining a (both spoken and written) language that is used equally by developers and domain experts to avoid inconsistencies and miscommunication due to translation problems and misunderstanding. You will see the same terminology show up in code, in conversations between any team members, in functional specs and whatnot.
    So, what I was wondering about is how to deal with Ubiquitous Language in non-English domains. Personally, I strongly favor writing programming code entirely in English, including comments but of course excluding constants and resources. However, in a non-English domain, I'm forced to decide either to: 1) write code reflecting the Ubiquitous Language in the natural language of the domain; 2) translate the Ubiquitous Language to English and stop communicating in the natural language of the domain; or 3) define a table that maps the Ubiquitous Language to English.
    Here are some of my thoughts based on these options. 1) I have a strong aversion against mixed-language code, that is, coding using type/member/variable names etc. that are non-English. Most programming languages 'breathe' English to a large extent and most of the technical literature, design pattern names etc. are in English as well. Therefore, in most cases there's just no way of writing code entirely in a non-English language, so you end up with mixed languages anyway. 2) This will force the domain experts to start thinking and talking in the English equivalent of the UL, something that will probably not come naturally to them and therefore hinders communication significantly. 3) In this case, the developers communicate with the domain experts in their native language, while the developers communicate with each other in English and, most importantly, they write code using the English translation of the UL.
    I'm sure I don't want to go for the first option and I think option 3 is much better than option 2. What do you think? Am I missing other options?
    UPDATE: Today, about a year later, having dealt with this issue on a daily basis, I have to say that option 3 has worked out pretty well for me. It wasn't as tedious as I initially feared, and translating in real time while talking to the client wasn't a problem either. I also found the following advantages to be true, based on my experience. Translating the UL makes you pay more attention to defining the UL and even the domain itself, especially when you don't know how to translate a term and you have to start looking through dictionaries etc. This has even caused me to reconsider domain modeling decisions a few times. It helps you make your knowledge of the English language more profound. And obviously, your code is much more pleasant to look at instead of being a mind-boggling obscenity.
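    To make the trade-off concrete, here is a small illustrative sketch contrasting option 1 with option 3 (the Dutch domain terms and class names are invented for the example):

        # Option 1: domain terms kept in the domain's natural language (here Dutch) -- mixed-language code
        class Hypotheekofferte:
            def bereken_maandlast(self, rentepercentage: float) -> float:
                ...

        # Option 3: the Ubiquitous Language translated to English via an agreed glossary
        # (hypotheek -> mortgage, offerte -> quote, maandlast -> monthly payment)
        class MortgageQuote:
            def calculate_monthly_payment(self, interest_rate: float) -> float:
                ...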

    Read the article

  • How do I handle having too many links on a webpage because of my menu

    - by RandomBen
    I am developing a website that has a drop-down menu at the top of it. The menu has around 100 links in it that are repeated on every page. Every page also has some number of links below the menu that may or may not be in the menu itself. My issue is that Google says they generally don't like pages with more than 100 links on them. Is there any way to change the links in the menu so that they no longer "count" towards my max of 100 links? It seems like there should be an easy way to do this, but there really doesn't seem to be. Adding rel=nofollow still counts towards the number of links on the page, at least according to Google, so what other options do I have? I looked into where the 100 comes from and I found that it used to be here: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35769#2 but that is no longer the case. I found a more definitive and frankly muddier answer here: http://www.seomoz.org/blog/questions-answers-with-googles-spam-guru from Matt Cutts from 2007. Long story short, in 2007 they still felt 100 links was a good number, but they stated you could go far beyond that. In fact, they said that pages with high PageRank could have 200 to 300. It did sound like having many links could reduce the PageRank of the page with all of the links, or possibly of all of the items linked to. Also, I know IIS7's SEO 1.0 toolkit suggests that pages should have no more than 250 links.

    Read the article

  • English-Focused Translation Bookmarklet for Your Browser

    - by Asian Angel
    Are you wanting a translation bookmarklet that just focuses on translating websites into English? Then you will want to take a look at the To English Bookmarklet. Get the Bookmarklet To install the To English Bookmarklet visit the webpage at Lifehacker (link below), grab the bookmarklet with your mouse, and drag it to your “Bookmarks Toolbar”. Now you are ready for one-click translation into English. To English in Action We decided to test our new bookmarklet on two different International Mozilla websites. The first one was in Swedish… One click and there it is. Notice that there is a “translation bar frame” that will still let you choose yet another language to translate the webpage into if you desire. Definitely a nice touch… Our second example was in Russian. Once again a single click and… The website is now in English. On this particular page the “central green graphic” was affected by the translation and the two sidebar buttons are “pre-made” but that is ok. You can read what you need to without any problems. Conclusion If you have been wanting a bookmarklet that just focuses on translating into English then this should be perfect for you. If you are looking for a bookmarklet that gives you access to a Google Translation Bar then be certain to see our article here. Links Add the To English Bookmarklet to Your Browser

    Read the article

  • GIS-based data visualization and maintenance tool

    - by Dave Jarvis
    Background: Looking to leverage an existing GIS system for exploring organizational data. Architecture: The following figure represents a high-level overview of the system's desired features. The most basic usage would be as follows: the user visits a web site; the system presents a map (having regions, cities, and buildings); the user drills down on the map to a particular building; the system provides a basic CRUD interface; the user can view and modify information about personnel (e.g., their assigned teams), equipment (e.g., network appliances), applications, and the building itself (e.g., contact and phone numbers). Ideally, all the components should be open-source (or otherwise free). Problem: This must be a small project that needs a quick (but functional) prototype, mostly to confirm whether or not such a system would be useful in the long term. Questions: What software components would you use to quickly develop a working prototype? What open-source solutions already exist, if any? Ideas: Here is what I am thinking: PostGIS to define the regions, cities, and sites; Google Maps to display an interactive, clickable map; GeoJSON as the protocol between PostGIS and Google Maps; Seam for the CRUD interface; plus custom development. The custom development would entail: installation and configuration (SSH for remote logins, Subversion or git, PostgreSQL, PostGIS, Java, Tomcat, Seam, JasperReports); entering GIS information into PostGIS; aggregating data sources into the PostgreSQL database; developing the starting page for the map interface; developing the clickable Google Maps interface; developing summary reports; and developing the CRUD interface using Seam for data maintenance. Surely something like this already exists? Thank you!
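    For the PostGIS-to-Google-Maps piece, a minimal Python sketch of producing GeoJSON (the connection string, table and column names are made up for illustration) might look like this:

        import json
        import psycopg2

        # Hypothetical table "sites" with columns id, name and a PostGIS geometry column "geom"
        conn = psycopg2.connect("dbname=gis user=gis")
        cur = conn.cursor()
        cur.execute("""
            SELECT id, name, ST_AsGeoJSON(geom)
            FROM sites
            WHERE city = %s
        """, ("Ottawa",))

        features = []
        for site_id, name, geom_json in cur.fetchall():
            features.append({
                "type": "Feature",
                "geometry": json.loads(geom_json),
                "properties": {"id": site_id, "name": name},
            })

        # A FeatureCollection is what the Google Maps (or Leaflet/OpenLayers) side would consume
        feature_collection = {"type": "FeatureCollection", "features": features}
        print(json.dumps(feature_collection))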

    Read the article

  • Fix Nautilus URIs in a Python script

    - by Pablo
    I have a very basic Python script I wrote mostly for learning purposes. It opens a terminal in the current folder. However, I can't get it to work in folders with accented characters in the path (e.g. /home/pablo/Vídeos or /home/pablo/Área de Trabalho), because Nautilus hands the script percent-encoded URIs, where the accented characters show up as %xx escapes. Is there a way to convert these URIs to normal paths without having to translate every possible accented value by hand? Thanks in advance!
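    A minimal sketch of the usual approach, assuming a Python 3 environment and gnome-terminal (both assumptions), is to percent-decode the URI Nautilus exposes and strip the file:// scheme:

        import os
        import subprocess
        import urllib.parse

        # Nautilus exposes the current folder as a file:// URI, e.g.
        # file:///home/pablo/V%C3%ADdeos -- the accents are just percent-encoding
        uri = os.environ.get("NAUTILUS_SCRIPT_CURRENT_URI", "")
        path = urllib.parse.unquote(urllib.parse.urlparse(uri).path)

        # open a terminal in that folder (gnome-terminal is an assumption)
        subprocess.Popen(["gnome-terminal", "--working-directory=" + path])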

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx
    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata “To Roman Numerals”, and whether ITDD wasn't a violation of TDD's principle of leaving out “advanced topics like mocks”. I'd like to respond to his questions with this article. There's more to say than fits into a comment.
    Mocks and TDD. I don't see how TDD avoids or is opposed to mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for “expensive” resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced”, or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.
    ITDD for “To Roman Numerals”. gustav asked for the kata “To Roman Numerals”. I won't explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.
    1. Analyse. A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals”, as written in this “requirement document”, is not enough to start software development. Coding should only begin if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a Wikipedia article. They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.
    2. Solve. The usual TDD demonstration skips a solution-finding phase. Like the interface exploration it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you'd better have a plan for what to code. This does not mean you have to do “Big Design Up-Front”. It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution. Here's my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base is required. But what about IV, which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here's the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}. Since I take IV, IX etc. as “digits”, translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-phase process: find all the factors, translate the factors found, and compile the roman representation. Translation is just a look-up. Finding, though, needs some calculation: find the highest remaining factor fitting in the value, remember it and subtract it from the value, then repeat with the remaining value and the remaining factors. Please note: this is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: the translation, the compilation, and finding the factors. Testing the translation primarily means checking whether the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process.
    3. Implement. First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with “TDD lore”, I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes. That's a white box test. But as you'll see it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test: it's as simple as possible. Just how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and for doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it into two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests is what I find more important. Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished. At least functionality-wise. Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I'd rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class. And this is the final production code. Test coverage as reported by NCrunch is 100%.
    Reflexion. Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So-called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution. That's the leaves of the functional decomposition tree. Where things became fuzzy because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally, private methods could stay private. I did not need to make them internal or public just to be able to test them. And I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable – and not left to a refactoring step at the end, which is often skipped for different reasons.
    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
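    For readers who want the conversion approach described above in runnable form, here is a minimal Python sketch of the factor/translate/compile idea (the article's own implementation is in .NET; this sketch is only an illustration, not the author's code):

        FACTORS = [
            (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
            (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
            (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
        ]

        def find_factors(arabic):
            # walk from the largest "digit" down, subtracting each factor that still fits
            factors = []
            for value, _ in FACTORS:
                while arabic >= value:
                    factors.append(value)
                    arabic -= value
            return factors

        def translate(factor):
            # translation is just a look-up in the factor/digit map
            return dict(FACTORS)[factor]

        def to_roman(arabic):
            # find the factors, translate each one, compile the roman representation
            return "".join(translate(f) for f in find_factors(arabic))

        assert to_roman(1954) == "MCMLIV"
        assert to_roman(2000) == "MM"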

    Read the article

  • Deterministic/Consistent Unique Masking

    - by Dinesh Rajasekharan-Oracle
    One of the key requirements while masking data in large databases or multi-database environments is to consistently mask some columns, i.e. for a given input the output should always be the same. At the same time the masked output should not be predictable. Deterministic masking also eliminates the need to spend an enormous amount of time identifying data relationships, i.e. parent and child relationships among columns defined in the application tables. In this blog post I will explain different ways of consistently masking data across databases using Oracle Data Masking and Subsetting. Readers of this post should have minimal knowledge of Oracle Enterprise Manager 12c, Application Data Modeling, and Data Masking concepts. For more information on these concepts, please refer to the Oracle Data Masking and Subsetting documentation. Oracle Data Masking and Subsetting 12c provides four methods with which users can consistently yet irreversibly mask their inputs: 1. Substitute, 2. SQL Expression, 3. Encrypt, 4. User Defined Function.
    SUBSTITUTE. The substitute masking format replaces the original value with a value from a pre-created database table. As the method uses a hash-based algorithm in the back end, the mappings are consistent. For example, consider DEPARTMENT_ID in the EMPLOYEES table being replaced with FAKE_DEPARTMENT_ID from FAKE_TABLE. The substitute masking transformation ensures that all occurrences of DEPARTMENT_ID, say '101', will be replaced with '502', provided the same substitution table and column are used, i.e. FAKE_TABLE.FAKE_DEPARTMENT_ID. The following screenshot shows the usage of the Substitute masking format within a masking definition. Note that the uniqueness of the masked value depends on the number of unique values in the substitution column, i.e. if the original table contains 50000 unique values, then for the masked output to be unique and deterministic the substitution column should also contain 50000 unique values, without which only consistency is maintained but not uniqueness.
    SQL EXPRESSION. SQL Expression replaces an existing value with the output of a specified SQL expression. For example, while masking an EMPLOYEES table, if the EMAIL_ID of an employee has to be in the format FIRST_NAME.LAST_NAME@COMPANY.COM, where FIRST_NAME and LAST_NAME are actual column names of the EMPLOYEES table, then the corresponding SQL expression will look like %FIRST_NAME%||'.'||%LAST_NAME%||'@COMPANY.COM'. The advantage of this technique is that if you are masking FIRST_NAME and LAST_NAME of the EMPLOYEES table, the corresponding EMAIL_ID will be replaced accordingly by the masking scripts. One of the interesting aspects of SQL Expressions is that you can use sub-expressions, which means that you can write a nested SQL statement and use it as a SQL Expression to address complex masking business use cases. A SQL Expression can also be used to consistently replace a value with a hashed value using Oracle's PL/SQL function ORA_HASH. The following SQL Expression will help in the previous example for replacing the DEPARTMENT_IDs with a hashed number: ORA_HASH(%DEPARTMENT_ID%, 1000). The following screenshot shows the usage of the SQL Expression masking format within the masking definition. ORA_HASH takes three arguments: 1. Expression, which can be of any data type except LONG, LOB and User Defined Type [nested table types are allowed]; in the above example I used the original value as the expression. 2. Number of hash buckets, which can be a number between 0 and 4294967295; the default value is 4294967295. You can also correlate the number of hash buckets to a range of numbers; in the example above the bucket value is specified as 1000, so the end result will be a hashed number between 0 and 1000. 3. Seed, which can be any number and which decides the consistency, i.e. for a given seed value the output will always be the same; the default seed is 0. In the above SQL Expression a seed is not specified, so it defaults to 0. If you have to use a non-default seed then the function will look like ORA_HASH(%DEPARTMENT_ID%, 1000, 1234). The uniqueness depends on the input and the number of hash buckets used. However, as ORA_HASH uses a 32-bit algorithm, considering the birthday paradox or pigeonhole principle there is a 0.5 probability of collision after 2^32-1 unique values.
    ENCRYPT. The Encrypt masking format uses a blend of the 3DES encryption algorithm, hashing, and a regular expression to produce a deterministic and unique masked output. The format of the masked output corresponds to the specified regular expression. As this technique uses a key [string] to encrypt the data, the same string can be used to decrypt the data. The key also acts as a seed to maintain consistent outputs for a given input. The following screenshot shows the usage of the Encrypt masking format within the masking definition. Regular expressions may look complex to first-time users, but you will soon realize that it's a simple language. There are many resources on the internet, in the Oracle documentation, the Oracle Learning Library, and My Oracle Support on writing regular expressions; out of all of them, the following My Oracle Support document helped me get started with regular expressions: Oracle SQL Support for Regular Expressions [Video] (Doc ID 1369668.1).
    USER DEFINED FUNCTION [UDF]. A User Defined Function or UDF provides flexibility for users to code their own masking logic in PL/SQL, which can be called from a masking definition. The standard format of a UDF in Oracle Data Masking and Subsetting is: Function udf_func (rowid varchar2, column_name varchar2, original_value varchar2) returns varchar2; where rowid is the row identifier of the column that needs to be masked, column_name is the name of the column that needs to be masked, and original_value is the column value that needs to be masked. You can achieve deterministic masking by using Oracle's built-in hash functions such as ORA_HASH, DBMS_CRYPTO.MD4, DBMS_CRYPTO.MD5, and DBMS_UTILITY.GET_HASH_VALUE. Please refer to the Oracle Database documentation for more information on the Oracle hash functions. For example, the following masking UDF generates deterministic unique hexadecimal values for a given string input (note that the hashed input is the orig_val parameter):

        CREATE OR REPLACE FUNCTION RD_DUX (rid varchar2, column_name varchar2, orig_val VARCHAR2)
        RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE
        IS
          stext varchar2(26);
          no_of_characters number(2);
        BEGIN
          no_of_characters := 6;
          stext := substr(RAWTOHEX(DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(orig_val), 1)), 0, no_of_characters);
          RETURN stext;
        END;

    The uniqueness depends on the input, the length of the string, and the number of bits used by the hash algorithm. In the above function the MD4 hash is used [denoted by argument 1 in the DBMS_CRYPTO.HASH function], which is a 128-bit algorithm producing 2^128-1 unique hashed values; however, this is limited by the truncated output length of 6 hexadecimal characters, so only 16^6 unique values will be generated. Also do not forget about the birthday paradox/pigeonhole principle mentioned earlier in this post. Another example is to consistently replace characters or numbers, preserving the length and special characters, as shown below:

        CREATE OR REPLACE FUNCTION RD_DUS (rid varchar2, column_name varchar2, orig_val VARCHAR2)
        RETURN VARCHAR2 DETERMINISTIC PARALLEL_ENABLE
        IS
          stext varchar2(26);
        BEGIN
          DBMS_RANDOM.SEED(orig_val);
          stext := TRANSLATE(orig_val, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', DBMS_RANDOM.STRING('U', 26));
          stext := TRANSLATE(stext, 'abcdefghijklmnopqrstuvwxyz', DBMS_RANDOM.STRING('L', 26));
          stext := TRANSLATE(stext, '0123456789', to_char(DBMS_RANDOM.VALUE(1, 9)));
          stext := REPLACE(stext, '.', '0');
          RETURN stext;
        END;

    The following screenshot shows the usage of a UDF within a masking definition. To summarize, Oracle Data Masking and Subsetting helps you consistently mask data across databases using one or all of the methods described in this post. It saves the hassle of identifying the parent-child relationships defined in the application tables. Happy masking!

    Read the article

  • RANT: SkyDrive & Mesh

    - by Sahil Malik
    SharePoint 2010 Training: more information. Fellow citizens of the tech world, you're watching a good Samaritan die. Unfortunately this is not the first time, and it won't be the last. We have seen this before; sadly we will see it again. The IT industry is a few sharks – Oracle, Apple, Google, and yes, Microsoft – and numerous small fishes around them. Ten years ago, you saw some innovative, smart engineers create instant-messaging programs. There was rapid innovation and growth in that field even though the internet itself was quite nascent. Remember ICQ? Well, then along came the sharks! They offered you free versions of IM programs that in the short run were actually superior: Yahoo Messenger, MSN, AIM and then later on Google. Innovation in IM was pretty much at a standstill until a new contender like Skype decided to marry IM with telephony. This prompted Google to do the same. Of course, Skype was then purchased by Microsoft. The situation still stands; let's take the example of Microsoft, it offers, Read full article ....

    Read the article

  • Informing Googlebot for deprecated pages

    - by trante
    I publish timetables on my website. For example, last year I published a "Number 2 bus Summer 2013 timetable" page. It had a pretty good ranking in Google SERPs for number 2 bus timetable. But this year I added a new page with the name "Number 2 bus Summer 2014 timetable". When users search for number 2 bus timetable in Google, they find the 2013 timetable on the first page of the SERPs. But I want them to find the 2014 timetable. They can reach the 2014 page with the keywords number 2 bus timetable 2014, but most users don't write the year. So what's the proper way to tell Googlebot that the 2013 page is deprecated and that the newer version is the 2014 page? I created a link from the 2013 page to the 2014 page and added a deprecation alert for visitors, but I still see the 2013 timetable on the first page of Google SERPs. Of course it is possible to 301 redirect the 2013 page to the 2014 page, but I want users to be able to reach the old pages to compare the differences between years. (As you would guess, I have many pages like this.) Edit: Why don't I put the timetables on the same page and show different years' timetables with sorting? Because my old pages have good PageRank scores and SERP positions, and removing these old pages would lose that.

    Read the article

  • Where does Windows 8 put the exe of the default browser for Modern UI?

    - by avirk
    I was trying some hacks with Windows 8 and I found something that is really blowing my mind. When I set the default browser to IE, its icon becomes the Modern UI one and I can't see the option "Open file location" at the bottom when I select it by right-click. But if I don't set it as default, it gets a desktop-version icon and shows the option when I select it. The same is true for Google Chrome when I checked it. IE icon when it is not set to default: I can see the option "Open file location". Google Chrome icon when it is set to default. The desktop version of Google Chrome when it is set to default. So my question is: where does the Modern UI keep the exe of the default browser? And why does the default browser get a Modern UI icon while a non-default browser gets a desktop-version icon?

    Read the article

  • Can not access Internet (DNS names do not resolve) after update today

    - by Aras
    I have been using Precise for a few weeks now for work with no problems. Today, I am not able to access any website using either wired or wireless connections. I installed the updates today, which included nautilus, xserver, and a new kernel (3.2.0-24). After restarting I was no longer able to browse the Internet using Firefox or Chrome. Trying to ping Google in a terminal gives ping: unknown host google.ca. I have tried: connecting to wireless or wired networks (both working on other machines); restarting the machine and booting with the previous kernel; manually configuring OpenDNS on my wired connection; restarting the network, the laptop, and the wireless card. None of this has worked so far. I am not sure where to go next. Please let me know the cause of the issue or help me troubleshoot it. Note that the laptop does receive an IP address, and it can ping the IP address of google.ca (74.125.127.94) but not the domain name, or any domain name for that matter. This system was upgraded from 11.10 to 12.04 more than two weeks ago.

    Read the article

  • Page Titles - Including gender of a fashion product in page titles?

    - by Cedric
    I need a bit of help deciding whether it is worth including gender in page titles. In Webmaster Tools I looked at our search queries that include "women", and they account for 9% of our total search queries for the site. I am wondering if that is the right way to assess the benefit of including "women" or "men" in page titles, since it only looks at queries where results already point to us. Is there another tool where I can check the actual queries that may not include us in the search results? Like Google Insights maybe? http://www.google.com/insights/search/#q=shoes%2Cshoes%20for%20women&cmpt=q So it looks like 1.1% of searches for "shoes" are also "shoes for women"; is that correct? As a direct comparison, doing the same analysis on our own search queries, I get 1.8% when comparing "shoes for women" to "shoes". Implementing this automation would probably affect 99% of our site if not more, splitting it into two segments (one portion of page titles including "women" and the other including "men"). Will doing so create a massively repetitive keyword throughout the site, hurting SEO? http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35624 (see "Avoid repeated or boilerplate titles.")

    Read the article

  • sendmail is using return-path instead of from address

    - by magd1
    I have a customer that is complaining about emails marked as spam. I'm looking at the header. It shows the correct From: [email protected] However, it doesn't like the return-path:

        Return-Path: <[email protected]>
        Received-SPF: neutral (google.com: x.x.x.x is neither permitted nor denied by domain of [email protected]) client-ip=x.x.x.x;
        Authentication-Results: mx.google.com; spf=neutral (google.com: x.x.x.x is neither permitted nor denied by domain of [email protected]) [email protected]

    How do I configure sendmail to use the From address for the Return-Path?
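    For background, the Return-Path is taken from the SMTP envelope sender rather than the From: header, and SPF is evaluated against it. As an illustration of the difference (addresses are placeholders, and this is a generic example rather than a sendmail configuration), Python's smtplib makes the two values explicit:

        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "support@customer-domain.example"   # header the recipient sees
        msg["To"] = "user@example.com"
        msg["Subject"] = "Example"
        msg.set_content("Hello")

        with smtplib.SMTP("localhost") as smtp:
            # from_addr is the envelope sender; the receiving server records it as
            # Return-Path and checks SPF against its domain, so it should be a domain
            # whose SPF record covers the sending host
            smtp.send_message(msg,
                              from_addr="support@customer-domain.example",
                              to_addrs=["user@example.com"])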

    Read the article

  • Good translation (manually) software

    - by S.Hoekstra
    I'm looking for good translation software. I do not mean something that does all the translation automatically for me, but rather something that aids me in translating large pieces of text; since it has to be a perfect translation, I can't leave it to computers alone. Something like http://translate.google.com/toolkit but with more options/functions would be great. Preferably freeware, of course (but paid is not a problem). At the moment I use the Google toolkit since it's adequate for now, but I really need something more advanced. Looking for such software on Google etc. is really hard because of the confusion with translation services and things like Babelfish. Do you know of any software like this? And maybe you'd like to share your thoughts/experiences?

    Read the article

  • How should I setup separate mx records for a subdomain?

    - by Chris Adams
    Let's say I have a domain that I run a web app on, for example cranketywidgets.com, and I'm using Google Apps to handle email for the people who work on that domain, i.e. support@cranketywidgets.com, [email protected], [email protected] and so on. Google's own mail services aren't always the best for sending automated reminder emails, comment notifications and so on, so the current solution I plan to pursue is to create a separate subdomain called mailer.cranketywidgets.com, run a mail server off it, and create a few accounts specifically for sending these kinds of emails. What should the MX records and A records look like here? I'm somewhat confused by the fact that MX records can be names, but that they must eventually resolve to an A record. What should the records look like here? cranketywidgets.com - A record to the actual server, like 10.24.233.214; cranketywidgets.com - MX records for Google's email apps; mailer.cranketywidgets.com - MX name pointing to the server's IP address. Would greatly appreciate some help on this - the answer seems like it'll be obvious, but email spam is a difficult problem to solve.
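    As a rough sketch of what that could look like in BIND zone-file notation (the IP addresses and the MX priorities are placeholders, and Google Apps documents its own full set of MX hosts):

        cranketywidgets.com.          IN A     203.0.113.10          ; web app server
        cranketywidgets.com.          IN MX 1  aspmx.l.google.com.   ; Google Apps handles @cranketywidgets.com mail
        mailer.cranketywidgets.com.   IN A     203.0.113.20          ; box that sends the automated mail
        mailer.cranketywidgets.com.   IN MX 10 mailer.cranketywidgets.com.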

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites: http://blog.stackoverflow.com http://www.codinghorror.com (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the website from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using this: My IP address was quickly banned from Google for using it I get lots of 500 and 503 errors and "waiting 5 minutes…" Ultimately, I can recover the text content faster by hand I've had much better luck by using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)

    Read the article

  • GLM Velocity Vectors - Basic Maths to Simulate Steering

    - by Reanimation
    UPDATE - Code updated below but I still need help adjusting my math. I have a cube rendered on the screen which represents a car (or similar). Using projection/model matrices and GLM I am able to move it back and forth along the axes and rotate it left or right. I'm having trouble with the vector mathematics to make the cube move forwards no matter what its current orientation is (i.e. if it's rotated right 30 degrees, then when it moves forwards it should travel along that 30-degree direction). I hope I've explained that correctly. This is what I've managed to do so far in terms of using GLM to move the cube:

        glm::vec3 vel; //velocity vector

        void renderMovingCube(){
            glUseProgram(movingCubeShader.handle());
            GLuint matrixLoc4MovingCube = glGetUniformLocation(movingCubeShader.handle(), "ProjectionMatrix");
            glUniformMatrix4fv(matrixLoc4MovingCube, 1, GL_FALSE, &ProjectionMatrix[0][0]);

            glm::mat4 viewMatrixMovingCube;
            viewMatrixMovingCube = glm::lookAt(camOrigin, camLookingAt, camNormalXYZ);

            vel.x = cos(rotX);
            vel.y = sin(rotX);
            vel *= moveCube; //move cube
            ModelViewMatrix = glm::translate(viewMatrixMovingCube, globalPos*vel);

            //bring ground and cube to bottom of screen
            ModelViewMatrix = glm::translate(ModelViewMatrix, glm::vec3(0,-48,0));
            ModelViewMatrix = glm::rotate(ModelViewMatrix, rotX, glm::vec3(0,1,0)); //manually turn

            glUniformMatrix4fv(glGetUniformLocation(movingCubeShader.handle(), "ModelViewMatrix"), 1, GL_FALSE, &ModelViewMatrix[0][0]); //pass matrix to shader
            movingCube.render(); //draw
            glUseProgram(0);
        }

    keyboard input:

        void keyboard() {
            char BACKWARD = keys['S']; char FORWARD = keys['W'];
            char ROT_LEFT = keys['A']; char ROT_RIGHT = keys['D'];

            if (FORWARD) //W - move forwards
            {
                globalPos += vel;
                //globalPos.z -= moveCube;
                BACKWARD = false;
            }
            if (BACKWARD) //S - move backwards
            {
                globalPos.z += moveCube;
                FORWARD = false;
            }
            if (ROT_LEFT) //A - turn left
            {
                rotX += 0.01f;
                ROT_LEFT = false;
            }
            if (ROT_RIGHT) //D - turn right
            {
                rotX -= 0.01f;
                ROT_RIGHT = false;
            }
        }

    Where am I going wrong with my vectors? I would like to change the direction of the cube (which it does) but then move forwards in that direction.
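    For reference, the direction-vector math being described (turning the heading angle into a unit forward vector and scaling it by speed and the elapsed frame time) can be sketched like this, shown in Python purely for brevity; the same expressions carry over to glm::vec3:

        import math

        def step_forward(pos, heading_radians, speed, dt):
            # unit forward vector on the XZ plane for the current heading
            direction = (math.sin(heading_radians), 0.0, math.cos(heading_radians))
            # scale by speed and the elapsed frame time so one key press or one frame
            # does not jump a whole unit of distance
            return (pos[0] + direction[0] * speed * dt,
                    pos[1] + direction[1] * speed * dt,
                    pos[2] + direction[2] * speed * dt)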

    Read the article

  • Icinga notifications are being marked as spam when sent to my mailbox

    - by user784637
    I'm using Gmail and my domain is foo.com. About half the notifications from my Icinga server, [email protected], go to the spam folder for [email protected]:

        Received-SPF: fail (google.com: domain of [email protected] does not designate <ip6> as permitted sender) client-ip=<ip6>;
        Authentication-Results: mx.google.com; spf=hardfail (google.com: domain of [email protected] does not designate <ip6> as permitted sender) [email protected]

    Is my current SPF record set up to allow my Icinga server with the IPs <ip4> and <ip6> to send email from the domain foo.com?

        ;; ANSWER SECTION:
        foo.com. 300 IN TXT "v=spf1 ip4:<ip4> ip6:<ip6> -all"

    Read the article

  • Why do some people hate Dart? [closed]

    - by Hassan
    First, I'd like to note that this question is not intended to compare two languages or technologies, but is only asking about criticisms aimed at a language. I've always thought it a good idea to somehow get rid of Javascript. It works, but it's just so messy. I think many will agree with me there. And that's how I interpreted Google's release of Dart. It seems to me like a very good alternative to Javascript. Now, it looks like some are not very happy that Google has released this new language. Take a look at this Wikipedia page to see what I'm talking about. If you don't feel like reading it, I'll tell you now that some seem to think that Dart is similar to Microsoft's VBScript, in that it only works on Microsoft's browsers. This goes against the web's openness. But it's my understanding that Dart can be compiled to Javascript, which will allow it to be run on any modern browser (as the Wikipedia article also states). So my question is: are these criticisms valid? Is there a real fear that Google is trying to control the web's front-end to be more compatible with its browser?

    Read the article

  • OpenGL and gluUnProject, 3d object following mouse

    - by Robert
    I have a 3D object and I want it to "follow" my mouse position, so I use the gluUnProject function to convert screen coordinates to 3D world coordinates, and I translate the object to the new coordinates. It's working, but I have a problem: the object follows my mouse, but it moves extremely fast. When I move my mouse just a little bit (something like 2 pixels), it moves a huge distance in the 3D world. I want something like this: http://www.youtube.com/watch?v=90zS8SVUAIY (red circle following the mouse). Thanks for your help.
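    For context, a common way to use gluUnProject for this (sketched here with PyOpenGL purely as an illustration) is to unproject the cursor at the near and far planes and intersect the resulting ray with the plane the object sits on, instead of translating by a single unprojected value directly:

        from OpenGL.GL import (glGetDoublev, glGetIntegerv,
                               GL_MODELVIEW_MATRIX, GL_PROJECTION_MATRIX, GL_VIEWPORT)
        from OpenGL.GLU import gluUnProject

        def mouse_ray(win_x, win_y):
            model = glGetDoublev(GL_MODELVIEW_MATRIX)
            proj = glGetDoublev(GL_PROJECTION_MATRIX)
            view = glGetIntegerv(GL_VIEWPORT)
            win_y = view[3] - win_y                      # OpenGL's window origin is bottom-left
            near = gluUnProject(win_x, win_y, 0.0, model, proj, view)
            far = gluUnProject(win_x, win_y, 1.0, model, proj, view)
            return near, far

        def point_on_ground(near, far, ground_y=0.0):
            # intersect the near->far ray with the y = ground_y plane and place the object there
            t = (ground_y - near[1]) / (far[1] - near[1])
            return tuple(n + t * (f - n) for n, f in zip(near, far))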

    Read the article

  • Ubuntu 12.04 connected to wireless network but internet not working

    - by A.J.
    I can connect to my house's wireless network just fine, but when I'm connected I can't browse the web. Firefox starts connecting to a site and then just poops out. This doesn't happen on my roommates' computers (running Windows) or on our 3DSes, so I know it's just my laptop. I already tried sudo dhclient -r sudo dhclient sudo ifconfig eth0 down sudo ifconfig eth0 up Results of a few commands I was asked to run in comments: ping -c 2 4.2.2.2 PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data. ^C --- 4.2.2.2 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1007ms ping -c 2 google.com PING google.com (173.194.33.38) 56(84) bytes of data. --- google.com ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1006ms nm-tool NetworkManager Tool State: connected (global) - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: atl1c State: unavailable Default: no HW Address: 88:AE:1D:6B:4E:E7 Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: off - Device: wlan0 [JUSTICE] ----------------------------------------------------- Type: 802.11 WiFi Driver: ath9k State: connected Default: yes HW Address: 1C:65:9D:65:C6:31 Capabilities: Speed: 1 Mb/s Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points (* = current AP) HOME-9B18: Infra, 00:26:F3:53:9B:18, Freq 2412 MHz, Rate 54 Mb/s, Strength 34 WPA WPA2 cougdad48 Network: Infra, 60:33:4B:E4:C4:5D, Freq 2437 MHz, Rate 54 Mb/s, Strength 22 WPA2 cougdad48 Guest Network: Infra, 66:33:4B:E4:C4:5D, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 WPA2 belkin.ade: Infra, 94:44:52:FF:8A:DE, Freq 2457 MHz, Rate 54 Mb/s, Strength 20 WPA WPA2 *JUSTICE: Infra, 00:24:01:7B:9F:7E, Freq 2462 MHz, Rate 54 Mb/s, Strength 88 WEP CenturyLink: Infra, B2:B2:DC:8E:E2:58, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA WPA2 IPv4 Settings: Address: 192.168.0.11 Prefix: 24 (255.255.255.0) Gateway: 192.168.0.1 DNS: 192.168.0.1 (JUSTICE is my home's network.) ping -c 2 198.168.0.1 PING 198.168.0.1 (198.168.0.1) 56(84) bytes of data. --- 198.168.0.1 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 1007ms

    Read the article

  • How to change HTTP_REFERER using perl?

    - by zuqqhi2
    I tried to change the log format and set HTTP_REFERER using Perl to change the browser's referrer, as below:

        [pattern1] Log Format: %{HTTP_REFERER}o    perl: $ENV{'HTTP_REFERER'} = "http://www.google.com";
        [pattern2] Log Format: %{X-RT-REF}o        perl: addHeader('X-RT-REF' => "http://www.google.com");
        [pattern3] Log Format: %{HTTP_REFERER}e    perl: $ENV{'HTTP_REFERER'} = "http://www.google.com";

    But they didn't work. How can I do it? If you have any ideas, please share them. Note that I just want to do this as a countermeasure against illegal access in my intranet tool.

    Read the article

  • Legal issues regarding embedding a toolbar into a browser [closed]

    - by OmarOthman
    We are in the process of developing a software that provides service to internet users and we would like to ask about the legal liabilities of some issues. Of course, everything is to be done with the consent of the user of our software but our concern is about third party tools and services that may be invoked/used by our product. In particular, these are the concerns: (1) Embedding a toolbar to an existing browser. This screenshot is an example, where the words in the highlighted toolbar are passed to www.google.com for searching, and the contents of the window are the results of the search. I want to know if any consent should be obtained before such a toolbar can be embedded in a web browser, whether there are any legal requirements by the web browser; whether different web browsers have different requirements (at least for Internet Explorer, Firefox, Chrome, Opera and Safari). (2) Invoking a free website from that toolbar (like Google’s search page). The screenshot above demonstrates such an existing toolbar. (3) Full ownership and unrestricted access to the data entered to this toolbar. In the screenshot above, I want to take the words (translation english to spanish) and own them, i.e. storing them in my database and do some processing on them. (4) Ability to track the pages entered by the user starting from that free website. In the screenshot above, you can notice that the user opted only for the third result, whose URL is translate.google.com. I want to have access to this and all URLs clicked from this page for some processing as well. This is a commercial application, so I need a very concrete, precise and reference-supported answer.

    Read the article
