Search Results

Search found 11166 results on 447 pages for 'justin standard'.


  • Spring Roo Database Reverse Engineer with Oracle

    - by kerry
    So you are trying to reverse engineer an Oracle database with Roo? Unfortunately, due to licensing restrictions on the Oracle JDBC drivers, this is a little difficult. There are a few blog posts and forum threads that address the problem, but I figured I would post what worked for me here.

    First, you need to download the appropriate Oracle drivers from Oracle. The required login, stringent password requirements, nosy registration form, and general system instability made this a pretty painful step for me. I'd also like to say that companies that have password requirements that don't allow symbols (or any other non-standard requirement) have a special place in my heart. Having to recover my password every time I go to your site virtually guarantees I will only go there when I absolutely have to (not often). Anyways, once you have it downloaded you need to install it with Maven:

      mvn install:install-file -Dfile=~/Downloads/ojdbc6.jar -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar -DgeneratePom=true

    Here comes the fun part. You need to create an OSGi wrapper for the driver to install it in Roo; otherwise, Roo cannot see the driver. Create a new folder and put in it the contents of the Oracle Roo addon POM gist I created. You may want to change some of the artifact ids and dependencies for your particular situation. Now build it with Maven:

      mvn package

    Now open a Roo shell and execute the following command:

      osgi install --url file:///Users/me/my-osgi-project/target/the-jar-it-built.jar

    Then run (in Roo):

      jpa setup --provider HIBERNATE --database ORACLE
      dependency remove --groupId com.oracle --artifactId ojdbc14 --version 10.2.0.2
      dependency add --groupId com.oracle --artifactId ojdbc6 --version 11.2.0.3
      database properties set --key database.driverClassName --value oracle.jdbc.OracleDriver
      database properties set --key database.url --value jdbc:oracle:thin:@%YOUR_CONNECTION_INFO%
      database properties set --key database.username --value %YOUR_USERNAME%
      database properties set --key database.password --value %YOUR_PASSWORD%
      database reverse engineer --schema %YOUR_SCHEMA% --package ~.domain

    If you get package loading exceptions when running the reverse engineer command, you can uninstall the OSGi bundle, set the package to optional in the IncludedPackages tag of the OSGi POM (javax.some.package.*;resolution:=optional), rebuild, then reinstall in Roo.

    Read the article

  • CLR 4.0: Corrupted State Exceptions

    - by Scott Dorman
    Corrupted state exceptions are designed to help you have fewer bugs in your code by making it harder to make common mistakes around exception handling. A very common pattern is code like this:

      public void FileSave(String name)
      {
          try
          {
              FileStream fs = new FileStream(name, FileMode.Create);
          }
          catch (Exception e)
          {
              MessageBox.Show("File Open Error");
              throw new Exception("File Open Error", e);
          }
      }

    The standard recommendation is not to catch System.Exception but rather to catch the more specific exceptions (in this case, IOException). While this is a somewhat contrived example, what would happen if the exception were really an AccessViolationException or some other exception indicating that the process state has been corrupted? What you really want to do is get out fast, before persistent data is corrupted or more work is lost. To help solve this problem and minimize the chance that you will catch exceptions like this, CLR 4.0 introduces corrupted state exceptions, which cannot be caught by normal catch statements.

    There are still places where you do want to catch these types of exceptions, particularly in your application's "main" function or when you are loading add-ins. There are also rare circumstances when you know code that throws an exception isn't dangerous, such as when calling native code. In order to support these scenarios, a new HandleProcessCorruptedStateExceptions attribute has been added. This attribute is applied to the function that catches these exceptions. There is also a process-wide compatibility switch named legacyCorruptedStateExceptionsPolicy which, when set to true, will cause the code to operate under the older exception handling behavior.

    Technorati Tags: CLR 4.0, .NET 4.0, Exception Handling, Corrupted State Exceptions
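
    As an illustration, here is a minimal sketch of how the attribute might be applied. The attribute itself lives in System.Runtime.ExceptionServices, and the legacyCorruptedStateExceptionsPolicy element goes under <runtime> in app.config; the hosting method and the logging here are hypothetical:

      using System;
      using System.Runtime.ExceptionServices;
      using System.Security;

      static class AddInHost
      {
          // Opt this one method back into catching corrupted state
          // exceptions under CLR 4.0.
          [HandleProcessCorruptedStateExceptions]
          [SecurityCritical]
          public static void RunAddIn(Action addIn)
          {
              try
              {
                  addIn();
              }
              catch (AccessViolationException e)
              {
                  // Reached only because of the attribute above:
                  // log what happened, then get out fast.
                  Console.Error.WriteLine("Add-in corrupted process state: " + e.Message);
                  throw;
              }
          }

          static void Main()
          {
              // A well-behaved add-in; a buggy native one might AV instead.
              RunAddIn(() => Console.WriteLine("add-in ran"));
          }
      }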

    Read the article

  • Is 12.10 ready for prime time?

    - by Mikey
    I recently installed 12.10 as a dual boot using Wubi. I had numerous stability problems: many sporadic, unaccounted-for system errors, software that was installed and then seemed to disappear from the App Launcher, etc. The machine is a new HP quad-core with 4 GB of memory - a wonderful piece of hardware - so there are definitely no hardware problems there. I went back to 12.04 (installed from an ISO image, not Wubi) and it's rock solid - never any issues at all. Is 12.10 not ready for prime time? Are there fixes/updates in the pipe? Should I get a DVD disc and install 12.10 from an ISO instead of Wubi? I'm new to Linux/Ubuntu after 20 years on MS/Windows and loving it - I develop in C++ and Python, and maybe I will try Lazarus since I know Delphi. I would like to use the latest and greatest Ubuntu, but I will not sacrifice stability. Are there known problems with 12.10? Would using the standard install instead of Wubi help? A lot of users seem to be voting down this question - I'm not sure why, since other users have confirmed the problems.

    Read the article

  • Development environment to manage multiple Oracle databases

    - by jkohlhepp
    I am in an enterprise environment where we have applications that need to run against multiple Oracle databases. Developers may need to manage multiple vintages of these databases to support different test data or to diagnose bugs against different versions of the code. Right now, we have a limited set of test environments set up on "real" Oracle servers within the data center. We juggle these among development and QA groups, and a lot of conflicts and inefficiencies arise because of it. I am taking a look at Oracle Express Edition, which would allow me to spin up a local Oracle database. This is similar to the workflow I most often see with SQL Server: devs work on their local machine until they are ready to integrate, and then they push their DB changes to the integration/QA environments. However, from what I read it seems that Oracle XE only supports one database instance at a time. So if I have an application that utilizes two different databases, I can't have both of them running on my local machine. Is that correct? Do the Oracle Standard or Personal editions get around this limitation? If I had one of those installed locally, how difficult would it be to get multiple databases working on the same development machine? How do dev shops handle developing against Oracle when they need to be using several different Oracle instances for their applications?
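
    One workaround worth noting: even though XE is limited to a single instance, that one instance can host many schemas, and an Oracle schema often plays the role that a separate "database" plays on SQL Server. A minimal sketch, assuming the two applications can be pointed at separate schemas (the names here are hypothetical):

      -- Run as SYSTEM in the single XE instance.
      -- APP_ONE and APP_TWO stand in for the two databases the apps expect.
      CREATE USER app_one IDENTIFIED BY app_one_pwd;
      CREATE USER app_two IDENTIFIED BY app_two_pwd;
      GRANT CONNECT, RESOURCE TO app_one;
      GRANT CONNECT, RESOURCE TO app_two;

    Each application then connects with its own credentials and sees only its own tables, which is usually enough for local development and testing.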

    Read the article

  • Implementing a post-notification function to perform custom validation

    - by Alejandro Sosa
    Introduction

    The Oracle Workflow notification system can be extended to perform extra validation or processing via PL/SQL procedures when a notification is being responded to. These PL/SQL procedures are called post-notification functions, since they are executed after a notification action such as Approve, Reject, Reassign or Request Information is performed. The standard signature for a post-notification function is:

      procedure <procedure_name> (itemtype  in varchar2,
                                  itemkey   in varchar2,
                                  actid     in varchar2,
                                  funcmode  in varchar2,
                                  resultout in out nocopy varchar2);

    Modes

    The post-notification function receives the parameter 'funcmode', which will have one of the following values:

      'RESPOND', 'VALIDATE' and 'RUN' when a notification is responded to (Approve, Reject, etc.)
      'FORWARD' for a notification being forwarded to another user
      'TRANSFER' for a notification being transferred to another user
      'QUESTION' for a request for more information from one user to another
      'ANSWER' for a response to a request for more information
      'TIMEOUT' for a timed-out notification
      'CANCEL' when the notification is being re-executed in a loop

    Context Variables

    Oracle Workflow provides different context information, corresponding to the current notification being acted upon, to the post-notification function:

      WF_ENGINE.context_nid - the notification ID
      WF_ENGINE.context_new_role - the new role to which the action on the notification is directed
      WF_ENGINE.context_user_comment - comments appended to the notification
      WF_ENGINE.context_user - the user who is responsible for taking the action that updated the notification's state
      WF_ENGINE.context_recipient_role - the role currently designated as the recipient of the notification. This value may be the same as the value of WF_ENGINE.context_user, or it may be a group role of which the context user is a member.
      WF_ENGINE.context_original_recipient - the role that has ownership of and responsibility for the notification. This value may differ from the value of WF_ENGINE.context_recipient_role if the notification has previously been reassigned.

    Example

    Let us assume there is an EBS transaction that can only be approved by certain people, so any attempt to transfer or delegate such a notification should be allowed only to users SPIERSON or CBAKER. The way to implement this functionality would be as follows:

    1) Edit the corresponding workflow definition in Workflow Builder and open the notification. In the Function Name field, enter the name of the procedure where the custom code is handled, for instance TEST_PACKAGE.Post_Notification.

    2) In PL/SQL, create the corresponding package TEST_PACKAGE with a procedure named Post_Notification, as follows:

      procedure Post_Notification (itemtype  in varchar2,
                                   itemkey   in varchar2,
                                   actid     in varchar2,
                                   funcmode  in varchar2,
                                   resultout in out nocopy varchar2) is
        l_count number;
      begin
        if funcmode in ('TRANSFER','FORWARD') then
          select count(1) into l_count
          from WF_ROLES
          where WF_ENGINE.context_new_role in ('SPIERSON','CBAKER');
          --and/or any other conditions
          if l_count < 1 then
            WF_CORE.TOKEN('ROLE', WF_ENGINE.context_new_role);
            WF_CORE.RAISE('WFNTF_TRANSFER_FAIL');
          end if;
        end if;
      end Post_Notification;

    3) Launch the workflow process with the changed notification and attempt to reassign or transfer it. When trying to reassign the notification to user CBROWN, the reassignment fails with the WFNTF_TRANSFER_FAIL error. [Screenshot omitted in the original.]

    Check the Workflow API Reference Guide, section "Post-Notification Functions", to see all the standard, seeded WF_ENGINE variables available for extending notification processing.

    Read the article

  • Copying files to Truecrypt file container hangs

    - by Wagner Maestrelli
    I have a dual boot installation with Windows 7 Ultimate (32-bit, NTFS file system) and Ubuntu 10.10 (32-bit, ext4 file system). I have installed version 7.0a of TrueCrypt in both operating systems. On the Windows 7 HDD I have a 150 GB encrypted file container. It is a standard, dynamic file container, which means it's not hidden and uses a sparse file. The file was created using the Windows version of the TrueCrypt program. When I log on in Windows the container is mounted as drive E: and everything works fine! In Ubuntu the Windows NTFS file system is automatically mounted after I log on; I configured that using the ntfs-config package. In my ~/.profile I have this line to mount the TrueCrypt file container:

      truecrypt /media/7EDEBCFADEBCABB1/Users/Wagner/hd/hd.tc /media/truecrypt1

    The file container is mounted after logon without any problem. I can access it, copy files to/from it, etc. But when I try to copy relatively large amounts of data (~50 MB) to it via nautilus or cp -R, it starts the copy, copies some data up to a certain point, and then it just hangs! The progress bar does not move anymore and nothing happens. There is no error; it just hangs, and that's it. I have to kill the process myself. This problem does not happen in Windows: I can copy very large amounts of data to the container and it works great. But in Ubuntu the problem always happens! I mean, whenever I try to copy a bunch of files together, the copy process hangs. Has anyone ever faced this problem? Can anyone help? Thanks!

    Read the article

  • I want to build a Virtual Machine, are there any good references?

    - by Michael Stum
    I'm looking to build a virtual machine as a platform-independent way to run some game code (essentially scripting). The virtual machines that I'm aware of in games are rather old: Infocom's Z-Machine, LucasArts' SCUMM, id Software's Quake 3. As a .NET developer, I'm familiar with the CLR, and I looked into the CIL instructions to get an overview of what you actually implement at the VM level (vs. the language level). I've also dabbled a bit in 6502 assembler during the last year. The thing is, now that I want¹ to implement one, I need to dig a bit deeper. I know that there are stack-based and register-based VMs, but I don't really know which one is better at what, and whether there are more, or hybrid, approaches. I need to deal with memory management and decide which low-level types are part of the VM, and I need to understand why stuff like ldstr works the way it does. My only reference book (apart from the Z-Machine stuff) is the CLI Annotated Standard, but I wonder if there is a better, more general/fundamental text for VMs - basically something like the Dragon Book, but for virtual machines. I'm aware of Donald Knuth's Art of Computer Programming, which uses a register-based VM, but I'm not sure how applicable that series still is, especially since it's still unfinished.

    Clarification: the goal is to build a specialized VM. For example, Infocom's Z-Machine contains opcodes for setting the background color or playing a sound. So I need to figure out how much goes into the VM as opcodes vs. into the compiler that takes a script (language TBD) and generates the bytecode from it - but for that I need to understand what I'm really doing.

    ¹ I know, modern technology would allow me to just interpret a high-level scripting language on the fly. But where is the fun in that? :) It's also a bit hard to google, because "virtual machines" is nowadays often associated with VMware-type OS virtualization...
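
    To make the stack-based flavor concrete, here is a minimal sketch of a stack-based bytecode interpreter in C#. The opcode set and encoding are invented for illustration; a real design would add a constant pool, locals, and control flow:

      using System;
      using System.Collections.Generic;

      enum OpCode : byte { Push, Add, Mul, Print, Halt }

      static class TinyVm
      {
          public static void Run(byte[] code)
          {
              var stack = new Stack<int>();
              int ip = 0; // instruction pointer
              while (true)
              {
                  switch ((OpCode)code[ip++])
                  {
                      case OpCode.Push: stack.Push(code[ip++]); break; // one-byte operand follows
                      case OpCode.Add:  stack.Push(stack.Pop() + stack.Pop()); break;
                      case OpCode.Mul:  stack.Push(stack.Pop() * stack.Pop()); break;
                      case OpCode.Print: Console.WriteLine(stack.Peek()); break;
                      case OpCode.Halt: return;
                  }
              }
          }

          static void Main()
          {
              // Computes (2 + 3) * 4 and prints 20.
              Run(new byte[] {
                  (byte)OpCode.Push, 2,
                  (byte)OpCode.Push, 3,
                  (byte)OpCode.Add,
                  (byte)OpCode.Push, 4,
                  (byte)OpCode.Mul,
                  (byte)OpCode.Print,
                  (byte)OpCode.Halt });
          }
      }

    A register-based design would instead encode operand slots in each instruction (as Lua 5 and Dalvik do), trading more complex instruction decoding for fewer instructions executed.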

    Read the article

  • Entity Object Based on PL/SQL

    - by Manoj Madhusoodanan
    This blog describes how to create a PL/SQL-based entity object. Oracle Applications has a number of APIs, and each API performs numerous tasks. We can create a PL/SQL-based EO which will directly invoke the PL/SQL stored procedure from the EO. Here I am demonstrating this using a standard API, FND_USER_PKG.CREATEUSER. This API has x_user_name and x_owner as mandatory parameters. My task is to create a user through an OAF page which will accept a User Name and Password.

    The following steps need to be performed to achieve the above scenario:

    1) Create FndUserEO. Include all the API parameters and WHO columns in the EO. Make the UserName and EncryptedUserPassword columns mandatory (here I am not using an encrypted password; the column name is the same as the table column, so I am keeping it). Generate the VO.

    2) Edit FndUserEOImpl and add the API call. [Code screenshot omitted in the original.]

    3) Attach FndUserVO to the AM.

    4) Create the UI.

    5) Deploy the following files to the middle tier and restart the server.

      Entity Object:
        xxcust.oracle.apps.fnd.user.schema.server.FndUserEO.xml
        xxcust.oracle.apps.fnd.user.schema.server.FndUserEOImpl.java
      View Object:
        xxcust.oracle.apps.fnd.user.server.FndUserVO.xml
        xxcust.oracle.apps.fnd.user.server.FndUserVOImpl.java
      User Interface:
        xxcust.oracle.apps.fnd.user.webui.CreateFndUserCO.java
        xxcust.oracle.apps.fnd.user.webui.CreateFndUserPG.xml

    You can test by giving a User Name and Password.

    Read the article

  • Upgraded to 13.10 now clock settings are all disabled and clock does not display

    - by kervin
    I've upgraded to 13.10 and now I don't have the standard menu clock, which I need for work. I checked, and indicator-datetime is installed. I even uninstalled/reinstalled that applet with no luck. Also, my "Clock" settings under System Preferences are all disabled; I can't change anything. Does anyone know how I can get the old menu clock back? Alternatively, is there another menu clock app I can download?

    Thanks for the responses. I've restarted several times and that didn't fix the issue. I just ran the 13.10 update today, but I ran it again a few minutes ago and got about 200 KB of random updates. The issue is still present after a reboot.

      # apt-cache policy indicator-datetime
      indicator-datetime:
        Installed: 13.10.0+13.10.20131016.2-0ubuntu1
        Candidate: 13.10.0+13.10.20131016.2-0ubuntu1
        Version table:
       *** 13.10.0+13.10.20131016.2-0ubuntu1 0
              500 http://us.archive.ubuntu.com/ubuntu/ saucy/main amd64 Packages
              100 /var/lib/dpkg/status

    Read the article

  • Working with data and meta data that are separated on different servers

    - by afuzzyllama
    While developing a product, I've come across a situation where my group wants to store the metadata for data-entry forms (questions, layout, etc.) in a different database than the one where the collected data is stored. This is mostly for security: we want our metadata to be able to be public facing, while keeping the collected data as secure as possible. I was thinking about writing a web service that provides the meta information, which the data collection program could access. The only issue I see with this approach is that the front end is going to have to match the metadata with the collected data, which would be more efficient as a join on the back end. Currently, this system is slated to run on .NET and MSSQL. I haven't played around with .NET libraries running inside SQL Server, but I'm considering trying to create logic that would pull from the web service, convert the metadata into a table that SQL can join on, and return the combined data and metadata that way. Is this solution the wrong way to approach the problem? Is there a pattern or "industry standard" way of bringing together two datasets that don't live in the same database?
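
    If the join moves to the application tier instead of SQLCLR, it can be as simple as an in-memory LINQ join. A minimal sketch - every type and property name here is hypothetical:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      // From the public-facing metadata service.
      record FieldMeta(int FieldId, string Question);

      // From the secured data database.
      record FieldValue(int FieldId, string Answer);

      static class FormJoin
      {
          // In-memory equivalent of an inner join on FieldId.
          public static IEnumerable<(string Question, string Answer)> Combine(
              IEnumerable<FieldMeta> meta, IEnumerable<FieldValue> values) =>
              from m in meta
              join v in values on m.FieldId equals v.FieldId
              select (m.Question, v.Answer);

          static void Main()
          {
              var meta = new[] { new FieldMeta(1, "Favorite color?") };
              var values = new[] { new FieldValue(1, "Blue") };
              foreach (var (q, a) in Combine(meta, values))
                  Console.WriteLine($"{q} {a}");
          }
      }

    This keeps the databases decoupled at the cost of pulling both result sets over the wire, which is usually acceptable for form-sized data.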

    Read the article

  • How to talk a client out of a Flash website?

    - by bunglestink
    I have recently been doing a bunch of web side projects through word-of-mouth recommendations only. Although I am much more of a programmer than a designer by any means, my design skills are not terrible, and I do not hate dealing with UI like many programmers do. As a result, I find myself lured into a bunch of side projects where, aside from a minimal back end for content administration, most of the programming is on front-end interfaces (read: javascript/css). By far the biggest frustration I have had is convincing clients that they do not want Flash. Aside from the fact that I really do not enjoy Flash "development", there are many practical reasons why Flash is not desirable (lack of compatibility across devices, decreased client accessibility, plug-in requirements, increased development time, etc.). Instead of just flat out telling clients "I will not build you a Flash website", I would much rather use tactics to convince/explain to them that this is not what they actually want - that Flash does not meet their requirements any better than standard html/css/js and distracts users from their content. What kind of first-hand experience do others have with this? How do you explain to someone that javascript/css/AJAX is usually a better option for most websites? Why do people want to use Flash so badly to begin with? This question pertains to clients who do not have any technical reasons for wanting Flash, but just want it because they think it makes pretty websites.

    Read the article

  • SSIS Dashboard v0.4

    - by Davide Mauri
    Following the post of the SSISDB script on Gist, I've been working on an HTML5 SSIS Dashboard, in order to have a nice-looking, user-friendly and, most of all, useful SSIS dashboard. Since this is a "spare-time" project, I've decided to develop it using Python, since it's THE data language (R aside): beautiful and powerful, well established, well documented, and with a rich ecosystem around it. Plus it has full support in Visual Studio through the amazing Python Tools for Visual Studio plugin. I also decided to use Flask, a very good micro-framework for creating websites, and the SB Admin 2.0 Bootstrap admin template, since I'm anything but a web designer. The result is here: https://github.com/yorek/ssis-dashboard and I can say I'm pretty satisfied with the work done so far (I've worked on it for probably less than 24 hours). Though there are some features I'd like to add in the future (historical execution times, some charts, a connection to AzureML to predict expected execution times), it's already usable. Of course I've tested it only on my development machine, so check twice before putting it in production, but given the fact that there is virtually no installation needed (you only need to install Python), and that all queries are based on standard SSISDB objects, I expect no big problems. If you want to test, contribute and/or give feedback, please feel free to do it... I would really love to see this little project become a community project! Enjoy!

    Read the article

  • crunchbang: it takes up *how* much memory?!?!

    - by Theo Moore
    I've been trying many distros of Linux lately, looking for something I like for my netbook. I started out with Ubuntu, and I can tell you I am a big fan. Ubuntu is now fast to install, much simpler to administer, and pretty light resource-wise. My original install was the standard 32-bit version of 9.04. I tried the netbook remix version of that release, but it was very, very slow. Even the full-blown version used only about 200 MB - much better than the almost 800 MB that the recommended Windows version took. Once the newest release of Ubuntu came out, I decided to try the netbook remix of 10.04. It used even less RAM, only about 150 MB. I thought I'd found my OS. I certainly settled in and prepared to use it forever. Then someone I know suggested I try CrunchBang. It has the most minimalistic UI I've ever seen, using Openbox rather than Gnome or KDE. Very slick, simple and clean. Since I am using the alpha of the most recent version (based on Debian Squeeze), the apps provided for you are few... although more will be provided soon. You do have a word processor, etc., although not the OpenOffice you would normally get in Ubuntu. But the best part? 48 MB. That's it - 48 MB fully loaded, supporting what I call "hotel services". It's fast, boots quickly, and believe it or not, I can even do Java-based development... on my netbook! Pretty slick. More on it as I use it.

    Read the article

  • Can Near Field Communications (NFC) Benefit your Supply Chain?

    - by Stephen Slade
    Leading firms continue to leverage the latest tools and technologies to drive performance, especially around minimizing transaction costs. With razor-thin margins in manufacturing and distribution, the leading producers are resorting to Near Field Communications to gain efficiencies. In this week's CIO magazine (Apr 1, 2012, pg. 30; see http://www.cio.com), Lauren Brousell talks about the things you need to know to make a more informed decision on NFC. Sandy Shen of Gartner says NFC appeals because "it supports any services that requires data transfer and authentication".

    1. NFC is cheap and easy - a short-range transmission technology connecting smartphones for data transfer.
    2. Adoption seems inevitable - more merchants will use NFC for payments in the future; wallets are becoming obsolete.
    3. It's a hot potato for the enterprise - businesses, credit card companies and cell phone providers are debating who handles the billing process.
    4. It's in use overseas - Japan uses FeliCa to pay by smartphone; in the US, billing agreements are causing territorial conflict.
    5. Security risks come standard - as people lose handheld devices, security will be an ongoing concern; credentials and timeout features can alleviate this to some extent.

    My prediction: in 5 years, we won't have wallets in our pockets. Our secure and all-powerful smartphones will be our portable electronic banks, executing transactions for the supply chain based on our preferences and propensities.

    Read the article

  • intel atom getting too hot

    - by user59565
    I own an Asus 1215N which is getting very hot:

      Intel Atom D525 dual core
      ION 2 (GeForce 220 + GMA 3150)
      4 GB RAM
      Ubuntu 12.04

    It hits 86 °C at idle. Sometimes (under load, or after being on for an hour) it shuts down due to heat; under Windows 7 it idles at 49 °C. I tried an ACPI call to shut down the nVidia chip, which is cooled together with the Atom chip. That didn't solve the problem. To check whether it really turned off, I measured how much power the laptop consumed: it only went from 1400 mW to 860 mW, with no change in heat. I also tried reapplying the standard heat adhesive; the old adhesive made it run at 97 °C (it couldn't even get through a usable install of Ubuntu). This really annoys me, as Ubuntu is my OS of choice. Should I try compiling the kernel? Is it true that compiling for a P4 is the better choice for the Atom, when compiling the kernel for this processor architecture?

    Update: I have now tried compiling the kernel for the Atom. The temperature is now 83 °C (though I think the drop has more to do with ambient temperature than with the customized kernel). Help appreciated.

    Read the article

  • How do I boot into console mode (redux)

    - by Leo Simon
    I'm running Ubuntu 12.04. This question was asked some time ago ("How do I disable the boot splash screen?") but the answers didn't work for me. The standard way to boot into console mode used to be to edit /etc/default/grub and set

      GRUB_CMDLINE_LINUX_DEFAULT="text"

    This worked fine until I ran the fix proposed in https://help.ubuntu.com/community/SoundTroubleshootingProcedure in order to get sound to work. Since then, I have disabled the boot splash screen, but I can't avoid what I presume is the lightdm login screen. All I want to do is disable this GUI and be prompted with a console login prompt. (Shouldn't be so hard, should it???) I read in thread 33416, mentioned above, that there was a bug in lightdm (it wasn't recognizing "text" properly as an option for GRUB_CMDLINE_LINUX_DEFAULT). But that discussion happened more than a year ago, and it's surely been fixed; yet my lightdm is up to date (so I'm told when I try to update it with apt-get). As suggested in one of the threads above, I tried

      sudo update-rc.d -f lightdm remove

    which resulted in a hung machine. I managed to recover using recovery mode, but now I still get the GUI again. Another suggestion is to edit /etc/init/lightdm.override. I've done this and set it to "manual" as suggested, but lightdm simply ignores this. Could somebody suggest how to proceed please? Thanks very much, Leo
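
    For reference, the commonly cited sequence for a 12.04 (Upstart-based) system is below; this is a sketch of the usual advice, not a guaranteed fix for the lightdm bug described above:

      # 1. Boot the kernel in text mode.
      #    In /etc/default/grub set: GRUB_CMDLINE_LINUX_DEFAULT="text"
      sudo update-grub

      # 2. Tell Upstart not to start lightdm automatically.
      echo manual | sudo tee /etc/init/lightdm.override

      # 3. Start the GUI by hand when it is wanted.
      sudo start lightdm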

    Read the article

  • Introducing the Industry's First Analytics Machine, Oracle Exalytics

    - by Manan Goel
    Analytics is all about gaining insights from data for better decision making. The business press is abuzz with examples of leading organizations across the world using data-driven insights for strategic, financial and operational excellence. A recent study on "data-driven decision making" conducted by researchers at MIT and Wharton provides empirical evidence that "firms that adopt data-driven decision making have output and productivity that is 5-6% higher than the competition". The potential payoff for firms can range from higher shareholder value to a market leadership position. However, the vision of delivering fast, interactive, insightful analytics has remained elusive for most organizations. Most enterprise IT organizations continue to struggle to deliver actionable analytics due to time-sensitive, sprawling requirements and ever-tightening budgets. The issue is further exacerbated by the fact that most enterprise analytics solutions require dealing with a number of hardware, software, storage and networking vendors, and precious resources are wasted integrating the hardware and software components to deliver a complete analytical solution.

    Oracle Exalytics In-Memory Machine is the world's first engineered system specifically designed to deliver high-performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers answers to all your business questions with unmatched speed, intelligence, simplicity and manageability. Oracle Exalytics' unmatched speed, visualizations and scalability deliver extreme performance for existing analytical and enterprise performance management applications, and enable a new class of intelligent applications such as yield management, revenue management, demand forecasting, inventory management, pricing optimization, profitability management, rolling forecasts and virtual close.

    Requiring no application redesign, Oracle Exalytics can be deployed in existing IT environments by itself or in conjunction with Oracle Exadata and/or Oracle Exalogic to enable extreme performance and a best-in-class user experience. Based on proven hardware, software and in-memory technology, Oracle Exalytics lowers the total cost of ownership, reduces operational risk and provides unprecedented analytical capability for workgroup, departmental and enterprise-wide deployments. Click here to learn more about Oracle Exalytics.

    Read the article

  • Internal Data Masking

    - by ACShorten
    By default, the data in the product is unmasked for authorized users. If particular data within an object is considered a candidate for data masking, then the masking capabilities of the product can be used to mask the data in an appropriate fashion. The built-in data masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements:

      An algorithm, of type F1-MASK, is specified to configure the elements of the data masking, including the masking character, the number of suffix characters left unmasked, characters to ignore in the string, and the application service, security type and authorization levels applicable to the mask.

      A Data Masking Feature Configuration is created to define where the algorithm applies. The specification of the feature allows you to define the fields to mask using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field and even a child record (such as an identifier).

      The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to the user group or not.

    For example, say there is a field called CCNBR in the product which holds credit card details. I would create an algorithm, say CMformatCC, to mask the credit card number with the last few digits unmasked (as the standard in most systems dictates). I would specify on the field mask the following: field="CCNBR", alg="CMformatCC". On the algorithm CMformatCC, I would specify the mask, application service, security type and the authorization level at which users would see the credit card unmasked. To finish the configuration off and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and appropriate authorization level for each group. Whenever a user accesses the CCNBR field on any of the maintenance screens, searches and other screens that use the CCNBR metadata definition, the field would then be masked according to the user group the user is a member of. Refer to the documentation supplied with the F1-MASK algorithm type entry for more examples of what is possible.

    Read the article

  • 12.10 visual performance using nvidia driver

    - by user100485
    My fresh Ubuntu 12.10 install is slow - nothing extreme, but dragging windows, switching workspaces and things like that are just slow and look horrible. It feels like the FPS is dropping in a game, and the effect gets worse if I connect my external monitor. Doing some Photoshop work in Windows was actually a relief! My system is an Intel Pentium dual-core T4500 with 4 GB of memory and a GeForce 8200M G integrated graphics chip (MSI CR500 laptop). Nothing fancy, but it should be able to run OK. My "experience" setting in Ubuntu is set to standard. I've installed the nvidia drivers and tried both the current and experimental versions; the experimental drivers seem to perform a bit better, but performance is bad overall anyway. I set the mode to adaptive in the nvidia-settings tool, and it goes straight to the maximum setting and doesn't come back. Using htop I found out that compiz and the X server always use a few percent of my CPU, more than I think they should; the CPU time consumed is 5:18 for compiz, 4:33 for /usr/bin/X and 2:41 for Google Chrome (about 30 tabs open, so not too strange, I think). What can I do to increase the visual performance? This makes me not want to use Ubuntu in public!

    Read the article

  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same way as the tags/subjects about the articles, but as strings like "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to the new one, I'm going to need to do a lot of replacement on individual entries. While I understand how to do operations with regular expressions outside of databases, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there an SQL operation I should be looking into? Please let me know if the problem isn't clear - I'm not entirely sure what kind of answer I'm looking for.
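
    As one illustration of the kind of SQL operation available: once the raw strings are imported, a one-off UPDATE can convert them into a proper DATE column. A sketch assuming MySQL syntax and a hypothetical articles table holding the date tag in a date_tag column:

      -- Add a real DATE column, then populate it from the 'YYYYMMDDd' strings.
      ALTER TABLE articles ADD COLUMN published_on DATE;

      UPDATE articles
      SET published_on = STR_TO_DATE(LEFT(date_tag, 8), '%Y%m%d')
      WHERE date_tag REGEXP '^[0-9]{8}d$';

    Doing the conversion once, in SQL, during the migration is usually preferable to reparsing the strings in application code forever.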

    Read the article

  • Hidden web standards behind Google "custom searchEngines"?

    - by Hoàng Long
    Today, while playing with the Google Chrome Omnibox, I noticed a strange behavior. I guess there's some "hidden" web standard behind it, but I can't figure it out. Here's how to reproduce it:

      1. Go to http://edition.cnn.com/
      2. Use the search function in the upper right corner and search for a random keyword, for example "abc".
      3. Close the tabs.
      4. Open a new tab and type until Chrome suggests http://edition.cnn.com/, then press Tab.

    The Omnibox now shows "Search CNN.com"! And when you type "abc" and press Enter, it uses the CNN search function to do the job, not Google! I also tried this with several different sites. For some it doesn't work, but for some sites, like CNN and vnexpress.net, it works after I use the search function of that site once. I also learnt about chrome://settings/searchEngines (type it in your Chrome address box and you will see), and learnt that you can add custom search engines in Chrome. But the question is: why can Chrome detect the search URL automatically for some pages and not others? It's not because those sites subscribe to a Google service, because I can reproduce the same behavior with my site (http://ledohoanglong.wordpress.com), and I'm sure there's no subscription there. So I guess there's a method to "expose" the search function of a site, so that Google Chrome can catch it (after I call the search function of that site once, of course). Does anyone know how this works behind the scenes?
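
    For what it's worth, the standard that typically powers this kind of discovery is OpenSearch: a site advertises its search endpoint with a small description document, and browsers use it for features like Chrome's tab-to-search (Chrome can also learn an engine by observing a plain GET search form being submitted, which would explain why it kicks in only after one use). A minimal sketch, with hypothetical URLs:

      <!-- In the site's <head>: advertise the search description document. -->
      <link rel="search" type="application/opensearchdescription+xml"
            title="Example Search" href="/opensearch.xml">

      <!-- /opensearch.xml -->
      <OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
        <ShortName>Example Search</ShortName>
        <Url type="text/html" template="http://www.example.com/search?q={searchTerms}"/>
      </OpenSearchDescription>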

    Read the article

  • ArchBeat Link-o-Rama for 11/18/2011

    - by Bob Rhubart
    IT executives taking lead role with both private and public cloud projects: survey | Joe McKendrick
    "The survey, conducted among members of the Independent Oracle Users Group, found that both private and public cloud adoption are up - 30% of respondents report having limited-to-large-scale private clouds, up from 24% only a year ago. Another 25% are either piloting or considering private cloud projects. Public cloud services are also being adopted for their enterprises by more than one out of five respondents." - Joe McKendrick

    SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles
    This week on the Architect Home Page on OTN.

    OIM 11g OID (LDAP) Groups Request-Based Provisioning with custom approval - Part I | Alex Lopez
    In part one of a two-part blog post, Alex Lopez illustrates "an implementation of a Custom Approval process and a Custom UI based on ADF to request entitlements for users which in turn will be converted to Group memberships in OID."

    ArchBeat Podcast Information Integration - Part 3/3
    "Oracle Information Integration, Migration, and Consolidation" author Jason Williamson, co-author Tom Laszewski, and book contributor Marc Hebert talk about upcoming projects and about what they've learned in writing their book.

    InfoQ: Enterprise Shared Services and the Cloud | Ganesh Prasad
    As an industry, we have converged onto a standard three-layered service model (IaaS, PaaS, SaaS) to describe cloud computing, with each layer defined in terms of the operational control capabilities it offers. This is unlike enterprise shared services, which have unique characteristics around ownership, funding and operations, and which span the SaaS and PaaS layers. Ganesh Prasad explores the differences.

    Stress Testing Oracle ADF BC Applications - Do Connection Pooling and TXN Disconnect Level
    Oracle ACE Director Andrejus Baranovskis describes "how jbo.doconnectionpooling = true and jbo.txn.disconnect_level = 1 properties affect ADF application performance."

    Exploring TCP throughput with DTrace | Alan Maguire
    "According to the theory," says Maguire, "when the number of unacknowledged bytes for the connection is less than the receive window of the peer, the path bandwidth is the limiting factor for throughput."

    Read the article

  • Page Inspector and Visual Studio 2012

    - by nikolaosk
    In this post I will be looking into a new feature that has been added to the VS 2012 IDE: Page Inspector, which gives developers a very handy way to identify layout issues and to find out which part of the server-side code is responsible for a given HTML snippet. If you are interested in reading other posts about VS 2012 and .NET 4.5, please have a look here and here. This tool is integrated into the VS 2012 IDE, and we can launch it in different ways:

    1) Create a new ASP.NET MVC application (an Internet application). I will not add any code.

    2) In the Solution Explorer, choose the project and right-click. From the available options select View in Page Inspector. Have a look at the picture below.

    3) We can also launch Page Inspector from the Standard toolbar. Please have a look at the picture below.

    4) Let's view our application with Page Inspector. First I inspect the About link in the menu. I can see very quickly the HTML that is rendered for that link to appear, and I also see the server-side code (the actual view, _Layout.cshtml) that is responsible for that link. This is something developers have always craved. We can also see the CSS styles that are used to style this link (About). Have a look at the picture below.

    Obviously there are similar tools that I have been using in the past when I wanted to change a part of the HTML or see what piece of CSS code affects my layout. I used Firebug when viewing my web applications in Firefox. Internet Explorer and Chrome also have great similar tools that help web developers identify issues with a site's appearance. Please bear in mind that Page Inspector works with all forms of the ASP.NET stack, e.g. Web Forms and Web Pages. Hope it helps!!!!!

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Receive Location

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter and also about using the ODBC adapter to generate schemas. But what about creating a receive location? An ODBC receive location will periodically poll the configured database using the stored procedure or SQL string defined in your request schema.

    If you need to, begin by adding a new receive port to your BizTalk configuration. Create a new receive location, select the ODBC adapter, and click Address. You will now be shown the ODBC Community Adapter Transport Properties window. Select the connection string and you will be shown the Choose Data Source window. If you have already created the Test Database source when generating a schema from ODBC, this will be shown (if not, go and take a look at my previous post to see how this is done). You will then need to choose the SQL command that will be run by the receive port. In this case I have deployed the Test Mapping schemas that I created previously and selected the Request schema. You should now have populated the appropriate properties for the ODBC Community Adapter. Finally, set the standard receive location properties, and your ODBC receive location is ready.

    Read the article

  • WebLogic Server Provisioning and Patching with Enterprise Manager Cloud Control 12c Now Available

    - by JuergenKress
    For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. SOA Suite and BPM Suite run on WebLogic! We are pleased to announce the availability of a WebLogic Server management demo that showcases some of the key provisioning and patching capabilities of WebLogic Server Management Pack Enterprise Edition (EE). To learn more about these features - as well as other features of the pack - please visit the pack's saleskit page.

    Demo Highlights

    The demo showcases the following capabilities:

      Patching Oracle WebLogic Servers
      Standardizing WebLogic Server Patch Rollouts
      Creating a WebLogic Domain Provisioning Profile
      Cloning a WebLogic Domain from a Provisioning Profile
      Deploying a Java EE Application
      Scaling Out an Oracle WebLogic Cluster

    Demo Instructions

    Go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the "Software Lifecycle Automation" section, click on the link "EM Cloud Control 12c WLS Provisioning and Patching" (tagged as "NEW"). The demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo.

    SOA & BPM Partner Community

    For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Technorati Tags: WebLogic, Enterprise Manager, EM12c, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article
