Search Results

Search found 31483 results on 1260 pages for 'database migration'.


  • Updating Pages after migration of website

    - by DLackey
    My web site was coded in ColdFusion and over the years has obtained a good ranking. I recently migrated the front-end to a WordPress site and wanted to know the ideal way of notifying Google and the various search engines of the change. For example, the home page of index.cfm is no longer valid since it's index.php. I've submitted an updated sitemap.xml file to Google. I'm sure my site will slip some while the search engines re-index it, but I'd like to minimize this as much as possible with the holidays coming up (my site is a service-oriented site that caters to people who travel during the holidays). Right now, the old .cfm pages are still online but are re-routed to the appropriate WordPress page (for example, about.cfm is now routed to /about/ using a cflocation tag). Not sure if I should pull down the .cfm pages altogether or leave them in place until the new pages are picked up by the search engines. Any advice would be helpful.
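
    One way to keep an eye on the cutover is to script a check that every legacy .cfm URL answers with a permanent (301) redirect to its new WordPress path, since a 301 is what tells search engines to transfer the old page's ranking (cflocation typically issues a temporary 302 redirect unless configured otherwise). A minimal sketch using Python's requests library; the URL map below is hypothetical and would need to list your actual pages:

        import requests

        # Hypothetical mapping of legacy ColdFusion pages to new WordPress paths.
        REDIRECTS = {
            "https://www.example.com/index.cfm": "https://www.example.com/",
            "https://www.example.com/about.cfm": "https://www.example.com/about/",
        }

        for old_url, new_url in REDIRECTS.items():
            # Don't follow the redirect; inspect the first response itself.
            resp = requests.get(old_url, allow_redirects=False, timeout=10)
            status = resp.status_code
            location = resp.headers.get("Location", "")
            ok = status == 301 and location == new_url
            print(f"{'OK ' if ok else 'BAD'} {old_url} -> {status} {location}")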

    Read the article

  • Database Security: The First Step in Pre-Emptive Data Leak Prevention

    - by roxana.bradescu
    With WikiLeaks raising awareness around information leaks and the harm they can cause, many organizations are taking stock of their own information leak protection (ILP) strategies in 2011. A report by IDC on data leak prevention stated: Increasing database security is one of the most efficient and cost-effective measures an organization can take to prevent data leaks. By utilizing the data protection, access control, account management, encryption, log management, and other security controls inherent in the database management system, entities can institute first-level control over the widest range of protected information. As a central repository for unstructured data, which is growing by leaps and bounds, the database should be the first layer providing information leakage protection. Unfortunately, most organizations are not taking sufficient steps to protect their databases, according to a survey by the Independent Oracle Users Group. For example, in most organizations any operating system administrator or database administrator can access all the data stored in the database, without any kind of auditing or monitoring. And it's not just administrators: database users can typically access the database with ad-hoc query tools from their desktops and bypass any application-level controls. Despite numerous regulations calling for controls to limit the powers of insiders, most organizations still put too many privileges in the hands of their employees. Time and time again these excess privileges have backfired. Internal agents were implicated in almost half of data breaches according to the Verizon Data Breach Investigations Report, and the rate is rising. Hackers have also taken advantage of these excess privileges very successfully, using stolen credentials and SQL injection attacks. But back to the insiders. Who are these insiders and why do they do it? In 2002, U.S. Secret Service (USSS) behavioral psychologists and CERT information security experts formed the Insider Threat Study team to examine insider threat cases that occurred in US critical infrastructure sectors, from both a technical and a behavioral perspective. A series of fascinating reports has been published as a result of this work. You can learn more by watching the ISSA Insider Threat Web Conference. So as your organization starts to look at data leak prevention over the coming year, start off by protecting your data at the source: your databases. IDC went on to say: Any enterprise looking to improve its competitiveness, regulatory compliance, and overall data security should consider Oracle's offerings, not only because of their database management capabilities but also because they provide tools that are the first layer of information leak prevention. Learn more about Oracle Database Security solutions and get the whitepapers, demos, tutorials, and more that you need to protect data privacy from internal and external threats.

    Read the article

  • Accounts in Work Items after migration to TFS 2010 and to new domain

    - by Clara Oscura
    Lately I’ve been doing some tests on migrating our TFS 2008 installation to TFS 2010, coupled with a machine and domain change. One particularly tricky topic was user accounts. We installed a new machine with TFS 2010 first and then migrated the projects from the old server. The work items were migrated with the projects. Great, but if I try to edit one of the old work items, I cannot save it anymore because some fields contain old user names (e.g. OLDDOMAIN\user) which are not known in the new domain (it should be NEWDOMAIN\user). When I correct the ‘Assigned To’ field value, I get another error regarding another field. Before TFS 2010, we had the TFSUsers power tool, which allowed you to map an old user name to a new one. It is not available anymore because WI fields with user accounts are now synchronized with AD display name changes (explained here). The correct way to handle this in TFS 2010 is to use TFSConfig Identities before adding the new domain accounts into the TFS groups (documented here). So, too late for us. I’ve found a (tedious) workaround to change those old accounts in work items so that people can keep working with them:
    1. Install the TFS 2010 power tools.
    2. Export the WIT from your project (VS | Tools | Process Editor | Work Item Types). Save the definition, for example: Original_MyProject_Task.xml.
    3. Copy the xml (NoReadOnly_MyProject_Task.xml) and edit it. From the field definitions of ‘Activated By’, ‘Closed By’ and ‘Resolved By’, remove the following:
       <WHENNOTCHANGED field="System.State">
         <READONLY />
       </WHENNOTCHANGED>
    4. Import the WIT in VS. Choose the new file (NoReadOnly_MyProject_Task.xml) and import it into MyProject.
    5. Open all tasks in Excel (flat list) and display the following columns: Assigned To, Activated By, Closed By, Resolved By. Change the user accounts to the new ones (I usually sort each column alphabetically to make it easier).
    6. Publish. If you get a conflict on a field, tough luck: you will have to manually choose “Local version” for each work item. I told you it was a tedious process.
    7. Import the original WIT (Original_MyProject_Task.xml) into MyProject. We only changed the WI definition so that we could change some fields; the original definition should be put back.
    And what about the other fields, Created By and Authorized As? These fields are not editable by definition (VS | Tools | Process Editor | Work Item Fields Explorer), even if they are not marked as read-only in the WIT. You can leave the old values; it doesn’t seem to matter to TFS. The other four fields are editable by definition, so only the WIT read-only rule prevents us from changing them. Technorati Tags: TFS, Team Foundation Server 2010, Work Item, Domain change
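
    Step 3 can be scripted when there are many work item types to patch. Below is a rough sketch in Python that strips the WHENNOTCHANGED/READONLY rule from those three fields; it assumes the standard Microsoft.VSTS field reference names and a WIT file without XML namespaces, so treat it as a starting point rather than a finished tool:

        import xml.etree.ElementTree as ET

        # Standard reference names for the three fields mentioned above.
        FIELDS = {
            "Microsoft.VSTS.Common.ActivatedBy",
            "Microsoft.VSTS.Common.ClosedBy",
            "Microsoft.VSTS.Common.ResolvedBy",
        }

        tree = ET.parse("Original_MyProject_Task.xml")
        for field in tree.getroot().iter("FIELD"):
            if field.get("refname") not in FIELDS:
                continue
            # findall returns a list, so removing children while looping is safe.
            for rule in field.findall("WHENNOTCHANGED"):
                if rule.get("field") == "System.State":
                    field.remove(rule)
        tree.write("NoReadOnly_MyProject_Task.xml", encoding="utf-8",
                   xml_declaration=True)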

    Read the article

  • The Great PST Migration

    Having recently been on the front lines of a massive PST import operation, Sean Duffy offers advice and points out pitfalls. More than anything, he wishes he had a simple tool with which to banish PST hell, and finishes with some hard-won guidelines.

    Read the article

  • #OOW 2012: IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the four announcements made by Larry Ellison today during the opening session of Oracle OpenWorld 2012... To know what's behind X3H2M2 you will have to wait a little, as I will go in order, beginning with the IaaS (Infrastructure as a Service) announcement.

    Oracle IaaS goes public... and private... Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our servers, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud infrastructure was already available for our customers to rapidly build their own private cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today brings that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose to operate the same systems that provide our public Cloud infrastructure (Exadata, Exalogic & SuperCluster) behind their firewall, in a private Cloud model.

    Oracle 12c Multitenant Database. This is another major announcement made today about what's coming in Oracle Database 12c: the ability to consolidate multiple databases with no extra cost, especially in terms of memory needed on the server node, which is often THE limiting factor for consolidation. The principle can be compared to Solaris Zones: you have a database container which "owns" the memory and the database background processes, and "pluggable" databases inside this container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it becomes available, as it is a major step forward in true database consolidation with multitenancy on a shared (optimized) infrastructure.

    X3H2M2, enabling the new Exadata X3 in-memory database. Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. This is the major software enhancement of the new X3 Exadata machine, but since it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. And that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge); the memory, now 512GB per node as well; and the new Flash Fire card, bringing up to 22 TB of Flash cache. All of this, 4TB of RAM plus 22TB of Flash, is used cleverly not only for reads but also for writes by the X3H2M2 algorithm, making a very big difference compared to a traditional storage flash extension. And what do those extra performances bring to you on an already very efficient system? Double the performance compared to the fastest storage array on the market today (including flash), while dividing your storage price by 10 at the same time... Something to consider closely these days... Especially as we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point.

    As you have seen, a major opening for this year again, with true innovation. But that was not all we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us, and Andrew Mendelsohn, Senior Vice President, Database Server Technologies, came on stage to explain that the next step after I/O optimization for the database with Exadata is to accelerate the database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... That was the big theme of the day, and of the Oracle User Groups conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data, one on finance and fraud profiling and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, with such rich content, you can understand why this is only the first day!

    Read the article

  • Oracle Database In-Memory

    - by Mike.Hallett(at)Oracle-BI&EPM
    Larry Ellison unveiled the next major milestone in database technology, Oracle Database In-Memory, on June 10, 2014. Oracle Database In-Memory will be generally available in July 2014 and can be used with all hardware platforms on which Oracle Database 12c is supported. This option will accelerate database performance by orders of magnitude for analytics, data warehousing, and reporting while also speeding up online transaction processing (OLTP). It allows any existing Oracle Database-compatible application to automatically and transparently take advantage of columnar in-memory processing, without additional programming or application changes. Benefits:
    - Fast ad-hoc analytics without the need to pre-create indexes
    - Completely transparent to existing applications
    - Faster mixed-workload OLTP
    - No database size limit
    - Industrial-strength availability and security
    - Robustness and maturity of Oracle Database 12c
    To find out more, see Oracle Database In-Memory and the comment from Rittman Mead on the Oracle In-Memory option launch... and I will let you know how this unfolds with regard to advantages for OBI11g, Exalytics, and Big Data over the coming months.

    Read the article

  • Document Links about Database Features on Exadata

    - by Bandari Huang
    DBFS on Exadata
    - Exadata MAA Best Practices Series - Using DBFS on Exadata (Internal Only)
    - Oracle Database SecureFiles and Large Objects Developer's Guide 11g Release 2 (11.2) E18294-01
    - Configuring a Database for DBFS on Oracle Database Machine [ID 1191144.1]
    - Configuring DBFS on Oracle Database Machine [ID 1054431.1]
    - Oracle Sun Database Machine Setup/Configuration Best Practices [ID 1274318.1] - Verify DBFS Instance Database Initialization Parameters

    DBRM on Exadata
    - Exadata MAA Best Practices Series - Benefits and use cases with Resource Manager, Instance Caging, IORM (Internal Only)
    - Oracle Database Administrator's Guide 11g Release 2 (11.2) E25494-02

    Read the article

  • Web migration of a VB6 system with VWG

    - by Webgui
    Brinks Bolivia's eSAC (Customer Service) system allows registering all the different kinds of contacts for a customer, in addition to maintaining an updated status of each service or customer request, so that the company has accurate information and can perform the appropriate procedures for all applications. The system was originally developed in VB6, and since web access was essential, it was offered via Citrix. The application's performance was a critical issue, as was the need to offer the system without client-specific installations, so the company looked for a solution that would remove those drawbacks of using Citrix. The search for a solution that would let it offer the eSAC system over the web without client installations, and with sufficient performance even on limited bandwidth, led Brinks to migrate their VB6 Customer Service system to Visual WebGui. "Developing on Visual WebGui we were able to migrate the system to the web environment and even add new features in less time, which allows us to offer it over a standard web browser with better performance and no installations, as was required with Citrix," concluded Alexander Cuellar. The full article and screenshots of the system are available here.

    Read the article

  • Entity Framework: Connecting to an mdf user database file via localDB during script execution

    - by Marko Apfel
    Problem: if you run the "Generate database from model" wizard and execute the generated script, the destination database can be the wrong one (for instance, the master database of the SQL Server). Solution: to use your own attachable mdf user database, some connection information must be specified during script execution. Executing the script opens the dialog "Connect to Server". Press "Options" and go to the second tab, "Connection Properties". Select "Browse server" in the "Connect to database" dropdown box and confirm the information dialog with "Yes". In the following dialog you can choose your user database. Now the schema is created in the user database.

    Read the article

  • Migration for Dummies: The Practical Top 10 Checklist

    There are a number of top 10 lists of considerations for the cloud, most of which are designed to help you decide whether to move to the cloud at all. But once you have made the important decision to migrate your app, the list below covers the important things to check before the move.

    Read the article

  • Code base migration - old versioning system to modern

    - by JohnP
    Our current code base is contained in a versioning system that is old and outdated (Visual SourceSafe 5.0, mid-1990s), and contains a mix of packages that are no longer used, ones that are being used but no longer updated, and newer code. It is also a mix of 4 languages, and includes libraries for some of our system implementations (such as Dialogic, Sun Tzu {Clipper}). This breaks down into the following categories:
    - Legacy code - no longer used (systems that have been retired or replaced, etc.)
    - Legacy code - in current use (no intention of upgrades or minor bug fixes, only major fixes if needed)
    - Current code - in current use, and will be used for future versions/development
    - Support libraries - for both legacy and current code (some of the legacy libraries are no longer available as well)
    We would like to migrate this to a newer versioning system as we will be adding more developers and expanding the reach to include remote programmers. When migrating, how do you structure it? Do you just perform a dump of all the data and then import it into the new system, or do you segregate by type before you bring it into the new system? Do you set up a separate area for libraries, or keep them with the relevant packages? Do you separate by language, by system, or both? A general outline and methodology is fine; it doesn't need to be broken down to the individual program level.

    Read the article

  • RTF template migration in BIP

    - by Manoj Madhusoodanan
    When you create a BI Publisher template through the application, the RTF template information is stored in the XDO_LOBS table. The column LOB_CODE stores the template short code, i.e. the link between the template and the lob. When you migrate the template through java oracle.apps.xdo.oa.util.XDOLoader, make sure the rtf file name and the template short code are the same; otherwise the rtf will not get attached. Eg:
    Source instance
    Template short code: XXCUST_TEMPLATE
    RTF name: XXCUST_TEMPLATE_1.rtf
    When you migrate the above details through XDOLoader, the rtf will not get attached to the template in the destination instance. So make sure the RTF name is XXCUST_TEMPLATE.rtf.
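
    A trivial pre-flight check along these lines can save a silent failure. A sketch in Python; the short code and file name come from the example above:

        from pathlib import Path

        def check_rtf_matches_short_code(rtf_file, lob_code):
            """Fail before migration if XDOLoader would not attach the RTF."""
            stem = Path(rtf_file).stem  # file name without the .rtf extension
            if stem != lob_code:
                raise ValueError(
                    f"Rename {rtf_file} to {lob_code}.rtf: the file name "
                    f"must match the template short code"
                )

        # Raises for the example above: XXCUST_TEMPLATE_1 != XXCUST_TEMPLATE
        check_rtf_matches_short_code("XXCUST_TEMPLATE_1.rtf", "XXCUST_TEMPLATE")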

    Read the article

  • Bzr to git migration

    - by Sardathrion
    I am planning to do two things on several large (multi-gigabyte) and old (several years) repositories:
    1. Move from bzr to git without losing the commit history.
    2. Restructure all the repositories, either in bzr or in git. This will involve moving files/directories from one repository to another with their change history.
    Doing both at once would be foolish (I think!) but I am not sure which should be done first. Any suggestions? Anything I should watch out for when migrating/restructuring?
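
    For the history-preserving move (question 1), the usual route is bzr's fast-export piped into git fast-import. A minimal sketch in Python, assuming the bzr-fastimport plugin and git are installed and on the PATH; the repository paths are hypothetical:

        import subprocess

        BZR_BRANCH = "/repos/old-bzr/trunk"  # hypothetical source branch
        GIT_REPO = "/repos/new-git"          # hypothetical target repository

        # Create the empty git repository that will receive the history.
        subprocess.run(["git", "init", GIT_REPO], check=True)

        # Stream the full bzr history straight into git's importer.
        exporter = subprocess.Popen(
            ["bzr", "fast-export", "--plain", BZR_BRANCH],
            stdout=subprocess.PIPE,
        )
        subprocess.run(["git", "fast-import"], stdin=exporter.stdout,
                       cwd=GIT_REPO, check=True)
        exporter.stdout.close()
        if exporter.wait() != 0:
            raise RuntimeError("bzr fast-export failed")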

    Read the article

  • MySQL maintenance - how to clear the buffer?

    - by Dougal
    We have a server running our web app (PHP / MySQL) which is SLOW. My predecessor says that: "We use to do database maintenance, which use to clear the buffer, cached and unwanted variables." And I wonder what on earth he means by that statement. Does he mean a simple OPTIMIZE of the tables? Or the query cache? I understand MySQL but don't really know what he is describing. I would appreciate any pointers. Thanks.
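
    For what it's worth, "maintenance" in that spirit usually boils down to a handful of standard statements: OPTIMIZE TABLE to defragment tables and refresh index statistics, plus FLUSH QUERY CACHE / RESET QUERY CACHE on MySQL 5.x, where the query cache still exists (it was removed in MySQL 8.0). A hedged sketch using mysql-connector-python; the credentials and table names are placeholders:

        import mysql.connector

        # Placeholder credentials; point these at the slow server.
        conn = mysql.connector.connect(host="localhost", user="admin",
                                       password="secret", database="webapp")
        cur = conn.cursor()

        for table in ("users", "posts", "sessions"):  # hypothetical tables
            cur.execute(f"OPTIMIZE TABLE {table}")
            print(cur.fetchall())                     # one status row per table

        # Query-cache housekeeping (MySQL 5.x only).
        cur.execute("FLUSH QUERY CACHE")  # defragment the cache
        cur.execute("RESET QUERY CACHE")  # empty it entirely

        cur.close()
        conn.close()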

    Read the article

  • Watch the recording of the webinar on Oracle E-Business Suite R12 migration and upgrades

    - by Valérie De Montvallon
    Did you miss the webinar by the Panaya and Logica experts on Oracle EBS upgrades? We thought of you, and we are offering the chance to watch the recording of the event. Thanks to our speakers - Patrice Bugeaud, Oracle Practice Director at Logica; Cyril Vinger, Solution Manager at Logica Business Consulting; David Balouka, Regional Business Manager at Panaya; and Zoharit Ben-Zvi, Oracle EBS Solutions Consultant at Panaya - you will discover:
    - How to reduce risk and effort during your migrations
    - Which tools are at your disposal
    - What benefits you can expect from optimized planning and project management
    - How to reduce and prioritize your code corrections
    - How to create and run EBS test scripts easily and quickly
    Watch it now!

    Read the article

  • Website migration is not working for all computers

    - by Shadowizoo
    We have 2 servers on the same network, Server-A and Server-B. On Server-A (Windows Server 2003) we have IIS 5.2, and our website was hosted on it until a few months ago (about 7-8 months). We bought a new server, Server-B (Windows Server 2008) with IIS 7.5, and copied our old website to this new machine. On our router we forward port 80 to Server-B. Server-A is still on because we need to access some old data through the old website; I would like to access it via its internal IP (192.168.1.xxx/mywebsite). On my Windows 7 computer, if I type www.example.com or example.com (without www.), I'm redirected to Server-B and I can see our new interface. On some Windows Vista computers, example.com (without www.) redirects to Server-B, but if I type www.example.com, I'm still on Server-A. In our website code (on Server-B) we sometimes redirect with a "www.", so this causes errors because we end up requesting a web page that exists on Server-B but not on Server-A, given that www.example.com still reaches Server-A on those machines. I compared 2 computers with Vista Home on them and the Internet Options look the same. I cannot figure out why this is happening.

    Read the article

  • Is there an industry standard for a system's registered-user permissions in terms of database model?

    - by EASI
    I have developed many applications with registered-user access for my enterprise clients. Over the years I have changed my way of doing it, especially because I have used many programming languages and database types along the way. Some schemes were not as simple as view, create, and/or edit permissions for each module in the application; others were as light as "can or cannot access" a certain module. But now that I am developing a very extensive application with many modules and many kinds of users accessing them, I was wondering if there is a standard model for doing this, because I can already see that neither the simple nor the light way will be enough.
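
    There is no single industry-standard schema, but role-based access control (RBAC, formalized in ANSI/INCITS 359) is the closest thing to one, and its relational shape is fairly canonical: users, roles, permissions, and two join tables. A minimal sketch using Python's built-in sqlite3 module; the table and column names are illustrative, not prescriptive:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users (
                user_id  INTEGER PRIMARY KEY,
                username TEXT NOT NULL UNIQUE
            );
            CREATE TABLE roles (
                role_id INTEGER PRIMARY KEY,
                name    TEXT NOT NULL UNIQUE  -- e.g. 'accounting_clerk'
            );
            CREATE TABLE permissions (
                perm_id INTEGER PRIMARY KEY,
                module  TEXT NOT NULL,        -- e.g. 'invoices'
                action  TEXT NOT NULL,        -- e.g. 'view', 'create', 'edit'
                UNIQUE (module, action)
            );
            -- A user holds roles; a role bundles permissions.
            CREATE TABLE user_roles (
                user_id INTEGER REFERENCES users(user_id),
                role_id INTEGER REFERENCES roles(role_id),
                PRIMARY KEY (user_id, role_id)
            );
            CREATE TABLE role_permissions (
                role_id INTEGER REFERENCES roles(role_id),
                perm_id INTEGER REFERENCES permissions(perm_id),
                PRIMARY KEY (role_id, perm_id)
            );
        """)

        # The access check then collapses to a single join.
        CHECK = """
            SELECT 1 FROM user_roles ur
            JOIN role_permissions rp ON rp.role_id = ur.role_id
            JOIN permissions p       ON p.perm_id  = rp.perm_id
            WHERE ur.user_id = ? AND p.module = ? AND p.action = ?
        """

        def can(user_id, module, action):
            return conn.execute(CHECK, (user_id, module, action)).fetchone() is not None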

    Read the article

  • New technical whitepaper on Database-as-a-Service

    - by Javier Puerta
    High Availability Best Practices for Database Consolidation - The Foundation for Database-as-a-Service. An Oracle White Paper, April 2014. This paper provides MAA best practices for database consolidation using Oracle Multitenant. It describes the standard HA architectures that are the foundation for DBaaS. It is most appropriate for a technical audience: architects, directors of IT, and database administrators responsible for the consolidation and migration of traditional database deployments to DBaaS. The recommended best practices are equally relevant to any platform supported by Oracle Database, except where explicitly noted as an optimization or an example that applies only to Oracle Engineered Systems.

    Read the article

  • Which database to use with Quickly and PyGTK

    - by usher
    I'm writing an application using Quickly, PyGTK, and Glade. The application should have a database connection (such as MySQL) for reading and writing data on a local or remote machine/server. However, while my machine has MySQL installed, when the app is released it will be installed on other Ubuntu machines, which may not have MySQL, and certainly not the same database with the required name and structure.... So my questions are:
    1. Is MySQL a good choice for the database? 1.2 If not, what is?
    2. Is it possible to embed MySQL or another database program during the installation from the Ubuntu Software Center? 2.2 If so: how (any tutorial?)
    3. Where should I store secure data, outside of MySQL (or whatever), for connecting to the database every time a user launches the application?
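
    Worth knowing in this context: Python ships with an embedded, zero-installation database in the standard library, sqlite3, which needs no server and keeps everything in a single file, so there is nothing extra for the Software Center to install. A small sketch; the file name and schema are made up for illustration:

        import sqlite3

        # The whole database lives in one file next to the app; no server needed.
        conn = sqlite3.connect("app_data.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS notes (
                            id   INTEGER PRIMARY KEY,
                            body TEXT NOT NULL)""")
        conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
        conn.commit()

        for row in conn.execute("SELECT id, body FROM notes"):
            print(row)
        conn.close()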

    Read the article

  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with a typical CRUD web usage pattern, similar to blogs or forums, where users create/update content and other users read it. It seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that will later be rolled back. In a CRUD blog/forum usage pattern, will there ever be any rollback? And even if there is, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but in the future, if I want to use replication (row-based, not statement-based), will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?
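
    For reference, the isolation level in question can be set per session in MySQL, so it can be enabled just for read-mostly connections and left at the default elsewhere. A minimal sketch with mysql-connector-python; the connection details and table are placeholders:

        import mysql.connector

        conn = mysql.connector.connect(host="localhost", user="webapp",
                                       password="secret", database="forum")
        cur = conn.cursor()

        # Only this session does dirty reads; other connections are unaffected.
        cur.execute("SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED")

        # Readers may now see rows from transactions that are not yet committed
        # (and that could still be rolled back).
        cur.execute("SELECT id, title FROM posts ORDER BY id DESC LIMIT 10")
        for post_id, title in cur.fetchall():
            print(post_id, title)

        cur.close()
        conn.close()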

    Read the article

  • Glassfish to WebLogic Migration

    - by JuergenKress
    At the WebLogic Community Workspace (WebLogic Community membership required) you can find Migrating Apps from GlassFish to Oracle WehbLogic Server with Ease.pptx. For regular information, become a member of the WebLogic Partner Community: please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Glassfish, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Is it better to use a database or a data structure for a network stack?

    - by poly
    I've built a multithreaded messaging application in C, and I'm currently using a MySQL MEMORY table to save session IDs, but I'm not sure whether this was a good decision. It works like this: the application sends a message and saves the source session ID in the MySQL table. When the application gets a success response it removes the session ID from the table; if it receives an error response, it keeps the ID to be retried later. I built it this way so that I don't have to build a data structure myself, and the database provides flexibility when it comes to querying. Do you think this is appropriate, or do I need to use something else? Please note that the application is expected to handle a large number of transactions per second.
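
    For comparison, the in-process alternative is just a lock-protected hash table keyed by session ID. The application is in C, but the shape of the idea fits in a few lines of Python (the names are illustrative):

        import threading

        class PendingSessions:
            """In-flight messages keyed by session ID, shared across threads."""

            def __init__(self):
                self._lock = threading.Lock()
                self._pending = {}  # session_id -> message payload

            def sent(self, session_id, payload):
                with self._lock:
                    self._pending[session_id] = payload

            def acked(self, session_id):
                # Success response: forget the session.
                with self._lock:
                    self._pending.pop(session_id, None)

            def failed(self, session_id):
                # Error response: keep the entry so a retry pass can find it.
                with self._lock:
                    return self._pending.get(session_id)

            def to_retry(self):
                with self._lock:
                    return list(self._pending.items())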

    Read the article

  • Designing a user-defined list to be stored in a relational database - Should I include user index?

    - by Zaemz
    By index, I mean that as the user creates the list, each item receives an integer index for its place in that particular list. Since there will be a table of ListItems, I'd prefer to avoid using the name "Index" for the field. Then I was thinking: should I even include the list index in the database? I figured I would, so that the list can be recreated in the same order every time. Or I could order the list for the user based on its actual primary key, since the list items are created in succession anyway... What should I do?
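
    A common way to store this is an explicit position column on the list-items table with a uniqueness constraint per list, which keeps the user's ordering independent of the surrogate primary key. A sketch using Python's sqlite3 module; the names are illustrative:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE lists (
                list_id INTEGER PRIMARY KEY,
                name    TEXT NOT NULL
            );
            CREATE TABLE list_items (
                item_id  INTEGER PRIMARY KEY,
                list_id  INTEGER NOT NULL REFERENCES lists(list_id),
                position INTEGER NOT NULL,  -- the user-visible ordering
                body     TEXT NOT NULL,
                UNIQUE (list_id, position)  -- no two items share a slot
            );
        """)

        # Append to the end of a list: next position = current max + 1.
        def append_item(list_id, body):
            conn.execute(
                """INSERT INTO list_items (list_id, position, body)
                   VALUES (?, COALESCE((SELECT MAX(position) + 1
                                        FROM list_items
                                        WHERE list_id = ?), 0), ?)""",
                (list_id, list_id, body),
            )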

    Read the article

  • Is there any way I can create an intranet request form and have it be stored in a database? [on hold]

    - by eternalearth888
    I am trying to create a form for my company's intranet site. The idea is as follows:
    1. An employee wants to make a purchase, so they go to the appropriate page in the intranet.
    2. They fill out the form on the intranet page.
    3. They click the email button.
    4. The data in the form is saved in a database, and an email is sent to me stating that a purchase order request form has been filled out.
    I am not exactly sure how to go about this. Part of me wants to create it in a Data Access Page, but I am not sure that's correct. If there is no one here who can help, is there anyone who can direct me to someone/something that can?

    Read the article
