Search Results

Search found 1693 results on 68 pages for 'sqlalchemy migrate'.

Page 57/68 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. These are the steps I was planning to perform:

    1. Shut down both DCs.
    2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters through IP address instead of hostname).
    3. Power them up at the new datacentre.
    4. Configure network interface/DNS/DHCP for both DCs in the new datacentre.

    I chose Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup & Replication. It sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow it registered only 1KB/s, as opposed to the normal 1MB/s (or more). When that transfer attempt failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this - VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual occurrences. The moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to the vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.
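
    A sanity check worth adding around steps 1 and 3 (a hedged sketch - these are the standard Windows AD diagnostic tools, though the exact output to expect depends on your forest) is to capture replication health while both DCs are still up, and compare it after power-up at the new datacentre:

        rem Run on each DC before the shutdown, and again after power-up
        dcdiag /v > C:\move-dcdiag.txt
        rem Show inbound replication partners and any failures
        repadmin /showrepl > C:\move-showrepl.txt
        rem One-page summary of replication health across the forest
        repadmin /replsummary > C:\move-replsummary.txt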

    Read the article

  • nginx giving 404 when accessing php from alias directory

    - by code90
    I am trying to migrate from Apache to nginx. The PHP sites that I am hosting need to access a shared library, which turns out to be an alias directory. Below is the configuration I came up with. HTML files work fine, but PHP files give a 404. I have read through and tried most (if not all) of the answers to similar questions without any success. Any hint on what might be causing the issue in my case?

        location /wtlib/ {
            alias /var/www/shared/wtlib_4/;
            index index.php;
        }
        location ~ /wtlib/.*\.php$ {
            alias /var/www/shared/wtlib_4/;
            try_files $uri =404;
            if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass 127.0.0.1:9013;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /var/www/shared/wtlib_4$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    Thanks all! Update: The following seems to be working fine:

        location /wtlib/ {
            alias /usr/share/php/wtlib_4/;
            location ~* .*\.php$ {
                try_files $uri @php_wtlib;
            }
            location ~* \.(html|htm|js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }
        location @php_wtlib {
            if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    Read the article

  • Reusing slot numbers in Linux software RAID arrays

    - by thkala
    When a hard disk drive in one of my Linux machines failed, I took the opportunity to migrate from RAID5 to a 6-disk software RAID6 array. At the time of the migration I did not have all 6 drives - more specifically, the fourth and fifth (slots 3 and 4) drives were already in use in the originating array, so I created the RAID6 array with a couple of missing devices. I now need to add those drives in those empty slots. Using mdadm --add does result in a proper RAID6 configuration, with one glitch - the new drives are placed in new slots, which results in this /proc/mdstat snippet:

        ...
        md0 : active raid6 sde1[7] sdd1[6] sda1[0] sdf1[5] sdc1[2] sdb1[1]
              25185536 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
        ...

    mdadm -E verifies that the actual slot numbers in the device superblocks are correct, yet the numbers shown in /proc/mdstat are still weird. I would like to fix this glitch, both to satisfy my inner perfectionist and to avoid any potential sources of future confusion in a crisis. Is there a way to specify which slot a new device should occupy in a RAID array? UPDATE: I have verified that the slot number persists in the component device superblock. For the version 1.0 superblocks that I am using, that would be the dev_number field as defined in include/linux/raid/md_p.h of the Linux kernel source. I am now considering direct modification of said field to change the slot number - I don't suppose there is some standard way to manipulate the RAID superblock?
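
    For reference, a quick way to compare the kernel's runtime view with the persistent superblock view (a sketch using standard mdadm inspection commands; the device names are taken from the snippet above):

        # Runtime view: the bracketed numbers are descriptor positions, not slots
        cat /proc/mdstat
        # Array view: the RaidDevice column shows each member's actual slot
        mdadm --detail /dev/md0
        # Superblock view: "Device Role : Active device N" is the persistent slot
        mdadm -E /dev/sde1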

    Read the article

  • How do I cancel a Windows Server 2003 repair install?

    - by Kilgore2k
    System: Windows 2003 Server Enterprise. Scenario: the NTDS database is corrupt and all attempts to fix it with esentutl fail. I ran chkdsk, which seemed to repair a disk error and give access to the ntds.dit file, but esentutl still fails. (I attached the drive to a different server to run esentutl.) The error:

        Access to source database '[path to copy of]/ntds.dit' failed with Jet error -1022.
        Operation terminated with error -1022 (JET_errDiskIO, Disk IO error) after 0.170 seconds.

    This error occurs on any disk I copy the files to, including the original location in C:\WINDOWS\NTDS\. Now enter the "Stupid!" and "What was I thinking!?" part (must be the late hour...). Stupid: no updated backup - after restoring an old backup I get a network password error from lsass. What was I thinking!?: I started the repair install from the original CD, but the install fails since AD fails to start. Now I can't boot into any mode (safe mode, AD restore, etc.) nor complete the repair install. I would really like to avoid a fresh install, since I have the Exchange server on this DC and would rather migrate to a new server than have to start from scratch. Thanks!
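
    For anyone hitting the same wall, the usual offline sequence looks like the sketch below (standard Jet/AD maintenance commands; this is hedged - a -1022 is a disk I/O error, so if the file itself is unreadable these will keep failing no matter where it is copied):

        rem Integrity check of the offline database
        esentutl /g "C:\WINDOWS\NTDS\ntds.dit"
        rem Hard repair - last resort, discards damaged pages
        esentutl /p "C:\WINDOWS\NTDS\ntds.dit"
        rem Then, from Directory Services Restore Mode:
        ntdsutil "semantic database analysis" "go fixup" quit quit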

    Read the article

  • Migrating WebLogic 10.3.0 to new host. Slow managed server startup times

    - by wadevondoom
    We are migrating our Blue Martini Commerce application (only supported on WebLogic 10.3.0) to a new host (Red Hat 6.3 on a VMware ESX VM). We are seeing extremely slow startup times for our managed server(s) - basically 20x slower than our current production. For instance, the Publish managed server takes ~30-45 seconds in current production, and in the new environment it takes ~10 minutes. The setup uses the same domain structure and JVM as the current production environment, and the same setup files are used. We use jdk1.6.0_33 on 64-bit architecture. We used the generic 64-bit WebLogic installer and used the pack/unpack utilities to migrate the domain. The JAVA_OPTS to start this server are: "-d64 -Xms256m -Xmx512m -XX:PermSize=48m -XX:MaxPermSize=256m". The sysadmins have checked /etc/sysctl.conf and /etc/limits.conf to ensure we were not hitting some kind of process limit. As I am not sure what this managed server does from a Blue Martini perspective during startup, I also had the DBA check to ensure that Oracle RAC (11.2.0.3) wasn't hitting some kind of process limit and that there wasn't a TNS listener issue. The new host is quite a bit stricter with its server lockdowns, so there are a few differences:

    - Red Hat 6.3 in the new environment, RHEL 5.7 in current
    - SELinux is targeted in the new environment and disabled in current
    - VM in the new environment and dedicated hardware in current
    - iptables disabled in current; it was enabled in new prod, but I had them disable it just in case

    I apologize for not being more specific; I am mostly hoping for some tips. I do not have the typical root access I would normally have in this environment, so I am just hoping for a path forward. I did a few 'kill -3's to see if there are blocked threads, and I got nada. The service works for all intents and purposes; it is just painfully slow. Thank you all in advance for reading, and best regards. Wade
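
    One cause worth ruling out, purely an assumption on my part since it is not confirmed in the question: on freshly provisioned VMs the JVM can block on /dev/random while seeding SecureRandom, which produces exactly this kind of multi-minute startup stall. Pointing the JDK at the non-blocking device (e.g. in setDomainEnv.sh) is a quick test:

        # Add to the managed server's JAVA_OPTS; the extra /./ works around
        # older JDKs silently mapping a plain /dev/urandom back to /dev/random
        JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"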

    Read the article

  • What are good systems for managing PHP/MySQL infrastructure?

    - by sbrattla
    I work at a company which is about to migrate most applications from in-house custom-built Java/Tomcat applications to Drupal. Due to company policies, applications and websites need to run on in-house servers. This means that we need infrastructure for Drupal (PHP/MySQL) applications. This must have been solved a million times already; I believe this is what web-hosting companies do every day. Even though we work on a much smaller scale than web-hosting companies, I assume it would make sense to look at the task as if we're going to run an internal small-scale web-hosting company. This means that the guys in IT operations could be "responsible" for "offering" web hosting, while developers could use these "services". We have three environments: dev(elopment), test, and prod(uction). It would make sense for developers to be able to log in to a system and create/edit/delete dev and test sites as they'd like. Production sites should be available through the same system, but only to IT ops. We need to work with clusters of web servers, meaning that an administration system should be capable of creating/editing/deleting sites across multiple servers. I know there's no "this is it" answer to my question, but what would be a good place to start to get going with this? Apart from the actual hardware, what would be a good administration system for this?

    Read the article

  • Sharing music on NAS with Zune and iPod?

    - by osij2is
    After being a long-time iPod owner, I'm switching to the new Zune with its subscription model. I haven't bought a Zune yet, but I'm planning on doing so within the next month or so. I have approximately 40GB worth of music, and my girlfriend's iPod music library is around 30GB. I've been trying to figure out how to migrate all our music off of our laptops/desktops and centralize everything on my NAS. Sharing iPod music isn't too bad: sharing from one machine to all is fairly easy within the iTunes player. As far as storing all the music on a NAS, again, iPods aren't too bad, and I imagine other systems aren't difficult. But I'm really new to the Zune, and I'm beginning to run into some issues. My questions are: Is it possible to store all music from our iPods and Zune subscriptions and share music between the iPod/Zune within the same file share on my NAS? I'm sure it's possible to store music on a share, but I'm not sure how iTunes and the Zune software differ. Is there 3rd-party software, maybe something like DoubleTwist, that can sync from a NAS to multiple desktops/laptops? I've never used DoubleTwist, but it's something I found that looks close to being what I need. I've never quite done this myself, so I'm trying to find a solution that can: a) store music on a network share; b) sync between different devices (Zune/iPod) seamlessly.

    Read the article

  • windows: force user to use specific network adapter

    - by Chad
    I'm looking for a configuration/hack to force a particular application, or all traffic from a particular user, to use a specific NIC. I have a legacy client/server app that has a "security feature" that limits connections based on IP address. I'm trying to find a way to migrate this app to a terminal server environment. The simple solution is for the development team to update the code in the application; however, in this case that's not an option. I was thinking I might be able to install a VMware NIC for each user on the terminal server and do some type of scripting to force that user account to use a specific NIC. Anybody have any ideas on this? EDIT 1: I think I have a hack to work around my specific problem; however, I'd love to hear of a more elegant solution. I got lucky in that the software reads the server IP address out of a config file. So I'm going to have to make a config file for each user and a custom program-files copy for each user, then add a VMware NIC for each user and make each server IP address reside on a different subnet. That will force the traffic for a particular user to a particular IP address; however, it's really messy, and all the VM NICs will slow down the terminal server. I'll set up a proof of concept Monday and let the group know how it affects performance.

    Read the article

  • 404 with serving static files in a custom nginx configuration

    - by code90
    In my nginx configuration, I have the following:

        location /admin/ {
            alias /usr/share/php/wtlib_4/apps/admin/;
            location ~* .*\.php$ {
                try_files $uri $uri/ @php_admin;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }
        location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
            alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
        }
        location ~ ^/admin/modules/([^/]+)(.*)$ {
            try_files $uri @php_admin_modules;
        }
        location @php_admin {
            if ($fastcgi_script_name ~ /admin(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/apps/admin$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }
        location @php_admin_modules {
            if ($fastcgi_script_name ~ /admin/modules/([^/]+)(.*)$) {
                set $byr_module $1;
                set $byr_rest $2;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/modules/$byr_module/admin$byr_rest;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    The following requested URL ends up with a 404:

        http://www.{domainname}.com/admin/modules/cms/styles/cms.css

    This is the error log:

        [error] 19551#0: *28 open() "/usr/share/php/wtlib_4/apps/admin/modules/cms/styles/cms.css" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: {domainname}.com, request: "GET /admin/modules/cms/styles/cms.css HTTP/1.1", host: "www.{domainname}.com"

    The following URLs work fine:

        http://www.{domainname}.com/admin/modules/store/?a=manage
        http://www.{domainname}.com/admin/modules/cms/?a=cms.load

    Can anyone see what the problem could be? Thanks. PS: I am trying to migrate existing sites from Apache to nginx.

    Read the article

  • As an admin, what tools do you use to log what you do to your boxes?

    - by Jerry
    I am more of a Linux applications developer than an admin. Over time, I've built servers and maintained them, sometimes to offer services, mostly just to develop the applications I work on. Way back when, I would create a file in my account to keep notes on what I did on each machine, so that I could replicate that when I migrated to other machines. Nowadays, I set up a private Trac installation, install its blog plugin, and then use that to make notes of everything I install and most commands that I run, as well as their output. This provides me a combination wiki and blog that I find very useful as a "captain's log". I do this mostly so that when I migrate to a new clean machine, I have a much easier time bringing it up. And yet, I am always amazed when I see others just install this, delete that, run this, set up this config, ... without seeming to use any way to actually note what they are doing. What do YOU do, and what tools are available? I am especially interested in the transition between maintaining a few machines for a few people and maintaining several to dozens of machines providing a real service. What are the best practices, and where can I find good resources? Thanks!
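
    For the terminal side of this, one low-tech complement (a sketch; the log path is just an example) is the classic script(1) utility, which appends a full transcript of a session - commands and output - to a file:

        # Record everything typed and printed during this session to a dated log
        script -a "$HOME/admin-logs/$(hostname)-$(date +%F).log"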

    Read the article

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    Hi,

    What I've tried: I have two email storage architectures, old and new.

    Old:
    - courier-imapds on several (18+) 1TB-storage servers
    - if one of them shows signs of running out of disk space, we migrate a few email accounts to another server
    - the servers don't have replicas; no backups either

    New:
    - dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs
    - we store fresh mail on the SSDs and run a doveadm purge to move mail older than a day to the SATA disks
    - there is an identical server which has a max-15-min-old rsync backup from the primary server
    - higher-ups/management wanted to pack in as much storage as possible per server in order to minimise the cost of SSDs per server
    - the rsync'ing is done because GlusterFS wasn't replicating well under that high small/random IO
    - scaling out was expected to be done by provisioning another pair of such huge servers
    - on facing disk-crunch issues like in the old architecture, manual moving of email accounts would be done

    Concerns/doubts:
    - I'm not convinced the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case.
    - The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access, and if one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair.
    - In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.

    Questions:
    - What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)?
    - Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"?
    - Does one have to re-write (parts of) it to work with an object storage of some sort, such as OpenStack object storage?
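
    For what it's worth, the max-15-min-old backup described above is typically just a cron'd rsync along these lines (a sketch - the paths and host are placeholders, and -H matters if the mail store uses hardlinks):

        # Mirror the mail spool to the standby, preserving hardlinks, owners, and perms
        rsync -aH --delete /var/vmail/ standby:/var/vmail/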

    Read the article

  • How to do a Windows 7 Image restore to an external drive?

    - by Vaccano
    I have a system that I have made a Windows 7 image backup of. I would like to migrate that image to a different hard drive. Is there a way to restore the image to an externally connected hard drive? For example, I have 3 hard drives:

    1. The first is in the source machine (the one I want to copy).
    2. The second is in a machine that I want to do the work.
    3. The third is not in a machine. It is the target that I want to overwrite with the contents of the first.

    I boot up the 2nd machine and connect the 3rd hard drive externally (using some cool cables I have). I then use some cool feature of Windows 7 to replace what is on the 3rd hard drive with the Windows 7 image of my 1st machine (which is on my networked backup server). I need to know what the above-mentioned "cool feature of Windows 7" is, if there is one, and how to use it. Any ideas? Note that I very much do not want it to overwrite what is on the 2nd machine/hard drive.
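
    On the "cool feature" question: the built-in path (an assumption that the image is reachable over the network from the 2nd machine; exact menu names vary by edition) is System Image Recovery in the Windows 7 setup/repair environment. From its command prompt you can first confirm the backup server's images are visible:

        rem From the recovery environment's command prompt
        wbadmin get versions -backupTarget:\\backupserver\backups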

    Read the article

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution to keep my group's shared passwords, projects, work, etc. safe. Our network has Active Directory with public/groups/users and NTFS permissions, under a Windows Server 2003 which will soon migrate to Windows Server 2008 R2. Our IT crowd is small, consisting of 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and (a lot of) tech supporters; everyone has local admin rights. Those 2 network admins weren't the ones who set the network up; they just took over recently when the previous ones quit. We usually find them laughing at private content from users stored in the groups AD, sabotaging documents that don't match their personal tastes and, finally, this week we found out they stole a project we (developers and DBAs) were finishing and, long before, had presented it to the CEO as theirs without us knowing. I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the other hundreds of folders and files, which included the stolen project, because the zipping process would take too long for every update. We also tried an encrypted Subversion repository under SSL, but there are many dummies (~38 atm) involved in the projects who have trouble using TortoiseSVN when contributing, and very often we had to fix messed-up updates. Well, I think these two give an idea of what we've been trying to achieve. So, is there a practical "individual" protection for our extensive data, or can my hope already be euthanized? P.S.: Seriously, at the place where I live/work, political corruption has gone the wildest, so options based on reporting them are likely impracticable. Yet both netadmins have a strong "political bond" with the CEO and the President, hence their lousy behavior and our failed delation attempts.
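
    On the technical side, one mechanism worth testing is per-folder NTFS encryption (EFS), sketched below. Big caveat, since it is the crux here: a domain admin who controls the recovery-agent policy can still decrypt, so audit the recovery-agent configuration before trusting it.

        rem Encrypt a folder, and everything later added to it, under the current user's EFS key
        cipher /e /s:D:\TeamProjects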

    Read the article

  • Windows XP blinking underscore after BIOS

    - by heyjoe
    So this is an older PC I have to repair for a friend. The PC has an HDD of about 60-something GB and runs Windows XP. On, let's say, 60-70% of boots it hangs, showing only a blinking underscore after the BIOS screen; the rest of the time it boots fine, or the computer shuts down on the XP loading screen. Sometimes if you leave it alone while the underscore is blinking, it will boot after a while, like a few minutes; sometimes it won't boot at all, even if you give it more time, like one hour. When it boots successfully, the PC seems to work fine. I think it's a bad hard disk, and I'm about to suggest buying a new one and switching it, but I don't have enough experience, and I would hate making him buy a new HDD and not solving the problem. Anyone have any tips? I know there are other topics about blinking underscores or cursors while XP is booting, but the issue of the PC shutting itself down or only sometimes booting really freaks me out. I can't format everything and reinstall until about 10 days from now, because the dude has some program for his business on this PC and I have to migrate it when the next computer arrives; he needs to use it until then. So please advise, thx.
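
    Before he buys anything, it is worth letting the disk testify for itself (a sketch; it assumes you can boot a Linux live CD on the machine and that the drive shows up as /dev/sda):

        # Overall health verdict from the drive's own SMART data
        smartctl -H /dev/sda
        # Full attributes - watch Reallocated_Sector_Ct and Current_Pending_Sector
        smartctl -a /dev/sda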

    Read the article

  • Fedora Core 6 Migration

    - by Matthew Sprankle
    I am at a loss as to what I should do for this server. I need it to run PHP 5.3 and a corresponding version of MySQL. I received a client today through work that is using Fedora Core 6, running 10 very small websites on a very hodgepodge setup. My original idea was to just upgrade to PHP 5.3. I have yum (version 3.0.8) reconfigured for the Fedora archive; the latest version of PHP it allows is 5.1.8. I am still relatively new to server setups and am nervous about wiping their server to upgrade it. Since it is about 6-8 years old, I'm not sure if it will even support the newest version of Fedora. The server specs are: Parallels Plesk Panel version 9.5.4; operating system Linux 2.6.9-023stab048.4-smp; CPU GenuineIntel, Intel(R) Xeon(R) CPU E5335 @ 2.00GHz (10 GB disk space and 1 GB of memory). I use Fedora for my personal server, so I was a little familiar with it, but I haven't done anything too extravagant. Is there a way I can escape this nightmare by installing PHP 5.3, or do I need to migrate these sites to a new server?
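
    If staying on the old box is unavoidable, building PHP 5.3 from source into its own prefix, beside the stock packages, is one escape hatch. This is a rough sketch only: the configure flags are a minimal guess, the apxs path is an assumption, and a real Plesk/Apache setup will need considerably more options.

        # Build PHP 5.3 into /opt/php53 so the distro PHP stays untouched
        ./configure --prefix=/opt/php53 --with-mysql --with-apxs2=/usr/sbin/apxs
        make
        make install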

    Read the article

  • Exchange 2007 Backup - For a newbie

    - by mew3900
    I am trying to set up an Exchange 2007 backup solution. After doing a lot of reading: on Server 2008, unless you are willing to spend a great deal on a 3rd-party solution, Microsoft has left you pretty stuck! Essentially what I have been asked to do is perform an offline file backup of our current Exchange server and replicate this onto a new 2nd server. The reasoning behind this is that we need to upgrade our current installation of Exchange 2007 to SP2 so that the Exchange plug-in for Windows Server Backup will be available to us. From this I can then actually take an Exchange-aware backup weekly and take it off site. Ideally we can then also migrate to this new server and keep the old one as a failover. Is there a way I can copy the required files across to a second server? Although I doubt very much it is that simple. I may be barking up completely the wrong tree; however, I have very limited knowledge of Exchange, and any help and advice on how I would resolve this would be much appreciated. Thanks in advance.
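
    Once SP2 and the plug-in are in place, the weekly Exchange-aware backup boils down to a VSS full backup of the volumes holding the databases and logs (a sketch - the drive letters and target share are placeholders):

        rem VSS full backup: truncates Exchange logs once the backup completes
        wbadmin start backup -backupTarget:\\backupserver\exchbackup -include:D:,E: -vssFull -quiet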

    Read the article

  • Can't login after upgrading to Windows 8.1

    - by flatline
    This afternoon, I upgraded my work laptop from Windows 8 to Windows 8.1. I previously had a local account, but after the upgrade, it prompted me to enter my Windows account credentials, which I had set up beforehand at some point. I entered my password and clicked next, went through another screen or two, grew tired of the process, and clicked whatever equivalent of a "skip this step" button I was presented with. Now I can't log in - not with my (previous) local account password, and not with my Windows account password. It's a Dell with biometric identification, which I had set up previously, so I put my finger on the reader and it complained that I couldn't use that fingerprint because I had changed my password. But I hadn't wittingly changed my password at all. I assume that what happened is that, by entering my credentials, my local account was tied to the Windows account, but because I cancelled the process partway through, something went wrong and I cannot log in. A few questions:

    1. How do I log in with my Windows account credentials? Should LOCALMACHINENAME\username, which was my previous login method, still work for the Windows account? When I booted to safe mode it prompted me with WindowsAccount\myemailaddress, which allowed me to log in there, but the regular login doesn't accept the '@' symbol.
    2. Is there any way to make that account local-only again? I can't find any way of doing it.
    3. I managed to enable the local administrator account and get back into the box; failing all else, is there a quick way to migrate my old profile over to a new user?
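
    On question 3: once you are in via the built-in Administrator, a quick route (a sketch - the account name is a placeholder, and the profile migration itself is usually "log in once as the new user, then copy the old C:\Users\<name> contents across") is to create a fresh local account:

        rem Create a new local account (prompts for a password) and make it an administrator
        net user newlocal * /add
        net localgroup Administrators newlocal /add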

    Read the article

  • Swapping RAID sets in and out of the same controller

    - by hazymat
    This is a really simple question, and the answer is probably encoded in various Wikipedia articles; however, my question is reasonably specific, and I need a bulletproof answer! I'm not sure if my question pertains to hardware RAID in general, or to the specific RAID controller I'm working on. Either way, it is the Dell SAS 6/iR (an LSI SAS1068E chipset). I simply want to:

    1. Remove a set of striped (RAID 0) disks from this RAID controller in a server.
    2. Put in another set of disks and create a RAID 1 array (or create a new 'virtual disk', as they call it in the SAS 6/iR manual).
    3. Do stuff with the new RAID 1 array.
    4. Have the option of putting back the old set of disks (the RAID 0 striped ones).

    I am quite sure this is possible, but I need some form of reliable, evidence-based answer, as it's for a client of mine and I need to migrate their data safely. The question: can I actually do the above? Does the RAID configuration get stored on the disks themselves, or in the hardware controller? Is any data stored in the hardware controller? If there is any chance I cannot completely restore operation of the first set of disks I removed, then I need to know about it! The manual alludes to the answer to this question (see page 45 of this document), and talks about activating an array of disks. I just need someone to confirm I can definitely do the above. See, simple question, right? :)

    Read the article

  • Windows 7: resizing the 'Save As' window

    - by Mark Miller
    I do not know whether my question is appropriate for this forum. Apologies if it is not. I am running Windows 7 Professional with Service Pack 1 on a Dell Vostro 460 PC. I am downloading journal articles from the internet and saving them as *.pdf files. Somehow I unintentionally clicked a button that resulted in the 'Save as' window filling the entire screen of my computer, except for the toolbar at the very bottom. How can I resize the 'Save as' window so it only fills perhaps somewhere around 25% of the computer screen, or whatever the default size for that window is? I have searched the internet extensively and found one or two threads about this problem, but no specific solutions were posted there. One suggestion was to grab the bottom of the window with the mouse and scroll upward until the window was the desired size. That does not appear to be possible in my case. Another suggestion was to click on the window with the middle mouse button before attempting to resize the window, but that does not appear to help in my case either. Thank you for any help. If I should post this question in a different forum here please let me know, or kindly migrate the question to the appropriate forum. If additional information is necessary before a solution can be attempted, please let me know that as well.

    Read the article

  • Solaris OpenStack Horizon customizations

    - by GirishMoodalbail-Oracle
    In Oracle Solaris OpenStack Havana, we have customized the Horizon BUI by modifying existing dashboards and panels to reflect only those features that we support. The modifications mostly involve:

    --  disabling a widget (checkbox, button, textarea, and so on)
    --  removal of a tab from a panel
    --  removal of options from pull-down menus

    The following table lists the customizations that we have made.

    | Where | What |
    |-------+------|
    | Project => Instances => Launch Instance | Post-Creation tab is removed. |
    | Project => Instances => Actions => Edit Instance | Security Groups tab is removed. |
    | Project => Instances => Instance Name | Console tab is removed. |
    | Project => Instances => Actions | The Console, Edit Security Groups, Pause Instance, Suspend Instance, Resize Instance, Rebuild Instance, and Migrate Instance actions are removed. |
    | Project => Access and Security | Security Groups tab is removed. |
    | Project => Images and Snapshots => Images => Actions | Create Volume action is removed. |
    | Project => Networks => Create Network | Admin State is disabled and its value is always true. |
    | Project => Networks => Create Network => Subnet | Disable Gateway checkbox is disabled, and its value is always false. |
    | Project => Networks => Create Network => Subnet Detail | Allocation Pools and Host Routes text areas are disabled. |
    | Project => Networks => Network Name => Subnet => Actions | Edit Subnet action is removed. |
    | Project => Networks => Network Name => Ports => Actions | Edit Port action is removed. |
    | Admin => Instances => Actions | The Console, Pause Instance, Suspend Instance, and Migrate Instance actions are removed. |
    | Admin => Networks => Actions | Edit Network action is removed. |
    | Admin => Networks => Subnets => Actions | Edit Subnet action is removed. |
    | Admin => Networks => Ports => Actions | Edit Port action is removed. |
    | Admin => Networks => Create Network | Admin State and Shared checkboxes are disabled. Network's Admin State is always true, and Shared is always false. |
    | Admin => Networks => Network Name => Create Port | Admin State checkbox is disabled and its value is always true. |

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and Eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic's demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end-customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata's preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs.

    A word from Omnilogic: "Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition." – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL

    Challenges
    · Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine
    · Provide technical support and demonstration facilities for ISVs migrating their customers' solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment
    · Demonstrate the power of Oracle Exadata's high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes
    · Showcase Oracle Exadata's unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating risk of unplanned outages
    · Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners

    Solutions
    · Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration
    · Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, and support seminars and to showcase live demonstrations for Oracle Partners
    · Proved how Oracle Exadata's pre-engineered systems, which come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve their full performance potential immediately after go-live
    · Increased processing speeds 10-fold, with zero data loss, for a telecommunications provider's client-facing customer relationship management solution
    · Achieved performance improvements of between 6 and 100 times for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms
    · Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day's business can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata
    · Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience
    · Demonstrated that Oracle Exadata's built-in analytics, data mining, and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations
    · Showed how Oracle Exadata's columnar compression and intelligent storage architecture allow 10 times more data to be stored than on competitor platforms
    · Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads onto fewer servers, which delivers greater power efficiency and lower operating costs than competing systems from IBM and other manufacturers
    · Proved to ISVs that migrating solutions to Oracle Exadata's preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption
    · Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion
    · On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • Oracle Forms Migration to ADF - Webinar from ORACLE Partner PITSS

    - by Thomas Leopold
    Tuesday, February 22, 2011, 5:00 PM - 6:00 PM CET - Free Webinar

    Re-Engineering Legacy Oracle Forms: Migration from Forms to ADF - A Case Study

    Join Oracle's Grant Ronald and PITSS to see a software architecture comparison of Oracle Forms and ADF and a live step-by-step presentation on how to achieve a successful migration. Learn about various migration options, challenges and best practices to protect your current investment in Oracle Forms.

    "PL/SQL is without match for what it does: manipulating data in the database. If you blindly migrate all your PL/SQL to Java you will, in all probability, end up with less maintainable and less efficient code. Instead you should consider which code is best left as PL/SQL..." - Grant Ronald, "Migrating Oracle Forms to Fusion: Myth or Magic Bullet?"

    Re-engineering existing business logic is mandatory for your legacy Forms application to take advantage of new software architectures like ADF. The PITSS.CON solution combines a deep understanding of Oracle Forms and Reports application architecture with powerful re-engineering capabilities that allow the developer community to protect the investment in existing Forms applications and to concentrate on fine-tuning and customization of the modernized functionality rather than manually recreating every module and business logic from the bottom up.

    Registration: https://www2.gotomeeting.com/register/971702250

    PITSS GmbH, Königsdorferstrasse 25, D-82515 Wolfratshausen

    Do not forget to check out these free webinars in March! Registration is limited, so sign up today!

    · Thursday, March 3, 2011 - Upgrade and Modernize Your Application to Forms 11g (Registration/Information)
    · Tuesday, March 15, 2011 - Shaping the Future for your Oracle Forms Application: Forms 11g, ADF, APEX (Registration/Information)
    · Tuesday, March 29, 2011 - Oracle Forms Modernization to APEX (Registration/Information)

    Presented by: Grant Ronald, Senior Group Product Manager, Oracle; Magdalena Serban, Product Manager, PITSS

    Contact us: PITSS in Americas, +1 248.740.0935, [email protected], www.pitssamerica.com | PITSS in Europe, +49 (0) 717287 5200, [email protected], www.pitss.com

    White paper: "From Oracle Forms to Oracle ADF and JEE"

    © Copyright 2010 PITSS GmbH, Wolfratshausen, Stuttgart, München; Managing Directors: Dipl.-Ing. Andreas Gaede, Michael Kilimann, Dipl.-Ing. Dirk Fleischmann. Commercial Register: HRB 125471 at District Court Munich. All rights reserved. Any duplication or further treatment in any medium, in parts or as a whole, requires a written agreement. If you do not want to receive invitations for events, meetings and seminars from us, then please click here.

    Read the article

  • Open Your Windows - 4/May/10

    - by Claudia Costa
    This FREE technical briefing is designed to show ISVs/SIs how to leverage Oracle 11g technology, especially in the small to medium business. The briefing focuses on Oracle's 11g platform on Windows & Linux and gives a very comprehensive technical competitive overview of the products offered by Microsoft. The technical part covers integration and migration aspects of various Microsoft products such as SQL Server, .NET and Active Directory. Register today! With Oracle 11g, Oracle introduced various products (Application Express, Oracle Express Edition, ADF, BPEL) and licenses (Oracle Database Standard Edition One, Application Server Java Edition) specifically targeting the small to medium business market, to show that Oracle Database and Application Server are as easy to use and cost less than Microsoft products in terms of purchase price and ongoing support & maintenance - and even much, much less when considering the Linux platform. For those ISVs that have already adopted the Microsoft .NET framework and use SQL Server as their database layer, we will demonstrate that Oracle 11g Database is as easy as SQL Server to install, configure, and manage. In addition, their .NET application development platform does not require dramatic changes to enable it to run on the Oracle database. Besides the standard functionality, Oracle has enhanced some of the advanced features, such as interMedia, Security, and Ref Cursors, tightly integrated with the .NET framework, so that .NET developers can take full advantage of the Oracle technology without worrying about or programming the complex components.

    Objectives
    · Understand Oracle's strategy and commitment on Windows & Linux
    · Learn how to migrate from SQL Server to Oracle on Windows AND Linux
    · Understand that Oracle 11g is easy to manage and to install on Windows & Linux
    · Learn how to integrate Windows products with the Oracle 11g platform
    · Learn how Oracle products interoperate & integrate with Microsoft .NET
    · Learn how an Oracle database on Windows can easily be ported to a lower-cost Linux database platform and interoperate with a .NET application

    Prerequisites
    General operating system expertise, including MS Windows and Linux.

    Agenda
    · Welcome and intro
    · Oracle at a glance
    · Strategy: small to medium business, Microsoft and Linux
    · Oracle 11g architecture on Linux & Windows
    · Managing Oracle 11g on Linux & Windows
    · Application development
    · Migration
    · Value propositions for ISVs & wrap-up

    For more information/registration, contact: [email protected].

    Read the article

  • Building vs. Buying a Master Data Management Solution

    - by david.butler(at)oracle.com
    Many organizations prefer to build their own MDM solutions. The argument is that they know their data quality issues and their data better than anyone, plus a focused solution will cost less in the long run than a vendor-supplied general-purpose product. This is not unreasonable if you think of MDM as a point solution for a particular data quality problem, but this approach carries significant risk. We now know that organizations achieve significant competitive advantages when they deploy MDM as a strategic enterprise-wide solution, with the most common best practice being to deploy a tactical MDM solution and grow it into a full information architecture. A build-your-own approach most certainly will not scale to a larger architecture unless it is done correctly with the larger solution in mind. It is possible to build a home-grown point MDM solution in such a way that it will dovetail into broader MDM architectures. A very good place to start is to use the same basic technologies that Oracle uses to build its own MDM solutions. Start with the Oracle 11g database to create a flexible, extensible and open data model to hold the master data and all needed attributes. The Oracle database is the most flexible, highly available and scalable database system on the market. With its Real Application Clusters (RAC) it can even support the mixed OLTP and BI workloads that represent typical MDM data access profiles. Use Oracle Data Integrator (ODI) for batch data movement between applications, MDM data stores, and the BI layer. Use Oracle GoldenGate for more real-time data movement. Use Oracle's SOA Suite for application integration, with its BPEL Process Manager to orchestrate MDM connections to business processes, Identity Management for managing users, WS Manager for managing web services, Business Intelligence Enterprise Edition for analytics, and JDeveloper for creating or extending the MDM management application. Oracle utilizes these technologies to build its MDM Hubs. Customers who build their own MDM solution using these components will easily migrate to Oracle-provided MDM solutions when the home-grown solution runs out of gas. But even with a full stack of open, flexible MDM technologies, creating a robust MDM application can be a daunting task.
    For example, a basic MDM solution will need:

    · a set of data access methods that support master data as a service, as well as direct real-time access and batch loads and extracts;
    · a data migration service for initial loads and periodic updates;
    · a metadata management capability for items such as business entity matrixed relationships and hierarchies;
    · a source system management capability to fully cross-reference business objects and to satisfy seemingly conflicting data ownership requirements;
    · a data quality function that can find and eliminate duplicate data while ensuring correct data attribute survivorship;
    · a set of data quality functions that can manage structured and unstructured data;
    · a data quality interface to assist with preventing new errors from entering the system even when data entry is outside the MDM application itself;
    · a continuing data cleansing function to keep the data up to date;
    · an internal triggering mechanism to create and deploy change information to all connected systems;
    · a comprehensive role-based data security system to control and monitor data access, update rights, and maintain change history;
    · a flexible business rules engine for managing master data processes such as privacy and data movement;
    · a user interface to support casual users and data stewards;
    · a business intelligence structure to support profiling, compliance, and business performance indicators;
    · and an analytical foundation for directly analyzing master data.

    Oracle's pre-built MDM Hub solutions are full-featured 3-tier Internet applications designed to participate in the full Oracle technology stack or to run independently in other open IT SOA environments. Building MDM solutions from scratch can take years. Oracle's pre-built MDM solutions can bring quality data to the enterprise in a matter of months. But if you must build, at least build with the world's best technology stack in a way that simplifies the eventual upgrade to Oracle MDM and to the full enterprise-wide information architecture that it enables.

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    These days, data integration often becomes a key aspect of business success. For business analysts it's very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., to make correct and successful decisions. There are various data integration solutions on the market, and today I will talk about one of them - Skyvia. Skyvia is a cloud data integration service that allows integrating data in cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server and other databases and cloud CRMs. You can use Skyvia data import tools to load data from various sources to SQL Server (and SQL Azure). Skyvia supports such cloud CRMs as Salesforce and Microsoft Dynamics CRM, and such databases as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services, such as Dropbox, Box, Google Drive, or FTP servers. When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages; in SQL Server databases (and other relational databases) it creates additional tracking tables and triggers. This allows synchronizing only the changed data. Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure. It can still match the corresponding records without having to add any additional columns or change the data structure. The only requirement for synchronization is that primary keys must be autogenerated. With Skyvia it's not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures. It provides support for complex mathematical and string expressions when mapping data, using lookups, etc. You may use data splitting - loading data from a single CSV file or source table to multiple related target tables. Or you may load data from several source CSV files or tables to several related target tables. In each case Skyvia preserves data relations: it builds the corresponding relations between the target data automatically. When you often work with cloud CRM data, native CRM data reporting and analysis tools may not be enough for you, and there is a vast set of professional data analysis and reporting tools available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to an SQL Server database and apply the corresponding SQL Server tools to the data. In that case you can use Skyvia data replication tools, which allow you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; target database tables will be created automatically. Skyvia offers powerful filtering settings to replicate only the records you need. Skyvia also provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files.
    These files can either be downloaded manually or loaded to cloud file storage services or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization - just configure it once, then schedule it, and benefit from automatic data integration with Skyvia. Currently, registering for and using Skyvia is completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing

    Read the article
