Search Results

Search found 28747 results on 1150 pages for 'switch case'.


  • HP DL160 vs DL360 for virtualization

    - by Chris
    Hi, we're planning to consolidate our infrastructure (a dozen servers). We'll buy two or three identical new servers that will be set up as a XenServer pool. Load balancing and HA tools will monitor the pool and live-migrate VMs in case of failure/overload. I know that the DL100 series of HP servers is cheaper than the DL300 series (in every sense of the word). As we don't need local storage (we have a SAN) and can live with a server being temporarily down (provided that the XenServer HA tools work as advertised), what are the downsides of going with the DL100 series? Thanks, Chris

    Read the article

  • Bash - read as a fallback to $@

    - by user137369
    I have a working bash script (on OS X) that takes files and directories as input and does something like
      for inputFile in "$@"
      do
          [someStuff]
      done
    but I want to provide a "fallback": if the script is started with no arguments (double-clicked, for example), it can take input at that time by letting the user drop the files directly on the terminal (possibly through read, but that's not mandatory; I'm open to better/different solutions). I'm guessing I should use some kind of if statement, but I'm not sure how. I'd also like to avoid doubling the script's length by repeating [someStuff] for each case. Thank you.
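
    A minimal sketch of the fallback the poster asks for, assuming POSIX sh semantics: keep [someStuff] in one function, loop over "$@" when arguments exist, otherwise read a line of dropped paths (names are illustrative):

      process() {
          # [someStuff] for a single file goes here
          printf 'processing %s\n' "$1"
      }

      if [ "$#" -gt 0 ]; then
          for inputFile in "$@"; do process "$inputFile"; done
      else
          printf 'Drop files onto this window, then press Enter: '
          read -r line
          # note: paths containing spaces need extra quoting/eval handling
          for inputFile in $line; do process "$inputFile"; done
      fi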

    Read the article

  • PostgreSQL server: 10k RPM SAS or Intel 520 Series SSD drives?

    - by Vlad
    We will be expanding the storage for a PostgreSQL server, and one of the things we are considering is using SSDs (Intel 520 Series) instead of rotating disks (10k RPM). Price per GB is comparable and we expect improved performance; however, we are concerned about longevity, since our database usage pattern is quite write-heavy. We are also concerned about data corruption in case of power failure (due to the SSDs' write cache not flushing properly). We currently use RAID10 with 4 active HDDs (10k, 146 GB) and 1 spare configured in the controller. It's an HP DL380 G6 server with a P410 Smart Array Controller and BBWC. What makes more sense: upgrading the drives to 300 GB 10k RPM, or using Intel 520 Series SSDs (240 GB)?

    Read the article

  • Code review “on a napkin” — could it be useful?

    - by gaRex
    Preconditions: the team uses a DVCS; the IDE supports comment parsing (TODO and the like); tools like CodeCollaborator are too expensive for the budget; tools like Gerrit are too complex to install or otherwise not usable.
    Workflow:
    1. The author publishes a feature branch somewhere on the central repo.
    2. The reviewer fetches it and starts the review. For each question/issue, the reviewer adds a comment with a special label, like "BLA". Such a label MUST NOT appear in production code -- it exists only at the review stage:
       $somevar = 123;
       // BLA Why echo this here?
       echo $somevar;
    3. When the reviewer finishes posting comments, he simply commits with a dumb message like "comments" and pushes back.
    4. The author pulls the feature branch back, answers the comments in the same way or improves the code, and pushes it back.
    5. When all "BLA" comments are gone, we can consider the review successfully finished. The author interactively rebases the feature branch to squash away those "comment" commits, and is then ready to merge the feature into develop or take whatever action usually follows a successful internal review.
    IDE support: I know that custom comment tags are possible in Eclipse and NetBeans. Surely the same exists in the blablaStorm family. So my specific questions are: Do you think this methodology is viable? Do you know of something similar? What could be improved in it?
    ps: migrated from http://stackoverflow.com/questions/12692695/code-review-on-a-napkin-could-it-be-useful
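
    A sketch of that round-trip as plain git commands (branch names are illustrative; the "comments" commits are dropped with an interactive rebase before merging):

      # reviewer: fetch the feature branch and annotate it
      git fetch origin feature/foo
      git checkout feature/foo
      # ... add "// BLA ..." comments in the IDE ...
      git commit -am "comments"
      git push origin feature/foo

      # author: answer or fix, then clean up before merging
      git pull origin feature/foo
      git rebase -i develop      # drop/squash the "comments" commits
      git checkout develop
      git merge feature/foo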

    Read the article

  • DATE function does not support all the dates in DAX by design #powerpivot #tabular #dax

    - by Marco Russo (SQLBI)
    The DATE function in DAX has this simple syntax: DATE( <year>, <month>, <day> ). If you are like me, you never read the BOL note that says, quite clearly, that it supports dates beginning with March 1, 1900. In fact, I was wrongly assuming that it would support any date that can be represented in a Date data type in Data Models, i.e. all dates beginning with January 1, 1900. The funny thing is that in some of the BOL documentation you will find that the Date data type supports dates after March 1, 1900 (which seems not to include that date, but this is a detail…). But we should not digress. The real issue is that if you call the DATE function passing values between January 1 and February 28, 1900, you will see a different day as a result.
      evaluate row ( "x", DATE( 1900, 1, 1 ) )
      -- returns WRONG result -- [x] 12/31/1899 12:00:00 AM
      evaluate row ( "x", DATE( 1900, 2, 29 ) )
      -- returns WRONG result -- [x] 2/28/1900 12:00:00 AM
      evaluate row ( "x", DATE( 1900, 3, 1 ) )
      -- returns CORRECT result -- [x] 3/1/1900 12:00:00 AM
    As usual, this is not a bug. It is "by design". The DATE function works this way in Excel. And in Excel, too, it was "by design". In this case the design is having the same bug as Lotus 1-2-3, which treated 1900 as a leap year, even though it isn't. The first release of Lotus 1-2-3 is dated 1983. I hope many of my readers are younger than that. I tried to open a bug on Connect. Please vote for it. I would like Microsoft to change this type of item from "by design" (as we might expect) to "by genetic disease". Or "by historical respect", to be more politically correct.

    Read the article

  • Cannot get webapps to work after upgrade from 12.04 to 12.10

    - by kashan
    I just upgraded to Ubuntu 12.10 from 12.04, but I cannot use WebApps. In fact, there is no sign of them anywhere: Firefox and Chromium are not prompting for integration. I tried re-installing both browsers and the webapps plugin of each, but no luck. Out of curiosity, I tried to install unity-webapps-preview through the terminal; apt-get reported that the operation would need 127 MB. After installing and restarting the session, nothing. I re-ran unity-webapps-preview in the terminal and SURPRISINGLY it again told me the operation would need 58 MB. After installation, nothing. Firefox shows the unity-webapps plugin in Extensions, but in Preferences there is nothing like Unity settings or options for exceptions in the General tab (as I've seen in some threads). Chromium doesn't show the plugin in Extensions or in the settings at all. I really need help. I know there is a reported bug, but it is mostly complaints that the webapps are not working as they should; in my case they don't seem to exist at all. EDIT: The output of gsettings list-recursively | grep webapp is:
      com.canonical.unity.webapps allowed-domains @as []
      com.canonical.unity.webapps dontask-domains @as []
      com.canonical.unity.webapps index-update-time 43200
      com.canonical.unity.webapps integration-allowed true
      com.canonical.unity.webapps preauthorized-domains ['amazon.ca', 'amazon.cn', 'amazon.com', 'amazon.co.uk', 'amazon.de', 'amazon.es', 'amazon.fr', 'amazon.it', 'www.amazon.ca', 'www.amazon.cn', 'www.amazon.com', 'www.amazon.co.uk', 'www.amazon.de', 'www.amazon.es', 'www.amazon.fr', 'www.amazon.it', 'one.ubuntu.com']

    Read the article

  • Windows Resource Monitoring Programs

    - by Sal
    I work at a small tech startup, managing websites with our own in-house server-side code. (In production, we use Windows Server 2008 boxes running Java 6. On our dev boxes, we use Windows 7 running Java 7.) Recently, some of our boxes in production failed, and we had no means of troubleshooting, since we keep little to no monitoring logs about a given box's CPU/memory usage, etc. So, I'm wondering if there is some commercial/freeware tool that's the standard for performance monitoring/logging. Essentially, I'm just looking for an analytics system similar to the Windows Task Manager or the Resource Monitor that serializes all of its data periodically. Ideally, I'd like to find a program that's also extensible, in case I'd like to add additional monitors in the future.
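
    One built-in option worth knowing about (a sketch, not a product recommendation): typeperf, which ships with Windows and serializes perfmon counters to CSV. The counter list, interval, and output path below are illustrative:

      typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 60 -o C:\perflogs\box1.csv

    Scheduled at boot, this produces exactly the Task Manager-style history the poster describes; logman can wrap the same counters into a managed Data Collector Set.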

    Read the article

  • Anonymous Access and Sharepoint Web Services

    - by Stacy Vicknair
    A month or so ago I was working on a feature for a project that required a level of anonymity on the SharePoint site in order to function. At the same time I was also working on another feature that required access to the SharePoint search.asmx web service. I found out, the hard way, that the SharePoint web services do not behave as expected while the IIS site is under anonymous access. Even though these web services expect requests with certain permissions (in theory), they never attempt to request those credentials when the web service is contacted. As a result, the services return a 401 Unauthorized response. The fix in my situation was to restrict anonymous access to the area that needed it (in this case, the control in question could be used in an ASP.NET app that I could throw in a virtual directory). After that, I removed anonymous access from IIS for the site itself, and the QueryService requests were working once more. Here's a related article with a bit more depth on a similar experience: http://chrisdomino.com/Blog/Post/401-Reasons-Why-SharePoint-Web-Services-Don-t-Work-Anonymously?Length=4 Technorati Tags: Sharepoint, QueryService, WSS, IIS, Anonymous Access

    Read the article

  • Slow NFS transfer performance of small files

    - by Arie K
    I'm using Openfiler 2.3 on an HP ML370 G5 (Smart Array P400, SAS disks combined in RAID 1+0). I set up an NFS share from an ext3 partition using Openfiler's web-based configuration, and I succeeded in mounting the share from another host. Both hosts are connected by a dedicated gigabit link. A simple benchmark using dd:
      $ dd if=/dev/zero of=outfile bs=1000 count=2000000
      2000000+0 records in
      2000000+0 records out
      2000000000 bytes (2.0 GB) copied, 34.4737 s, 58.0 MB/s
    So it can achieve a moderate transfer speed (58.0 MB/s). But if I copy a directory containing many small files (.php and .jpg, around 1-4 kB per file) with a total size of ~300 MB, the cp process takes about 10 minutes. Is NFS unsuitable for transferring small files like in the case above? Or are there parameters that must be adjusted?
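
    Small-file copies over NFS are typically bound by synchronous per-file metadata round-trips rather than bandwidth, so transfer-size tuning alone may not close the gap. A sketch of the knobs usually tried first (paths and hostnames are illustrative; note that async on the server acknowledges writes before they reach disk, trading crash-safety for speed):

      # server side, /etc/exports
      /srv/share  client-host(rw,async,no_subtree_check)

      # client side: larger transfer sizes, skip atime updates
      mount -t nfs -o rsize=32768,wsize=32768,noatime server:/srv/share /mnt/share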

    Read the article

  • Virtual Machine Manager 2012 CPU Average

    - by Grant
    What exactly is the CPU Average field in VMM 2012 showing me? I'm running Server 2008 R2 with VMM 2012. My server has 2x16-core CPUs installed. An example virtual machine has 4 virtual processors and shows 20% CPU usage. Is that:
    - 20% of the entire system's available CPU power?
    - 20% of 4 of the 32 cores' CPU power?
    - 20% of one core's CPU? (in which case it could go as high as 400%)
    - Something else entirely?
    How can I tell how much of the entire system's CPU power is being used (all 32 cores)? Edit: Well, I can tell for sure it's not 20% of the entire system's CPU power, since the entire server's CPU averages add up to well over 100% right now.
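
    If the host is Hyper-V (which VMM on Server 2008 R2 suggests, though that's an assumption), one way to read whole-host usage across all 32 logical processors is the hypervisor's own counter, since Task Manager in the parent partition does not see guest CPU time:

      typeperf "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" -si 5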

    Read the article

  • If some standards apply when "it depends" then should I stick with custom approaches?

    - by Travis J
    If I have an unconventional approach that works better than the industry standard, should I stick with it even though in principle it violates those standards? What I am talking about is referential integrity for relational database management systems. The standard way of enforcing referential integrity is to CASCADE delete. In practice, that is just not going to work all the time; in my current case, it does not. The suggested alternatives are to set the reference to NULL or DEFAULT, or to take NO ACTION, usually in the form of a "soft delete". I am all about enforcing referential integrity. Love it. However, sometimes the standards just do not fully apply in practice. My approach has been to abandon a small part of one of those practices: the prohibition on leaving "dangling references" around. Oops. The trade-off is plentiful in this situation, I believe. Instead of having soft-deleted data in the production database, a splattering of "soft delete" logic all across my controllers (and sometimes views, depending on how far down the chain the soft delete occurred), and the prospect of queries taking longer and longer - instead of all that - I now have a recycle bin and centralized logic. The only cost is that I must explicitly manage the possibility of "dangling references", which can be done through generics with one class. Any thoughts?
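
    For reference, the referential actions mentioned above expressed as plain SQL (table and column names are hypothetical; SET DEFAULT is not supported by every engine):

      -- choose what happens to child rows when the parent is deleted
      ALTER TABLE orders
        ADD CONSTRAINT fk_orders_customers
        FOREIGN KEY (customer_id) REFERENCES customers (id)
        ON DELETE SET NULL;  -- alternatives: CASCADE, SET DEFAULT, NO ACTION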

    Read the article

  • Configuring SMB shares in OS X

    - by Craig Walker
    I'm at my wit's end trying to control SMB file sharing on my Mac (OS X 10.5 Leopard). I want to do something fairly simple: share a particular (non-home, non-Public) folder over my SMB/Windows network with two users (accounts are local to my Mac), and share no other folders with anyone. The instructions on the internet are fairly straightforward: add the folders to be shared to the File Sharing panel of the Sharing pane in System Preferences, and ensure that I'm sharing through SMB. However, when I actually try to connect via an SMB client (Windows XP in this case), the share does not appear. I see my home directory, "Macintosh HD", and my printers, but not the folder I just shared. I ensured that the underlying directory had the proper permissions (since this seems to affect share visibility) and that the "Shared Folder" checkbox was checked, but this didn't have any effect. I checked /etc/smb.conf but there was nothing obviously out of place there. I've also restarted smbd and rebooted. What else should I be looking for?
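
    A quick check from the Windows XP side (the hostname is illustrative) to see exactly what the Mac is advertising, independent of Explorer's view:

      net view \\my-mac

    If the folder is missing from that list too, the problem is on the server/config side rather than in how the client displays shares.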

    Read the article

  • IDN and HTTP_HOST

    - by Sandman
    So, when I want to link my users to a specific page I always use (in PHP) "http://" . $_SERVER["HTTP_HOST"] . "/page.php", to be sure that the link points to the host they're currently surfing (and not one of the server aliases). But with IDN names, HTTP_HOST is set to "xn--hemmabst-5za.net" (for example), which of course works but doesn't look very nice. Is there a way to have HTTP_HOST set to the correct IDN name in these cases (in this case, "hemmabäst.net")? I'd rather do it in Apache before it reaches PHP, because otherwise I'd have to replace every usage of $_SERVER["HTTP_HOST"]. Any ideas?
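
    If it does end up being handled in PHP after all, the intl extension can decode the Punycode form; a minimal sketch (assumes ext/intl is available):

      // turns "xn--hemmabst-5za.net" back into "hemmabäst.net"
      $host = idn_to_utf8($_SERVER["HTTP_HOST"]);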

    Read the article

  • Grub2 fails to chainload Windows 7 with error "invalid signature"

    - by atomicpirate
    I've built a new UEFI 64-bit system with both Windows 7 and Ubuntu 11.10 installed (on separate hard drives). I'd like to be able to boot Windows 7 from the grub menu, but I have so far been unsuccessful in getting grub to chainload it. After getting the grub menu, I choose the option for the command line and I can see that bootmgfw.efi is at (hd1,gpt1)/efi/Microsoft/Boot/bootmgfw.efi. However, when I attempt to chainload it I get an error:
      grub> chainloader (hd1,gpt1)/efi/Microsoft/Boot/bootmgfw.efi
      error: invalid signature
    I am not sure whether I chose the UEFI boot option when I installed Linux from the LiveCD, so I am wondering if the grub I have is perhaps unable to chainload in this manner. In any case, I am not sure how to get the chainload to work.
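
    For reference, a typical grub.cfg entry for chainloading the Windows boot manager looks like the sketch below (device and path follow the question). Note this only works when grub itself was installed as an EFI binary; a BIOS-mode grub cannot chainload .efi files, which would fit the "invalid signature" symptom:

      menuentry "Windows 7 (UEFI)" {
          insmod part_gpt
          insmod chain
          set root=(hd1,gpt1)
          chainloader /efi/Microsoft/Boot/bootmgfw.efi
      }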

    Read the article

  • How can I transition from being a "9-5er" to being self-employed?

    - by Stephen Furlani
    Hey, I posted this question last fall about moonlighting, and I feel like I've got a strong case to make to start transitioning from being a full-time employee to being self-employed. So much so that I find it hard to concentrate at work on the things I'm supposed to be doing. However, self-employment comes with things like no health benefits or guaranteed income, so I don't feel like I can just quit. (At least not in this economy, with a house and family.) I'm already working 40 hrs/wk at my main job, going to school to get my MS, and trying to freelance on weekends and evenings, but I want to give freelancing more time. If I can't take LWOP (leave without pay) or just work less than 40 hrs/wk, I feel like I have to give up self-employment, because I just can't give my day job my best. Would it be reasonable to ask my employer to cut my hours (and pay)? Is there something else I can/should do? Has anyone made this transition and had it turn out well? Or badly? I am in the USA, and I understand answers are not legal advice. Thanks!

    Read the article

  • What are the disadvantages of automated testing?

    - by jkohlhepp
    There are a number of questions on this site that give plenty of information about the benefits that can be gained from automated testing. But I didn't see anything that represented the other side of the coin: what are the disadvantages? Everything in life is a tradeoff and there are no silver bullets, so surely there must be some valid reasons not to do automated testing. What are they? Here are a few that I've come up with:
    - Requires more initial developer time for a given feature
    - Requires a higher skill level of team members
    - Increased tooling needs (test runners, frameworks, etc.)
    - Complex analysis required when a failed test is encountered: is this test obsolete due to my change, or is it telling me I made a mistake?
    Edit: I should say that I am a huge proponent of automated testing, and I'm not looking to be convinced to do it. I'm looking to understand what the disadvantages are so that when I go to my company to make a case for it, I don't look like I'm throwing around the next imaginary silver bullet. Also, I'm explicitly not looking for someone to dispute my examples above. I am taking it as true that there must be some disadvantages (everything has trade-offs), and I want to understand what those are.

    Read the article

  • How to share two keyboards on the same laptop, a French ISO layout and a USA ANSI layout keyboard over USB?

    - by reyman64
    I recently bought a Noppoo Choc Mini with a specific ANSI US-INTERNATIONAL PC84 layout. This keyboard has only 84 keys, a 60% (compact tenkeyless) reduced layout. My problem is simple: there is no keyboard layout in Ubuntu 12.04 that corresponds to this normal US ANSI layout, and the same goes for the reduced 84-key version. I'm looking for a template of a normal ANSI US-INTERNATIONAL layout for xmodmap/xkb, after which I can try to map the remaining keys manually. I searched on Google and found no other user with the same problem, so it seems I don't have the right keywords to search for this information.
    Edit 1: Here you can see there is probably a bug in Ubuntu, because the layout for USA with dead keys is not correct! I have this: http://minus.com/lEdKMrsNAwkVA while other users have this for the same layout: http://i.stack.imgur.com/p52XG.png
    Edit 2: After a "sudo dpkg-reconfigure keyboard-configuration" (French standard pc105 keyboard + Precision M65 keyboard from the Dell laptop), I can now see the correct US layout in the keyboard settings, but I cannot keep the ISO layout for French usage...
    Edit 3: OK, after a reboot I understand the problem. I have one laptop with an integrated French keyboard, and I want to use my USB keyboard, which has a US ANSI layout. It seems to be impossible in Ubuntu with "dpkg-reconfigure keyboard-configuration" to share two different physical layouts (ANSI and EU ISO) on the same computer...
    Edit 4: OK, it seems I can switch the physical layout (ISO <-> ANSI) with these commands in a terminal:
      setxkbmap -layout us
      setxkbmap -layout us -variant alt-intl
      setxkbmap -layout fr
    It's very complicated, and it seems Ubuntu 12.04 has a big problem with its keyboard manager... because everything works great with these commands, without ANY change in the system keyboard parameters!!! A second bug? The image of the layout for fr is also buggy: the layout shown is not ISO, but I can press the "<" key at the left of the right shift without any problem! You can see the image here (a French alternative with an ANSI layout? crazy?): http://minus.com/lXsDJwoeyWAfF Can you help me on this point? I'm lost with xkb, and manual mapping is very complicated... Thanks a lot, SR
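
    A possible per-device approach (a sketch; the device id 11 is illustrative): setxkbmap can target a single keyboard by its xinput id, leaving the laptop's internal keyboard on the French layout:

      xinput list                       # find the id of the USB keyboard
      setxkbmap -device 11 -layout us   # apply US ANSI to that device only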

    Read the article

  • Content server backups

    - by Dan Sosedoff
    What is the best way to back up data on content servers? For example, I have 15 servers that just hold content, with no applications running on them. Each server has a 250 GB hard drive, so it's a pretty big amount of data. All the data is externally accessible (via HTTP). So, the question is: what methodology is best in my case? The most useful method I know is cross-backup, where each server contains its own data plus a backup of one other server, but that significantly reduces total capacity. RAID?
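
    A minimal sketch of that cross-backup idea with stock tools (hosts and paths are illustrative), run nightly from cron on each server:

      # server N mirrors its peer's content tree
      rsync -a --delete backup-peer:/var/www/content/ /backup/peer/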

    Read the article

  • LibGDX Boid Seek Behaviour

    - by childonline
    I'm trying to make a swarm of boids that seek out the mouse position and move towards it, but I'm having a bit of a problem: the boids just seem to want to go to the upper-right corner of the game window. The mouse position seems to influence the behavior a bit, but not enough to make the boids turn towards it. I suspect there is a problem with the way LibGDX handles its coordinate system, but I'm not sure how to fix it. I've uploaded the Eclipse project here! Also, here are the relevant bits of my code, in case you see something obviously wrong:
      public Agent() {
          _texture = GdxGame.TEX_AGENT;
          TextureRegion region = new TextureRegion(_texture, 0, 0, 32, 32);
          TextureRegion region2 = new TextureRegion(GdxGame.TEX_TARGET, 0, 0, 32, 32);
          _sprite = new Sprite(region);
          _sprite.setSize(.05f, .05f);
          _sprite_target = new Sprite(region2);
          _sprite_target.setSize(.1f, .1f);
          _max_velocity = 0.05f;
          _max_speed = 0.005f;
          _velocity = new Vector2(0, 0);
          _desired_velocity = new Vector2(0, 0);
          _steering = new Vector2(0, 0);
          _position = new Vector2(-_sprite.getWidth()/2, -_sprite.getHeight()/2);
          _mass = 10f;
      }

      public void Update(float deltaTime) {
          _target = new Vector2(Gdx.input.getX(), Gdx.input.getY());
          _desired_velocity = ((_target.sub(_position)).nor()).scl(_max_velocity, _max_velocity);
          _steering = ((_desired_velocity.sub(_velocity)).limit(_max_speed)).div(_mass);
          _velocity = (_velocity.add(_steering)).limit(_max_speed);
          _position = _position.add(_velocity);
          _sprite.setPosition(_position.x, _position.y);
          _sprite_target.setPosition(Gdx.input.getX(), Gdx.input.getY());
      }
    I've used this tutorial here. Thanks!
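
    One thing worth checking (an assumption, not a confirmed diagnosis): Gdx.input.getY() returns screen coordinates with the origin at the top-left, while sprites are drawn in world coordinates with Y increasing upwards, which alone can pull a seek target towards the wrong corner. A sketch of the idiomatic conversion, assuming an OrthographicCamera named camera is in scope:

      // convert the mouse position from screen space to world space
      Vector3 touch = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
      camera.unproject(touch);  // flips Y and applies the camera transform
      _target = new Vector2(touch.x, touch.y);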

    Read the article

  • determining if .htaccess is working

    - by Toc
    Following some guides on the web, I have created the following .htaccess for my WordPress installation:
      # protect the htaccess file
      <files .htaccess>
      order allow,deny
      deny from all
      </files>
      # protect wp-config.php
      <files wp-config.php>
      order allow,deny
      deny from all
      </files>
    plus chmod 600 wp-config.php and chmod 644 .htaccess. What is the simplest way to test whether it is working properly? If needed, I can create some other files to verify. I just want to be sure.
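
    A quick external check with curl (substitute the real domain): if the rules are active, both requests should come back with a 403:

      curl -I http://example.com/.htaccess       # expect: HTTP/1.1 403 Forbidden
      curl -I http://example.com/wp-config.php   # expect: HTTP/1.1 403 Forbidden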

    Read the article

  • SQL SERVER – Copy Column Headers from Resultset – SQL in Sixty Seconds #026 – Video

    - by pinaldave
    SQL Server Management Studio returns results in Grid View, Text View, and to a file. When we copy results from Grid View to Excel, a common complaint is that the column headers displayed in the resultset are not copied to Excel. I often spend time performance-tuning databases, and I run many DMVs in SSMS to get a quick view of the server. In my case it is almost certain that I always need the column headers when I copy my data to Excel or anywhere else. SQL Server Management Studio has two different ways to do this. Method 1 (ad hoc): when the result is rendered, you can right-click the resultset and click Copy with Headers, which copies the headers along with the resultset; alternatively, use the shortcut key CTRL+SHIFT+C. Method 2 (option setting at the SSMS level): this is an SSMS-wide setting, and I keep it always selected, as I often need the column headers when I select a resultset. Go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> check the box "Include column headers when copying or saving the results." Both methods are discussed in the following SQL in Sixty Seconds video. Here is the code used in the video. Related Tips in SQL in Sixty Seconds: Copy Column Headers in Query Analyzers in Result Set; Getting Columns Headers without Result Data – SET FMTONLY ON. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Windows activation on a Virtual Machine (Physical->VM)

    - by Daisetsu
    I backed up a number of laptops to virtual machines before they were re-purposed, in case I need the data at some later time. While the physical-to-VM conversions worked fine, I am encountering issues on some of the VMs. When I boot them, I get an error message saying I MUST activate Windows in order to log in. This is expected, because the hardware changed (from physical hardware to virtualized hardware). I click the OK button and expect to be prompted with ways to activate; instead, Windows sits there for quite a while, then tells me that "Windows has already been activated". I click OK at that message and am taken back to the beginning, where I am asked to activate Windows. I have done some fairly intensive googling but haven't been able to find a real solution. EDIT: The laptops with the issues are two Sony Vaios; I believe they had the OEM version of the OS originally installed by the factory.
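
    For what it's worth, a sketch of the stock re-activation commands, run from an elevated command prompt inside the VM; they are the usual first attempt and may well not break the loop described here, especially with a hardware-locked OEM license:

      REM attempt online activation
      slmgr.vbs /ato
      REM launch the phone-activation wizard if online activation fails
      slui 4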

    Read the article

  • Why Oracle Delivers More Value than IBM in Data Integration Solutions

    - by irem.radzik(at)oracle.com
    For data integration projects, IT organizations look for a robust but easy-to-use solution, one that simplifies the enterprise data architecture while providing exceptional value, not one that adds complexity and cost. This is a major challenge today for customers who are using IBM InfoSphere products like DataStage or Change Data Capture, whereas Oracle consistently delivers higher value with its data integration products, such as Oracle Data Integrator and Oracle GoldenGate. There are many differentiators for Oracle's data integration offering in comparison to IBM's. Here are the top five:
    - Lower cost of ownership
    - Higher performance in both real-time and bulk data movement
    - Ease of use and flexibility
    - Reliability
    - A complete, open, and integrated middleware offering
    Architectural differences between the products contribute a great deal to these differences. First of all, Oracle's ETL architecture does not require a middle-tier transformation server, something IBM does require. Not only does an additional transformation server cost more to manage (including energy costs), it adds a performance bottleneck as well. In addition, IBM's data integration products are complex and often require lengthy professional services engagements to integrate. This translates to higher costs and delayed time to market. Then there's the reliability factor. Our customers choose Oracle GoldenGate over IBM's InfoSphere Change Data Capture product because Oracle GoldenGate is designed for mission-critical systems that require guaranteed data delivery and automatic recovery in case of process interruptions. On Thursday we will discuss these key differentiators in detail and provide examples of customers that chose Oracle over IBM for data integration projects. Join us on Thursday, Feb 10th, at 11am PT to learn how Oracle delivers more value than IBM in data integration solutions.

    Read the article

  • Tuxedo Runtime for CICS and Batch Webcast

    - by Jason Williamson
    There was a recent webcast about the new Tuxedo ART solution that we released last month. Here is the link to hear Hassan talk about it: Link to Listen to Webcast. Below is the marketing copy describing what the webcast covers and what you will hear. From my own experience, there is certainly an uptick in rehosting discussions and projects with customers all around the world. The notion that mainframes can be rehosted on open systems is pretty well accepted. There are still some holdout CxOs who don't believe it, but those guys typically are not really looking to migrate anyway and don't take an honest look at the case studies, history, and TPC reports. Maybe in my next blog I'll talk about "myth busters", to borrow some presentation details from Mark Rakhmilevich (Tuxedo PM for Rehosting).
    ***********
    Mainframe rehosting is a compelling approach for migrating and modernizing mainframe applications and data to lower data center cost and risk while increasing business agility. Oracle Tuxedo 11g with CICS application runtime (ART) capabilities is designed to facilitate the migration of IBM mainframe applications by allowing them to run on open systems in a distributed grid architecture. The brand-new Oracle Tuxedo Application Runtime for CICS and Batch 11g can significantly reduce your costs and risks while preserving your investments in applications and data. In this on-demand webcast, hear from Oracle Senior Vice President Hasan Rizvi on how Oracle Tuxedo 11g with CICS application runtime capabilities is changing the way customers think about mainframe migration. You'll learn:
    - What market forces drive mainframe migration and modernization
    - What technologies and capabilities are available for migrating mainframe transaction processing and batch applications
    - How Oracle brings rehosting technologies to a new level of scalability, robustness, and automation

    Read the article
