Search Results

Search found 9156 results on 367 pages for 'cloud storage'.

Page 42/367 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • How to run your own cloud network at home?

    - by Xeoncross
    I have 7 PCs around the house that total more than 10 GHz of CPU power. I was wondering if I could put them to use building my own "cloud computing" network to help with rendering videos, Blender animations, Photoshop effects, or anything else. Does something like this exist, and if so, what is it called and where do I find it?
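    What this describes is usually called a render farm (or, more generally, a compute cluster), and for Blender the work splits naturally by frame range. A minimal sketch of the idea in C#, assuming Blender and an SSH server are installed on every machine; the hostnames, shared scene path, and frame count are hypothetical:

        // Split an animation's frame range across machines and launch
        // Blender headless on each over SSH.
        using System.Diagnostics;

        class RenderFarm
        {
            static void Main()
            {
                string[] hosts = { "pc1", "pc2", "pc3" }; // hypothetical machine names
                int totalFrames = 300;
                int chunk = totalFrames / hosts.Length;

                for (int i = 0; i < hosts.Length; i++)
                {
                    int start = i * chunk + 1;
                    int end = (i == hosts.Length - 1) ? totalFrames : (i + 1) * chunk;
                    // blender -b = run headless, -s/-e = start/end frame, -a = render animation
                    string cmd = $"blender -b /shared/scene.blend -s {start} -e {end} -a";
                    Process.Start("ssh", $"{hosts[i]} {cmd}");
                }
            }
        }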

    Read the article

  • Alternative to Amazon’s S3 service?

    - by Cory
    Just wondering if there is a good alternative to Amazon's S3 service? I like S3, but the bandwidth cost is high. I looked at Cloud Files from Rackspace, but the cost is even higher. I don't mind prepaying or having a monthly payment in order to greatly reduce the bandwidth cost. Thank you for any help.

    Read the article

  • Oracle’s New Approach to Cloud-based Applications User Experiences

    - by Oracle OpenWorld Blog Team
    By Misha Vaughan. It was an exciting Oracle OpenWorld this year for customers and partners, as they got to see what their input into the Oracle user experience research and development process has produced for cloud-delivered applications. The result of all this engagement and listening is a focus on simplicity, mobility, and extensibility. These were the core themes across Oracle OpenWorld sessions, executive roundtables, and analyst briefings given by Jeremy Ashley, Oracle's vice president of user experience. The highlight of every customer meeting was the new simplified UI for Oracle's cloud applications. Attendees at some sessions and events also saw a vision of what is coming next in the Oracle user experience, and they gave direct feedback on whether this would help solve their business problems.

    What did attendees think of what they saw this year? Rebecca Wettemann of Nucleus Research was part of an analyst briefing on next-generation user experiences from Oracle. Here's what she told CRM Buyer in an interview just after the event: "Many of the improvements are incremental, which is not surprising, as Oracle regularly updates its application," Wettemann said. "Still, there are distinct themes to this latest set of changes. One is usability. Oracle Sales Cloud, for example, is designed to have zero training for onboarding sales reps, which it does," she explained. "It is quite impressive, actually—the intuitive nature of the application and the design work they have done with this goal in mind. The software uses as few buttons and fields as possible," she pointed out. "The sales rep doesn't have to ask, 'what is the next step?' because she can see what it is."

    What else did we hear? Oracle OpenWorld is a time when we can take a broader pulse of our customers' and partners' concerns. This year we heard some common user experience themes:

      · A desire to continue to simplify widely used self-service tasks
      · A need to understand how customers or partners could take some of the UX lessons learned about simplicity and mobility into their own custom areas and projects
      · The continuing challenge of supporting both bring-your-own-device and corporate-provided mobile devices for end users
      · A desire to harmonize user experiences across platforms for specific business-use cases

    What does this mean for next year? Well, there were a lot of things we could only show to smaller groups of customers in our Oracle OpenWorld usability labs and HQ lab tours, to partners at our Expo, and to analysts under non-disclosure agreements. But we used these events as a way to get some early feedback about where we are focusing for the year ahead. Attendees gave us a positive response:

      @bkhan: Saw some excellent UX innovations at the expo "@usableapps: Great job @mishavaughan and @vinoskey on #oow13 UX partner expo!"
      @WarnerTim: @usableapps @mishavaughan @vinoskey @ultan Thanks for an interesting afternoon, definitely liked the UX toolkits for partners.

    You can expect Oracle to continue pushing the themes of simplicity, mobility, and extensibility even more aggressively in the next year. If you are interested in finding out what really goes on in the UX labs, such as what we are doing with smartphones, tablets, heads-up displays, and the AppsLab robots, feel free to reach out to me for more information: Misha Vaughan, or on Twitter: @mishavaughan.

    Read the article

  • store everything only once, smarter

    - by hsmit
    In the digital world a lot is stored multiple times. As a thought experiment or creative challenge, I want you to think about making this more efficient and maybe reusing more. Think of the following cases:

      · an mp3 track is downloaded multiple times and copied across various devices
      · on websites, a login form is often rebuilt many times - why not reuse more code?
      · words themselves are used many times
      · questions and answers are accidentally saved in many places in parallel
      · images or photos often describe the same data (Eiffel Tower, Golden Gate, Taj Mahal)
      · etc., etc.

    Are you aware of solutions? Or are you thinking about similar topics? Ideas? Blueprints? I'd love to hear from you!
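    One established answer to the storage side of this question is content-addressable storage: name every blob by the hash of its bytes, and identical copies collapse into a single stored instance automatically (roughly what git and many backup tools do). A minimal sketch; the paths are hypothetical:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        class ContentStore
        {
            // Store a file under the SHA-256 of its contents; identical files
            // map to the same path, so each unique blob is kept exactly once.
            static string Store(string sourceFile, string storeDir)
            {
                using (var sha = SHA256.Create())
                using (var stream = File.OpenRead(sourceFile))
                {
                    string key = BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
                    string target = Path.Combine(storeDir, key);
                    if (!File.Exists(target))      // already present? then this was a duplicate
                        File.Copy(sourceFile, target);
                    return key;                    // callers keep the key, not another copy
                }
            }
        }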

    Read the article

  • Force CloudFront distribution/file update

    - by Martin
    I'm using Amazon's CloudFront to serve the static files of my web apps. Is there no way to tell a CloudFront distribution that it needs to refresh its files, or to point out a single file that should be refreshed? Amazon recommends that you version your files like logo_1.gif, logo_2.gif and so on as a workaround for this problem, but that seems like a pretty stupid solution. Is there absolutely no other way?
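    For what it's worth, CloudFront has since gained an invalidation API that does exactly this. A sketch using the AWS SDK for .NET; the distribution ID is hypothetical, and newer SDK targets expose this call only as CreateInvalidationAsync:

        using System;
        using System.Collections.Generic;
        using Amazon.CloudFront;
        using Amazon.CloudFront.Model;

        // Evict one object from all CloudFront edge caches instead of renaming it.
        var client = new AmazonCloudFrontClient(); // credentials from the usual profile/config chain
        var batch = new InvalidationBatch
        {
            CallerReference = DateTime.UtcNow.Ticks.ToString(), // any string unique per request
            Paths = new Paths { Quantity = 1, Items = new List<string> { "/logo.gif" } }
        };
        client.CreateInvalidation(new CreateInvalidationRequest
        {
            DistributionId = "EDFDVBD6EXAMPLE", // hypothetical distribution id
            InvalidationBatch = batch
        });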

    Read the article

  • Cloud hosted CI for .NET projects

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2014/06/02/cloud-hosted-ci-for-.net-projects.aspx

    Continuous integration (CI) is important. If you don’t have it set up…you should. There are a lot of different options available for hosting your own CI server, but they all require you to maintain your own infrastructure. If you’re a business, that generally isn’t a problem. However, if you have some open source projects hosted, for example on GitHub, there haven’t really been any options. That has changed with the latest release of AppVeyor, which bills itself as “Continuous integration for busy developers.”

    What’s different about AppVeyor is that it’s a hosted solution. Why is that important? Being a hosted solution means that I don’t have to maintain my own infrastructure for a build server. How does that help if you’re hosting an open source project? AppVeyor has a really competitive pricing plan: for an unlimited number of public repositories, it’s free. That gives you a cloud-hosted CI system for all of your GitHub projects for the cost of some time to set them up, which actually isn’t hard to do at all.

    I have several open source projects (hosted at https://github.com/scottdorman), so I signed up using my GitHub credentials. AppVeyor fully supported my two-factor authentication with GitHub, so I never once had to enter my GitHub password into AppVeyor. Once that was done, I authorized GitHub and it instantly found all of the repositories I have (both the ones I created and the ones I cloned from elsewhere). You can even add “build badges” to your markdown files in GitHub, so anyone who visits your project can see the status of the latest build. Out of the box, you can simply select a repository, add the build project, click New Build, and wait for the build to complete. You now have a complete CI server running for your project.

    The best part of this, besides the fact that it “just worked” with almost zero configuration, is that you can configure it through a web-based interface, which is very streamlined, clean, and easy to use, or through an appveyor.yml file. This means that you can define your CI build process (including any scripts that might need to be run, etc.) in a standard file format (YAML) and store it in your repository. The benefits of that are huge: the file becomes a versioned artifact in your source control system, so it can be branched, merged, and is completely transparent to anyone working on the project. By the way, AppVeyor isn’t limited to just GitHub; it currently supports GitHub, BitBucket, Visual Studio Online, and Kiln.

    I did have a few issues getting one of my projects to build, but the same day I posted the problem to the support forum a fix was deployed, and I had a functioning CI build about 5 minutes after that. Since then, I’ve submitted some additional feature requests and had a few other questions, all of which have seen responses within a 24-hour period. I have to say that it’s easily been one of the best customer support experiences I’ve seen in a long time.

    AppVeyor is still young, so it doesn’t yet have full feature parity with some of the older (more established) CI systems available, but it’s getting better all the time and I have no doubt that it will quickly catch up to those other CI systems and then pass them. The bottom line: if you’re looking for a good cloud-hosted CI system for your .NET-based projects, look at AppVeyor.
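    For a flavor of the configuration file the post mentions, a minimal appveyor.yml for a .NET solution might look something like this. This is a sketch against AppVeyor's documented schema; the solution name is hypothetical:

        version: 1.0.{build}
        configuration: Release
        before_build:
          - nuget restore MySolution.sln
        build:
          project: MySolution.sln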

    Read the article

  • Gaming on Cloud

    - by technomad
    Sometimes I wonder if the pundits of cloud computing are way too consumed with enterprise applications. With all the CAPEX/OPEX and ROI talk taking center stage, an opportunity to affect the masses directly is getting overlooked. I am a self-proclaimed die-hard gamer. I come from the generation of gamers who started their journey in DOS games like Wolfenstein 3D and Allan Border Cricket (the latter is still a favorite pastime). In the late 90s, a revolution called accelerated graphics started in DirectX and OpenGL. Games got more advanced. The likes of Quake III and Unreal Tournament became the crown jewels of the industry. But with all these advancements, there started a race: a race between the GFX giants ATI and NVIDIA to beat each other on frame rates and image quality. Revisions to the graphics chipsets became frequent. Games became eye candy, but at the cost of more GPU power and memory. Every eagerly awaited title started demanding more muscle in graphics and PC hardware. The latest games and liquid-smooth frame rates became the territory of the ones with deep pockets who could spend lavishly on the latest hardware. Enthusiasts like yours truly, who couldn't afford this route, started exploring over-clocking, optimized hardware cooling, etc. to pursue the passion. The ever-rising cost of hardware requirements led to rampant piracy of PC games. Gamers were willing to spend on the latest titles, but the ones on a tight budget preferred hardware upgrades over a legal copy of the game. It was also fueled by the emergence of P2P file-sharing networks. Then came the era of the Xbox and PS3. It solved the major issue of hardware standardization and provided an alternative to ever-increasing hardware costs. I have always admired these consoles, but being born and brought up in a keyboard/mouse environment, I still find it difficult to play first-person shooters with a gamepad. I leave the topic of PC vs. console gaming for another day, but the bottom line is… PC gamers deserve an equally democratized solution. This is where I think cloud computing can come to the rescue. It can minimize hardware requirements, virtually end software piracy, and rationalize costs for gamers. Subscription-based models like pay-as-you-play. In-game rewards, like extended subscription credits for exceptional gamers (oh yes, I have beaten Xaero on nightmare in Quake III, time and again!). Easy deployment of patches and fixes. Better game AI. The list goes on and on… Fortunately, companies like OnLive are thinking in the same direction. Their gaming service is all set to launch on 17th June 2010 at the E3 2010 expo in L.A. I wish them all the luck. I hope they will start a trend which will bring the smiles back to the faces of budget gamers with the help of cloud computing.

    Read the article

  • Azure Blobs - ArgumentNullException when calling UploadFile()

    - by Ariel
    I’m getting the following exception when trying to upload a file with the following code:

        string encodedUrl = "videos/Sample.mp4";
        CloudBlockBlob encodedVideoBlob = blobClient.GetBlockBlobReference(encodedUrl);
        Log(string.Format("Got blob reference for {0}", encodedUrl), EventLogEntryType.Information);
        encodedVideoBlob.Properties.ContentType = contentType;
        encodedVideoBlob.Metadata[BlobProperty.Description] = description;
        encodedVideoBlob.UploadFile(localEncodedBlobPath);

    I see the "Got blob reference" message, so I assume the reference resolves correctly.

        Void Run() C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs (40)
        System.ArgumentNullException: Value cannot be null.
        Parameter name: value
           at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
           at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
           at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source, BlobRequestOptions options)
           at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFile(String fileName, BlobRequestOptions options)
           at EncoderWorkerRole.WorkerRole.ProcessJobOutput(IJob job, String videoBlobToEncodeUrl) in C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs:line 144
           at EncoderWorkerRole.WorkerRole.Run() in C:\Inter\Projects\PoC\WorkerRole\WorkerRole.cs:line 40

    Interestingly, I'm running that same snippet from an on-premises server, i.e., outside of Azure, and it works correctly. Ideas welcome, thanks!
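    One plausible culprit, not a confirmed diagnosis: in the v1.x StorageClient library, a null metadata value or content type can surface as exactly this ArgumentNullException deep inside the upload pipeline, and configuration values that are populated on-premises often come back null inside an Azure role. A defensive sketch, reusing the names from the snippet above:

        // Guard against nulls that end up serialized into request headers.
        if (string.IsNullOrEmpty(contentType))
            contentType = "video/mp4"; // hypothetical fallback for this blob
        encodedVideoBlob.Properties.ContentType = contentType;

        if (!string.IsNullOrEmpty(description))
            encodedVideoBlob.Metadata[BlobProperty.Description] = description;
        else
            Log("description was null/empty; metadata not set", EventLogEntryType.Warning);

        encodedVideoBlob.UploadFile(localEncodedBlobPath);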

    Read the article

  • Sun Storage 2500-M2 Array and Sun Fire X4470 M2 Server

    - by nospam(at)example.com (Joerg Moellenkamp)
    There is some new hardware in the Oracle portfolio. The first is the Sun Fire X4470 M2 server. There was a lot of talk about the system before because of benchmark results, but now it's finally announced: two or four Intel Xeon E7-4800 processors; up to 1 TB of memory, as the system provides 64 DIMM slots for 16 GB DDR DIMMs (the memory is placed on riser cards right behind the fans of the chassis); and up to 6 internal drives, all in a 3 RU package. The other announcement was the Sun Storage 2500 M2, announced yesterday: from 5 to 48 drives (the latter number with three expansion trays) for up to 28.8 TB of storage. The array is SAS-based internally, and you can put 300 GB and 600 GB drives in it. The 2540-M2 provides 4 (8 optional) FC ports at up to 8 Gbit/s; the 2530-M2 has 4 SAS2 ports at up to 6 Gbit/s. It has 2 integrated controllers providing 2 GB of cache protected by a power backup for 72 hours. The controllers enable the arrays to deliver RAID levels 0, 1, 10, 3, 5, and 6 (P+Q).

    Read the article

  • Git-based storage and publishing, infrastructure advice

    - by Joel Martinez
    I wanted to get some advice on moving a system to "the cloud" ... specifically, I'm looking to move into some of Windows Azure's managed services, as right now I'm managing a VM. Basically, the system operates on some data stored in a GitHub git repository. The current architecture (all hosted on a single server):

      · GitHub - configured with a webhook pointing at ...
      · an ASP.NET MVC application - which accepts the webhook from git and pushes a message onto ...
      · an Azure Service Bus queue - which is drained by ...
      · a Windows service - which pulls the message from the queue, fetches the latest data from the git repository (using LibGit2Sharp) onto the local disk, and finally operates on the data in git to produce a static HTML website hosted/served by IIS.

    The system works really well, actually ... but I would like to get out of the business of managing the VM and move to using some combination of Azure web and worker roles. But because the system relies so heavily on the git repository on the local filesystem, I'm finding it difficult to figure out how to architect this in the cloud. I know you can get file system access, so in theory I could just fetch the repository if there's nothing on disk ... but the performance/responsiveness of the system depends on the repository being available and only having to fetch diffs, which is relatively quick, as opposed to periodically having to fetch the entire (somewhat large) git repository whenever the web or worker role is recycled. So I would love some advice on how you would architect such a system :) Ultimately, the only real requirement is to be able to serve HTML content that's been produced from the contents of a git repository (in a relatively responsive manner, from a publishing perspective) ... please feel free to ask any clarifying questions if there's something I omitted. Thanks!
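    On the clone-versus-fetch concern, the usual pattern is to reuse the on-disk repository whenever the role instance kept its local storage across a recycle, and pay for a full clone only when it did not. A sketch with LibGit2Sharp; the URL and local path are hypothetical, and the exact Fetch overload varies between library versions:

        using System.Linq;
        using LibGit2Sharp;

        class RepoCache
        {
            // Fast path: fetch diffs into the surviving local clone.
            // Slow path: clone from scratch after a recycle wiped local storage.
            static Repository Open(string url, string localPath)
            {
                if (!Repository.IsValid(localPath))
                    Repository.Clone(url, localPath);   // slow path: full clone

                var repo = new Repository(localPath);
                var remote = repo.Network.Remotes["origin"];
                var refSpecs = remote.FetchRefSpecs.Select(r => r.Specification);
                Commands.Fetch(repo, remote.Name, refSpecs, null, null); // fast path
                return repo;
            }
        }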

    Read the article

  • Progress 4GL and DB to Oracle and cloud

    - by llaszews
    Getting from client/server-based 4GLs and databases, where the 4GL is tightly linked to the database, to Oracle and the cloud is not easy. The least risky and expensive option (in the short term) is to use the Progress OpenEdge DataServer for Oracle: Progress OpenEdge DataServer. This eliminates the need to migrate the Progress 4GL to Java/J2EE. The database can be migrated using Ispirer SQLWays: Ispirer SQLWays Progress DB migration tool. The Progress 4GL can remain as is. In order to get the application on the cloud, there are a few approaches:

      1. VDI - A virtual desktop is a way to put all of the users' desktops in a centralized environment, off the desktop. This is great in cases where it is not just one client/server application that the user needs access to. In many cases, users will utilize MS Access, MS Excel, Crystal Reports, and other tools to get at the Progress DB and other centralized databases. VMware's acquisition of Wanova shows how VDI is growing in usage. Citrix is the 800-pound gorilla in the VDI space with Citrix WinFrame (now called XenDesktop). Oracle offers a VDI solution that it picked up when it acquired Sun.
      2. Hypervisor server virtualization - Of course you can host applications written in client/server languages like Progress 4GL by using server virtualization from Oracle, VMware, Microsoft, Citrix, and others.
      3. Microsoft Remote Desktop Services (aka the Terminal Services Client).

    The entire idea is to eliminate all the client/server desktop devices and connections which require desktop software and database drivers. A solution for removing database drivers from the desktop is to use DataDirect SQLLink.

    Read the article

  • Securing ClickOnce hosted with Amazon S3 Storage

    - by saifkhan
    Well, since my post on hosting ClickOnce with Amazon S3 storage, I've received quite a few emails asking how to secure the deployment. At the time of this post, I regret to say that there is no way to secure your ClickOnce deployment hosted with Amazon S3. The S3 storage is secured by ACLs, meaning that a username and password have to be provided before access. Amazon CloudFront, which sits on top of S3, allows you to apply security settings to your CloudFront distribution by:

      · applying encryption to the URL, or
      · restricting by IP.

    The problem with CloudFront is that the encryption of the URL is mandatory. ClickOnce does not provide a way to pass the "Amazon Public Key" to the CloudFront URL (you probably could if you started editing the XML and HTML files ClickOnce generates, but that defeats the purpose of ClickOnce altogether). What would be nice is if Amazon allowed users to restrict by IP addresses or IP blocks. I sent them an email and received a response that this is something they are looking into ... I won't hold my breath, though. Alternative: I suggest you look at Rackspace Cloud hosting, http://www.rackspacecloud.com; they have very competitive pricing and recently started hosting Windows virtual servers. What you can do is rent a virtual server and set up IIS to host your ClickOnce applications. You can then use IIS security settings to restrict which IPs/blocks can access your ClickOnce payloads. Note: you don't really need Windows Server to host ClickOnce; any web server will do. If you are familiar with Linux, you can run that VM with Rackspace for half the price of Windows. I hope you found this information helpful.

    Read the article

  • Windows Intune, Cloud Desktop management

    - by David Nudelman
    As a part of Microsoft's cloud computing strategy, the Windows Intune beta was released today. Here's a quick overview of what customers and IT consultants can do with the cloud service component of Windows Intune:

      · Manage PCs through a web-based console: Windows Intune provides a web-based console for IT to administer their PCs. Administrators can manage PCs from anywhere.
      · Manage updates: Administrators can centrally manage the deployment of Microsoft updates and service packs to all PCs.
      · Protection from malware: Windows Intune helps protect PCs from the latest threats with malware protection built on the Microsoft Malware Protection Engine, which you can manage through the web-based console.
      · Proactively monitor PCs: Receive alerts on updates and threats so that you can proactively identify and resolve problems with your PCs before they impact end users and your business.
      · Provide remote assistance: Resolve PC issues, regardless of where you or your users are located, with remote assistance.
      · Track hardware and software inventory: Track hardware and software assets used in your business to efficiently manage your assets, licenses, and compliance.
      · Set security policies: Centrally manage update, firewall, and malware protection policies, even on remote machines outside the corporate network.

    And here is a quick video about Windows Intune. For support and questions, go to the TechNet Forums for Intune. Regards, David Nudelman

    Read the article

  • Cloud Computing and Data Utilization

    - by Ahsan Alam
    Someone recently asked me, "Is cloud computing going to change the way we perceive data?" My first instinct was "of course"; but I restrained myself and thought for a moment. Then my answer was "no". Why do I feel that way? Technology and business have evolved quite a bit in the past few decades; however, the need to effectively view and utilize data hasn't changed. It is not uncommon to see many organizations rely on multiple database management systems (DBMS). Applications and systems are often built to utilize information from all these data sources and effectively present it to users. In addition to multiple DBMS, corporations are also housing their systems across numerous data centers. In fact, systems and data can reside anywhere around the world with the advent of globalization. Cloud-based systems have simply provided us a different place to maintain our data, nothing more. Hosting costs, security, and accessibility are different issues; however, the way we utilize and view this data remains the same.

    Read the article

  • Using XML as data storage

    - by Kian Mayne
    I was thinking about the XML format and the following quote:

      "XML is not a database. It was never meant to be a database. It is never going to be a database. Relational databases are proven technology with more than 20 years of implementation experience. They are solid, stable, useful products. They are not going away. XML is a very useful technology for moving data between different databases or between databases and other programs. However, it is not itself a database. Don't use it like one." - Effective XML: 50 Specific Ways to Improve Your XML by Elliotte Rusty Harold (page 230, Part 4, Item 41, 2nd paragraph)

    This seems to really stress that XML should not be used for data storage and should only be used for program-to-program interoperability. Personally, I disagree: .NET's app.config file, which stores a program's settings, is an example of data storage in an XML file. For databases, however, rather than configurations, XML should not be used. To develop my point, I will use two examples:

      A) Data about customers with fields that are all on one level, i.e. a number of fields all relating to one customer, with no children
      B) Data about the configuration of an application, where nested fields and properties make a lot of sense

    So my question is: is this still a valid statement, and is it now acceptable to store data using XML? EDIT: I've sent an email to the author of that quote to ask for his input/extra context.
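    To make example B concrete, the code side of that app.config scenario is just the standard .NET configuration API; a minimal sketch (the key name is hypothetical):

        using System;
        using System.Configuration; // add a reference to System.Configuration.dll

        class Program
        {
            static void Main()
            {
                // Reads <appSettings><add key="RequestTimeout" value="30"/></appSettings>
                // from the application's app.config / MyApp.exe.config.
                string raw = ConfigurationManager.AppSettings["RequestTimeout"];
                int timeout = int.Parse(raw ?? "30"); // default if the key is absent
                Console.WriteLine("Timeout: {0}s", timeout);
            }
        }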

    Read the article

  • Is DQS-in-the-cloud on its way?

    - by jamiet
    LinkedIn profiles are always a useful place to find out what's really going on in Microsoft. Today I stumbled upon this little nugget from former SSIS product team member Matt Carroll:

      March 2012 – December 2012 (10 months), Redmond, WA
      Took ownership of the SQL 2012 Data Quality Services box product and re-architected and extended it to become a cloud service. Led team and managed product to add dynamic scale, security, multi-tenancy, deployment, logging, monitoring, and telemetry as well as creating new Excel add-in and new ecosystem experience around easily sharing and finding cleansing agents. Personally designed, coded, and unit tested in-memory trigram matching algorithm core to better performance, scale and maintainability. Delivered and supported successful private preview of the new service prior to SQL wide reorganization.
      http://www.linkedin.com/profile/view?id=9657184

    Sounds as though a Data-Quality-Services-in-the-cloud (which I spoke of as being a useful addition to Microsoft's BI portfolio in my previous blog post, Thoughts on Power BI for Office 365) might be on its way some time in the future. And what's this SQL-wide reorganization? Interesting stuff. @Jamiet

    Read the article

  • TechEd 2012: A Little Cloud And Too Little Windows Phone

    - by Tim Murphy
    It is Monday afternoon and the last couple of sessions have been disappointing. I started out in the Nokia: Learning to Tile session. I guess I should have read the summary more closely, because it turned out to be more of a Nokia/WP7 history and sales pitch. “I’m outa here!” I made a quick venue change and now we are learning about Private Cloud Architecture. The topic and the material were very informative. The speaker even had a couple of quotable statements. The first quote was “You can trust me … I’m a doctor”. The second was a new acronym (at least for me): CAVE – committee against virtually everything. I am sure I have dealt with them more than once in my career. Unfortunately, he didn’t just have a doctorate; the presentation was overdone like a medical journal. While I didn’t enjoy the presentation, I am looking forward to getting my hands on the slides to review. Here’s looking forward to the next sessions.

    Read the article

  • Oracle Enterprise Manager 12c R3 introduces advancements in cloud lifecycle and operations management

    - by Anand Akela
    Oracle Enterprise Manager 12c Release 3 (R3) was announced (Press Release) earlier today. It is now available for download at OTN. This latest release features improvements in several areas, including:

      · Improvements to Private Cloud and Engineered Systems Management
      · Expanded Middleware and Application Management Capabilities
      · Efficiency Gains for Enterprise Manager Users in EM's Enterprise-Ready Framework

    You can learn more about what's new in Oracle Enterprise Manager 12c R3 in the Enterprise Manager 12c documentation. You will see more blogs and details about the new features during the next few weeks, so please let us know what you think. On July 18th, you can join us at a webcast to hear Thomas Kurian, EVP of Product Development, on what Oracle engineering has achieved with Oracle Enterprise Manager 12c Release 3. Later during this webcast, Oracle experts will discuss the latest capabilities in Oracle Enterprise Manager 12c Release 3 for cloud lifecycle and operations management. The presentation will be followed by a live Q&A session with Oracle experts. You can also join us online on Twitter to get your specific questions answered; please use the hash tag #em12c to join the conversation. Register now for the webcast!

    Read the article

  • Oracle Cloud and Oracle Platinum Services Announcements

    - by kellsey.ruppel
    Live Webcast - Oracle Cloud and Oracle Platinum Services Announcements. Wednesday, June 06, 2012, 1:00 p.m. PT - 2:30 p.m. PT. Please join Larry Ellison and Mark Hurd for important Oracle announcements. Be among the first to learn about new developments in Oracle's cloud strategy and game-changing advances in Oracle Support. Register now to watch the webcast at your desk. Based in the San Francisco Bay Area? Register to attend the live event in Redwood Shores. Join the conversation: #oraclecloud #oraclesupport

    Read the article

  • Oracle as a Service in the Cloud

    - by Jason Williamson
    This should really be a tweet, but I guess I'm writing a bit more. In the theme of data migration and legacy modernization, I am seeing more and more of a push to consolidate data sources, especially from non-Oracle to Oracle, in an effort to save dollars. From a modernization perspective, this fits in quite well. We are able to migrate things like Teradata, Sybase, and DB2, put all of that into an Oracle database, and then provide it as an OaaS (Oracle as a Service) cloud offering. This seems to be a growing trend, not unlike the cool Amazon RDS database cloud being built on Oracle as well. We again find ourselves back at the familiar theme of migration, however. The target technology sounds like a winner (COBOL to Java/SOA ... DB2/Datacom/Adabas to Oracle), but the age-old problem of how to get there still persists. It is not trivial to migrate large amounts of pre-relational or "devolved" relational data. To do this, we again must revert to a tight roadmap for migration and leverage the growing set of tools and services that we have. I'm working on a couple of new sections and chapters for a book that Tom and Prakash and I are writing around database migration and consolidation. I'll share some snippets shortly.

    Read the article

  • Making document storage in SharePoint a breeze (leave the Web UI behind)

    - by deadlydog
    Hey everyone, I know many of us regularly use SharePoint for document storage in order to make documents available to several people, have them version controlled, etc. Doing this through the Web UI can be a real headache, especially when you have multiple documents to modify or upload, or when IE isn't your default browser. Luckily, we can access a SharePoint library like a regular network drive if we like. Open SharePoint in Internet Explorer (other browsers don't support the Open with Explorer functionality), navigate to wherever your documents are stored, choose the Library tab, and then click Open with Explorer. This will open the document storage in Explorer, and you can interact with the documents just as if they were on any other network drive. This makes uploading large numbers of documents or directory structures super easy (a simple copy-paste) and makes modifying your files nice and easy. As an added bonus, you can drag and drop that location from the address bar in Explorer to the Favorites menu so that it's always easily accessible, and you can leave the SharePoint Web UI behind completely for modifying your documents. Just click on the new favorite to go straight to your documents. You can even map this folder location as a network drive if you want to have it show up as another drive (e.g. N:). I hope you found this as useful as I did.
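    That drive-mapping step can be scripted, too. SharePoint document libraries are exposed over WebDAV, so something along these lines should work from a command prompt (the server and site names are hypothetical; the @SSL\DavWWWRoot form is the standard WebDAV path shape for HTTPS sites):

        net use N: "\\sharepoint.example.com@SSL\DavWWWRoot\sites\team\Shared Documents" /persistent:yes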

    Read the article

  • So You Want To Build a SPARC Cloud

    - by user12601629
    Did you ever wish you could get the industrial-strength power of UNIX/RISC with the flexibility of cloud computing? Well, now you can! With recent advances from Oracle, it's possible to build an incredibly high-performance, flexible, available virtualized infrastructure based on Solaris and SPARC. Here's the recipe! Authored in collaboration across the Oracle Systems Group team, we now have a complete best-practice guide for you. Click below to download it: Best Practices for Building a Virtualized SPARC Computing Environment. Inside you'll find recommendations for how and when to leverage technologies like:

      · SPARC T4
      · OVM for SPARC hypervisor (version 2.2 and newer)
      · Solaris 11
      · Ops Center 12c
      · ZFS Storage Appliance
      · Oracle network switches

    By following these best practices, you'll be able to construct a dynamic, virtualized infrastructure that allows for:

      · Easy, GUI-based provisioning of new VMs
      · Automated HA failover in the event of physical server failures
      · Automatic load balancing across a cluster of VM hosts
      · Complete end-to-end monitoring

    You should download this paper and check it out. Even if you aren't planning on buying all new hardware and instead want to transform some existing gear into a dynamic virtualized environment, this paper will give you concrete info on what to do and the trade-offs you'll make. Have fun getting started on your journey to build a SPARC cloud!

    Read the article

  • Many users, many CPUs, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-critical query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, what cloud vendor(s) cater to this type of application? I ask specifically in terms of:

      1) pricing
      2) latency resulting from:
         · slow CPUs, instance creation, JIT compiles, etc.
         · internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process)
         · communication between cloud and end user
      3) ease of deployment

    The usage scenario I am expecting:

      · A typical user sends a query (XML of size around 1K) once every 30 seconds on average.
      · Each query requires a numerical computation of average time 0.2 sec and max time 1 sec on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time.
      · The delay a user experiences between sending a query and receiving a response should be on average no more than 2 seconds, and in general no more than 5 seconds.
      · A background save of the response to a DB should occur (not time-critical).
      · There can be up to 30000 simultaneous users - i.e., on average 1000 queries a second, each requiring an average 0.2 sec calculation, so that would necessitate around 200 CPUs.

    Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (for speed and price optimization) as options. Where can I learn more about the right way to set up such a system? Past threads, different blogs, books, etc. BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.
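    The question's own numbers already pin down the raw capacity requirement; a quick back-of-the-envelope check (the 70% target utilization is an assumption):

        using System;

        class Capacity
        {
            static void Main()
            {
                int users = 30000;
                double queriesPerUserPerSec = 1.0 / 30;  // one query every 30 s on average
                double cpuSecPerQuery = 0.2;             // average compute per query

                double qps = users * queriesPerUserPerSec;   // = 1000 queries/s
                double busyCpus = qps * cpuSecPerQuery;      // = 200 fully busy CPU-equivalents
                double provisioned = busyCpus / 0.7;         // ~286 CPUs at 70% target utilization

                Console.WriteLine("{0} qps -> {1} busy CPUs, ~{2:F0} provisioned", qps, busyCpus, provisioned);
            }
        }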

    Read the article
