Search Results

Search found 9847 results on 394 pages for 'cloud backup'.

Page 132 of 394

  • More Great Improvements to the Windows Azure Management Portal

    - by ScottGu
    Over the last 3 weeks we’ve released a number of enhancements to the new Windows Azure Management Portal. These new capabilities include:

      • Localization support for 6 languages
      • Operation Log support
      • Support for SQL Database metrics
      • Virtual Machine enhancements (quick create Windows + Linux VMs)
      • Web Site enhancements (support for creating sites in all regions, private GitHub repo deployment)
      • Cloud Service improvements (deploy from storage account, configuration support for the dedicated cache)
      • Media Services enhancements (upload, encode, publish, and stream all from within the portal)
      • Virtual Networking usability enhancements
      • Custom CNAME support with Storage Accounts

    All of these improvements are now live in production and available to start using immediately. Below are more details on them.

    Localization Support
    The Windows Azure Portal now supports 6 languages – English, German, Spanish, French, Italian and Japanese. You can easily switch between languages by clicking on the avatar bar in the top right corner of the Portal. Selecting a different language will automatically refresh the UI within the portal in the selected language.

    Operation Log Support
    The Windows Azure Portal now supports the ability for administrators to review the “operation logs” of the services they manage – making it easy to see exactly what management operations were performed on them. You can query for these by selecting the “Settings” tab within the Portal and then choosing the “Operation Logs” tab within it. This displays a filter UI that enables you to query for operations by date and time. As of the most recent release we now show logs for all operations performed on Cloud Services and Storage Accounts. You can click on any operation in the list and click the “Details” button in the command bar to retrieve detailed status about it. This now makes it possible to retrieve details about every management operation performed. In future updates you’ll see us extend the operation log capability to apply to all Windows Azure services – which will enable great post-mortem and audit support.

    Support for SQL Database Metrics
    You can now monitor the number of successful connections, failed connections and deadlocks in your SQL databases using the new “Dashboard” view provided on each SQL Database resource. Additionally, if the database is added as a “linked resource” to a Web Site or Cloud Service, monitoring metrics for the linked SQL database are shown along with the Web Site or Cloud Service metrics in the dashboard. This helps with viewing and managing aggregated information across both resources in your application.

    Enhancements to Virtual Machines
    The most recent Windows Azure Portal release brings with it some nice usability improvements to Virtual Machines, including an integrated Quick Create experience for Windows and Linux VMs. Creating a new Windows or Linux VM is now easy using the new “Quick Create” experience in the Portal, and in addition to Windows VM templates you can also now select Linux image templates in the quick create UI. This makes it incredibly easy to create a new Virtual Machine in only a few seconds.

    Enhancements to Web Sites
    Prior to this past month’s release, users were forced to choose a single geographical region when creating their first site, and subsequent sites could only be created in that same region. This restriction has now been removed: you can create sites in any region at any time and have up to 10 free sites in each supported region. One of the new regions we’ve recently opened up is the “East Asia” region, which allows you to deploy sites to North America, Europe and Asia simultaneously.

    Private GitHub Repository Support
    This past week we also enabled Git-based continuous deployment support for Web Sites from private GitHub and Bitbucket repositories (previously you could only enable this with public repositories).

    Enhancements to the Cloud Services Experience
    The most recent Windows Azure Portal release also brings some nice usability improvements to Cloud Services.

    Deploy a Cloud Service from a Windows Azure Storage Account: The Windows Azure Portal now supports deploying an application package and configuration file stored in a blob container in Windows Azure Storage. The ability to upload an application package from storage is available when you custom create, upload to, or update a cloud service deployment. To upload an application package and configuration, create a Cloud Service, then select the file upload dialog and choose to upload from a Windows Azure Storage Account. Click the “FROM STORAGE” button and select the application package and configuration file to use from the new blob storage explorer in the portal.

    Configure Windows Azure Caching in a caching-enabled cloud service: If you have deployed the new dedicated cache within a cloud service role, you can also now configure the cache settings in the portal by navigating to the configuration tab for your Cloud Service deployment. The configuration experience is similar to the one in Visual Studio when you create a cloud service and add a caching role. The portal now allows you to add or remove named caches and change the settings for the named caches – all from within the Portal and without needing to redeploy your application.

    Enhancements to Media Services
    You can now upload, encode, publish, and play your video content directly from within the Windows Azure Portal. This makes it incredibly easy to get started with Windows Azure Media Services and perform common tasks without having to write any code. Simply navigate to your media service and then click on the “Content” tab; all of the media content within your media service account will be listed there. Clicking the “Upload” button within the portal now allows you to upload a media file directly from your computer, which causes the video file you chose from your local file-system to be uploaded into Windows Azure. Once uploaded, you can select the file within the content tab of the Portal and click the “Encode” button to transcode it into different streaming formats; the portal includes a number of pre-set encoding formats that you can easily convert media content into. Once you select an encoding and click the OK button, Windows Azure Media Services will kick off an encoding job that runs in the cloud (no need for you to stand up or configure a custom encoding server). When it’s finished, you can select the video in the “Content” tab and then click “Publish” in the command bar to set up an origin streaming endpoint for it. Once the media file is published you can point apps at the public URL and play the content using Windows Azure Media Services – no need to set up or run your own streaming server. You can also now select the file and click the “Play” button in the command bar to play it using the streaming endpoint directly within the Portal. This makes it incredibly easy to try out and use Windows Azure Media Services and test an end-to-end workflow without having to write any code. Once you test things out you can of course automate it using script or code – providing you with an incredibly powerful cloud media platform that you can use.

    Enhancements to the Virtual Network Experience
    Over the last few months, we have received feedback on the complexity of the Virtual Network creation experience. With these most recent Portal updates, we have added a Quick Create experience that makes creation very simple. All an administrator now needs to do is provide a VNET name and choose an address space and the size of the VNET address space. They no longer need to understand the intricacies of the CIDR format or walk through a 4-page wizard to create a VNET/subnet. This makes creating virtual networks really simple. The portal also now has a “Register DNS Server” task that makes it easy to register DNS servers and associate them with a virtual network.

    Enhancements to the Storage Experience
    The portal now lets you register custom domain names for your Windows Azure Storage Accounts. To enable this, select a storage resource, go to the CONFIGURE tab for the storage account, and click MANAGE DOMAIN on the command bar. This brings up a dialog that allows you to register any CNAME you want.

    Summary
    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it. One of the other cool features that is now live within the portal is our new Windows Azure Store – which makes it incredibly easy to try and purchase developer services from a variety of partners. It is an incredibly awesome new capability – and something I’ll be doing a dedicated post about shortly.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
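    For the “deploy from storage” flow above, getting the .cspkg and .cscfg into a blob container is easy to script. Below is a minimal sketch using the azure-storage-blob Python package (an assumption on my part: the package postdates this post, and the connection string, container name, and file names are placeholders, not anything the portal requires):

      # a minimal sketch, assuming the azure-storage-blob package;
      # connection string, container name, and file names are placeholders
      from azure.storage.blob import BlobServiceClient

      svc = BlobServiceClient.from_connection_string("<storage-connection-string>")
      container = svc.get_container_client("deployments")
      for name in ("MyApp.cspkg", "ServiceConfiguration.cscfg"):
          with open(name, "rb") as f:
              # upload each deployment artifact, replacing any older copy
              container.upload_blob(name=name, data=f, overwrite=True)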

    Read the article

  • Oracle Manageability Presentations at Collaborate 2012

    - by Get_Specialized!
    Attending the Collaborate 2012 event, April 22-26th in Las Vegas, and interested in learning more about becoming specialized on Oracle Manageability? Be sure to check out the sessions below, presented by subject matter experts, while you're onsite. Set up a meeting, or be one of the first Oracle partners onsite to ask me, and we'll request one of the limited FREE Oracle Enterprise Manager 12c partner certification exam vouchers for you. Can't travel this year? COLLABORATE 12 "Plug Into Vegas" may be another option, letting you attend from your own desk presentations like session #489, "Oracle Enterprise Manager 12c: What's Changed? What's New?", presented by Oracle specialized partners like ROLTA.

      Session ID | Title | Presented by | Day/Time
      920        | Enterprise Manager 12c Cloud Control: New Features and Best Practices | Dell | Sun
      9536       | Release 12 Apps DBA 101 | Justadba, LLC | Mon
      932        | Monitoring Exadata with Cloud Control | Oracle | Mon
      397        | OEM Cloud Control Hands On Performance Tuning | | Mon
      118        | Oracle BI Sys Mgmt Best Practices & New Features | Rittman Mead Consulting | Mon
      548        | High Availability Boot Camp: RAC Design, Install, Manage | Database Administration, Inc | Mon
      926        | The Only Complete Cloud Management Solution -- Oracle Enterprise Manager | Oracle | Mon
      328        | Virtualization Boot Camp | Dell | Mon
      292        | Upgrading to Oracle Enterprise Manager 12c - Best Practices | Southern Utah University | Mon
      793        | Exadata 101 - What You Need to Know | Rolta | Tue
      431 & 1431 | Extreme Database Administration: New Features for Expert DBAs | Oracle | Tue, Wed
      521        | What's New for Oracle WebLogic Management: Capabilities that Scripting Cannot Provide | Oracle | Thu
      338        | Oracle Real Application Testing: A look under the hood | PayPal | Tue
      9398       | Reduce TCO Using Oracle Application Management Suite for Oracle E-Business Suite | Oracle | Tue
      312        | Configuring and Managing a Private Cloud with Oracle Enterprise Manager 12c | Dell | Tue
      866        | Making OEM Sing and Dance with EMCLI | Portland General Electric | Tue
      533        | Oracle Exadata Monitoring: Engineered Systems Management with Oracle Enterprise Manager | Oracle | Wed
      100600     | Optimizing EnterpriseOne System Administration | Oracle | Wed
      9565       | Optimizing EBS on Exadata | Centroid Systems | Wed
      550        | Database-as-a-Service: Enterprise Cloud in Three Simple Steps | Oracle | Wed
      434        | Managing Oracle: Expert Panel on Techniques and Best Practices | Oracle Partners: Dell, Keste, ROLTA, Pythian | Wed
      9760       | Cloud Computing Directions: Understanding Oracle's Cloud | AT&T | Wed
      817        | Right Cloud: Use Oracle Technologies to Avoid False Cloud | Visual Integrator Consulting | Wed
      163        | Forgetting something? Standardize your database monitoring environment with Enterprise Manager 11g | Johnson Controls | Wed
      489        | Oracle Enterprise Manager 12c: What's Changed? What's New? | ROLTA | Thu

    Read the article

  • Parsing a blackberry .ipd file

    - by galaxywatcher
    I recently lost my Blackberry. When I discovered it was gone, very shortly afterwards, and called it, the sim card had already been removed. I ain't seeing that Blackberry again. OK, I am out $300, but at least my data is backed up. Fortunately I had an older working Blackberry, so I got a new sim card and proceeded to restore my data using Blackberry Desktop Manager. 7000+ emails, hundreds of AutoText entries, SMS messages, calendar events, all backing up. Looking good. Lo and behold, my Address Book contacts refuse to back up. I try Advanced, and it is greyed out as an option to restore. Far more frustrating than losing my Blackberry in the first place is wrangling with software that defies human logic. OK, now I guess I will have to enter all 327 names by hand. That is, if I can read the .ipd file. I have tried the free version of ABC Amber Blackberry editor, but when I open the .ipd file, the contacts just do not show up. I am beginning to feel like the gods are conspiring against me. Then I found this: http://jabide.com/2009/03/parse-blackberry-ipd-files/ He posted a Perl script that claims to extract the files. I copied and pasted the code, and it did list all the different databases in my .ipd file; I was elated that a cool solution like this was published. But when I followed the instructions, garbled data with some discernible ASCII was sent to standard output instead of the .csv file he said it would produce. This is enough to make a grown man cry. Does anyone out there have a solution to extract my address book contacts from an .ipd file?
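    The .ipd header and database-name table are simple enough to parse directly, which at least confirms whether the Address Book database made it into the backup. Here is a minimal sketch based on the commonly documented, reverse-engineered layout of the format; the field sizes and offsets are assumptions from that documentation, so treat them with suspicion:

      # a sketch: list the databases in an .ipd backup
      # (layout per reverse-engineered docs; field sizes are assumptions)
      import struct

      with open("backup.ipd", "rb") as f:
          data = f.read()

      # the file starts with a fixed magic string ending in a line feed
      assert data.startswith(b"Inter@ctive Pager Backup/Restore File\n")
      pos = data.index(b"\n") + 1
      version = data[pos]; pos += 1
      (num_dbs,) = struct.unpack_from(">H", data, pos); pos += 2  # big-endian count
      pos += 1  # separator byte

      names = []
      for _ in range(num_dbs):
          (nlen,) = struct.unpack_from("<H", data, pos); pos += 2  # little-endian length
          names.append(data[pos:pos + nlen].rstrip(b"\x00").decode("latin-1"))
          pos += nlen

      print(f"version {version}, {num_dbs} databases:")
      for i, name in enumerate(names):
          print(i, name)  # "Address Book" should appear here if it was backed up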

    Read the article

  • Undo table updates in SQL Server 2008

    - by sikas
    I accidentally updated a table in my MS SQL Server 2008 database: I was copying values from another table cell by cell, but I overwrote the original table. Is there a way to restore my table contents to what they were?
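    If the database is in the FULL recovery model and you have a full backup plus log backups, one common approach is to restore a copy of the database to just before the accidental update and copy the old values back. A sketch via pyodbc; the database names, file paths, STOPAT timestamp, and the table's columns are all hypothetical:

      # a sketch: point-in-time restore of a COPY, then pull the old rows back
      # (assumes FULL recovery model, a full backup, and log backups exist)
      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
          "Trusted_Connection=yes", autocommit=True)  # RESTORE cannot run in a transaction
      cur = conn.cursor()

      cur.execute(r"""
      RESTORE DATABASE MyDb_Recovered
      FROM DISK = 'C:\backups\MyDb_full.bak'
      WITH MOVE 'MyDb' TO 'C:\data\MyDb_Recovered.mdf',
           MOVE 'MyDb_log' TO 'C:\data\MyDb_Recovered.ldf',
           NORECOVERY""")
      cur.execute(r"""
      RESTORE LOG MyDb_Recovered
      FROM DISK = 'C:\backups\MyDb_log.trn'
      WITH STOPAT = '2010-05-01T13:59:00', RECOVERY""")

      # copy the pre-update values back into the live table
      cur.execute("""
      UPDATE live SET SomeColumn = old.SomeColumn
      FROM MyDb.dbo.MyTable AS live
      JOIN MyDb_Recovered.dbo.MyTable AS old ON old.Id = live.Id""")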

    Read the article

  • Filesystem synchronization library?

    - by IsaacB
    Hi, I've got 10 GB of files to back up daily to another site. The client is way out in the country, so bandwidth is an issue. Does anyone know of any existing software or libraries that help keep a folder and its files synchronized across a slow link, that is, software that only sends files across if they have changed? Some kind of hash checking would be nice, too, to at least confirm the two sides are the same. I don't mind paying some money for it, seeing as it might take me several weeks to a month to implement something decent on my own. I just don't want to reinvent the wheel here. BTW it is a Windows shop (they have an in-house Windows IT guy), so Windows is preferred. I also have 10 GB of SQL Server 2000 databases to go across. Is SQL Server's replication mode reliable? Thanks!
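    On the hash-checking point: tools like rsync already do delta transfer, but an independent end-to-end check is easy to bolt on. A sketch that builds a manifest of SHA-256 file hashes, generated the same way on both sides and then diffed (the paths are hypothetical):

      # a sketch: hash every file under a root so two sites can compare manifests
      import hashlib, os, json

      def manifest(root):
          out = {}
          for dirpath, _, files in os.walk(root):
              for name in files:
                  path = os.path.join(dirpath, name)
                  h = hashlib.sha256()
                  with open(path, "rb") as f:
                      while chunk := f.read(1 << 20):  # hash in 1 MiB chunks
                          h.update(chunk)
                  out[os.path.relpath(path, root)] = h.hexdigest()
          return out

      json.dump(manifest(r"D:\replica"), open("manifest.json", "w"), indent=1)
      # run the same script at the other site and diff the two manifest files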

    Read the article

  • What does the BucketAlreadyOwnedByYou error (from Amazon S3) actually mean? I can't find any reason affe

    - by Phyo Wai Win
    Hi there, I am using Amazon S3 to back up my Rails app's MySQL database, using the astrails-safe plugin, and I get the "Your previous request to create the named bucket succeeded and you already own it. (AWS::S3::BucketAlreadyOwnedByYou)" error back whenever I try to update it. I have checked that the bucket I am backing up into already exists in my account. It's just that I can't upload the files from the code (using astrails-safe). Any help would be appreciated! Thanks.
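    For what it's worth, BucketAlreadyOwnedByYou means exactly what it says: a create-bucket request was repeated for a bucket you already own, and outside us-east-1 S3 treats that as an error rather than a no-op. astrails-safe drives Ruby's AWS::S3, but the behavior is easiest to show in a neutral boto3 sketch (my choice of library, not the plugin's code); the usual handling is to catch and ignore that specific error code:

      # a sketch: create-if-missing, tolerating "already owned by you"
      import boto3
      from botocore.exceptions import ClientError

      s3 = boto3.client("s3", region_name="eu-west-1")
      try:
          s3.create_bucket(
              Bucket="my-backup-bucket",  # placeholder name
              CreateBucketConfiguration={"LocationConstraint": "eu-west-1"})
      except ClientError as e:
          if e.response["Error"]["Code"] == "BucketAlreadyOwnedByYou":
              pass  # harmless: the bucket exists and we own it
          else:
              raise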

    Read the article

  • Tools to work with App Engine data dumps

    - by Thilo
    Using the bulkloader.py utility you can download all data from your application's Datastore. It is not obvious how the data is stored, however. From the looks of it, you get a SQLite file with all data in binary format in a single table:

      sqlite> .tables
      bulkloader_database_signature  result
      sqlite> .schema result
      CREATE TABLE result (
          id BLOB primary key,
          value BLOB not null,
          sort_key BLOB);

    Are there any tools to work with this data?
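    One approach, assuming the App Engine SDK of that era is importable: the value column appears to hold serialized EntityProto protocol buffers, so rows can be decoded back into datastore entities. A sketch (the dump file name is hypothetical, and the proto assumption is based on how the bulkloader stored results at the time):

      # a sketch: decode bulkloader result rows back into entities
      # (assumes the Python App Engine SDK is on sys.path)
      import sqlite3
      from google.appengine.datastore import entity_pb
      from google.appengine.api import datastore

      conn = sqlite3.connect("bulkloader-results.sql3")
      for _id, value, _sort in conn.execute("SELECT id, value, sort_key FROM result"):
          proto = entity_pb.EntityProto(contents=value)  # parse the stored proto bytes
          entity = datastore.Entity.FromPb(proto)        # rebuild a datastore entity
          print(entity.kind(), dict(entity))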

    Read the article

  • Importing a project into an SVN repository question

    - by ajsie
    I'm using NetBeans for SVN. I open a project in NetBeans and then import it into an SVN repo. It seems that although I'm only importing the project folder, SVN creates .svn folders in all folders within this project folder. Why is that? I thought .svn folders were only created in checked-out projects, not imported ones. Now this folder acts very weird: when I open it as a project in NetBeans, NetBeans treats it like an SVN folder in some way. Is this normal? Because I want this one to not be under SVN.
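    If the goal is just an unversioned copy, either run svn export to produce a clean tree, or strip the metadata folders from the existing one. A sketch of the latter (the project path is hypothetical):

      # a sketch: remove working-copy metadata so the folder is no longer under SVN
      import os, shutil

      for root, dirs, files in os.walk("MyProject", topdown=True):
          if ".svn" in dirs:
              shutil.rmtree(os.path.join(root, ".svn"))  # delete the metadata dir
              dirs.remove(".svn")                        # and don't descend into it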

    Read the article

  • Restoring MySQL database from physical files

    - by Abdullah Jibaly
    Is it possible to restore a MySQL database from the physical database files? I have a directory with the following file types: client.frm, client.MYD, client.MYI, and the same for about 20 more tables. I usually use mysqldump or a similar tool to get everything in one SQL file, so what is the way to deal with these types of files?
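    .frm/.MYD/.MYI triplets are MyISAM tables, and MyISAM tables are self-contained files, so usually you can copy them into a database directory under the server's datadir, fix ownership, and check/repair each table. A sketch; the datadir and source paths are assumptions, and the server should be stopped (or the tables flushed and locked) while you copy:

      # a sketch: drop MyISAM files into a fresh database directory (Linux paths)
      import glob, os, shutil, subprocess

      datadir = "/var/lib/mysql/restored_db"   # assumption: the server's datadir
      os.makedirs(datadir, exist_ok=True)
      for path in glob.glob("/backup/dbfiles/*"):
          if path.endswith((".frm", ".MYD", ".MYI")):
              shutil.copy2(path, datadir)
      subprocess.run(["chown", "-R", "mysql:mysql", datadir], check=True)
      # then, inside mysql: CHECK TABLE client; REPAIR TABLE client;  (per table)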

    Read the article

  • Mercurial between server and local?

    - by artmania
    I have a portal development project in progress... I've had some troubles from time to time, like losing files or overwriting the wrong files, etc. So I decided to go with Mercurial for this development, my first experience with source control. I work on the server [Bluehost] for this project. Is there any way to keep updated backups locally? Do I have to set up Mercurial on Bluehost? Any way to sync changes on the server to my local Mac?
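    Mercurial is distributed, so a local clone is itself a full backup of the repository. If Bluehost gives you SSH access with hg installed server-side, a sketch of the pull cycle run from the Mac (the URL and paths are hypothetical):

      # a sketch: mirror the server-side repo to the local Mac over SSH
      import subprocess

      REMOTE = "ssh://user@example.bluehost.com/projects/portal"
      LOCAL = "/Users/me/backups/portal"

      subprocess.run(["hg", "clone", REMOTE, LOCAL], check=False)           # first run only
      subprocess.run(["hg", "pull", "-u", REMOTE], cwd=LOCAL, check=True)   # every run after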

    Read the article

  • SQL Server 2005: When copying table structure to another database, "CONSTRAINT" keywords are lost

    - by StreamT
    Snippet of the original table:

      CREATE TABLE [dbo].[Batch](
          [CustomerDepositMade] [money] NOT NULL
              CONSTRAINT [DF_Batch_CustomerDepositMade] DEFAULT (0)

    Snippet of the copied table:

      CREATE TABLE [dbo].[Batch](
          [CustomerDepositMade] [money] NOT NULL,

    Code for copying the database:

      Server server = new Server(SourceSQLServer);
      Database database = server.Databases[SourceDatabase];
      Transfer transfer = new Transfer(database);
      transfer.CopyAllObjects = true;
      transfer.CopySchema = true;
      transfer.CopyData = false;
      transfer.DropDestinationObjectsFirst = true;
      transfer.DestinationServer = DestinationSQLServer;
      transfer.CreateTargetDatabase = true;
      Database ddatabase = new Database(server, DestinationDatabase);
      ddatabase.Create();
      transfer.DestinationDatabase = DestinationDatabase;
      transfer.Options.IncludeIfNotExists = true;
      transfer.TransferData();

    Read the article

  • Strange SQL Server 2005 behavior

    - by Justin C
    Background: I have a site built in ASP.NET with SQL Server 2005 as its database. The site is the only site on a Windows Server 2003 box sitting in my client's server room. The client is a local school district, so for data-security reasons there is no remote desktop access and no remote SQL Server connection; if I have to service the database I have to be at the terminal. I do have FTP access to update ASP code. Problem: I was contacted yesterday about an issue with the system. When I looked into it, it seems a bug that I had solved nearly a year ago had returned. I have a stored procedure that used to take an int as a parameter, but a year ago we changed the structure of the system and updated the stored procedure to take an nvarchar(10). The stored procedure somehow changed back to taking an int instead of an nvarchar. There is an external hard drive connected to the server that copies data periodically and has the ability to restore the server in case of failure. I would have assumed that somehow an older version of the database had been restored, but data that I know was inserted 7 days and 1 day before the bug occurred is still in the database. Question: Is there any way that the structure of a SQL Server 2005 database can revert to a previous version, or be restored to a previous version, without touching the actual data? No one else should have access to the server, so I'm going a little insane trying to figure out how this even happened. Any ideas?

    Read the article

  • Download databasename.bak file

    - by Jordon
    I have downloaded a databasename.bak file from my hosting company. When I try to restore that DB file in SQL Server 2008 it keeps giving me the following error:

      The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed.
      SQL Server cannot process this media family.
      RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241)

    According to this error, and to the following link http://www.sqlcoffee.com/Troubleshooting047.htm, it is clear that either the file I am downloading is corrupt or it is getting corrupted on the way. Any idea why I keep receiving this error? I have tried almost every way but am unable to fix this problem. Please help me.
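    A common culprit for error 3241 on a downloaded .bak is an FTP transfer in ASCII mode mangling the binary file (a version mismatch, restoring a backup taken on a newer SQL Server onto an older one, produces a similar failure). One quick test is to compare a hash of the local copy against one computed on the server. A sketch of the local half, using the path from the error message; the server-side hash has to come from the host:

      # a sketch: hash the downloaded backup to compare against the server's copy
      import hashlib

      def sha256(path, chunk=1 << 20):
          h = hashlib.sha256()
          with open(path, "rb") as f:
              while block := f.read(chunk):
                  h.update(block)
          return h.hexdigest()

      print(sha256(r"C:\go4sharepoint_1384_8481.bak"))
      # if this differs from the hash computed on the host, the transfer corrupted it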

    Read the article

  • Does anyone have experience of using the new p4 replicate command in their Perforce back-up /rest

    - by Thomas Corriol
    Hi all, we recently performed an upgrade of our whole Perforce system to 2009.2. During this exercise, we noticed that the backup/restore process that was installed here by the Perforce consultant a year ago was not completely working. Basically, the verify command has never worked (scary!). As we are obliged to revisit our backup/restore scripts, I was toying with the idea of using the new p4 replicate command. The idea is to use it alongside an rsync of the data files, so that in case of a crash we will lose at worst an hour of work (if we execute them every hour). Does anyone have experience with, or an example of, backup/restore scripts using the p4 replicate command of the 2009.2 version? Thanks, Thomas
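    Not a full answer, but here is a sketch of the verify-plus-rsync half of such a scheme; p4 replicate itself is left out because its invocation depends on your journal setup, and the host names and paths below are hypothetical:

      # a sketch: integrity check, then mirror the archive files to the standby
      import subprocess

      # p4 verify recomputes archive checksums; -q prints only the failures
      subprocess.run(["p4", "verify", "-q", "//..."], check=True)
      # mirror the versioned file archives (p4 replicate covers the metadata side)
      subprocess.run(["rsync", "-a", "--delete",
                      "/perforce/depot/", "backup@standby:/perforce/depot/"],
                     check=True)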

    Read the article

  • backing up user data

    - by shani
    In my app the user saves data in an archive using Core Data and SQLite.
    1. Is there a way to let him back up his data and restore it in the future?
    2. Is the user's data backed up with the regular iPhone backup?
    Thanks, shani

    Read the article

  • Syncing a large personal school-material Git repo with things such as casual notes? Rsync, wget and Git -- or some ready-made tool?

    - by hhh
    My friend wants to store her school notes electronically and process them fast, with backups. She already has a repo over 2 GB in size and it is growing all the time (mostly appended material, i.e. more school notes in different formats: PDF, pictures both scanned and not, some text files, etc.). The goal is to process the notes fast. I suggested a command like this: "# crontab -e @weekly wget --random-wait -e robots=off -U mozilla --mirror http://VeryLong.com". But I think plugging in rsync somewhere could make it much better with Git; a sketch of the weekly cycle follows the related links below. How would you help my friend process and store the school material under Git version control while still keeping the size reasonable? Perhaps related:
      • rsync .git directory
      • rsync git big repository
      • Different scope: Git/rsync mix for projects with large binaries and text files
      • What's a good way to organize a large collection of personal scripts using git?
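    A sketch of that weekly cycle, combining the wget mirror with a Git snapshot. The URL comes from the question; the repo path is hypothetical, and note that large binaries will still bloat a Git repo, which is where rsync to plain backup storage earns its keep:

      # a sketch: refresh the mirror, then snapshot whatever changed
      import subprocess

      REPO = "/home/friend/school-notes"   # hypothetical Git working tree
      subprocess.run(["wget", "--random-wait", "-e", "robots=off", "-U", "mozilla",
                      "--mirror", "http://VeryLong.com"], cwd=REPO, check=True)
      subprocess.run(["git", "add", "-A"], cwd=REPO, check=True)
      # check=False: git commit exits nonzero when there is nothing new to commit
      subprocess.run(["git", "commit", "-m", "weekly snapshot"], cwd=REPO, check=False)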

    Read the article

  • How do I back up a remote SVN repository

    - by Roaders
    Hi all, I am currently moving my SVN server from my home server to my remote server so I can access it more easily from other locations. My remote server is not backed up, so I want to back it up regularly to my home server. The remote server is Windows Server 2003; the home server is Windows Home Server. What is the best way to do this? Can I get my home server to take a dump of the remote server every night? Bandwidth isn't a huge consideration, but if I could just copy any new checkins to an SVN server on my home server, that would be fine. Any suggestions welcome. Thanks
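    Two standard options are a nightly svnadmin dump --incremental pushed over the wire, or an svnsync mirror, which only transfers revisions added since the last run and so matches the "just copy any new checkins" idea. A sketch of the svnsync route driven from the home server; URLs and paths are hypothetical, and the mirror repository needs a pre-revprop-change hook that exits 0 before init will succeed:

      # a sketch: maintain a read-only mirror of the remote repo with svnsync
      import subprocess

      MIRROR_PATH = r"D:\svn-mirror\myrepo"
      MIRROR_URL = "file:///D:/svn-mirror/myrepo"
      SOURCE_URL = "https://remote-server/svn/myrepo"

      # one-time setup (remember the pre-revprop-change hook on the mirror)
      subprocess.run(["svnadmin", "create", MIRROR_PATH], check=True)
      subprocess.run(["svnsync", "init", MIRROR_URL, SOURCE_URL], check=True)
      # nightly: pull over only the revisions committed since the last sync
      subprocess.run(["svnsync", "sync", MIRROR_URL], check=True)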

    Read the article

  • Keeping track of dirty blocks on a block device

    - by mikeY
    I'm looking for a way to keep track of which blocks on a block device are modified after a point in time. What I eventually want to use this for is keeping two 2 TB disks in sync, one of which only comes online (connected through USB) once a month. Without knowing which blocks have been modified, I have to go through the whole 2 TB every time. I'm using a recent GNU/Linux OS and have C and Python experience. I'm hoping to avoid writing kernel-level code, as I don't have any experience in that area whatsoever. My current theory is that there should be hooks somewhere where my code can get called when a disk flush is performed. Any ideas?
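    Short of kernel hooks (the device-mapper snapshot machinery can track writes at the block level), a userspace compromise is to keep a checksum per fixed-size block: each run still reads the whole local disk, but only the changed blocks have to cross to the USB disk. A sketch; the device path and block size are assumptions:

      # a sketch: per-block checksums diffed against the previous scan
      import hashlib, os, pickle

      DEV, BLOCK = "/dev/sdb", 1 << 20   # hypothetical device, 1 MiB blocks

      def scan(path):
          hashes = []
          with open(path, "rb") as f:
              while chunk := f.read(BLOCK):
                  hashes.append(hashlib.sha1(chunk).digest())
          return hashes

      old = pickle.load(open("state.pkl", "rb")) if os.path.exists("state.pkl") else []
      new = scan(DEV)
      # a block is dirty if its hash changed or it didn't exist last time
      dirty = [i for i, h in enumerate(new) if i >= len(old) or old[i] != h]
      pickle.dump(new, open("state.pkl", "wb"))
      print(f"{len(dirty)} dirty blocks of {len(new)}")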

    Read the article

  • How to version control data stored in MySQL

    - by Shawn
    I'm trying to use a simple MySQL database but tweak it so that every field is backed up to an indefinite number of versions. The best way I can illustrate this is by replacing each and every field of every table with a stack of all the values this field has ever had (each of these values should be timestamped). I guess it's kind of like having customized version control for all my data. Any ideas on how to do this?
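    The usual pattern is a shadow "history" table per audited table, plus a trigger that stacks a timestamped copy of the old row on every update; reading the stack back is then an ORDER BY on the timestamp. A sketch using mysql-connector-python; the table and its columns (customer: id, name, email) and the credentials are hypothetical:

      # a sketch: history table + BEFORE UPDATE trigger (run once per audited table)
      import mysql.connector

      conn = mysql.connector.connect(user="root", database="mydb")  # placeholders
      cur = conn.cursor()

      # shadow table: same columns, no unique keys, plus a version timestamp
      cur.execute("CREATE TABLE customer_history AS SELECT * FROM customer WHERE 0")
      cur.execute("ALTER TABLE customer_history ADD COLUMN changed_at TIMESTAMP "
                  "DEFAULT CURRENT_TIMESTAMP")

      # push the old row onto the stack every time the live row changes
      cur.execute("""
          CREATE TRIGGER customer_audit BEFORE UPDATE ON customer FOR EACH ROW
          INSERT INTO customer_history (id, name, email)
          VALUES (OLD.id, OLD.name, OLD.email)
      """)
      conn.commit()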

    Read the article

  • Is this safe? Is this OK to do in MySQL?

    - by alex
    I have always done this:

      mysqldump -hlocalhost -uuser -ppass MYDATABASE > /home/f/db_backup/MYDATABASE.sql
      mysql -uuser -ppass MYDATABASE < MYDATABASE.sql

    But if I do this instead... is this safe? Is it identical to the above?

      mysqldump -hlocalhost -uuser -ppass MYDATABASE | gzip > /home/f/db_backup/MYDATABASE.sql.gz
      zcat MYDATABASE.sql.gz | mysql -uuser -ppass MYDATABASE

    Read the article
