Search Results

Search found 9156 results on 367 pages for 'cloud storage'.


  • More Great Improvements to the Windows Azure Management Portal

    - by ScottGu
    Over the last 3 weeks we’ve released a number of enhancements to the new Windows Azure Management Portal.  These new capabilities include: Localization Support for 6 languages Operation Log Support Support for SQL Database Metrics Virtual Machine Enhancements (quick create Windows + Linux VMs) Web Site Enhancements (support for creating sites in all regions, private github repo deployment) Cloud Service Improvements (deploy from storage account, configuration support of dedicated cache) Media Service Enhancements (upload, encode, publish, stream all from within the portal) Virtual Networking Usability Enhancements Custom CNAME support with Storage Accounts All of these improvements are now live in production and available to start using immediately.  Below are more details on them: Localization Support The Windows Azure Portal now supports 6 languages – English, German, Spanish, French, Italian and Japanese. You can easily switch between languages by clicking on the Avatar bar on the top right corner of the Portal: Selecting a different language will automatically refresh the UI within the portal in the selected language: Operation Log Support The Windows Azure Portal now supports the ability for administrators to review the “operation logs” of the services they manage – making it easy to see exactly what management operations were performed on them.  You can query for these by selecting the “Settings” tab within the Portal and then choosing the “Operation Logs” tab within it.  This displays a filter UI that enables you to query for operations by date and time: As of the most recent release we now show logs for all operations performed on Cloud Services and Storage Accounts.  You can click on any operation in the list and click the “Details” button in the command bar to retrieve detailed status about it.  This now makes it possible to retrieve details about every management operation performed. In future updates you’ll see us extend the operation log capability to apply to all Windows Azure Services – which will enable great post-mortem and audit support. Support for SQL Database Metrics You can now monitor the number of successful connections, failed connections and deadlocks in your SQL databases using the new “Dashboard” view provided on each SQL Database resource: Additionally, if the database is added as a “linked resource” to a Web Site or Cloud Service, monitoring metrics for the linked SQL database are shown along with the Web Site or Cloud Service metrics in the dashboard. This helps with viewing and managing aggregated information across both resources in your application. Enhancements to Virtual Machines The most recent Windows Azure Portal release brings with it some nice usability improvements to Virtual Machines: Integrated Quick Create experience for Windows and Linux VMs Creating a new Windows or Linux VM is now easy using the new “Quick Create” experience in the Portal: In addition to Windows VM templates you can also now select Linux image templates in the quick create UI: This makes it incredibly easy to create a new Virtual Machine in only a few seconds. Enhancements to Web Sites Prior to this past month’s release, users were forced to choose a single geographical region when creating their first site.  After that, subsequent sites could only be created in that same region.  
This restriction has now been removed, and you can now create sites in any region at any time and have up to 10 free sites in each supported region: One of the new regions we’ve recently opened up is the “East Asia” region.  This allows you to now deploy sites to North America, Europe and Asia simultaneously.  Private GitHub Repository Support This past week we also enabled Git based continuous deployment support for Web Sites from private GitHub and BitBucket repositories (previous to this you could only enable this with public repositories).  Enhancements to Cloud Services Experience The most recent Windows Azure Portal release brings with it some nice usability improvements to Cloud Services: Deploy a Cloud Service from a Windows Azure Storage Account The Windows Azure Portal now supports deploying an application package and configuration file stored in a blob container in Windows Azure Storage. The ability to upload an application package from storage is available when you custom create, or upload to, or update a cloud service deployment. To upload an application package and configuration, create a Cloud Service, then select the file upload dialog, and choose to upload from a Windows Azure Storage Account: To upload an application package from storage, click the “FROM STORAGE” button and select the application package and configuration file to use from the new blob storage explorer in the portal. Configure Windows Azure Caching in a caching enabled cloud service If you have deployed the new dedicated cache within a cloud service role, you can also now configure the cache settings in the portal by navigating to the configuration tab of for your Cloud Service deployment. The configuration experience is similar to the one in Visual Studio when you create a cloud service and add a caching role.  The portal now allows you to add or remove named caches and change the settings for the named caches – all from within the Portal and without needing to redeploy your application. Enhancements to Media Services You can now upload, encode, publish, and play your video content directly from within the Windows Azure Portal.  This makes it incredibly easy to get started with Windows Azure Media Services and perform common tasks without having to write any code. Simply navigate to your media service and then click on the “Content” tab.  All of the media content within your media service account will be listed here: Clicking the “upload” button within the portal now allows you to upload a media file directly from your computer: This will cause the video file you chose from your local file-system to be uploaded into Windows Azure.  Once uploaded, you can select the file within the content tab of the Portal and click the “Encode” button to transcode it into different streaming formats: The portal includes a number of pre-set encoding formats that you can easily convert media content into: Once you select an encoding and click the ok button, Windows Azure Media Services will kick off an encoding job that will happen in the cloud (no need for you to stand-up or configure a custom encoding server).  When it’s finished, you can select the video in the “Content” tab and then click PUBLISH in the command bar to setup an origin streaming end-point to it: Once the media file is published you can point apps against the public URL and play the content using Windows Azure Media Services – no need to setup or run your own streaming server.  
You can also now select the file and click the “Play” button in the command bar to play it using the streaming endpoint directly within the Portal: This makes it incredibly easy to try out and use Windows Azure Media Services and test out an end-to-end workflow without having to write any code.  Once you test things out you can of course automate it using script or code – providing you with an incredibly powerful Cloud Media platform that you can use. Enhancements to Virtual Network Experience Over the last few months, we have received feedback on the complexity of the Virtual Network creation experience. With these most recent Portal updates, we have added a Quick Create experience that makes the creation experience very simple. All that an administrator now needs to do is to provide a VNET name, choose an address space and the size of the VNET address space. They no longer need to understand the intricacies of the CIDR format or walk through a 4-page wizard or create a VNET / subnet. This makes creating virtual networks really simple: The portal also now has a “Register DNS Server” task that makes it easy to register DNS servers and associate them with a virtual network. Enhancements to Storage Experience The portal now lets you register custom domain names for your Windows Azure Storage Accounts.  To enable this, select a storage resource and then go to the CONFIGURE tab for a storage account, and then click MANAGE DOMAIN on the command bar: Clicking “Manage Domain” will bring up a dialog that allows you to register any CNAME you want: Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. One of the other cool features that is now live within the portal is our new Windows Azure Store – which makes it incredibly easy to try and purchase developer services from a variety of partners.  It is an incredibly awesome new capability – and something I’ll be doing a dedicated post about shortly. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
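
    On the new custom CNAME support for storage accounts: once the CNAME record is registered with your DNS provider, you can sanity-check it from any machine. A small sketch using only Python's standard library; the domain and account names are hypothetical, and the check is approximate because the storage endpoint may itself be an alias.

      import socket

      # Hypothetical custom domain and the storage account it should point at.
      custom_domain = "downloads.example.com"
      expected_target = "mystorageaccount.blob.core.windows.net"

      # gethostbyname_ex follows the CNAME chain and returns the canonical name
      # plus any aliases encountered along the way.
      canonical, aliases, addresses = socket.gethostbyname_ex(custom_domain)

      if expected_target in (canonical, *aliases):
          print("CNAME looks correct:", canonical)
      else:
          print("Unexpected resolution:", canonical, aliases)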

    Read the article

  • 10 tape technology features that make you go hmm.

    - by Karoly Vegh
    A week ago an Oracle/StorageTek Tape Specialist, Christian Vanden Balck, visited Vienna, and agreed to visit customers to do techtalks and update them about the technology boom going around tape. I had the privilege to attend some of his sessions and noted the information and features that took the customers by surprise and made them think. Allow me to share the top 10: I. StorageTek as a brand: StorageTek is one of he strongest names in the Tape field. The brand itself was valued so much by customers that even after Sun Microsystems acquiring StorageTek and the Oracle acquiring Sun the brand lives on with all the Oracle tapelibraries are officially branded StorageTek.See http://www.oracle.com/us/products/servers-storage/storage/tape-storage/overview/index.html II. Disk information density limitations: Disk technology struggles with information density. You haven't seen the disk sizes exploding lately, have you? That's partly because there are physical limits on a disk platter. The size is given, the number of platters is limited, they just can't grow, and are running out of physical area to write to. Now, in a T10000C tape cartridge we have over 1000m long tape. There you go, you have got your physical space and don't need to stuff all that data crammed together. You can write in a reliable pattern, and have space to grow too. III. Oracle has a market share of 62% worldwide in recording head manufacturing. That's right. If you are running LTO drives, with a good chance you rely on StorageTek production. That's two out of three LTO recording heads produced worldwide.  IV. You can store 1 Exabyte data in a single tape library. Yes, an Exabyte. That is 1000 Petabytes. Or, a million Terabytes. A thousand million GigaBytes. You can store that in a stacked StorageTek SL8500 tapelibrary. In one SL8500 you can put 10.000 T10000C cartridges, that store 10TB data (compressed). You can stack 10 of these SL8500s together. Boom. 1000.000 TB.(n.b.: stacking means interconnecting the libraries. Yes, cartridges are moved between the stacked libraries automatically.)  V. EMC: 'Tape doesn't suck after all. We moved on.': Do you remember the infamous 'Tape sucks, move on' Datadomain slogan? Of course they had to put it that way, having only had disk products. But here's a fun fact: on the EMCWorld 2012 there was a major presence of a Tape-tech company - EMC, in a sudden burst of sanity is embracing tape again. VI. The miraculous T10000C: Oracle StorageTek has developed an enterprise-grade tapedrive and cartridge, the T10000C. With awesome numbers: The Cartridge: Native 5TB capacity, 10TB with compression Over a kilometer long tape within the cartridge. And it's locked when unmounted, no rattling of your data.  Replaced the metalparticles datalayer with BaFe (bariumferrite) - metalparticles lose around 7% of magnetism within 30 days. BaFe does not. Yes we employ solid-state physicists doing R&D on demagnetisation in our labs. Can be partitioned, storage tiering within the cartridge!  The Drive: 2GB Cache Encryption implemented in HW - no performance hit 252 MB/s native sustained data rate, beats disk technology by far. Not to mention peak throughput.  Leading the tape while never touching the data side of it, protecting your data physically too Data integritiy checking (CRC recalculation) on tape within the drive without having to read it back to the server reordering data from tape-order, delivering it back in application-order  writing 32 tracks at once, reading them back for CRC check at once VII. 
You only use 20% of your data on a regular basis. The rest 80% is just lying around for years. On continuously spinning disks. Doubly consuming energy (power+cooling), blocking diskstorage capacity. There is a solution called SAM (Storage Archive Manager) that provides you a filesystem unifying disk and tape, moving data on-demand and for clients transparently between the different storage tiers. You can share these filesystems with NFS or CIFS for clients, and enjoy the low TCO of tape. Tapes don't spin. They sit quietly in their slots, storing 10TB data, using no energy, producing no heat, automounted when a client accesses their data.See: http://www.oracle.com/us/products/servers-storage/storage/storage-software/storage-archive-manager/overview/index.html VIII. HW supported for three decades: Did you know that the original PowderHorn library was released in '87 and has been only discontinued in 2010? That is over two decades of supported operation. Tape libraries are - just like the data carrying on tapecartridges - built for longevity. Oh, and the T10000C cartridge has 30-year archival life for long-term retention.  IX. Tape is easy to manage: Have you heard of Tape Storage Analytics? It is a central graphical tool to summarize, monitor, analyze dataflow, health and performance of drives and libraries, see: http://www.oracle.com/us/products/servers-storage/storage/tape-storage/tape-analytics/overview/index.html X. The next generation: The T10000B drives were able to reuse the T10000A cartridges and write on them even more data. On the same cartridges. We call this investment protection, and this is very important for Oracle for the future too. We usually support two generations of cartridges together. The current drive is a T10000C. (...I know I promised to enlist 10, but I got still two more I really want to mention. Allow me to work around the problem: ) X++. The TallBots, the robots moving around the cartridges in the StorageTek library from tapeslots to the drives are cableless. Cables, belts, chains running to moving parts in a library cause maintenance downtimes. So StorageTek eliminated them. The TallBots get power, commands, even firmwareupgrades through the rails they are running on. Also, the TallBots don't just hook'n'pull the tapes out of their slots, they actually grip'n'lift them out. No friction, no scratches, no zillion little plastic particles floating around in the library, in the drives, on your data. (X++)++: Tape beats SSDs and Disks. In terms of throughput (252 MB/s), in terms of TCO: disks cause around 290x more power and cooling, in terms of capacity: 10TB on a single media and soon more.  So... do you need to store large amounts of data? Are you legally bound to archive it for dozens of years? Would you benefit from automatic storage tiering? Have you got large mediachunks to be streamed at times? Have you got power and cooling issues in the growing datacenters? Do you find EMC's 180° turn of tape attitude interesting, but appreciate it at the same time? With all that, you aren't alone. The most data on this planet is stored on tape. Tape is coming. Big time.
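
    The exabyte claim in point IV is simple arithmetic, using the compressed cartridge capacity quoted above; a quick check in Python:

      cartridges_per_sl8500 = 10_000      # T10000C slots per SL8500, as quoted
      tb_per_cartridge = 10               # 10 TB compressed per cartridge
      libraries_stacked = 10              # SL8500s interconnected into one complex

      total_tb = cartridges_per_sl8500 * tb_per_cartridge * libraries_stacked
      print(total_tb)                     # 1000000 TB
      print(total_tb / 1_000_000, "EB")   # 1.0 EB in decimal units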

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute. Ok, enough of the pre-amble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect: IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL DROP TABLE dbo.Test GO CREATE TABLE dbo.Test ( row_id NUMERIC IDENTITY NOT NULL,   col01 NVARCHAR(450) NOT NULL, col02 NVARCHAR(450) NOT NULL, col03 NVARCHAR(450) NOT NULL, col04 NVARCHAR(450) NOT NULL, col05 NVARCHAR(450) NOT NULL, col06 NVARCHAR(450) NOT NULL, col07 NVARCHAR(450) NOT NULL, col08 NVARCHAR(450) NOT NULL, col09 NVARCHAR(450) NOT NULL, col10 NVARCHAR(450) NOT NULL, CONSTRAINT [PK dbo.Test row_id] PRIMARY KEY CLUSTERED (row_id) ) ; The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row: WITH Numbers AS ( -- Generates numbers 1 - 450 inclusive SELECT TOP (450) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) INSERT dbo.Test WITH (TABLOCKX) SELECT REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n), REPLICATE(N'D', N.n), REPLICATE(N'E', N.n), REPLICATE(N'F', N.n), REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n), REPLICATE(N'J', N.n) FROM Numbers AS N ORDER BY N.n ASC ; Once those two scripts have run, the table contains 450 rows and 10 columns of data like this: Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example: -- Find the maximum length of the data in -- column 5 for a range of rows SELECT result = MAX(DATALENGTH(T.col05)) FROM dbo.Test AS T WHERE row_id BETWEEN 50 AND 100 ; But with a different query… -- Read all the data in column 1 SELECT result = MAX(DATALENGTH(T.col01)) FROM dbo.Test AS T ; …suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect. The Explanation If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row. Row Overflow SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  
You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is done on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved! Sneaky LOBs Anyway, that’s all very interesting but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it needs to follow the in-row pointer a navigate another chain of pages, just like retrieving a traditional LOB. And Finally… In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.  
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi
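
    To check whether a table has quietly acquired row-overflow storage, you can inspect its allocation units. Below is a sketch in Python with pyodbc rather than the pure T-SQL Paul uses; the connection string is a placeholder, and the join uses the documented hobt_id mapping, which covers the in-row and row-overflow units (LOB units map differently and are omitted here).

      import pyodbc

      # Placeholder connection string; point it at the database holding dbo.Test.
      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
          "DATABASE=Sandbox;Trusted_Connection=yes;"
      )

      sql = """
      SELECT au.type_desc, au.total_pages, au.used_pages
      FROM sys.allocation_units AS au
      JOIN sys.partitions AS p
          ON au.container_id = p.hobt_id    -- IN_ROW_DATA and ROW_OVERFLOW_DATA
      WHERE p.[object_id] = OBJECT_ID(N'dbo.Test');
      """

      for type_desc, total_pages, used_pages in conn.cursor().execute(sql):
          # A ROW_OVERFLOW_DATA row with used pages means columns moved off-row.
          print(f"{type_desc:20} total={total_pages:6} used={used_pages:6}")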

    Read the article

  • Oracle Manageability Presentations at Collaborate 2012

    - by Get_Specialized!
    Attending the Collaborate 2012 event, April 22-26th in Las Vegas, and interested in learning more about becoming specialized on Oracle Manageability? Be sure to check out the sessions below, presented by subject matter experts, while you're onsite. Set up a meeting, or be one of the first Oracle Partners onsite to ask me, and we'll request one of the limited FREE Oracle Enterprise Manager 12c partner certification exam vouchers for you. Can't travel this year? COLLABORATE 12 Plug Into Vegas may be another option, letting you attend presentations from your own desk, such as session #489, Oracle Enterprise Manager 12c: What's Changed? What's New?, presented by Oracle Specialized Partners like ROLTA.

    Session ID | Title | Presented by | Day/Time
    920 | Enterprise Manager 12c Cloud Control: New Features and Best Practices | Dell | Sun
    9536 | Release 12 Apps DBA 101 | Justadba, LLC | Mon
    932 | Monitoring Exadata with Cloud Control | Oracle | Mon
    397 | OEM Cloud Control Hands On Performance Tuning | | Mon
    118 | Oracle BI Sys Mgmt Best Practices & New Features | Rittman Mead Consulting | Mon
    548 | High Availability Boot Camp: RAC Design, Install, Manage | Database Administration, Inc | Mon
    926 | The Only Complete Cloud Management Solution -- Oracle Enterprise Manager | Oracle | Mon
    328 | Virtualization Boot Camp | Dell | Mon
    292 | Upgrading to Oracle Enterprise Manager 12c - Best Practices | Southern Utah University | Mon
    793 | Exadata 101 - What You Need to Know | Rolta | Tue
    431 & 1431 | Extreme Database Administration: New Features for Expert DBAs | Oracle | Tue, Wed
    521 | What's New for Oracle WebLogic Management: Capabilities that Scripting Cannot Provide | Oracle | Thu
    338 | Oracle Real Application Testing: A look under the hood | PayPal | Tue
    9398 | Reduce TCO Using Oracle Application Management Suite for Oracle E-Business Suite | Oracle | Tue
    312 | Configuring and Managing a Private Cloud with Oracle Enterprise Manager 12c | Dell | Tue
    866 | Making OEM Sing and Dance with EMCLI | Portland General Electric | Tue
    533 | Oracle Exadata Monitoring: Engineered Systems Management with Oracle Enterprise Manager | Oracle | Wed
    100600 | Optimizing EnterpriseOne System Administration | Oracle | Wed
    9565 | Optimizing EBS on Exadata | Centroid Systems | Wed
    550 | Database-as-a-Service: Enterprise Cloud in Three Simple Steps | Oracle | Wed
    434 | Managing Oracle: Expert Panel on Techniques and Best Practices | Oracle Partners: Dell, Keste, ROLTA, Pythian | Wed
    9760 | Cloud Computing Directions: Understanding Oracle's Cloud | AT&T | Wed
    817 | Right Cloud: Use Oracle Technologies to Avoid False Cloud | Visual Integrator Consulting | Wed
    163 | Forgetting something? Standardize your database monitoring environment with Enterprise Manager 11g | Johnson Controls | Wed
    489 | Oracle Enterprise Manager 12c: What's Changed? What's New? | ROLTA | Thu

    Read the article

  • AWS EC2 Oracle RDB - Storing and managing my data

    - by llaszews
    When you create an Oracle Database on the Amazon cloud, you will need to store your database files somewhere on the EC2 cloud. There are basically three places where database files can be stored:
    1. Local drive - This is the local drive that is part of the virtual server EC2 instance.
    2. Elastic Block Storage (EBS) - Network-attached storage that appears as a local drive.
    3. Simple Storage Service (S3) - 'Storage for the Internet'. S3 is not high speed and is intended for storing static, document-type files. S3 can also be used for storing static web page files.
    Local drives are ephemeral, so they are not appropriate as a database storage device. That leaves EBS, which is the best place to store database files. EBS volumes appear as local disk drives. They are actually network-attached to an Amazon EC2 instance. In addition, EBS persists independently from the running life of a single Amazon EC2 instance. If you use an EBS-backed instance for your database data, it will remain available after a reboot but not after termination. In many cases you would not need to terminate your instance but only stop it, which is the equivalent of a shutdown. In order to save your database data before you terminate an instance, you can snapshot the EBS volume to S3. Using EBS as a data store, you can move your Oracle data files from one instance to another. This allows you to move your database from one region or availability zone to another. Unfortunately, to scale out your Oracle RDS on AWS you cannot have read-only replicas. This is only possible with the other Oracle relational database, MySQL. The free micro instances use EBS as their storage. This is a very good white paper that has more details: AWS Storage Options. The white paper also discusses SQS, SimpleDB, and Amazon RDS in the context of storage devices. However, these are not storage devices you would use to store an Oracle database. This slide deck covers much of the same information as the white paper: AWS Storage Options slideshow.
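
    The snapshot step mentioned above can be scripted. Below is a minimal sketch using the boto3 SDK (a newer library than the post assumes); the region, volume ID, and description are placeholders.

      import boto3

      # Placeholder region and volume ID; replace with your own values.
      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Snapshot the EBS volume that holds the Oracle data files.
      # EBS snapshots are stored durably in S3 behind the scenes.
      response = ec2.create_snapshot(
          VolumeId="vol-0123456789abcdef0",
          Description="Oracle data files before instance termination",
      )
      print("Snapshot started:", response["SnapshotId"])

      # Wait until the snapshot completes before terminating the instance.
      waiter = ec2.get_waiter("snapshot_completed")
      waiter.wait(SnapshotIds=[response["SnapshotId"]])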

    Read the article

  • How can I build something like Amazon S3 in Perl?

    - by Joel G
    I am looking to code a file storage application in Perl, similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it is in Ruby, it is old, and it isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):
    - Easy API implementation for client-side apps (maybe REST?)
    - Centralized database server for the user DB (maybe PostgreSQL?)
    - Logging of all connections, bandwidth used, pretty much everything, to a centralized server (maybe PostgreSQL again?)
    - Easy server-side configuration (config file(s) stored on the servers)
    - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases)
    - Fast
    - High uptime
    - Low memory usage
    - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
    - Maybe a cache of some sort (memcached or Perlbal or something else?)
    Thanks in advance
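
    The asker wants Perl, but the core REST surface of such a service is small. Here is a language-agnostic sketch of the PUT/GET object API in Python's standard library, just to show the shape of the thing; the port and storage directory are assumptions, and a real service would add authentication, path sanitization, and the logging described above.

      import os
      from http.server import BaseHTTPRequestHandler, HTTPServer

      STORAGE_ROOT = "/var/lib/minis3"   # assumed storage directory

      class ObjectStoreHandler(BaseHTTPRequestHandler):
          def _object_path(self):
              # Map /bucket/key request paths onto the local filesystem.
              # NOTE: a real implementation must reject ".." and absolute paths.
              return os.path.join(STORAGE_ROOT, self.path.lstrip("/"))

          def do_PUT(self):
              length = int(self.headers.get("Content-Length", 0))
              body = self.rfile.read(length)
              path = self._object_path()
              os.makedirs(os.path.dirname(path), exist_ok=True)
              with open(path, "wb") as f:
                  f.write(body)
              self.send_response(201)   # Created
              self.end_headers()

          def do_GET(self):
              path = self._object_path()
              if not os.path.isfile(path):
                  self.send_response(404)
                  self.end_headers()
                  return
              with open(path, "rb") as f:
                  data = f.read()
              self.send_response(200)
              self.send_header("Content-Length", str(len(data)))
              self.end_headers()
              self.wfile.write(data)

      if __name__ == "__main__":
          HTTPServer(("0.0.0.0", 8080), ObjectStoreHandler).serve_forever()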

    Read the article

  • 3 Servers, is this a cluster?

    - by Andy Barlow
    Hello. At the moment I have one Ubuntu 9.10 server running a simple Samba share, a mail server, a DNS server and a DHCP server. Mostly it's just there for file sharing and email. I also have 2 other servers that are exactly the same hardware and spec as the first, which have an rsync job set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also, I tend to find that if people are downloading a large amount from the file server, no one can access their email, especially in the morning when everyone is signing in at once. Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, as some sort of cluster with load balancing? I'm not really sure where to begin looking, or how to go about a setup where 3 servers are all identical but perhaps one acts as the main load balancer. If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default option in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

    Read the article

  • Is it possible to read data that has been separately copied to the Android SD card without having root access?

    - by icecream
    I am developing an application that needs to access data on the SD card. When I run on my development device (an ODROID with Android 2.1) I have root access and can construct the path using:

      File sdcard = Environment.getExternalStorageDirectory();
      String path = sdcard.getAbsolutePath() + File.separator + "mydata";
      File data = new File(path);
      File[] files = data.listFiles(new FilenameFilter() {
          @Override
          public boolean accept(File dir, String filename) {
              return filename.toLowerCase().endsWith(".xyz");
          }
      });

    However, when I install this on a phone (2.1) where I do not have root access, I get files == null. I assume this is because I do not have the right permissions to read the data from the SD card. I also get files == null when just trying to list files on /sdcard, so the same applies without my constructed path. Also, this app is not intended to be distributed through the app store and needs to use data copied separately to the SD card, so this is a real use case. It is too much data to put in res/raw (I have tried; it did not work). I have also tried adding <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /> to the manifest, even though I only want to read the SD card, but it did not help. I have not found a permission type for reading the storage. There is probably a correct way to do this, but I haven't been able to find it. Any hints would be useful.

    Read the article

  • StorageClientException: The specified message does not exist?

    - by Aaron
    I have a simple video encoding worker role that pulls messages from a queue, encodes a video, then uploads the video to storage. Everything seems to be working, but occasionally when deleting the message after I am done encoding and uploading I get a "StorageClientException: The specified message does not exist." Although the video is processed, I believe the message is reappearing in the queue because it's not being deleted correctly. Is it possible that another instance of the worker role is processing and deleting the message? Doesn't GetMessage() prevent other worker roles from picking up the same message? Am I doing something wrong in the setup of my queue? What could be causing this message to not be found on delete? Some code:

      // onStart() queue setup
      var queueStorage = _storageAccount.CreateCloudQueueClient();
      _queue = queueStorage.GetQueueReference(QueueReference);
      queueStorage.RetryPolicy = RetryPolicies.Retry(5, new TimeSpan(0, 5, 0));
      _queue.CreateIfNotExist();

      public override void Run()
      {
          while (true)
          {
              try
              {
                  var msg = _queue.GetMessage(new TimeSpan(0, 5, 0));
                  if (msg != null)
                  {
                      EncodeIt(msg);
                      PostIt(msg);
                      _queue.DeleteMessage(msg);
                  }
                  else
                  {
                      Thread.Sleep(WaitTime);
                  }
              }
              catch (StorageClientException exception)
              {
                  BlobTrace.Write(exception.ToString());
                  Thread.Sleep(WaitTime);
              }
          }
      }
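
    For reference, the same receive/process/delete loop with an explicit visibility timeout looks like this in the current Python SDK (azure-storage-queue). This is only a sketch of the pattern, not the .NET 1.x StorageClient the question uses; the connection string and queue name are placeholders and encode_and_upload is an assumed helper. The key point is that the work (or a visibility renewal via update_message) must finish before the timeout expires, otherwise the message becomes visible again and the old pop receipt can no longer delete it.

      from azure.storage.queue import QueueClient

      # Placeholder connection string and queue name.
      queue = QueueClient.from_connection_string(
          "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...",
          queue_name="videos",
      )

      # Hide each received message for 5 minutes while we work on it.
      for msg in queue.receive_messages(visibility_timeout=300):
          encode_and_upload(msg.content)        # assumed long-running work
          # If the work can exceed the timeout, renew the lease first:
          # msg = queue.update_message(msg, visibility_timeout=300)
          queue.delete_message(msg)             # uses the current pop receipt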

    Read the article

  • Should I persist images on EBS or S3?

    - by javanes
    I am migrating my Java, Tomcat, MySQL server to AWS EC2. I have already attached an EBS volume for storing the MySQL data. In my web application people may upload images, so I need to persist them. There are 2 alternatives in my mind: save uploaded images to the EBS volume, or use the S3 service. The following are my notes; please be skeptical about them, as my expertise is in software development, not servers.
    - EBS plus: S3 storage is more expensive ($0.15/GB vs. $0.10/GB).
    - S3 plus: Serving statics from EBS may influence my web server's performance negatively. Is this true? Does serving images affect server performance notably? With S3 my server will not be responsible for serving statics.
    - S3 plus: Serving statics from EBS may result in I/O costs, though they will probably be minor.
    - EBS plus: People say EBS is faster.
    - S3 plus: People say S3 is safer for persistence.
    - EBS plus: No need to learn an API; it is straightforward to save the images to the EBS volume.
    In short, I cannot decide, and will be happy if you guide me. Thanks
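
    If the S3 route wins, the "learn an API" cost is small. Here is a minimal upload sketch with boto3 (a newer SDK than existed when this question was asked); the bucket name, key, and local file path are placeholders.

      import boto3

      s3 = boto3.client("s3")   # credentials come from the environment or instance role

      # Upload a user image and let S3 serve it directly, bypassing Tomcat.
      s3.upload_file(
          Filename="/tmp/upload-1234.jpg",       # assumed local temp file
          Bucket="example-user-images",          # assumed bucket name
          Key="users/42/avatar.jpg",
          ExtraArgs={"ContentType": "image/jpeg"},
      )

      # If the bucket allows public reads, the object is now reachable at:
      print("https://example-user-images.s3.amazonaws.com/users/42/avatar.jpg")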

    Read the article

  • Architecture for data layer that uses both localStorage and a REST remote server

    - by Zack
    Does anybody have ideas or references on how to implement a data persistence layer that uses both localStorage and a remote REST store? The data of a certain client is stored with localStorage (using an ember-data IndexedDB adapter). The locally stored data is synced with the remote server (using the ember-data RESTAdapter). The server gathers all data from clients. Using mathematical set notation: Server = Client1 ∪ Client2 ∪ ... ∪ ClientN, where, in general, a record may not be unique to a certain client. Here are some scenarios:
    - A client creates a record. The id of the record cannot be set on the client, since it may conflict with a record stored on the server. Therefore a newly created record needs to be committed to the server, receive its id, and then be created in localStorage.
    - A record is updated on the server, and as a consequence the data in localStorage and on the server go out of sync. Only the server knows that, so the architecture needs to implement a push mechanism (?)
    Would you use 2 stores (one for localStorage, one for REST) and sync between them, or use a hybrid IndexedDB/REST adapter and write the sync code within the adapter? Can you see any way to avoid implementing push (Web Sockets, ...)?
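
    This is not ember-data, but the first scenario (commit to the server, receive the canonical id, then write locally) reduces to a small pattern. A language-agnostic sketch in Python using the requests library, with a dict standing in for localStorage; the endpoint URL and record shape are assumptions, and the polling helper is only a stopgap where push is not available.

      import requests

      API_URL = "https://api.example.com/records"   # assumed endpoint
      local_store = {}                              # stand-in for localStorage

      def create_record(payload):
          # 1. Commit to the server first; never mint ids on the client.
          resp = requests.post(API_URL, json=payload, timeout=10)
          resp.raise_for_status()
          record = resp.json()                      # server response carries the id

          # 2. Only now persist locally, keyed by the server-issued id.
          local_store[record["id"]] = record
          return record

      def pull_changes(since):
          # Polling alternative to push: fetch records changed after a timestamp.
          resp = requests.get(API_URL, params={"updated_since": since}, timeout=10)
          resp.raise_for_status()
          for record in resp.json():
              local_store[record["id"]] = record    # last-write-wins reconciliation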

    Read the article

  • Windows Azure - Automatic Load Balancing - partitioning

    - by veda
    I was going through some videos and found that Windows Azure will group blobs into partitions based on the partition key and will automatically load-balance these partitions across its servers. The partition key for a blob is the blob name; using the blob name, Azure will automatically create partitions. Now, my question is: can I make Azure partition based on the container name? I want my partition key to be the container name. For example, I have a storage account. In it I have 2 containers named container1 and container2. In container1, I have 1000 files named 1.txt, 2.txt, 3.txt, ......., 501.txt, 502.txt, ..... 999.txt, 1000.txt, and in container2, I have another 1000 files named 1001.txt, 1002.txt, 1003.txt, ......., 1501.txt, 1502.txt, ..... 1999.txt, 2000.txt. Now, will Windows Azure generate 2000 partitions based on the blob name and serve me through several servers? Wouldn't it be better if Azure partitioned based on the container name, with container1 on one server and container2 on another?

    Read the article

  • How do I use HTML5's localStorage in a Google Chrome extension?

    - by davidkennedy85
    I am trying to develop an extension that will work with Awesome New Tab Page. I've followed the author's advice to the letter, but it doesn't seem like any of the script I add to my background page is being executed at all. Here's my background page:

      <script>
        var info = {
          poke: 1,
          width: 1,
          height: 1,
          path: "widget.html"
        };

        chrome.extension.onRequestExternal.addListener(function(request, sender, sendResponse) {
          if (request === "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-poke") {
            chrome.extension.sendRequest(
              sender.id,
              {
                head: "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-pokeback",
                body: info,
              }
            );
          }
        });

        function initSelectedTab() {
          localStorage.setItem("selectedTab", "Something");
        }

        initSelectedTab();
      </script>

    Here is manifest.json:

      {
        "update_url": "http://clients2.google.com/service/update2/crx",
        "background_page": "background.html",
        "name": "Test Widget",
        "description": "Test widget for mgmiemnjjchgkmgbeljfocdjjnpjnmcg.",
        "icons": { "128": "icon.png" },
        "version": "0.0.1"
      }

    Here is the relevant part of widget.html:

      <script>
        var selectedTab = localStorage.getItem("selectedTab");
        document.write(selectedTab);
      </script>

    Every time, the browser just displays null. The local storage isn't being set at all, which makes me think the background page is completely disconnected. Do I have something wired up incorrectly?

    Read the article

  • Best way to use Google's hosted jQuery, but fall back to my hosted library if Google fails

    - by Nosredna
    What would be a good way to attempt to load the hosted jQuery at Google (or other Google hosted libs), but load my copy of jQuery if the Google attempt fails? I'm not saying Google is flaky. There are cases where the Google copy is blocked (apparently in Iran, for instance). Would I set up a timer and check for the jQuery object? What would be the danger of both copies coming through? Not really looking for answers like "just use the Google one" or "just use your own." I understand those arguments. I also understand that the user is likely to have the Google version cached. I'm thinking about fallbacks for the cloud in general. Edit: This part added... Since Google suggests using google.load to load the ajax libraries, and it performs a callback when done, I'm wondering if that's the key to serializing this problem. I know it sounds a bit crazy. I'm just trying to figure out if it can be done in a reliable way or not. Update: jQuery now hosted on Microsoft's CDN. http://www.asp.net/ajax/cdn/

    Read the article

  • Framework/service for hosting and managing files

    - by Peteris Caune
    Hi, in a webapp I'm building there is a planned side feature of supporting product illustrations and manuals (so pictures and PDFs), possibly arranged in galleries. As I'd rather not implement all of the uploading, managing and serving of this content from scratch, I'm looking for existing solutions that I could integrate. For example, I'm considering Flickr: users of the webapp specify their Flickr username and then use some naming convention to link objects in my webapp with pictures uploaded to their Flickr account. Very little code to write on my side, maybe just some API calls that proxy the Flickr APIs, since Flickr would handle picture uploading, organizing pictures into sets, storing them in the cloud, and serving them in various sizes, etc. One drawback here is that either all of the pictures are public or I have to deal with interactive Flickr authorization. Also, I'm not sure Flickr would be happy being used in such a manner. What other online services or libraries/frameworks should I look at? My webapp is written in Python/Pylons, so Python libraries would be preferred. I'm already using some of Amazon's infrastructure, so frontends to Amazon S3 would be cool. For online services, a RESTful API would be nice.
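
    Since the asker is already on Python and Amazon, one way around the "all public or interactive auth" drawback is to keep the objects private in S3 and serve them through expiring presigned URLs. A sketch with boto3 (the current SDK, newer than the boto of that era); the bucket and key names are assumptions.

      import boto3

      s3 = boto3.client("s3")
      BUCKET = "example-product-assets"   # assumed private bucket

      def upload_asset(local_path, key, content_type):
          # Objects stay private; nothing is world-readable.
          s3.upload_file(local_path, BUCKET, key,
                         ExtraArgs={"ContentType": content_type})

      def asset_url(key, expires=3600):
          # Hand the browser a time-limited URL instead of making the object public.
          return s3.generate_presigned_url(
              "get_object",
              Params={"Bucket": BUCKET, "Key": key},
              ExpiresIn=expires,
          )

      # Example: a PDF manual linked from a gallery page, valid for one hour.
      upload_asset("manual.pdf", "manuals/widget-9000.pdf", "application/pdf")
      print(asset_url("manuals/widget-9000.pdf"))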

    Read the article

  • How do I use dd to make split ISO images from a storage device?

    - by Gustavo Bandeira
    This is a double question; I just hope it's valid. (1) I need to know how to use dd to make split ISO images from a storage device. I'm doing it through SSH: the process is slow and the risk of failing in the middle of the operation is high, so I need to know how to make these split images from my storage device. (2) I'm searching for a reference on dd; it could be a book or a good website to consult whenever a doubt arises. Notes: 1 - I'm working on a ~60GB storage device, and it took me a whole day to copy ~10GB from the disk. 2 - For the curious, I'm trying to recover an accidentally deleted file from an iPod. So far I've been able to do the whole process; I just need to improve it, because I left it copying the disk yesterday and today it gave me an error at around ~10GB.
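
    The usual shell answer is piping dd into split, but the same chunked-imaging idea fits in a short Python sketch that reads the device in blocks and starts a new output file every 2 GB. The device path, chunk size, and output prefix are assumptions, and the script needs enough privileges to read the device. The parts can be reassembled later with cat ipod.img.part* > ipod.img.

      import sys

      DEVICE = "/dev/sdb"             # assumed source device (the iPod)
      CHUNK_BYTES = 2 * 1024**3       # roughly 2 GB per image part
      BLOCK = 1024 * 1024             # read 1 MB at a time
      PREFIX = "ipod.img.part"

      def dump_in_chunks():
          part, written = 0, 0
          out = open(f"{PREFIX}{part:03d}", "wb")
          with open(DEVICE, "rb") as src:
              while True:
                  block = src.read(BLOCK)
                  if not block:
                      break
                  if written + len(block) > CHUNK_BYTES:
                      out.close()               # roll over to the next part file
                      part, written = part + 1, 0
                      out = open(f"{PREFIX}{part:03d}", "wb")
                  out.write(block)
                  written += len(block)
          out.close()
          print(f"wrote {part + 1} part files", file=sys.stderr)

      if __name__ == "__main__":
          dump_in_chunks()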

    Read the article

  • make a folder/partition on one computer appear as a mass storage device to another?

    - by user137560
    Is there any way to make a folder or a partition on a computer (Linux or Windows) act like a mass storage device to other computers or devices when connected with a male-to-male USB cable? For example, I have a Windows 7 computer with 2 partitions, C and D. I would then connect that computer to another computer or a smart TV using a male-to-male USB cable, and have the other computer or device recognize a folder/partition on the first computer as a mass storage device. Is this possible? If not, is there any USB switch that can connect an external hard drive or flash drive to both a computer and a TV without the need to manually switch them? (I know about some USB switches, but they only support automatic switching with certain types of printers, not with mass storage.)

    Read the article

  • Mark your calendar! Live OPN Partnercast on Oracle Cloud Applications

    - by Catalin Teodor
    Tune in on Wednesday, August 27 at 10 AM PDT to watch a LIVE OPN PartnerCast focused on an Oracle Cloud Applications update for partners. Topics include Oracle Taleo Business Edition, Oracle Taleo Learn, and Oracle Fusion HCM. Stream it live from the OPN homepage. Viewers will have the opportunity to submit questions and share comments during the live event via Twitter by using @oraclepartners or #OPN in their tweets.

    Read the article

  • Google integrates CoreOS into its cloud platform, "an ideal host for distributed systems" says Brandon Philips

    Google integrates CoreOS into its cloud platform, "an ideal host for distributed systems" according to Brandon Philips. Google is extending the list of Linux distributions supported by Google Compute Engine: CoreOS, which had already been available in preview since last December, is now a fully fledged offering on the platform, alongside Debian, Red Hat and SUSE. Companies will be able to test it and use it easily for their clusters and their applications...

    Read the article

  • From Trailer to Cloud: Skire acquisition expands Oracle's on-demand project management options.

    - by Melissa Centurio Lopes
    By Alison Weiss. Whether building petrochemical facilities in the Middle East or managing mining operations in Australia, project managers face significant challenges. Local regulations and currencies, contingent labor, hybrid public/private funding sources, and more threaten project budgets and schedules. According to Mike Sicilia, senior vice president and general manager for the Oracle Primavera Global Business Unit, there will be trillions of dollars invested in industrial projects around the globe between 2012 and 2016. But even with so much at stake, project leads don't always have time to look for new and better enterprise project portfolio management (EPPM) software solutions to manage large-scale capital initiatives across the enterprise. Oracle's recent acquisition of Skire, a leading provider of capital program management and facilities management applications available both in the cloud and on premises, gives customers outstanding new EPPM options. By combining Skire's cloud-based solutions for managing capital projects, real estate, and facilities with Oracle's Primavera EPPM solutions, project managers can quickly get a solution running that is interoperable across an extended enterprise. "Staff can access the EPPM solution within days, rather than waiting for corporate IT to put technology in place," says Sicilia. This applies to a problem that has, according to Sicilia, bedeviled project managers for decades: extending EPPM functionality into the field. Frequently, large-scale projects are remotely located, and the lack of communications and IT infrastructure threatened the accuracy of project reporting and scheduling. Read the full version of this article in the November 2012 edition of Oracle's Profit Magazine: Special Report on Project Management

    Read the article

  • OVH: "We are going to stay in the geek culture", an interview with a hosting provider taking on the Cloud, ISPs and America

    OVH: "We are going to stay in the geek culture." An interview with the sales director of the hosting provider that is taking on the Cloud, ISPs and America. A few weeks ago we ran a poll to find out which hosting providers were your favorites, and OVH came out the clear winner. It seemed like the ideal occasion to talk with the company, to review 2011, the year that saw it become the European leader, and to discuss its goals for 2012. Here is our full interview with Ala...

    Read the article

  • ISVForce Paris: developing your applications in the Cloud with Force.com, a CRM event organized by Salesforce on July 7 in Paris

    ISVForce Paris: developing your applications in the Cloud with Force.com, a CRM event organized by Salesforce on July 7 at the Centre d'Affaires Paris Trocadéro. On July 7, at the Centre d'Affaires Paris Trocadéro, Salesforce.com is holding a morning of presentations on its ISVForce platform, which gives developers all the tools and resources they need to build their business on Force.com. "ISVForce Paris" will feature software vendors sharing their experiences, in the presence of senior Salesforce executives and a speaker from AFDEL (the French software vendors' association). This CRM event will also be an opportunity to excha...

    Read the article

  • Node.js: native Windows integration at last, the event-driven JavaScript framework arrives on the Azure Cloud

    Node.js: native and complete Windows integration at last; the event-driven JavaScript framework arrives on the Azure Cloud. Update of November 9, 2011, by Idelways. Last June, Microsoft announced its support for the Node.js project, the event-driven, open source JavaScript framework (see earlier coverage). The company's collaboration with Joyent, which sponsors Node's development team, has just produced version 0.6.0 of Node, which gets native, complete support on the Windows platform. This third stable edition of Node.js uses the Windows "I/O Completion Ports" API to...

    Read the article

  • SAP France strings together strong financial quarters, continues its shift to the Cloud and remains confident for 2012

    SAP France strings together strong financial quarters, continues its shift to the Cloud and remains confident for 2012. SAP France has just posted its 5th consecutive quarter of growth (the 7th for SAP as a whole). The German vendor has even recorded the best third fiscal quarter in its entire history. These very good results gave Nicolas Sekkaki, General Manager of the French branch, an opportunity to discuss the changes under way and the crisis on the horizon. Since the acquisition of Business Objects, SAP has kept repeating it: it is no longer merel...

    Read the article
