Search Results

Search found 13808 results on 553 pages for 'remote storage'.

  • How to connect to a remote server and run some code on that particular server?

    - by seedeg
    I am implementing an automated backup scheme, so I created a shell script which first creates SQL dumps for all MySQL databases and then retrieves all the websites from /var/www on a remote server. The latter part works, as I am using rsync to fetch the remote files. However, the MySQL dumps being created are obviously the ones on the local server, which is not what I want - I want the SQL dumps from the remote server as well. I have key-based access between the local and remote server, so I can connect without a password (I added my public key to authorized_keys), and I tried adding the following to the script: ssh [email protected] Then I try to create the SQL dumps and exit from the remote server. This does not work: the script just sits at the remote shell, and I still have to type exit manually in the terminal before the rest of the script continues. Basically this is what the script is trying to do:
        # connect to remote server
        ssh [email protected]
        # retrieve SQL dumps
        # code to retrieve...
        # exit from remote server
        exit
        # use rsync to get the remote /var/www files (working)
    Is there a way to connect to the remote host AND run the script's code ON THAT remote host? Many thanks in advance
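    A minimal sketch of the usual approach: pass the commands to run as an argument to ssh so they execute on the remote host and the script carries on afterwards. The hostname, credentials and paths below are placeholders, not taken from the post:
        # run the dump on the remote host, then continue locally
        ssh user@remoteserver 'mysqldump --all-databases -u backupuser -p"SECRET" > /tmp/alldbs.sql'
        # pull the dump (and the websites) back with rsync as before
        rsync -avz user@remoteserver:/tmp/alldbs.sql /backups/sql/
        rsync -avz user@remoteserver:/var/www/ /backups/www/
    Alternatively, keep the dump logic in a small script that lives on the remote machine and call it with something like ssh user@remoteserver '/usr/local/bin/dump-all.sh'.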

    Read the article

  • Can't connect to computer via SBS2011 RWA

    - by sbrattla
    I've got an SBS 2011 Essentials server. Users are able to log on to Remote Web Access using their username and password. However, the trouble starts when a user attempts to log on remotely to his/her computer from the Remote Web Access website. When the user clicks on his/her computer (in the RWA website), the user is first presented with a window listing Publisher, Type, Remote Computer name and Gateway Server. Everything seems fine here, and the user clicks Connect. The user credentials are provided, and a connection is attempted. However, the logon attempt always fails with the message "The logon attempt failed". The logon attempt always generates three log events in the server log: EventId 4672 - Special Logon, EventId 4624 - Logon, EventId 4634 - Logoff. All events have the same timestamp. No events are logged on the client machine the user attempts to log on to. Others have solved this by going to their IIS server and enabling "Windows Authentication" for Rpc and RpcWithCert (in Default Web Site). However, this is already in place on the server. I've also got RD CAPs and RD RAPs in place. As a side note: if I try to connect to any of the machines using Remote Desktop Connection with the "Connect from anywhere" functionality, things work flawlessly! In other words, the error only occurs when attempting to log in to a computer via the Remote Web Access website. I've run out of ideas for how to solve this (too many hours spent). Any ideas highly appreciated!
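    If it helps to double-check that IIS setting, one way to verify (and, if needed, enable) Windows Authentication on the Rpc and RpcWithCert applications is appcmd on the server - a sketch only, assuming the default site name:
        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/Rpc" -section:system.webServer/security/authentication/windowsAuthentication
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/Rpc" -section:system.webServer/security/authentication/windowsAuthentication /enabled:"True" /commit:apphost
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/RpcWithCert" -section:system.webServer/security/authentication/windowsAuthentication /enabled:"True" /commit:apphost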

    Read the article

  • Windows Storage Server 2008 hangs at logon

    - by ErJab
    We have a Dell PowerVault NX-3000 server running Windows Server 2008. Every now and then, when I try to log in, the server seems to hang at the Welcome screen after I type in the password. However, all other services on the server are running fine - users are able to print from the print server and access their files. It just won't let me log in. Any idea why this is happening? P.S.: I can't look at the server logs, because it won't let me log in in the first place. Remote administration is also disabled on the server, so I can't use remote administration tools to look at the logs.

    Read the article

  • Error with git: remote HEAD is ambiguous, may be one of the following

    - by vfclists
    After branching and pushing to the remote, git remote show origin reports "HEAD branch (remote HEAD is ambiguous, may be one of the following): master, otherbranch". What does this imply? Is it a critical error? Full output:
        remote origin
          Fetch URL: [email protected]:/home/gituser/repos/csfsconf.git
          Push URL: [email protected]:/home/gituser/repos/csfsconf.git
          HEAD branch (remote HEAD is ambiguous, may be one of the following):
            master
            otherbranch
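    This message usually just means the bare repository's HEAD points at a commit that both branches currently share, so git cannot tell which branch is the default; it is generally harmless. A hedged sketch of the common fixes, using the branch names from the output above:
        # locally: tell git which branch origin's HEAD should follow
        git remote set-head origin master
        # or, on the server, point the bare repo's HEAD at the intended default branch
        ssh gituser@server "cd /home/gituser/repos/csfsconf.git && git symbolic-ref HEAD refs/heads/master"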

    Read the article

  • What happens to the storage capacity when I uninstall Ubuntu?

    - by shole1202
    I used the Wubi installer for Ubuntu 12.04. After having trouble getting the operating system to boot, I uninstalled it with Wubi. In 'My Computer' (on Windows 7), I noticed the maximum capacity of my hard drive dropped from 256 GB to 238 GB. I have tried some command prompt methods to locate the missing storage, but Windows still reports the disk as 238 GB instead of the original 256 GB. Is there any way to recover that space?
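    Two things may be worth checking. A 256 GB (decimal) drive is about 238 GiB in the binary units Windows displays, so the two numbers may simply be the same capacity expressed differently; and if Wubi did leave anything behind as a separate partition, diskpart will show it as unallocated space. A quick check from an elevated command prompt (disk 0 assumed):
        diskpart
        DISKPART> list disk        (the "Free" column shows unallocated space, if any)
        DISKPART> select disk 0
        DISKPART> list partition
        DISKPART> exit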

    Read the article

  • VMWare ESX, storage over 2TB

    - by Phliplip
    Hi. First off, I'm a web developer and my server experience lies in setting up FreeBSD web servers. I'm working on a project for a photographer, and I'm hired to develop a new online photo ordering system - where users, of course, can view their photos :) They have a massive need for storage, so we have bought an HP G6 and 8 x 1 TB SATA HDDs. Our plan is to install VMware ESX 4.0 and run multiple virtual machines: FreeBSD 8 for the web server and some Windows servers. That part is already done. Then we want to mount one big storage volume in the BSD VM and share it through Samba to the Windows servers. The RAID is set up with one 2 x 1 TB array to hold the VMs, and the rest is set up as 3 x (2 x 1 TB) to handle the photo data - thus 2.73 TB for photo data (the arrays are RAID 1+0). Now, if we add a datastore in ESX and add the 3 LUNs, we can get a datastore of 2.74 TB. But I don't see how I can attach this datastore directly to the VM; only the BSD VM needs access to it. The only way I see is to create a virtual disk, with a maximum of 2 TB (8 MB block size) - this is because the datastore where we save the virtual disk has a maximum file size of 2 TB - and then add it as a hard disk to the BSD VM. In the 'Add Hard Disk' pane for the VM, I see an option for Raw Device Mapping, which I think is for accessing the datastore or the RAID directly. The only problem is that it's greyed out! Can I access the storage directly from the BSD VM, without creating and adding a virtual disk?
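    For what it's worth, when Raw Device Mapping is greyed out in the GUI, the usual workaround is to create the RDM mapping file from the ESX service console with vmkfstools and then attach it as an existing disk. A sketch only - the device ID and datastore path below are placeholders, so check the real ones first:
        # list the physical LUNs/disks the host can see
        ls -l /vmfs/devices/disks/
        # create a physical-compatibility RDM pointer file on an existing datastore
        vmkfstools -z /vmfs/devices/disks/naa.600508bXXXXXXXXX /vmfs/volumes/datastore1/freebsd/photo-data-rdm.vmdk
        # then, in the VM settings, add photo-data-rdm.vmdk as an existing hard disk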

    Read the article

  • Windows Remote Desktop (RDP/MSTSC) fails with Error Code: 5

    - by BryCoBat
    I have 2 Windows XP boxen: A (running XP SP3) and B (running XP SP2). I'm using Remote Desktop to connect from A to B. When I connect, I get the login screen (which is slow to respond to keyboard/mouse input), and after logging in, I get the following: Fatal Error (Error Code: 5) Your Remote Desktop session is about to end. This computer might be low on virtual memory. Close your other programs, and then try connecting to the remote computer again. If the problem continues, contact your network administrator or technical support. I've seen one way to (sometimes) get in by opening a second RDP session to the same box [1], and if I wait long enough sometimes it will go ahead and log in anyway. Is there something broken/missing on the PC I'm trying to remote in to? Edited in reply to djangofan: There's nobody listed under "Lock pages in memory". When the double login trick works, a glance at Task Manager shows plenty of free memory, 800MB available out of 1.5 GB. (Performance tab, Physical memory) For what it's worth, this happens consistently after a reboot. What sort of exact info would be useful? There's very little remaining installed on that machine that's not Windows + Office... [1] found at http://www.fdcservers.net/vbulletin/archive/index.php/t-1580.html

    Read the article

  • Dedicated server with a lot of storage and good support - and cost-effective

    - by Martin Burger
    Hello, I am from Germany and looking for a dedicated server located in the US with a lot of storage: 750 - 1500 GB. CPU speed and amount of memory are secondary, the server will host large amounts of media files via http and ftp - the basic task is to help people exchange media files. In Germany, there are some good offers, like "Root Server EQ6" at www.hetzner.de. For example, that company provides support of high quality, and their plans are very cost-effective. The plan mentioned above costs about $90 per month and provides two 1500 GB SATA-II HDDs (Software-RAID 1). In the US, I found (amongst others) Go Daddy and rackspace. Go Daddy offers some "Storage Monster" plans that include 2 x 1,000 GB hard drives for about $180 per month - already twice as much as Hetzner above. However, I found some blog and forum entries that complain about the support provided by Go Daddy. Rackspace seems to provide decent support, but they are very "upscale". Their dedicated servers are customizable and start at $419 - thus, about 4.5 times as much as Hetzner. Can anybody recommend a solution / plan that is comparable to the one by Hetzner? Or are prices for dedicated servers in general much higher than in Germany? Regards, Martin

    Read the article

  • Java Swing over Remote Desktop - Strange, weird GUI squashing

    - by ADTC
    I thought this question fits SuperUser more than StackOverflow because it's not about actual Java programming, though programmers might be more likely to encounter the problem. Anyway, let me start of with some stats before I ask the actual question: Laptop: Windows 7 x32 Screen resolution 1024 x 768; Nvidia GeForce Go 6200 Connected to desktop via ad-hoc wireless network Access internet via desktop Desktop: Windows 7 x64 Screen resolution 1920 x 1080 Connected to laptop via ad-hoc wireless network Access internet via cable modem I'm connecting to my laptop via Remote Desktop from my desktop to take advantage of the large screen. I'm doing programming on my laptop (for portability reasons). Everything else runs smooth and fast over Remote Desktop as both computers are connected directly over the ad-hoc wireless. The only problem is this: Java Swing apps don't display the GUI properly. I acquired a Java Swing application and I'm debugging it in Eclipse. Here's what I got when I ran the app: Apparently there doesn't seem to be anything wrong with the GUI application I'm debugging, because the Java Control Panel exhibits the same problem. I've searched high and low in Google about this; the closest I came to a solution is this. But sadly, the use of -Dsun.java2d.nodraw=true has no effect at all. This only happens over Remote Desktop. I have tried locally and the GUI apps display properly. This isn't a dealbreaker for me as I can stop using Remote Desktop when developing Java Swing apps. However, I would like to know if anyone has encountered this and found any solution. PS: All software involved (Eclipse, Java JRE, etc.) are latest versions.
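    For what it's worth, the DirectDraw/Direct3D pipelines are the usual suspects for Swing rendering artifacts over RDP, and the property is spelled noddraw (two d's). A hedged set of VM arguments to try in the Eclipse run configuration - these are standard Java 2D switches, but there is no guarantee they cure this particular squashing:
        -Dsun.java2d.noddraw=true -Dsun.java2d.d3d=false -Dsun.java2d.ddoffscreen=false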

    Read the article

  • Backup Exec 10 - Network connection to the remote agent has been lost

    - by jherlitz
    Okay, so I have 4 remote offices, all running off of a 3 Mb Ethernet connection. Two sites are part of a WAN and two sites use 3 Mb connections over a site-to-site tunnel. I am using Backup Exec 2010 and have the remote agent installed on all the remote servers. For the past few weeks, the backups for the two sites running over the site-to-site tunnel have been failing with the following error message: "The network connection to the Backup Exec Remote Agent has been lost. Check for network errors." We used to run the site-to-site tunnel over a DSL connection; now we have changed to the 3 Mb Ethernet connection. I need to find out whether it has been failing ever since we changed, or only recently. Backup Exec support is telling me it is a network issue. My connection to the server is solid - we don't have any issues or outages - so I am baffled as to why this continues to fail, and why just those two sites. Any advice?
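    One quick sanity check is whether the media server can still reach the remote agent's listening port (10000 by default for the Backup Exec Remote Agent) across the tunnel while a job is running - a rough sketch, with the server name as a placeholder:
        rem from the Backup Exec media server
        telnet remoteserver.branch.local 10000
        rem or, on the remote box, confirm the agent is actually listening
        netstat -ano | find "10000"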

    Read the article

  • Home Sharing and Remote on iTunes causing firewall nags

    - by BoltClock
    It seems that enabling Home Sharing and/or hooking up my iPhone's Remote to iTunes causes Mac OS X Snow Leopard's firewall to freak out and keep nagging every time I launch iTunes to ask if I'd like it to accept incoming connections. If I turn off Home Sharing and forget all Remotes, the nag dialog no longer comes up. I could also disable the firewall, but I think that's a silly thing to do. iTunes is already in the firewall whitelist, so the only thing I know that could cause Mac OS X to nag is a bad application bundle code signature. I checked with this Terminal command:
        $ codesign -vvv /Applications/iTunes.app/
    And sure enough, this is what it outputs:
        /Applications/iTunes.app/: a sealed resource is missing or invalid
        /Applications/iTunes.app/Contents/Resources/English.lproj/AutofillSettings.nib/objects.xib: resource added
        /Applications/iTunes.app/Contents/Resources/English.lproj/iTunesDJSettings.nib/objects.xib: resource added
        /Applications/iTunes.app/Contents/Resources/English.lproj/MobilePhonePrefs.nib/objects.xib: resource added
        /Applications/iTunes.app/Contents/Resources/English.lproj/MobilePhoneSetup.nib/objects.xib: resource added
        /Applications/iTunes.app/Contents/Resources/English.lproj/UniversalAccess.nib/objects.xib: resource added
    I've tried reinstalling iTunes as suggested by this answer, but Mac OS X still nags about incoming connections and the exact same output is generated when I run the above command again. On my PC, Windows Firewall has never nagged whenever I turn on Home Sharing and hook up Remote on my iPhone. Both computers use iTunes 9.2.1. My Mac runs Mac OS X 10.6.4. Is there anything special I need to do that I might have missed? Or how do I resolve the issue? EDIT: I've updated to iTunes 10, but the nags on my Mac are still there and only go away if I turn off Home Sharing and Remote. EDIT 2: I've updated to Remote 2.0 on my iPhone, but the firewall nags are persisting. Has anyone else had this firewall issue at all?

    Read the article

  • Remote Desktop connection to vista vs. xp

    - by CMP
    I am trying to log into my work computer remotely. I am using Windows 7 on my laptop. I have created a VPN connection to the network, and I am making a Remote Desktop connection directly to the IP of my box (192.168.xxx.yyy). If I make a remote connection to a different box, running XP, it goes into Remote Desktop mode immediately and I see the Windows login dialog as I am used to seeing. If I try remoting to my box, which is running Vista, I do not see Remote Desktop mode, but an additional dialog on my local machine asking for my credentials. It defaults to my local username. It allows me to log in as a different user, but the domain it uses is still my local domain, not my work domain, so none of my usernames or passwords work. There doesn't appear to be a way to change the domain. Trying several more boxes, the behaviour differs between XP and Vista target machines. I feel like this must be a configuration issue, but I am not sure what the problem is. Any idea how I can connect?
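    A workaround that often helps when the credential prompt keeps offering the wrong domain is to spell the domain out explicitly - either type DOMAIN\username (or username@domain) in the prompt, or save it in the .rdp file. A sketch of the relevant .rdp entries, with placeholder names (WORKDOMAIN and myuser are not from the original post):
        full address:s:192.168.xxx.yyy
        username:s:myuser
        domain:s:WORKDOMAIN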

    Read the article

  • IPtables and Remote Desktop with Proxy

    - by Sebastian
    So I set up a Windows Web Server 2008 R2 VM in VirtualBox, currently using a bridged network. I can remote desktop to the machine hosting the VM (10.0.0.183) but cannot remote desktop to the VM itself (10.0.0.195). The remote port on the VM is set to 5003, and the VM is set up to accept remote connections (Windows side). We also use a proxy for our internet access, and I added these rules under NAT on our proxy box (CentOS 5):
        -A INPUT -p tcp --dport 3389 -j ACCEPT
        -A REROUTING -i ppp0 -p tcp --dport 3389 -j REDIRECT --to-port 5003
        -A FORWARD -d 10.0.0.195 --dport 5003 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    I've been trying for hours and hours and just cannot get it to work. I also used FreeDNS so that we can use a domain name to connect to this VM over the internet (the DNS points to our external IP address). If we can't get this right we will have to purchase a PPPoE line from an ISP to connect to this VM remotely, but I know there is an alternative route if I can just get this port forwarding right!
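    For reference, a hedged sketch of how this forwarding is normally written: the NAT chain is PREROUTING (there is no REROUTING chain), REDIRECT only remaps traffic to a local port on the proxy itself so forwarding to the VM needs DNAT, and the FORWARD rule should also name the protocol. Interface and addresses are taken from the question; adjust to the real setup:
        # on the CentOS proxy: send incoming RDP traffic on the external interface to the VM
        iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 3389 -j DNAT --to-destination 10.0.0.195:5003
        # allow the forwarded traffic through
        iptables -A FORWARD -p tcp -d 10.0.0.195 --dport 5003 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        # make sure IP forwarding is enabled at all
        echo 1 > /proc/sys/net/ipv4/ip_forward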

    Read the article

  • Multi-petabyte scale out storage solution [closed]

    - by Alex Yuriev
    Let's say that I need a single-namespace, scale-out, multi-petabyte object store with a file-system-like wrapper. What is currently out there that supports the following:
        - a single namespace that can take 1B files
        - support for multiple entry points using NFS
        - at least node-level replication (preferably node- and file-level replication)
        - online software upgrades
        - no "magic sauce" on the storage layer
    The following has been evaluated: Gluster & Lustre - just ick - fundamental lack of understanding of why online upgrades are mandatory. OneFS - we have it; it is smelling more and more like it hides a dead body under the hood. Other than MapR and ZFS, am I missing anything? P.S. Oh yes, I keep forgetting that the forums are for people to discuss whether a 2TB drive actually stores 2TB of info. My bad. Seriously though - how the heck can "meets the following requirements" be considered a "debate"? P.P.S. I did not throw an idiotic insult - I pointed out that this is actually an interesting question compared to a conversation about the storage capacity of a 2TB hard drive. It is not a question of what works better - it is a question asking whether I have missed any currently existing products that fit the criteria, where the criteria are clearly outlined. I got one answer below which included something I have not looked at in a long time, and it looks quite a bit more grown up than when I briefly looked at it before.

    Read the article

  • HTML5 web storage: can different websites overwrite each other’s data on a user’s computer?

    - by Deepak Mahalingam
    I have a few questions regarding the concept of HTML5 storage. I went through the W3C specification, books and tutorials on the topic, but I am still a bit unclear about certain concepts. Assume that I access Website A. Some JavaScript runs in my browser that sets a key-value pair, say ('username', 'deepak'). Then I access Website B, which also adds a key-value pair to localStorage as ('username', 'mahalingam'). How will the two be differentiated? Will Website B override the value set by Website A in my localStorage? How can we ensure that a website cannot erase all of my localStorage?

    Read the article

  • How to configure Remote BLOB Storage (RBS) with Microsoft Dynamics CRM 4.0?

    - by jk
    Hi. We have a working site for Dynamics CRM 4.0 in which we store images in the database. Now the database is growing very fast and the server is struggling, so I want to enable Remote BLOB Storage (RBS) with Dynamics CRM 4.0. I tried installing RBS for testing, but every guide I find covers configuration with SharePoint 2010, not with Dynamics CRM. Does anybody know how to install and configure it with Dynamics CRM 4.0? Does RBS work with the Standard Edition of SQL Server 2008? I followed these instructions to install, but they are for SharePoint: http://technet.microsoft.com/en-us/library/ee663474.aspx Any help is appreciated. Thanks

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspx
    Microsoft just released a bunch of new features for Azure on the 22nd, and the one I was most interested in is DocumentDB, a document NoSQL database service in the cloud.
    Quick Look at DocumentDB
    We can try DocumentDB from the new Azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In the resource group section we can select which region our DocumentDB will be located in. As with other Azure services, select the same location as the consumers of the DocumentDB, for example the website, web services, etc. After several minutes the DocumentDB will be ready. Click the KEYS button and we can find the URI and primary key, which will be used when connecting.
    Now let's open Visual Studio and try to use the DocumentDB we just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelease" in the NuGet Package Manager window since this library has not yet been released.
    Next we will create a new database and document collection under our DocumentDB account. The code below creates an instance of DocumentClient with the URI and primary key we just copied from the Azure portal, and creates a database and collection. It also prints the database and collection link strings, which will be used later to insert and query documents.
        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);
            Run(client).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static async Task Run(DocumentClient client)
        {
            var database = new Database() { Id = "testdb" };
            database = await client.CreateDatabaseAsync(database);
            Console.WriteLine("database link = {0}", database.SelfLink);

            var collection = new DocumentCollection() { Id = "testcol" };
            collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);
            Console.WriteLine("collection link = {0}", collection.SelfLink);
        }
    Below is the result from the console window. We need to copy the collection link string for future use. Now if we go back to the portal we will find a database listed with the name we specified in the code.
    Next we will insert a document into the database and collection we just created. In the code below we paste the collection link copied in the previous step and create a dynamic object with several properties defined. As you can see, we can add normal properties containing strings or integers, and we can also add complex properties, for example an array, a dictionary or an object reference, as long as they can be serialized to JSON.
        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            // collection link pasted from the result in previous demo
            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            // document we are going to insert to database
            dynamic doc = new ExpandoObject();
            doc.firstName = "Shaun";
            doc.lastName = "Xu";
            doc.roles = new string[] { "developer", "trainer", "presenter", "father" };

            // insert the document
            InsertADoc(client, collectionLink, doc).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }
    The insert code is very simple, as below - just provide the collection link and the object we are going to insert.
        static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc)
        {
            var document = await client.CreateDocumentAsync(collectionLink, doc);
            Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented));
        }
    Below is the result after the object has been inserted.
    Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us retrieve all documents in it.
        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            SelectDocs(client, collectionLink);

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static void SelectDocs(DocumentClient client, string collectionLink)
        {
            var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList();
            foreach(var doc in docs)
            {
                Console.WriteLine(doc);
            }
        }
    Since there's only one document in my collection, below is the result when I executed the code. As you can see, all properties, including the array, were retrieved at the same time. DocumentDB also attached some properties we didn't specify, such as "_rid", "_ts", "_self" etc., which are controlled by the service.
    DocumentDB Benefit
    DocumentDB is a document NoSQL database service. Unlike a traditional database, a document database is truly schema-free. In a nutshell, you can save anything in the same database and collection as long as it can be serialized to JSON. When you query the document database, all sub-documents are retrieved at the same time. This means you don't need to join other tables as you would with a traditional database. Document databases are very useful when we build high-performance systems with hierarchical data structures. For example, assume we need to build a blog system: there will be many blog posts, and each of them contains the content and comments. A comment can be commented on as well. If we were using a traditional database, say SQL Server, the database schema might be defined as two tables, Posts and Comments. When we need to display a post we need to load the post content from the Posts table, as well as the comments from the Comments table, and we also need to build the comment tree based on the CommentID field. But if we were using DocumentDB, all we need to do is save the post as a document with a list containing all comments. Under a comment, all sub-comments are a list inside it. When we display this post we just query the post document, and the content and all comments are loaded in the proper structure.
        {
            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
            "title": "xxxxx",
            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
            "postedOn": "08/25/2014 13:55",
            "comments":
            [
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:00",
                    "commentedBy": "xxx"
                },
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:10",
                    "commentedBy": "xxx",
                    "comments":
                    [
                        {
                            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                            "commentedOn": "08/25/2014 14:18",
                            "commentedBy": "xxx",
                            "comments":
                            [
                                {
                                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                                    "commentedOn": "08/25/2014 18:22",
                                    "commentedBy": "xxx"
                                }
                            ]
                        },
                        {
                            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                            "commentedOn": "08/25/2014 15:02",
                            "commentedBy": "xxx"
                        }
                    ]
                },
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:30",
                    "commentedBy": "xxx"
                }
            ]
        }
    DocumentDB vs. Table Storage
    DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is "when should we use DocumentDB rather than Table Storage?". Here are some ideas from me and some MVPs. First of all, they are different kinds of NoSQL database: DocumentDB is a document database while Table Storage is a key-value store. Second, Table Storage is cheaper. DocumentDB supports scaling out from one capacity unit to five during the preview period, and each capacity unit provides 10 GB of local SSD storage; the price is $0.73/day including a 50% discount. For the storage service the highest price is $0.061/GB, which is almost 10% of DocumentDB. Third, Table Storage provides local replication, geo-replication and read-access geo-replication, while DocumentDB doesn't. Fourth, there is a local emulator for Table Storage but none for DocumentDB; we have to connect to DocumentDB in the cloud even when developing locally. But DocumentDB supports some cool features that Table Storage doesn't: it supports stored procedures, triggers and user-defined functions; it supports rich indexing while Table Storage only supports indexing on the partition key and row key; and it supports transactions (Table Storage does as well, but restricted to the Entity Group Transaction scope). And lastly, Table Storage is GA while DocumentDB is still in preview.
    Summary
    In this post I gave a quick demonstration and introduction of the new DocumentDB service in Azure. It's very easy to interact with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. I then explained the concept and benefits of using a document database, and compared it with Table Storage.
    Hope this helps, Shaun
    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Article about Sun ZFS Storage Appliances

    - by Owen Allen
    Sun ZFS Storage Appliances are versatile storage systems. Discovering and managing them in Ops Center, though, makes them even more versatile. If you discover a Sun ZFS Storage Appliance in Ops Center 12c, you can create iSCSI and Fibre Channel LUNS, and make the LUNs available to server pools and virtualization hosts as a storage library. Barbara Higgins has written an excellent article that walks you through the process of setting up a Sun ZFS Storage Appliance and discovering and managing it in Ops Center. If you're looking into ways to make a Sun ZFS Storage Appliance work for you, it's worth a look.

    Read the article

  • Join Our Call: Sun Storage 2500-M2 Announcement

    - by user797911
    Oracle's Sun Storage 2500-M2 array brings together the latest Fibre Channel (FC) and SAS2 technologies with Oracle's Sun Storage Common Array software from Oracle to create a robust solution that’s equally adept in an entry-level storage area network (SAN) for the mid-size business and integrating into an existing storage network within the enterprise. The Sun Storage 2500-M2 replaces Sun's Storage 2500 array product line and is designed so that the customer may have a quick qualification time for fast and easy deployment in the traditional 2500 environments. Jun Jang, Oracle Principal Product Manager, will be hosting this 1 hour live call (a recording will be available); please join us to find out more:
    Event Date: 24-JUN-11
    Event Time: 08:00 am PST/PDT/4pm UK time
    Web Registration and Access: http://oukc.oracle.com/static09/opn/login/?t=livewebcast|c=1031672594
    Access for Mobile Devices: http://my.oracle.com/content/web/cnt636926
    Call Provider: Intercall
    International Participant Dial-In Number: 706-634-8508
    Additional International Dial-In Numbers Link: http://www.intercall.com/national/oracleuniversity/gdnam.html
    Dial-In Passcode: 96395

    Read the article

  • Join Our Call: Sun Storage 2500-M2 Announcement

    - by mseika
    Oracle's Sun Storage 2500-M2 array brings together the latest Fibre Channel (FC) and SAS2 technologies with Oracle's Sun Storage Common Array software from Oracle to create a robust solution that’s equally adept in an entry-level storage area network (SAN) for the mid-size business and integrating into an existing storage network within the enterprise. The Sun Storage 2500-M2 replaces Sun's Storage 2500 array product line and is designed so that the customer may have a quick qualification time for fast and easy deployment in the traditional 2500 environments. Jun Jang, Oracle Principal Product Manager, will be hosting this 1 hour live call (a recording will be available); please join us to find out more:
    24 Jun 2011, 08:00 am PST/PDT/4pm UK time
    Web Registration and Access
    Access for Mobile Devices
    International Participant Dial-In Number: 706-634-8508
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 6395

    Read the article

  • Azure Blob storage defrag

    - by kaleidoscope
    Blob Storage is really handy for storing temporary data structures during scaled-out distributed processing. Yet the lifespan of those data structures should not exceed that of the underlying operation, otherwise clutter and dead data can start filling up your Blob Storage. Temporary data in cloud computing is very similar to garbage collection in object-oriented languages: when it's not done automatically by the framework, temp data tends to leak. In particular, in cloud computing it's pretty easy to end up with storage leaks due to collection omission, app crashes or service interruption. All those events cause garbage to accumulate in your Blob Storage. It must also be noted that for most cloud apps, I/O costs usually dominate pure storage costs, so enumerating your whole Blob Storage to clean up the garbage is likely to be an expensive solution. Lokesh, M

    Read the article

  • recommendations for efficient offsite remote backup solution of vm's

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end for administration). I have two nearly identical boxes with AMD Phenom II X4s and Asus motherboards. Each has four 500 GB SATA2 HDDs: one for the OS and other data for the Proxmox install, and three using mdadm + DRBD + LVM to share 1.5 TB of storage between the two machines. I mount LVM images in KVM for all of the virtual machines. I currently have the ability to do live migration from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Windows 2008 with MS SQL Server). I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour. The much better solution would of course be something that allows me to instantly take the difference between two points in time (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols?) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So, ZFS, Zumastor or something else?
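    For reference, the ZFS send/receive idea described above looks roughly like this - a hedged sketch assuming the VM images live on a dataset named tank/vms and the offsite host also runs ZFS; names and timestamps are placeholders:
        # take a snapshot at the top of each hour
        zfs snapshot tank/vms@0700
        # send only the blocks changed since the previous snapshot, compressed, over ssh
        zfs send -i tank/vms@0600 tank/vms@0700 \
          | bzip2 -c \
          | ssh backup@offsite.example.com "bzcat | zfs receive -F backuppool/vms"
    The catch, as noted above, is that the VM images would have to sit on ZFS in the first place, which is what pulls the Nexenta/iSCSI storage servers into the picture.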

    Read the article

  • How to Use Windows 8's Storage Spaces to Mirror & Combine Drives

    - by Chris Hoffman
    “Storage Spaces” is a new feature in Windows 8 that can combine multiple hard drives into a single virtual drive. It can mirror data across multiple drives for redundancy or combine multiple physical drives into a single pool of storage. You can even create pools of storage larger than the amount of physical storage space you have available. When the physical storage fills up, you can plug in another drive and take advantage of it with no additional configuration required. Storage Spaces is similar to RAID or LVM on Linux.
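    Alongside the Control Panel UI, the same pool-and-space setup can be scripted with the storage cmdlets - a rough PowerShell sketch (the pool and space names are made up for the example; run in an elevated prompt on Windows 8 or later):
        # gather all drives that are eligible for pooling
        $disks = Get-PhysicalDisk -CanPool $true
        $sub = Get-StorageSubSystem
        # create a pool, then a mirrored space that uses all available capacity
        New-StoragePool -FriendlyName "HTGPool" -StorageSubSystemFriendlyName $sub.FriendlyName -PhysicalDisks $disks
        New-VirtualDisk -StoragePoolFriendlyName "HTGPool" -FriendlyName "MirrorSpace" -ResiliencySettingName Mirror -UseMaximumSize
        # bring the new virtual disk online, partition and format it
        Get-VirtualDisk -FriendlyName "MirrorSpace" | Get-Disk | Initialize-Disk -PassThru | New-Partition -AssignDriveLetter -UseMaximumSize | Format-Volume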

    Read the article

  • Oracle OpenWorld 2012 - Managing Storage in the Cloud

    - by jwalker
    At Oracle OpenWorld this year attendees will get experience using the Sun ZFS Storage Appliance during the Managing Storage in the Cloud Hands-On-Lab. Using Sun ZFS Storage, we will be provisioning Oracle Enterprise Linux Virtual Machines and filesystem shares that can be used with Oracle Database. We will also be using Oracle DTrace Analytics to analyze I/O workloads and drill down to see how the storage is really being used. Hope you can join us! Session ID: HOL10034 Session Title: Managing Storage in the Cloud Speakers: Brian Haskins, Nagendran J, Paul Johnson, Karlheinz Vogel and Jim Walker Venue and Room: Marriott Marquis - Salon 14/15 Date and Times: Monday October 1 - 3:15-4:15PM, Tuesday October 2 - 5:00-6:00PM Oracle OpenWorld Storage Sessions

    Read the article
