Search Results

Search found 14028 results on 562 pages for 'online storage'.

  • best storage option for recording live streaming video

    - by Alchemical
    We're creating a new web site that includes video chat capabilities. Wowza Server is being used for the streaming media. We would like the capability for users to record their video chats in certain circumstances. Is it feasible to record the videos to the same server running Wowza? Or, performance-wise, would it make more sense to store them on another server? I understand SANs are popular as well, but I'm a little concerned about the cost of those as we are on a fairly tight budget. Currently we have two servers with pretty decent RAID configurations for storage; one has 14TB and the other 6TB. I'm mostly concerned that doing the recording on the same server as the streaming server (or web server) could significantly degrade that server's primary function.

    Read the article

  • Storage product testing

    - by wildchild
    Hello, I know this is out of place (being an active member here, I am coming to the seniors for help), but I need some information regarding storage testing: testing of RAID arrays, SCSI, SAS, SATA, and also the tests carried out on a fabric manager (Cisco MDS series switches). I am aware that this is an administrative forum, and I would really appreciate it if you could direct me to the correct forum or links where I can learn these things. @moderators - sorry for posting in the wrong place; I will delete this as soon as I get the help. Thanks!

    Read the article

  • Provision storage in a SAN environment

    - by wildchild
    Hi, can somebody help me understand how LUN provisioning is done on a CLARiiON based on client requirements? Say a client needs one 50 GB LUN on the host (for an application) and two 350 GB LUNs (for the DB and related data) on the same host, and I have four RAID groups, with 200 GB free in the first, 50 GB in the second, and 400 GB in each of the remaining two, all built on 5 disks. I'm using RAID 5 only. How do I provision this keeping performance in mind (so that there is no performance hit)? I also need to understand the concept of the meta head. Please help me by elaborating on the above scenario. I am learning to provision storage in a non-production environment (before going ahead and working on the client's production server), and I have been learning a lot from SF. I request the seniors here to help me understand it better. Thanks for reading.

    Read the article

  • Intel Rapid Storage Technology - Raid5 is very slow

    - by Cederstrom
    Hi, I built a computer with a RAID 5 array using the motherboard's RAID controller (ASUS P7H57D-V EVO - Intel Rapid Storage Technology). Reads and writes are, however, very slow when using this RAID controller :( I am using Windows 2008 R2, and when using Windows software RAID the speed was fine - so there must be an issue with the controller? I'm using 6 disks of 2TB each. Does anyone have any idea why it's so slow, and how to fix it? I'd rather not pick the easy solution of "just buy a RAID controller" :| If you need more info about my setup, please just ask. Thanks :)

    Read the article

  • Storage disk format to be utilised by all OSes

    - by ClothSword
    I'm aiming to have a large internal HDD to store files, and several OSes on separate disks. They all need to read and write to one storage drive: Ubuntu (13.04+), Mac OS X (10.8+), Windows (7+). What format should the drive be? I would like to avoid buying third-party software. Here's what I have discerned so far: NTFS - can't be written to by Mac without buying third-party software? ext3 - Windows can't read it, third-party software is in development, and Mac has OSXFUSE. HFS+ - buy third-party software and/or faff around to get it working on Windows. exFAT - cross-platform, but breaks MS patents?

    Read the article

  • Storage setup for large files

    - by Mecca
    I need to store over 200TB of data (all types, the biggest being video files) and be able to access it over a local network. The files will be accessed for editing or searches. I don't need versioning, but a setup that would keep me safe from hard drive failures would be nice. Right now the content is on different hard drives, some external drives, some regular. I don't exclude the possibility of buying new/extra drives if necessary. If the files are ever exposed to the web, it won't be to the public, just a couple of people. I have no idea what to buy to make this happen. I see some NAS solutions on the internet like this http://www.bestbuy.com/site/a/2266043.p?id=1218317764591&skuId=2266043 but the storage is not enough, plus it doesn't seem to be scalable. What do you recommend? Thanks

    Read the article

  • Symantec (Veritas) Storage Foundation for Windows [closed]

    - by SvrGuy
    Does anyone out there have rough (I don't need exact) pricing for Symantec (used to be Veritas) Storage Foundation for Windows? It's for Windows Server 2008 R2. Ideally, I would love to know the cost of Storage Foundation for Windows, and also the price of the options (like VRR, HA, etc.) if you happen to know them. Getting the information out of a reseller is like pulling teeth. They want to meet with us and discuss our needs etc. My need is just to know whether it's $100, $500, $1,000 or $10,000 per server in small quantities (i.e. fewer than 20 licences). Arghh. Anyone know the rough prices?

    Read the article

  • Simple Central Storage for HA mail server

    - by jtnire
    Hi everyone, I will have 2 Postfix servers; one will be a backup of the other. What is the easiest method to provide central storage to both of these boxes? My infrastructure is very simple: just a lot of Xen hosts, so there is no SAN or anything. Each Xen host does have RAID1 though. I don't mind mounting NFS shares on each of those mail servers, as long as the NFS server isn't a single point of failure. Is there such a thing as redundant NFS? Any help would be appreciated. Thanks

    Read the article

  • Storage sizing for virtual machines

    - by njo
    I am currently doing research to determine the consolidation ratio my company could expect should we start using a virtualization platform. I find myself continually running into a dead end when researching how to translate observed performance (weeks of perfmon data) into HDD array requirements for a virtualization server. I am familiar with the concept of IOPS, but it seems to be an overly simplistic measurement that fails to take into account cache, write combining, etc. Is there a seminal work on storage array performance analysis that I'm missing? This seems like an area where hearsay and 'black magic' have taken over from cold, hard fact.
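    For what it's worth, the usual first pass from perfmon counters to a spindle count is simple arithmetic. The sketch below (Python) shows that first pass; the per-disk IOPS figures and RAID write penalties are assumed rules of thumb rather than measured values, and the calculation deliberately ignores exactly what the question worries about - controller cache and write coalescing - so treat it as a starting point, not an answer.

```python
# Rough spindle estimate from observed perfmon counters.
# Illustrative only: the per-disk IOPS figures and write penalties below are
# commonly quoted rules of thumb, not measurements, and the calculation
# ignores controller cache and write coalescing entirely.

RAID_WRITE_PENALTY = {"raid0": 1, "raid10": 2, "raid5": 4, "raid6": 6}
DISK_IOPS = {"7200rpm": 75, "10k": 130, "15k": 180}  # assumed planning numbers

def required_spindles(observed_iops, read_fraction, raid_level, disk_type):
    """Translate front-end IOPS (perfmon 'Disk Transfers/sec') into a
    back-end spindle count for a given RAID level and disk speed."""
    penalty = RAID_WRITE_PENALTY[raid_level]
    write_fraction = 1.0 - read_fraction
    backend_iops = observed_iops * (read_fraction + write_fraction * penalty)
    return backend_iops / DISK_IOPS[disk_type]

# Example: 800 consolidated IOPS at 70% reads, targeting RAID 10 on 10K disks
print(round(required_spindles(800, 0.70, "raid10", "10k"), 1))  # -> 8.0
```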

    Read the article

  • Data storage solutions for rapidly running out of space

    - by Grimlockz
    I have 2 web servers (one live and the other a backup); the issue is that our storage is rapidly running out. All the data on the server is used by our customers and new documents are uploaded to the server daily, so nothing can be deleted as it's always in use. We use a flat file structure with no database. I'm seeking solutions or ideas for the best place to move our data to. The data has to be secure and needs to run in a Linux environment. Not sure where to start - clusters, VMware, or other such solutions for huge file servers?

    Read the article

  • StorageWorks MSA60 and other storage-related questions

    - by Mejmo
    Hi, I do not have deep knowledge of the storage area, so sorry for asking evidently stupid questions :) We are thinking about getting an HP StorageWorks MSA60 for storing our VMs. Do we need another DL server with a controller so that we can use iSCSI? Do we need to get a P800 controller for that? I cannot really picture how it is all connected together... MSA60 - DL server with P800 controller - and the servers running the VMs connected over iSCSI to this DL server? Or does the MSA60 support iSCSI directly, so the DL server is not necessary? What is inside this MSA60? Is it possible to install an OS on it? Thank you.

    Read the article

  • Linqpad with Table Storage

    - by kaleidoscope
    LinqPad, as we all know, has been a wonderful tool for running ad-hoc queries. With Azure Table storage in the picture, LinqPad was no longer an option and we shifted focus to Cloud Storage Studio, only to realize the limited and strange querying capabilities of CSS. With some tweaking of LinqPad we can get the comfortable old shoe of ad-hoc queries back for Azure Table storage. Steps: 1. Start LinqPad. 2. Right-click in the query window and select “Query Properties”. 3. In the Additional References, add references to Microsoft.WindowsAzure.StorageClient, System.Data.Services.Client.dll and the assembly containing the implementation of the DataServiceContext class tied to the Azure table storage. 4. In the additional namespace imports, import the same three namespaces mentioned above. 5. Then we need to provide the following details: a. the table storage account name and shared key; b. the DataServiceContext implementing class in your code; c. a LINQ query. E.g. var storageAccountName = "myStorageAccount"; // Enter valid storage account name var storageSharedKey = "mysharedKey"; // Enter valid storage account shared key var uri = new System.Uri("http://table.core.windows.net/"); var storageAccountInfo = new CloudStorageAccount(new StorageCredentialsAccountKey(storageAccountName, storageSharedKey), false); var serviceContext = new TweetPollDataServiceContext(storageAccountInfo); // Specify the DataServiceContext implementation // The query var query = from row in serviceContext.Table select row; query.Dump(); Sarang, K

    Read the article

  • PARTNER WEBCAST (June 4): Enhance Customer experience with Nimble Storage SmartStack for Oracle with Cisco

    - by Zeynep Koch
    Live Webcast: Enhance Customer Experience with Nimble Storage SmartStack for Oracle with Cisco - a webcast for resellers who sell Oracle workloads to customers. Wednesday, June 4, 2014, 8:00 AM PDT / 11 AM EDT. Register today. Nimble Storage SmartStack™ for Oracle provides a pre-validated reference architecture that speeds deployments and minimizes risk. IT and Oracle administrators and architects realize the importance of the underlying operating system, virtualization software, and storage in maintaining service levels and staying on budget. In this webinar, you will learn how Nimble Storage SmartStack for Oracle provides a converged infrastructure for Oracle database online transaction processing (OLTP) and online analytical processing (OLAP) environments with Oracle Linux and Oracle VM. SmartStack delivers the performance and reliability needed for deploying Oracle on a single symmetric multiprocessing (SMP) server or for running Oracle Real Application Clusters (RAC) on multiple nodes. Nimble Storage SmartStack for Oracle with Cisco can help you provide: improved Oracle performance; stress-free data protection and DR of your Oracle database; higher availability and uptime; accelerated Oracle development and improved testing - all for dramatically less than what you’re paying now. Presenters: Doan Nguyen, Senior Principal Product Marketing Director, Oracle; Vanessa Scott, Business Development Manager, Cisco; Ibrahim “Ibby” Rahmani, Product and Solutions Marketing, Nimble Storage. Join this event to learn from our Nimble Storage and Oracle experts how to optimize your customers' Oracle environments. Register today to learn more!

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as the storage, service bus, access control, etc. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available on NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let’s first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code can also be used for consuming WAS when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add the “node.exe” and “index.js”, install the “express” and “node-sqlserver” modules, and mark all files as “Copy always”. In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named “azure”, which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as “Copy always”. You can use my “Copy all always” tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open “index.js” and let’s continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
    And if we publish our application to Azure, it works with WASD and the storage service through the cloud configurations.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set “node.exe” as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it’s very simple, easy to learn and deploy, and uses a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it’s even more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement. “node-sqlserver” doesn’t support SQL parameters yet. “azure” does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my “Copy all always” tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Temporary file storage - script for webserver

    - by Chris
    I'm looking for a script (preferably PHP) which I can use to temporarily exchange files. I had such a script before (it was written in Flash), but - damn - I just can't find it anymore. Here are the features I'm looking for: I have a logon, which allows me to create "upload spaces". For these "upload spaces" I define how long the space will be available (until the files are deleted) and who can access it - by typing in an email address, so that user gets a link to the "online space", a user ID and a password. The other user then clicks on the link and uploads the file, and I get (preferably) an email whenever there are any file changes. I also get an email before the content gets deleted. Now, 3 additional things: it should be open source, it should run on Linux (preferably LAMP), and no, I don't wanna use Dropbox or similar :p Thanks in advance everybody, Cheers Chris

    Read the article

  • Enterprise online backup providers

    - by PHLiGHT
    We've used Iron Mountain's LiveVault service but found that it was only good for file-level backups. We liked how it backed up every 15 minutes. It doesn't support Exchange 2007-2010, and the web interface was very poor. What is everyone else using? The most notable names in online backup, such as Mozy and Carbonite, don't really seem suitable for larger companies. We have SQL, Exchange and SharePoint servers and are looking to virtualize in the near future. Until then, bare-metal restore capability would be nice. We are currently using Backup Exec 12.5, but that can be so troublesome at times. We have about 2 TB of data; 1 TB is archival data.

    Read the article

  • Searching for online database software/CMS

    - by ButterdBread
    I am searching for software or a CMS that manages and displays large online databases, as some kind of frontend to MySQL or any other database. It should be accessible through the browser and be as secure as possible (offering a login). The data I'd like to store would be personal information such as name, address and birthday, and I'd need to be able to add custom fields as well. Also, forms and the possibility to download the data as an Excel(?) table would be great. phpMyAdmin is not an option; it should be similar to a CRM but more closely adapted to managing database tables, searching for entries and filtering data. It should be possible to have many user accounts with different rights, with each of them being able to access certain parts of the data and enter their own data. Is there something out there that comes close to what I imagine? I appreciate any help!

    Read the article

  • USB drives not recognized all of a sudden

    - by Siddharth
    I have tried most of the advice on askubuntu and other sites, from enabling usb_storage to fdisk -l, but I am unable to find steps to get it working again. lsusb results: Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 002: ID 413c:3012 Dell Computer Corp. Optical Wheel Mouse Bus 005 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard Bus 001 Device 005: ID 8564:1000 dmesg | tail reports: [ 69.567948] usb 1-4: USB disconnect, device number 4 [ 74.084041] usb 1-6: new high-speed USB device number 5 using ehci_hcd [ 74.240484] Initializing USB Mass Storage driver... [ 74.256033] scsi5 : usb-storage 1-6:1.0 [ 74.256145] usbcore: registered new interface driver usb-storage [ 74.256147] USB Mass Storage support registered. [ 74.257290] usbcore: deregistering interface driver usb-storage fdisk -l reports: Device Boot Start End Blocks Id System /dev/sda1 * 2048 972656639 486327296 83 Linux /dev/sda2 972658686 976771071 2056193 5 Extended /dev/sda5 972658688 976771071 2056192 82 Linux swap / Solaris I think I need steps to install and get the usb_storage module working. Edit: I tried sudo modprobe -v usb-storage, which reports: insmod /lib/modules/3.2.0-48-generic-pae/kernel/drivers/usb/storage/usb-storage.ko Still no USB driver is mounted, nor does a device show up in /dev. Any step-by-step process to debug and fix this would be really helpful. Thanks.

    Read the article

  • What online games would let me practice AI development?

    - by Myn
    I am working on a project experimenting with Artificial Intelligence design methodologies for online world avatars. "Online world" here is quite open to interpretation; Second Life is just as applicable as Counter-Strike, for example. To carry out these experiments, I must first develop an intelligent agent for the world in question. However, I am honestly quite stuck as to which game I could use for this. My preference was to develop an intelligent "bot" to play an MMORPG, but the legal restrictions of such games prevent me. Likewise, with most FPS games the use of an intelligent agent in place of a human player is considered cheating. The alternative, of course, is to create an NPC bot: an intelligent agent that populates the world alongside the player(s) rather than replacing a particular player. However, I'm struggling to find a game that would enable me to create an intelligent opponent either. I suppose the main requirement would be a game that allows a third-party program to use the function calls usually utilised by players and to read feedback on the state of the world. Quake III and Unreal Tournament have been suggested before, but they have already been the subject of work on this research project. Short of writing my own online game from scratch, what games would allow me, through middleware, an API, or otherwise, to create either an artificially intelligent player or a bot?

    Read the article

  • Intel Rapid Storage Technology (pre-OS) driver installation

    - by Nero theZero
    My desktop machine is built on a Gigabyte GA-Z87-UD3H, and Gigabyte provides the latest driver for Intel Rapid Storage Technology (IRST), which I installed after installing the OS. The same goes for my Lenovo ThinkPad T420. For both machines, checking the controller device under the IDE ATA/ATAPI Controllers section in Device Manager, I see the driver has been updated to the latest version. I set the SATA controller to AHCI in the BIOS. On the desktop machine I have one WD 2TB Black and one WD 3TB Green. I don't use RAID, and have no plans to use it in the near future, but according to Intel, IRST improves performance in single-disk scenarios too. Now I have the following questions: What is the actual purpose of the IRST (pre-OS install) driver that isn't served by the post-OS driver that I installed? There must be some difference, otherwise there wouldn't be a pre-OS version of the driver, right? In the pre-OS procedure (loading the drivers at OS-installation time), after successfully completing the OS installation, do I need that post-OS driver? After installing from that one I got a quick-launch icon that runs the IRST configuration application - where do I get that after installing the pre-OS driver? As it is "pre-OS", when I load it at OS-installation time, does it update anything at the BIOS level or anywhere other than the HDD? I ask because I'm going to dual-boot Windows 7 with Windows 8.1, and after installing Windows 7, when I install Windows 8.1 and load the IRST driver for that, is there any chance of any "overwriting" or OS incompatibility? In short, is there anything specific to follow while installing the second OS?

    Read the article

  • Is basing storage requirements on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003, and is going to be migrated on new hardware to SBS 2008. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of activity is a 6 disk 7200RPM RAID 6 and it struggles to keep up during high traffic times (idle time often only 10-20%; response times peaking 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at 70/30 read to write ratio). I'm considering using a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters for my calculations above, I'm getting 583 average random IOPS / sec. Granted SBS 2008 is not the same beast as 2003, but I'd like to make the assumption that it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk and there's no ISA server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read so much about recommended disk configurations for various products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O overhead, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, good performing array. So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (given similar technology, not going from DAS to SAN or anything)
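    As a sanity check on the arithmetic in the question, the rule-of-thumb calculation usually looks like the sketch below (Python). The per-disk IOPS figures and write penalties are assumptions - commonly quoted planning numbers, not measurements - so the results will not match the ~245 and ~583 figures above exactly; the point is the shape of the calculation, not the exact output.

```python
# Sketch of the front-end random IOPS an array can sustain at a given
# read/write mix. Per-disk figures and penalties are assumed planning
# numbers; controller cache, stripe size and workload will move the
# real result substantially.

def array_frontend_iops(disks, per_disk_iops, write_penalty, read_fraction):
    raw = disks * per_disk_iops
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_fraction * write_penalty)

# Current array: 6 x 7200RPM (~75 IOPS each), RAID 6 (penalty 6), 70/30 R/W
print(round(array_frontend_iops(6, 75, 6, 0.70)))     # ~180
# Proposed array: 10 x 10K (~130 IOPS each), RAID 10 (penalty 2), 70/30 R/W
print(round(array_frontend_iops(10, 130, 2, 0.70)))   # ~1000
```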

    Read the article

  • online backup plan for a home office with servers

    - by TiernanO
    So, I am in the process of tweaking my spending and I need to change my backup plan... I am currently using a mix of JungleDisk and ZManda ZCB to back up files on my MacBook Pro, my main Windows Server workstation, a dedicated Windows Server in a datacenter, and various other machines and file sources. The problem is the cost: this month, it has cost me about $90 to back up a little over 500GB... This amount of data will increase over time too, since I am backing up photos (24MB RAW images + 4-8MB JPEGs), videos (various cameras shooting 720p and 1080p), music, movies, TV shows and apps from iTunes (though with iTunes in the cloud, this might not need to be backed up again) and source code... I have looked at the likes of Mozy, CrashPlan+ and Pro, Backblaze and Carbonite, but each has its problems: Mozy seems overly expensive per gig at 50c; CrashPlan won't sell to me since I am outside the US (they hide it on their site... hidden in the FAQ section!); Backblaze doesn't support Windows Server; Carbonite business pricing is $600 up front for 500GB of storage, and for $229 they will not back up Windows Servers. So, other than those, JungleDisk (at 15c per gig) or ZManda (also at 15c per gig), what other options are there? What are other people using?
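    Since the comparison here is mostly per-gigabyte arithmetic, a tiny script keeps it honest as the data set grows. This is just a sketch using the rates quoted in the post; it assumes flat per-GB monthly pricing and ignores upload bandwidth, retention policies and per-machine fees, which is where these services actually differ.

```python
# Quick monthly-cost comparison using the per-GB rates quoted above.
# Assumes flat per-gigabyte pricing; upload bandwidth, retention and
# per-machine fees are ignored, so treat it as a rough comparison only.

data_gb = 500          # today's data set; the post expects this to grow
rates_per_gb = {       # USD per GB per month, as quoted in the post
    "JungleDisk": 0.15,
    "ZManda ZCB": 0.15,
    "Mozy": 0.50,
}

for provider, rate in sorted(rates_per_gb.items(), key=lambda kv: kv[1]):
    print(f"{provider:12s} ~${data_gb * rate:7.2f}/month for {data_gb} GB")
```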

    Read the article

  • Enterprise class storage best practices

    - by churnd
    One thing that has always perplexed me is storage best practices. Filesystems brag about how they can be petabytes or exabytes in size. Yet I do not know many sysadmins who are willing to let a single volume grow over several terabytes. I do know the primary reason behind this is how long it would take to rebuild the array should a drive fail. The more drives in a single LUN, the longer this takes and the greater your risk of losing another drive while the rebuild is taking place. Then there are usage reasons. Admins will carve out a LUN based on how much space they think needs to be allocated to the project. It seems more practical to me for the LUN to be one large array and to use quotas. I understand this wouldn't satisfy every requirement (iSCSI), but I see a lot of NAS systems (NFS) managed this way. I also understand that the underlying volumes can be grown or shrunk as needed quite easily, but wouldn't it be less "risky" to use quotas rather than manipulating volumes and bringing possible data loss into the equation? There may be some other reasons I'm missing, so please enlighten me. Can we not expect filesystems to ever be so large? Are we waiting for the hardware to get faster to cut down on rebuild times?
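    The rebuild-window concern can be put in rough numbers. The back-of-the-envelope sketch below (Python) uses an assumed rebuild rate and annualized failure rate and treats drive failures as independent - which drives from the same batch in the same enclosure often are not - so it likely understates the real risk; it is only meant to show why bigger drives and wider LUNs make admins nervous.

```python
# Back-of-the-envelope for the rebuild-window argument: bigger drives
# rebuild longer, and more surviving drives mean more chances of a
# second failure while the array is degraded. The AFR and rebuild rate
# are assumptions, and failures are treated as independent.

def second_failure_risk(drives, drive_tb, rebuild_mb_s=50.0, afr=0.03):
    rebuild_hours = drive_tb * 1e6 / rebuild_mb_s / 3600.0
    surviving = drives - 1
    p_single = afr * rebuild_hours / (365.0 * 24.0)  # per-drive chance in window
    p_any = 1.0 - (1.0 - p_single) ** surviving
    return rebuild_hours, p_any

for drives, size_tb in [(6, 2), (12, 4), (24, 4)]:
    hours, risk = second_failure_risk(drives, size_tb)
    print(f"{drives:2d} x {size_tb}TB: rebuild ~{hours:5.1f} h, "
          f"second-failure risk ~{risk * 100:.2f}%")
```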

    Read the article

  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks total behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this, in parallel for each disk: dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000 However, we have been unable to scale the results from one array (maximum throughput of 1.3 GB/s) to two (almost the same throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly. I have two questions: Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN % numbers in iotop, which I find hard to explain because the target is /dev/null. How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameters that we have overlooked?
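    One way to make the parallel-dd experiment repeatable while chasing the scaling problem is to drive it from a small script that launches one reader per multipath device and sums the reported throughput, as in the sketch below. The device paths are placeholders for this environment, and iflag=direct is added so the 128 GB of page cache doesn't flatter the numbers; a purpose-built tool such as fio would normally be a better choice, but this mirrors the test already being run.

```python
# A repeatable version of the parallel-dd experiment described above:
# launch one dd per multipath device, wait for all of them, and sum
# the throughput they report. Device paths are placeholders.

import re
import subprocess
import time

DEVICES = ["/dev/mapper/mpath1", "/dev/mapper/mpath2"]  # placeholder paths
BLOCK_SIZE = "1M"
COUNT = 3000  # 3000 x 1 MiB per device

def run_parallel_dd():
    start = time.time()
    procs = [
        subprocess.Popen(
            ["dd", f"if={dev}", "of=/dev/null", f"bs={BLOCK_SIZE}",
             f"count={COUNT}", "iflag=direct"],  # direct I/O bypasses page cache
            stderr=subprocess.PIPE, text=True)
        for dev in DEVICES
    ]
    total_mib = 0.0
    for p in procs:
        _, err = p.communicate()
        # dd prints "<N> bytes (...) copied, ..." on stderr
        m = re.search(r"^(\d+) bytes", err, re.MULTILINE)
        if m:
            total_mib += int(m.group(1)) / 2**20
    elapsed = time.time() - start
    print(f"{total_mib:.0f} MiB in {elapsed:.1f} s -> "
          f"{total_mib / elapsed:.0f} MiB/s aggregate")

if __name__ == "__main__":
    run_parallel_dd()
```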

    Read the article
