Search Results

Search found 6770 results on 271 pages for 'azure storage'.


  • Azure Deployment - Be careful adding a Remote Desktop connection to deployments that you want to swap staging with live…

    - by joelvarty
    Adding Remote Desktop capability adds an external endpoint to the deployment, meaning it may have more endpoints than your current live deployment.  When there is a difference in the number of endpoints between a staging and a live deployment, you can’t swap them in the Azure portal.  Oops. So you have to add the remote desktop capability to your live deployment first if you want to do this… more later – joel

    Read the article

  • Windows Azure and Server App Fabric – kinsmen or distant relatives?

    - by kaleidoscope
    If you are into Windows Azure, it would be rather demeaning to ask whether you are aware of Windows Azure App Fabric. Just in case you are not: Windows Azure App Fabric provides a secure connectivity service by means of which developers can build distributed applications and services that work across network and organizational boundaries in the cloud.

    But some of you may have heard of another, similar term floating around forums and blog posts: Windows Server App Fabric. The momentary déjà vu you might feel upon encountering it is not unheard of in cloud computing circles:

    http://social.msdn.microsoft.com/Forums/en/netservices/thread/5ad4bf92-6afb-4ede-b4a8-6c2bcf8f2f3f
    http://forums.virtualizationtimes.com/session-state-management-using-windows-server-app-fabric

    Many have fallen prey to this ambiguous nomenclature, but it's not without purpose. First announced at PDC 2009, Windows Server AppFabric is a set of application services focused on improving the speed, scale, and management of web, composite, and enterprise applications. Initially codenamed "Dublin", Windows Server App Fabric provides add-ons such as monitoring, tracking, and persistence for your hosted workflows and services, without the developer having to worry about these functionalities. Along with this, it also provides distributed in-memory caching features from Velocity. In short, it is a healthy equivalent of Windows Azure App Fabric minus the cloud part.

    So why bring this up while talking about Windows Azure? Well, apart from their similar last names, these powers are soon to be combined, if Microsoft's roadmap is to be believed: "Together, Windows Server AppFabric and Windows Azure platform AppFabric provide a comprehensive set of services that help developers rapidly develop new applications spanning Windows Azure and Windows Server, and which also interoperate with other industry platforms such as Java, Ruby, and PHP."

    One of the most powerful features of Windows Server App Fabric is its distributed caching mechanism, which, if appropriately leveraged with Windows Azure App Fabric, could very well mean a revolution in session management techniques for the Azure platform. Well, Microsoft, we do have our fingers crossed… Read on:

    http://blogs.technet.com/windowsserver/archive/2010/03/01/windows-server-appfabric-beta-2-available.aspx

    Read the article

  • The Shift: how Orchard painlessly shifted to document storage, and how it’ll affect you

    - by Bertrand Le Roy
    We’ve known it all along. The storage for Orchard content items would be much more efficient using a document database than a relational one. Orchard content items are composed of parts that serialize naturally into infoset kinds of documents. Storing them as relational data like we’ve done so far was unnatural and requires the data for a single item to span multiple tables, related through 1-1 relationships. This means lots of joins in queries, and a great potential for Select N+1 problems.

    Document databases, unfortunately, are still a tough sell in many places that prefer the more familiar relational model. Being able to x-copy Orchard to hosters has also been a basic constraint in the design of Orchard. Combine those with the necessity at the time to run in medium trust, and with license compatibility issues, and you’ll find yourself with very few reasonable choices. So we went, a little reluctantly, for relational SQL stores, with the dream of one day transitioning to document storage. We have played for a while with the idea of building our own document storage on top of SQL databases, and Sébastien implemented something more than decent along those lines, but we had a better way all along that we didn’t notice until recently…

    In Orchard, there are fields, which are named properties that you can add dynamically to a content part. Because they are so dynamic, we have been storing them as XML into a column on the main content item table. This infoset storage and its associated API are fairly generic, but were only used for fields. The breakthrough was when Sébastien realized how this existing storage could give us the advantages of document storage with minimal changes, while continuing to use relational databases as the substrate.

        public bool CommercialPrices {
            get { return this.Retrieve(p => p.CommercialPrices); }
            set { this.Store(p => p.CommercialPrices, value); }
        }

    This code is very compact and efficient because the API can infer from the expression what the type and name of the property are. It is then able to do the proper conversions for you. For this code to work in a content part, there is no need for a record at all. This is particularly nice for site settings: one query on one table and you get everything you need.

    This shows how the existing infoset solves the data storage problem, but you still need to query. Well, for those properties that need to be filtered and sorted on, you can still use the current record-based relational system. This of course continues to work. We do however provide APIs that make it trivial to store into both record properties and the infoset storage in one operation:

        public double Price {
            get { return Retrieve(r => r.Price); }
            set { Store(r => r.Price, value); }
        }

    This code looks strikingly similar to the non-record case above. The difference is that it will manage both the infoset and the record-based storages. The call to the Store method will send the data in both places, keeping them in sync. The call to the Retrieve method does something even cooler: if the property you’re looking for exists in the infoset, it will return it, but if it doesn’t, it will automatically look into the record for it. And if that wasn’t cool enough, it will take that value from the record and store it into the infoset for the next time it’s required.

    This means that your data will start automagically migrating to infoset storage just by virtue of using the code above instead of the usual:

        public double Price {
            get { return Record.Price; }
            set { Record.Price = value; }
        }

    As your users browse the site, it will get faster and faster as Select N+1 issues optimize themselves away. If you preferred, you could still have explicit migration code, but it really shouldn’t be necessary most of the time. If you do already have code using QueryHints to mitigate Select N+1 issues, you might want to reconsider those, as with the new system you’ll want to avoid joins that you don’t need for filtering or sorting, further optimizing your queries.

    There are some rare cases where the storage of the property must be handled differently. Check out this string[] property on SearchSettingsPart, for example:

        public string[] SearchedFields {
            get {
                return (Retrieve<string>("SearchedFields") ?? "")
                    .Split(new[] {',', ' '}, StringSplitOptions.RemoveEmptyEntries);
            }
            set { Store("SearchedFields", String.Join(", ", value)); }
        }

    The array of strings is transformed by the property accessors into and from a comma-separated list stored in a string. The Retrieve and Store overloads used in this case are lower-level versions that explicitly specify the type and name of the attribute to retrieve or store.

    You may be wondering what this means for code or operations that look directly at the database tables instead of going through the new infoset APIs. Even if there is a record, the infoset version of the property will win if it exists, so it is necessary to keep the infoset up to date. It’s not very complicated, but definitely something to keep in mind. The original post shows what a product record looks like in Nwazet.Commerce, and the same data in the infoset. The infoset is stored in Orchard_Framework_ContentItemRecord or Orchard_Framework_ContentItemVersionRecord, depending on whether the content type is versionable or not. A good way to find what you’re looking for is to inspect the record table first, as it’s usually easier to read, and then get the item record with the same id. Here is the detailed XML document for this product:

        <Data>
          <ProductPart Inventory="40" Price="18" Sku="pi-camera-box" OutOfStockMessage=""
                       AllowBackOrder="false" Weight="0.2" Size="" ShippingCost="null" IsDigital="false" />
          <ProductAttributesPart Attributes="" />
          <AutoroutePart DisplayAlias="camera-box" />
          <TitlePart Title="Nwazet Pi Camera Box" />
          <BodyPart Text="[...]" />
          <CommonPart CreatedUtc="2013-09-10T00:39:00Z" PublishedUtc="2013-09-14T01:07:47Z" />
        </Data>

    The data is neatly organized under each part. It is easy to see how that document is all you need to know about that content item, all in one table. If you want to modify that data directly in the database, you should be careful to do it in both the record table and the infoset in the content item record. In this configuration, the record is now nothing more than an index, and will only be used for sorting and filtering. Of course, it’s perfectly fine to mix record-backed properties and record-less properties on the same part. It really depends what you think must be sorted and filtered on. In turn, this potentially simplifies migrations considerably.

    So here it is, the great shift of Orchard to document storage, something that Orchard has been designed for all along, and that we were able to implement with a satisfying and surprising economy of resources. Expect this code to make its way into the 1.8 version of Orchard when that’s available.

    Read the article

  • Azure SDK causes Node.js service bus call to run slow

    - by PazoozaTest Pazman
    I am using this piece of code to call the Service Bus queue from my node.js server, running locally using WebMatrix. I have also uploaded it to Windows Azure Web Sites, and it still performs slowly.

        var sb1 = azure.createServiceBusService(config.serviceBusNamespace, config.serviceBusAccessKey);
        sbMessage = {
          "Entity": {
            "SerialNumbersToCreate": '0',
            "SerialNumberSize": config.usageRates[3],
            "BlobName": 'snvideos' + channel.ChannelTableName,
            "TableName": 'snvideos' + channel.ChannelTableName
          }
        };
        sb1.getQueue('serialnumbers', function(error, queue) {
          if (error === null) {
            sb1.sendQueueMessage('serialnumbers', JSON.stringify(sbMessage), function(error) {
              if (!error)
                res.send(req.query.callback + '({data: ' + JSON.stringify({ success: true, video: newVideo }) + '});');
              else
                res.send(req.query.callback + '({data: ' + JSON.stringify({ success: false }) + '});');
            });
          } else
            res.send(req.query.callback + '({data: ' + JSON.stringify({ success: false }) + '});');
        });

    It can take up to 5 seconds before the server responds back to the client with the result. When I comment out the sb1.getQueue('serialnumbers', ...) call and just return without sending a queue message, it responds in less than 1 second. Why is that? Is my approach to using the Azure SDK service bus correct? Any help would be appreciated.
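    A hedged aside, not a vetted answer: one plausible contributor to the delay is that getQueue is itself a round trip to the service on every request, on top of the send. If the 'serialnumbers' queue is known to exist already, the message can be sent directly. A minimal sketch along those lines, reusing the names from the question (azure, config, and sbMessage are assumed to be as above; enqueue is a hypothetical helper):

        // Create the client once at startup rather than per request.
        var azure = require('azure');
        var sb1 = azure.createServiceBusService(config.serviceBusNamespace, config.serviceBusAccessKey);

        // Assuming the queue is already provisioned, skip getQueue and send directly:
        // one call to the service instead of two per request.
        function enqueue(sbMessage, done) {
          sb1.sendQueueMessage('serialnumbers', JSON.stringify(sbMessage), function (error) {
            done(!error); // wire res.send(...) here as in the question
          });
        }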

    Read the article

  • Storage subsystem borking after server restart (all on a Parallel SCSI bus)

    - by Dat Chu
    I have a server (with a SCSI HBA) connected to two Promise VTrak M310p RAID enclosures on the same bus. Everything works fine until I have to restart my server. Once restarted, the server can no longer communicate with the enclosures: lots of read errors and bus resets. I have to turn off both enclosures, then turn off the server, then turn on the enclosures, then turn on the server for things to work. I don't believe this is normal behavior; what could I be missing?

    Read the article

  • Can IIS (Ideally Azure) do SSL Proxying?

    - by Acoustic
    My team has been asked to add a new feature to a project we're working on, and none of us can find authoritative details on whether it's possible with Windows/IIS. The short of it is that we're hoping to have customers update their DNS with a CNAME record to point their website to our server instead of theirs (the whys are trivial: it's what the app does on behalf of your site). We're using a reverse proxy with several custom modules to serve particular content from the original servers. So far everything works perfectly until we encounter SSL. Is there a way to have IIS serve up an SSL certificate from another server? In other words, is there a way to be a trusted man in the middle? I'm hoping that's possible, so that we don't have to require all our clients to re-issue their SSL certs. Frankly, we don't want to have to manage hundreds of certs. I'd also like to avoid a UCC situation if there's a way to, because it seems to require re-creating the cert each time a client is added. So, any pointers on proxying/hosting SSL (or even dynamic SSL hosting like http://www.globalsign.com/cloud/) would be appreciated.

    Read the article

  • needing storage integrity (write/read) test - for BASH

    - by Mr. Bash
    I'm in need of shell scripts / bash commands to verify the data integrity of local hard drives, USB drives, etc. Something like the well-known h2testw (www.heise.de/download/h2testw), or at least something common within the repositories. (h2testw writes a specific data string over and over onto the medium, then reads it again to verify that it was written correctly, and displays write/read time and speed.) Please, no

        dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k

    since it won't verify that everything was written correctly; it only tests whether reading and writing to the device succeed. So far I'm not too happy with badblocks -w -v /dev/sdx1 either, since it seems rather slow and I don't know exactly what it writes, or whether it considers wear leveling on flash media. There is also a program named F3 (http://oss.digirati.com.br/f3/) that needs to be compiled. Designed after h2testw, the concept sounds interesting; I'd just rather have it as a ready-to-go bash script.
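    For illustration, the write-then-verify idea behind h2testw is simple to sketch. Below is a minimal, hedged illustration of the algorithm in Node.js rather than the requested bash (the file name, block size, and pattern are arbitrary assumptions, and reads may be served from the page cache rather than the medium unless the test file exceeds RAM):

        // writeverify.js -- sketch of the h2testw idea: write tagged pattern
        // blocks to a file on the target filesystem, then read back and compare.
        var fs = require('fs');

        var path = process.argv[2];                  // e.g. /mnt/usb/testfile (hypothetical)
        var blocks = parseInt(process.argv[3] || '64', 10);
        var block = Buffer.alloc(1024 * 1024);       // 1 MiB blocks, arbitrary choice
        for (var i = 0; i < block.length; i++) block[i] = i % 251;

        // Write phase: tag each block with its index so a fake-capacity device
        // that wraps around cannot return the same data for two different blocks.
        var fd = fs.openSync(path, 'w');
        var t0 = Date.now();
        for (var b = 0; b < blocks; b++) {
          block.writeUInt32BE(b, 0);
          fs.writeSync(fd, block, 0, block.length);
        }
        fs.fsyncSync(fd);                            // push data toward the medium
        fs.closeSync(fd);
        console.log('write: ' + (blocks / ((Date.now() - t0) / 1000)).toFixed(1) + ' MiB/s');

        // Verify phase: recompute each block's expected content and compare.
        fd = fs.openSync(path, 'r');
        var check = Buffer.alloc(block.length);
        t0 = Date.now();
        for (b = 0; b < blocks; b++) {
          fs.readSync(fd, check, 0, check.length, null);
          block.writeUInt32BE(b, 0);
          if (!check.equals(block)) { console.error('mismatch in block ' + b); process.exit(1); }
        }
        fs.closeSync(fd);
        console.log('read: ' + (blocks / ((Date.now() - t0) / 1000)).toFixed(1) + ' MiB/s, all blocks verified');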

    Read the article

  • Windows Storage Server 2008 hangs at logon

    - by ErJab
    We have a Dell PowerVault NX-3000 server running Windows Server 2008. Every now and then, when I try to log in, the server seems to hang at the Welcome screen after I type in the password. However, all other services on the server are running fine: users are able to print off the print server and access their files. It just won't let me log in. Any idea why this is happening? P.S.: I can't look at the server logs, because it won't let me log in in the first place. Remote administration is also disabled on the server, so I can't use remote administration tools to look at the logs.

    Read the article

  • Home media storage solution

    - by Dan
    I record lots of personal HD film footage and am looking for a cheap way to store all of this. I take ~120 GB of footage each month, so something expandable would be nice; something that might be able to hold 6+ SATA drives. There is a low load requirement, as there is never more than a user or two, but it should be able to keep up with streaming two simultaneous HD videos. I don't really want to spend more than $200-$300 on top of the $900 I am thinking of spending for six 2 TB SATA drives at $150 apiece, but I am willing to pay extra for a quality solution. Should I get a cheap NAS server? A cheap multi-drive external enclosure? Should I just get some used systems off Craigslist? If it is an independent system, I'll probably just throw Ubuntu on it, since I can maintain that well. It's easy to do a software RAID from Ubuntu too, if I choose to go that way. Thanks

    Read the article

  • Unmount Mass Storage USB Device from the Command Line in Linux

    - by Casey
    I've searched high and low and can't figure this one out. I have an older Olympus camera (2001 or so). When I plug in the USB connection, I get the following log output:

        $ dmesg | grep sd
        [20047.625076] sd 21:0:0:0: Attached scsi generic sg7 type 0
        [20047.627922] sd 21:0:0:0: [sdg] Attached SCSI removable disk

    Secondly, the drive is not mounted in the FS, but when I run gphoto2 I get the following error:

        $ gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    What command will unmount the drive? For example, in Nautilus I can right-click and select "Safely Remove Device"; after doing that, the /dev/sg7 and /dev/sdg devices are removed. Some things I've tried already are sdparm and sg3_utils, but I am unfamiliar with them, so it's possible I just didn't find the right command.

    Read the article

  • Temporary file storage - script for webserver

    - by Chris
    I'm looking for a script (preferably PHP) which I can use to temporarily exchange files. I had such a script before (it was written in Flash), but, damn, I just can't find it anymore. Here are the features I'm looking for:

    - I have a logon, which allows me to create "upload spaces".
    - For each "upload space" I define how long the space will be available (until the files are deleted) and who can access it, by typing in an email address; that user gets a link to the "online space" with a user ID and password.
    - The other user then clicks on the link and uploads the file, and I (preferably) get an email should there be any file changes.
    - I get an email before the content gets deleted.

    Three additional things: it should be open source, run on Linux (preferably LAMP), and no, I don't wanna use Dropbox or similar :p

    Thanks in advance everybody, Cheers Chris

    Read the article

  • Image storage social network (Host plan)

    - by Samir
    I'm wondering what the best way is to host images on a social network site. Let's say that I expect my social network to reach 500,000 users in two years' time. That would mean that if every user uploaded about 100 images, and every image is 1 MB, I will need: 500,000 * 100 * 1 MB = 50,000,000 MB, which means 50 terabytes. I'm not sure how I can best set up my hosting plan in order to have a solid basis to store my images, and eventually video files as well. Which hosting plan would you recommend I start with, and how can I expand the plan later?
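    As a quick sanity check of that arithmetic (a throwaway sketch; decimal units, and the head-room factor is only an illustrative assumption):

        var users = 500000, imagesPerUser = 100, mbPerImage = 1;
        var totalMB = users * imagesPerUser * mbPerImage; // 50,000,000 MB
        console.log(totalMB / 1e6 + ' TB');               // 50 TB in decimal units
        console.log(totalMB * 1.3 / 1e6 + ' TB');         // with ~30% head room for growth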

    Read the article

  • media storage social network (Host plan)

    - by Samir
    I'm wondering what the best way is to host media for a social network site. Let's say that I expect my social network to reach 500,000 users in two years' time. I'm not sure how I can best set up my hosting plan in order to have a solid basis to store media files. Which hosting plan would you recommend I start with, and how can I expand the plan later?

    Read the article

  • Page allocation failures on iSCSI storage

    - by Dave
    We have a CentOS 6.3 iSCSI server (16 GB RAM) running on an InfiniBand bus (IPoIB). When the load is high I can see multiple errors:

        Sep 3 23:22:20 stor4 kernel: tgtd: page allocation failure. order:2, mode:0x20
        Sep 3 23:22:20 stor4 kernel: Pid: 3637, comm: tgtd Not tainted 2.6.32 #1
        Sep 3 23:22:20 stor4 kernel: Call Trace:
        Sep 3 23:22:20 stor4 kernel: [] ? __alloc_pages_nodemask+0x77f/0x940
        Sep 3 23:22:20 stor4 kernel: [] ? kmem_getpages+0x62/0x170
        Sep 3 23:22:20 stor4 kernel: [] ? fallback_alloc+0x1ba/0x270
        Sep 3 23:22:20 stor4 kernel: [] ? cache_grow+0x2cf/0x320
        Sep 3 23:22:20 stor4 kernel: [] ? ____cache_alloc_node+0x99/0x160
        Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270
        Sep 3 23:22:20 stor4 kernel: [] ? __kmalloc+0x189/0x220
        Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270
        Sep 3 23:22:20 stor4 kernel: [] ? __pskb_pull_tail+0x2aa/0x360
        Sep 3 23:22:20 stor4 kernel: [] ? tcp_init_tso_segs+0x37/0x50
        Sep 3 23:22:20 stor4 kernel: [] ? dev_queue_xmit+0x4bb/0x6f0
        Sep 3 23:22:20 stor4 kernel: [] ? neigh_connected_output+0xbd/0x100
        Sep 3 23:22:20 stor4 kernel: [] ? ip_finish_output+0x237/0x310
        Sep 3 23:22:20 stor4 kernel: [] ? ip_output+0xb8/0xc0
        Sep 3 23:22:20 stor4 kernel: [] ? __ip_local_out+0x9f/0xb0
        Sep 3 23:22:20 stor4 kernel: [] ? ip_local_out+0x25/0x30
        Sep 3 23:22:20 stor4 kernel: [] ? ip_queue_xmit+0x190/0x420
        Sep 3 23:22:20 stor4 kernel: [] ? sock_aio_write+0x167/0x180
        Sep 3 23:22:20 stor4 kernel: [] ? tcp_transmit_skb+0x3fe/0x7b0
        Sep 3 23:22:20 stor4 kernel: [] ? tcp_write_xmit+0x1fb/0xa20
        Sep 3 23:22:20 stor4 kernel: [] ? __tcp_push_pending_frames+0x30/0xe0
        Sep 3 23:22:20 stor4 kernel: [] ? tcp_push_pending_frames+0x33/0x40
        Sep 3 23:22:20 stor4 kernel: [] ? do_tcp_setsockopt+0x3d6/0x480
        Sep 3 23:22:20 stor4 kernel: [] ? tcp_setsockopt+0x2a/0x30
        Sep 3 23:22:20 stor4 kernel: [] ? sock_common_setsockopt+0x14/0x20
        Sep 3 23:22:20 stor4 kernel: [] ? sys_setsockopt+0x7f/0xe0
        Sep 3 23:22:20 stor4 kernel: [] ? system_call_fastpath+0x16/0x1b
        Sep 3 23:22:20 stor4 kernel: Mem-Info:
        Sep 3 23:22:20 stor4 kernel: Node 0 DMA per-cpu:
        Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 0, btch: 1 usd: 0
        Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 0, btch: 1 usd: 0
        Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 0, btch: 1 usd: 0
        Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 0, btch: 1 usd: 0
        Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 per-cpu:
        Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 183
        Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 23
        Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 183
        Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 181
        Sep 3 23:22:20 stor4 kernel: Node 0 Normal per-cpu:
        Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 171
        Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 29
        Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 32
        Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 32
        Sep 3 23:22:20 stor4 kernel: active_anon:1875 inactive_anon:2473 isolated_anon:0
        Sep 3 23:22:20 stor4 kernel: active_file:1243637 inactive_file:2505055 isolated_file:0
        Sep 3 23:22:20 stor4 kernel: unevictable:0 dirty:268338 writeback:0 unstable:0
        Sep 3 23:22:20 stor4 kernel: free:86050 slab_reclaimable:132377 slab_unreclaimable:23744
        Sep 3 23:22:20 stor4 kernel: mapped:1293 shmem:222 pagetables:720 bounce:0
        Sep 3 23:22:20 stor4 kernel: Node 0 DMA free:15732kB min:124kB low:152kB high:184kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15332kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
        Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 2172 16060 16060
        Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 free:107544kB min:18268kB low:22832kB high:27400kB active_anon:468kB inactive_anon:2364kB active_file:566208kB inactive_file:976112kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2224900kB mlocked:0kB dirty:96816kB writeback:0kB mapped:908kB shmem:12kB slab_reclaimable:176940kB slab_unreclaimable:968kB kernel_stack:64kB pagetables:192kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
        Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 13887 13887
        Sep 3 23:22:20 stor4 kernel: Node 0 Normal free:220924kB min:116772kB low:145964kB high:175156kB active_anon:7032kB inactive_anon:7528kB active_file:4408340kB inactive_file:9044108kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:14220800kB mlocked:0kB dirty:976536kB writeback:0kB mapped:4264kB shmem:876kB slab_reclaimable:352568kB slab_unreclaimable:94008kB kernel_stack:2048kB pagetables:2688kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
        Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 0 0
        Sep 3 23:22:20 stor4 kernel: Node 0 DMA: 1*4kB 0*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15732kB
        Sep 3 23:22:20 stor4 kernel: Node 0 DMA32: 16305*4kB 4381*8kB 353*16kB 8*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 107900kB
        Sep 3 23:22:20 stor4 kernel: Node 0 Normal: 14548*4kB 14808*8kB 2420*16kB 31*32kB 5*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 220784kB
        Sep 3 23:22:20 stor4 kernel: 3748822 total pagecache pages
        Sep 3 23:22:20 stor4 kernel: 0 pages in swap cache
        Sep 3 23:22:20 stor4 kernel: Swap cache stats: add 0, delete 0, find 0/0
        Sep 3 23:22:20 stor4 kernel: Free swap = 975864kB
        Sep 3 23:22:20 stor4 kernel: Total swap = 975864kB
        Sep 3 23:22:20 stor4 kernel: 4194303 pages RAM
        Sep 3 23:22:20 stor4 kernel: 126915 pages reserved
        Sep 3 23:22:20 stor4 kernel: 3753534 pages shared
        Sep 3 23:22:20 stor4 kernel: 213500 pages non-shared

    TCP stack and VM config:

        net.core.rmem_max = 83886080
        net.core.wmem_max = 83886080
        net.core.rmem_default = 65536
        net.core.wmem_default = 65536
        net.ipv4.tcp_rmem = 40960 1048560 4194304
        net.ipv4.tcp_wmem = 40960 196608 4194304
        net.ipv4.tcp_mem = 16388608 16388608 16388608
        vm.min_free_kbytes=135168

    Additional tweaks:

        /sbin/blockdev --setra 16384 /dev/sdb
        echo 2048 > /sys/block/sdb/queue/nr_requests

    Where might the problem be? Thank you.

    Read the article

  • Can I get redundancy with a JBOD storage subsystem

    - by Dat Chu
    I have a Promise Technology J610s. This is a JBOD subsystem. Is it possible for me to buy a SAS hardware RAID controller and provide some type of redundancy for these drives? I am unsure whether I will use Linux or Windows yet, so an answer covering both would be highly appreciated. One solution that I thought of: if my J610s can export each drive as a target, my server will simply see 16 drives. The RAID controller can then perform RAID 5 / RAID 6 if I want.

    Read the article

  • Windows Azure virtual machine credentials

    - by Georges
    I have created a Windows Server virtual machine, and this also automatically created a disk (VHD). Then I removed the VM and created a new VM from the same VHD; in this process I wasn't asked to supply any credentials. I am no longer able to log in (with RDP) to the new VM; the credentials that I supplied when creating the first machine don't work anymore. Any ideas why this happens? What credentials should I use? Thanks a lot in advance.

    Read the article

  • linking Office 365 and ADFS on Azure

    - by Gaurav
    I am trying to link my ADFS to Office 365 in order to set up single sign-on for Office 365. I am going over this blog post. At step 5, I am supposed to run PowerShell commands so I can connect the ADFS to Office 365:

        Set-MsolAdfscontext -Computer <AD FS server FQDN>
        Convert-MsolDomainToFederated -DomainName <domain name>

    I am unable to determine the FQDN for the ADFS server. I searched around a lot on the web but was not able to locate the solution. I am just getting started with all this configuration of ADFS and Office 365. Any answers?

    Read the article

  • What are some solutions for cloud storage [closed]

    - by jaja
    I don't want to move my HDDs around much, as that would sooner or later result in one of them being dropped and failing. They are also not portable (there are many) and inconvenient to use with a laptop. So, this is all I need: 1) the ability to access files as if the HDDs were directly connected to the computer, which means I don't have to "transfer" files in order to use them (more like streaming), and ease of access; 2) low cost.

    Read the article

  • Remotely Managing Storage on Hyper-V 2012 Core

    - by Vazgen
    I have a core Hyper-V Server 2012 that I am remotely managing from a Windows 8 client. I can connect in Hyper-V Manager, Server Manager, and MMC. However, I don't understand how I can manage the physical hard drive (for example, deleting vhdx files, creating folders, etc.) from my Windows 8 client. I tried to attach the remote share as follows:

        net use q: \\MyServer\c$

    It said the command completed successfully, but I don't see the drive in my client's Explorer. I can get to it in cmd.exe on the client, but how can I manage it in a GUI?

        explorer q:

    throws an error.

    Read the article

  • Creating bootable Fedora USB with persistent storage

    - by dooffas
    I am attempting to burn the full Fedora 19 x86_64 DVD ISO to a USB drive and have a separate partition on it for a kickstart file and other media that will be installed during the kickstart process. With the Ubuntu Server 12 ISO, you can simply dd the ISO to the USB drive:

        dd if=/path/to/iso of=/dev/sdb

    Once the ISO has been burnt, open GParted and create an ext2 partition in the unallocated space. However, this does not seem to work with the Fedora ISO. When loading the USB drive in GParted I get a warning and an error:

        Warning: The driver descriptor says the physical block size is 2048 bytes, but Linux says it is 512 bytes.
        Error: The partition's data region doesn't occupy the entire partition.

    Ignoring both of these allows GParted to load the USB drive; however, it shows a blank drive with no partition table. Has anyone come across this before? From what I have found, it may have something to do with the fact that Fedora uses isohybrid.

    Read the article

  • What are the best ways to store Graphs in persistent storage

    - by nicoslepicos
    I am wondering what the best ways are to store graphs in persistent storage, for later analysis, search, clustering, etc. I see Neo4j as one option; I am curious whether there are also other graph databases available. Does anyone have any insight into how larger social networks store their graph-based data (or other sites that require the storage of graph-like models, e.g. RDF)? What about options like Cassandra or MySQL?

    Read the article

  • Efficient storage in C#.net App

    - by Tommy
    I'm looking for the fastest, least memory-consuming, stand-alone storage method available for large amounts of data for my C# app. My initial thoughts: SQL: no, not stand-alone. XML in a flat file: no, it takes too long to parse large amounts of data. Other options? Basically, what I'm looking for is a way to load the data when my application loads, keep all the data in my app, and, when the data in my app changes, just update the storage location.

    Read the article

  • android internal phone storage

    - by John
    How can you retrieve your phone's internal storage from an app? I found MemoryInfo, but it seems that returns information on how much memory your currently running tasks are using. I am trying to get my app to retrieve how much internal phone storage is available.

    Read the article

  • Efficient storage in .Net App

    - by Tommy
    I'm looking for the fastest, least memory-consuming, stand-alone storage method available for large amounts of data for my C# app. My initial thoughts: SQL: no, not stand-alone. XML in a flat file: no, it takes too long to parse large amounts of data. Other options? Basically, what I'm looking for is a way to load the data when my application loads, keep all the data in my app, and, when the data in my app changes, just update the storage location.

    Read the article
