Search Results

Search found 9156 results on 367 pages for 'cloud storage'.


  • Virtualized data centre – Part four: The design

    - by marc dekeyser
    Welcome back to the fourth post in this series! Today we will have a look at what Microsoft recommends as a “private cloud design” and what I will make of it. Whilst my own solution is based on the reference architecture, it is quite different indeed! An important thing to know is that, whilst I am using the private cloud as a reference, I am skipping most of the steps in designing a private cloud. If that is why you are here, please read the links at the end of the article and skim through my own content. A private cloud is much more process driven than just building a virtual infrastructure…

    The architecture of it all… So imagine for a minute that you have unlimited funds to build this lab of yours… You’d want redundancy on all levels and separation of each network where possible! Unfortunately we don’t have that luxury and, as you saw me hinting at in the previous article, our own design will be more limited but still quite capable!

    Networking: From the networking perspective I will not have a fully redundant network; after all, this is but a lab environment! Thanks to Server 2012 I will be able to use bonding on my NICs and use LACP to improve the performance on that part.

    Storage: As I mentioned in the previous article, a Synology DS1218+ will be used for iSCSI provisioning. This device has 2 NICs on board, which can be bonded into one 2 Gbps interface, giving me decent throughput and making the disks the most limiting factor in the storage design.

    Domain controllers and extra infrastructure: Server 2012 completely supports running domain controllers virtualized and has no need for a reachable DC when booting… That being said, I need a remote access machine to power on the hosts (I have no need for them running 24/7) and possibly a System Center VMM 2012 box (although Server 2012 is not supported until SP1 :( ). I am undecided whether to install those boxes separately or as virtual machines… Which amounts to… something like this pretty picture! [design diagram]

    Sources:
    Microsoft Private Cloud Solutions Repository (en-US): http://social.technet.microsoft.com/wiki/contents/articles/12131.microsoft-private-cloud-solutions-repository-en-us.aspx
    Reference Architecture: http://social.technet.microsoft.com/wiki/contents/articles/3819.reference-architecture-for-private-cloud.aspx
    Private Cloud Reference Model: http://social.technet.microsoft.com/wiki/contents/articles/4399.private-cloud-reference-model.aspx

    Read the article

  • How to safely store encryption key in a .NET assembly

    - by Alex
    In order to prevent somebody from grabbing my data easily, I cache data from my service as encrypted files (copy protection, basically). However, in order to do this, I must store the encryption key within the .NET assembly so it is able to encrypt and decrypt these files. Being aware of tools like Red Gate's .NET Reflector, which can pull my key right out, I get the feeling that this is not a very safe way of doing it... Are there any best practices for doing this?

    Read the article

  • Storing uploaded content on a website

    - by Matt
    For the past 5 years, my typical solution for storing uploaded files (images, videos, documents, etc.) was to throw everything into an "upload" folder and give each file a unique name. I'm looking to refine my methods for storing uploaded content and I'm wondering what other methods are used or preferred. I've considered storing each item in its own folder (the folder name being the Id in the db) so I can preserve the uploaded file name. I've also considered uploading all media to a locked folder and then using a file handler, to which you pass the Id of the file you want to download in the querystring; it then reads the file and sends the bytes to the user. This is handy for checking access and restricting bandwidth for users. (A sketch of the handler idea follows.)
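
    A minimal sketch of that file-handler idea as a Java servlet. lookupPath() and userMayAccess() are hypothetical stand-ins for your own database lookup and access check, not part of any real API:

    // FileServlet.java: a sketch of the locked-folder handler idea.
    // lookupPath() and userMayAccess() are hypothetical stand-ins for the
    // real database lookup and access check.
    import java.io.*;
    import javax.servlet.http.*;

    public class FileServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            long id = Long.parseLong(req.getParameter("id"));
            if (!userMayAccess(req, id)) {          // check access before sending bytes
                resp.sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            File file = lookupPath(id);             // resolves the Id inside the locked folder
            resp.setContentType("application/octet-stream");
            resp.setHeader("Content-Disposition",
                    "attachment; filename=\"" + file.getName() + "\"");
            try (InputStream in = new FileInputStream(file);
                 OutputStream out = resp.getOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);           // stream; never load the whole file
                }
            }
        }

        private boolean userMayAccess(HttpServletRequest req, long id) { return true; }
        private File lookupPath(long id) { return new File("/uploads/locked/" + id); }
    }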

    Read the article

  • Compressing large text data before storing into db?

    - by Steel Plume
    Hello, I have an application which retrieves many large log files from a system on the LAN. Currently I put all the log files into PostgreSQL; the table has a column of type TEXT and I don't plan any searches on this text column, because a separate external process retrieves all the files nightly and scans them for sensitive patterns. So the column value could also be a BLOB or a CLOB. My question is the following: the database already has its own compression system, but could I improve on that compression manually, as with common compressor utilities? And above all, WHAT IF I pre-compress the large file myself and then store it as binary in the table: is that useless, given that the database already provides internal compression?
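
    For experimenting with the manual route, here is a minimal sketch (in Java) that gzips the text before storing it in a bytea column. Whether this beats PostgreSQL's built-in TOAST compression is something to benchmark; note that data you pre-compress will not compress further on the server side.

    // GzipForDb.java: sketch that gzips log text before storing it as bytes.
    // Whether this beats the database's own compression is something to measure.
    import java.io.*;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class GzipForDb {
        public static byte[] compress(String text) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(text.getBytes(StandardCharsets.UTF_8));
            }
            return bos.toByteArray();               // store this in a bytea column
        }

        public static String decompress(byte[] blob) throws IOException {
            try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(blob));
                 ByteArrayOutputStream bos = new ByteArrayOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = gz.read(buf)) != -1) bos.write(buf, 0, n);
                return new String(bos.toByteArray(), StandardCharsets.UTF_8);
            }
        }
    }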

    Read the article

  • What are good ways to guarantee business continuity with a SaaS product?

    - by tommyvdz
    For my bachelor thesis I am researching how SaaS providers can arrange some sort of business continuity guarantee. You probably know the source code escrow arrangements for 'shrink-wrapped' software: they give customers access to the source code and all applicable documentation whenever the software supplier gets into (financial) trouble. This clearly does not work for SaaS, because customers have no use for just the source code, and they probably cannot afford to be unable to log in to their CRM system for a couple of weeks because the SaaS provider went bankrupt. I am currently researching different methods to solve this problem. Do you know of good, practical solutions to this continuity problem, or companies that already offer a good solution? Thanks!

    Read the article

  • How to store millions of pictures about 2k each in size

    - by LuftMensch
    We're creating an ASP.NET MVC site that will need to store 1 million+ pictures, all around 2k-5k in size. From previous research, it looks like a file server is probably better than a db (feel free to comment otherwise). Is there anything special to consider when storing this many files? Are there any issues with Windows being able to find a photo quickly if there are so many files in one folder? Does a segmented directory structure need to be created, for example dividing them up by filename? It would be nice if the solution scaled to at least 10 million pictures for potential future expansion. (A sketch of one segmenting scheme follows.)
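
    One common answer to the too-many-files-in-one-folder worry is to derive a segmented path from the picture's database Id. A minimal sketch; the two-level, 256-way fan-out is an assumption you would tune:

    // ImagePath.java: sketch of a segmented directory layout derived from the
    // picture's database Id. Two levels of 256 buckets each (65,536 folders)
    // keeps any single folder to roughly 150 files even at 10 million pictures.
    import java.io.File;

    public class ImagePath {
        public static File pathFor(long id, File root) {
            int bucket1 = (int) (id & 0xFF);          // low byte of the Id
            int bucket2 = (int) ((id >> 8) & 0xFF);   // next byte up
            return new File(root, String.format("%02x/%02x/%d.jpg",
                    bucket1, bucket2, id));
        }

        public static void main(String[] args) {
            // prints e.g. D:\images\87\d6\1234567.jpg on Windows
            System.out.println(pathFor(1234567L, new File("D:/images")));
        }
    }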

    Read the article

  • cheap way to scale a rails application

    - by VP
    I have an application that is becoming big but, until now, is not giving me good revenue. That means short money to re-invest in it. In this scenario, I found a way to make a "cheap distributed Rails" deployment. I've got 4 VPSs, all of them on the same physical server. I added a load-balancing server running HAProxy on one dedicated VPS; that is where I pointed the virtual IP address my domain name is associated with. Behind this HAProxy I have two more VPSs running my Rails app, Passenger and memcached. Both app servers talk to the same database server, my 4th VPS. So for $44/month I mounted a distributed environment. It won't be my final choice, but now, while the budget is short, is this a good way to deploy a Rails application? Any pros or cons? Is it worth my $44/month?

    Read the article

  • Reading what is in the vsdiagnostics blob in Azure (1.7)

    - by tomasmcguinness
    I've enabled diagnostics in one of my worker roles and published it to Azure. A new blob container called "vsdiagnostics" was created, and contained within it are two binary files. I'm assuming these files contain the output of my Trace statements, but I'm unable to open them as I have no idea what format they are in. I've not found anything on www.windowsazure.com about it, and most of the tools recommended there are very outdated. I have installed Cerebrata's Azure Diagnostics Manager, but it isn't able to load the trace logs. If anyone could point me in the right direction I'd be grateful!

    Read the article

  • Are bit operations quick?

    - by flashnik
    I'm dealing with a problem which needs to work with a lot of data. Currently its values are represented as unsigned int. I know that the real values do not exceed some limit, say 1000. That means I can use unsigned short to store them. One benefit is that it uses less space. Do I have to pay for that by losing performance? Another assumption: I decided to store the data as short, but all calling functions use int, so I need to convert between these datatypes when storing/extracting values. Will the performance loss be dramatic? A third assumption: in my great wish to economize memory, I decided to use not short but just 10 bits packed into an array of unsigned int. What will happen in this case compared with the previous ones? (A sketch of the packing follows.)
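
    For reference, a minimal sketch of the 10-bit packing idea, written here in Java (so int stands in for unsigned int; the shifts and masks are the same). Every get/set costs a few extra shift-and-mask operations on top of the array access, which is exactly the trade-off being asked about:

    // TenBitArray.java: sketch of packing 10-bit values (0..1023) into an
    // int[] backing store. Values that straddle a 32-bit word boundary are
    // split across two words.
    public class TenBitArray {
        private static final int BITS = 10;
        private static final int MASK = (1 << BITS) - 1;   // 0x3FF
        private final int[] words;

        public TenBitArray(int capacity) {
            words = new int[(capacity * BITS + 31) / 32];
        }

        public void set(int index, int value) {
            int bitPos = index * BITS;
            int word = bitPos >>> 5;        // bitPos / 32
            int offset = bitPos & 31;       // bitPos % 32
            words[word] = (words[word] & ~(MASK << offset))
                        | ((value & MASK) << offset);
            if (offset > 32 - BITS) {       // value straddles two words
                int spill = 32 - offset;    // bits already stored in the first word
                words[word + 1] = (words[word + 1] & ~(MASK >>> spill))
                                | ((value & MASK) >>> spill);
            }
        }

        public int get(int index) {
            int bitPos = index * BITS;
            int word = bitPos >>> 5;
            int offset = bitPos & 31;
            int value = words[word] >>> offset;
            if (offset > 32 - BITS) {       // pick up the spilled high bits
                value |= words[word + 1] << (32 - offset);
            }
            return value & MASK;
        }
    }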

    Read the article

  • How does jQuery .data() work?

    - by kazanaki
    My JavaScript knowledge is pretty limited. Instead of asking several JavaScript questions I got the "message" from Stack Overflow and started using jQuery right away in order to save myself some time. However, several times I do not understand the "magic" behind jQuery, and I would love to learn the details. I want to use .data() in my application. The examples are very helpful. I do not understand, however, WHERE these values are stored. I inspect the webpage with Firebug and, as soon as .data() saves an object to a DOM element, I do not see any change in Firebug (in either the HTML or DOM tabs). I tried to look at the jQuery source, but it is very advanced for my JavaScript knowledge and I lost myself. So the question is: where do the values stored by jQuery.data() actually go? Can I inspect/locate/list/debug them using a tool?

    Read the article

  • PVM terminates after Adding Host

    - by Tyug
    On Ubuntu 9.10, using PVM 3.4.5-12 (the PVM package you get with apt-get), the program terminates after adding a host:

    laptop> pvm
    pvm> add bowtie-slave
    add bowtie-slave
    terminated
    laptop>

    Current configuration: only $PVM_RSH = bin/usr/ssh. I can ssh into the slave perfectly fine without a password and run commands on it. Any ideas? Thanks in advance! Here are the sample logs:

    Laptop log
    [t80040000] 02/11 10:23:32 laptop (127.0.1.1:xxxxx) LINUX 3.4.5
    [t80040000] 02/11 10:23:32 ready Thu Feb 11 10:23:32 2010
    [t80040000] 02/11 10:23:32 netoutput() sendto: errno=22
    [t80040000] 02/11 10:23:32 em=0x2c24f0
    [t80040000] 02/11 10:23:32 [49/à][6e/à][76/à][61/à][6c/à][69/à][64/à][20/à][61/à][72/à]
    [t80040000] 02/11 10:23:32 netoutput() sendto: Invalid argument
    [t80040000] 02/11 10:23:32 pvmbailout(0)

    bowtie-log
    [t80080000] 02/11 10:23:25 bowtie-slave (xxx.x.x.xxx:xxxxx) LINUX64 3.4.5
    [t80080000] 02/11 10:23:25 ready Thu Feb 11 10:23:25 2010
    [t80080000] 02/11 10:28:26 work() run = STARTUP, timed out waiting for master
    [t80080000] 02/11 10:28:26 pvmbailout(0)

    Read the article

  • Don't want to lose data on Android after uninstalling

    - by soclose
    Hi, I am making a trial application. I'd like to store the IMEI and other info on Android permanently, and I don't want to lose it after the app is uninstalled. I tested with shared preferences, but those are deleted on un-installation:

    SharedPreferences settings = getSharedPreferences(PREFS_NAME, 0);
    SharedPreferences.Editor editor = settings.edit();
    editor.putBoolean("silentMode", true);
    // Commit the edits!
    editor.commit();

    Let me know where I should store the data instead. (A possible approach is sketched below.)
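
    One approach from this era of Android, sketched under stated assumptions: files written to the shared external storage root (unlike the app's private data, and unlike getExternalFilesDir(), which is removed with the app) were not wiped on uninstall on pre-KitKat devices. The folder name here is hypothetical, WRITE_EXTERNAL_STORAGE is required in the manifest, and the file is world-readable and gone if the SD card is cleared:

    // TrialMarker.java: a sketch, assuming pre-KitKat external-storage semantics.
    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;
    import android.os.Environment;

    public class TrialMarker {
        private static final String DIR = ".myapp";   // hypothetical hidden folder

        public static void saveImei(String imei) throws IOException {
            // A file here is not removed when the app is uninstalled (pre-KitKat).
            File dir = new File(Environment.getExternalStorageDirectory(), DIR);
            dir.mkdirs();
            FileWriter w = new FileWriter(new File(dir, "imei.txt"));
            try {
                w.write(imei);
            } finally {
                w.close();
            }
        }
    }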

    Read the article

  • How to Store Cookies in Ruby?

    - by viatropos
    I am programmatically accessing authenticated content in my CDN on Google App Engine, and it returns a cookie that I'm supposed to store: {"set-cookie"=>"ACSID=cookie-hash; expires=Mon, 12-Apr-2010 01:56:06 GMT; path=/"}. What do I do with that? This is my first time dealing with cookies. I can put it in the header of the next request, but what's the recommended way to store it? I'm testing this with irb in the console, and when I exit and try again, the cookie is gone. How do I save it for a few days/weeks? I'm using pure Ruby without Rails or anything. Thanks so much.

    Read the article

  • Write Java objects to file

    - by Mark Szymanski
    Is it possible to write objects in Java to a binary file? The objects I want to write would be two arrays of String objects. The reason I want to do this is to save persistent data. If there is some easier way to do this, let me know. Thanks in advance! (A serialization sketch follows.)
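
    Java's built-in serialization covers exactly this case; String arrays are Serializable out of the box. A minimal sketch:

    // SaveArrays.java: write two String[] arrays to a binary file with
    // ObjectOutputStream and read them back with ObjectInputStream.
    import java.io.*;

    public class SaveArrays {
        public static void main(String[] args) throws IOException, ClassNotFoundException {
            String[] names = {"alpha", "beta"};
            String[] values = {"1", "2"};

            try (ObjectOutputStream out =
                     new ObjectOutputStream(new FileOutputStream("data.bin"))) {
                out.writeObject(names);
                out.writeObject(values);
            }

            try (ObjectInputStream in =
                     new ObjectInputStream(new FileInputStream("data.bin"))) {
                String[] loadedNames = (String[]) in.readObject();
                String[] loadedValues = (String[]) in.readObject();
                System.out.println(loadedNames[0] + " = " + loadedValues[0]);
            }
        }
    }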

    Read the article

  • Azure scalability over XML File

    - by dayscott
    What is the best-practice solution for programmatically changing the XML file where the number of instances is defined? I know that this is somehow possible with csmanage.exe for the Windows Azure API. Also, how can I measure which worker role VMs are actually working? I asked this question on the MSDN Community forums as well: http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/02ae7321-11df-45a7-95d1-bfea402c5db1

    Read the article

  • Thread Local Memory for Scratch Memory.

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate the session cookies, similar to Kerberos tokens. Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit the caching mechanism, for successive calls in the same thread, by placing it in thread-local memory; additionally, the OpenSSL HMAC and EVP CTXs are also placed in the same thread-local memory structure (see this question for some detail on why I use thread-local memory and the massive speedup it enables even with a single thread). The generation and deserialization, "my algorithms", of these cookie strings use intermediary void*s and std::strings, and since Protocol Buffers has an internal memory retention mechanism I want these characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf of the std::string object. I would presumably need to grow it to the lowest common size ever encountered during the execution of "my algorithms". Thoughts? (The general pattern is sketched below.)
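
    The question is C++-specific, but the underlying pattern, a lazily created per-thread scratch buffer that grows to the largest size seen and is reused across calls, can be shown in Java terms (a sketch of the pattern only, not of the OpenSSL or Protocol Buffers plumbing):

    // ScratchBuffers.java: sketch of the thread-local scratch-memory pattern.
    // Each thread lazily gets its own reusable buffer, grown to the largest
    // size ever requested, so hot paths avoid repeated allocation and locks.
    public final class ScratchBuffers {
        private static final ThreadLocal<byte[]> SCRATCH =
            ThreadLocal.withInitial(() -> new byte[256]);

        /** Returns this thread's scratch buffer, grown if needed. Never shared. */
        public static byte[] get(int minSize) {
            byte[] buf = SCRATCH.get();
            if (buf.length < minSize) {
                buf = new byte[Math.max(minSize, buf.length * 2)];
                SCRATCH.set(buf);
            }
            return buf;
        }
    }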

    Read the article

  • How much data can we store in a file in Salesforce?

    - by Ritesh Mehandiratta
    I searched a little for the file size limits in Salesforce and found this link: http://help.salesforce.com/HTViewHelpDoc?id=collab_files_size_limits.htm&language=en_US. It shows that a file can be up to 2 GB. I have to store IDs in a text file and want to make it scale to nearly 1 million records; the file size would be about 15 MB. Can anyone please point me to a good tutorial on how to create such files and use them in Apex for retrieving and updating data?

    Read the article

  • Is there any benefit to my rather quirky character sizing convention?

    - by Paul Alan Taylor
    I love things that are a power of 2. I celebrated my 32nd birthday knowing it was the last time in 32 years I'd be able to claim that my age was a power of 2. I'm obsessed. It's like being some Z-list Batman villain, except without the colourful adventures and a face full of batarangs. I ensure that all my enum values are powers of 2, if only for future bitwise operations, and I'm reasonably assured that there is some purpose (even if latent) for doing it. Where I'm less sure is in how I define the lengths of database fields. Again, I can't help it. Everything ends up being a power of 2.

    CREATE TABLE Person (
        PersonID int IDENTITY PRIMARY KEY,
        Firstname varchar(64),
        Surname varchar(128)
    )

    Can any SQL super-boffins who know about the internals of how stuff is stored and retrieved tell me whether there is any benefit to my inexplicable obsession? Is it more efficient to size character fields this way? Can anyone pop in with an "actually, what you're doing works because....."? I suspect I'm just getting crazier in my older age, but it'd be nice to know that there is some method to my madness.

    Read the article

  • Show a DB as a directory (like SharePoint does)

    - by Zyd
    Hi, my team and I are programming a sort of document manager, and the idea is to store the documents entirely in a DB. Is there a protocol or extension that allows us to show a "virtual directory" of files that don't really exist (only in the DB)? How does SharePoint do this? I understand that SharePoint uses WebDAV, but that implies the files do exist physically somewhere. We intend to develop this application on .NET 4.0 and deploy it on IIS. Any suggestions? Thanks in advance

    Read the article

  • Best way to collect and store data daily?

    - by mktb
    I have a bunch of statistics: # of users, # of families, ratio of users to families, etc. I'd like to store these daily so I can view the data historically. However, I'm looking for the most effective way to store it. Should I run a cron job that writes one row per day to the database (DATE: today, USERS: 123, FAMILIES: 456, RATIO: 7.89)? Or should I write multiple rows like (DATE: today, DATATYPE: users, VALUE: 123)? Or is there another option that is more efficient or more effective? Thanks!

    Read the article

  • Are batch mutations atomic in Cassandra?

    - by user317459
    The Cassandra API supports batch mutations: batch_mutate(keyspace, mutation_map, consistency_level) executes the specified mutations on the keyspace. mutation_map is a map: the outer map maps the row key to the inner map, which maps the column family name to a list of Mutations; it can be read as map<key : string, map<column_family : string, list<Mutation>>>. A Mutation specifies either columns to insert or columns to delete. See Mutation and Deletion above for more details. Are all mutations that are executed in a batch executed atomically? So if one of the mutations fails, do the others fail too?

    Read the article
