Search Results

Search found 13808 results on 553 pages for 'remote storage'.

  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a backend CMS system that will power our entire extranet and intranet as one package. The question I have been trying to answer is which is better: storing images in the database (SQL Server 2005), so we get integrity, a single replication plan, and so on, or storing them on the file system? One issue is that we have multiple load-balanced servers that must have the same data at all times. As of now, SQL replication takes care of that, but file replication seems a little tougher. Another concern is that we would like multiple resolutions of the same image; we are not sure whether creating and storing each version on the file system would be best, or dynamically creating the requested resolution on demand (see the sketch after this list). Our concerns are with the following:

    - Data integrity
    - Data replication
    - Multiple resolutions
    - Speed of database vs file system
    - Overhead of database vs file system
    - Data management and backup

    Does anyone have a similar situation, or any input on what would be recommended? Thanks in advance for the help!
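
    For the multiple-resolutions concern, one common middle ground is to keep only the original on disk and to generate, then cache, each resolution on first request. Below is a minimal sketch of that idea in Python, assuming the Pillow imaging library; the folder layout and naming scheme are hypothetical placeholders, not part of the question:

        import os
        from PIL import Image  # assumes the Pillow imaging library is installed

        ORIGINALS = "uploads/originals"  # hypothetical layout
        CACHE = "uploads/cache"

        def get_resized(image_id, width, height):
            """Return a cached resized copy, creating it on first request."""
            cached = os.path.join(CACHE, "%s_%dx%d.jpg" % (image_id, width, height))
            if not os.path.exists(cached):
                os.makedirs(CACHE, exist_ok=True)
                with Image.open(os.path.join(ORIGINALS, image_id + ".jpg")) as img:
                    img.thumbnail((width, height))  # resize in place, keeping aspect ratio
                    img.save(cached, "JPEG")
            return cached

    Because each size is produced exactly once per server and then cached, the on-demand approach avoids storing every variant up front while keeping repeat requests cheap.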

  • Efficiently storing a list of prime numbers

    - by eSKay
    This article says: Every prime number can be expressed as 30k±1, 30k±7, 30k±11, or 30k±13 for some k. That means we can use eight bits per thirty numbers to store all the primes; a million primes can be compressed to 33,334 bytes.

    "That means we can use eight bits per thirty numbers to store all the primes": this "eight bits per thirty numbers" would be for k, correct? But each k value will not necessarily take up just one bit. Shouldn't it be eight k values instead?

    "A million primes can be compressed to 33,334 bytes": I am not sure how this is true. We need to indicate two things: the VALUE of k (which can be arbitrarily large), and the STATE, one of the eight states (-13, -11, -7, -1, 1, 7, 11, 13). I am not following how 33,334 bytes was arrived at, but I can say one thing: as the prime numbers become larger and larger, we will need more space to store the values of k. How, then, can we fix it at 33,334 bytes?
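
    For reference, in the scheme the article describes, k is never stored at all: it is implicit as the index of the byte, so one byte covers one block of thirty consecutive integers and primes up to N take about N/30 bytes. Note that 1,000,000 / 30 rounds up to 33,334, which suggests the article means the primes below one million rather than the first million primes. A small illustrative sketch in Python (2, 3, and 5 do not fit the 30k±r pattern and would be handled separately):

        # The eight residues mod 30 that a prime greater than 5 can have.
        RESIDUES = (1, 7, 11, 13, 17, 19, 23, 29)
        BIT = {r: i for i, r in enumerate(RESIDUES)}

        def encode(primes, limit):
            """One byte per block of 30; bit i set means 30*k + RESIDUES[i] is prime."""
            table = bytearray(limit // 30 + 1)
            for p in primes:
                if p > 5:
                    table[p // 30] |= 1 << BIT[p % 30]
            return table

        def decode(table):
            for k, byte in enumerate(table):
                for i, r in enumerate(RESIDUES):
                    if byte >> i & 1:
                        yield 30 * k + r

    For example, list(decode(encode([7, 11, 31, 59], 60))) yields the same four primes back from a three-byte table.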

  • dynamic array pointer to binary file

    - by Yijinsei
    Hi guys, I know this might be rather basic, but I have been trying to figure out how a dynamic array such as

        double* data = new double[size];

    can be used as the source of data written to a binary file:

        std::ofstream fs("data.bin", std::ios::binary);
        fs.write(reinterpret_cast<const char*>(data), size * sizeof(double));

    When I finish writing, I attempt to read the file back through

        double* data = new double[size];
        std::ifstream fs("data.bin", std::ios::binary);
        fs.read(reinterpret_cast<char*>(data), size * sizeof(double));

    However, I seem to encounter a run-time error when reading the data. Do you have any advice on how I should write a dynamic array, passed as a pointer from other methods, so that it can be stored in a binary file?

  • Quick backup system for large projects

    - by kamziro
    I've always backed up all my source code into .zip files, put them on my USB drive, and uploaded them to a server somewhere else in the world. However, I only do this once every two weeks, because my projects are a little big. Right now my project directories (I have a few of them) contain a hierarchy of C++ files, interspersed with .o files that would make backing up take a while if they were not ignored. What tools exist that will let me back things up efficiently and conveniently, let me specify which file types to back up (lots of .png, .jpg, and some text types in there), and let me specify which directories to ignore (especially the build dirs)? Or are there any ingenious methods out there that people use?
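
    Dedicated tools aside (version control for the sources, plus something like rsync for the rest, is the usual recommendation), the filtering itself is easy to script. A minimal sketch in Python; the extension and directory sets are placeholders to adapt:

        import os
        import zipfile

        INCLUDE = {".cpp", ".h", ".png", ".jpg", ".txt"}  # extensions worth keeping
        IGNORE_DIRS = {"build", ".git"}                   # directories to skip entirely

        def backup(src_root, archive_path):
            """Zip only the wanted file types, pruning ignored directories as we walk."""
            with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
                for dirpath, dirnames, filenames in os.walk(src_root):
                    dirnames[:] = [d for d in dirnames if d not in IGNORE_DIRS]
                    for name in filenames:
                        if os.path.splitext(name)[1].lower() in INCLUDE:
                            full = os.path.join(dirpath, name)
                            zf.write(full, os.path.relpath(full, src_root))

        backup("myproject", "myproject-backup.zip")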

  • Thread-local memory: using std::string's internal buffer as C-style scratch memory

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields, to obfuscate the session cookies (similar to Kerberos tokens). Protocol Buffers' API communicates with std::strings and has a buffer caching mechanism; I exploit that caching mechanism for successive calls in the same thread by placing it in thread-local memory. The OpenSSL HMAC and EVP CTXs are also placed in the same thread-local structure (see this question for some detail on why I use thread-local memory and the massive speedup it enables, even with a single thread). The generation and deserialization ("my algorithms") of these cookie strings use intermediary void*s and std::strings, and since Protocol Buffers has an internal memory-retention mechanism, I want the same characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the rdbuf (streambuf? stringbuf?) side of the std::string object. I would presumably need to grow it to the largest size ever encountered during the execution of "my algorithms". Thoughts? My question, I guess, is: "is the internal buffer of a string re-usable, and if so, how?" Edit: see the comments on Vlad's answer, please.

  • How to debug GWT using Ant

    - by Phuong Nguyen de ManCity fan
    I know the job would be simpler if I used the Google Plugin for Eclipse. However, in my situation I have heavily adopted Maven, so the plugin cannot suit me (in fact, it gave me a whole week of headaches). Instead, I rely on an Ant script that I learned from http://code.google.com/webtoolkit/doc/latest/tutorial/appengine.html. The document was very clear; I followed the article and successfully invoked DevMode using ant devmode. However, the document didn't tell me how to debug GWT (the way the Google Plugin for Eclipse can). Basically, I want to add some parameter to an Ant task that exposes a debug port (something like com.google.gwt.dev.DevMode at localhost:58807) so that I can connect my Eclipse to it. How can I do that?

  • Calculating usage of localStorage space

    - by WmasterJ
    I am creating an app using the Bespin editor and HTML5's localStorage. It stores all files locally and helps with grammar, using JSLint and some other parsers for CSS and HTML to aid the user. I want to calculate how much of the localStorage limit has been used, and how much there actually is. Is this possible today? If not, I was thinking of simply counting the bits that are stored, but then again I'm not sure what else there is that I can't measure myself.

  • Restarting service from a client computer without rights

    - by Jason
    I have already created a program to restart a SQL database, but it only works if the client has the rights. This is going to be done over a local network from a client computer, for when they can't get a person who has the password on the phone. Any thoughts? I'm currently using ServiceController to start and stop the database. When I don't have the rights, I get an access-denied error, or "This operation might require other privileges." I'm not sure impersonation would work, since I don't have the user id and password.

  • unable to download a file from rtmp server

    - by user309815
    Hi team, I want to download an audio file from a Red5 server using RTMP:

        string strUri = "rtmp://XXX/oflaDemo/" + Session["streamName"].ToString();
        string strUploadto = Server.MapPath("") + "\\Audio\\" + "myaudio.flv";
        WebClient webClient = new WebClient();
        // webClient.DownloadFile("rtmp://begoniaprojects.com/oflaDemo/" + Session["streamName"].ToString(),
        //                        Page.MapPath("") + "\\Audio\\" + "myaudio.flv");
        webClient.DownloadFile(strUri, strUploadto);

    But I am getting a "URI prefix is not recognized" message while downloading. Please advise.

  • Need to remotely create an ODBC connection to a SQL server

    - by kris
    I have an Access 2007 database with a table that is linked to a SQL server. I need to roll this version of the database out to approximately 10 people in different states. In order to do that, each of them needs an ODBC connection to the SQL server set up on their machine. I am looking for a way to do this remotely, either through VBA in the database itself or perhaps a batch file linked to their shortcut... I am open to ideas.
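
    For what it's worth, a user DSN is ultimately just a handful of registry values, so anything that can write to the registry (VBA, a batch file calling reg add, or a script) can create one. The sketch below shows the usual registry layout, in Python for illustration; the DSN name, server, and database are hypothetical, and the driver path assumes the classic "SQL Server" driver that ships with Windows, so treat it as a starting point rather than tested code:

        import winreg  # Windows-only, standard library

        DSN = "MySqlServerDsn"           # hypothetical DSN name
        DRIVER = "SQL Server"            # must match an installed ODBC driver name
        SERVER = "myserver.example.com"  # hypothetical server

        def create_user_dsn():
            """Register a per-user ODBC DSN by writing the keys the ODBC manager reads."""
            base = r"SOFTWARE\ODBC\ODBC.INI"
            with winreg.CreateKey(winreg.HKEY_CURRENT_USER, base + "\\" + DSN) as key:
                winreg.SetValueEx(key, "Driver", 0, winreg.REG_SZ,
                                  r"C:\Windows\System32\sqlsrv32.dll")
                winreg.SetValueEx(key, "Server", 0, winreg.REG_SZ, SERVER)
                winreg.SetValueEx(key, "Database", 0, winreg.REG_SZ, "MyDatabase")
                winreg.SetValueEx(key, "Trusted_Connection", 0, winreg.REG_SZ, "Yes")
            # The DSN must also be listed under "ODBC Data Sources" to be visible.
            with winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                                  base + r"\ODBC Data Sources") as key:
                winreg.SetValueEx(key, DSN, 0, winreg.REG_SZ, DRIVER)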

  • 3d engine with telnet access

    - by zaf
    Does anyone know of an open source 3D engine that can be operated via telnet? What I'm looking for is scripting via a socket connection, to allow for world creation and/or camera movement. Does anybody know of one that has this built in, or where it would be very, very easy to add as a plugin or script? The platform is not crucial.
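
    Even if no engine ships this built in, the mechanism being asked about is just a line-based command protocol on a TCP socket, which any telnet client speaks natively. A minimal Python sketch of such a listener follows; the engine call is a hypothetical stub standing in for whatever API the chosen engine exposes:

        import socketserver

        class CommandHandler(socketserver.StreamRequestHandler):
            """Accepts newline-terminated commands such as 'camera 0 5 -10' or 'quit'."""
            def handle(self):
                for raw in self.rfile:
                    parts = raw.decode("ascii", "ignore").split()
                    if not parts or parts[0] == "quit":
                        break
                    if parts[0] == "camera" and len(parts) == 4:
                        x, y, z = map(float, parts[1:])
                        # engine.set_camera(x, y, z)  # hypothetical engine call
                        self.wfile.write(b"OK\n")
                    else:
                        self.wfile.write(b"ERR unknown command\n")

        if __name__ == "__main__":
            with socketserver.TCPServer(("0.0.0.0", 4000), CommandHandler) as srv:
                srv.serve_forever()

    Connecting with telnet localhost 4000 and typing camera 0 5 -10 would exercise the command path.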

  • SSH into Ubuntu Linux on a box without a static IP address

    - by Steven Xu
    Basically, how do I do it? I'd like to connect to my home computer from work, but my internet is routed through my apartment building's network, so I don't have the static IP address I'm accustomed to having. How do I go about accessing my home computer through SSH (I'll be using PuTTY at work, if it matters) if my home computer doesn't have a static IP address?

  • Auto-resolving a hostname in WCF Metadata Publishing

    - by Mike C
    I am running a self-hosted WCF service. In the service configuration, I use localhost in the BaseAddresses that I hook my endpoints to. When I connect to an endpoint using the WCF Test Client, I have no problem reaching it and getting the metadata using the machine's name. The problem is that the client generated from the metadata uses localhost in the endpoint URLs it connects to; I'm assuming this is because localhost is the endpoint URL published in the metadata. As a result, any call to a method on the service fails, since localhost on the calling machine isn't running the service. What I would like to figure out is whether the service metadata can publish the proper URL depending on the client that is calling it. For example, if I request the metadata from a machine on the same network as the server, the endpoint should be net.tcp://MYSERVER:1234/MyEndpoint. If I request it from a machine outside the network, the URL should be net.tcp://MYSERVER.mydomain.com:1234/MyEndpoint. And obviously, if the client is on the same machine, then the URL could be net.tcp://localhost:1234/MyEndpoint. Is this just a flaw in the default IMetadataExchange contract? Is there some reason the metadata must publish the information in a non-contextual way? Is there another way I should configure my BaseAddresses to get the functionality I want? Thanks, Mike

  • How to safely store encryption key in a .NET assembly

    - by Alex
    In order to prevent somebody from grabbing my data easily, I cache data from my service as encrypted files (copy protection, basically). However, in order to do this, I must store the encryption key within the .NET assembly so it is able to encrypt and decrypt these files. Being aware of tools like Red Gate's .NET Reflector, which can pull my key right out, I get the feeling that this is not a very safe way of doing it... Are there any best practices for this?

  • Storing uploaded content on a website

    - by Matt
    For the past 5 years, my typical solution for storing uploaded files (images, videos, documents, etc.) has been to throw everything into an "upload" folder and give each file a unique name. I'm looking to refine my methods for storing uploaded content, and I'm wondering what other methods are used or preferred. I've considered storing each item in its own folder (the folder name being the id in the db) so I can preserve the uploaded file name. I've also considered uploading all media to a locked folder and then using a file handler: you pass the id of the file you want to download in the querystring, and it reads the file and sends the bytes to the user. This is handy for checking access and for restricting bandwidth per user.
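
    A minimal sketch of the per-id-folder variant in Python, with hypothetical paths: each upload gets a folder named after its database id, so the original filename survives untouched and name collisions are impossible by construction.

        import os
        import shutil

        UPLOAD_ROOT = "uploads"  # hypothetical locked-down folder

        def store_upload(file_id, src_path):
            """Copy an upload into its own id-named folder, keeping the original name."""
            folder = os.path.join(UPLOAD_ROOT, str(file_id))
            os.makedirs(folder, exist_ok=True)
            dest = os.path.join(folder, os.path.basename(src_path))
            shutil.copyfile(src_path, dest)
            return dest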

  • Prevent Rails link_to_remote multiple submits with JavaScript

    - by Chris
    In a Rails project I need to keep a link_to_remote from getting double-clicked. It looks like :before and :after are my only choices; they get prepended and appended to the onclick Ajax call, respectively. But if I try something like :before => "self.stopObserving()", the Ajax is never run. If I try it in :after, the Ajax is run but the link never stops observing. The solutions I've seen rely on creating a variable and blocking the whole form, but there are multiple link_to_remote rows on this page, and it is valid to click more than one of them at a time, just not the same one twice. One variable per row, declared outside of link_to_remote, seems very kludgey... Instead of using Prototype, I originally tried plain JavaScript for this proof of concept, but it fails too:

        <a href="#" onclick="self.onclick = function(){alert('foo');};">click</a>

    just puts up an alert when clicked; the lambda here does nothing? This next one is more like the desired goal and should only alert the first time, but instead it alerts every time:

        <a href="#" onclick="alert('bar'); self.onclick = function(){return false;};">click</a>

    All ideas appreciated!

  • Need to copy remotely hosted file via Shell Command

    - by pnm123
    Hello, there is a file hosted on a remote server that does not support shell access. I bought a new server that does support shell access, so now I want to copy the file from the non-shell server to the new server via a shell command, using PuTTY. The file URL is like http://www.domain.com/file.gzip, and it is username/password protected. To be more specific, I want to copy a backup of a home directory from cPanel to my new server via a shell command. I did this a few months ago, but I don't remember how, and I have failed to google it. Thank you, Prasad

  • Compressing large text data before storing into db?

    - by Steel Plume
    Hello, I have an application that retrieves many large log files from systems on a LAN. Currently I put all the log files into PostgreSQL; the table has a column of type TEXT, and I don't plan any searches on this text column, because a separate external process retrieves all the files nightly and scans them for sensitive patterns. So the column value could just as well be a BLOB or a CLOB. Now my question: the database already has its own compression system, but could I improve on that compression manually, as with common compressor utilities? Above all, what if I manually pre-compress the large file and then store it as binary in the table: is that pointless, given that the database system provides its own internal compression?
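
    Measuring is cheap, and gives the number to weigh against the database's built-in compression (TOAST, in PostgreSQL's case). A few illustrative lines of Python, assuming a sample log at a hypothetical path; note that once a value is stored pre-compressed, compressing it again server-side gains essentially nothing, because compressed data looks random to a second compressor:

        import zlib

        with open("sample.log", "rb") as f:  # hypothetical sample log file
            text = f.read()

        packed = zlib.compress(text, 9)      # 9 = maximum compression
        print(len(text), "->", len(packed), "bytes")

        # Round-trip check before trusting the compressed bytes to the database.
        assert zlib.decompress(packed) == text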

  • How to store millions of pictures about 2k each in size

    - by LuftMensch
    We're creating an ASP.NET MVC site that will need to store over a million pictures, each around 2k-5k in size. From previous research, it looks like a file server is probably better than a db (feel free to comment otherwise). Is there anything special to consider when storing this many files? Are there any issues with Windows being able to find a photo quickly if there are so many files in one folder? Does a segmented directory structure need to be created, for example dividing them up by filename? It would be nice if the solution scaled to at least 10 million pictures for potential future expansion needs.
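
    Segmenting is the standard approach: NTFS itself copes with huge directories, but enumeration and many tools degrade badly, so a fixed fan-out keeps every folder small. A common sketch of the scheme in Python, with a hypothetical root path; two hex levels give 65,536 buckets, so even ten million files average roughly 150 per folder:

        import hashlib
        import os

        STORE_ROOT = "photostore"  # hypothetical root

        def path_for(filename):
            """Fan files out across 256*256 subdirectories using a hash prefix."""
            digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
            return os.path.join(STORE_ROOT, digest[:2], digest[2:4], filename)

        p = path_for("IMG_0042.jpg")
        os.makedirs(os.path.dirname(p), exist_ok=True)
        print(p)  # e.g. photostore/3f/a2/IMG_0042.jpg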

  • Reading what is in the vsdiagnostics blob in Azure (1.7)

    - by tomasmcguinness
    I've enabled Diagnostics in one of my worker roles and published it to Azure. A new blob container called "vsdiagnostics" was created, containing two binary files. I'm assuming that these files contain the output of my Trace statements, but I'm unable to open them, as I have no idea what format they are in. I've not found anything on www.windowsazure.com about it, and most of the tools it recommends are very outdated. I have installed Cerebrata's Azure Diagnostics Manager, but it isn't able to load the trace logs. If anyone could point me in the right direction, I'd be grateful!
