Search Results

Search found 9156 results on 367 pages for 'cloud storage'.

Page 294 of 367

  • When compiling programs to run inside a VM, what should -march and -mtune be set to?

    - by Russ
    With a VM being slave to whatever the host machine is providing, what compiler flags should be passed to gcc? I would normally think that -march=native would be what you would use when compiling for a dedicated box, but the level of fine detail that -march=native resolves to, as indicated in this article, makes me extremely wary of using it. So... what should -march and -mtune be set to inside a VM? For a specific example: my case right now is compiling Python (and more) in a Linux guest inside a KVM-based "cloud" host where I have no real control over the host hardware (aside from 'simple' stuff like CPU GHz, CPU count, and available RAM). Currently, cpuinfo tells me I've got an "AMD Opteron(tm) Processor 6176", but I honestly don't know (yet) whether that is reliable, or whether the guest can get moved to different architectures underneath me to meet the host's infrastructure shuffling needs (sounds hairy/unlikely). All I can really guarantee is my OS, which is a 64-bit Linux kernel where uname -m yields x86_64.
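
    One conservative approach (an assumption on my part, not something from the question) is to build for a plain x86-64 baseline, e.g. -march=x86-64 -mtune=generic, and only enable specific extensions after confirming the guest actually exposes them. A minimal sketch of probing that from inside the guest with GCC's built-ins:

        // probe_cpu.cpp - report which ISA extensions the (virtual) CPU exposes,
        // to decide how far beyond a plain x86-64 baseline it is safe to go.
        // Build with: g++ -O2 probe_cpu.cpp -o probe_cpu
        #include <cstdio>

        int main() {
            __builtin_cpu_init();  // initialize GCC's runtime CPU feature detection
            std::printf("sse3   : %s\n", __builtin_cpu_supports("sse3")   ? "yes" : "no");
            std::printf("ssse3  : %s\n", __builtin_cpu_supports("ssse3")  ? "yes" : "no");
            std::printf("sse4.1 : %s\n", __builtin_cpu_supports("sse4.1") ? "yes" : "no");
            std::printf("sse4.2 : %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
            std::printf("avx    : %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
            return 0;
        }

    Note that this only describes the CPU the guest sees right now; if the provider can migrate the guest onto different hardware, the safe choice is still the lowest common baseline.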

    Read the article

  • JSON documents and SQL database tables

    - by Sharmi
    Do JSON documents in RavenDB cost more than SQL Server tables in terms of storage and query costs? Also, for centralized access, which one is better? What are the disadvantages of NoSQL databases like RavenDB, CouchDB, MongoDB, etc.? I get that some of these are open source and support more data types like enums, objects, etc., but otherwise I don't see any big advantage. Currently there is a problem of storing a huge amount of logs from various locations. I am planning to suggest these to my manager, so I just need a clear idea.

    Read the article

  • Microsoft Access to SQL Server - synchronization

    - by David Pfeffer
    I have a client that uses a point-of-sale solution involving an Access database for its back-end storage. I am trying to provide this client with a service that involves, for SLA reasons, the need to copy parts of this Access database into tables in my own database server which runs SQL Server 2008. I need to do this on a periodic basis, probably about 5 times a day. I do have VPN connectivity to the client. Is there an easy programmatic way to do this, or an available tool? I don't want to handcraft what I assume is a relatively common task.

    Read the article

  • iPhone App with Web Service Access

    - by blake
    I have been asked to write a companion website/service for an iPhone app. The app creates images. The author wants these images to be uploaded onto the server, into the user's personal storage area. These images need to be able to be pulled down to the iPhone later for editing. The user will be able to use the website as well to see these images. I have yet to decide (or understand) what the best way of implementing this would be, and with no experience in iPhone development I have no idea what it can actually handle.

    Read the article

  • What's the difference between Paxos and W+R > N in Cassandra?

    - by user1128016
    Dynamo-like databases (e.g. Cassandra) provide the ability to enforce consistency by means of quorums, i.e. the number of synchronously written replicas (W) and the number of replicas to read (R) should be chosen in such a way that W+R > N, where N is the replication factor. On the other hand, Paxos-based systems like ZooKeeper are also used as consistent fault-tolerant storage. What is the difference between these two approaches? Does Paxos provide guarantees that are not provided by the W+R > N scheme?
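
    (For a concrete example of the quorum condition: with a replication factor of N = 3, choosing W = 2 and R = 2 gives W + R = 4 > 3, so every read quorum overlaps every write quorum in at least one replica, and a read is guaranteed to touch at least one copy of the latest acknowledged write.)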

    Read the article

  • How to initialize an std::string using ""?

    - by Mohsin
    I'm facing problems with initializing a std::string variable using "" (i.e. an empty string). It's causing strange behavior in code that was previously working. Is the following statement wrong? std::string operationalReason = ""; When I use the following code everything works fine: std::string operationalReason; operationalReason.clear(); I believe that string literals are stored in a separate memory location that is compiler-dependent. Could the problem I'm seeing actually be indicating a corruption of that storage? If so, it would get hidden by my usage of the clear() function. Thanks.
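
    For reference, both forms are well-defined and produce an identical empty string; a minimal standalone sketch (not tied to the asker's code base):

        #include <cassert>
        #include <string>

        int main() {
            std::string a = "";   // copy-initialization from an empty string literal
            std::string b;        // default construction
            b.clear();            // redundant: b is already empty

            assert(a == b && a.empty() && b.empty());  // both are empty strings
            return 0;
        }

    The literal "" lives in read-only storage and is only copied from, so initializing with it cannot itself corrupt anything; if switching between these two forms changes behavior, the more likely explanation is undefined behavior elsewhere (e.g. a buffer overrun or use-after-free) that the slightly different code path merely exposes or hides.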

    Read the article

  • Optimizing a MySQL statement with lots of count(row) and sum(row+row2)...

    - by Zombies
    I need to use the InnoDB storage engine on a table with about 1 million or so records in it at any given time. It has records being inserted at a very fast rate, which are then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000. My statement is this:

        select url
        from website ws, ping pi
        where ws.idproxy = pi.idproxy
          and pi.entrytime > curdate() - 3
          and contentping+tcpping is not null
        group by url
        having sum(contentping+tcpping)/(count(*)-count(errortype)) < 500
           and count(*) > 3
           and count(errortype)/count(*) < .15
        order by sum(contentping+tcpping)/(count(*)-count(errortype)) asc;

    I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should look into for basic optimization of this query? The result set is only about 200 rows, so I'm not getting killed there.

    Read the article

  • What languages, frameworks, and technologies have you used to implement document searching?

    - by Bill Brasky
    I am at a new company and one of our goals is to implement a document search portal for our team and our clients. I am a bit worried that if we use an external service provider like Salesforce or some other ECM in the cloud there will be a lot of integration work in the future. From a client perspective, these documents will also exist in the same bucket as our structured content (stored in the DB, not a MS Word doc). If you have implemented document searching, what languages, frameworks, and technologies have you used? Do you have any failure stories? I don't have a problem using something out of the box, but I think it is important that we have control over the documents and the API to access them. I would like to use Rails if we go fully custom.

    Read the article

  • Key Value Observation (a la Cocoa) in GWT?

    - by user179997
    Hi all, There are these 2 frameworks out there in the same "cloud application" space as GWT: Sproutcore and Cappuccino. Cappuccino is Cocoa for the web, Sproutcore is Cocoa-like and one very central idea in both is Key Value Observation where the framework itself provides the glue to change all dependencies of an object when it changes, and you only have to declare those dependencies. If that was too poorly expressed please see this presentation: http://www.infoq.com/presentations/subelsky-sproutcore-intro Since the pattern reduces the amount of code you type it reduces the number of bugs. Maybe it's too much to ask but I would like to have that and all the benefits of Eclipse/compiler that come with GWT. Is there support for this in GWT, or a library already developed? Or maybe there is support in some of the component libraries for GWT out there? Thanks

    Read the article

  • SQL Query: Using Cursors

    - by user2953138
    I need some directions for SQL Server and cursors. I have a table named Order:

        OrderID  Item  Amount
        1        A     10
        1        B     1
        2        A     5
        2        C     4
        2        D     21
        3        B     11

    I have a second table named Storage:

        Item  Amount
        A     40
        B     44
        C     20
        D     1

    For every OrderID, I want to check if enough items are available. If not, I want to return an error message. How can this be done with cursors at all? Are nested cursors the solution to this? My main issue is to understand how I can fetch the OrderID as an actual "group" of ID = 1, 2, 3 etc. instead of line by line.

    Read the article

  • Migrating a Core Data Store from iCloud to local

    - by schmok
    I'm currently struggling with Core Data iCloud migration. I want to move a store from an iCloud ubiquity container (.nosync) to a local URL. The problem is that whenever I call something like this:

        NSPersistentStore *newStore =
            [self.persistentStoreCoordinator migratePersistentStore: currentiCloudStore
                                                               toURL: localURL
                                                             options: nil
                                                            withType: NSSQLiteStoreType
                                                               error: &error];

    I get this error:

        -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:](1055):
        CoreData: Ubiquity: Error: A persistent store which has been previously added to a coordinator
        using the iCloud integration options must always be added to the coordinator with the options
        present in the options dictionary. If you wish to use the store without iCloud, migrate the data
        from the iCloud store file to a new store file in local storage.
        file://localhost/Users/sch/Library/Containers/bla/Data/Documents/tmp.sqlite.
        This will be a fatal error in a future release

    Has anyone ever seen this error? Maybe I'm just missing the right migration options?

    Read the article

  • Using a password to generate two distinct hashes without reducing password security

    - by Nevins
    Hi there, I'm in the process of designing a web application that will require the storage of GPG keys in an encrypted format in a database. I'm planning on storing the user's password as a bcrypt hash in the database. What I would like to be able to do is use that bcrypt hash to authenticate the user, then use the combination of the stored bcrypt hash and another hash of the password to encrypt and decrypt the GPG keys. My question is whether I can do this without reducing the security of the password. I was thinking I may be able to use something like an HMAC-SHA256 of a static string, using the password and a salt as the secret key. Is there a better way to do this that I haven't thought of? Thanks
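
    Just to make the idea concrete (the key construction, context string, and salt handling below are assumptions for illustration, not a vetted design), an HMAC-SHA256 keyed with the password and salt could be computed with OpenSSL roughly like this:

        // Minimal sketch: derive a 32-byte secret from password + salt with HMAC-SHA256.
        // Requires OpenSSL's libcrypto (link with -lcrypto).
        #include <openssl/evp.h>
        #include <openssl/hmac.h>
        #include <cstdio>
        #include <cstring>
        #include <string>

        int main() {
            std::string password = "correct horse";         // hypothetical user password
            std::string salt     = "per-user-salt";         // hypothetical per-user salt
            std::string key      = password + salt;         // HMAC key: password || salt
            const char* context  = "gpg-key-encryption-v1"; // the fixed "static string"

            unsigned char out[EVP_MAX_MD_SIZE];
            unsigned int  out_len = 0;
            HMAC(EVP_sha256(),
                 key.data(), static_cast<int>(key.size()),
                 reinterpret_cast<const unsigned char*>(context), std::strlen(context),
                 out, &out_len);

            for (unsigned int i = 0; i < out_len; ++i)
                std::printf("%02x", out[i]);                // 32-byte secret, hex-encoded
            std::printf("\n");
            return 0;
        }

    Whether this is appropriate depends on the threat model; a purpose-built password KDF (PBKDF2, scrypt, etc.) with its own salt and work factor is the more conventional way to turn a password into an encryption key.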

    Read the article

  • C arrays: setting size dynamically?

    - by user336994
    Hello, I am new to C programming. I am trying to set the size of an array using a variable, but I am getting an error: storage size of 'array' isn't constant.

        01 int bound = bound*4;
        02 static GLubyte vertsArray[bound];

    I have noticed that when I replace bound (within the brackets on line 02) with a number, say 20, the program runs with no problems. But I am trying to set the size of the array dynamically... Any ideas why I am getting this error? Thanks much.
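
    Two separate problems are visible in those two lines: bound is read before it is ever given a value, and a static array must have a compile-time constant size. A minimal sketch of sizing the buffer at run time on the heap instead, with placeholder numbers (GLubyte is typedef'd here only so the snippet stands alone; normally it comes from the OpenGL headers):

        #include <cstdlib>

        typedef unsigned char GLubyte;   // normally provided by the OpenGL headers

        int main() {
            int count = 20;              // however many elements are needed, known at run time
            int bound = count * 4;

            // Heap allocation: the size can be any run-time value.
            // (In plain C, drop the cast and include <stdlib.h> instead.)
            GLubyte *vertsArray = (GLubyte *)std::malloc(bound * sizeof(GLubyte));
            if (vertsArray == NULL)
                return 1;                // allocation failed

            // ... fill and use vertsArray ...

            std::free(vertsArray);
            return 0;
        }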

    Read the article

  • InnoDB not supported by webhost. What now?

    - by Peter Perhác
    I was developing a small WAMP web application on my laptop, where I have an instance of mySQL running and I chose InnoDB for my DB engine. After several weeks' development I wanted to make it available to the public and found out the database server provided by my web host does not support InnoDB, only MyISAM. The create-and-populate script generated from the innoDB schema on my laptop, when executed against the live database, can manage to create individual TABLES but then runs into problems creating the VIEWs. Are views not supported in MyISAM? I know FOREIGN KEYs are not. That's very much why I made the choice of InnoDB... What are my chances of making my innoDB schema design work with myISAM? Is there any straightforward way of converting the whole schema from one storage engine to the other? Should I look for another web host that does provide a mysql db that supports innoDB?

    Read the article

  • With the attachment_fu rails plugin, is there any way to delete files uploaded to Amazon S3?

    - by Eric Nguyen
    Let's say I'm using attachment_fu to attach profile pics to user profiles in a system, with Amazon S3 used as the actual file storage. When users upload new profile pics, I'd like to replace the attached file with the new one. I can do this within my database (i.e. the file metadata) easily, but attachment_fu doesn't seem to provide methods for deleting the files from S3. Am I missing something, or am I approaching this the wrong way? Many thanks!

    Read the article

  • SDCC and malloc() - allocating much less memory than is available

    - by Duncan Bayne
    When I compile this code with SDCC 3.1.0 and run it on an Amstrad CPC 464 (under emulation, with WinCPC 0.9.26 running on Wine):

        #include <stdio.h>
        #include <stdlib.h>

        void _test_malloc() {
            long idx = 0;
            while (1) {
                if (malloc(5)) {
                    printf("%ld\r\n", ++idx);
                } else {
                    printf("done");
                    break;
                }
            }
        }

    ... it consistently taps out at 92 malloc()s. I make that 460 bytes, which leads me to a couple of questions:
    1) What is malloc() doing on this system? I was sort of hoping for an order of magnitude more storage, even on a 64kB system.
    2) The behaviour is consistent on 64kB systems and 128kB systems; do I have to perform some sort of magic to access the additional memory, like manual bank switching?

    Read the article

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (on file-creation overhead) and simple commands like ls require extra care due to internal limits of bash: -bash: /bin/ls: Argument list too long. Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program which simply aggregates the output data into a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from an embarrassingly parallel computation before it gets unmanageable?
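
    One low-tech pattern (an assumption about what the cluster allows, not something from the question) is to let every job append to one shared results file while holding an exclusive advisory lock, so two jobs finishing at the same moment cannot interleave their lines. A minimal sketch:

        // append_result.cpp - append one complete result record under an exclusive flock().
        // Each job would call this (or equivalent) once, when it finishes.
        #include <cstdio>
        #include <cstring>
        #include <fcntl.h>
        #include <sys/file.h>
        #include <unistd.h>

        int append_result(const char *path, const char *record) {
            int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
            if (fd < 0) return -1;

            if (flock(fd, LOCK_EX) != 0) {       // block until we hold the lock
                close(fd);
                return -1;
            }
            ssize_t n = write(fd, record, std::strlen(record));  // one record per call
            flock(fd, LOCK_UN);
            close(fd);
            return n < 0 ? -1 : 0;
        }

        int main() {
            // Hypothetical record format: job id followed by the handful of output values.
            return append_result("results.txt", "job-000042: 1.23 4.56 7.89\n") == 0 ? 0 : 1;
        }

    The usual caveat applies: this relies on the shared file system honoring flock() across nodes (NFS can be unreliable here); if that is in doubt, writing, say, one file per thousand jobs into hashed subdirectories keeps the file count manageable without needing any locking.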

    Read the article

  • Breaking the SQL Compact 8K Limit?

    - by David Veeneman
    I am creating a desktop application that stores rich text documents in a SQL Compact database. Documents are converted to a byte array and stored as a binary column, and I am running into SQL Compact's 8K limit on binary field length. Is there a simple way to get around the 8K limit? I can come up with lots of complicated ways to do it, such as parsing into 8K chunks for storage and reassembling on fetch. But before I get into something that complex, I would like to make sure I can't solve the problem more simply, such as by changing the data type. If there is no simple way of getting around the 8K limit, is there a best practice for storing documents greater than 8K? Thanks for your help.

    Read the article

  • Exploring search options for PHP

    - by Joshua
    I have an InnoDB table using numerous foreign keys, but we just want to look up some basic info out of it. I've done some research but am still lost.
    1) How can I tell if my host has Sphinx installed already? I don't see it as an option for the table storage engine (i.e. InnoDB, MyISAM).
    2) Is Zend_Search_Lucene responsive enough for AJAX functionality over millions of records?
    3) Should I mirror my InnoDB table with a MyISAM one? Make every InnoDB transaction end with a write to the MyISAM table, then use 1:1 lookups? How would I do this automagically? This should make MyISAM ACID-compliant and free(er) from corruption, no?
    4) PostgreSQL full-text queries don't even look like SQL to me; I don't have time to learn a new SQL syntax, so I need noob options.
    5) ???
    This is a high-volume site on a decently-equipped VPS. Thanks very much for any ideas.

    Read the article

  • Rails diff model config in dev or prod environment

    - by Denis
    Hi, I have a model which uses Paperclip. In the dev environment I want to store files on the file system; in production I want to store them in my S3 account. How do I configure my model to reflect this difference? Here is my model:

        class Photo < ActiveRecord::Base
          has_attached_file :photo,
            :styles => { :medium => "200x200>", :thumb => "100x100>" },
            :storage => :s3,
            :s3_credentials => "#{Rails.root}/config/s3.yml",
            :path => "/:style/:filename"
        end

    Read the article

  • Getting the Video file Stored on DVR by the IP Camera in BlackBerry

    - by sHaH..
    Hi all, I need your help. I am making an application which will display live video from an IP camera on my screen. The help I need is this: many cameras also save their video onto a DVR. How can I access the DVR and get the video stored on that storage medium? I need some helping material or a guide from you all, since I'm new to BlackBerry. Thanks a bunch in advance; I'm waiting for your response.

    Read the article

  • Speed-up of readonly MyISAM table

    - by Ozzy
    We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except on those occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize read performance on it? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take up a large portion of the server's memory, which I don't want. I hope my question is clear enough; I'm a novice when it comes to DB administration, so any input or suggestions are welcome.

    Read the article

  • How to save the world from your computer?

    - by Francisco Garcia
    Sometimes I miss the "help other people" factor in computer-related careers. Sure, out there I could find many great projects improving society, but that is not common. However, there are little things that we can all do to make this a better place, beyond trying to eradicate annoying stuff such as Visual Basic. You could join a distributed computing network such as World Community Grid to fight cancer, write a charityware application such as Vim, improve an office IT infrastructure to support telecommuting and reduce CO2 emissions, or use an ebook reader to save paper... What else would you do? Which projects do you think can have an impact?

    Read the article

  • Embedded C++: any tips to avoid a local that's only used to return a value on the stack?

    - by lisarc
    I have a local that's only used for the purpose of checking the result of another function and passing it on if it meets certain criteria. Most of the time, those criteria will never be met. Is there any way I can avoid this "extra" local? I only have about 1MB of storage for my binary, and I have several thousand function calls that follow this pattern. I know it's a minor thing, but if there's a better pattern I'd love to know!

        SomeDataType myclass::myFunction()
        {
            SomeDataType result; // do I really need this local???

            // I need to check the result and pass it on if it meets a certain condition
            result = doSomething();
            if (!result)
            {
                return result;
            }

            // do other things here ...

            // normal result of processing
            return SomeDataType(whatever);
        }

    Read the article

  • STL member variable initialization issue with the Windows API

    - by Django
    I am creating a Windows app that uses a vector of strings as a member variable. For some reason, it compiles, but when it tries to access any of the vector's members it crashes. The error is 0xC0000005: Access violation reading location 0xcdcdcdd9, in a member function of the vector class. This is where it breaks, in the capacity() function:

        size_type capacity() const
        {   // return current length of allocated storage
            return (this->_Myend - this->_Myfirst);
        }

    I am using Visual Studio 2010. Thank you. Django
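
    For context on that address: 0xcdcdcdcd is the fill pattern the MSVC debug heap writes into memory that has been allocated but never initialized, so a read at 0xcdcdcdd9 usually means the vector is being reached through an object pointer that was never set, not that the vector itself is broken. In Win32 code a common way this happens is pulling the object pointer out of the window before it has been stored there. A minimal sketch of the usual safe pattern (the class and member names are hypothetical, not taken from the question):

        #include <windows.h>
        #include <string>
        #include <vector>

        class MainWindow {
        public:
            std::vector<std::wstring> items;   // the kind of member that ends up read through garbage

            static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
                if (msg == WM_NCCREATE) {
                    // Stash the object pointer passed as the last argument of CreateWindowEx.
                    CREATESTRUCT* cs = reinterpret_cast<CREATESTRUCT*>(lp);
                    SetWindowLongPtr(hwnd, GWLP_USERDATA,
                                     reinterpret_cast<LONG_PTR>(cs->lpCreateParams));
                }
                MainWindow* self = reinterpret_cast<MainWindow*>(
                    GetWindowLongPtr(hwnd, GWLP_USERDATA));
                if (self == NULL)              // messages that arrive before WM_NCCREATE
                    return DefWindowProc(hwnd, msg, wp, lp);

                // Only from here on is it safe to touch members such as self->items.
                return DefWindowProc(hwnd, msg, wp, lp);
            }
        };

    If the app does not use this pattern at all, the same symptom can come from calling a member function on an object whose constructor never ran (e.g. memory obtained with malloc or reinterpret_cast from a raw buffer) or from an uninitialized member pointer.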

    Read the article
