Search Results

Search found 5528 results on 222 pages for 'offsite storage'.

Page 168/222 | < Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >

  • NoSQL and meteorological data

    - by christian studer
    So there's this new cool thing, these NoSQL databases. And so there's my data: rows of rows of rows of meteorological data: values representing certain measurements at a certain station (identified by a WMO number, not coordinates), at a certain time. Not every station measures every parameter, not every parameter is measured all the time. I store this data (30 years' worth of hourly values, resulting in ~1 billion values) currently in MySQL. The continuous growth and the foreseeable addition of even more data give me a little headache. Reading about the document-based NoSQL systems, which seem to scale rather easily, I was wondering if NoSQL is a viable data storage concept for meteorological data too. Do you have any experience with this?
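
    A minimal sketch of how such observations might be laid out in a document store, assuming MongoDB via pymongo purely for illustration (the station number, parameter names and index are invented):

        from datetime import datetime
        from pymongo import MongoClient, ASCENDING

        client = MongoClient()                    # local MongoDB instance
        obs = client.weather.observations

        # One document per station per hour; only the parameters actually
        # measured that hour are present, so sparse stations stay small.
        obs.insert_one({
            "wmo": 66700,                         # WMO station number (made up)
            "ts": datetime(2010, 5, 1, 12, 0),    # observation hour (UTC)
            "values": {"temp_c": 11.4, "rh_pct": 72, "pressure_hpa": 1013.2},
        })

        # Compound index so range queries by station and time stay cheap.
        obs.create_index([("wmo", ASCENDING), ("ts", ASCENDING)])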

    Read the article

  • Problems compiling an external library on linux...

    - by Kris
    So I am trying to compile the libssh2 library on Linux, but when I try to compile the example it comes up with a lot of errors, and even though I include the header file it asks for, it still asks for it. Here is the command and the resulting error messages: ~/ gcc -include /home/Roosevelt/libssh2-1.2.5/src/libssh2_config.h -o lolbaise /home/Roosevelt/libssh2-1.2.5/example/scp.c /home/Roosevelt/libssh2-1.2.5/example/scp.c:7:28: error: libssh2_config.h: No such file or directory /home/Roosevelt/libssh2-1.2.5/example/scp.c: In function 'main': /home/Roosevelt/libssh2-1.2.5/example/scp.c:39: error: storage size of 'sin' isn't known /home/Roosevelt/libssh2-1.2.5/example/scp.c:81: error: 'AF_INET' undeclared (first use in this function) /home/Roosevelt/libssh2-1.2.5/example/scp.c:81: error: (Each undeclared identifier is reported only once /home/Roosevelt/libssh2-1.2.5/example/scp.c:81: error: for each function it appears in.) /home/Roosevelt/libssh2-1.2.5/example/scp.c:81: error: 'SOCK_STREAM' undeclared (first use in this function) /home/Roosevelt/libssh2-1.2.5/example/scp.c:87: error: invalid application of 'sizeof' to incomplete type 'struct sockaddr_in'
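
    The first error suggests the #include "libssh2_config.h" inside scp.c still cannot be resolved from an include path, and the later socket errors follow from that (the config header gates the system includes). One thing to try, guessed from the paths in the question rather than a verified fix, is pointing -I at the directories holding libssh2_config.h and libssh2.h and linking the built library:

        gcc -I/home/Roosevelt/libssh2-1.2.5/src -I/home/Roosevelt/libssh2-1.2.5/include \
            -o scp_example /home/Roosevelt/libssh2-1.2.5/example/scp.c -lssh2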

    Read the article

  • Optimize loading an XAP file with an asp.net website

    - by theoneawaited
    I've been developing a game using Silverlight 4 and silversprite (http://silversprite.codeplex.com/) This game is HEAVILY content dependent, using a lot of audio and images. My content folder is around 90 MB worth of stuff. And because of that, my XAP file is around 60 MB, and takes 5 minutes to download from the website before any user can start playing. I am using Visual Web Developer 2010 to create my site and load the XAP. Is there a way I can take content out of my XAP and put it in my ASP.net site project? Or perhaps upload my content files to the site's storage? This would make my XAP file much quicker to download. Anyone have suggestions? Thanks!
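
    One common pattern, sketched from memory rather than tested against this project, is to keep only code in the XAP, host the media as ordinary site content, and fetch it at runtime; the file name and Image element below are placeholders:

        // Inside the Silverlight app: download an image hosted next to the XAP
        // instead of embedding it as XAP content.
        var wc = new WebClient();
        wc.OpenReadCompleted += (s, e) =>
        {
            var bmp = new BitmapImage();
            bmp.SetSource(e.Result);        // e.Result is the downloaded stream
            heroSprite.Source = bmp;        // heroSprite: an Image element in the XAML
        };
        wc.OpenReadAsync(new Uri("Content/hero.png", UriKind.Relative));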

    Read the article

  • Iterative / Additive MD5

    - by Andrew Robinson
    I need to generate a checksum over a dictionary, keys and values. Is there any simple way to accomplish this in an iterative way? foreach(var item in dic.Keys) checksum += checksum(dic[item]) + checksum(item); In this case, keys and values could be converted to strings, concatenated, and then a single checksum applied over the result, but is there a better way? Ideally MD5, but other options could work. I am using this to validate data that is passed over a couple of storage methods. The checksum is then encrypted along with some other information (using AES), so I am not horribly worried about an ideal, unbreakable checksum.
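
    One way to fold the pairs in a single pass, without first building one giant concatenated string, is .NET's incremental TransformBlock/TransformFinalBlock API; a rough sketch (the key ordering and the separator are assumptions you would need to fix so both sides hash identically):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Text;

        static class DictionaryChecksum
        {
            public static byte[] Compute(IDictionary<string, string> dic)
            {
                using (var md5 = MD5.Create())
                {
                    // Sort keys so the checksum does not depend on enumeration order.
                    foreach (var key in dic.Keys.OrderBy(k => k, StringComparer.Ordinal))
                    {
                        var chunk = Encoding.UTF8.GetBytes(key + "\0" + dic[key] + "\0");
                        md5.TransformBlock(chunk, 0, chunk.Length, null, 0);
                    }
                    md5.TransformFinalBlock(new byte[0], 0, 0);
                    return md5.Hash;   // 16-byte digest over all key/value pairs
                }
            }
        }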

    Read the article

  • Combine Hibernate class with @Bindable for SwingBuilder without Griffon?

    - by Misha Koshelev
    Dear All: I have implemented a back-end for my application in Groovy/Gradle, and am now trying to implement a GUI. I am using Hibernate for my data storage (with HSQLDB) per http://groovy.codehaus.org/Using+Hibernate+with+Groovy (with Jasypt for encryption) and it is working quite well. I was wondering if there are any good tips for using @Bindable with, e.g., an @Entity class such as @Entity class Book { @Id @GeneratedValue(strategy = GenerationType.AUTO) public Long id @OneToMany(cascade=CascadeType.ALL) public Set<Author> authors public String title String toString() { "$title by ${authors.name.join(', ')}" } } or if I am: (i) asking for Griffon (ii) completely on the wrong track? Thank you! Misha

    Read the article

  • How to implement a good system for login/out into a webapp

    - by Brandon Wang
    I am one of the developers at PassPad, a secure password generator and username storage system. We're still working on it, but I have a few questions about the best way to implement a secure login/logout system. Right now, the plan is to have the login system save a cookie with the username and a session key, and that is all that serves as authentication; the server verifies that the two match. Upon login/logout a new key is created. This is a security-related webapp, and while we don't actually store any information that might make the user queasy, its security focus makes it necessary for us to at least appear secure in a way the user would be happy with. Is there a better way to implement a login/logout system in PHP? Preferably it won't take too much coding time or server resources. Is there anything else I need to implement, like brute-force protection, etc.? How would I go about that?
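
    On the PHP side, the usual building blocks are native sessions plus session id regeneration on every privilege change, with the cookie carrying nothing but the session id; a bare-bones sketch (verify_credentials(), get_user_id() and the failed-attempt helpers are hypothetical stand-ins for your own user-table logic):

        <?php
        // login.php (sketch): authenticate, then swap the session id so a
        // pre-login id can never be reused (session fixation).
        session_start();

        if (too_many_failures($_POST['username'])) {
            exit('Account temporarily locked');        // crude brute-force protection
        }

        if (verify_credentials($_POST['username'], $_POST['password'])) {
            session_regenerate_id(true);               // new id, old one invalidated
            $_SESSION['user_id']  = get_user_id($_POST['username']);
            $_SESSION['login_at'] = time();            // allows periodic re-keying
        } else {
            record_failure($_POST['username']);
        }

        // logout.php (sketch): wipe server-side state and rotate the cookie's id.
        session_start();
        $_SESSION = array();
        session_regenerate_id(true);
        session_destroy();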

    Read the article

  • Is the MySQL FOSS License Exception transitive - does it remove the GPL restrictions for downstream

    - by Eric
    I'm looking at building a MySQL client plugin for a proprietary product, which would violate the GPL as discussed in the FAQ at http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins However, according to the MySQL FOSS License Exception ("FLE"), discussed at http://www.mysql.com/about/legal/licensing/foss-exception/, you can license an open-source product built with the client with many alternatives. The oursql library (https://launchpad.net/oursql) is BSD-licensed. Is this a valid way around the GPL? By my reading of the FLE, the only clause that refers to downstream uses of derived works is section 2.e: All works that are aggregated with the Program or the Derivative Work on a medium or volume of storage are not derivative works of the Program, Derivative Work or FOSS Application, and must reasonably be considered independent and separate works. This is the case for our product: it is not a derivative work of oursql, and in fact accesses it only via a plugin-driven interface. So is this a valid loophole?

    Read the article

  • What first game did you program, and did it make you a better developer?

    - by thenonhacker
    What first game did you program? Name your game, the OS and language, and even a website URL to get your game. Old DOS games and Flash games with ActionScript are allowed. Game kits are allowed, too. ...and did it make you a better developer? Programming games can be addictive, and it will bring out the best in us as we create our first game. What lessons did you learn from most? Algorithms and/or AI? Graphics? User interface? File formats and data storage? Project and time management? Can you say that because you practiced programming by creating this game, you became more immersed in the programming language you used, and that it helped you become a better developer?

    Read the article

  • Does the iPhone compress images saved within my app's documents directory?

    - by Jane Sales
    We are caching images downloaded from our server. We write them to our local storage like this: NSArray *paths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0] ; NSString* folder = [[documentsDirectory stringByAppendingPathComponent:@"flook.images"] retain]; NSString* fileName = [folder stringByAppendingFormat:@"/%@", aBaseFilename]; BOOL writeSuccess = [anImageData writeToFile:fileName atomically:NO]; The downloaded images are always the expected size, around 45-85KB. Later, we read images from our cache like this: NSArray *paths = NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0] ; NSString* folder = [[documentsDirectory stringByAppendingPathComponent:@"flook.images"] retain]; NSString* fileName = [folder stringByAppendingFormat:@"/%@", aBaseFilename]; image = [UIImage imageWithContentsOfFile:fileName]; Occasionally, the images returned from this cache read are much smaller because they are much more compressed - around 5-10KB. Has the OS done this to us?

    Read the article

  • Limit the model data fields serialized by Web API based on the return type Interface

    - by Stevo3000
    We're updating our architecture to use a single object model for desktop, web and mobile that can be used in the MVVM pattern. I would like to be able to limit the data fields that are serialized through Web API by using interfaces on the controllers. This is required because the model objects for mobile are stored in HTML5 local storage so don't carry optional data while a thin desktop client would be able to store (and work with) more data. To achieve this a model will implement the different interfaces that define which data fields should be serialized and there will be a controller specific to the interface. The problem is that the Web API always serializes every field in the model even if it is not part of the interface being returned. How can we only serialize fields in the returned interface?
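
    As far as I know, Web API hands the serializer the runtime type, so a declared interface return type does not trim anything by itself. A common workaround, sketched here with invented type names, is to project onto a small DTO shaped like the interface (or mark heavy fields with [JsonIgnore]) so the extra fields never reach the serializer:

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Http;

        public interface IMobileCustomer
        {
            int Id { get; }
            string Name { get; }
        }

        public class MobileCustomerDto : IMobileCustomer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class MobileCustomersController : ApiController
        {
            private readonly ICustomerRepository repository;   // assumed repository abstraction

            public MobileCustomersController(ICustomerRepository repository)
            {
                this.repository = repository;
            }

            public IEnumerable<MobileCustomerDto> Get()
            {
                // Copy only the fields the mobile contract needs; the full model
                // never reaches the serializer, so nothing extra goes over the wire.
                return repository.GetAll()
                                 .Select(c => new MobileCustomerDto { Id = c.Id, Name = c.Name });
            }
        }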

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also good when linking it to error log and performance counter databases to compare usage with errors, etc. Having implemented this for just one system, after only 2-3 weeks we already have a 5 GB database with around 10 million records. This is making any queries to this database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • Consolidate loan, purchase & sale tables into one transaction table.

    - by Frank Computer
    INFORMIX-SE with ISQL 7.3: I have separate tables for loan, purchase & sale transactions. Each table's rows are joined to their respective customer rows by: customer.id [serial] = loan.foreign_id [integer]; = purchase.foreign_id [integer]; = sale.foreign_id [integer]; I would like to consolidate the three tables into one table called "transaction", where a column transaction.trx_type char(1) {L=Loan, P=Purchase, S=Sale} identifies the transaction type. Each transaction will be assigned a unique transaction number [serial]. Is this a good idea, or is it better to keep them in separate tables? Storage space is not a concern; I think it would be easier, programming- and user-wise, to have all types of transactions under one table whenever possible. This implies denormalization.
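
    A rough shape for the consolidated table, sketched in generic Informix-style SQL with guessed column names, just to make the idea concrete:

        create table transaction
        (
            trx_num     serial not null,     -- unique transaction number
            trx_type    char(1) not null,    -- L=Loan, P=Purchase, S=Sale
            foreign_id  integer not null,    -- joins to customer.id
            trx_date    date,
            amount      decimal(12,2)
        );

        create index trx_cust_ix on transaction (foreign_id, trx_type);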

    Read the article

  • Where should I catch WM_HIBERNATE and WM_CLOSE in Windows Mobile/WinCE?

    - by afriza
    I have read about Windows Mobile's X button behaviour, WM_HIBERNATE, and WM_CLOSE in low-memory situations. MSDN on WM_HIBERNATE: This message is sent to an application when system resources are running low. An application should attempt to release as many resources as possible when sent this message by unloading dialog boxes, destroying windows, or freeing up as much local storage as possible without changing the internal state. MSDN on WM_CLOSE: This message is sent as a signal that a window or an application should terminate. Where should I catch these messages? In the main message pump? In every window? Or only in some windows? If I am using MFC, where should I catch them?
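
    As I understand it, both messages arrive at a window rather than at the message pump, so the natural place is the top-level window's window procedure (or the equivalent MFC message-map handler on the main frame). A bare Win32-style sketch for Windows CE; WM_HIBERNATE comes from the CE headers and the two cleanup helpers are hypothetical:

        #include <windows.h>

        void ReleaseCachedBitmaps();    // hypothetical app-specific cleanup
        void DestroyHiddenDialogs();    // hypothetical app-specific cleanup

        LRESULT CALLBACK MainWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
        {
            switch (msg)
            {
            case WM_HIBERNATE:              // low-memory hint: shed whatever you can
                ReleaseCachedBitmaps();
                DestroyHiddenDialogs();
                return 0;

            case WM_CLOSE:                  // the shell really wants the app to exit
                DestroyWindow(hwnd);
                return 0;

            case WM_DESTROY:
                PostQuitMessage(0);
                return 0;
            }
            return DefWindowProc(hwnd, msg, wParam, lParam);
        }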

    Read the article

  • Populate properties decorated with an attribute

    - by PUT
    Are there any frameworks that assist me with this: (thinking that perhaps StructureMap can help me) Whenever I create a new instance of "MyClass" or any other class that inherits from IMyInterface I want all properties decorated with [MyPropertyAttribute] to be populated with values from a database or some other data storage using the property Name in the attribute. public class MyClass : IMyInterface { [MyPropertyAttribute("foo")] public string Foo { get; set; } } [AttributeUsage(AttributeTargets.Property)] public sealed class MyPropertyAttribute : System.Attribute { public string Name { get; private set; } public MyPropertyAttribute(string name) { Name = name; } }
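
    Whether or not a container such as StructureMap ends up doing the wiring, the core of it is a small reflection pass over the attributed properties; a sketch (the valueSource delegate and the LoadFromDatabase call stand in for whatever database or storage lookup you actually use):

        using System;
        using System.Reflection;

        public static class PropertyPopulator
        {
            // valueSource maps the attribute's Name ("foo") to a stored value.
            public static void Populate(object target, Func<string, object> valueSource)
            {
                foreach (PropertyInfo prop in target.GetType().GetProperties())
                {
                    var attr = (MyPropertyAttribute)Attribute.GetCustomAttribute(
                                   prop, typeof(MyPropertyAttribute));
                    if (attr != null && prop.CanWrite)
                        prop.SetValue(target, valueSource(attr.Name), null);
                }
            }
        }

        // usage: var obj = new MyClass();
        //        PropertyPopulator.Populate(obj, name => LoadFromDatabase(name));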

    Read the article

  • Stream classes ... design, pattern for creating views over streams

    - by ToxicAvenger
    A question regarding the design of stream classes: I need a pattern for creating independent views over a single stream instance (in my case for reading). A view would be a consecutive part of the stream. The problem I have with the stream classes is that the state (reading or writing) is coupled with the underlying data/storage. So if I need to partition a stream into different segments (whether the segments overlap or not doesn't matter), I cannot easily create views over the stream; the views would store a start and end position. Reading from a view, which would translate to reading from the underlying stream adjusted by the start/end positions, would change the state of the underlying stream instance. What I could do is take a read on a view instance, adjust the Position of the stream, and read the chunks I need, but I cannot do that concurrently. Why is it designed this way, and what kind of pattern could I implement to create independent views over a single stream instance that would allow reading/writing independently (and concurrently)?
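
    One pattern that fits is a thin read-only "view" stream that remembers its own position and only touches the shared stream inside a lock, so several views can be read from different threads; a trimmed C# sketch (writing and error handling left out):

        using System;
        using System.IO;

        public sealed class StreamView : Stream
        {
            private readonly Stream inner;            // shared, must be seekable
            private readonly long start, length;
            private long pos;                         // view-local position

            public StreamView(Stream inner, long start, long length)
            { this.inner = inner; this.start = start; this.length = length; }

            public override int Read(byte[] buffer, int offset, int count)
            {
                lock (inner)                          // one view repositions at a time
                {
                    long remaining = length - pos;
                    if (remaining <= 0) return 0;
                    inner.Position = start + pos;     // seek to this view's spot
                    int read = inner.Read(buffer, offset, (int)Math.Min(count, remaining));
                    pos += read;
                    return read;
                }
            }

            public override long Seek(long offset, SeekOrigin origin)
            {
                if (origin == SeekOrigin.Begin) pos = offset;
                else if (origin == SeekOrigin.Current) pos += offset;
                else pos = length + offset;
                return pos;
            }

            public override bool CanRead => true;
            public override bool CanSeek => true;
            public override bool CanWrite => false;
            public override long Length => length;
            public override long Position { get => pos; set => pos = value; }
            public override void Flush() { }
            public override void SetLength(long value) => throw new NotSupportedException();
            public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
        }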

    Read the article

  • Flex / ZendAMF / PHP app, corrupted data with some linux clients problem

    - by Laurent Jégou
    Hello, I'm building a survey web application, using Flex for the front-end (nice forms) and a MySQL database for the storage, linked by PHP with the help of ZendAMF. I largely borrowed from this nice tutorial by Alan Gruskoff: http://digitalshowcase.biz/wordpress/?page_id=26 (the only tutorial I've found that works with the latest version of Flex). The app seems to work nicely in my tests, except on certain Linux boxes: the data is somehow corrupted. There is no error message, no glitch, but the responses from the forms are not what the user selected. I tried to reproduce the error on a freshly installed Ubuntu VM, but it works fine. I've asked friends for some tests, and several Linux users showed the same problem, on Ubuntu and SUSE machines, all freshly updated and functional. The application was intended to be the survey tool for my doctoral thesis, so I'm quite desperate here, and before I dump it and start anew with PHP only, I'm asking here in case someone can help, thanks :-) Please excuse my English, by the way. LJ.

    Read the article

  • "conveyor belt" cache architecture

    - by Andrew Matthews
    I'm producing an application with a few peculiar internal communication characteristics that make the usual suspects for data storage and transport (queues and RDBMSs) ill-fitted. I'm wondering whether there is a product out there that matches the following characteristics: all data put into it is persistent; all reads are delivered out of memory; data is universally available; data lives where it is most needed; data is versioned (nice to have); updates are transactional (I'd like ACID characteristics); data is potentially replicated, but always in sync; works on Windows; is based on or has bindings for .NET; is really fast; is really robust; is redundant; is scalable. I'm looking at things like Microsoft codename "Velocity", but I am not sure whether it fits all of the above characteristics. Likewise, Memcached is not a perfect fit either. The current version of this app opts for an RDBMS with a signaling system for inter-system sync, but latency is too high and versioning of the DB is a pain. I need all the robustness, but with none of the trade-offs.

    Read the article

  • Linq to sql Incorrect varchar length

    - by scott
    I have a table with a nullable varchar(50) column in it. When I am updating the value through linq to sql and trace the call in profiler it is defining the parameter as varchar(36). This is obviously causing some minor issues when we are trying to insert data that is between 37 and 50 characters long. I have tried removing the table and re-adding it to the design surface but the same thing happens. I also tried removing that property and adding it manually, same issue. When I look at the designer.cs code it shows the attribute properly: [Column(Storage="_Name", DbType="VarChar(50)")] I am out of ideas, anybody seen this before? Every other column is correct.

    Read the article

  • MySQL storing negative and positive decimals

    - by Shishant
    Hello, I want to be able to store values like -11.99 and +11.99 in a MySQL DB. I am thinking of decimals instead of varchar, but reading the MySQL site I found out that it's incompatible with older versions of MySQL: As a result of the change from string to numeric format for DECIMAL storage, DECIMAL columns no longer store a leading + or - character or leading 0 digits. Before MySQL 5.0.3, if you inserted +0003.1 into a DECIMAL(5,1) column, it was stored as +0003.1. As of MySQL 5.0.3, it is stored as 3.1. For negative numbers, a literal - character is no longer stored. Applications that rely on the older behavior must be modified to account for this change. So what should the data type be, if I have to give up varchar and make it compatible with older versions too?
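
    For what it's worth, a DECIMAL column still holds negative values on every version; the 5.0.3 change quoted above is about no longer keeping the literal '+' sign and leading zeros of the inserted text, since the sign is part of the numeric value. A small sketch of what that might look like (table and column names invented):

        CREATE TABLE adjustments (
            id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            amount DECIMAL(4,2) NOT NULL          -- covers -99.99 through 99.99
        );

        INSERT INTO adjustments (amount) VALUES (-11.99), (11.99);

        -- Re-add the '+' purely for display when selecting:
        SELECT CONCAT(IF(amount >= 0, '+', ''), amount) AS signed_amount
        FROM adjustments;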

    Read the article

  • Entity Framework: How to specify parameter type in generated SQL (SQL Server 2005), Nvarchar vs Varchar

    - by Gratzy
    In Entity Framework I have an entity 'Client' that was generated from a database. There is a property called 'Account'; it is defined in the storage model as: <Property Name="Account" Type="char" Nullable="false" MaxLength="6" /> And in the conceptual model as: <Property Name="Account" Type="String" Nullable="false" /> When select statements are generated using a variable for Account, i.e. where m.Account == myAccount..., Entity Framework generates a parameterized query with a parameter of type NVarchar(6). The problem is that the column in the table has a data type of char(6). When this is executed there is a large performance hit because of the data type difference. Account is an index on the table, and instead of using the index I believe an index scan is done. Anyone know how to force EF not to use Unicode for the parameter and to use Varchar(6) instead?
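
    If this is EF 4 on .NET 4, one option I believe is available is wrapping the comparison value in EntityFunctions.AsNonUnicode so the generated parameter comes out as varchar rather than nvarchar; a hedged sketch (the context and entity set names are assumed):

        using System.Data.Objects;   // EntityFunctions (EF 4 / .NET 4)
        using System.Linq;

        // Inside a method with an ObjectContext called ctx and a string myAccount:
        var client = ctx.Clients
                        .Where(m => m.Account == EntityFunctions.AsNonUnicode(myAccount))
                        .FirstOrDefault();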

    Read the article

  • How does jQuery store data with .data()?

    - by TK
    I am a little confused about how jQuery stores data with the .data() functions. Is this something called expando? Or is this using HTML5 Web Storage, although I think that is very unlikely? The documentation says: The .data() method allows us to attach data of any type to DOM elements in a way that is safe from circular references and therefore from memory leaks. As I read about expando, it seems to carry a risk of memory leaks. Unfortunately my skills are not enough to read and understand the jQuery code itself, but I want to know how jQuery stores such data via .data(). http://api.jquery.com/data/
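
    Conceptually (a simplified sketch of the idea, not jQuery's actual source), jQuery keeps the data in a plain JavaScript cache object and stamps the element with a single generated expando property holding a numeric id; since the DOM node only carries a number, not object references, circular DOM-to-JS references are avoided:

        var cache = {};                               // all data lives here, off the DOM
        var uid = 0;
        var expando = 'jQueryData' + (+new Date());   // one generated property name

        function data(elem, key, value) {
            var id = elem[expando] || (elem[expando] = ++uid);   // tag element once
            var store = cache[id] || (cache[id] = {});
            if (value !== undefined) store[key] = value;          // setter
            return store[key];                                    // getter
        }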

    Read the article

  • Zend_Session: unserialize session data

    - by takeshin
    I'm using a session SaveHandler to persist session data in the database. Sample session_data column from the database: Messenger|a:1:{s:13:"page_messages";a:0:{}}userSession|a:1:{s:7:"referer";s:32:"http://cms.dev/user/profile/view";}Zend_Auth|a:1:{s:7:"storage";O:19:"User_Model_Identity":3:{s:2:"id";s:1:"1";s:8:"username";s:13:"administrator";s:4:"slug";s:13:"administrator";}} I want to delete the Zend_Auth object from this session data. How can I unserialize those objects and remove the object I need? I suspect that I don't have to write my own parser and that Zend_Session already has a method to do this. I have tried different combinations of unserialize but it still returns false. I'm using the autoloader from ZF 1.10.2 and Doctrine 1.2.
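
    The reason plain unserialize() returns false is that this string is in PHP's own session format ('name|serialized-value' pairs), not a single serialized value. One approach, a sketch that is not Zend-specific and assumes the default 'php' session.serialize_handler, is to let PHP decode it into $_SESSION, drop the namespace, and re-encode:

        <?php
        session_start();                  // session_decode()/session_encode() need an active session

        $raw = $row['session_data'];      // the value fetched from your save-handler table
        session_decode($raw);             // populates $_SESSION from the raw string

        unset($_SESSION['Zend_Auth']);    // remove the namespace you don't want

        $cleaned = session_encode();      // write this back to the session_data column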

    Read the article

  • How should my application keep clients in sync with schema changes to HTML5 databases?

    - by Chad Johnson
    I want to incorporate HTML5 database storage into my web application to make it offline-accessible. I've done lots of development in server-side environments with databases, and we all know that database schema additions and modifications are often necessary. I am wondering what should happen if my application uses an offline database schema and that schema changes. How do I prevent the application from breaking on the client side? How do I ensure the database is always up to date on the client end? Anyone have any solutions?
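
    With the Web SQL flavour of HTML5 storage, the version string passed to openDatabase/changeVersion is the usual hook for client-side migrations: open with an empty version, look at db.version, and apply whatever migrations are needed to reach the current schema. A rough sketch (table, columns and version numbers invented):

        // Open whatever version the client already has ('' accepts any version).
        var db = openDatabase('notes', '', 'Notes DB', 2 * 1024 * 1024);

        if (db.version === '') {
            // Brand-new client: create the initial schema at the current version.
            db.changeVersion('', '2.0', function (tx) {
                tx.executeSql('CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT, updated_at TEXT)');
            });
        } else if (db.version === '1.0') {
            // Existing client one schema behind: migrate forward.
            db.changeVersion('1.0', '2.0', function (tx) {
                tx.executeSql('ALTER TABLE notes ADD COLUMN updated_at TEXT');
            });
        }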

    Read the article

  • SoundPool.load() and FileDescriptor from file

    - by Hans
    I tried using the load function of the SoundPool that takes a FileDescriptor, because I wanted to be able to set the offset and length. The file is not stored in the resources but is a file on the storage card. Even though neither the load nor the play function of the SoundPool throws any exception or prints anything to the console, the sound is not played. Using the same code but passing the file path string to load instead works perfectly. This is how I have tried the loading (start equals 0 and length is the length of the file in milliseconds): FileInputStream fileIS = new FileInputStream(new File(mFile)); mStreamID = mSoundPool.load(fileIS.getFD(), start, length, 0); mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f); If I use this instead, it works: mStreamID = mSoundPool.load(mFile, 0); mPlayingStreamID = mSoundPool.play(mStreamID, 1f, 1f, 1, 0, 1f); Any ideas anyone? Thanks
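
    One thing that stands out: SoundPool.load(FileDescriptor, offset, length, priority) takes a byte offset and byte length into the file, not a time range in milliseconds, so passing the duration may be why nothing plays. A sketch of the byte-based call (this still assumes the file is a format SoundPool can decode, and that loading has finished before play is called):

        File soundFile = new File(mFile);
        FileInputStream fileIS = new FileInputStream(soundFile);

        // offset/length are byte positions within the file, not milliseconds
        int soundID = mSoundPool.load(fileIS.getFD(), 0, soundFile.length(), 1);

        // load() is asynchronous; on API 8+ an OnLoadCompleteListener (or a short
        // delay while testing) avoids calling play() before the sample is decoded
        int playingID = mSoundPool.play(soundID, 1f, 1f, 1, 0, 1f);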

    Read the article

  • Django gives "I/O operation on closed file" error when reading from a saved ImageField

    - by Rob Osborne
    I have a model with two image fields, a source image and a thumbnail. When I update the source image, save it, and then try to read the source image back to crop/scale it into a thumbnail, I get an "I/O operation on closed file" error from PIL. If I update the source image, don't save it, and then try to read the source image to crop/scale, I get an "attempting to read from closed file" error from PIL. In both cases the source image is actually saved and available in later request/response loops. If I don't crop/scale in a single request/response loop, but instead upload on one page and then crop/scale on another page, this all works fine. This seems to be a cached buffer being reused somehow, either by PIL or by the Django file storage. Any ideas on how to make an ImageField readable after saving?
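
    A workaround that often helps in this situation, sketched against a hypothetical model with source and thumbnail ImageFields rather than a confirmed fix for this exact setup, is to re-open the freshly saved field from storage before handing it to PIL, instead of reusing the upload's file object:

        from io import BytesIO

        from PIL import Image
        from django.core.files.base import ContentFile


        def make_thumbnail(instance, size=(120, 120)):
            """Re-open the stored source image and write the thumbnail field."""
            instance.source.open()                  # fresh handle from the storage backend
            img = Image.open(instance.source)
            img = img.convert('RGB')                # JPEG output needs RGB
            img.thumbnail(size)                     # in-place, preserves aspect ratio

            buf = BytesIO()
            img.save(buf, format='JPEG')
            instance.thumbnail.save('thumb_%s.jpg' % instance.pk,
                                    ContentFile(buf.getvalue()), save=True)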

    Read the article

< Previous Page | 164 165 166 167 168 169 170 171 172 173 174 175  | Next Page >