Search Results

Search found 1614 results on 65 pages for 'emps (expensive managemen'.

Page 36/65

  • Distributed datastore

    - by Julien Genestoux
    We're trying to add some kind of persistence to our app. The app generates about 250 entries per second, and each entry belongs to one of 2M files. For each file we want to keep the last 10 entries, so we can look them up later. The way our client application works: it gets a stream of all the data, fetches the right file (GET), adds the new content, and saves the file back (PUT). We're looking for an efficient way to store this data that can scale horizontally, as the amount of data we're getting is doubling every few weeks. We initially looked at S3. It works fine, but becomes very expensive very fast ($1000 monthly just in PUT operations!). We then gave Riak a shot, but it seems we can't get more than 60 writes/sec on each node, which is very, very slow. Any other solution out there?
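    Below is a minimal sketch of the read-modify-write cycle described above against a generic key-value store; the store object and its get/put methods are hypothetical stand-ins, not S3 or Riak API calls.

        import json

        MAX_ENTRIES = 10  # keep only the last 10 entries per file

        def append_entry(store, file_key, new_entry):
            # GET the current file (missing files start out empty).
            raw = store.get(file_key)
            entries = json.loads(raw) if raw is not None else []
            # Add the new content and trim to the last 10 entries.
            entries.append(new_entry)
            entries = entries[-MAX_ENTRIES:]
            # PUT the file back.
            store.put(file_key, json.dumps(entries))

    Note that at ~250 entries per second this is also ~250 GETs and ~250 PUTs per second, which is why per-request pricing such as S3's adds up so quickly.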

    Read the article

  • Experimenting with data to determine value - migration/methods?

    - by TCK
    Hey guys, I have a LOT of data available to me and want to experiment with data that isn't currently being used in production. The obvious solution seems to be to make a copy of the production data and integrate it with what I want to play around with, but I was wondering if there was a better (less expensive?) way to do this. Both isolation and integration are important. I'd like to be able to keep lightweight/experimental data assets apart from high-volume production data, but also be able to integrate them (RELATIVELY) painlessly if the experimental assets are deemed useful. Thanks.

    Read the article

  • Future-proofing client-server code?

    - by Koran
    Hi, we have a web-based client-server product. The client is expected to be used by upwards of 1M users (a famous company is going to use it). Our server is set up in the cloud. One of the major questions while designing is how to make the whole program future-proof, for example: the cloud provider goes down and we move automatically to a backup in another cloud; we move to a different server altogether; etc. The options we have thought of so far are: 1) DNS: running a DNS name server in the cloud ourselves; 2) a directory server, which also lives in the cloud; 3) having our server return future moves and future URLs to the client, where the client is specifically designed to handle those scenarios. Since this must be a common problem, which is the best solution? Since our company is a very small one, we are looking at the least technically and financially expensive solution (say option 3). Could someone provide some pointers? K
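    A minimal sketch of option 3 above: the client keeps an ordered list of candidate base URLs and asks whichever one answers for the current endpoint list, so a move only requires one still-reachable server to announce the new addresses. All URLs and field names here are hypothetical.

        import json
        import urllib.request

        # Last known server plus hard-coded fallbacks shipped with the client (hypothetical).
        CANDIDATE_URLS = ["https://api.example.com", "https://backup.example-cloud.net"]

        def fetch_endpoints():
            # Ask each candidate, in order, for the current list of service URLs.
            for base in CANDIDATE_URLS:
                try:
                    with urllib.request.urlopen(base + "/endpoints", timeout=5) as resp:
                        info = json.load(resp)
                        # The server can announce a future move here,
                        # e.g. {"current": [...], "moving_to": [...]}.
                        return info.get("current", [base])
                except OSError:
                    continue  # provider down, try the next candidate
            raise RuntimeError("no known endpoint is reachable")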

    Read the article

  • Sprite backgrounds

    - by Mattias Akerman
    I use cocos2d for iPhone, and in my game I'm using a sprite as a static background image. I've noticed that when I remove the code for adding the sprite, the framerate goes from ~30fps to over 40fps. Is there any other way to show a static background that is less expensive? I'm not moving the background sprite at all. The code right now:

        background = [Sprite spriteWithFile:@"t1_5.jpg"];
        [self addChild:background z:0];
        background.position = ccp(240, 160);

    Read the article

  • Will pooling connections help threading in SQLite (and how)?

    - by mamcx
    I currently use a singleton to access my database (see related question), but now that I'm trying to add some background processing, everything falls apart. I read the SQLite docs and found that SQLite can work thread-safe, but each thread must have its own DB connection. I tried EGODatabase, which promises a SQLite wrapper with thread safety, but it is very buggy, so I went back to my old FMDB library and started to look at how to use it in a multi-threaded way. Because all my code is written around the singleton idea, changing everything will be expensive (and a lot of open/close connections could become slow), so I wonder if, as the SQLite docs hint, building a pool of connections will help. If that is the case, how do I make it? How do I know which connection to get from the pool (because 2 threads can't share a connection)? I wonder if somebody has already used SQLite in multi-threading with NSOperation or similar; my searching only returns "yeah, it's possible" but leaves the details to my imagination...
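    A minimal sketch of the "one connection per thread" rule in Python's sqlite3 module, just to illustrate the pattern independently of FMDB or EGODatabase; each thread lazily opens and reuses its own connection instead of sharing the singleton's. The database path and table are hypothetical.

        import sqlite3
        import threading

        _local = threading.local()
        DB_PATH = "app.db"  # hypothetical database path

        def get_connection():
            # Each thread gets its own connection, created on first use.
            conn = getattr(_local, "conn", None)
            if conn is None:
                conn = sqlite3.connect(DB_PATH)
                _local.conn = conn
            return conn

        def save_value(key, value):
            conn = get_connection()
            with conn:  # commits on success, rolls back on error
                conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
                conn.execute("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))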

    Read the article

  • Migrating schema and stored procedures from Informix to MySQL

    - by zombiegx
    We need to redo in MySQL a database that has already been done in Informix. Is there a way to migrate not only the schema, but the stored procedures as well? Thanks. We have a client for whom we built a web application that uses an Informix database. Now the client wants to be able to deploy the same software on multiple closed networks (like 20 of them). Doing this with Informix would be very expensive (20 licences X_X), so the best approach is to redo the database on something like MySQL. The application was built using Flex, .NET (using ODBC) and Informix.

    Read the article

  • Inserting std::strings into a std::map

    - by PaulH
    I have a program that reads data from a file line by line. I would like to copy a substring of each line into a map, as below:

        std::map< DWORD, std::string > my_map;
        DWORD index;          // populated with some data
        char buffer[ 1024 ];  // populated with some data
        char* element_begin;  // points to some location in buffer
        char* element_end;    // points to some location in buffer > element_begin

        my_map.insert( std::make_pair( index, std::string( element_begin, element_end ) ) );

    This std::map<>::insert() operation takes a long time (it doubles the file parsing time). Is there a way to make this a less expensive operation? Thanks, PaulH

    Read the article

  • VSTS 2008 Load testing, Is it any good?

    - by anshu
    I have already spent a couple of weeks trying to use this tool to generate some web tests and a load test, but every day it throws a weird problem for which I can't find anything in the documentation. Examples: "Hidden variables (_lastfocus) not found in the context" errors. Today, all of a sudden, it is refusing to run some of the web tests that are part of the test mix in my load test run (they work fine in another load test). Are only the expensive, enterprise-level tools (like LoadRunner, SilkPerformer, etc.) any good?

    Read the article

  • Haskell compile-time function calculation

    - by egon
    I would like to precalculate values for a function at compile time. Example (the real function is more complex; I didn't try compiling this):

        base = 10
        mymodulus n = n `mod` base  -- or substitute a function that takes
                                    -- too much time to compute at runtime

        printmodules 0 = [mymodulus 0]
        printmodules z = (mymodulus z) : (printmodules (z-1))

        main = print (printmodules 64)

    I know that mymodulus n will be called only with n < 64, and I would like to precalculate mymodulus for the n values 0..64 at compile time. The reason is that mymodulus would be really expensive and will be reused multiple times.

    Read the article

  • Protect .NET assemblies from decompilation

    - by Holli
    One of the first things I learned when I started with C# was the most important one: you can decompile any .NET assembly with Reflector or other tools. Many developers are not aware of this fact, and most of them are shocked when I show them their source code. Protection against decompilation is still a difficult task. I am still looking for a fast, easy and secure way to do it. I don't want to obfuscate my code so that my method names become a, b, c and so on. Reflector or other tools should be unable to recognize my application as a .NET assembly at all. I know about some tools already, but they are very expensive. Is there any other way to protect my applications?

    Read the article

  • Good Front end for PostgreSQL on Windows or Mac

    - by anjanb
    I'm considering using a PostgreSQL 8.4.x DB on Windows/Mac for development and Linux for production, and I'm wondering what front-end tools are out there that are comparable to Toad (for Oracle). PostgreSQL comes with pgAdmin III; it's OK, but I feel there might be something better than that. I prefer free or open source, but if something is NOT too expensive (we are a NON-PROFIT), I would not mind evaluating it. The minimal features I want are: 1) a connection manager; 2) a nice SQL editor; 3) explain plan; 4) help with embedding SQL into various languages; 5) E-R diagrams would be good, but this is not a killer feature for me. Does anything work for you guys? I will be using Java 6 to build the application(s), if that is relevant at all. Thank you,

    Read the article

  • How to populate an array with recordset data

    - by Curtis Inderwiesche
    I am attempting to move data from a recordset directly into an array. I know this is possible, but specifically I want to do this in VBA, as this is being done in MS Access 2003. Typically I would do something like the following to achieve this:

        Dim vaData As Variant
        Dim rst As ADODB.Recordset
        ' Pull data into the recordset here...
        ' Populate the array with the whole recordset.
        vaData = rst.GetRows

    What differences exist between VB and VBA that make this type of operation not work? What about performance concerns? Is this an "expensive" operation?

    Read the article

  • Is Ruby's ||= intelligent?

    - by brad
    I have a question regarding the ||= operator in Ruby, and this is of particular interest to me as I'm using it to write to memcache. What I'm wondering is: does ||= check the receiver first to see if it's set before calling the setter, or is it literally an alias for x = x || y? This wouldn't really matter in the case of a normal variable, but using something like

        CACHE[:some_key] ||= "Some String"

    could possibly do a memcache write, which is more expensive than a simple variable set. I couldn't find anything about ||= in the Ruby API docs, oddly enough, so I haven't been able to answer this myself. Of course I know that

        CACHE[:some_key] = "Some String" if CACHE[:some_key].nil?

    would achieve this; I'm just looking for the most terse syntax.

    Read the article

  • UML tool required for C#

    - by Peter Morris
    I need some help - here are my requirements. 1: I should be able to modify the UML model without affecting the code, and then later apply the changes. This is because I need to print the changes, get them confirmed, and then develop them. 2: I should be able to reuse parts of the model. For example, I would create one project which outputs the A.dll assembly, and then another UML project would use the classes in the first to create B.dll. 3: The project should be stored as text so I can see changes in the version control history. 4: Together is too expensive :-)

    Read the article

  • What's the reason both the Image and Bitmap classes don't implement custom equality/hashcode logic?

    - by devoured elysium
    From the MSDN documentation, it seems that both GetHashCode() and Equals() haven't been overridden in Bitmap, nor have they been overridden in Image. So both classes are using Object's versions, which just compare references. I wasn't too convinced, so I decided to fire up Reflector to check it out, and it seems MSDN is correct on that matter. So, is there any special reason why the MS guys wouldn't implement "comparison logic", at least for the Bitmap class? I find it kind of acceptable for Image, as it is an abstract class, but not so much for Bitmap. I can see that in a lot of situations calculating the hash code can be an expensive operation, but it'd be alright if it used some kind of lazy evaluation (storing the computed hash code integer in a variable, so it wouldn't have to be calculated again later). When wanting to compare 2 bitmaps, will I have to resort to running over the whole picture, comparing each of its pixels? Thanks
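    A minimal sketch of the lazy-evaluation idea mentioned above, in Python rather than .NET: the (potentially expensive) content hash is computed at most once and stored in a field. The PixelImage class is purely illustrative; note that a cached hash only stays correct while the underlying data is immutable.

        class PixelImage:
            def __init__(self, pixels):
                self._pixels = tuple(pixels)   # stand-in for bitmap data
                self._hash = None              # computed lazily

            def __hash__(self):
                # Expensive walk over all pixels, done at most once.
                if self._hash is None:
                    self._hash = hash(self._pixels)
                return self._hash

            def __eq__(self, other):
                return isinstance(other, PixelImage) and self._pixels == other._pixels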

    Read the article

  • C++: A variation of priority queue

    - by Helltone
    I need some kind of priority queue to store <key, value> pairs. Values are unique, but keys aren't. I will be performing the following operations (most common first): 1) random insertion; 2) retrieving (and removing) all elements with the least key; 3) random removal (by value). I can't use std::priority_queue because it only supports removing the head. For now, I'm using an unsorted std::list. Insertion is performed by just pushing new elements to the back (O(1)). Operation 2 sorts the list with std::sort (O(N*logN)) before performing the actual retrieval. Removal, however, is O(N), which is a bit expensive. Any idea of a better data structure?
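    One common workaround, sketched here in Python as an illustration rather than a recommendation: a binary heap ordered by key plus a hash map from value to its current key, with removals done lazily (stale heap entries are skipped when the minimum is popped).

        import heapq

        class KeyedPQ:
            def __init__(self):
                self._heap = []      # (key, value) pairs, possibly stale
                self._key_of = {}    # value -> current key for live entries

            def insert(self, key, value):        # O(log n)
                self._key_of[value] = key
                heapq.heappush(self._heap, (key, value))

            def remove(self, value):             # O(1); the heap entry is cleaned up later
                self._key_of.pop(value, None)

            def pop_min_group(self):             # all live values sharing the least key
                group, min_key = [], None
                while self._heap:
                    key, value = self._heap[0]
                    if self._key_of.get(value) != key:   # stale or removed entry
                        heapq.heappop(self._heap)
                        continue
                    if min_key is None:
                        min_key = key
                    elif key != min_key:
                        break
                    heapq.heappop(self._heap)
                    del self._key_of[value]
                    group.append(value)
                return group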

    Read the article

  • Cached Schwartzian transform

    - by davidk01
    I'm going through "Intermediate Perl" and it's pretty cool. I just finished the section on "The Schwartzian Transform" and, after it sank in, I started to wonder why the transform doesn't use a cache. In lists that have several repeated values, the transform recomputes the value for each one, so I thought: why not use a hash to cache results? Here's some code:

        # a place to keep our results
        my %cache;

        # the transformation we are interested in
        sub foo {
            # expensive operations
        }

        # some data
        my @unsorted_list = ....;

        # sorting with the help of the cache
        my @sorted_list = sort {
            ($cache{$a} or $cache{$a} = &foo($a))
                <=>
            ($cache{$b} or $cache{$b} = &foo($b))
        } @unsorted_list;

    Am I missing something? Why isn't the cached version of the Schwartzian transform listed in books and, in general, just better circulated? At first glance, I think the cached version should be more efficient.
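    For reference, a minimal Python sketch of the plain (uncached) decorate-sort-undecorate idea the question is comparing against: the key is computed once per element, so duplicates of the same value still pay for foo separately, which is exactly the repeated work the hash cache above avoids. The foo body and the data are stand-ins.

        def foo(x):
            # stand-in for an expensive transformation
            return x % 7

        unsorted_list = [30, 4, 4, 18, 30, 2]

        # Decorate: pair each element with its precomputed key (one foo call per element).
        decorated = [(foo(x), x) for x in unsorted_list]
        # Sort on the precomputed key, then undecorate.
        sorted_list = [x for _, x in sorted(decorated)]
        print(sorted_list)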

    Read the article

  • What are all the disadvantages of using files as a means of communicating between two processes?

    - by Manny
    I have legacy code which I need to improve for performance reasons. My application consists of two executables that need to exchange certain information. In the legacy code, one exe writes to a file (the file name is passed as an argument to the exe), and the second executable first checks whether such a file exists; if it does not, it checks again, and when it finds the file it goes on to read its contents. This is how information is transferred between the two executables. The way the code is structured, the second executable is successful on the first try itself. Now I have to clean this code up and was wondering what the disadvantages are of using files as a means of communication rather than some inter-process communication mechanism like pipes. Is opening and reading a file more expensive than pipes? Are there any other disadvantages? And how significant do you think the performance degradation would be? The legacy code runs on both Windows and Linux.
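    A minimal sketch of the pipe alternative mentioned above, using a POSIX named pipe from Python (so this particular form is Linux-only; Windows needs its own named-pipe API or a socket). The reader blocks until data arrives, so there is no check-if-the-file-exists polling loop. The path is hypothetical.

        import os

        FIFO_PATH = "/tmp/legacy_ipc.fifo"   # hypothetical rendezvous path

        def writer(message):
            if not os.path.exists(FIFO_PATH):
                os.mkfifo(FIFO_PATH)
            with open(FIFO_PATH, "w") as fifo:   # blocks until a reader opens the pipe
                fifo.write(message)

        def reader():
            with open(FIFO_PATH, "r") as fifo:   # blocks until a writer opens the pipe
                return fifo.read()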

    Read the article

  • What is a reasonably priced map API solution for a startup?

    - by Kevin
    I've been developing my application with Google Maps and the wonderful Rails plugin for it, expecting to find that when I put my app into production the commercial licensing wouldn't be too expensive. Then I found out it costs $10,000/year, no exceptions so far. http://www.47hats.com/2009/07/google-maps-the-10k-gotcha/ That's not a terrible price to pay for unlimited usage when your site becomes successful, but for those of us trying to build something from the ground up, it's a hefty price. I've looked at Bing and Yahoo, but they're very wishy-washy about what ballpark the pricing is in. That's on top of the fact that I'd have to ditch my nice Rails plugin YM4R for Google Maps... Is anyone out there using a map API solution that doesn't cost an arm and a leg to get started with commercially? I don't mind not using a plugin, I just need something that will work and is affordable in the beginning.

    Read the article

  • What makes COBOL such a hated language?

    - by cable
    I'm rather young, so I haven't experienced the times when COBOL was still in use and even going mainstream; maybe you can help me out. Everywhere I go and surf I: am told how horrible COBOL is; see sites making fun of COBOL; hear from older programmers that I should be happy I don't have to use COBOL; see others make anti-COBOL jokes. I don't know much about COBOL, except that it is still used everywhere, as many old systems haven't been refactored, and that it is even still developed and managed. I have also looked at some COBOL code examples, but it seems like they are as rare and expensive as unicorn blood. If you close this question or give me sarcastic, irrelevant answers, I'll have to learn COBOL to find out the hard way. I'm serious.

    Read the article

  • Efficient data structure for fast random access, search, insertion and deletion

    - by Leonel
    I'm looking for a data structure (or structures) that would allow me to keep an ordered list of integers, with no duplicates, and with indexes and values in the same range. I need four main operations to be efficient, in rough order of importance: 1) taking the value at a given index; 2) finding the index of a given value; 3) inserting a value at a given index; 4) deleting the value at a given index. Using an array I have 1 at O(1), but 2 is O(N), and insertions and deletions are expensive (O(N) as well, I believe). A linked list has O(1) insertion and deletion (once you have the node), but 1 and 2 are O(N), thus negating the gains. I tried keeping two arrays, a[index]=value and b[value]=index, which turns 1 and 2 into O(1) but turns 3 and 4 into even more costly operations. Is there a data structure better suited for this?
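    A minimal sketch of the two-array idea from the question, mainly to make the costs concrete: lookups in both directions are O(1), but inserting or deleting at an index shifts the list and forces the value-to-index map to be rebuilt for everything after that point. A balanced tree augmented with subtree sizes (an order-statistic tree) is the usual way to get all four operations down to O(log n), but that is beyond this sketch.

        class IndexedSet:
            def __init__(self):
                self.by_index = []    # a[index] = value
                self.index_of = {}    # b[value] = index

            def value_at(self, index):           # operation 1: O(1)
                return self.by_index[index]

            def index_of_value(self, value):     # operation 2: O(1)
                return self.index_of[value]

            def insert(self, index, value):      # operation 3: O(n) shift + O(n) reindex
                self.by_index.insert(index, value)
                for i in range(index, len(self.by_index)):
                    self.index_of[self.by_index[i]] = i

            def delete(self, index):             # operation 4: O(n) as well
                value = self.by_index.pop(index)
                del self.index_of[value]
                for i in range(index, len(self.by_index)):
                    self.index_of[self.by_index[i]] = i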

    Read the article

  • Python threads all executing on a single core

    - by Rob Lourens
    I have a Python program that spawns many threads, running 4 at a time, and each performs an expensive operation. Pseudocode:

        for obj in objects:
            t = Thread(target=process, args=(obj,))
            # if fewer than 4 threads are currently running, t.start();
            # otherwise, add t to a queue

    But when the program is run, Activity Monitor in OS X shows that 1 of the 4 logical cores is at 100% and the others are at nearly 0. Obviously I can't force the OS to do anything, but I've never had to pay attention to performance in multi-threaded code like this before, so I was wondering if I'm just missing or misunderstanding something. Thanks.
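    A minimal sketch of the same "run at most 4 at a time" fan-out using concurrent.futures (Python 3.2+); the process body and the work list are stand-ins. Note that CPython's GIL keeps pure-Python bytecode on one core at a time, so for CPU-bound work swapping ThreadPoolExecutor for ProcessPoolExecutor is the usual way to occupy all cores.

        from concurrent.futures import ThreadPoolExecutor  # or ProcessPoolExecutor

        def process(n):
            # stand-in for the expensive operation
            return sum(i * i for i in range(n))

        work_items = [100000] * 16   # hypothetical workload

        # At most 4 workers run concurrently; the executor queues the rest,
        # replacing the hand-rolled "start 4, queue the others" logic above.
        with ThreadPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(process, work_items))
        print(len(results))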

    Read the article

  • ASP.NET error when trying to configure ListView + SqlDataSource for a MySQL table

    - by stighy
    Hi, I'm using ASP.NET + a MySQL DB. I'm trying to configure a ListView, so I've written:

        <asp:SqlDataSource ID="dsDatiUtente" runat="server"
            ConnectionString="Server=12.28.136.29;Database=mydb;Uid=m111d1;Pwd=fake;Pooling=false;"
            ProviderName="MySql.Data.MySqlClient"
            SelectCommand="SELECT * FROM user WHERE idUser=@IdUser" />

    At the beginning of my aspx page I've added <%@ Import Namespace="MySql.Data.MySqlClient" %>, but if I click on the SqlDataSource and choose "Refresh Schema" I get this error: "Unable to retrieve schema.... Unable to find the requested .Net Framework data provider". I've installed it, but I've also uninstalled old versions and then installed new ones. In my project I simply copied the MySQL DLL into the "bin" folder, then added a reference to that DLL; I'm not sure that is the correct way... I need the refreshed schema so that VS.NET can build my ListView automatically... if I can't "auto build" the ListView, I have to write all the code by hand, and that is too much work for me :( What am I doing wrong? Thank you!

    Read the article

  • Send a "304 Not Modified" for images stored in the datastore

    - by Emilien
    I store user-uploaded images in the Google App Engine datastore as db.Blob, as proposed in the docs. I then serve those images at /images/<id>.jpg. The server always sends a 200 OK response, which means that the browser has to download the same image multiple times (== slower) and that the server has to send the same image multiple times (== more expensive). As most of those images will likely never change, I'd like to be able to send a 304 Not Modified response. I am thinking about calculating some kind of hash of the picture when the user uploads it, and then using this to know if the user already has the image (maybe send the hash as an ETag?). I have found this answer and this answer that explain the logic pretty well, but I have 2 questions: Is it possible to send an ETag in Google App Engine? Has anyone implemented such logic, and/or is there any code snippet available?
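    A minimal sketch of the ETag logic described above, written as plain Python functions rather than against the actual App Engine request API; the hash is computed once at upload time, stored with the blob, and compared against the If-None-Match header on each request.

        import hashlib

        def make_etag(image_bytes):
            # Computed once at upload time and stored next to the blob.
            return '"%s"' % hashlib.md5(image_bytes).hexdigest()

        def serve_image(request_headers, image_bytes, stored_etag):
            # If the client already has this exact version, answer 304 with no body.
            if request_headers.get('If-None-Match') == stored_etag:
                return 304, {'ETag': stored_etag}, b''
            headers = {'ETag': stored_etag,
                       'Content-Type': 'image/jpeg',
                       'Cache-Control': 'public, max-age=3600'}
            return 200, headers, image_bytes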

    Read the article

  • A compiler for automata theory

    - by saadtaame
    I'm designing a programming language for automata theory. My goal is to allow programmers to use machines (DFAs, NFAs, etc.) as units in expressions. I'm unsure whether the language should be compiled, interpreted, or JIT-compiled. My intuition is that compilation is a good choice, since some operations might take too much time (converting NFAs to equivalent DFAs can be expensive). Translating to x86 seems good. There is one issue, however: I want the user to be able to plot machines. Any ideas?
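    For reference, a minimal sketch of the NFA-to-DFA subset construction mentioned above, the operation whose cost motivates compiling ahead of time. The NFA representation is a hypothetical one: a dict mapping (state, symbol) to a set of states, with None standing for epsilon.

        from collections import deque

        def epsilon_closure(nfa, states):
            # All states reachable through epsilon (None) transitions.
            closure, stack = set(states), list(states)
            while stack:
                s = stack.pop()
                for t in nfa.get((s, None), set()):
                    if t not in closure:
                        closure.add(t)
                        stack.append(t)
            return frozenset(closure)

        def nfa_to_dfa(nfa, start, accepting, alphabet):
            # Classic subset construction: each DFA state is a set of NFA states,
            # so the result can be exponentially larger than the input NFA.
            dfa_start = epsilon_closure(nfa, {start})
            dfa_trans, dfa_accepting = {}, set()
            queue, seen = deque([dfa_start]), {dfa_start}
            while queue:
                current = queue.popleft()
                if current & accepting:
                    dfa_accepting.add(current)
                for symbol in alphabet:
                    nxt = set()
                    for s in current:
                        nxt |= nfa.get((s, symbol), set())
                    nxt = epsilon_closure(nfa, nxt)
                    dfa_trans[(current, symbol)] = nxt
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return dfa_start, dfa_trans, dfa_accepting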

    Read the article
