Search Results

Search found 5528 results on 222 pages for 'offsite storage'.

  • Multithreaded linked list traversal

    - by Rob Bryce
    Given a (doubly) linked list of objects (C++), I have an operation that I would like to multithread, to perform on each object. The cost of the operation is not uniform for each object. The linked list is the preferred storage for this set of objects for a variety of reasons. The first element in each object is the pointer to the next object; the second element is the previous object in the list.

    I have solved the problem by building an array of nodes and applying OpenMP. This gave decent performance. I then switched to my own threading routines (based on Windows primitives), and by using InterlockedIncrement() (acting on the index into the array) I can achieve higher overall CPU utilization and faster throughput. Essentially, the threads work by "leap-frogging" along the elements.

    My next approach to optimization is to try to eliminate creating/reusing the array of elements built from my linked list. However, I'd like to continue with this leap-frog approach and somehow use some nonexistent routine that could be called "InterlockedCompareDereference" - to atomically compare against NULL (end of list) and conditionally dereference and store, returning the dereferenced value. I don't think InterlockedCompareExchangePointer() will work, since I cannot atomically dereference the pointer and call this Interlocked() method.

    I've done some reading, and others are suggesting critical sections or spin-locks. Critical sections seem heavyweight here. I'm tempted to try spin-locks, but I thought I'd first pose the question here and ask what other people are doing. I'm not convinced that the InterlockedCompareExchangePointer() method itself could be used like a spin-lock. Then one also has to consider acquire/release/fence semantics... Ideas? Thanks!
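    For reference, a minimal sketch of the array-based leap-frog pattern described above; Node and process() are placeholders for the question's types, and error handling is omitted:

        // Shared state: nodes[] was built by walking the list once.
        #include <windows.h>

        Node**        nodes;          // array of node pointers built from the list
        LONG          count;          // number of nodes
        volatile LONG nextIndex = -1; // shared cursor; each thread claims the next slot

        DWORD WINAPI worker(LPVOID)
        {
            for (;;) {
                LONG i = InterlockedIncrement(&nextIndex); // atomically claim an index
                if (i >= count)
                    break;                                 // ran off the end: done
                process(nodes[i]);                         // per-object cost may vary freely
            }
            return 0;
        }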

  • typedef a function's prototype

    - by bitmask
    I have a series of functions with the same prototype, say

        int func1(int a, int b) {
            // ...
        }

        int func2(int a, int b) {
            // ...
        }

        // ...

    Now, I want to simplify their definition and declaration. Of course I could use a macro like this:

        #define SP_FUNC(name) int name(int a, int b)

    But I'd like to keep it in C, so I tried to use the storage-class specifier typedef for this:

        typedef int SpFunc(int a, int b);

    This seems to work fine for the declaration:

        SpFunc func1; // compiles

    but not for the definition:

        SpFunc func1 {
            // ...
        }

    which gives me the following error:

        error: expected '=', ',', ';', 'asm' or '__attribute__' before '{' token

    Is there a way to do this correctly, or is it impossible? To my understanding of C this should work, but it doesn't. Why? Note that gcc understands what I am trying to do, because if I write

        SpFunc func1 = { /* ... */ }

    it tells me

        error: function 'func1' is initialized like a variable

    which means that gcc understands that SpFunc is a function type.
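    For what it's worth, a sketch of the pattern that does compile: the C standard forbids defining a function through a typedef name, but declarations and pointers through the typedef are fine (the body below is a placeholder):

        typedef int SpFunc(int a, int b);

        SpFunc func1;           /* declaration through the typedef: OK    */

        int func1(int a, int b) /* a definition must spell the type out   */
        {
            return a + b;       /* placeholder body                       */
        }

        SpFunc *fp = func1;     /* the typedef is also handy for pointers */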

  • Generating an authentication header for Azure table storage in Objective-C

    - by user923370
    I'm fetching data from iCloud, and for that I need to generate an authentication header (Azure table storage). I used the code below, and it does generate the headers, but when I use them in my project the service responds with "make sure that the value of authorization header is formed correctly including the signature." I googled a lot and tried many code samples, but in vain. Can anyone please help me with where I'm going wrong in this code?

        -(id)generate {
            NSString *messageToSign = [NSString stringWithFormat:@"%@/%@/%@",
                dateString, AZURE_ACCOUNT_NAME, tableName];
            NSString *key = @"asasasasasasasasasasasasasasasasasasasasas==";
            const char *cKey = [key cStringUsingEncoding:NSUTF8StringEncoding];
            const char *cData = [messageToSign cStringUsingEncoding:NSUTF8StringEncoding];
            unsigned char cHMAC[CC_SHA256_DIGEST_LENGTH];
            CCHmac(kCCHmacAlgSHA256, cKey, strlen(cKey), cData, strlen(cData), cHMAC);
            NSData *HMAC = [[NSData alloc] initWithBytes:cHMAC length:sizeof(cHMAC)];
            NSString *hash = [Base64 encode:HMAC];
            NSLog(@"Encoded hash: %@", hash);

            NSURL *url = [NSURL URLWithString:@"http://my url"];
            NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
            [request addValue:[NSString stringWithFormat:@"SharedKeyLite %@:%@",
                AZURE_ACCOUNT_NAME, hash] forHTTPHeaderField:@"Authorization"];
            [request addValue:dateString forHTTPHeaderField:@"x-ms-date"];
            [request addValue:@"application/atom+xml, application/xml"
                forHTTPHeaderField:@"Accept"];
            [request addValue:@"UTF-8" forHTTPHeaderField:@"Accept-Charset"];
            NSLog(@"Headers: %@", [request allHTTPHeaderFields]);
            NSLog(@"URL: %@", [[request URL] absoluteString]);
            return request;
        }

        -(NSString*)rfc1123String:(NSDate *)date {
            static NSDateFormatter *df = nil;
            if (df == nil) {
                df = [[NSDateFormatter alloc] init];
                df.locale = [[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"] autorelease];
                df.timeZone = [NSTimeZone timeZoneWithAbbreviation:@"GMT"];
                df.dateFormat = @"EEE',' dd MMM yyyy HH':'mm':'ss 'GMT'";
            }
            return [df stringFromDate:date];
        }
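    One detail worth checking against the current Azure documentation (stated here as an assumption, not a verified fix): for the Table service, the SharedKeyLite string-to-sign is, as I recall, the x-ms-date value followed by a newline and the canonicalized resource, along the lines of

        NSString *messageToSign = [NSString stringWithFormat:@"%@\n/%@/%@",
            dateString, AZURE_ACCOUNT_NAME, tableName];

    whereas the code above joins the three parts with '/' characters, so the signature may not match what the server computes.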

  • How much does an InnoDB table benefit from having fixed-length rows?

    - by Philip Eve
    I know that, depending on the database storage engine in use, a performance benefit can be found if all of the rows in the table can be guaranteed to be the same length (by avoiding nullable columns and not using any VARCHAR, TEXT or BLOB columns). I'm not clear on how far this applies to InnoDB, with its funny table arrangements. Let's give an example: I have the following table.

        CREATE TABLE `PlayerGameRcd` (
          `User` SMALLINT UNSIGNED NOT NULL,
          `Game` MEDIUMINT UNSIGNED NOT NULL,
          `GameResult` ENUM('Quit', 'Kicked by Vote', 'Kicked by Admin',
                            'Kicked by System', 'Finished 5th', 'Finished 4th',
                            'Finished 3rd', 'Finished 2nd', 'Finished 1st',
                            'Game Aborted', 'Playing', 'Hide')
                       NOT NULL DEFAULT 'Playing',
          `Inherited` TINYINT NOT NULL,
          `GameCounts` TINYINT NOT NULL,
          `Colour` TINYINT UNSIGNED NOT NULL,
          `Score` SMALLINT UNSIGNED NOT NULL DEFAULT 0,
          `NumLongTurns` TINYINT UNSIGNED NOT NULL DEFAULT 0,
          `Notes` MEDIUMTEXT,
          `CurrentOccupant` TINYINT UNSIGNED NOT NULL DEFAULT 0,
          PRIMARY KEY (`Game`, `User`),
          UNIQUE KEY `PGR_multi_uk` (`Game`, `CurrentOccupant`, `Colour`),
          INDEX `Stats_ind_PGR` (`GameCounts`, `GameResult`, `Score`, `User`),
          INDEX `GameList_ind_PGR` (`User`, `CurrentOccupant`, `Game`, `Colour`),
          CONSTRAINT `Constr_PlayerGameRcd_User_fk`
            FOREIGN KEY `User_fk` (`User`) REFERENCES `User` (`UserID`)
            ON DELETE CASCADE ON UPDATE CASCADE,
          CONSTRAINT `Constr_PlayerGameRcd_Game_fk`
            FOREIGN KEY `Game_fk` (`Game`) REFERENCES `Game` (`GameID`)
            ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci

    The only column that is nullable is Notes, which is MEDIUMTEXT. This table presently has 33097 rows (which I appreciate is small as yet). Of these rows, only 61 have values in Notes. How much of an improvement might I see from, say, adding a new table to store the Notes column in and performing LEFT JOINs when necessary?
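    A sketch of the split the question contemplates; the new table name is invented, and the key columns mirror the original primary key:

        CREATE TABLE `PlayerGameNote` (
          `User` SMALLINT UNSIGNED NOT NULL,
          `Game` MEDIUMINT UNSIGNED NOT NULL,
          `Notes` MEDIUMTEXT NOT NULL,
          PRIMARY KEY (`Game`, `User`)
        ) ENGINE=INNODB;

        -- Fetch rows with their (usually absent) notes:
        SELECT pgr.*, n.Notes
        FROM PlayerGameRcd pgr
        LEFT JOIN PlayerGameNote n USING (Game, User);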

  • Catch a PHP Object Instantiation Error

    - by Rob Wilkerson
    It's really irking me that PHP considers the failure to instantiate an object a Fatal Error (which can't be caught) for the application as a whole. I have a set of classes that aren't strictly necessary for my application to function--they're really a convenience. I have a factory object that attempts to instantiate the class variant that's indicated in a config file. This mechanism is being deployed for message storage and will support multiple store types:

        - DatabaseMessageStore
        - FileMessageStore
        - MemcachedMessageStore
        - etc.

    A MessageStoreFactory class will read the application's preference from a config file, then instantiate and return an instance of the appropriate class. It might be easy enough to slap a conditional around the instantiation to ensure that class_exists(), but MemcachedMessageStore extends PHP's Memcached class. As a result, the class_exists() test will succeed--though instantiation will fail--if the memcached bindings for PHP aren't installed.

    Is there any other way to test whether a class can be instantiated properly? If it can't, all I need to do is tell the user which features won't be available to them, but let them continue on with the application. Thanks.
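    One possible guard, sketched under the assumption that the class names above are accurate: test the underlying extension before instantiating, since class_exists() alone can't see that dependency.

        <?php
        // Sketch: refuse to instantiate a store whose runtime dependency is missing.
        function makeMessageStore($class) {
            if ($class === 'MemcachedMessageStore' && !extension_loaded('memcached')) {
                return null;  // bindings absent: instantiation would fatal out
            }
            if (!class_exists($class)) {
                return null;  // unknown class name in the config file
            }
            return new $class();
        }

        $store = makeMessageStore('MemcachedMessageStore');
        if ($store === null) {
            // tell the user the feature is unavailable, keep the app running
        }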

  • XUL: Printing value from SQL query without using value attribute

    - by Grip
    I want to print the results of an SQL query in XUL, with word-wrapping, using the description tag. I have the following code:

        <grid>
          <columns>
            <column flex="2" />
            <column flex="1" />
          </columns>
          <rows datasources="chrome://glossary/content/db/development.sqlite3"
                ref="?" querytype="storage">
            <template>
              <query>select distinct * from Terms</query>
              <action>
                <row uri="?">
                  <description value="?name" />
                  <description value="?desc" />
                </row>
              </action>
            </template>
          </rows>
        </grid>

    This works for printing the data, but since I'm using the value attribute of description, there's no word-wrapping. I don't entirely understand what I'm doing with the value attribute, but I don't seem to be able to get the values of my SQL query in any other way. So, how could I get those values without the value attribute? Thanks.

  • Error handling approach on PHP

    - by Industrial
    Hi everybody,

    We have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database, and all memcached keys are prefixed per application.

    Possible scenario: if a memcached server in our cluster goes boom, we want someone (an operating system admin) to be automatically contacted by email, iPhone push notification, or any other appropriate way. If we were to install 150 identical applications for our customers on our servers and a memcached server died, all 150 applications would individually find this out and contact our system admin, who is most certainly going to think about getting a new job where he or she isn't woken up by 150 messages sent at 4:15 in the morning.

    Possible solution: one idea is to set up an external server for error handling that gets a $_POST or cURL request sent to it, and handles storage of the error message depending on the seriousness of the actual error message. Upon receiving the error call, it would of course check whether the same memcached server has already been reported as offline, in which case there would be no need to spam the system admin with additional reminders...

    The questions: What's a good approach to handling errors? How do the big guys in the industry handle this?

    Thanks!
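    A minimal sketch of the deduplication idea on the error-handling server; the table, column, and helper names (outages, notifyAdmin) are invented for illustration:

        <?php
        // Report an outage at most once per resource within a cooldown window.
        function reportError(PDO $db, $resource, $message) {
            $stmt = $db->prepare('SELECT reported_at FROM outages WHERE resource = ?');
            $stmt->execute(array($resource));
            $row = $stmt->fetch();
            if ($row && strtotime($row['reported_at']) > time() - 3600) {
                return; // already reported in the last hour: stay quiet
            }
            $db->prepare('REPLACE INTO outages (resource, reported_at, message)
                          VALUES (?, NOW(), ?)')
               ->execute(array($resource, $message));
            notifyAdmin($resource, $message); // email / push, left abstract here
        }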

  • Omit Properties from WebControl Serialization

    - by Laramie
    I have to serialize several objects inheriting from WebControl for database storage. These include several (to me) unnecessary properties that I would prefer to omit from serialization, for example BackColor, BorderColor, etc. Here is an example of an XML serialization of one of my controls inheriting from WebControl:

        <Control xsi:type="SerializePanel">
          <ID>grCont</ID>
          <Controls />
          <BackColor />
          <BorderColor />
          <BorderWidth />
          <CssClass>grActVid bwText</CssClass>
          <ForeColor />
          <Height />
          <Width />
          ...
        </Control>

    I have been trying to create a common base class for my controls that inherits from WebControl and uses the "xxxSpecified" trick to selectively choose not to serialize certain properties. For example, to ignore an empty BorderColor property, I'd expect

        [XmlIgnore]
        public bool BorderColorSpecified()
        {
            return !base.BorderColor.IsEmpty;
        }

    to work, but it's never called during serialization. I've tried it both in the class to be serialized and in the base class. Since the classes themselves might change, I'd prefer not to have to create a custom serializer. Any ideas?
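    One observation, offered as a hypothesis to test rather than a confirmed answer: XmlSerializer recognizes the Specified convention on public fields and properties, not on methods, so the property form would be the thing to try:

        [XmlIgnore]
        public bool BorderColorSpecified
        {
            // Serialize BorderColor only when it actually holds a value.
            get { return !base.BorderColor.IsEmpty; }
        }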

  • "date_part('epoch', now() at time zone 'UTC')" not the same time as "now() at time zone 'UTC'" in po

    - by sirlark
    I'm writing a web-based front end to a database (PHP/PostgreSQL) in which I need to store various dates/times. The times are meant to always be entered on the client side in local time, and displayed in local time too. For storage purposes, I store all dates/times as integers (UNIX timestamps) normalised to UTC.

    One particular field has a restriction that the timestamp filled in is not allowed to be in the future, so I tried to enforce this with a database constraint:

        CONSTRAINT not_future CHECK (timestamp-300 <= date_part('epoch', now() at time zone 'UTC'))

    The -300 is to give 5 minutes' leeway in case of slightly desynchronised times between browser and server. The problem is, this constraint always fails when submitting the current time. I've done some testing and found the following, in the PostgreSQL client:

        SELECT now()
        -- returns correct local time

        SELECT date_part('epoch', now())
        -- returns a unix timestamp at UTC (tested by feeding the value into the
        -- date function in PHP, correcting for its compensation to my time zone)

        SELECT date_part('epoch', now() at time zone 'UTC')
        -- returns a unix timestamp two time zone offsets west: I am at GMT+2,
        -- and I get a GMT-2 timestamp

    I've figured out that dropping the "at time zone 'UTC'" will solve my problem, but my question is: if 'epoch' is meant to return a unix timestamp, which AFAIK is always in UTC, why would the 'epoch' of a time already in UTC be corrected again? Is this a bug, or am I missing something about the defined/normal behaviour here?

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building the app, and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it?

    Currently, I bought a shared hosting account on Lunarpages, and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:

        1. A load balancer that first receives requests and then decides where to route each request.
        2. The request is then handled by a server replica that processes it, updates (if required) the database, and sends the response back to the client.
        3. If a similar request comes in, a caching mechanism like memcached kicks into the picture and returns objects from the cache.
        4. A black box that handles database replication.

    Specifically, I am trying to do the following (see the load-balancer sketch after this list):

        1. Set up a load balancer (my homework revealed that HAProxy is one such load balancer).
        2. Set up replication so that databases can be synchronized.
        3. Use memcached.
        4. Configure Apache to work with multiple web servers.
        5. Partition the application to use Amazon EC2 and Amazon S3 (my application is something that will need a great deal of storage).

    Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers, a simple load balancer, and replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics but am unable to find something that starts off from the big picture. Can someone please help me get started?
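    Regarding the load-balancer step, a minimal HAProxy configuration sketch; the addresses and server names are placeholders, and a real deployment would need its own health-check, timeout, and logging choices:

        # haproxy.cfg -- minimal round-robin HTTP balancing (illustrative only)
        defaults
            mode http
            timeout connect 5s
            timeout client  30s
            timeout server  30s

        frontend web
            bind *:80
            default_backend apps

        backend apps
            balance roundrobin
            server app1 10.0.0.11:80 check
            server app2 10.0.0.12:80 check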

  • Need help/suggestions for creating fantasy sports scoring databases and queries

    - by MGumbel
    I'm trying to create a website for my friends and me to keep track of fantasy sports scoring. So far, I've been doing the calculations and storage in Excel, which is very tedious. I'm trying to make it more simplified and automated through a SQL database that I can then wrap a web app around to enter daily stat updates.

    It's premised on our participation in another commercial site where we trade virtual shares of athletes, and thus acquire an "ownership percentage" in each athlete. For instance, if there are 100 shares of AROD and I own 10 shares, then I own 10%. This is then applied to traditional baseball rotisserie scoring. So, for instance, if AROD has 1 HR today, then his adjusted HR stat would be 1.10. If he also has 2 RBIs, then his adjusted RBI stat today would be 2.20, based on (2 x 1.10): the 1 to normalize the stat, and the .10 to represent the ownership percentage.

    All the stats for my team would then be summed each day and added to my stat history to arrive at an aggregated total. After that, points are allocated based on the ranking of each participant in each category at the end of the day. E.g. if there are 10 participants and I have the highest total aggregate number of adjusted HRs, then I get 10 points. The points are then summed across the different stat categories to come up with a total point ranking for that day.

    An added difficulty is that ownership percentages can change on a daily basis. So far, in playing around with different schemas, I don't know that having a separate table for each athlete's stats and each player's ownership percentages is the wisest choice. It seems to me that simply having two tables would do: one that contains the daily stat information for each athlete, and another that shows the ownership percentage of each player. My friend suggested using a start and end date for each ownership percentage to represent the potential daily changes in this category.

    I'm admittedly new to database development, so any suggestions on query code would be appreciated.
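    A hedged schema sketch of the two-table idea, including the start/end-date suggestion; all table and column names are invented, and the adjusted-stat formula follows the (1 + ownership) convention described above:

        CREATE TABLE athlete_stats (
          athlete_id INT NOT NULL,
          stat_date  DATE NOT NULL,
          hr  INT NOT NULL DEFAULT 0,
          rbi INT NOT NULL DEFAULT 0,
          PRIMARY KEY (athlete_id, stat_date)
        );

        CREATE TABLE ownership (
          player_id  INT NOT NULL,
          athlete_id INT NOT NULL,
          pct        DECIMAL(5,4) NOT NULL,  -- e.g. 0.1000 for 10%
          start_date DATE NOT NULL,
          end_date   DATE NULL,              -- NULL = still in effect
          PRIMARY KEY (player_id, athlete_id, start_date)
        );

        -- Adjusted daily HR total per participant: stat * (1 + pct)
        SELECT o.player_id, s.stat_date, SUM(s.hr * (1 + o.pct)) AS adj_hr
        FROM athlete_stats s
        JOIN ownership o
          ON o.athlete_id = s.athlete_id
         AND s.stat_date >= o.start_date
         AND (o.end_date IS NULL OR s.stat_date <= o.end_date)
        GROUP BY o.player_id, s.stat_date;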

  • J2EE and alternatives

    - by Ilya K
    Hello, I am a J2SE developer, but I have a rich web background (PHP, Perl/CGI and so on), and now I am starting a new project. It will have a web interface, spaghetti business logic, a relational database as storage, and connections to other services. I am doing it from scratch.

    My colleagues told me to use Spring, Spring Security and Struts. I looked briefly at the J2EE spec and found that it covers almost all aspects of an enterprise application. I asked my colleagues why they need Spring and Struts, but it looks like they use these technologies simply because they are familiar with them, and not familiar with the classic J2EE stack.

    So, my question is: what is bad about J2EE? Why do I need Spring if there are JNDI lookups? It would take a day or two to create a fake InitialContext for unit tests, and that is all: I could then stand without external tools like Spring. Why do I need Spring Security if there is security built into the Servlet spec? I can map any request to any servlet using web.xml; no struts.xml is needed. I can use servlet filters instead of Struts interceptors. There is RMI, so I do not need spring-remote. And so on... Why should I bother myself with all that fancy stuff if there is J2EE? I really want to find a situation where J2EE is not enough. Do you have any? Thanks!
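    For concreteness, a minimal web.xml fragment of the kind the servlet-mapping and filter arguments rely on; the class names are placeholders:

        <servlet>
            <servlet-name>orders</servlet-name>
            <servlet-class>com.example.OrdersServlet</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>orders</servlet-name>
            <url-pattern>/orders/*</url-pattern>
        </servlet-mapping>

        <filter>
            <filter-name>audit</filter-name>
            <filter-class>com.example.AuditFilter</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>audit</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>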

  • Drupal 6 Validation for Form Callback Function

    - by Wade
    I have a simple form with a select menu on the node display page. Is there an easy way to validate the form in my callback function? By validation I don't mean anything advanced, just a check that the submitted values actually existed in the form array.

    For example, without AJAX, if my select menu has 3 items and I add a 4th item and try to submit the form, Drupal will give an error saying something similar to "an illegal choice was made, please contact the admin." With AJAX, this 4th item you created would get saved into the database. So do I have to write validation like

        if ($select_item > 0 && $select_item <= 3) {
            // insert into db
        }

    or is there an easier way to check that the item actually existed in the form array? I'm hoping there is, since without AJAX Drupal will not submit the form if it was manipulated. Thanks.

    EDIT: So I basically need this in my callback function?

        $form_state = array('storage' => NULL, 'submitted' => FALSE);
        $form_build_id = $_POST['form_build_id'];
        $form = form_get_cache($form_build_id, $form_state);
        $args = $form['#parameters'];
        $form_id = array_shift($args);
        $form_state['post'] = $form['#post'] = $_POST;
        $form['#programmed'] = $form['#redirect'] = FALSE;
        drupal_process_form($form_id, $form, $form_state);

    To get $_POST['form_build_id'], I sent it as a data param; is that right? Where I use form_get_cache, it looks like there is no data. Kind of lost now.

  • Cross-platform HTML application options

    - by Charles
    I'd like to develop a stand-alone desktop application targeting Windows (XP through 7) and Mac (Tiger through Snow Leopard), and if possible iPhone and Android. In order to make it all work with as much common code as possible (and because it's the only thing I'm good at), I'd like to handle the main logic with HTML and JS. Using Adobe AIR is a possibility. I also think I can do this with various application wrappers, using .NET for Windows XP, Objective-C for iPhone, Java for Android, and native "widget" platform support for Mac and Windows Vista & 7 (though I'd like to keep the widget in the foreground, so the Mac dashboard isn't ideal).

    Does anyone have any suggestions on where to start? The two sticking points are:

        1. I'll certainly need some form of persistent storage (cookies, perhaps) to keep state between sessions.
        2. I'll also probably need access to remote data files, so if I use AJAX and the hosting HTML file resides on the device, it will need to be able to do cross-domain requests. I've done this on the iPhone without any problems, but I'd be surprised if this were possible on other platforms.

    For me, Android and iPhone will be the easiest to handle, and it looks like I can use Adobe AIR for the rest. But I wanted to know if there are any other alternatives. Does anyone have any suggestions?

  • checking mongo database for data

    - by user1647484
    I'm playing around with this tutorial that uses Sinatra, Backbone.js, and MongoDB for the database. It's my first time using Mongo. As far as I understand it, the app uses both local storage and a database. For example, it has these routes for the database:

        get '/api/:thing' do
          DB.collection(params[:thing]).find.to_a.map{|t| from_bson_id(t)}.to_json
        end

        get '/api/:thing/:id' do
          from_bson_id(DB.collection(params[:thing]).find_one(to_bson_id(params[:id]))).to_json
        end

        post '/api/:thing' do
          oid = DB.collection(params[:thing]).insert(JSON.parse(request.body.read.to_s))
          "{\"_id\": \"#{oid.to_s}\"}"
        end

    After turning the server off and then on, I could see in the server log that data was being fetched through the database routes:

        127.0.0.1 - - [17/Sep/2012 08:21:58] "GET /api/todos HTTP/1.1" 200 430 0.0033

    My question is: how can I check from within the mongo shell whether the data's in the database? I started the mongo shell (./bin/mongo), selected the database (use mydb), and then, looking at the docs (http://www.mongodb.org/display/DOCS/Tutorial), I tried commands such as

        > var cursor = db.things.find();
        > while (cursor.hasNext()) printjson(cursor.next());

    but they didn't return anything.
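    A shell-session sketch that would confirm where the documents actually live; the collection name todos is an assumption taken from the /api/todos route above:

        > show dbs              // only databases that contain data are listed
        > use mydb
        switched to db mydb
        > show collections      // 'todos' should appear here if the app wrote to mydb
        > db.todos.find()       // print the documents in that collection

    If show collections comes back empty, the app is most likely writing to a database with a different name than the one selected.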

  • trouble calculating offset index into 3D array

    - by Derek
    Hello, I am writing a CUDA kernel to create a 3x3 covariance matrix for each location in a rows*cols main matrix. So that 3D matrix is rows*cols*9 in size, which I allocated in a single malloc accordingly, and I need to access it through a single index value.

    The 9 values of each 3x3 covariance matrix get set according to the appropriate row r and column c from some other 2D arrays. In other words, I need to calculate the appropriate index to access the 9 elements of the 3x3 covariance matrix, as well as the row and column offset of the 2D matrices that are inputs to the value, as well as the appropriate index for the storage array. I have tried to simplify it down to the following:

        // I am calling this kernel with 1D blocks that are 512 cols x 1 row. TILE_WIDTH = 512
        int bx = blockIdx.x;
        int by = blockIdx.y;
        int tx = threadIdx.x;
        int ty = threadIdx.y;
        int r = by + ty;
        int c = bx*TILE_WIDTH + tx;
        int offset = r*cols + c;
        int ndx = r*cols*rows + c*cols;
        // this IF statement is trying to avoid the case where a thread block
        // ran bigger than my original array.. not sure if correct
        if ((r < rows) && (c < cols)) {
            d_cov[ndx + 0] = otherArray[offset];
            d_cov[ndx + 1] = otherArray[offset];
            d_cov[ndx + 2] = otherArray[offset];
            d_cov[ndx + 3] = otherArray[offset];
            d_cov[ndx + 4] = otherArray[offset];
            d_cov[ndx + 5] = otherArray[offset];
            d_cov[ndx + 6] = otherArray[offset];
            d_cov[ndx + 7] = otherArray[offset];
            d_cov[ndx + 8] = otherArray[offset];
        }

    When I check this array against the values calculated on the CPU, which loops over i = rows, j = cols, k = 1..9, the results do not match up. In other words,

        d_cov[i*rows*cols + j*cols + k] != correctAnswer[i][j][k]

    Can anyone give me any tips on how to solve this problem? Is it an indexing problem, or some other logic error?
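    One candidate discrepancy, offered as an observation rather than a verified fix: for a rows x cols x 9 layout, the linear index of element (r, c, k) is (r*cols + c)*9 + k, which matches neither the kernel's r*cols*rows + c*cols nor the CPU check's i*rows*cols + j*cols + k. A consistent version of the kernel side would look like:

        // Hypothetical consistent indexing for a rows x cols x 9 array
        int offset = r*cols + c;   // position in the 2D grid
        int ndx = offset * 9;      // base of this location's 3x3 block
        if ((r < rows) && (c < cols)) {
            for (int k = 0; k < 9; ++k)
                d_cov[ndx + k] = otherArray[offset];
        }

    with the CPU side then checking d_cov[(i*cols + j)*9 + k] against correctAnswer[i][j][k].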

  • nservicebus deleting subscription record after inserting it?

    - by Justin Holbrook
    I have been playing with NServiceBus for a few weeks now, and since everything was going well on my local machine, I decided to set up a test environment and work on deployment. I am using the generic host that comes with NServiceBus and was using the NServiceBus.Integration profile when running locally, but would like to use NServiceBus.Production in the test environment.

    I set up a SQL Server 2008 database, made changes to my app.config, and everything seemed to work fine. But after a few attempts, I noticed messages were not being picked up by my subscriber. I checked the subscription table and it was empty. Upon examination of the logs, I noticed the following:

        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null) - Insert 0: INSERT INTO [Subscription] (SubscriberEndpoint, MessageType) VALUES (?, ?)
        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null) - Update 0:
        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null) - Delete 0: DELETE FROM [Subscription] WHERE SubscriberEndpoint = ? AND MessageType = ?

    Why would it insert and then delete my subscription right afterwards? To try to rule out an NHibernate dialect issue, I switched my subscription storage to an Oracle 10g database. It behaved exactly the same: it worked the first 2 times, then I started seeing my subscriptions being deleted right after they were inserted. Any ideas what is causing this behavior?

  • Use of extern in a C++ DLL

    - by dom_beau
    I declare and then instantiate a static variable in a DLL:

        // DLL.h
        class A {
            //...
        };

        static A* a;

        // DLL.cpp
        A* a = new A;

    So far, so good... but I was advised to use extern rather than static:

        extern A* a; // in DLL.h

    No problem with that, except that the extern variable must be defined somewhere, and I got "invalid storage class member". In other words, what I was used to doing is declaring a variable in a source file like this:

        // In src.cpp
        A a;

    then extern-declaring it in another source file in the same project:

        // In src2.cpp
        extern A a;

    so that it is the same object a at link time. Maybe that is not the right thing to do? So where should the variable that is now extern be defined?

    Note that I used the static declaration in order to have the variable instantiated as soon as the DLL is loaded. The current use of static works most of the time, but I think I observe a delay or something like it in the variable's instantiation, while it should always be instantiated at load time. I have been investigating this problem for a week now and I can't find a solution.
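    A sketch of the arrangement that yields one shared object across the DLL, using the names from the question: keep only the extern declaration in the header and put the single definition in one .cpp file.

        // DLL.h -- declaration only; every translation unit that includes
        // this header refers to the same object.
        class A { /* ... */ };
        extern A* a;

        // DLL.cpp -- exactly one definition; its dynamic initializer runs
        // when the DLL is loaded.
        A* a = new A;

    With static in the header, by contrast, every .cpp that includes DLL.h gets its own private copy of a, which can look like a delayed or missing instantiation when one file initializes its copy and another file reads its own, separate one.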

  • ASP.NET or PHP: Is Memcached useful for storing user-state information?

    - by hamlin11
    This question may expose my ignorance as a web developer, but that wouldn't exactly be a bad thing for me now, would it?

    I need to store user-state information. Examples of information that I need to store per user (define user: unauthenticated visitor):

        - User arrived at the site from Google/Bing/Yahoo
        - User utilized the search feature (true/false)
        - List of product pages visited during the current visit

    It is my understanding that I could store this in the view state, but that causes a problem with page load from the end user's perspective, because a significant amount of non-viewable information is transferred to and from the end users even though the server is the only side that needs the info. On a similar note, it is my understanding that the session state can be used to store such information, but does that not also result in the same information being transferred to the user and stored in their cookie? (Not quite as bad as view state, but it does not feel ideal.)

    This leaves me with either a server-only session storage system or a memcached solution. Is memcached the only good option here?

  • Efficient mapping of game entity positions in Java

    - by byte
    In Java (Swing), say I've got a 2D game where I have various types of entities on the screen, such as a player, bad guys, powerups, etc. When the player moves across the screen, in order to do efficient checking of what is in the immediate vicinity of the player, I would think I'd want indexed access to the things that are near the character, based on their position. For example, if player 'P' steps onto element 'E' in the following grid...

        | | | | | |
        | | | | |P|
        | | | | |E|
        | | | | | |
        | | | | | |

    ...one way would be to do something like:

        if (player.getPosition().x == entity.getPosition().x
                && player.getPosition().y == entity.getPosition().y) {
            // do something
        }

    And that's fine, but it implies that the entities hold their own positions, and therefore, if I had MANY entities on the screen, I would have to loop through all available entities and check each one's position against the player's position. This seems really inefficient, especially once you get tons of entities. So I suspect I'd want some sort of map, like

        Map<Point, Entity> map = new HashMap<Point, Entity>();

    and store my point information there, so that I could access these entities in constant time.

    The only problem with that approach is that if I want to move an entity to a different point on the screen, I'd have to search through the values of the HashMap for the entity I want to move (inefficient, since I don't know its Point position ahead of time), and once I've found it, remove it from the HashMap and re-insert it with the new position information.

    Any suggestions or advice on what sort of data structure / storage format I ought to be using here in order to have efficient access to entities based on their position, as well as positions based on the entity?
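    One common arrangement, offered as a sketch rather than the only answer (Entity is a placeholder for the game's own type): keep two maps in sync, so both lookup directions are O(1) and moving an entity never requires a scan. Note that Point keys must not be mutated after insertion, since HashMap relies on a stable hashCode.

        import java.awt.Point;
        import java.util.HashMap;
        import java.util.Map;

        class World {
            private final Map<Point, Entity> byPosition = new HashMap<Point, Entity>();
            private final Map<Entity, Point> byEntity   = new HashMap<Entity, Point>();

            void place(Entity e, Point p) {
                Point old = byEntity.put(e, p);          // remember where it was
                if (old != null) byPosition.remove(old); // vacate the old cell
                byPosition.put(p, e);
            }

            Entity entityAt(Point p)  { return byPosition.get(p); }
            Point positionOf(Entity e) { return byEntity.get(e); }
        }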

  • Suitable data structures for saving files in localStorage (HTML5) ?

    - by WmasterJ
    It is nice when there isn't a DB to maintain and users to authenticate. My professor has asked me to convert a recent research project of his that uses Bespin and calculates errors made by users in a code editor as part of his research. The goal is to convert it from MySQL to using HTML5 localStorage completely. It doesn't seem so hard to do, even though digging into his code might take some time.

    Question: I need to store files and state (last placement of the cursor and the active file). I have already done so by implementing the recommendations in another stackoverflow thread, but I would like your input on how to structure the content.

    My current solution is a hashmap-like structure using JavaScript objects:

        files = {};

        // later, saving
        files[fileName] = data;

    and then storing in localStorage using some recommendations:

        localStorage.setObject("files", files);
        // Note that setObject(key, data) does not exist but is added
        // using Storage.prototype.setObject = function() {...

    Currently I'm also considering using some type of numeric id as the key, so that file names can be changed without the hassle of renaming the key in the hashmap.

    What is your opinion on the way it is solved, and would you do it any differently?
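    A sketch of the numeric-id variant being considered; setObject/getObject are the JSON-wrapping prototype additions mentioned above, spelled out here as an assumption of what that thread recommends:

        // Assumed helpers: JSON-wrap localStorage access.
        Storage.prototype.setObject = function (key, value) {
            this.setItem(key, JSON.stringify(value));
        };
        Storage.prototype.getObject = function (key) {
            var value = this.getItem(key);
            return value && JSON.parse(value);
        };

        // Files keyed by a stable numeric id; the name lives inside the
        // record, so a rename never has to move the key.
        var files = localStorage.getObject("files") || {};
        var nextId = localStorage.getObject("nextId") || 1;

        function saveFile(name, data) {
            var id = nextId++;
            files[id] = { name: name, data: data };
            localStorage.setObject("files", files);
            localStorage.setObject("nextId", nextId);
            return id;
        }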

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on it. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all.

    The master server has 16G of RAM, 10 terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4G of RAM but similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application.

    The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and it is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting up config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down!

    FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because I can change some settings (like the server-id) and it will kill the server at startup.

    Here is the my.ini file:

        #MySQL Server Instance Configuration File
        # ----------------------------------------------------------------------
        # Generated by the MySQL Server Instance Configuration Wizard
        #
        # Installation Instructions
        # ----------------------------------------------------------------------
        #
        # On Linux you can copy this file to /etc/my.cnf to set global options,
        # mysql-data-dir/my.cnf to set server-specific options
        # (@localstatedir@ for this installation) or to
        # ~/.my.cnf to set user-specific options.
        #
        # On Windows you should keep this file in the installation directory
        # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
        # make sure the server reads the config file use the startup option
        # "--defaults-file".
        #
        # To run the server from the command line, execute this in a
        # command line shell, e.g.
        # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # To install the server as a Windows service manually, execute this in a
        # command line shell, e.g.
        # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
        #
        # And then execute this in a command line shell to start the server, e.g.
        # net start MySQLXY
        #
        # Guidelines for editing this file
        # ----------------------------------------------------------------------
        #
        # In this file, you can use all long options that the program supports.
        # If you want to know the options a program supports, start the program
        # with the "--help" option.
        #
        # More detailed information about the individual options can also be
        # found in the manual.
        #
        # CLIENT SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by MySQL client applications.
        # Note that only client applications shipped by MySQL are guaranteed
        # to read this section. If you want your own MySQL client program to
        # honor these values, you need to specify it as an option during the
        # MySQL client library initialization.
        #
        [client]
        port=3306

        [mysql]
        default-character-set=latin1

        # SERVER SECTION
        # ----------------------------------------------------------------------
        #
        # The following options will be read by the MySQL Server. Make sure that
        # you have installed the server correctly (see above) so it reads this
        # file.
        #
        [mysqld]

        # The TCP/IP Port the MySQL Server will listen on
        port=3306

        # Path to installation directory. All paths are usually resolved relative to this.
        basedir="D:/MySQL/"

        # Path to the database root
        datadir="D:/MySQL/data"

        # The default character set that will be used when a new schema or table is
        # created and no character set is defined
        default-character-set=latin1

        # The default storage engine that will be used when creating new tables
        default-storage-engine=MYISAM

        # Set the SQL mode to strict
        #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        # we changed this because there are a couple of queries that can get blocked otherwise
        sql-mode=""

        # performance configs
        skip-locking
        max_allowed_packet = 1M
        table_open_cache = 512

        # The maximum amount of concurrent sessions the MySQL server will
        # allow. One of these connections will be reserved for a user with
        # SUPER privileges to allow the administrator to login even if the
        # connection limit has been reached.
        max_connections=1510

        # Query cache is used to cache SELECT results and later return them
        # without actually executing the same query once again. Having the query
        # cache enabled may result in significant speed improvements if you
        # have a lot of identical queries and rarely changing tables. See the
        # "Qcache_lowmem_prunes" status variable to check if the current value
        # is high enough for your load.
        # Note: In case your tables change very often or if your queries are
        # textually different every time, the query cache may result in a
        # slowdown instead of a performance improvement.
        query_cache_size=168M

        # The number of open tables for all threads. Increasing this value
        # increases the number of file descriptors that mysqld requires.
        # Therefore you have to make sure to set the amount of open files
        # allowed to at least 4096 in the variable "open-files-limit" in
        # section [mysqld_safe]
        table_cache=3020

        # Maximum size for internal (in-memory) temporary tables. If a table
        # grows larger than this value, it is automatically converted to a disk
        # based table. This limitation is for a single table. There can be many
        # of them.
        tmp_table_size=30M

        # How many threads we should keep in a cache for reuse. When a client
        # disconnects, the client's threads are put in the cache if there aren't
        # more than thread_cache_size threads from before. This greatly reduces
        # the amount of thread creations needed if you have a lot of new
        # connections. (Normally this doesn't give a notable performance
        # improvement if you have a good thread implementation.)
        thread_cache_size=64

        #*** MyISAM Specific options

        # The maximum size of the temporary file MySQL is allowed to use while
        # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
        # If the file-size would be bigger than this, the index will be created
        # through the key cache (which is slower).
        myisam_max_sort_file_size=100G

        # If the temporary file used for fast index creation would be bigger
        # than using the key cache by the amount specified here, then prefer the
        # key cache method. This is mainly used to force long character keys in
        # large tables to use the slower key cache method to create the index.
        myisam_sort_buffer_size=64M

        # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
        # Do not set it larger than 30% of your available memory, as some memory
        # is also required by the OS to cache rows. Even if you're not using
        # MyISAM tables, you should still set it to 8-64M as it will also be
        # used for internal temporary disk tables.
        key_buffer_size=3072M

        # Size of the buffer used for doing full table scans of MyISAM tables.
        # Allocated per thread, if a full scan is needed.
        read_buffer_size=2M
        read_rnd_buffer_size=8M

        # This buffer is allocated when MySQL needs to rebuild the index in
        # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
        # into an empty table. It is allocated per thread so be careful with
        # large settings.
        sort_buffer_size=2M

        #*** INNODB Specific options ***
        innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

        # Use this option if you have a MySQL server with InnoDB support enabled
        # but you do not plan to use it. This will save memory and disk space
        # and speed up some things.
        skip-innodb

        # Additional memory pool that is used by InnoDB to store metadata
        # information. If InnoDB requires more memory for this purpose it will
        # start to allocate it from the OS. As this is fast enough on most
        # recent operating systems, you normally do not need to change this
        # value. SHOW INNODB STATUS will display the current amount used.
        innodb_additional_mem_pool_size=11M

        # If set to 1, InnoDB will flush (fsync) the transaction logs to the
        # disk at each commit, which offers full ACID behavior. If you are
        # willing to compromise this safety, and you are running small
        # transactions, you may set this to 0 or 2 to reduce disk I/O to the
        # logs. Value 0 means that the log is only written to the log file and
        # the log file flushed to disk approximately once per second. Value 2
        # means the log is written to the log file at each commit, but the log
        # file is only flushed to disk approximately once per second.
        innodb_flush_log_at_trx_commit=1

        # The size of the buffer InnoDB uses for buffering log data. As soon as
        # it is full, InnoDB will have to flush it to disk. As it is flushed
        # once per second anyway, it does not make sense to have it very large
        # (even with long transactions).
        innodb_log_buffer_size=6M

        # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
        # row data. The bigger you set this the less disk I/O is needed to
        # access data in tables. On a dedicated database server you may set this
        # parameter up to 80% of the machine physical memory size. Do not set it
        # too large, though, because competition for the physical memory may
        # cause paging in the operating system. Note that on 32bit systems you
        # might be limited to 2-3.5G of user level memory per process, so do not
        # set it too high.
        innodb_buffer_pool_size=500M

        # Size of each log file in a log group. You should set the combined size
        # of log files to about 25%-100% of your buffer pool size to avoid
        # unneeded buffer pool flush activity on log file overwrite. However,
        # note that a larger logfile size will increase the time needed for the
        # recovery process.
        innodb_log_file_size=100M

        # Number of threads allowed inside the InnoDB kernel. The optimal value
        # depends highly on the application, hardware as well as the OS
        # scheduler properties. A too high value may lead to thread thrashing.
        innodb_thread_concurrency=10

        # replication settings (this is the master)
        log-bin=log
        server-id = 1

    Thanks for all the help. It is greatly appreciated.

  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from a MySQL database). However, this randomization needs to persist on a per-session basis, so that other people who visit the website also receive a random, paginate-able list of records.

    Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it in some other way (MySQL, file, etc.).

    Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page); however, since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no or minimal storage, but the downsides are that it is likely pretty complex to implement and probably CPU-intensive.

    My feeling is that I should create an intersection table, something like

        random_sorts(sort_id, created_at, user_id NULL if guest)
        random_sort_items(sort_id, item_id, position)

    and then simply store the sort_id in the session. Then I can paginate with random_sorts WHERE sort_id = n ORDER BY position LIMIT ... as usual. Of course, I'd have to put some sort of reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects were removed, although deletion is very rare). And as load increases, the size/performance of this table (even properly indexed) drops.

    It seems like this ought to be a solved problem, but I can't find any Rails plugins that do this... Any ideas? Thanks!!
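    A concrete sketch of the intersection tables described above; the column types are guesses, and the items table name and the sort_id value 42 are placeholders:

        CREATE TABLE random_sorts (
          sort_id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
          created_at DATETIME NOT NULL,
          user_id    INT UNSIGNED NULL  -- NULL if guest
        );

        CREATE TABLE random_sort_items (
          sort_id  INT UNSIGNED NOT NULL,
          item_id  INT UNSIGNED NOT NULL,
          position INT UNSIGNED NOT NULL,
          PRIMARY KEY (sort_id, position)
        );

        -- One page of a stored shuffle (page 3 at 50 rows per page):
        SELECT i.*
        FROM random_sort_items rsi
        JOIN items i ON i.id = rsi.item_id
        WHERE rsi.sort_id = 42
        ORDER BY rsi.position
        LIMIT 50 OFFSET 100;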

  • Interesting Scala typing solution, doesn't work in 2.7.7?

    - by djc
    I'm trying to build some image algebra code that can work with images (basically a linear pixel buffer + dimensions) that have different types for the pixel. To get this to work, I've defined a parametrized Pixel trait with a few methods that should be usable with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:

        trait Pixel[T <: Pixel[T]] {
          def mul(v: Double): T
          def max(v: T): T
          def div(v: Double): T
          def div(v: T): T
        }

    Now I define a single Pixel type whose storage is based on three doubles (basically RGB 0.0-1.0); I've called it TripleDoublePixel:

        class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
          var data: Array[Double] = v

          def this() = this(Array(0.0, 0.0, 0.0))

          def toString(): String = {
            "(" + data(0) + ", " + data(1) + ", " + data(2) + ")"
          }

          def increment(v: TripleDoublePixel) {
            data(0) += v.data(0)
            data(1) += v.data(1)
            data(2) += v.data(2)
          }

          def mul(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x * v))
          }

          def div(v: Double): TripleDoublePixel = {
            new TripleDoublePixel(data.map(x => x / v))
          }

          def div(v: TripleDoublePixel): TripleDoublePixel = {
            var tmp = new Array[Double](3)
            tmp(0) = data(0) / v.data(0)
            tmp(1) = data(1) / v.data(1)
            tmp(2) = data(2) / v.data(2)
            new TripleDoublePixel(tmp)
          }

          def max(v: TripleDoublePixel): TripleDoublePixel = {
            val lv = data(0) * data(0) + data(1) * data(1) + data(2) * data(2)
            val vv = v.data(0) * v.data(0) + v.data(1) * v.data(1) + v.data(2) * v.data(2)
            if (lv > vv) (this) else v
          }
        }

    Now I want to write code that uses this without having to know what type the pixels are. For example:

        def idiv[T](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size) {
            a.data(i) = a.data(i).div(b.data(i))
          }
        }

    Unfortunately, this doesn't compile:

        (fragment of lindet-gen.scala):145: error: value div is not a member of T
            a.data(i) = a.data(i).div(b.data(i))

    I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but it doesn't compile for me. Is there any way to get this to work in 2.7.7?
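    One detail worth noting, offered as a hypothesis rather than a verified 2.7.7 answer: idiv's type parameter carries no bound, so inside the method T is unconstrained and has no div member regardless of Scala version. Restating the bound makes the call type-check (Image is the question's own type, assumed here):

        def idiv[T <: Pixel[T]](a: Image[T], b: Image[T]) {
          for (i <- 0 until a.data.size) {
            a.data(i) = a.data(i).div(b.data(i)) // div is now visible through the bound
          }
        }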
