Search Results

Search found 8397 results on 336 pages for 'implementation'.


  • How to solve the "Digg" problem in MongoDB

    - by user193116
    A while back, a Digg developer posted a blog entry, "http://about.digg.com/blog/looking-future-cassandra", where he described one of the issues that was not optimally solved in MySQL. This was cited as one of the reasons for their move to Cassandra. I have been playing with MongoDB and I would like to understand how to implement the MongoDB collections for this problem. From the article, the schema for this information in MySQL:

        CREATE TABLE Diggs (
            id      INT(11),
            itemid  INT(11),
            userid  INT(11),
            digdate DATETIME,
            PRIMARY KEY (id),
            KEY user (userid),
            KEY item (itemid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

        CREATE TABLE Friends (
            id           INT(10) AUTO_INCREMENT,
            userid       INT(10),
            username     VARCHAR(15),
            friendid     INT(10),
            friendname   VARCHAR(15),
            mutual       TINYINT(1),
            date_created DATETIME,
            PRIMARY KEY (id),
            UNIQUE KEY Friend_unique (userid,friendid),
            KEY Friend_friend (friendid)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    This problem is ubiquitous in social networking implementations: people befriend a lot of people, who in turn digg a lot of things, and quickly showing a user what his/her friends are up to is very critical. I understand that several blogs have since provided a pure RDBMS solution with indexes for this issue; however, I am curious as to how this could be solved in MongoDB.
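
    A minimal sketch of one MongoDB approach (collection and field names here are hypothetical, not from the Digg article): keep one document per digg, embed each user's friend list in the user document, and fan out on read with $in.

        // diggs: one document per digg, indexed for "diggs by these users, newest first"
        db.diggs.insert({ userid: 12, itemid: 48, digdate: new Date() });
        db.diggs.ensureIndex({ userid: 1, digdate: -1 });

        // users: the friend list lives inside the user document
        db.users.insert({ _id: 12, friends: [3, 7, 19] });

        // "what have my friends dugg lately?" -- a single query, no join
        var me = db.users.findOne({ _id: 12 });
        db.diggs.find({ userid: { $in: me.friends } })
                .sort({ digdate: -1 })
                .limit(20);

    The alternative is to fan out on write (append to a per-user activity list on every digg), which trades write amplification for cheaper reads; which one wins depends on the follower distribution.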

    Read the article

  • How to easily map c++ enums to strings

    - by Roddy
    I have a bunch of enum types in some library header files that I'm using, and I want a way of converting enum values to user strings - and vice versa. RTTI won't do it for me, because the 'user strings' need to be a bit more readable than the enumerations. A brute force solution would be a bunch of functions like this, but I feel that's a bit too C-like:

        enum MyEnum {VAL1, VAL2, VAL3};

        String getStringFromEnum(MyEnum e)
        {
            switch (e)
            {
                case VAL1: return "Value 1";
                case VAL2: return "Value 2";
                case VAL3: return "Value 3";
                default: throw Exception("Bad MyEnum");
            }
        }

    I have a gut feeling that there's an elegant solution using templates, but I can't quite get my head round it yet. UPDATE: Thanks for the suggestions - I should have made clear that the enums are defined in a third-party library header, so I don't want to have to change their definition. My gut feeling now is to avoid templates and do something like this:

        char * MyGetValue(int v, char *tmp); // implementation is trivial

        #define ENUM_MAP(type, strings) char * getStringValue(const type &T) \
        { \
            return MyGetValue((int)T, strings); \
        }

        // enum eee {AA,BB,CC}; - exists in library header file
        // enum fff {DD,GG,HH};
        ENUM_MAP(eee, "AA|BB|CC")
        ENUM_MAP(fff, "DD|GG|HH")

        // To use...
        eee e; fff f;
        std::cout << getStringValue(e);
        std::cout << getStringValue(f);
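
    In the same spirit, a minimal sketch of a lookup-table alternative that also leaves the library headers untouched (the map contents are illustrative; the brace initializer is C++11 - pre-C++11, fill a function-local static map on first use instead):

        #include <map>
        #include <stdexcept>
        #include <string>

        // One table per enum type; built once, usable in both directions.
        static const std::map<MyEnum, std::string> myEnumNames{
            {VAL1, "Value 1"}, {VAL2, "Value 2"}, {VAL3, "Value 3"}};

        std::string getStringFromEnum(MyEnum e)
        {
            const auto it = myEnumNames.find(e);
            if (it == myEnumNames.end()) throw std::runtime_error("Bad MyEnum");
            return it->second;
        }

    Reversing the map (string to enum) is a linear scan or a second map built from the first; either way the mapping data lives in exactly one place.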

    Read the article

  • Zend_Auth and database SaveHandler

    - by takeshin
    I have created a Zend_Auth adapter implementing Zend_Auth_Adapter_Interface (similar to Pádraic's adapter) and created a simple ACL plugin. Everything works fine with the default session handler. So far, so good. As a next step I created a custom session SaveHandler to persist session data in the database. My implementation is very similar to the one from the parables-demo. Everything seems to be working: session data is properly saved to the database and session objects are serialized, but authentication stops working when I enable this custom SaveHandler. I have debugged the authentication and all works fine up until the next request, when the authentication data is lost. I suspected that it has something to do with the fact that I use $adapter->write($object) instead of $adapter->write($string), but the same happens with strings. I'm bootstrapping Zend_Application_Resource_Session in the first Bootstrap method, as early as possible. Does Zend_Auth need any extra configuration to persist data in the database? Why is the identity being lost?
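
    For comparison, a hedged sketch of wiring ZF1's shipped DB save handler in the bootstrap, before anything starts the session (table/column names are illustrative):

        <?php
        class Bootstrap extends Zend_Application_Bootstrap_Bootstrap
        {
            protected function _initSession()
            {
                // must run before Zend_Auth touches its session namespace
                Zend_Session::setSaveHandler(new Zend_Session_SaveHandler_DbTable(array(
                    'name'           => 'session',   // table name
                    'primary'        => 'id',
                    'modifiedColumn' => 'modified',
                    'dataColumn'     => 'data',
                    'lifetimeColumn' => 'lifetime',
                )));
                Zend_Session::start();
            }
        }

    If the identity survives only until the next request, one likely suspect is the save handler being registered after the session has already started for that request: the write then goes through the default files handler while the next read comes from the database.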

    Read the article

  • NSTimer Reset Not Working

    - by user355900
    Hi, I have an NSTimer and it works perfectly counting down from 2:00, but when I hit the reset button it does not work: it just stops the timer, and when I press start again it carries on with the timer as if it had never been stopped. Here is my code:

        @implementation TimerAppDelegate
        @synthesize window;

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            timerLabel.text = @"2:00";
            seconds = 120;
            // Override point for customization after application launch
            [window makeKeyAndVisible];
        }

        - (void)viewDidLoad {
            [timer invalidate];
        }

        - (void)countDownOneSecond {
            seconds--;
            int currentTime = [timerLabel.text intValue];
            int newTime = currentTime - 1;
            int displaySeconds = !(seconds % 60) ? 0 : seconds < 60 ? seconds : seconds - 60;
            int displayMinutes = floor(seconds / 60);
            NSString *time = [NSString stringWithFormat:@"%d:%@%d",
                displayMinutes,
                [[NSString stringWithFormat:@"%d", displaySeconds] length] == 1 ? @"0" : @"",
                displaySeconds];
            timerLabel.text = time;
            if (seconds == 0) {
                [timer invalidate];
            }
        }

        - (void)startOrStopTimer {
            if (timerIsRunning) {
                [timer invalidate];
                [startOrStopButton setTitle:@"Start" forState:UIControlStateNormal];
            } else {
                timer = [[NSTimer scheduledTimerWithTimeInterval:1.0
                    target:self
                    selector:@selector(countDownOneSecond)
                    userInfo:nil
                    repeats:YES] retain];
                [startOrStopButton setTitle:@"Stop" forState:UIControlStateNormal];
            }
            timerIsRunning = !timerIsRunning;
        }

        - (void)resetTimer {
            [timer invalidate];
            [startOrStopButton setTitle:@"Start" forState:UIControlStateNormal];
            [timer invalidate];
            timerLabel.text = @"2:00";
        }

        - (void)dealloc {
            [window release];
            [super dealloc];
        }
        @end

    Thanks.
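
    A guess at the fix, as a sketch: the posted resetTimer never resets the state the countdown actually reads (seconds and timerIsRunning), so the next start simply resumes. Something like:

        - (void)resetTimer {
            [timer invalidate];
            timer = nil;
            timerIsRunning = NO;           // so the next tap starts a fresh timer
            seconds = 120;                 // put the model back to 2:00...
            timerLabel.text = @"2:00";     // ...and the label with it
            [startOrStopButton setTitle:@"Start" forState:UIControlStateNormal];
        }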

    Read the article

  • Why does this program take up so much memory?

    - by Adrian
    I am learning Objective-C. I am trying to release all of the memory that I use. So, I wrote a program to test if I am doing it right:

        #import <Foundation/Foundation.h>

        #define DEFAULT_NAME @"Unknown"

        @interface Person : NSObject {
            NSString *name;
        }
        @property (copy) NSString *name;
        @end

        @implementation Person
        @synthesize name;

        - (void)dealloc {
            [name release];
            [super dealloc];
        }

        - (id)init {
            if (self = [super init]) {
                name = DEFAULT_NAME;
            }
            return self;
        }
        @end

        int main(int argc, const char *argv[]) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            Person *person = [[Person alloc] init];
            NSString *str;
            int i;
            for (i = 0; i < 1e9; i++) {
                str = [NSString stringWithCString:"Name" encoding:NSUTF8StringEncoding];
                person.name = str;
                [str release];
            }
            [person release];
            [pool drain];
            return 0;
        }

    I am using a Mac with Snow Leopard. To test how much memory this is using, I open Activity Monitor at the same time that it is running. After a couple of seconds, it is using gigabytes of memory. What can I do to make it not use so much?
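
    The likely culprit is not Person but the loop itself: stringWithCString:encoding: returns an autoreleased object, so a billion of them accumulate until the single outer pool drains at the end (and the extra [str release] over-releases an object the loop never owned). A sketch of the loop with a local pool per iteration:

        for (i = 0; i < 1e9; i++) {
            NSAutoreleasePool *inner = [[NSAutoreleasePool alloc] init];
            str = [NSString stringWithCString:"Name" encoding:NSUTF8StringEncoding];
            person.name = str;   // the copy property takes ownership of its own copy
            [inner drain];       // frees this iteration's autoreleased string
        }

    Draining every N iterations instead of every iteration amortizes the pool overhead.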

    Read the article

  • Connections hanging on read()

    - by viraptor
    Hi, short version: I've got a strange issue with a server accepting TCP connections. Even though there are normally some processes waiting, at some volume of connections it hangs.

    Long version: the server is written in Perl and binds a $srv socket with the reuse flag and a listen backlog of 5. Afterwards, it forks into 10 processes with a loop of:

        $clt = $srv->accept();
        do_processing($clt);
        $clt->shutdown(2);

    The client, written in C, is also very simple: it sends some lines, then receives all lines available and does a shutdown(sockfd, 2). There's nothing async going on, and at the end both send and receive queues are empty (as reported by netstat). Connections last only ~20 ms. All clients behave the same way, are the same implementation, etc. Now let's say I'm accepting X connections from client 1 and another X from client 2. The processes still report that they're idle all the time. If I add another X connections from client 3, suddenly the server processes start hanging just after accepting. The first blocking thing they do after accept() is while (<$clt>) ... - but they don't get any data (on the first try already). Suddenly all 10 processes are in this state and do not stop waiting. On strace, the server processes seem to hang on read(), which makes sense. There are loads of connections in TIME_WAIT state belonging to that server (~100 when the problem starts to manifest), but this might be a red herring. What could be happening here?

    Read the article

  • How do I optimize this postfix expression tree for speed?

    - by Peter Stewart
    Thanks to the help I received in this post, I have a nice, concise recursive function to traverse a tree in postfix order:

        deque<char*> d;

        void Node::postfix()
        {
            if (left != __nullptr) { left->postfix(); }
            if (right != __nullptr) { right->postfix(); }
            d.push_front(cargo);
            return;
        }

    This is an expression tree. The branch nodes are operators randomly selected from an array, and the leaf nodes are values or the variable 'x', also randomly selected from an array:

        char *values[10] = {"1.0","2.0","3.0","4.0","5.0","6.0","7.0","8.0","9.0","x"};
        char *ops[4] = {"+","-","*","/"};

    As this will be called billions of times during a run of the genetic algorithm of which it is a part, I'd like to optimize it for speed. I have a number of questions on this topic which I will ask in separate postings. The first is: how can I get access to each 'cargo' as it is found? That is, instead of pushing 'cargo' onto a deque and then processing the deque to get the value, I'd like to start processing it right away. I don't yet know about parallel processing in C++, but this would ideally be done concurrently on two different processors. In Python, I'd make the function a generator and access succeeding 'cargo's using .next(). But I'm using C++ to speed up the Python implementation. I'm thinking that this kind of tree has been around for a long time, and somebody has probably optimized it already. Any ideas? Thanks.
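
    One sketch of the "process it right away" idea: pass a visitor into the traversal instead of filling a deque. std::function keeps the sketch short; a template parameter (or a plain function pointer) would avoid its per-call overhead, which matters at billions of calls.

        #include <functional>

        void Node::postfix(const std::function<void(char*)>& visit)
        {
            if (left  != __nullptr) { left->postfix(visit); }
            if (right != __nullptr) { right->postfix(visit); }
            visit(cargo);   // consumed the moment it is produced, nothing queued
        }

        // usage sketch: push each token straight onto an evaluation stack
        // root->postfix([&](char* tok) { evalStack.process(tok); });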

    Read the article

  • Most elegant way to write isPrime in Java

    - by Anantha Kumaran
    I am trying to find the fastest way to check whether a given number is prime or not. This is what I finally came up with. Is there any better way than the second implementation (isPrime2)?

        public class Prime {

            public static boolean isPrime1(int n) {
                if (n <= 1) { return false; }
                if (n == 2) { return true; }
                for (int i = 2; i <= Math.sqrt(n) + 1; i++) {
                    if (n % i == 0) { return false; }
                }
                return true;
            }

            public static boolean isPrime2(int n) {
                if (n <= 1) { return false; }
                if (n == 2) { return true; }
                if (n % 2 == 0) { return false; }
                for (int i = 3; i <= Math.sqrt(n) + 1; i = i + 2) {
                    if (n % i == 0) { return false; }
                }
                return true;
            }
        }

        public class PrimeTest {

            public PrimeTest() { }

            @Test
            public void testIsPrime() throws IllegalArgumentException,
                    IllegalAccessException, InvocationTargetException {
                Prime prime = new Prime();
                TreeMap<Long, String> methodMap = new TreeMap<Long, String>();
                for (Method method : Prime.class.getDeclaredMethods()) {
                    long startTime = System.currentTimeMillis();
                    int primeCount = 0;
                    for (int i = 0; i < 1000000; i++) {
                        if ((Boolean) method.invoke(prime, i)) {
                            primeCount++;
                        }
                    }
                    long endTime = System.currentTimeMillis();
                    Assert.assertEquals(method.getName() + " failed ", 78498, primeCount);
                    methodMap.put(endTime - startTime, method.getName());
                }
                for (Entry<Long, String> entry : methodMap.entrySet()) {
                    System.out.println(entry.getValue() + " " + entry.getKey() + " milliseconds");
                }
            }
        }
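
    A small tweak worth measuring, as a sketch: hoist the square root out of the loop condition by comparing i * i <= n (with a long to avoid overflow near Integer.MAX_VALUE), and step over multiples of 2 and 3 with the 6k +/- 1 pattern:

        public static boolean isPrime3(int n) {
            if (n <= 1) return false;
            if (n <= 3) return true;                  // 2 and 3
            if (n % 2 == 0 || n % 3 == 0) return false;
            for (long i = 5; i * i <= n; i += 6) {    // candidates of form 6k-1, 6k+1
                if (n % i == 0 || n % (i + 2) == 0) return false;
            }
            return true;
        }

    For bulk counts up to a fixed bound (like the million-number test above), a sieve of Eratosthenes computed once will beat any per-number test.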

    Read the article

  • Boost's "cstdint" Usage

    - by patt0h
    Boost's C99 stdint implementation is awfully handy. One thing bugs me, though: they dump all of their typedefs into the boost namespace. This leaves me with three choices when using this facility:

    1. Use "using namespace boost"
    2. Use "using boost::[u]<type><width>_t"
    3. Explicitly refer to the target type with the boost:: prefix; e.g., boost::uint32_t foo = 0;

    Option 1 kind of defeats the point of namespaces. Even if used within local scope (e.g., within a function), things like function arguments still have to be prefixed like option 3. Option 2 is better, but there are a bunch of these types, so it can get noisy. Option 3 adds an extreme level of noise; the boost:: prefix is often equal in length to the type in question. My question is: what would be the most elegant way to bring all of these types into the global namespace? Should I just write a wrapper around boost/cstdint.hpp that utilizes option 2 and be done with it? Also, wrapping the header like so didn't work on VC++ 10 (problems with standard library headers):

        namespace Foo {
            #include <boost/cstdint.hpp>
            using namespace boost;
        }
        using namespace Foo;

    Even if it did work, I guess it would cause ambiguity problems with the ::boost namespace.
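
    The wrapper-header idea from option 2, sketched out (file name is illustrative; untested against VC++ 10):

        // my_cstdint.hpp -- hoist just the typedefs, not all of boost
        #include <boost/cstdint.hpp>

        using boost::int8_t;    using boost::uint8_t;
        using boost::int16_t;   using boost::uint16_t;
        using boost::int32_t;   using boost::uint32_t;
        using boost::int64_t;   using boost::uint64_t;

    Unlike the namespace Foo trick, this never re-includes standard headers inside a namespace, which is presumably what upsets the standard library on VC++ 10.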

    Read the article

  • How does one implement storage/retrieval of smart-search/mailbox features?

    - by humble_coder
    Hi All, I have a question regarding the implementation of smart-search features - for example, something like "smart mailboxes" in various email applications. Let's assume you have your data (emails) stored in a database and, depending on the field the query will be built over, you present different options to the end user. For the moment, let's assume the Subject, Verb, Object approach. For instance, say you have the following:

        SUBJECTs: message, to_address, from_address, subject, date_received
        VERBs:    contains, does_not_contain, is_equal_to, greater_than, less_than
        OBJECTs:  ???????

    Now, in case it isn't clear, I want a table structure (although I'm not opposed to an external XMLesque file of some sort) to store (and later retrieve/present) my criteria for smart searches/mailboxes for later use. As an example, using SVO I could easily store and then reconstruct a query for "date between two dates" - simply use "date greater than" AND "date less than". However, what if, in the same smart search, I wanted a "between" OR'ed with another criterion? You can see that it might get out of hand - not necessarily in the query creation (that is rather simplistic), but in the option presentation and storage mechanism. Perhaps I need to think on a more granular level. Perhaps I need to simply allow the user to select AND or OR for each entry independently, instead of making it an ALL-OR-NOTHING type smart search (i.e. instead of MATCH ALL or MATCH ANY, simply allow them to select - I just don't want it to turn into a Hydra). Any input would be most appreciated. My apologies if the question is a bit incoherent. It is late, and my brain is toast. Best.
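
    A sketch of the "per-criterion connective" route in plain SQL (table/column names are illustrative): give each stored criterion a group number, AND the rows within a group, and OR the groups. That expresses "between" alongside an ORed extra criterion without special cases.

        CREATE TABLE smart_search_criteria (
            id             INTEGER PRIMARY KEY,
            search_id      INTEGER NOT NULL,      -- which smart mailbox this row belongs to
            criteria_group INTEGER NOT NULL,      -- rows sharing a group are ANDed
            subject        VARCHAR(32)  NOT NULL, -- e.g. 'date_received'
            verb           VARCHAR(32)  NOT NULL, -- e.g. 'greater_than'
            object         VARCHAR(255) NOT NULL  -- comparison value, stored as text
        );
        -- "date > X AND date < Y" OR "subject contains Z" = three rows in two groups

    This is just disjunctive normal form, so any AND/OR combination the UI can build has a representation; the UI only needs a "new group" action alongside "new criterion".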

    Read the article

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow. I am now using dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestion for a faster, more efficient implementation, keeping in mind I have very tough CPU and memory constraints? [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be significant. For instance, I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read. [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node. [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point. (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java)
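
    Given [Edit 1], one cheap win is to pay the factory lookup once and reuse the reader; a sketch (SAXReader is not thread-safe, hence one instance per thread):

        import org.dom4j.Document;
        import org.dom4j.io.SAXReader;
        import java.io.InputStream;

        public final class PooledParser {
            private static final ThreadLocal<SAXReader> READER =
                new ThreadLocal<SAXReader>() {
                    @Override protected SAXReader initialValue() { return new SAXReader(); }
                };

            public static Document parse(InputStream in) throws Exception {
                return READER.get().read(in);  // no XMLReaderFactory lookup per document
            }
        }

    Setting the org.xml.sax.driver system property to the concrete parser class also short-circuits createXMLReader's classpath scan, if reader reuse is not an option.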

    Read the article

  • How to set up an interface method between two view controllers to pass a parameter in a Navigation Controller

    - by TechFusion
    Hello, I have created a Window-based application with the root controller as a Tab Bar controller. One tab bar item has a Navigation controller. In the Navigation controller's view controller implementation, I am pushing another view controller, and I am looking to pass a parameter from the Navigation controller's view controller to the pushed view controller. I have tried to pass it as per the method below:

        // ViewController.h
        @interface ViewController : UIViewController {
            NSString *String;
        }
        @property (copy, nonatomic) NSString *String;
        @end

        // ViewController.m
        #import "ViewController1.h"

        ViewController1 *level1view = [[ViewController1 alloc] init];
        level1view.hidesBottomBarWhenPushed = YES;
        level1view.theString = String;
        [self.navigationController pushViewController:level1view animated:YES];
        [level1view release];

        // ViewController1.h
        NSString *theString;
        @property (copy, nonatomic) NSString *theString;

    This is working fine, but I want to pass more than one parameter, like an integer and UITextField values, so how do I do that? Is there any Apple doc where I can get an idea about this? Thanks.
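
    A sketch of the same pattern carrying several values: declare one property per value on the pushed controller and assign them all before pushing (the extra names below are illustrative).

        // ViewController1.h
        @interface ViewController1 : UIViewController {
            NSString *theString;
            NSInteger theNumber;
            NSString *textFieldValue;
        }
        @property (copy, nonatomic)   NSString *theString;
        @property (assign, nonatomic) NSInteger theNumber;
        @property (copy, nonatomic)   NSString *textFieldValue;
        @end

        // pushing side
        ViewController1 *level1view = [[ViewController1 alloc] init];
        level1view.theString = String;
        level1view.theNumber = 42;                     // scalars use assign, not copy
        level1view.textFieldValue = myTextField.text;  // pass the field's text, not the field
        [self.navigationController pushViewController:level1view animated:YES];
        [level1view release];

    On the docs question: Apple's "View Controller Programming Guide for iOS" covers coordination between view controllers.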

    Read the article

  • Tagging in Subversion - how do I make the decision about continuing to work on my trunk vs. the new tag?

    - by Howiecamp
    I'm running TortoiseSVN to manage a project. Obviously the principles around tagging apply to any implementation of SVN, but in this question I'll be referring to some TortoiseSVN-specific dialog boxes and messages. My working directory and the Subversion repository structure both have a Source root directory with the Trunk, Tags and Branches directories underneath. I'm working out of the Trunk directory in my working copy, and it's pointing at the Trunk directory in the repo. I want to apply a tag "Release1", so I click the "Branch/tag..." menu option and set the repo path to my [repo_path]/bla/Source/Tags/Release1 tag. This dialog box gives me the option to "Switch my working copy to new branch/tag". I understand that if this option is left unchecked, the new "Release1" tag under /Tags will be created but my working copy will remain on the previous "Trunk" path. If I do check this option (or use the Switch command), I understand that my working copy will switch to the new "Release1" tag under /Tags. Where I'm missing a concept is how to make this decision. It doesn't seem like I want to switch my working directory to the recently created tag, since by definition (?) I want that tag to be a snapshot of my code as of a point in time. If I don't switch the working directory, I'll continue working off Trunk, and when I'm ready to take another snapshot I'll make another tag. And so on... Am I understanding this right, or am I stating something incorrectly in the previous paragraph (e.g. the statement about not wanting to switch to the tag, since the tag should represent a point-in-time snapshot) or otherwise missing something regarding how to make this decision?
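
    For what it's worth, a sketch of that workflow with the command-line client (TortoiseSVN's dialogs wrap the same operations; URLs abbreviated):

        # tag = cheap server-side copy; nothing in the working copy moves
        svn copy http://server/repo/bla/Source/Trunk \
                 http://server/repo/bla/Source/Tags/Release1 \
                 -m "Tag Release1: snapshot of trunk"

        # keep working where you were -- no switch needed for a snapshot tag
        svn commit -m "first change after Release1"

    Leaving "Switch my working copy" unchecked matches the point-in-time reading: switch only when you intend to commit to the destination, i.e. to a branch rather than a tag.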

    Read the article

  • ASP.NET MVC editor template for property

    - by Idsa
    Usually I render my forms with @Html.EditorForModel(), but this time I have complex rendering logic and I render the form manually. I decided to create an editor template for one property. Here is the code (copy-pasted from the default object editor template implementation):

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <% var modelMetadata = ViewData.ModelMetadata; %>
        <% if (modelMetadata.HideSurroundingHtml) { %>
            <%= Html.Editor(modelMetadata.PropertyName) %>
        <% } else { %>
            <% if (!String.IsNullOrEmpty(Html.Label(modelMetadata.PropertyName).ToHtmlString())) { %>
                <div class="editor-label"><%= Html.Label(modelMetadata.PropertyName) %></div>
            <% } %>
            <div class="editor-field">
                <%= Html.Editor(modelMetadata.PropertyName) %>
                <%= Html.ValidationMessage(modelMetadata.PropertyName) %>
            </div>
        <% } %>

    And here is how I use it:

        @Html.EditorFor(x => x.SomeProperty, "Property") // "Property" is the template above

    But it didn't work: labels are rendered regardless of DisplayName, and editors are not rendered at all (in Watches, Html.Editor(modelMetadata.PropertyName) shows an empty string). What am I doing wrong?
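
    A hedged guess at the cause: inside a template rendered via EditorFor(x => x.SomeProperty, ...), the template's model is the property value itself, so modelMetadata.PropertyName-based lookups search for a child property of that value and come up empty. A sketch that binds to the current model instead:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <div class="editor-label"><%= Html.LabelForModel() %></div>
        <div class="editor-field">
            <%= Html.TextBoxFor(m => m) %>           <%-- empty expression text = bind to the model itself --%>
            <%= Html.ValidationMessageFor(m => m) %>
        </div>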

    Read the article

  • 'WebException' error on back button even when calling 'void' async method

    - by BlazingFrog
    I have a Windows Phone app that allows the user to interact with it. Each interaction will always result in an async WCF call. In addition to that, some interactions will result in opening the browser, maps, email, etc. The problem is that, when hitting the back button, I sometimes get the following error: "An error (WebException) occurred while transmitting data over the HTTP channel." with the following stack trace:

        at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
        at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result)
        at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.OnGetResponse(IAsyncResult result)
        at System.Net.Browser.ClientHttpWebRequest.<>c__DisplayClassa.<InvokeGetResponseCallback>b__8(Object state2)
        at System.Threading.ThreadPool.WorkItem.WaitCallback_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        at System.Threading.ThreadPool.WorkItem.doWork(Object o)
        at System.Threading.Timer.ring()

    My understanding is that it's happening because my app opened another app (browser, maps, etc.) before it had time to execute EndMyAsyncMethod(System.IAsyncResult result). Fair enough. What's really annoying is that it seems it should be fixed by cloning the server-side method, only making it void with the following operation contract:

        [OperationContract(IsOneWay = true)]

    but I'm still getting the error. What's worse is that the exception is thrown in a system-generated part of the code and thus cannot be caught manually, causing the app to just crash. I simply don't understand the need to execute an EndXxx method when it's explicitly marked as OneWay and void. EDIT: I did find a similar issue here. It does seem to be related to the message getting to the service (not the client callback). My next question is: if I'm now calling a method marked AsyncPattern and OneWay, what exactly should I be waiting for on the client to be sure the message was transmitted successfully? This is the new service definition:

        [OperationContract(IsOneWay = true, AsyncPattern = true)]
        IAsyncResult BeginCacheQueryWithoutCallback(string param1, QueryInfoDataContract queryInfo, AsyncCallback cb, Object s);

        void EndCacheQueryWithoutCallback(IAsyncResult r);

    And the implementation:

        public IAsyncResult BeginCacheQueryWithoutCallback(string param1, QueryInfoDataContract queryInfo, AsyncCallback cb, Object s)
        {
            // do some stuff
            return new CompletedAsyncResult<string>("");
        }

        public void EndCacheQueryWithoutCallback(IAsyncResult r) { }
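
    One hedged way to keep the unavoidable End call from crashing the app: pass a callback and catch transport failures there, since even a one-way send surfaces them when the async operation completes ("client" below stands for the generated proxy).

        client.BeginCacheQueryWithoutCallback(param1, queryInfo, ar =>
        {
            try
            {
                client.EndCacheQueryWithoutCallback(ar);  // completes the send
            }
            catch (CommunicationException)
            {
                // the message never made it (e.g. the app was backgrounded);
                // log or retry here instead of letting the completion path
                // throw unhandled on a thread-pool thread
            }
        }, null);

    With IsOneWay = true there is no application-level acknowledgement to wait for; "End returned without throwing" only means the message was handed off to the transport.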

    Read the article

  • Problem with "moveable-only types" in VC++ 2010

    - by Luc Touraille
    I recently installed Visual Studio 2010 Professional RC to try it out and test the few C++0x features that are implemented in VC++ 2010. I instantiated a std::vector of std::unique_ptr without any problems. However, when I try to populate it by passing temporaries to push_back, the compiler complains that the copy constructor of unique_ptr is private. I tried inserting an lvalue by moving it, and it works just fine.

        #include <memory>
        #include <utility>
        #include <vector>

        int main()
        {
            typedef std::unique_ptr<int> int_ptr;
            int_ptr pi(new int(1));
            std::vector<int_ptr> vec;
            vec.push_back(std::move(pi));       // OK
            vec.push_back(int_ptr(new int(2))); // compiler error
        }

    As it turns out, the problem is neither unique_ptr nor vector::push_back, but the way VC++ resolves overloads when dealing with rvalues, as demonstrated by the following code:

        struct MoveOnly
        {
            MoveOnly() {}
            MoveOnly(MoveOnly && other) {}
        private:
            MoveOnly(const MoveOnly & other);
        };

        void acceptRValue(MoveOnly && mo) {}

        int main()
        {
            acceptRValue(MoveOnly()); // compiler error
        }

    The compiler complains that the copy constructor is not accessible. If I make it public, the program compiles (even though the copy constructor is not defined). Did I misunderstand something about rvalue references, or is it a (possibly known) bug in VC++ 2010's implementation of this feature?
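
    A workaround that follows from the observation in the question, sketched: name the temporary and move it, which avoids the overload-resolution path on which the RC compiler demands an accessible copy constructor.

        MoveOnly mo;
        acceptRValue(std::move(mo));   // compiles on the RC

        int_ptr p2(new int(2));
        vec.push_back(std::move(p2));  // same idea for the vector case

    Binding a && parameter directly to a temporary should not require a copy constructor under the rvalue-reference rules, which suggests a compiler bug in the RC rather than a misunderstanding.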

    Read the article

  • Two Problems I'm having with UIButton and UIView.

    - by Andy
    Hi all, I haven't been programming on the iPhone for very long; I'm picking it up slowly by googling the problems I hit, but I haven't been able to find an answer for these. I started a new View-based application in Xcode 3.2.2 and immediately added the following files: myUIView.m and myUIView.h, a subclass of UIView. In Interface Builder, I set the subclass of the default UIView to be myUIView. I made a button in the drawRect method. Problem one: the title of the button only appears AFTER I click the screen - why? Problem two: I want the button to produce the modal view - is this possible? The code is as follows:

        #import "myUIView.h"

        @implementation myUIView

        - (void)drawRect:(CGRect)rect {
            // Drawing code
            button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            button.frame = CGRectMake(0, 0, 100, 100);
            [button setTitle:@"butty" forState:UIControlStateNormal];
            [button addTarget:self action:@selector(buttonPressed:)
                forControlEvents:UIControlEventTouchUpInside];
            [self addSubview:button];
        }

        - (void)buttonPressed:(id)sender {
            NSLog(@"Button pressed");
            // present modal view somehow..?
        }

    I can't see how to post attachments, but if anyone thinks it will help I can upload the source. Many thanks, Andy
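
    On problem one, a sketch of the usual advice: drawRect: is only for drawing and can run repeatedly, so create the button once, outside the draw path; for a view loaded from a nib, awakeFromNib is a reasonable hook.

        - (void)awakeFromNib {
            [super awakeFromNib];
            button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
            button.frame = CGRectMake(0, 0, 100, 100);
            [button setTitle:@"butty" forState:UIControlStateNormal];
            [button addTarget:self action:@selector(buttonPressed:)
                 forControlEvents:UIControlEventTouchUpInside];
            [self addSubview:button];
        }

    On problem two: presenting a modal view is a controller's job, so one route is for buttonPressed: to notify the view controller (delegate, target-action, or notification), which then calls presentModalViewController:animated: on itself.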

    Read the article

  • Thread management advice - Is TPL a good idea?

    - by Ian
    I'm hoping to get some advice on the use of thread management, and hopefully the Task Parallel Library, because I'm not sure I've been going down the correct route. It's probably best if I give an outline of what I'm trying to do. Given a Problem, I need to generate a Solution using a heuristic-based algorithm. I start by calculating a base solution; I don't think this operation can be parallelised, so we don't need to worry about it. Once the initial solution has been generated, I want to trigger n threads, which attempt to find a better solution. These threads need to do a couple of things (see the sketch after this list):

    1. They need to be initialized with a different 'optimization metric'. In other words, they are attempting to optimize different things, with a precedence level set within code. This means they all run slightly different calculation engines. I'm not sure if I can do this with the TPL.
    2. If one of the threads finds a better solution than the currently best-known solution (which needs to be shared across all threads), then it needs to update the best solution and force a number of other threads to restart (again, this depends on the precedence levels of the optimization metrics).
    3. I may also wish to combine certain calculations across threads (e.g. keep a union of probabilities for a certain approach to the problem). This is probably more optional, though.

    The whole system needs to be thread-safe, obviously, and I want it to run as fast as possible. I tried an implementation that involved managing my own threads and shutting them down, etc., but it started getting quite complicated, and I'm now wondering if the TPL might be better. I'm wondering if anyone can offer any general guidance? Thanks...
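
    For shape, a hedged sketch of the TPL version (everything other than Task itself - Solution, Engine, metrics, RestartLowerPrecedence - is hypothetical): one task per metric, a lock-guarded best solution, and a restart signal for lower-precedence searchers.

        var gate = new object();
        Solution best = initialSolution;                      // hypothetical type

        var tasks = metrics.Select(metric => Task.Factory.StartNew(() =>
        {
            var engine = new Engine(metric);                  // per-metric calc engine
            while (!engine.Finished)
            {
                var candidate = engine.Improve(best);         // stale reads of best are tolerable here
                lock (gate)
                {
                    if (candidate.Score > best.Score)
                    {
                        best = candidate;                     // publish the new best
                        RestartLowerPrecedence(metric);       // e.g. signal the others' cancellation tokens
                    }
                }
            }
        })).ToArray();

        Task.WaitAll(tasks);

    The TPL takes care of scheduling and joining; the cross-thread coordination (shared best, restarts, combined probabilities) still has to be built by hand, so it removes the thread management rather than the hard part.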

    Read the article

  • Warning: cast increases required alignment

    - by dash-tom-bang
    I've recently been working on a platform for which a legacy codebase issues a large number of "cast increases required alignment to N" warnings, where N is the alignment of the target type of the cast:

        struct Message
        {
            int32_t id;
            int32_t type;
            int8_t data[16];
        };

        int32_t GetMessageInt(const Message& m)
        {
            return *reinterpret_cast<const int32_t*>(&m.data[0]);
        }

    Hopefully it's obvious that a "real" implementation would be a bit more complex, but the basic point is that I've got data coming from somewhere, I know that it's aligned (because I need id and type to be aligned), and yet I get the message that the cast is increasing the alignment - in the example case, to 4. Now I know that I can suppress the warning with an argument to the compiler, and I know that I can cast the bit inside the parentheses to void* first, but I don't really want to go through every bit of code that needs this sort of manipulation (there's a lot, because we load a lot of data off disk, and that data comes in as char buffers so that we can easily pointer-advance). Can anyone give me any other thoughts on this problem? To me it seems like such an important and common operation that you wouldn't want to warn about it, and if there is actually the possibility of doing it wrong then suppressing the warning isn't going to help. Finally, can't the compiler know, as I do, how the object in question is actually aligned in the structure, so that it doesn't worry about the alignment on that particular object unless it got bumped a byte or two?
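
    The usual warning-free (and strict-aliasing-safe) alternative is memcpy, which compilers typically collapse to a plain load when they can prove alignment; a sketch:

        #include <cstring>

        int32_t GetMessageInt(const Message& m)
        {
            int32_t v;
            std::memcpy(&v, &m.data[0], sizeof v);  // no cast, no alignment promise
            return v;
        }

    This also speaks to the last question: the warning is driven by the cast expression's types alone; the compiler does not track that this particular int8_t array sits at an aligned offset inside an aligned struct.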

    Read the article

  • Credit Card storage solution

    - by jtnire
    Hi everyone, I'm developing a solution that is designed to store membership details as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far:

    PAN = Primary Account Number == the long number on the credit card.

    Server A is a remote server. It stores all membership details (names, addresses, etc.) and provides an individual Key A for each PAN stored. Server B is a local server; it actually holds the encrypted PANs as well as Key B, and does the decryption. To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, as it will have it beforehand. Server B will probably have to send a salt first, though; I don't think that has to be encrypted, however. I haven't really thought about any implementation (i.e. coding) specifics yet regarding the above; the solution uses Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred this way). The reason why I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server... Can anyone see anything wrong with the above design? It doesn't matter if the above has to be changed. Thanks, jtnire

    Read the article

  • To use the 'I' prefix for interfaces or not to

    - by ng
    That is the question. So how big a sin is it not to use this convention when developing a C# project? The convention is widely used in the .NET class library; however, I am not a fan, to say the least - not just for aesthetic reasons, but because I don't think it makes any contribution. For example, is IPSec an interface of PSec? Is IIOPConnection an interface of IOPConnection? I usually go to the definition to find out anyway. So would not using this convention cause confusion? Are there any C# projects or libraries of note that drop this convention? Do any C# projects mix conventions, as Apache Wicket unfortunately does? The Java class libraries have existed without it for many years, and I don't feel I have ever struggled to read code without it. Also, shouldn't the interface be the most primitive description? I mean, with IList<T> as an interface for List<T> in C#, is it not better to have List<T> as the interface and LinkedList<T>, ArrayList<T>, or even CopyOnWriteArrayList<T> as the classes? The class names describe the implementation, and I think I get more information there than I do from List<T> in C#.

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing multiple cores

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall-clock time). The I/O has been optimized well enough already: the I/O device (a local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed out. I have more cores available, and in the future will likely have even more. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall-clock time goes). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (Other than liquid-nitrogen-cooled overclocking?) Or is it inherently non-parallelizable?
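
    For completeness, a sketch of the tree-hashing idea (note: this defines a different digest than plain SHA-256 over the file, so producer and consumer must agree on the construction and the chunk size):

        import java.security.MessageDigest;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        static byte[] treeHash(List<byte[]> chunks) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());
            List<Future<byte[]>> leaves = new ArrayList<Future<byte[]>>();
            for (final byte[] chunk : chunks) {               // hash leaves in parallel
                leaves.add(pool.submit(new Callable<byte[]>() {
                    public byte[] call() throws Exception {
                        return MessageDigest.getInstance("SHA-256").digest(chunk);
                    }
                }));
            }
            MessageDigest root = MessageDigest.getInstance("SHA-256");
            for (Future<byte[]> leaf : leaves)
                root.update(leaf.get());                      // preserve chunk order
            pool.shutdown();
            return root.digest();                             // hash of the chunk hashes
        }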

    Read the article

  • LINQ-SQL Updating Multiple Rows in a single transaction

    - by RPM1984
    Hi guys, I need help refactoring this legacy LINQ-to-SQL code, which is generating around 100 UPDATE statements. I'll keep playing around with the best solution, but would appreciate some ideas/past experience with this issue. Here's my code:

        List<FooBar> foos;
        int userId = 123;

        using (FooDatabase db = new FooDatabase())
        {
            foos = (from f in db.FooBars
                    where f.UserId == userId
                    select f).ToList();

            foreach (FooBar fooBar in foos)
            {
                fooBar.IsFoo = false;
            }

            db.SubmitChanges();
        }

    Essentially I want to update the IsFoo field to false for all records that have a particular UserId value. What's happening is that .ToList() fires off a query to get all the FooBars for that user, and then SubmitChanges executes an UPDATE statement for each Foo object. Can the above code be refactored into one single UPDATE statement? Ideally, the only SQL I want fired is:

        UPDATE FooBars SET IsFoo = FALSE WHERE UserId = 123

    EDIT: OK, so it looks like it can't be done without using db.ExecuteCommand. Grr...! What I'll probably end up doing is creating another extension method for the DLINQ namespace. It still requires some hardcoding (i.e. writing "WHERE" and "UPDATE"), but at least it hides most of the implementation details away from the actual LINQ query syntax.
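
    The single-statement route, sketched with DataContext.ExecuteCommand (parameterized, so no SQL built by string concatenation):

        using (var db = new FooDatabase())
        {
            db.ExecuteCommand(
                "UPDATE FooBars SET IsFoo = 0 WHERE UserId = {0}", userId);
        }

    It bypasses change tracking entirely, which is the point: no SELECT, no per-row UPDATE, one round trip.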

    Read the article

  • Comparing two Objects which implement the same interface for equality / equivalence - Design help

    - by gav
    Hi All, I have an interface and two objects implementing that interface; massively simplified:

        public interface MyInterface {
            public int getId();
            public String getName();
            ...
        }

        public class A implements MyInterface { ... }
        public class B implements MyInterface { ... }

    We are migrating from one implementation to the other, but I need to check that the objects of type B that are generated are equivalent to those of type A. Specifically, I mean that for all of the interface methods, an object of type A and an object of type B will return the same value (I'm just checking that my code for generating these objects is correct). How would you go about this?

        Map<String, MyInterface> oldGeneratedObjects = getOldGeneratedObjects();
        Map<String, MyInterface> newGeneratedObjects = getNewGeneratedObjects();
        // TODO: Establish that for each key the values in the two maps return equivalent values.

    I'm looking for good coding practices and style here. I appreciate that I could just iterate through one key set, pulling out both objects (which should be equivalent), and then call all the methods and compare; I'm just thinking there may be a cleaner, more extensible way, and I'm interested to learn what options there might be. Would it be appropriate/possible/advisable to override equals or implement Comparable? Thanks in advance, Gavin
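
    One reflective sketch of the check, so newly added interface methods are covered automatically (it assumes the interface exposes only no-argument getters; anything else needs special casing):

        import java.lang.reflect.Method;
        import java.util.Map;

        static void assertEquivalent(Map<String, MyInterface> olds,
                                     Map<String, MyInterface> news) throws Exception {
            for (Map.Entry<String, MyInterface> e : olds.entrySet()) {
                MyInterface a = e.getValue();
                MyInterface b = news.get(e.getKey());
                for (Method m : MyInterface.class.getMethods()) {
                    if (m.getParameterTypes().length != 0) continue;  // getters only
                    Object va = m.invoke(a), vb = m.invoke(b);
                    if (va == null ? vb != null : !va.equals(vb))
                        throw new AssertionError(e.getKey() + "." + m.getName()
                                + ": " + va + " != " + vb);
                }
            }
        }

    Overriding equals would bake one migration's notion of equivalence into the production classes; a standalone checker (or a Comparator used only in tests) keeps it out of them.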

    Read the article

  • Existentials and Scrap your Boilerplate

    - by finnsson
    I'm writing an XML (de)serializer using Text.XML.Light and Scrap your Boilerplate (at http://github.com/finnsson/Text.XML.Generic), and so far I have working code for "normal" ADTs, but I'm stuck at deserializing existentials. I have the existential data type

        data DataBox where
            DataBox :: (Show d, Eq d, Data d) => d -> DataBox

    and I'm trying to get this to compile:

        instance Data DataBox where
            gfoldl k z (DataBox d) = z DataBox `k` d
            gunfold k z c = k (z DataBox)    -- not OK
            toConstr (DataBox d) = toConstr d
            dataTypeOf (DataBox d) = dataTypeOf d

    but I can't figure out how to implement gunfold for DataBox. The error message is:

        Text/XML/Generic.hs:274:23:
            Ambiguous type variable `b' in the constraints:
              `Eq b' arising from a use of `DataBox' at Text/XML/Generic.hs:274:23-29
              `Show b' arising from a use of `DataBox' at Text/XML/Generic.hs:274:23-29
              `Data b' arising from a use of `k' at Text/XML/Generic.hs:274:18-30
            Probable fix: add a type signature that fixes these type variable(s)

    It's complaining about not being able to figure out the type of b. I'm also trying to implement dataCast1 and dataCast2, but I think I can live without them (i.e. with an incorrect implementation). I guess my questions are: Is it possible to combine existentials with Scrap your Boilerplate? If so, how do you implement gunfold for an existential data type?
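
    For what it's worth, a hedged sketch of one workaround: gunfold cannot conjure the hidden type out of thin air, so pin the ambiguous variable to a concrete representative. This compiles, but only round-trips boxes whose payload really is that type:

        gunfold k z _ = k (z (DataBox :: Int -> DataBox))

    A fuller deserializer would carry a registry of the closed set of payload types it expects and dispatch on something stored alongside the data (e.g. the dataTypeOf name), trying each candidate type in turn.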

    Read the article
