Search Results

Search found 8440 results on 338 pages for 'wms implementation'.


  • PHP package manager

    - by Mathias
    Hey, does anyone know a package manager library for PHP (like apt or yum for Linux distros), apart from PEAR? I'm working on a system which should include a package management system for module management. I managed to get a working solution using PEAR, but using the PEAR client for anything other than managing a PEAR installation is not really optimal, as it isn't designed for that. I would have to modify/extend it (e.g. to implement actions on installation/upgrade, or to move PEAR-specific files like lockfiles away from the system root), and the CLI client code in particular is quite messy and PHP4. So maybe someone has some suggestions:

    - for an alternative PEAR client library which is easy to use and extend (the server side has some nice implementations, like Pirum and pearhub)
    - for completely different package management systems written in PHP (ideally including dependency tracking and different channels)
    - for some general ideas on how to implement such a PM system (yes, I'm still tinkering with the idea of implementing such a system from scratch)

    I know that big systems like Magento and symfony use PEAR for their PM. Magento uses a hacked version of the original PEAR client (which I'd like to avoid); symfony's implementation seems quite integrated with the framework, but would be a good starting point to at least write the client from scratch. Anyway, if anybody has suggestions: please :)

    Read the article

  • How do I remove implementing types from GWT’s Serialization Policy?

    - by Bluu
    The opposite of this question: http://stackoverflow.com/questions/138099/how-do-i-add-a-type-to-gwts-serialization-policy-whitelist

    GWT is adding undesired types to the serialization policy and bloating my JS. How do I trim my GWT whitelist by hand? Or should I at all?

    For example, if I put the interface List on a GWT RPC service class, GWT has to generate Javascript that handles ArrayList, LinkedList, Stack, Vector, ... even though my team knows we're only ever going to return an ArrayList. I could just make the method's return type ArrayList, but I like relying on an interface rather than a specific implementation. After all, maybe one day we will switch it up and return e.g. a LinkedList. In that case, I'd like to force the GWT serialization policy to compile for only ArrayList and LinkedList, with no Stacks or Vectors.

    These implicit restrictions have one huge downside I can think of: a new member of the team starts returning Vectors, which will be a runtime error. So besides the question in the title, what is your experience designing around this?
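
    A minimal sketch of the return-type narrowing discussed above (the service interface and method names are hypothetical; GWT derives the serialization policy from the declared signature, so narrowing the declared type is what trims the policy):

        import com.google.gwt.user.client.rpc.RemoteService;
        import java.util.ArrayList;

        // Hypothetical RPC service: declaring ArrayList rather than List
        // keeps LinkedList, Stack and Vector out of the generated policy,
        // at the cost of committing the interface to one implementation.
        public interface ItemService extends RemoteService {
            ArrayList<String> fetchItems();
        }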

    Read the article

  • Should a developer be a coauthor of a paper presented about the application they developed?

    - by ved
    In our organization, project teams come up with a need and funding, and developers are given a basic scope and are allowed to develop the solution. There is a certain degree of implementation freedom given to the developers. They drive the solution from inception to pilot and live deployment. If the solution is presented at a conference as a technical paper/white paper, what is the protocol for the list of authors? For the most part I see the project manager's and the dev team manager's names as authors, but no mention of the actual developer. Is this correct? A lot of us developers feel pretty bummed to never see our names as coauthors. Appreciate any pointers.

    Answers to the follow-up questions:

    1) In what field of study is the paper, and what are the standards of authorship for that field? The paper is for Flood Plain Management. There is nothing in the abstract guidelines; I have called the contact person listed for comment and am waiting to hear back.
    2) Was the paper literally about the software application as your question implies, or were the software issues incidental to the topic of the paper? The paper specifically deals with a GIS application used in Coastal Engineering. The software is not incidental; it is the meat of the paper and is mentioned in the title.

    Read the article

  • Warning: cast increases required alignment

    - by dash-tom-bang
    I've recently been working on a platform where a legacy codebase issues a large number of "cast increases required alignment to N" warnings, where N is the size of the target of the cast.

        #include <cstdint>

        struct Message {
            int32_t id;
            int32_t type;
            int8_t  data[16];
        };

        int32_t GetMessageInt(const Message& m) {
            // data follows two int32_t members, so in practice it is
            // 4-byte aligned, but the cast still draws the warning.
            return *reinterpret_cast<const int32_t*>(&m.data[0]);
        }

    Hopefully it's obvious that a "real" implementation would be a bit more complex, but the basic point is that I've got data coming from somewhere, I know that it's aligned (because I need the id and type to be aligned), and yet I get the message that the cast is increasing the alignment, in the example case to 4.

    Now I know that I can suppress the warning with an argument to the compiler, and I know that I can cast the bit inside the parentheses to void* first, but I don't really want to go through every bit of code that needs this sort of manipulation (there's a lot, because we load a lot of data off disk, and that data comes in as char buffers so that we can easily pointer-advance). Can anyone give me any other thoughts on this problem? To me it seems like such a common operation that you wouldn't want to warn about it, and if there actually is the possibility of doing it wrong then suppressing the warning isn't going to help.

    Finally, can't the compiler know, as I do, how the object in question is actually aligned in the structure, so that it need not worry about the alignment of that particular object unless it got bumped a byte or two?

    Read the article

  • How frequently are IP packets fragmented at the source host?

    - by Methos
    I know that if the IP payload is larger than the MTU, then routers usually fragment the IP packet, and all the fragments are reassembled at the destination using the IP-ID field, the IP fragment offsets, and the fragmentation flags. The maximum length of an IP payload is 64K, so it is quite plausible for L4 to hand over a payload close to 64K. If the L2 protocol is Ethernet, which it often is, then the MTU will be about 1500 bytes, and hence the IP packet will be fragmented at the source host itself.

    However, a quick look at the IP implementation in Linux tells me that in recent kernels the L4 protocols are fragment-friendly, i.e. they try to save IP the fragmentation work by handing over buffers whose size is close to the MTU.

    Considering these two facts, I am wondering how frequently IP packets get fragmented at the source host itself. Does it occur sometimes/rarely/never? Does anyone know whether there are exceptions to the rule of fragmentation in the Linux kernel (i.e. situations where L4 protocols are not fragment-friendly)? How is this handled in other common OSes like Windows? In general, how frequently are IP packets fragmented?

    Read the article

  • How to find an embedded platform?

    - by gmagana
    I am new to the hardware-sourcing side of embedded programming, and so, after being completely overwhelmed with all the choices out there (PC/104, custom boards, a zillion options for each board, volume discounts, devel kits, ahhh!!), I am asking here for some direction. Basically, I must find a new motherboard and (most likely) re-implement the program logic. Rewriting this in C/C++/Java/C#/Pascal/BASIC is not a problem for me, so my real problem is finding the hardware. This motherboard will have several other devices attached to it. Here is a summary of what I need:

    Required:
    - 2 RS232 serial ports (one used all the time for the primary UI, the second one not continuous)
    - 1 modem (9600+ baud OK) [the modem will be in simultaneous use with only one of the serial port devices, so interrupt sharing with one serial port is OK, but not both]
    - Minimum permanent/long-term storage: whatever the O/S requires + 1 MB (executable) + 512 KB (data files)
    - RAM: minimal; whatever the O/S requires, plus maybe 1 MB for the executable

    Nice to have:
    - USB port(s)
    - Ethernet network port
    - Wireless network

    Implementation languages (I will adapt to any O/S):
    - First choice: Java/C# (Mono OK)
    - Second choice: C/Pascal
    - Third: BASIC

    OK, given all this, I am having a lot of trouble finding hardware that supports it at low cost. Every manufacturer site I visit has a lot of options, and it's difficult to see whether their offering will even satisfy my must-have requirements (for example, they sometimes list 3 "serial ports", but it appears that only one of the three is RS232, and they don't mention what the other two are). The #1 constraint is cost, #2 is size. Can anyone help me with this? This little task has left me thinking I should have gone for EE and not CS :-).

    EDIT: A bit of background: this is a system currently in production, but the original programmer passed away, and the current hardware manufacturer cannot find hardware to run the (currently) DOS system, so I need to reimplement this on a modern platform. I can only change the programming and the motherboard hardware.

    Read the article

  • Why does this program take up so much memory?

    - by Adrian
    I am learning Objective-C. I am trying to release all of the memory that I use. So, I wrote a program to test if I am doing it right:

        #import <Foundation/Foundation.h>

        #define DEFAULT_NAME @"Unknown"

        @interface Person : NSObject {
            NSString *name;
        }
        @property (copy) NSString *name;
        @end

        @implementation Person
        @synthesize name;

        - (void) dealloc {
            [name release];
            [super dealloc];
        }

        - (id) init {
            if (self = [super init]) {
                name = DEFAULT_NAME;
            }
            return self;
        }
        @end

        int main (int argc, const char * argv[]) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            Person *person = [[Person alloc] init];
            NSString *str;
            int i;
            for (i = 0; i < 1e9; i++) {
                str = [NSString stringWithCString: "Name" encoding: NSUTF8StringEncoding];
                person.name = str;
                [str release];
            }
            [person release];
            [pool drain];
            return 0;
        }

    I am using a Mac with Snow Leopard. To test how much memory this is using, I open Activity Monitor at the same time that it is running. After a couple of seconds, it is using gigabytes of memory. What can I do to make it not use so much?

    Read the article

  • Best approach for Java/Maven/JPA/Hibernate build with multiple database vendor support?

    - by HDave
    I have an enterprise application that uses a single database, but the application needs to support MySQL, Oracle, and SQL Server as installation options. To try to remain portable we are using JPA annotations with Hibernate as the implementation. We also have a test-bed instance of each database running for development. The app is building nicely in Maven, and I've played around with the hibernate3-maven-plugin and can auto-generate DDL for a given database dialect. What is the best way to approach this so that individual developers can easily test against all three databases and our Hudson-based CI server can build things properly? More specifically:

    1) I thought the hbm2ddl goal in the hibernate3-maven-plugin would just generate a schema file, but apparently it connects to a live database and attempts to create the schema. Is there a way to have it just create the schema file for each database dialect without connecting to a database? (See the sketch after this list.)
    2) If the hibernate3-maven-plugin insists on actually creating the database schema, is there a way to have it drop the database and recreate it before creating the schema?
    3) I am thinking that each developer (and the Hudson build machine) should have their own separate database on each database server. Is this typical?
    4) Will developers have to run Maven three times, once for each database vendor? If so, how do I merge the results on the build machine?
    5) There is an hbm2doc goal within the hibernate3-maven-plugin. It seems overkill to run this three times; I've got to believe it'd be nearly identical for each database.
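
    On question 1, a possible workaround, sketched under the assumption that Hibernate 3's SchemaExport tool (the class the hbm2ddl goal wraps) is on the classpath; the entity class here is a stand-in. SchemaExport can write the DDL script for a chosen dialect to a file without connecting to any database:

        import javax.persistence.Entity;
        import javax.persistence.Id;
        import org.hibernate.cfg.AnnotationConfiguration;
        import org.hibernate.tool.hbm2ddl.SchemaExport;

        @Entity
        class SomeEntity {          // stand-in for a real mapped class
            @Id
            private Long id;
        }

        public class GenerateDdl {
            public static void main(String[] args) {
                AnnotationConfiguration cfg = new AnnotationConfiguration();
                cfg.addAnnotatedClass(SomeEntity.class);
                cfg.setProperty("hibernate.dialect",
                                "org.hibernate.dialect.Oracle10gDialect");

                SchemaExport export = new SchemaExport(cfg);
                export.setOutputFile("target/schema-oracle.sql");
                export.setDelimiter(";");
                // create(script, export): write/print the DDL script, but do
                // not execute it against a live database.
                export.create(true, false);
            }
        }

    Running it once per dialect (e.g. MySQL5InnoDBDialect, SQLServerDialect) yields one script per vendor.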

    Read the article

  • C++ operator lookup rules / Koenig lookup

    - by John Bartholomew
    While writing a test suite, I needed to provide an implementation of operator<<(std::ostream&, ...) for Boost unit test to use. This worked:

        namespace theseus { namespace core {
            std::ostream& operator<<(std::ostream& ss, const PixelRGB& p) {
                return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
            }
        }}

    This didn't:

        std::ostream& operator<<(std::ostream& ss, const theseus::core::PixelRGB& p) {
            return (ss << "PixelRGB(" << (int)p.r << "," << (int)p.g << "," << (int)p.b << ")");
        }

    Apparently, the second wasn't included in the candidate matches when g++ tried to resolve the use of the operator. Why (what rule causes this)? The code calling operator<< is deep within the Boost unit test framework, but here's the test code:

        BOOST_AUTO_TEST_SUITE(core_image)

        BOOST_AUTO_TEST_CASE(test_output) {
            using namespace theseus::core;
            BOOST_TEST_MESSAGE(PixelRGB(5,5,5)); // only compiles with operator<< definition inside theseus::core
            std::cout << PixelRGB(5,5,5) << "\n"; // works with either definition
            BOOST_CHECK(true); // prevent no-assertion error
        }

        BOOST_AUTO_TEST_SUITE_END()

    For reference, I'm using g++ 4.4 (though for the moment I'm assuming this behaviour is standards-conformant).

    Read the article

  • Bulletin board - Database optimisation

    - by andrew
    This question is a follow-on from this question.

    The project and problem

    The project I am currently working on is a bulletin board for a large non-profit organisation. The bulletin board will be used to allow inter-office communication within the organisation. I am building the application and have been having trouble extracting the results that I need from my database, because I don't think it is properly normalized and because of limitations in my knowledge of relational database theory and MySQL. I would appreciate input into the design of the board in general and, in particular, ways that the database structure can be improved to facilitate efficient queries and help me develop this application and future applications faster.

    Business logic

    The bulletin board will be used in the following way:

    - Posting bulletins and responses to bulletins: employees or 'users' in offices around the country will be able to post messages to the bulletin board. Bulletins must be posted to a location and categorised; I'll call these "bulletins". Users will be able to post any number of replies to any one bulletin, and users will be able to reply to their own bulletins; I'll call these 'replies'.
    - Rating bulletins and replies: users will be able to either 'like' or 'dislike' a bulletin or a reply, and the total number of likes or dislikes will be shown for each bulletin or reply.
    - Viewing the bulletin board and responses: bulletins can be displayed chronologically. Users can sort bulletins chronologically or chronologically by the latest reply to that bulletin (let me know if you need more explanation). When a particular bulletin is selected, replies to that bulletin will be displayed chronologically.

    @PerformanceDBA (edited 10:34 EST 28/12/10): I have begun implementing the data model. I assume that the 6th data model is the physical model, because it contains the associative tables. I am going to post any questions that I have below. I will put up a database dump once I am done. I will then put up a list of all the queries that I need to run on the database and begin writing them. I hope you had a good Christmas. I'm in Canada and there's snow!

    Implementation of physical model

    Read the article

  • Recursive QuickSort suffering a StackOverflowException -- Need fresh eyes

    - by jon
    I am working on a recursive QuickSort method implementation in a GenericList class. I will have a second method that accepts a compare delegate to compare different types, but for development purposes I'm sorting a GenericList<int>. I am receiving StackOverflowExceptions in different places depending on the list size. I've been staring at and tracing through this code for hours and probably just need a fresh pair of (more experienced) eyes. I definitely want to learn why it is broken, not just how to fix it.

        public void QuickSort()
        {
            int i, j, lowPos, highPos, pivot;
            GenericList<T> leftList = new GenericList<T>();
            GenericList<T> rightList = new GenericList<T>();
            GenericList<T> tempList = new GenericList<T>();

            lowPos = 1;
            highPos = this.Count;
            if (lowPos < highPos)
            {
                pivot = (lowPos + highPos) / 2;
                for (i = 1; i <= highPos; i++)
                {
                    if (this[i].CompareTo(this[pivot]) <= 0)
                        leftList.Add(this[i]);
                    else
                        rightList.Add(this[i]);
                }
                leftList.QuickSort();
                rightList.QuickSort();
                for (i = 1; i <= leftList.Count; i++)
                    tempList.Add(leftList[i]);
                for (i = 1; i <= rightList.Count; i++)
                    tempList.Add(rightList[i]);
                this.items = tempList.items;
                this.count = tempList.count;
            }
        }

    Read the article

  • Encapsulating user input of data for a class (C++)

    - by Dr. Monkey
    For an assignment I've made a simple C++ program that uses a superclass (Student) and two subclasses (CourseStudent and ResearchStudent) to store a list of students and print out their details, with different details shown for the two different types of students (using overriding of the display() method from Student). My question is about how the program collects input from the user for things like the student name, ID number, unit and fee information (for a course student) and research information (for research students).

    My implementation has the prompting for user input and the collecting of that input handled within the classes themselves. The reasoning behind this was that each class knows what kind of input it needs, so it makes sense to me to have it know how to ask for it (given an ostream through which to ask and an istream to collect the input from).

    My lecturer says that the prompting and input should all be handled in the main program, which seems to me somewhat messier, and would make it trickier to extend the program to handle different types of students.

    I am considering, as a compromise, making a helper class that handles the prompting and collection of user input for each type of Student, which could then be called on by the main program. The advantage of this would be that the student classes don't have as much in them (so they're cleaner), but they can still be bundled with the helper classes when the input functionality is required. This also means more classes of Student could be added without having to make major changes to the main program, as long as helper classes are provided for these new classes. Also, the helper class could be swapped for an alternative-language version without having to make any changes to the student class itself.

    What are the major advantages and disadvantages of the three different options for user input (fully encapsulated, helper class, or in the main program)?

    Read the article

  • How to set up an interface method between two view controllers to pass parameters in a Navigation Controller

    - by TechFusion
    Hello, I have created a window-based application with the root controller as a tab bar controller. One tab bar item has a navigation controller. In the navigation controller's view controller implementation, I am pushing another view controller, and I am looking to pass a parameter from the navigation controller's view controller to the pushed view controller. I have tried to pass it as per the method below:

        // ViewController.h
        @interface ViewController : UIViewController {
            NSString *String;
        }
        @property (copy, nonatomic) NSString *String;
        @end

        // ViewController.m
        #import "ViewController1.h"

        ViewController1 *level1view = [[ViewController1 alloc] init];
        level1view.hideBottomBarWhenPushed = YES;
        level1view.theString = String;
        [self.navigationController pushViewController:level1view animated:YES];
        [level1view release];

        // ViewController1.h
        NSString *theString;
        @property (copy, nonatomic) NSString *theString;

    This is working fine, but I want to pass more than one parameter, like integer and UITextField values, so how do I do that? Is there any Apple doc from which I can get an idea about this? Thanks,

    Read the article

  • How can I simply change a class variable from another class in ObjectiveC?

    - by Daniel
    I simply want to change a variable of an object from another class. I can compile without a problem, but my variable is always set to 'null'. I used the following code:

        // Object.h
        @interface Object : NSObject {
            //...
            NSString *color;
            //...
        }
        @property (nonatomic, retain) NSString *color;

        + (id)Object;
        - (void)setColor:(NSString *)col;
        - (NSString *)getColor;
        @end

        // Object.m
        + (id)Object {
            return [[[Object alloc] init] autorelease];
        }

        - (void)setColor:(NSString *)col {
            self.color = col;
        }

        - (NSString *)getColor {
            return self.color;
        }

        // MyViewController.h
        #import "Object.h"

        @interface ClassesTestViewController : UIViewController {
            Object *myObject;
            UILabel *label1;
        }
        @property UILabel *label1;
        @property (assign) Object *myObject;
        @end

        // MyViewController.m
        #import "Object.h"

        @implementation MyViewController
        @synthesize myObject;

        - (void)viewDidLoad {
            [myObject setColor:@"red"];
            NSLog(@"Color = %@", [myObject getColor]);
            [super viewDidLoad];
        }

    The NSLog message is always "Color = (null)". I tried many different ways to solve this problem, but no success. Any help would be appreciated.

    Read the article

  • Is it possible to have asynchronous processing in an Apache module?

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client. I am looking for suggestions for this implementation at the server end. Basically what I need is this:

    1. A client connects to the server. I maintain the socket and metadata about the socket. The metadata records which updates need to be sent to this client.
    2. The server process then waits for new client connections.
    3. One other process has the list of all the open sockets, goes through each of them, and sends the updates if required.

    Can we do something like this in an Apache module:

    1. An Apache process gets the new connection. It maintains the state for the connection, keeps that state in some global memory, and returns to the root process to signify that it is done, so that the root process can accept a new connection.
    2. Although the Apache process has returned status to the root process, it also keeps executing in parallel, going through its global store and sending updates to the client, if any.

    So can an Apache process do these things:

    1. Have more than one connection associated with it?
    2. Asynchronously wait for new connections and at the same time process the previous connections?

    Regards, Prashant

    Read the article

  • Any merit to a lazy-ish juxt function?

    - by NielsK
    In answering a question about a function that maps over multiple functions with the same arguments (A: juxt), I came up with a function that basically took the same form as juxt, but used map:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply %1 %2) funs (repeat args))))

        => ((juxt inc dec str) 1)
        [2 0 "1"]
        => ((could-be-lazy-juxt inc dec str) 1)
        (2 0 "1")
        => ((juxt * / -) 6 2)
        [12 3 4]
        => ((could-be-lazy-juxt * / -) 6 2)
        (12 3 4)

    As posted in the original question, I have little clue about the laziness or performance of it, but timing in the REPL does suggest something lazy-ish is going on:

        => (time (apply (juxt + -) (range 1 100)))
        "Elapsed time: 0.097198 msecs"
        [4950 -4948]
        => (time (apply (could-be-lazy-juxt + -) (range 1 100)))
        "Elapsed time: 0.074558 msecs"
        (4950 -4948)
        => (time (apply (juxt + -) (range 10000000)))
        "Elapsed time: 1019.317913 msecs"
        [49999995000000 -49999995000000]
        => (time (apply (could-be-lazy-juxt + -) (range 10000000)))
        "Elapsed time: 0.070332 msecs"
        (49999995000000 -49999995000000)

    I'm sure this function is not really that quick (the print of the outcome 'feels' about as long in both). Doing a 'take x' on the function only limits the number of functions evaluated, which probably is limited in its applicability, and limiting the other parameters by 'take' should be just as lazy in normal juxt.

    Is this juxt really lazy? Would a lazy juxt bring anything useful to the table, for instance as a compositing step between other lazy functions? What are the performance (mem / cpu / object count / compilation) implications? Is that why the Clojure juxt implementation is done with a reduce and returns a vector?

    Edit: Somehow things can always be done simpler in Clojure:

        (defn could-be-lazy-juxt [& funs]
          (fn [& args]
            (map #(apply % args) funs)))

    Read the article

  • Dealing with multiple generics in a method call

    - by thaBadDawg
    I've been dealing a lot lately with abstract classes that use generics. This is all good and fine because I get a lot of utility out of these classes, but now it's making for some rather ugly code down the line. For example:

        abstract class ClassBase<T>
        {
            T Property { get; set; }
        }

        class MyClass : ClassBase<string>
        {
            OtherClass PropertyDetail { get; set; }
        }

    This implementation isn't all that crazy, except that when I want to reference the abstract class from a helper class, I have to supply a list of generic parameters just to reference the implemented class, like this:

        class Helper
        {
            void HelpMe<C, T>(object Value)
                where C : ClassBase<T>, new()
            {
                DoWork();
            }
        }

    This is just a tame example, because I have some method calls where the list of where clauses ends up being 5 or 6 lines long to handle all of the generic data. What I'd really like to do is:

        class Helper
        {
            void HelpMe<C>(object Value)
                where C : ClassBase, new()
            {
                DoWork();
            }
        }

    but it obviously won't compile. I want to reference ClassBase without having to pass it a whole array of generic classes to get the function to work, but I don't want to reference the higher-level classes because there are a dozen of those. Am I the victim of my own cleverness, or is there an avenue that I haven't considered yet?

    Read the article

  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls:

        HMAC_Update() / HMAC_Final()           // RIPEMD-160
        EVP_CipherUpdate() / EVP_CipherFinal() // Blowfish in CBC mode

    These functions take an unsigned char * into the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian-neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data. My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian-neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming.

    My beliefs/reasoning are based on the following two properties:

    1. std::string (not wstring) internally uses an 8-bit pointer, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture.
    2. Encryption algorithms are, either by design or by implementation, endian-neutral.

    I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, and perhaps it will assist other programmers faced with similar problems.

    Read the article

  • Plotting an Arc in Discrete Steps

    - by phobos51594
    Good afternoon,

    Background: My question relates to the plotting of an arbitrary arc in space using discrete steps. It is unique, however, in that I am not drawing to a canvas in the typical sense. The firmware I am designing is for a gcode interpreter for a CNC mill that will translate commands into stepper motor movements. Now, I have already found a similar question on this very site, but the methodology suggested (Bresenham's algorithm) appears to be incompatible with moving an object in space, as it relies on calculating only one octant of a circle, which is then mirrored about the remaining axes of symmetry. Furthermore, the prescribed method of calculating an arc between two arbitrary angles relies on trigonometry (I am implementing on a microcontroller and would like to avoid costly trig functions, if possible) and on simply not taking the steps that are out of range. Finally, the algorithm is only designed to work in one rotational direction (e.g. counterclockwise).

    Question: So, on to the actual question: does anyone know of a general-purpose algorithm that can be used to "draw" an arbitrary arc in discrete steps while still respecting angular direction (CW / CCW)? The final implementation will be done in C, but the language for the purpose of the question is irrelevant. Thank you in advance.

    References:
    - S.O. post on drawing a simple circle using Bresenham's algorithm: "Drawing" an arc in discrete x-y steps
    - Wiki page describing Bresenham's algorithm for a circle: http://en.wikipedia.org/wiki/Midpoint_circle_algorithm
    - Gcode instructions to be implemented (see G2 and G3): http://linuxcnc.org/docs/html/gcode.html
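
    Since the post says the language is irrelevant for the question, here is a hedged sketch of one direction-aware approach (an illustration under my own assumptions, not a known gcode-interpreter routine): compute the sine and cosine of the step angle once per arc, then advance the point by an incremental 2-D rotation, so no trig runs inside the loop and the sign of the step carries CW vs. CCW.

        public class ArcPlotter {
            // Walk an arc of 'sweep' radians around (cx, cy), starting at
            // startAngle, in discrete steps of at most stepRad radians.
            static void plotArc(double cx, double cy, double radius,
                                double startAngle, double sweep, boolean ccw,
                                double stepRad) {
                int steps = Math.max(1, (int) Math.ceil(Math.abs(sweep) / stepRad));
                double delta = (ccw ? 1.0 : -1.0) * Math.abs(sweep) / steps;
                double cosD = Math.cos(delta), sinD = Math.sin(delta); // once per arc
                double x = radius * Math.cos(startAngle);
                double y = radius * Math.sin(startAngle);
                for (int i = 0; i <= steps; i++) {
                    emitStep(cx + x, cy + y);        // hypothetical stepper output
                    double nx = x * cosD - y * sinD; // rotate the point by delta
                    y = x * sinD + y * cosD;
                    x = nx;
                }
            }

            static void emitStep(double x, double y) {
                System.out.printf("step to (%.4f, %.4f)%n", x, y);
            }
        }

    Because the incremental rotation accumulates floating-point error, a long arc may need an occasional re-normalisation back onto the true radius.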

    Read the article

  • NSTimer Reset Not Working

    - by user355900
    Hi, I have an NSTimer and it works perfectly, counting down from 2:00, but when I hit the reset button it does not work: it just stops the timer, and when I press start again it carries on with the timer as if it had never been stopped. Here is my code:

        @implementation TimerAppDelegate

        @synthesize window;

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            timerLabel.text = @"2:00";
            seconds = 120;
            // Override point for customization after application launch
            [window makeKeyAndVisible];
        }

        - (void)viewDidLoad {
            [timer invalidate];
        }

        - (void)countDownOneSecond {
            seconds--;
            int currentTime = [timerLabel.text intValue];
            int newTime = currentTime - 1;
            int displaySeconds = !(seconds % 60) ? 0 : seconds < 60 ? seconds : seconds - 60;
            int displayMinutes = floor(seconds / 60);
            NSString *time = [NSString stringWithFormat:@"%d:%@%d",
                displayMinutes,
                [[NSString stringWithFormat:@"%d", displaySeconds] length] == 1 ? @"0" : @"",
                displaySeconds];
            timerLabel.text = time;
            if (seconds == 0) {
                [timer invalidate];
            }
        }

        - (void)startOrStopTimer {
            if (timerIsRunning) {
                [timer invalidate];
                [startOrStopButton setTitle:@"Start" forState:UIControlStateNormal];
            } else {
                timer = [[NSTimer scheduledTimerWithTimeInterval:1.0
                                                          target:self
                                                        selector:@selector(countDownOneSecond)
                                                        userInfo:nil
                                                         repeats:YES] retain];
                [startOrStopButton setTitle:@"Stop" forState:UIControlStateNormal];
            }
            timerIsRunning = !timerIsRunning;
        }

        - (void)resetTimer {
            [timer invalidate];
            [startOrStopButton setTitle:@"Start" forState:UIControlStateNormal];
            [timer invalidate];
            timerLabel.text = @"2:00";
        }

        - (void)dealloc {
            [window release];
            [super dealloc];
        }

        @end

    Thanks

    Read the article

  • Fastest XML parser for small, simple documents in Java

    - by Varkhan
    I have to objectify very simple and small XML documents (less than 1k, and it's almost SGML: no namespaces, plain UTF-8, you name it...), read from a stream, in Java. I am using JAXP to process the data from my stream into a Document object. I have tried Xerces; it's way too big and slow. I am now using dom4j, but I am still spending way too much time in org.dom4j.io.SAXReader. Does anybody out there have any suggestion for a faster, more efficient implementation, keeping in mind I have very tough CPU and memory constraints?

    [Edit 1] Keep in mind that my documents are very small, so the overhead of starting the parser can be significant. For instance, I am spending as much time in org.xml.sax.helpers.XMLReaderFactory.createXMLReader as in org.dom4j.io.SAXReader.read.

    [Edit 2] The result has to be in DOM format, as I pass the document to decision tools that do arbitrary processing on it, like switching code based on the value of arbitrary XPaths, but also extracting lists of values packed as children of a predefined node.

    [Edit 3] In any case I eventually need to load/parse the complete document, since all the information it contains is going to be used at some point.

    (This question is related to, but different from, http://stackoverflow.com/questions/373833/best-xml-parser-for-java)
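
    A minimal sketch of one way to attack the startup cost called out in Edit 1 (my assumption, not the poster's code): build the SAXReader once and reuse it for every document, so the XMLReader lookup in org.xml.sax.helpers.XMLReaderFactory.createXMLReader drops off the per-document path.

        import java.io.InputStream;
        import org.dom4j.Document;
        import org.dom4j.io.SAXReader;

        public class SmallDocParser {
            // Constructed once; the reader caches its underlying XMLReader,
            // which is where the expensive factory lookup happens. Note that
            // a SAXReader is not thread-safe, so keep one per thread.
            private final SAXReader reader = new SAXReader();

            public Document parse(InputStream in) throws Exception {
                return reader.read(in);
            }
        }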

    Read the article

  • Credit Card storage solution

    - by jtnire
    Hi everyone, I'm developing a solution that is designed to store membership details, as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far:

    PAN = Primary Account Number == the long number on the credit card.

    Server A is a remote server. It stores all membership details (names, addresses, etc.) and provides an individual Key A for each PAN stored.

    Server B is a local server. It actually holds the encrypted PANs, as well as Key B, and does the decryption.

    To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, as it will have it beforehand. Server B will probably have to send a salt first, though; however, I don't think that has to be encrypted.

    I haven't really thought about any implementation (i.e. coding) specifics yet regarding the above; however, the solution is using Java's Cajo framework (a wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred in this way). The reason why I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server...

    Can anyone see anything wrong with the above design? It doesn't matter if the above has to be changed. Thanks, jtnire
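
    For the Key A hand-off, a minimal sketch with the standard JCE API (the post doesn't name algorithms, so RSA with PKCS#1 padding is an assumption, and the class and method names are hypothetical):

        import java.security.PublicKey;
        import javax.crypto.Cipher;

        public final class KeyWrapper {
            // Server A side: wrap the per-PAN Key A under Server B's public
            // key, so only Server B can recover it. PKCS#1 limits the input
            // to (key size - 11) bytes, which is fine for a symmetric key.
            public static byte[] wrapKeyA(byte[] keyA, PublicKey serverBKey)
                    throws Exception {
                Cipher cipher = Cipher.getInstance("RSA/ECB/PKCS1Padding");
                cipher.init(Cipher.ENCRYPT_MODE, serverBKey);
                return cipher.doFinal(keyA);
            }
        }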

    Read the article

  • Is it safe to silently catch ClassCastException when searching for a specific value?

    - by finnw
    Suppose I am implementing a sorted collection (simple example: a Set based on a sorted array). Consider this (incomplete) implementation:

        import java.util.*;

        public class SortedArraySet<E> extends AbstractSet<E> {

            @SuppressWarnings("unchecked")
            public SortedArraySet(Collection<E> source, Comparator<E> comparator) {
                this.comparator = (Comparator<Object>) comparator;
                this.array = source.toArray();
                Collections.sort(Arrays.asList(array), this.comparator);
            }

            @Override
            public boolean contains(Object key) {
                return Collections.binarySearch(Arrays.asList(array), key, comparator) >= 0;
            }

            private final Object[] array;
            private final Comparator<Object> comparator;
        }

    Now let's create a set of integers:

        Set<Integer> s = new SortedArraySet<Integer>(Arrays.asList(1, 2, 3), null);

    And test whether it contains some specific values:

        System.out.println(s.contains(2));
        System.out.println(s.contains(42));
        System.out.println(s.contains("42"));

    The third line above will throw a ClassCastException. Not what I want. I would prefer it to return false (as HashSet does). I can get this behaviour by catching the exception and returning false:

        @Override
        public boolean contains(Object key) {
            try {
                return Collections.binarySearch(Arrays.asList(array), key, comparator) >= 0;
            } catch (ClassCastException e) {
                return false;
            }
        }

    Assuming the source collection is correctly typed, what could go wrong if I do this?
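
    One hedged alternative to catching the exception (a sketch, not from the post): carry an explicit Class token, since the erased E cannot be tested at runtime, and reject foreign types up front, which is effectively what HashSet's equals-based lookup does.

        import java.util.*;

        // Sketch: the Class token would be a new constructor argument, e.g.
        // new SortedArraySet<Integer>(Integer.class, source, comparator).
        public abstract class TypeCheckedSet<E> extends AbstractSet<E> {
            private final Class<E> elementType;
            private final Object[] array;
            private final Comparator<Object> comparator;

            protected TypeCheckedSet(Class<E> elementType, Object[] array,
                                     Comparator<Object> comparator) {
                this.elementType = elementType;
                this.array = array;
                this.comparator = comparator;
            }

            @Override
            public boolean contains(Object key) {
                if (!elementType.isInstance(key)) {
                    return false; // contains("42") is now false, no exception
                }
                return Collections.binarySearch(Arrays.asList(array), key, comparator) >= 0;
            }
        }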

    Read the article

  • ObjC internals: why did my duck typing attempt fail?

    - by Piotr Czapla
    I've tried to use id to create duck typing in Objective-C. The concept looks fine in theory but failed in practice: I was unable to use any parameters in my methods. The methods were called, but the parameters were wrong. I was getting EXC_BAD_ACCESS for objects and random values for primitives. I've attached a simple example below.

    The question: does anyone know why the method parameters are wrong? What is happening under the hood of Objective-C? Note: I'm interested in the details. I know how to make the example below work.

    An example: I've created a simple class Test that is passed to another class using a property id test.

        @implementation Test

        - (void) aSampleMethodWithFloat:(float) f andInt: (int) i {
            NSLog(@"Parameters: %f, %i\n", f, i);
        }

        @end

    Then in the other class the following loop is executed:

        for (float f = 0.0f; f < 100.0f; f += 0.3f) {
            [self.test aSampleMethodWithOneFloatParameter: f]; // warning: no method found.
        }

    Here is the output that I'm getting. As you can see, the method was called but the parameters were wrong:

        Parameters: 0.000000, 0
        Parameters: -0.000000, 1069128089
        Parameters: -0.000000, 1070176665
        Parameters: 2.000000, 1070805811
        Parameters: -0.000000, 1071225241
        Parameters: 0.000000, 1071644672
        Parameters: 2.000000, 1071854387
        Parameters: 36893488147419103232.000000, 1072064102
        Parameters: -0.000000, 1072273817
        Parameters: -36893488147419103232.000000, 1072483532

    Read the article

  • Most elegant way to write isPrime in Java

    - by Anantha Kumaran
    I am trying to find the fastest way to check whether a given number is prime or not. This is what I finally came up with. Is there any better way than the second implementation (isPrime2)?

        public class Prime {

            public static boolean isPrime1(int n) {
                if (n <= 1) {
                    return false;
                }
                if (n == 2) {
                    return true;
                }
                for (int i = 2; i <= Math.sqrt(n) + 1; i++) {
                    if (n % i == 0) {
                        return false;
                    }
                }
                return true;
            }

            public static boolean isPrime2(int n) {
                if (n <= 1) {
                    return false;
                }
                if (n == 2) {
                    return true;
                }
                if (n % 2 == 0) {
                    return false;
                }
                for (int i = 3; i <= Math.sqrt(n) + 1; i = i + 2) {
                    if (n % i == 0) {
                        return false;
                    }
                }
                return true;
            }
        }

        import java.lang.reflect.InvocationTargetException;
        import java.lang.reflect.Method;
        import java.util.Map.Entry;
        import java.util.TreeMap;

        import org.junit.Assert;
        import org.junit.Test;

        public class PrimeTest {

            public PrimeTest() {
            }

            @Test
            public void testIsPrime() throws IllegalArgumentException,
                    IllegalAccessException, InvocationTargetException {
                Prime prime = new Prime();
                TreeMap<Long, String> methodMap = new TreeMap<Long, String>();
                for (Method method : Prime.class.getDeclaredMethods()) {
                    long startTime = System.currentTimeMillis();
                    int primeCount = 0;
                    for (int i = 0; i < 1000000; i++) {
                        if ((Boolean) method.invoke(prime, i)) {
                            primeCount++;
                        }
                    }
                    long endTime = System.currentTimeMillis();
                    Assert.assertEquals(method.getName() + " failed ", 78498, primeCount);
                    methodMap.put(endTime - startTime, method.getName());
                }
                for (Entry<Long, String> entry : methodMap.entrySet()) {
                    System.out.println(entry.getValue() + " " + entry.getKey() + " milliseconds");
                }
            }
        }
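
    For comparison, a sketch of a third method for the Prime class above using the common 6k±1 variant (an assumption about what "better" might look like; it is the usual next step after skipping even numbers, and it replaces the repeated Math.sqrt call with an overflow-safe i * i <= n test):

        public static boolean isPrime3(int n) {
            if (n <= 1) return false;
            if (n <= 3) return true;  // 2 and 3 are prime
            if (n % 2 == 0 || n % 3 == 0) return false;
            // Every remaining prime has the form 6k - 1 or 6k + 1.
            for (int i = 5; (long) i * i <= n; i += 6) {
                if (n % i == 0 || n % (i + 2) == 0) {
                    return false;
                }
            }
            return true;
        }

    It tests roughly a third of the divisors isPrime2 does, and the (long) cast keeps i * i from overflowing for n near Integer.MAX_VALUE.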

    Read the article
