Search Results

Search found 8219 results on 329 pages for 'less'.


  • Can I make runtime-decided method calls in Java?

    - by Catalin Marin
    I know there is an invoke function that does this; what I am really interested in is the "correctness" of using such a behaviour. My issue is this: I have a service object which contains methods that I consider services. What I want to do is alter the behaviour of those services without intruding on them later. For example:

        class MyService {
            public ServiceResponse ServeMeDonuts() {
                // do stuff...
                return new ServiceResponse();
            }
        }

    After two months I find out that I need to offer the same service to a new client app, and that I also need to do certain extra things, like setting a flag, updating certain data, or encoding the response differently. What I could do is open it up and throw down some IFs. In my opinion this is not good, as it means touching tested code and may result in unwanted behaviour for the existing service clients. So instead I add something to my registry telling the system that "NewClient" has a different behaviour, and I do something like this:

        public interface Behavior {
            public void preExecute();
            public void postExecute();
        }

        public class BehaviorOfMyService implements Behavior {
            String method;
            String clientType;

            public BehaviorOfMyService(String method, String clientType) {
                this.method = method;
                this.clientType = clientType;
            }

            public void preExecute() {
                Method preCall = this.getClass().getMethod("pre" + this.method + this.clientType);
                if (preCall != null) {
                    return preCall.invoke();
                }
                return false;
            }

            // ...same for postExecute()

            public void preServeMeDonutsNewClient() {
                // do the stuff...
            }
        }

    Then the system will do something like this:

        if (registry says there is a different behaviour set for this service object) {
            Class toBeCalled = Class.forName("BehaviorOf" + usedServiceObjectName);
            Object instance = toBeCalled.getConstructor().newInstance(method, client);
            instance.preExecute();
            // ...call the service...
            instance.postExecute();
            // ...
        }

    I am not particularly interested in the correctness of the code so much as the correctness of the thinking and approach. Actually I have to do this in PHP, which I see as a kind of pop music of programming that I have to "sing" for commercial reasons; even though I play pop I really want to sing by the book. So, putting aside my more or less inspired analogy, I really want to know your opinion on this matter, both on its practical necessity and on the technical approach. Thanks
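
    As a side note on the mechanics: Class.getMethod never returns null, it throws NoSuchMethodException, and invoke needs the target instance as an argument. Below is a minimal, hedged sketch of how such a pre/post hook dispatcher could compile in Java; the class name and the no-argument hook convention are illustrative assumptions, not part of the original design.

        import java.lang.reflect.Method;

        public class BehaviorDispatcher {
            private final Object target;      // e.g. an instance of BehaviorOfMyService
            private final String method;      // e.g. "ServeMeDonuts"
            private final String clientType;  // e.g. "NewClient"

            public BehaviorDispatcher(Object target, String method, String clientType) {
                this.target = target;
                this.method = method;
                this.clientType = clientType;
            }

            // Invokes pre<Method><ClientType>() / post<Method><ClientType>() if it exists;
            // returns whether a hook actually ran.
            private boolean call(String prefix) {
                try {
                    Method hook = target.getClass().getMethod(prefix + method + clientType);
                    hook.invoke(target);
                    return true;
                } catch (NoSuchMethodException e) {
                    return false; // no hook registered for this client; not an error
                } catch (Exception e) {
                    throw new RuntimeException("Hook " + prefix + method + clientType + " failed", e);
                }
            }

            public boolean preExecute()  { return call("pre"); }
            public boolean postExecute() { return call("post"); }
        }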

  • Switch case assembly level code

    - by puffadder
    Hi All, I am programming C on cygwin windows. After having done a bit of C programming and getting comfortable with the language, I wanted to look under the hood and see what the compiler is doing for the code that I write. So I wrote down a code block containing switch case statements and converted them into assembly using: gcc -S foo.c Here is the C source: switch(i) { case 1: { printf("Case 1\n"); break; } case 2: { printf("Case 2\n"); break; } case 3: { printf("Case 3\n"); break; } case 4: { printf("Case 4\n"); break; } case 5: { printf("Case 5\n"); break; } case 6: { printf("Case 6\n"); break; } case 7: { printf("Case 7\n"); break; } case 8: { printf("Case 8\n"); break; } case 9: { printf("Case 9\n"); break; } case 10: { printf("Case 10\n"); break; } default: { printf("Nothing\n"); break; } } Now the resultant assembly for the same is: movl $5, -4(%ebp) cmpl $10, -4(%ebp) ja L13 movl -4(%ebp), %eax sall $2, %eax movl L14(%eax), %eax jmp *%eax .section .rdata,"dr" .align 4 L14: .long L13 .long L3 .long L4 .long L5 .long L6 .long L7 .long L8 .long L9 .long L10 .long L11 .long L12 .text L3: movl $LC0, (%esp) call _printf jmp L2 L4: movl $LC1, (%esp) call _printf jmp L2 L5: movl $LC2, (%esp) call _printf jmp L2 L6: movl $LC3, (%esp) call _printf jmp L2 L7: movl $LC4, (%esp) call _printf jmp L2 L8: movl $LC5, (%esp) call _printf jmp L2 L9: movl $LC6, (%esp) call _printf jmp L2 L10: movl $LC7, (%esp) call _printf jmp L2 L11: movl $LC8, (%esp) call _printf jmp L2 L12: movl $LC9, (%esp) call _printf jmp L2 L13: movl $LC10, (%esp) call _printf L2: Now, in the assembly, the code is first checking the last case (i.e. case 10) first. This is very strange. And then it is copying 'i' into 'eax' and doing things that are beyond me. I have heard that the compiler implements some jump table for switch..case. Is it what this code is doing? Or what is it doing and why? Because in case of less number of cases, the code is pretty similar to that generated for if...else ladder, but when number of cases increases, this unusual-looking implementation is seen. Thanks in advance.
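
    For what it is worth, the initial cmpl $10 / ja pair is not the compiler "checking case 10 first"; it is the bounds check that guards the jump table at L14 before the indexed jmp *%eax. A rough, hedged C analogue of that dispatch is sketched below, using a function-pointer table instead of raw code labels; the names and the shortened table are illustrative only.

        #include <stdio.h>

        static void case1(void)   { printf("Case 1\n"); }
        static void case2(void)   { printf("Case 2\n"); }
        static void nothing(void) { printf("Nothing\n"); }

        /* Analogue of the generated code: one range check, then an indexed jump. */
        static void dispatch(unsigned int i)
        {
            /* mirrors the L14 table of code addresses, shortened to 3 entries here */
            static void (*const table[])(void) = { nothing, case1, case2 };

            if (i > 2)        /* mirrors "cmpl ...; ja L13": out of range -> default */
                nothing();
            else
                table[i]();   /* mirrors "movl L14(%eax), %eax; jmp *%eax" */
        }

        int main(void)
        {
            dispatch(1);
            dispatch(7);      /* out of range for this shortened table -> "Nothing" */
            return 0;
        }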

  • Java Runtime.freeMemory() returning bizarre results when adding more objects

    - by Sotirios Delimanolis
    For whatever reason, I wanted to see how many objects I could create and populate a LinkedList with. I used Runtime.getRuntime().freeMemory() to get the approximation of free memory in my JVM. I wrote this: public static void main(String[] arg) { Scanner kb = new Scanner(System.in); List<Long> mem = new LinkedList<Long>(); while (true) { System.out.println("Max memory: " + Runtime.getRuntime().maxMemory() + ". Available memory: " + Runtime.getRuntime().freeMemory() + " bytes. Press enter to use more."); String s = kb.nextLine(); if (s.equals("m")) for (int i = 0; i < 1000000; i++) { mem.add(new Long((new Random()).nextLong())); } } } If I write in m, the app adds a million Long objects to the list. You would think the more objects (to which we have references, so can't be gc'ed), the less free memory. Running the code: Max memory: 1897725952. Available memory: 127257696 bytes. m Max memory: 1897725952. Available memory: 108426520 bytes. m Max memory: 1897725952. Available memory: 139873296 bytes. m Max memory: 1897725952. Available memory: 210632232 bytes. m Max memory: 1897725952. Available memory: 137268792 bytes. m Max memory: 1897725952. Available memory: 239504784 bytes. m Max memory: 1897725952. Available memory: 169507792 bytes. m Max memory: 1897725952. Available memory: 259686128 bytes. m Max memory: 1897725952. Available memory: 189293488 bytes. m Max memory: 1897725952. Available memory: 387686544 bytes. The available memory fluctuates. How does this happen? Is the GC cleaning up other things (what other things are there on the heap to really clean up?), is the freeMemory() method returning an approximation that's way off? Am I missing something or am I crazy?
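
    One detail that usually explains this: freeMemory() is measured against totalMemory(), the heap the JVM has currently committed, and that heap grows on demand toward maxMemory(), so the "free" number can jump upward right after a burst of allocation. A small hedged sketch of a steadier measurement (used = total - free, optionally after hinting a GC):

        public class UsedMemoryDemo {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                System.gc(); // only a hint, but it settles the numbers somewhat
                long used = rt.totalMemory() - rt.freeMemory();
                System.out.printf("used=%d bytes, committed=%d, max=%d%n",
                        used, rt.totalMemory(), rt.maxMemory());
            }
        }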

  • Parallel For Loop - Problems when adding to a List

    - by Kevin Crowell
    I am having some issues involving Parallel for loops and adding to a List. The problem is, the same code may generate different output at different times. I have set up some test code below. In this code, I create a List of 10,000 int values. 1/10th of the values will be 0, 1/10th of the values will be 1, all the way up to 1/10th of the values being 9. After setting up this List, I setup a Parallel for loop that iterates through the list. If the current number is 0, I add a value to a new List. After the Parallel for loop completes, I output the size of the list. The size should always be 1,000. Most of the time, the correct answer is given. However, I have seen 3 possible incorrect outcomes occur: The size of the list is less than 1,000 An IndexOutOfRangeException occurs @ doubleList.Add(0.0); An ArgumentException occurs @ doubleList.Add(0.0); The message for the ArgumentException given was: Destination array was not long enough. Check destIndex and length, and the array's lower bounds. What could be causing the errors? Is this a .Net bug? Is there something I can do to prevent this from happening? Please try the code for yourself. If you do not get an error, try it a few times. Please also note that you probably will not see any errors using a single-core machine. using System; using System.Collections.Generic; using System.Threading.Tasks; namespace ParallelTest { class Program { static void Main(string[] args) { List<int> intList = new List<int>(); List<double> doubleList = new List<double>(); for (int i = 0; i < 250; i++) { intList.Clear(); doubleList.Clear(); for (int j = 0; j < 10000; j++) { intList.Add(j % 10); } Parallel.For(0, intList.Count, j => { if (intList[j] == 0) { doubleList.Add(0.0); } }); if (doubleList.Count != 1000) { Console.WriteLine("On iteration " + i + ": List size = " + doubleList.Count); } } Console.WriteLine("\nPress any key to exit."); Console.ReadKey(); } } }
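
    The symptoms described are what concurrent calls to List<T>.Add tend to produce: List<T> is not thread-safe, so parallel writers can corrupt its internal array. A hedged sketch of the same test using a thread-safe collection instead (a lock around the Add would work as well):

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        class SafeParallelAdd
        {
            static void Main()
            {
                var intList = new List<int>();
                for (int j = 0; j < 10000; j++) intList.Add(j % 10);

                var doubles = new ConcurrentBag<double>(); // thread-safe, unlike List<T>
                Parallel.For(0, intList.Count, j =>
                {
                    if (intList[j] == 0) doubles.Add(0.0);
                });

                Console.WriteLine(doubles.Count); // reliably 1000
            }
        }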

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable length record format much like CSV that I've successfully implemented using the C standard library file I/O functions. When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well, however I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records are and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area. The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file and writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users would end up with a file exceeding 2MB in size. I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?
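
    For reference, DOS itself can truncate a file in place: calling INT 21h function 40h (write) with CX=0 sets the file size to the current file-pointer position. Below is a hedged sketch of that call; it assumes a 16-bit DOS compiler whose <dos.h> provides union REGS and int86() (most do, but check the Digital Mars headers for the exact names), and it needs the DOS file handle rather than a stdio FILE*.

        #include <dos.h>

        /* Truncate an open file (DOS handle) to new_size bytes. Returns 0 on success. */
        int dos_truncate(int handle, long new_size)
        {
            union REGS r;

            /* INT 21h, AH=42h, AL=00h: seek to new_size from the start of the file. */
            r.h.ah = 0x42;
            r.h.al = 0x00;
            r.x.bx = handle;
            r.x.cx = (unsigned)(new_size >> 16);    /* high word of the offset */
            r.x.dx = (unsigned)(new_size & 0xFFFF); /* low word of the offset  */
            int86(0x21, &r, &r);
            if (r.x.cflag)
                return -1;

            /* INT 21h, AH=40h with CX=0: write zero bytes, which truncates (or
             * extends) the file at the current file-pointer position. */
            r.h.ah = 0x40;
            r.x.bx = handle;
            r.x.cx = 0;
            int86(0x21, &r, &r);
            return r.x.cflag ? -1 : 0;
        }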

  • c++ class member functions selected by traits

    - by Jive Dadson
    I am reluctant to say I can't figure this out, but I can't figure this out. I've googled and searched Stack Overflow, and come up empty. The abstract, and possibly overly vague, form of the question is: how can I use the traits pattern to instantiate non-virtual member functions?

    The question came up while modernizing a set of multivariate function optimizers that I wrote more than 10 years ago. The optimizers all operate by selecting a straight-line path through the parameter space away from the current best point (the "update"), then finding a better point on that line (the "line search"), then testing for the "done" condition, and if not done, iterating. There are different methods for doing the update, the line search, and conceivably the done test, among other things. Mix and match.

    Different update formulae require different state-variable data. For example, the LMQN update requires a vector, and the BFGS update requires a matrix. If evaluating gradients is cheap, the line search should do so; if not, it should use function evaluations only. Some methods require more accurate line searches than others. Those are just some examples.

    The original version instantiates several of the combinations by means of virtual functions. Some traits are selected by setting mode bits. Yuck. It would be trivial to define the traits with #defines and the member functions with #ifdefs and macros, but that's so twenty years ago. It bugs me that I cannot figure out a whiz-bang modern way.

    If there were only one trait that varied, I could use the curiously recurring template pattern, but I see no way to extend that to arbitrary combinations of traits. I tried doing it using boost::enable_if, etc. The specialized state info was easy. I managed to get the functions done, but only by resorting to non-friend external functions that take the this-pointer as a parameter. I never even figured out how to make the functions friends, much less member functions. Perhaps tag dispatch is the key; I haven't gotten very deeply into that.

    Surely it's possible, right? If so, what is best practice?
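
    One technique that fits the "mix and match, no virtuals" requirement is policy-based design: each varying trait becomes a small policy class carrying both its state and its member functions, and the optimizer is a template that inherits from (or contains) whichever policies are chosen. A hedged sketch follows; all class names and the empty function bodies are illustrative placeholders, not the original optimizer code.

        #include <vector>

        struct LmqnUpdate {                        // update rule that needs a vector of state
            std::vector<double> direction;
            void update() { /* LMQN formula here */ }
        };

        struct BfgsUpdate {                        // update rule that needs a matrix of state
            std::vector<double> hessianApprox;
            void update() { /* BFGS formula here */ }
        };

        struct GradientLineSearch  { void search() { /* uses gradients */ } };
        struct ValueOnlyLineSearch { void search() { /* function values only */ } };

        template <class UpdatePolicy, class LineSearchPolicy>
        class Optimizer : private UpdatePolicy, private LineSearchPolicy {
        public:
            void iterate() {
                this->update();   // resolved at compile time, real member calls, no virtuals
                this->search();
                // ...test the "done" condition, loop, etc.
            }
        };

        // Any combination of traits instantiates without writing a new class:
        // Optimizer<BfgsUpdate, GradientLineSearch> opt;
        // opt.iterate();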

  • Error Handling for Application in PHP

    - by Zubair1
    I was wondering if someone could show me a good way to handle errors in my PHP app that I can also easily reuse across my code. So far I have been using the following functions.

    Inline errors:

        function display_errors_for($fieldname) {
            global $errors;
            if (isset($errors[$fieldname])) {
                return '<label for="' .$fieldname. '" class="error">' . ucfirst($errors[$fieldname]) . '</label>';
            } else {
                return false;
            }
        }

    All errors:

        function display_all_errors($showCounter = true) {
            global $errors;
            $counter = 0;
            foreach ($errors as $errorFieldName => $errorText) {
                if ($showCounter == true) {
                    $counter++;
                    echo '<li>' . $counter . ' - <label for="' .$errorFieldName. '">' .$errorText. '</label></li>';
                } else {
                    echo '<li><label for="' .$errorFieldName. '">' .$errorText. '</label></li>';
                }
            }
        }

    I have $errors = array(); defined at the top of my global file, so it is available to all files. The way I use it is that if I encounter an error, I push a new error key/value onto the $errors array, something like the following:

        if (strlen($username) < 3) {
            $errors['username'] = "usernames cannot be less than 3 characters.";
        }

    This all works great, but I was wondering if someone has a better approach for this, perhaps with classes? I don't think I want to use exceptions; try/catch seems like overkill to me. I'm planning to make a new app, and I'll be getting my hands wet with OOP a lot. I have already made apps using OOP, but this time I'm planning to go even deeper and use the OOP approach more extensively.

    What I have in mind is something like this. It is just a basic class; I will add further detail to it as I go deeper into my app and see what it needs.

        class Errors {
            public $errors = array();

            public function __construct() {
                // Initialize default values
                // Initialize methods
            }

            public function __destruct() {
                // nothing here too...
            }

            public function add($errorField, $errorDesc) {
                if (!is_string($errorField)) return false;
                if (!is_string($errorDesc)) return false;
                $this->errors[$errorField] = $errorDesc;
            }

            public function display($errorsArray) {
                // code to iterate through the array and display all errors.
            }
        }

    Please share your thoughts: is this a good way to make a reusable class to store and display errors for an entire app, or is getting more familiar with exceptions and try/catch my only choice?

  • CSS three column layout, liquid center, no left-margin!

    - by moontear
    Hi, I am all in favor of CSS based layouts, but this one I just can't figure out. With a table it is oh-so-easy: <html> <head><title>Three Column</title></head> <body> <p>Test</p> <table style="width: 100%; border: 1px solid black; min-height: 300px;"> <tr> <td style="border: 1px solid green;" colspan="3">Header</td> </tr> <tr> <td style="border: 1px solid green; width: 150px;" rowspan="2">Left</td> <td style="border: 1px solid yellow;">Content</td> <td style="border: 1px solid blue; width: 200px;" rowspan="2">Right</td> </tr> <tr> <td style="border: 1px solid fuchsia;">Additional stuff</td> </tr> <tr><td style="border: 1px solid green;" colspan="3">Footer</td></tr> </body> <html> Left is fixed width Right is fixed width Content is liquid Additional stuff sits beneath content Now here is the important part: "Left" may not exist. Again this is easy with the table. Delete the column and "Content" expands. Beautiful. I have looked through many examples (and "holy grails") of liquid and table less three-column CSS based layouts, but I have not found one which is not using some kind of margin-left for the middle column ("Content"). Any margin-left will suck once "Left" is gone as "Content" will just stay at it's place. I'm just about to switch to old school table based layout for this problem, so I'm hoping someone has some idea - I don't care about excess markup, wrappers and the like, I would just like to know how to solve this with plain CSS. Btw: look at how easy equal height columns are... Cheers PS: No CSS3 please

  • Catch Multiple Custom Exceptions? - C++

    - by Alex
    Hi all, I'm a student in my first C++ programming class, and I'm working on a project where we have to create multiple custom exception classes, and then in one of our event handlers, use a try/catch block to handle them appropriately. My question is: How do I catch my multiple custom exceptions in my try/catch block? GetMessage() is a custom method in my exception classes that returns the exception explanation as a std::string. Below I've included all the relevant code from my project. Thanks for your help! try/catch block // This is in one of my event handlers, newEnd is a wxTextCtrl try { first.ValidateData(); newEndT = first.ComputeEndTime(); *newEnd << newEndT; } catch (// don't know what do to here) { wxMessageBox(_(e.GetMessage()), _("Something Went Wrong!"), wxOK | wxICON_INFORMATION, this);; } ValidateData() Method void Time::ValidateData() { int startHours, startMins, endHours, endMins; startHours = startTime / MINUTES_TO_HOURS; startMins = startTime % MINUTES_TO_HOURS; endHours = endTime / MINUTES_TO_HOURS; endMins = endTime % MINUTES_TO_HOURS; if (!(startHours <= HOURS_MAX && startHours >= HOURS_MIN)) throw new HourOutOfRangeException("Beginning Time Hour Out of Range!"); if (!(endHours <= HOURS_MAX && endHours >= HOURS_MIN)) throw new HourOutOfRangeException("Ending Time Hour Out of Range!"); if (!(startMins <= MINUTE_MAX && startMins >= MINUTE_MIN)) throw new MinuteOutOfRangeException("Starting Time Minute Out of Range!"); if (!(endMins <= MINUTE_MAX && endMins >= MINUTE_MIN)) throw new MinuteOutOfRangeException("Ending Time Minute Out of Range!"); if(!(timeDifference <= P_MAX && timeDifference >= P_MIN)) throw new PercentageOutOfRangeException("Percentage Change Out of Range!"); if (!(startTime < endTime)) throw new StartEndException("Start Time Cannot Be Less Than End Time!"); } Just one of my custom exception classes, the others have the same structure as this one class HourOutOfRangeException { public: // param constructor // initializes message to passed paramater // preconditions - param will be a string // postconditions - message will be initialized // params a string // no return type HourOutOfRangeException(string pMessage) : message(pMessage) {} // GetMessage is getter for var message // params none // preconditions - none // postconditions - none // returns string string GetMessage() { return message; } // destructor ~HourOutOfRangeException() {} private: string message; };
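
    A couple of notes worth folding into the sketch below: catch clauses can simply be stacked after one try block, and "throw new HourOutOfRangeException(...)" throws a pointer, so either the catch must take a pointer or (more idiomatically) the throw should be by value and the catch by const reference. The shared base class in this hedged sketch is an addition for illustration, not part of the original code; it lets one clause handle every validation error while still allowing more specific clauses first.

        #include <iostream>
        #include <string>

        class ValidationException {                       // hypothetical common base
        public:
            explicit ValidationException(const std::string& msg) : message(msg) {}
            virtual ~ValidationException() {}
            const std::string& GetMessage() const { return message; }
        private:
            std::string message;
        };

        class HourOutOfRangeException : public ValidationException {
        public:
            explicit HourOutOfRangeException(const std::string& m) : ValidationException(m) {}
        };

        class StartEndException : public ValidationException {
        public:
            explicit StartEndException(const std::string& m) : ValidationException(m) {}
        };

        void ValidateData(int startHours, int startTime, int endTime) {
            if (startHours < 0 || startHours > 23)
                throw HourOutOfRangeException("Beginning Time Hour Out of Range!"); // by value, no "new"
            if (!(startTime < endTime))
                throw StartEndException("Start Time Cannot Be Less Than End Time!");
        }

        int main() {
            try {
                ValidateData(7, 500, 400);
            }
            catch (const HourOutOfRangeException& e) {    // most specific handlers first
                std::cout << "Hour problem: " << e.GetMessage() << '\n';
            }
            catch (const ValidationException& e) {        // base clause catches the rest
                std::cout << e.GetMessage() << '\n';
            }
            return 0;
        }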

  • Interoperability between two AES algorithms

    - by lpfavreau
    Hello, I'm new to cryptography and I'm building some test applications to try and understand the basics of it. I'm not trying to build the algorithms from scratch but I'm trying to make two different AES-256 implementation talk to each other. I've got a database that was populated with this Javascript implementation stored in Base64. Now, I'm trying to get an Objective-C method to decrypt its content but I'm a little lost as to where the differences in the implementations are. I'm able to encrypt/decrypt in Javascript and I'm able to encrypt/decrypt in Cocoa but cannot make a string encrypted in Javascript decrypted in Cocoa or vice-versa. I'm guessing it's related to the initialization vector, nonce, counter mode of operation or all of these, which quite frankly, doesn't speak to me at the moment. Here's what I'm using in Objective-C, adapted mainly from this and this: @implementation NSString (Crypto) - (NSString *)encryptAES256:(NSString *)key { NSData *input = [self dataUsingEncoding: NSUTF8StringEncoding]; NSData *output = [NSString cryptoAES256:input key:key doEncrypt:TRUE]; return [Base64 encode:output]; } - (NSString *)decryptAES256:(NSString *)key { NSData *input = [Base64 decode:self]; NSData *output = [NSString cryptoAES256:input key:key doEncrypt:FALSE]; return [[[NSString alloc] initWithData:output encoding:NSUTF8StringEncoding] autorelease]; } + (NSData *)cryptoAES256:(NSData *)input key:(NSString *)key doEncrypt:(BOOL)doEncrypt { // 'key' should be 32 bytes for AES256, will be null-padded otherwise char keyPtr[kCCKeySizeAES256 + 1]; // room for terminator (unused) bzero(keyPtr, sizeof(keyPtr)); // fill with zeroes (for padding) // fetch key data [key getCString:keyPtr maxLength:sizeof(keyPtr) encoding:NSUTF8StringEncoding]; NSUInteger dataLength = [input length]; // See the doc: For block ciphers, the output size will always be less than or // equal to the input size plus the size of one block. // That's why we need to add the size of one block here size_t bufferSize = dataLength + kCCBlockSizeAES128; void* buffer = malloc(bufferSize); size_t numBytesCrypted = 0; CCCryptorStatus cryptStatus = CCCrypt(doEncrypt ? kCCEncrypt : kCCDecrypt, kCCAlgorithmAES128, kCCOptionECBMode | kCCOptionPKCS7Padding, keyPtr, kCCKeySizeAES256, nil, // initialization vector (optional) [input bytes], dataLength, // input buffer, bufferSize, // output &numBytesCrypted ); if (cryptStatus == kCCSuccess) { // the returned NSData takes ownership of the buffer and will free it on deallocation return [NSData dataWithBytesNoCopy:buffer length:numBytesCrypted]; } free(buffer); // free the buffer; return nil; } @end Of course, the input is Base64 decoded beforehand. I see that each encryption with the same key and same content in Javascript gives a different encrypted string, which is not the case with the Objective-C implementation that always give the same encrypted string. I've read the answers of this post and it makes me believe I'm right about something along the lines of vector initialization but I'd need your help to pinpoint what's going on exactly. Thank you!

  • Android pluginable application

    - by Alxandr
    I've been trying to create an Android application for the last couple of weeks, and mostly everything has worked out great, but there is one thing that I was wondering about, and that is pluginability through the use of intents.

    What I'm trying to create is basically a comic reader. In the version I use now, I open the application and get a list of comics that are my favourites, then I enter one to get a detailed view, and finally I enter a page. This is managed through 3 activities: List, Details and Page. However, as of now the application can only read comics from one source (a specialized XML feed coming from my server), and I was hoping to be able to expand this a little. (Also, the Page activity and some other stuff need to be cleaned up, so I'm thinking about remaking it from scratch and treating the first go as a learning round.)

    So I came up with an idea which I think sounds great, but I don't know if it's possible. This is what I'm thinking:

    1. The user enters the application and gets an (initially empty) list of comics.
    2. The user hits a button to find comics. This launches an intent that says something like "find comic", which should cause the system to display all matching activities. This would make it possible to provide different comic providers through different applications.
    3. Another activity kicks in and might display some options to the user (for instance a file browser), or might not (in the case of an XML feed, which should just load).
    4. The list is returned to the first activity and displayed to the user. The second (find) activity is closed.
    5. The user picks a comic from the list. This should open a details activity. The details activity should receive a key which corresponds to the comic selected; this key should be unique amongst the comic providers. The details view should get its data through some kind of content provider, or an activity (whichever is better suited, if either of them is).
    6. The user can select a page. This should be the same routine as step 5.

    My question is: is this possible in the Android system? And if it is, is it a bad idea? Also, is there any better way to achieve more or less the same thing?

  • Initialization of components with interdependencies - possible antipattern?

    - by Rosarch
    I'm writing a game that has many components. Many of these are dependent upon one another. When creating them, I often get into catch-22 situations like "WorldState's constructor requires a PathPlanner, but PathPlanner's constructor requires WorldState." Originally, this was less of a problem, because references to everything needed were kept around in GameEngine, and GameEngine was passed around to everything. But I didn't like the feel of that, because it felt like we were giving too much access to different components, making it harder to enforce boundaries. Here is the problematic code: /// <summary> /// Constructor to create a new instance of our game. /// </summary> public GameEngine() { graphics = new GraphicsDeviceManager(this); Components.Add(new GamerServicesComponent(this)); //Sets dimensions of the game window graphics.PreferredBackBufferWidth = 800; graphics.PreferredBackBufferHeight = 600; graphics.ApplyChanges(); IsMouseVisible = true; screenManager = new ScreenManager(this); //Adds ScreenManager as a component, making all of its calls done automatically Components.Add(screenManager); // Tell the program to load all files relative to the "Content" directory. Assets = new CachedContentLoader(this, "Content"); inputReader = new UserInputReader(Constants.DEFAULT_KEY_MAPPING); collisionRecorder = new CollisionRecorder(); WorldState = new WorldState(new ReadWriteXML(), Constants.CONFIG_URI, this, contactReporter); worldQueryUtils = new WorldQueryUtils(worldQuery, WorldState.PhysicsWorld); ContactReporter contactReporter = new ContactReporter(collisionRecorder, worldQuery, worldQueryUtils); gameObjectManager = new GameObjectManager(WorldState, assets, inputReader, pathPlanner); worldQuery = new DefaultWorldQueryEngine(collisionRecorder, gameObjectManager.Controllers); gameObjectManager.WorldQueryEngine = worldQuery; pathPlanner = new PathPlanner(this, worldQueryUtils, WorldQuery); gameObjectManager.PathPlanner = pathPlanner; combatEngine = new CombatEngine(worldQuery, new Random()); } Here is an excerpt of the above that's problematic: gameObjectManager = new GameObjectManager(WorldState, assets, inputReader, pathPlanner); worldQuery = new DefaultWorldQueryEngine(collisionRecorder, gameObjectManager.Controllers); gameObjectManager.WorldQueryEngine = worldQuery; I hope that no one ever forgets that setting of gameObjectManager.WorldQueryEngine, or else it will fail. Here is the problem: gameObjectManager needs a WorldQuery, and WorldQuery needs a property of gameObjectManager. What can I do about this? Have I found an anti-pattern?

  • php paypal ipn membership error

    - by aleksander haugas
    Hello. I have integrated the membership subscription on the website, but I am failing to insert the data into the database. To check that the functions run, I write the posted data to a txt file with PHP, and that file is generated correctly; but I cannot verify any of the data, let alone insert it.

        <?php
        // read the post from PayPal system and add 'cmd'
        $req = 'cmd=_notify-validate';
        foreach ($_POST as $key => $value) {
            $value = urlencode(stripslashes($value));
            $req .= "&$key=$value";
        }

        // Post back to PayPal system to validate
        $header .= "POST /cgi-bin/webscr HTTP/1.0\r\n";
        $header .= "Content-Type: application/x-www-form-urlencoded\r\n";
        $header .= "Content-Length: " . strlen($req) . "\r\n\r\n";
        $fp = fsockopen ('ssl://www.paypal.com', 443, $errno, $errstr, 30);

        // Assign posted variables to local variables
        $item_name = $_POST['item_name'];
        $item_number = $_POST['item_number'];
        $payment_status = $_POST['payment_status'];
        $payment_amount = $_POST['mc_gross'];
        $payment_currency = $_POST['mc_currency'];
        $txn_id = $_POST['txn_id'];
        $receiver_email = $_POST['receiver_email'];
        $payer_email = $_POST['payer_email'];
        $user_id = $_POST['custom'];

        // Here I make the txt file and it works correctly,
        // but after this point it does not work or make a txt file.

        // Insert and update data
        if (!$fp) {
            // HTTP ERROR
        } else {
            fputs ($fp, $header . $req);
            while (!feof($fp)) {
                $res = fgets ($fp, 1024);
                if (strcmp ($res, "VERIFIED") == 0) {
                    // Check the status of the order
                    if ($payment_status == 'Completed') {
                        $txn_id_check = mysql_query("SELECT Transaction_ID FROM ".$db_table_prefix."Users_Payments WHERE Transaction_ID='".$txn_id."'");
                        if (mysql_num_rows($txn_id_check) != 1) {
                            // If the payment was made to the specified owner and the owner matches
                            if ($receiver_email == $website_Payment_Email) {
                                // Account type according to the amount paid
                                if ($payment_amount == '10.00' && $payment_currency == 'EUR') {
                                    // add txn_id to database
                                    $log_query = mysql_query("INSERT INTO `".$db_table_prefix."Users_Payments` VALUES ('','".$txn_id."','".$payer_email."')");
                                    // update premium to 3
                                    $update_premium = mysql_query("UPDATE ".$db_table_prefix."Users SET Group_ID='3' WHERE User_ID='".$user_id."'");
                                }
                            }
                        }
                    }
                } else if (strcmp ($res, "INVALID") == 0) {
                    // log for manual investigation
                }
            }
            fclose ($fp);
        }
        ?>

    I call this file with require_once() together with the connection to the database. Any ideas? Thanks.

  • Go - Using a container/heap to implement a priority queue

    - by Seth Hoenig
    In the big picture, I'm trying to implement Dijkstra's algorithm using a priority queue. According to members of golang-nuts, the idiomatic way to do this in Go is to use the heap interface with a custom underlying data structure. So I have created Node.go and PQueue.go like so: //Node.go package pqueue type Node struct { row int col int myVal int sumVal int } func (n *Node) Init(r, c, mv, sv int) { n.row = r n.col = c n.myVal = mv n.sumVal = sv } func (n *Node) Equals(o *Node) bool { return n.row == o.row && n.col == o.col } And PQueue.go: // PQueue.go package pqueue import "container/vector" import "container/heap" type PQueue struct { data vector.Vector size int } func (pq *PQueue) Init() { heap.Init(pq) } func (pq *PQueue) IsEmpty() bool { return pq.size == 0 } func (pq *PQueue) Push(i interface{}) { heap.Push(pq, i) pq.size++ } func (pq *PQueue) Pop() interface{} { pq.size-- return heap.Pop(pq) } func (pq *PQueue) Len() int { return pq.size } func (pq *PQueue) Less(i, j int) bool { I := pq.data.At(i).(Node) J := pq.data.At(j).(Node) return (I.sumVal + I.myVal) < (J.sumVal + J.myVal) } func (pq *PQueue) Swap(i, j int) { temp := pq.data.At(i).(Node) pq.data.Set(i, pq.data.At(j).(Node)) pq.data.Set(j, temp) } And main.go: (the action is in SolveMatrix) // Euler 81 package main import "fmt" import "io/ioutil" import "strings" import "strconv" import "./pqueue" const MATSIZE = 5 const MATNAME = "matrix_small.txt" func main() { var matrix [MATSIZE][MATSIZE]int contents, err := ioutil.ReadFile(MATNAME) if err != nil { panic("FILE IO ERROR!") } inFileStr := string(contents) byrows := strings.Split(inFileStr, "\n", -1) for row := 0; row < MATSIZE; row++ { byrows[row] = (byrows[row])[0 : len(byrows[row])-1] bycols := strings.Split(byrows[row], ",", -1) for col := 0; col < MATSIZE; col++ { matrix[row][col], _ = strconv.Atoi(bycols[col]) } } PrintMatrix(matrix) sum, len := SolveMatrix(matrix) fmt.Printf("len: %d, sum: %d\n", len, sum) } func PrintMatrix(mat [MATSIZE][MATSIZE]int) { for r := 0; r < MATSIZE; r++ { for c := 0; c < MATSIZE; c++ { fmt.Printf("%d ", mat[r][c]) } fmt.Print("\n") } } func SolveMatrix(mat [MATSIZE][MATSIZE]int) (int, int) { var PQ pqueue.PQueue var firstNode pqueue.Node var endNode pqueue.Node msm1 := MATSIZE - 1 firstNode.Init(0, 0, mat[0][0], 0) endNode.Init(msm1, msm1, mat[msm1][msm1], 0) if PQ.IsEmpty() { // make compiler stfu about unused variable fmt.Print("empty") } PQ.Push(firstNode) // problem return 0, 0 } The problem is, upon compiling i get the error message: [~/Code/Euler/81] $ make 6g -o pqueue.6 Node.go PQueue.go 6g main.go main.go:58: implicit assignment of unexported field 'row' of pqueue.Node in function argument make: *** [all] Error 1 And commenting out the line PQ.Push(firstNode) does satisfy the compiler. But I don't understand why I'm getting the error message in the first place. Push doesn't modify the argument in any way.
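
    If it helps, the error is about the call PQ.Push(firstNode): passing a pqueue.Node by value copies the struct from outside the package, and that copy implicitly assigns the unexported fields (row, col, ...), which the 6g toolchain of that era rejects. Storing pointers in the queue avoids the struct copy altogether. A hedged sketch of the change (the type assertions in Less and Swap must be adjusted to match):

        // PQueue.go: assert *Node instead of Node
        func (pq *PQueue) Less(i, j int) bool {
            I := pq.data.At(i).(*Node)
            J := pq.data.At(j).(*Node)
            return (I.sumVal + I.myVal) < (J.sumVal + J.myVal)
        }

        // main.go: build and push pointers, so only the pointer crosses the package boundary
        firstNode := new(pqueue.Node)
        firstNode.Init(0, 0, mat[0][0], 0)
        endNode := new(pqueue.Node)
        endNode.Init(msm1, msm1, mat[msm1][msm1], 0)
        PQ.Push(firstNode)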

  • Memory corruption in System.Move due to changed 8087CW mode (png + stretchblt)

    - by André Mussche
    I have strange a memory corruption problem. After many hours debugging and trying I think I found something. For example: I do a simple string assignment: sTest := 'SET LOCK_TIMEOUT '; However, the result sometimes becomes: sTest = 'SET LOCK'#0'TIMEOUT ' So, the _ gets replaced by an 0 byte. I have seen this happening once (reproducing is tricky, dependent on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copy (in case of 9 till 32 bytes to move): ... @@SmallMove: {9..32 Byte Move} fild qword ptr [eax+ecx] {Load Last 8} fild qword ptr [eax] {Load First 8} cmp ecx, 8 jle @@Small16 fild qword ptr [eax+8] {Load Second 8} cmp ecx, 16 jle @@Small24 fild qword ptr [eax+16] {Load Third 8} fistp qword ptr [edx+16] {Save Third 8} ... Using the FPU view and 2 memory debug views (Delphi - View - Debug - CPU - Memory) I saw it going wrong... once... could not reproduce however... This morning I read something about the 8087CW mode, and yes, if this is changed into $27F I get memory corruption! Normally it is $133F: The difference between $133F and $027F is that $027F sets up the FPU for doing less precise calculations (limiting to Double in stead of Extended) and different infiniti handling (which was used for older FPU’s, but is not used any more). Okay, now I found why but not when! I changed the working of my AsmProfiler with a simple check (so all functions are checked at enter and leave): if Get8087CW = $27F then //normally $1372? if MainThreadID = GetCurrentThreadId then //only check mainthread DebugBreak; I "profiled" some units and dll's and bingo (see stack): Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376) pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345))) pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345))) Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0) ExtCtrls.TImage.Paint Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0)) So it is happening in StretchBlt... What to do now? Is it a fault of Windows, or a bug in PNG (included in D2007)? Or is the System.Move function not failsafe?

  • Building a QueryExpression where name field is either A or B

    - by Mike
    I'm trying to build a Dynamics CRM 4 query so that I can get calendar events that are named either "Event A" or "Event B". A QueryByAttribute doesn't seem to do the job as I cannot specify a condition where the field called "event_name" = "Event A" of "event_name" = "Event B". When using the QueryExpression, I've found the FilterExpression applies to the Referencing Entity. I don't know if the FilterExpression can be used on the Referenced Entity at all. The example below is something like what I want to achieve, though this would return an empty result set as it will go looking in the entity called "my_event_response" for a "name" attribute. It's starting to look like I will need to run several queries to get this but this is less efficient than if I can submit it all at once. ColumnSet columns = new ColumnSet(); columns.Attributes = new string[]{ "event_name", "eventid", "startdate", "city" }; ConditionExpression eventname1 = new ConditionExpression(); eventname1.AttributeName = "event_name"; eventname1.Operator = ConditionOperator.Equal; eventname1.Values = new string[] { "Event A" }; ConditionExpression eventname2 = new ConditionExpression(); eventname2.AttributeName = "event_name"; eventname2.Operator = ConditionOperator.Equal; eventname2.Values = new string[] { "Event B" }; FilterExpression filter = new FilterExpression(); filter.FilterOperator = LogicalOperator.Or; filter.Conditions = new ConditionExpression[] { eventname1, eventname2 }; LinkEntity link = new LinkEntity(); link.LinkCriteria = filter; link.LinkFromEntityName = "my_event"; link.LinkFromAttributeName = "eventid"; link.LinkToEntityName = "my_event_response"; link.LinkToAttributeName = "eventid"; QueryExpression query = new QueryExpression(); query.ColumnSet = columns; query.EntityName = EntityName.mbs_event.ToString(); query.LinkEntities = new LinkEntity[] { link }; RetrieveMultipleRequest request = new RetrieveMultipleRequest(); request.Query = query; return (RetrieveMultipleResponse)crmService.Execute(request); I'd appreciate some advice on how to get the data I need.
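
    A hedged note on the filter itself: the CRM SDK's ConditionOperator.In lets a single condition accept several values, which avoids juggling two Equal conditions joined by Or. A small sketch, assuming the same CRM 4 SDK types used above:

        ConditionExpression eventName = new ConditionExpression();
        eventName.AttributeName = "event_name";
        eventName.Operator = ConditionOperator.In;
        eventName.Values = new string[] { "Event A", "Event B" };

        FilterExpression filter = new FilterExpression();
        filter.Conditions = new ConditionExpression[] { eventName };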

  • How do I classify using SVM Classifier?

    - by Gomathi
    I'm doing a project in liver tumor classification. Actually I initially used Region Growing method for liver segmentation and from that I segmented tumor using FCM. I,then, obtained the texture features using Gray Level Co-occurence Matrix. My output for that was stats = autoc: [1.857855266614132e+000 1.857955341199538e+000] contr: [5.103143332457753e-002 5.030548650257343e-002] corrm: [9.512661919561399e-001 9.519459060378332e-001] corrp: [9.512661919561385e-001 9.519459060378338e-001] cprom: [7.885631654779597e+001 7.905268525471267e+001] Now how should I give this as an input to the SVM program. function [itr] = multisvm( T,C,tst ) %MULTISVM(2.0) classifies the class of given training vector according to the % given group and gives us result that which class it belongs. % We have also to input the testing matrix %Inputs: T=Training Matrix, C=Group, tst=Testing matrix %Outputs: itr=Resultant class(Group,USE ROW VECTOR MATRIX) to which tst set belongs %----------------------------------------------------------------------% % IMPORTANT: DON'T USE THIS PROGRAM FOR CLASS LESS THAN 3, % % OTHERWISE USE svmtrain,svmclassify DIRECTLY or % % add an else condition also for that case in this program. % % Modify required data to use Kernel Functions and Plot also% %----------------------------------------------------------------------% % Date:11-08-2011(DD-MM-YYYY) % % This function for multiclass Support Vector Machine is written by % ANAND MISHRA (Machine Vision Lab. CEERI, Pilani, India) % and this is free to use. email: [email protected] % Updated version 2.0 Date:14-10-2011(DD-MM-YYYY) u=unique(C); N=length(u); c4=[]; c3=[]; j=1; k=1; if(N>2) itr=1; classes=0; cond=max(C)-min(C); while((classes~=1)&&(itr<=length(u))&& size(C,2)>1 && cond>0) %This while loop is the multiclass SVM Trick c1=(C==u(itr)); newClass=c1; svmStruct = svmtrain(T,newClass); classes = svmclassify(svmStruct,tst); % This is the loop for Reduction of Training Set for i=1:size(newClass,2) if newClass(1,i)==0; c3(k,:)=T(i,:); k=k+1; end end T=c3; c3=[]; k=1; % This is the loop for reduction of group for i=1:size(newClass,2) if newClass(1,i)==0; c4(1,j)=C(1,i); j=j+1; end end C=c4; c4=[]; j=1; cond=max(C)-min(C); % Condition for avoiding group %to contain similar type of values %and the reduce them to process % This condition can select the particular value of iteration % base on classes if classes~=1 itr=itr+1; end end end end Kindly guide me. Images:

  • Neo4j 1.9.4 (REST Server,CYPHER) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on 24 core 24Gb ram (centos) machine and for most queries CPU usage spikes goes to 200% with only few concurrent requests. Domain: some sort of social application where few types of nodes(profiles) with 3-30 text/array properties and 36 relationship types with at least 3 properties. Most of nodes currently has ~300-500 relationships. Current data set footprint(from console): LogicalLogSize=4294907 (32MB) ArrayStoreSize=1675520 (12MB) NodeStoreSize=1342170 (10MB) PropertyStoreSize=1739548 (13MB) RelationshipStoreSize=6395202 (48MB) StringStoreSize=1478400 (11MB) which is IMHO really small. most queries looks like this one(with more or less WITH .. MATCH .. statements and few queries with variable length relations but the often fast): START targetUser=node({id}), currentUser=node({current}) MATCH targetUser-[contact:InContactsRelation]->n, n-[:InLocationRelation]->l, n-[:InCategoryRelation]->c WITH currentUser, targetUser,n, l,c, contact.fav is not null as inFavorites MATCH n<-[followers?:InContactsRelation]-() WITH currentUser, targetUser,n, l,c,inFavorites, COUNT(followers) as numFollowers RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class, n.avatar? as avatar, n.avatar_type? as avatar_type, l.name as location__name, c.name as category__name, true as isInContacts, inFavorites as isInFavorites, numFollowers it runs in ~1s-3s(for first run) and ~1s-70ms (for consecutive and it depends on query) and there is about 5-10 queries runs for each impression. Another interesting behavior is when i try run query from console(neo4j) on my local machine many consecutive times(just press ctrl+enter for few seconds) it has almost constant execution time but when i do it on server it goes slower exponentially and i guess it somehow related with my problem. Problem: So my problem is that neo4j is very CPU greedy(for 24 core machine its may be not an issue but its obviously overkill for small project). First time i used AWS EC2 m1.large instance but over all performance was bad, during testing, CPU always was over 100%. Some relevant parts of configuration: neostore.nodestore.db.mapped_memory=1280M wrapper.java.maxmemory=8192 note: I already tried configuration where all memory related parameters where HIGH and it didn't worked(no change at all). Question: Where to digg? configuration? scheme? queries? what i'm doing wrong? if need more info(logs, configs) just ask ;)

  • How to handle an array of objects in a session

    - by Robert
    Hello. In the project I'm working on I have a List<Item> of objects that is saved in a session:

        Session.Add("SessionName", List);

    In the controller I build a view model with the data from this session:

        var arrayList = (List<Item>)Session["SessionName"];
        var arrayListItems = new List<CartItem>();
        foreach (var item in arrayList)
        {
            var listItem = new Item
            {
                Amount = item.Amount,
                Variant = item.variant,
                Id = item.Id
            };
            arrayListItems.Add(listItem);
        }
        var viewModel = new DetailViewModel
        {
            itemList = arrayListItems
        };

    And in my view I loop through the list of items and make a form for each of them, to be able to remove the item:

        <table>
        <%foreach (var Item in Model.itemList) { %>
            <% using (Html.BeginForm()) { %>
            <tr>
                <td><%=Html.Hidden(Settings.Prefix + ".VariantId", Item.Variant.Id)%></td>
                <td><%=Html.TextBox(Settings.Prefix + ".Amount", Item.Amount)%></td>
                <td><%=Html.Encode(Item.Amount)%></td>
                <td><input type="submit" value="Remove" /></td>
            </tr>
            <% } %>
        <% } %>
        </table>

    When the post from the submit button is handled, the item is removed from the array and I post back exactly the same view model (with one item less in the itemList):

        return View("view.ascx", viewModel);

    When the post is handled and the view has reloaded, the values of the Html.Hidden and Html.TextBox are the values of the removed item, while the value of the Html.Encode is the correct one. When I reload the page, the correct values are in the fields. Both times I build the view model in exactly the same way. I can't find the cause of or solution to this error. I would be very happy with any help to solve this problem. Thanks in advance for any tips.
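
    This is usually the ModelState behaviour at work: after a POST, Html.Hidden and Html.TextBox read their values from ModelState (the posted form) in preference to the model passed to the view, while Html.Encode reads the model directly. A hedged sketch of one way out; the helper method names here are hypothetical placeholders for the existing session/view-model code.

        [HttpPost]
        public ActionResult Remove(string variantId)
        {
            RemoveItemFromSession(variantId);            // hypothetical: update the session list
            var viewModel = BuildDetailViewModel();      // hypothetical: rebuild from the session

            ModelState.Clear();                          // drop the posted values
            return View("Detail", viewModel);            // the helpers now bind to viewModel

            // Alternative: follow Post-Redirect-Get and let a fresh GET render the page:
            // return RedirectToAction("Detail");
        }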

  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts by a CTE to return a subset of the rows from a large table. There are several joins in the CTE. A couple of inner and one left join to other tables, which don't contain a lot of rows. The CTE has a where clause that returns the rows within a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criterias. The query is quite complex but here is a simplified pseudo-version of it WITH DataCTE as ( SELECT [columns] FROM table INNER JOIN table2 ON [...] INNER JOIN table3 ON [...] LEFT JOIN table3 ON [...] ) SELECT [aggregates_columns of each subset] FROM DataCTE Main LEFT JOIN DataCTE BananasSubset ON [...] AND Product = 'Bananas' AND Quality = 100 LEFT JOIN DataCTE DamagedBananasSubset ON [...] AND Product = 'Bananas' AND Quality < 20 LEFT JOIN DataCTE MangosSubset ON [...] GROUP BY [ I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess not being an expert at reading those. I would have assumed SQL Server to be smart enough to only perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach but rather than using a CTE to get the subset of the data, I used the same select query as in the CTE, but made it output to a temp table instead. The version referring the CTE version takes 40 seconds. The version referring the temp table takes between 1 and 2 seconds. Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table valued UDF, which I find a slightly less elegant solution. Did some of you had this kind of performance issues with CTE, and if so, how did you get them sorted? Thanks, Kharlos
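
    The underlying behaviour, for anyone hitting the same wall: SQL Server treats a CTE as an inline view rather than materialising it, so each reference in the outer query can be expanded and re-evaluated, and four self joins can mean four executions of the expensive join. Materialising the result once is exactly what the temp-table version does. A hedged sketch follows, with the same [...] placeholders as the pseudo-query above; inside a function you would need a multi-statement table-valued function with a table variable, since #temp tables are not allowed there.

        SELECT [columns]
        INTO   #Data
        FROM   table1
               INNER JOIN table2 ON [...]
               INNER JOIN table3 ON [...]
               LEFT JOIN  table4 ON [...];

        SELECT [aggregate columns]
        FROM   #Data AS Main
               LEFT JOIN #Data AS BananasSubset
                      ON [...] AND BananasSubset.Product = 'Bananas' AND BananasSubset.Quality = 100
               LEFT JOIN #Data AS MangosSubset
                      ON [...]
        GROUP  BY [...];

        DROP TABLE #Data;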

  • Embedded "Smart" character LCD driver. Is this a good idea?

    - by chris12892
    I have an embedded project that I am working on, and I am currently coding the character LCD driver. At the moment, the LCD driver only supports "dumb" writing. For example, let's say line 1 has some text on it, and I make a call to the function that writes to the line. The function will simply seek to the beginning of the line and write the text (plus enough whitespace to erase whatever was last written). This is well and good, but I get the feeling it is horribly inefficient sometimes, since some lines are simply: "Some-reading: some-Value" Rather than "brute force" replacing the entire line, I wanted to develop some code that would figure out the best way to update the information on the LCD. (just as background, it takes 2 bytes to seek to any char position. I can then begin writing the string) My idea was to first have a loop. This loop would compare the input to the last write, and in doing so, it would cover two things: A: Collect all the differences between the last write and the input. For every contiguous segment (be it same or different) add two bytes to the byte count. This is referenced in B to determine if we are wasting serial bandwidth. B: The loop would determine if this is really a smart thing to do. If we end up using more bytes to update the line than to "brute force" the line, then we should just return and let the brute force method take over. We should exit the smart write function as soon as this condition is met to avoid wasting time. The next part of the function would take all the differences, seek to the required char on the LCD, and write them. Thus, if we have a string like this already on the LCD: "Current Temp: 80F" and we want to update it to "Current Temp: 79F" The function will go through and see that it would take less bandwidth to simply seek to the "8" and write "79". The "7" will cover the "8" and the "9" will cover the "0". That way, we don't waste time writing out the entire string. Does this seem like a practical idea?
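
    A hedged sketch of the cost check described above, in plain C (the constant and names are illustrative): walk the old and new line text, charge two bytes of cursor-positioning overhead for each contiguous differing run plus one byte per changed character, and bail out to the brute-force path as soon as the running total stops being a win.

        #define SEEK_COST 2  /* bytes needed to reposition the LCD cursor */

        /* Returns the byte cost of a "smart" update, or -1 if brute force is cheaper. */
        int smart_update_cost(const char *oldln, const char *newln, int len)
        {
            int cost = 0;
            int brute = SEEK_COST + len;   /* one seek plus rewriting the whole line */
            int i = 0;

            while (i < len) {
                if (oldln[i] == newln[i]) { i++; continue; }

                cost += SEEK_COST;         /* start of a contiguous differing run */
                while (i < len && oldln[i] != newln[i]) {
                    cost++;                /* one byte per changed character */
                    i++;
                }
                if (cost >= brute)
                    return -1;             /* smart update no longer pays off */
            }
            return cost;
        }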

  • Xpath expression to retrieve oldest/earliest node

    - by gkrogers
    I have an XML snippet, so: <STATES> <STATE> <NAME>Alabama</NAME> <ABBREVIATION>AL</ABBREVIATION> <CAPITAL>Montgomery</CAPITAL> <POPULATION>4661900</POPULATION> <AREA>52419</AREA> <DATEOFSTATEHOOD>14 December 1819</DATEOFSTATEHOOD> </STATE> <STATE> <NAME>Alaska</NAME> <ABBREVIATION>AK</ABBREVIATION> <CAPITAL>Juneau</CAPITAL> <POPULATION>698473</POPULATION> <AREA>663268</AREA> <DATEOFSTATEHOOD>1 January 1959</DATEOFSTATEHOOD> </STATE> <STATE> <NAME>Delaware</NAME> <ABBREVIATION>DE</ABBREVIATION> <CAPITAL>Dover</CAPITAL> <POPULATION>885122</POPULATION> <AREA>2490</AREA> <DATEOFSTATEHOOD>7 December 1787</DATEOFSTATEHOOD> </STATE> </STATES> <etc, etc.> I want to retrieve (for example) the capital of the oldest state (i.e. "Dover"). I have managed to get this far: //STATES/STATE[DATEOFSTATEHOOD='7 December 1787']/CAPITAL/text() but can't figure out how to say 'DATEOFSTATEHOOD={the earliest DATEOFSTATEHOOD}'. Can anybody point me in the right direction, please? SOLUTION: Matt's solution is more or less spot on. I had to reformat the dates (I used YYYYMMDDD) because, as was pointed out, Xpath 1.0 doesn't support the date format I was using. Also, Microsoft's XML library (4.0 and 6.0) returned the whole node list with Matt's expression. Reversing the test fixed that problem, making it return just the earliest node. So: //STATES/STATE[(DATEOFSTATEHOOD < //STATES/STATE/DATEOFSTATEHOOD)]/CAPITAL/text()

  • Setting up Netbeans/Eclipse for Linux Kernel Development

    - by red.october
    Hi: I'm doing some Linux kernel development, and I'm trying to use Netbeans. Despite declared support for Make-based C projects, I cannot create a fully functional Netbeans project. This is despite compiling having Netbeans analyze a kernel binary that was compiled with full debugging information. Problems include: files are wrongly excluded: Some files are incorrectly greyed out in the project, which means Netbeans does not believe they should be included in the project, when in fact they are compiled into the kernel. The main problem is that Netbeans will miss any definitions that exist in these files, such as data structures and functions, but also miss macro definitions. cannot find definitions: Pretty self-explanatory - often times, Netbeans cannot find the definition of something. This is partly a result of the above problem. can't find header files: self-explanatory I'm wondering if anyone has had success with setting up Netbeans for Linux kernel development, and if so, what settings they used. Ultimately, I'm looking for Netbeans to be able to either parse the Makefile (preferred) or extract the debug information from the binary (less desirable, since this can significantly slow down compilation), and automatically determine which files are actually compiled and which macros are actually defined. Then, based on this, I would like to be able to find the definitions of any data structure, variable, function, etc. and have complete auto-completion. Let me preface this question with some points: I'm not interested in solutions involving Vim/Emacs. I know some people like them, but I'm not one of them. As the title suggest, I would be also happy to know how to set-up Eclipse to do what I need While I would prefer perfect coverage, something that only misses one in a million definitions is obviously fine SO's useful "Related Questions" feature has informed me that the following question is related: http://stackoverflow.com/questions/149321/what-ide-would-be-good-for-linux-kernel-driver-development. Upon reading it, the question is more of a comparison between IDE's, whereas I'm looking for how to set-up a particular IDE. Even so, the user Wade Mealing seems to have some expertise in working with Eclipse on this kind of development, so I would certainly appreciate his (and of course all of your) answers. Cheers

  • Best of both worlds: browser and desktop game?

    - by Ricket
    When considering a platform for a game, I've decided on multi-platform (Win/Lin/Mac) but can't make up my mind as far as browser vs. desktop. As I'm not all too far in development, and now having second thoughts, I'd like your opinion! Browser-based games using Java applets: market penetration is reasonably high (for version 6, it's somewhere around 60% I believe?) using JOGL, 3D performance/quality is decent; certainly good enough to render the crappy 3D graphics that I make there's the (small?) possibility of porting something to Android great for an audience of gamers who switch computers often; can sit down at any computer, load a webpage and play it also great for casual gamers or less knowledgeable gamers who are quite happy with playing games in a browser but don't want to install more things to their computer written in a high-level language which I am more familiar with than C++ - but at the same time, I would like to improve my skills with C++ as it is probably where I am headed in the game industry once I get out of school... easier update process: reload the page. Desktop games using good ol' C++ and OpenGL 100% market penetration, assuming complete cross-platform; however, that number reduces when you consider how many people will go through downloading and installing an executable compared to just browsing to a webpage and hitting "yes" to a security warning. more trouble to maintain the cross-platform; but again, for learning purposes I would embrace the challenge and the knowledge I would gain better performance all around true full screen, whereas browser games often struggle with smooth full screen graphics (especially on Linux, in my experience) can take advantage of distribution platforms such as Steam more likely to be considered a "real" game, whereas browser and Java games are often dismissed as not being real games and therefore not played by "hardcore gamers" installer can be large; don't have to worry so much about download times Is there a way to have the best of both worlds? I love Java applets, but I also really like the reasons to write a desktop game. I don't want to constantly port everything between a Java applet project and a C++ project; that would be twice the work! Unity chose to write their own web player plugin. I don't like this, because I am one of the people that will not install their web player for anything, and I don't see myself being able to convince my audience to install a browser plugin. What are my options? Are there other examples out there besides Unity, of games that have browser and desktop versions? Did I leave out anything in the pro/con lists above?

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype, i.e. I have a constant audio stream on one computer and then recompress it in a format that's suitable for a latent internet connection, receive it on the other end and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or alike, no slow connections over phone and such. The involved computers will also be rather modern (Dual Core Intel CPUs at 2GHz or more). I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are: I'd like get good audio quality across the line. The stream should be received without drops. The stream may, however, be received with a little delay (a second delay is acceptable). I imagine that the transport software could first determine the average (and max) latency, then start the stream and tell the receiver to wait for that max latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops. If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take actions (e.g. abort the stream) and eventually start a new transmission. What are my options if I want do use ready-made software for the compression and tranmission? I have no intention to write my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s. I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my max latency value, then simply fire the audio data in its raw form (uncompressed 16 bit stereo), along with a timing code over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed). BTW, I first try to implement this on OS X, but might want to do it on Windows, too, if it proves successful.
