Search Results

Search found 8219 results on 329 pages for 'less'.


  • Trouble with custom WPF Panel-derived class

    - by chaiguy
    I'm trying to write a custom Panel class for WPF by overriding MeasureOverride and ArrangeOverride, but while it's mostly working I'm experiencing one strange problem I can't explain. In particular, after I call Arrange on my child items in ArrangeOverride after figuring out what their sizes should be, they aren't sizing to the size I give to them, and appear to be sizing to the size passed to their Measure method inside MeasureOverride. Am I missing something in how this system is supposed to work? My understanding is that calling Measure simply causes the child to evaluate its DesiredSize based on the supplied availableSize, and shouldn't affect its actual final size. Here is my full code (the Panel, btw, is intended to arrange children in the most space-efficient manner, giving less space to rows that don't need it and splitting remaining space up evenly among the rest--it currently only supports vertical orientation but I plan on adding horizontal once I get it working properly):

        protected override Size MeasureOverride( Size availableSize )
        {
            foreach ( UIElement child in Children )
                child.Measure( availableSize );
            return availableSize;
        }

        protected override System.Windows.Size ArrangeOverride( System.Windows.Size finalSize )
        {
            double extraSpace = 0.0;
            var sortedChildren = Children.Cast<UIElement>().OrderBy<UIElement, double>(
                new Func<UIElement, double>( delegate( UIElement child ) { return child.DesiredSize.Height; } ) );

            double remainingSpace = finalSize.Height;
            double normalSpace = 0.0;
            int remainingChildren = Children.Count;

            foreach ( UIElement child in sortedChildren )
            {
                normalSpace = remainingSpace / remainingChildren;
                if ( child.DesiredSize.Height < normalSpace ) // if == there would be no point continuing as there would be no remaining space
                    remainingSpace -= child.DesiredSize.Height;
                else
                {
                    remainingSpace = 0;
                    break;
                }
                remainingChildren--;
            }

            extraSpace = remainingSpace / Children.Count;
            double offset = 0.0;

            foreach ( UIElement child in Children )
            {
                //child.Measure( new Size( finalSize.Width, normalSpace ) );
                double value = Math.Min( child.DesiredSize.Height, normalSpace ) + extraSpace;
                child.Arrange( new Rect( 0, offset, finalSize.Width, value ) );
                offset += value;
            }

            return finalSize;
        }

    Read the article

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server would then serialize the graph and send it to the client (presumably using WCF); the client then makes changes to this graph and sends it back to the server. The server receives the graph and proceeds to make modifications to the server. However, I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy and many have parametrised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires use of either the XmlSerializer or DataContractSerializer, but DCS places restrictions on the design of my object graph (immutable data seems right out, and it also requires parameter-less constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serializer constructor. That is fine by me. I spoke to a friend of mine about this, and he advocates going for a Data Transport Object-only route, where I'd have to maintain a separate DataContract object graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses Sockets. So my questions are:
    Has anyone faced a similar problem before and if so, are there better approaches?
    Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it and sends back only the bits that have changed (thus reducing concurrency-related risks)?
    If sending the object graph down is the right way, is WCF the right tool?
    And if WCF is right, what's the best way to get WCF to serialise my object graph?

    Read the article

  • Parse usable Street Address, City, State, Zip from a string

    - by Rob Allen
    Problem: I have an address field from an Access database which has been converted to SQL Server 2005. This field has everything all in one field. I need to parse out the individual sections of the address into their appropriate fields in a normalized table. I need to do this for approximately 4,000 records, and it needs to be repeatable. Here are the rules for this exercise:
    1 - no whining about how this should have been separate fields in the first place; we are often confronted with less than ideal situations and have to make the best of them
    2 - for this post, use any language you want
    3 - feel free to play code golf
    4 - assume an address in the US (for now)
    5 - assume that the input string will sometimes contain an addressee (the person being addressed) and/or a second street address (e.g. Suite B)
    6 - states may be abbreviated
    7 - zip code could be standard 5 digit or zip+4
    8 - there are typos in some instances
    UPDATE: In response to the questions posed: standards were not universally followed, I need to store the individual values (not just geocode), and "errors" means typos (corrected above).
    Sample Data:

        A. P. Croll & Son 2299 Lewes-Georgetown Hwy, Georgetown, DE 19947
        11522 Shawnee Road, Greenwood DE 19950
        144 Kings Highway, S.W. Dover, DE 19901
        Intergrated Const. Services 2 Penns Way Suite 405 New Castle, DE 19720
        Humes Realty 33 Bridle Ridge Court, Lewes, DE 19958
        Nichols Excavation 2742 Pulaski Hwy Newark, DE 19711
        2284 Bryn Zion Road, Smyrna, DE 19904
        VEI Dover Crossroads, LLC 1500 Serpentine Road, Suite 100 Baltimore MD 21
        580 North Dupont Highway Dover, DE 19901
        P.O. Box 778 Dover, DE 19903
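
    Purely as an illustration of the peel-from-the-right approach, a rough sketch in Python (the regexes and field names are my own assumptions, it only handles the easy cases, and rule 8 means real data will still need manual review):

        import re

        def parse_us_address(raw):
            """Peel zip and state off the right, take the city from the last comma; the rest is addressee/street."""
            parts = {'street': None, 'city': None, 'state': None, 'zip': None}
            rest = raw.strip()

            m = re.search(r'\d{5}(-\d{4})?\s*$', rest)      # standard 5-digit zip or zip+4
            if m:
                parts['zip'] = m.group(0).strip()
                rest = rest[:m.start()].rstrip(' ,')

            m = re.search(r'\b[A-Za-z]{2}\s*$', rest)       # two-letter state abbreviation
            if m:
                parts['state'] = m.group(0).strip().upper()
                rest = rest[:m.start()].rstrip(' ,')

            if ',' in rest:                                 # the city usually follows the last comma
                rest, city = rest.rsplit(',', 1)
                parts['city'] = city.strip()

            parts['street'] = rest.strip(' ,')              # addressee + street(s), left for a second pass
            return parts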

    Read the article

  • PHP Object References in Frameworks

    - by bigstylee
    Before I dive into the discussion part, a quick question: is there a method to determine whether a variable is a reference to another variable/object? For example:

        $foo = 'Hello World';
        $bar = &$foo;
        echo (is_reference($bar) ? 'Is reference' : 'Is original');

    I have been using PHP5 for a few years now (personal use only) and I would say I am moderately versed on the topic of object-oriented implementation. However, the concept of a Model View Controller framework is fairly new to me. I have looked at a number of tutorials and at some of the open source frameworks (mainly CodeIgniter) to get a better understanding of how everything fits together. I am starting to appreciate the real benefits of using this type of structure. I am used to implementing object referencing with the following technique:

        class Foo{
            public $var = 'Hello World!';
        }

        class Bar{
            public function __construct(){
                global $Foo;
                echo $Foo->var;
            }
        }

        $Foo = new Foo;
        $Bar = new Bar;

    I was surprised to see that CodeIgniter and Yii pass references to objects that can be accessed via the following method:

        $this->load->view('argument')

    The immediate advantage I can see is a lot less code and a more user-friendly API. But I do wonder if it is more efficient, as these frameworks are presumably optimised? Or is it simply to make the code more user friendly? This was an interesting article: Do not use PHP references.

    Read the article

  • help with mysql triggers (checking values before insert)

    - by user332817
    Hi, I'm quite new to MySQL and I'm trying to figure out how to use triggers. What I'm trying to do: I have 2 tables, max and sub_max. When I insert a new row into sub_max, I want to check that the SUM of the values with the same foreign key as the new row is less than the value in the max table. I think this sounds confusing, so here are my tables:

        CREATE TABLE max(
            number INT,
            MaxAmount integer NOT NULL)

        CREATE TABLE sub_max(
            sub_number INT,
            sub_MaxAmount integer NOT NULL,
            number INT,
            FOREIGN KEY ( number ) REFERENCES max( number ))

    And here is my code for the trigger. I know the syntax is off, but this is the best I could do from looking up tutorials:

        CREATE TRIGGER maxallowed after insert on submax
        FOR EACH ROW
        BEGIN
            DECLARE submax integer;
            DECLARE maxmax integer;
            submax = select sum(sub_MaxAmount) from sub_max where sub_number = new.sub_number;
            submax = submax + new.sub_MaxAmount;
            maxmax = select MaxAmount from max where number = new.number;
            if max>maxmax rollback?
        END

    I wanted to know if I'm doing this remotely correctly. Thanks in advance.
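
    For reference, a sketch of how this check is commonly written today: do it BEFORE INSERT and raise an error with SIGNAL (MySQL 5.5+), since a trigger cannot issue ROLLBACK itself. Table and column names are taken from the question; the exact rule (strictly greater vs. greater-or-equal) is an assumption:

        DELIMITER //
        CREATE TRIGGER maxallowed BEFORE INSERT ON sub_max
        FOR EACH ROW
        BEGIN
            DECLARE submax INTEGER;
            DECLARE maxmax INTEGER;

            -- current total for this foreign key, treating "no rows yet" as 0
            SELECT COALESCE(SUM(sub_MaxAmount), 0) INTO submax
              FROM sub_max WHERE number = NEW.number;

            SELECT MaxAmount INTO maxmax
              FROM max WHERE number = NEW.number;

            IF submax + NEW.sub_MaxAmount > maxmax THEN
                SIGNAL SQLSTATE '45000'
                    SET MESSAGE_TEXT = 'sum of sub_MaxAmount would exceed MaxAmount';
            END IF;
        END//
        DELIMITER ;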

    Read the article

  • Scope of "library" methods

    - by JS
    Hello, I'm apparently laboring under a poor understanding of Python scoping. Perhaps you can help.
    Background: I'm using the if __name__ == "__main__" construct to perform "self-tests" in my module(s). Each self test makes calls to the various public methods and prints their results for visual checking as I develop the modules. To keep things "purdy" and manageable, I've created a small method to simplify the testing of method calls:

        def pprint_vars(var_in):
            print("%s = '%s'" % (var_in, eval(var_in)))

    Calling pprint_vars with:

        pprint_vars('some_variable_name')

    prints:

        some_variable_name = 'foo'

    All fine and good.
    Problem statement: Not happy to just KISS, I had the brain-drizzle to move my handy-dandy pprint_vars method into a separate file named debug_tools.py and simply import debug_tools whenever I wanted access to pprint_vars. Here's where things fall apart. I would expect

        import debug_tools

        foo = 'bar'
        debug_tools.pprint_vars('foo')

    to continue working its magic and print:

        foo = 'bar'

    Instead, it greets me with:

        NameError: name 'foo' is not defined

    Irrational belief: I believed (apparently mistakenly) that import puts imported methods (more or less) "inline" with the code, and thus the variable scoping rules would remain similar to those if the method were defined inline.
    Plea for help: Can someone please correct my (mis)understanding of scoping as it regards imports?
    Thanks, JS
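
    As a point of comparison, a minimal sketch of a pprint_vars that looks the name up in the caller's namespace via the inspect module instead of eval-ing it in its own module (no error handling, just the idea):

        import inspect

        def pprint_vars(name):
            # one frame up is the caller; check its locals first, then its globals
            caller = inspect.currentframe().f_back
            value = caller.f_locals.get(name, caller.f_globals.get(name, '<undefined>'))
            print("%s = '%s'" % (name, value))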

    Read the article

  • array_map applied on a function with 2 parameters

    - by mat
    I have 2 arrays ($numbers and $letters) and I want to create a new array based on a function that combines every $numbers value with every $letters value. The parameters of this function involve the values of both $numbers and $letters. (Note: $numbers and $letters don't have the same number of values.) I need something like this:

        $numbers = array(1,2,3,4,5,6,...);
        $letters = array('a','b','c','d','e',...);

        function myFunction($x, $y){
            // $output = some code that uses $x and $y
            return $output;
        };

        $array_1 = array(
            myFunction($numbers[0], $letters[0]),
            myFunction($numbers[0], $letters[1]),
            myFunction($numbers[0], $letters[2]),
            myFunction($numbers[0], $letters[3]),
            etc);

        $array_2 = array(
            myFunction($numbers[1], $letters[0]),
            myFunction($numbers[1], $letters[1]),
            myFunction($numbers[1], $letters[2]),
            myFunction($numbers[1], $letters[3]),
            etc);

        $array_3 = array(
            myFunction($numbers[2], $letters[0]),
            myFunction($numbers[2], $letters[1]),
            myFunction($numbers[2], $letters[2]),
            myFunction($numbers[2], $letters[3]),
            etc);

        ...

        $array_N = array(
            myFunction($numbers[N], $letters[0]),
            myFunction($numbers[N], $letters[1]),
            myFunction($numbers[N], $letters[2]),
            myFunction($numbers[N], $letters[3]),
            etc);

        $array = array($array_1, $array_2, $array_3, etc.);

    I know that this may work, but it's a lot of code, especially if I have many values for each array. Is there a way to get the same result with less code? I tried this, but it's not working:

        $array = array_map("myFunction($value, $letters)", $numbers);

    Any help would be appreciated!
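
    A sketch of one way to build the same structure with far less code, using nested anonymous functions (assumes PHP 5.3+; myFunction is the question's own function):

        $array = array_map(function ($number) use ($letters) {
            // one row per $number: myFunction applied to this $number and every $letter
            return array_map(function ($letter) use ($number) {
                return myFunction($number, $letter);
            }, $letters);
        }, $numbers);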

    Read the article

  • Validating Time & Date To Be At Least A Certain Amount Of Time In The Future

    - by MJH
    I've built a reservation form for a taxi company which works fine, but I'm having an issue with users making reservations that are due too soon in the future. Since the entire form is kind of long, I first want to make sure the user is not trying to make a reservation for less than an hour ahead of time, without them having to fill out the whole form. This is what I have come up with so far, but it's just not working:

        <?php
        //Set local time zone.
        date_default_timezone_set('America/New_York');
        //Get current date and time.
        $current_time = date('Y-m-d H:i:s');
        //Set reservation time variable
        $res_datetime = $_POST['res_datetime'];
        //Set event time.
        $event_time = strtotime($res_datetime);
        ?>
        <!doctype html>
        <html>
        <head>
        <meta charset="utf-8">
        <title>Check Date and Time</title>
        </head>
        <?php
        //Check to be sure reservation time is at least one hour in the future.
        if (($current_time - $event_time) <= (3600)) {
            echo "You must make a reservation at least one hour ahead of time.";
        }
        ?>
        <form name="datetime" action="" method="post">
            <input name="res_datetime" type="datetime-local" id="res_datetime">
            <input type="submit">
        </form>
        <body>
        </body>
        </html>

    How can I create a validation check to make sure the date and time of the reservation is at least one hour ahead of time?
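
    For reference, a minimal sketch of the comparison done entirely with Unix timestamps; the string produced by date('Y-m-d H:i:s') cannot meaningfully be subtracted from a strtotime() result, so both sides are kept numeric here (the field name follows the question):

        <?php
        date_default_timezone_set('America/New_York');

        if (isset($_POST['res_datetime'])) {
            $event_time = strtotime($_POST['res_datetime']);   // false if the input cannot be parsed

            if ($event_time === false) {
                echo "Please enter a valid date and time.";
            } elseif ($event_time - time() < 3600) {
                echo "You must make a reservation at least one hour ahead of time.";
            } else {
                echo "Reservation time accepted.";
            }
        }
        ?>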

    Read the article

  • Which version of Grady Booch's OOA/D book should I buy?

    - by jackj
    Grady Booch's "Object-Oriented Analysis and Design with Applications" is available brand new in both the 2nd edition (1993) and the 3rd edition (2007), while many used copies of both editions are available. Here are my concerns:
    1) The 2nd edition uses C++: given that I just finished reading my first two C++ books (Accelerated C++ and C++ Primer), I guess practical tips can only help, so the 2nd edition is probably best (I think the 3rd edition has absolutely no code). On the other hand, the C++ books I read insist on the importance of using standard C++, whereas Booch's 2nd edition was published before the 1998 standard.
    2) The 2nd edition is shorter (608 pages vs. 720) so, I guess, it will be slightly easier to get through.
    3) The 3rd edition uses UML 2.0, whereas the 2nd edition is pre-UML. Some reviews say that the notation in the 2nd edition is close enough to UML, so it doesn't matter, but I don't know if I should be worrying about this or not.
    4) The 2nd edition is available in good-shape used copies for considerably less than what the 3rd one goes for.
    Given all the above factors, do you think I should buy the 2nd or the 3rd edition? Recommendations on other books are also welcome, but I would prefer it if whoever answers has read at least one of the versions of Booch's book (preferably both!). I have already bought but not read GoF and Riel's books. I also know that I should practice a lot with real-life code. Thanks.

    Read the article

  • Is XMLReader a SAX parser, a DOM parser, or neither?

    - by Renesis
    I am testing various methods of reading XML configuration files (possibly large, and very often) in PHP. No writing is ever needed. I have two successful implementations, one using SimpleXML (which I know is a DOM parser) and one using XMLReader. I know that a DOM reader must read the whole tree and therefore uses more memory. My tests reflect that. I also know that a SAX parser is an "event-based" parser that uses less memory because it reads each node from the stream without checking what is next. XMLReader also reads from a stream, with the cursor providing data about the node it is currently at. So, it definitely sounds like XMLReader (http://us2.php.net/xmlreader) is not a DOM parser, but my question is: is it a SAX parser, or something else? It seems like XMLReader behaves the way a SAX parser does but does not throw the events itself (in other words, can you construct a SAX parser with XMLReader?). If it is something else, does the classification it's in have a name?

    Read the article

  • How do I protect the trunk from hapless newbies?

    - by Michael Haren
    A coworker relayed the following problem, let's say it's fictional to protect the guilty: A team of 5-10 works on a project which is issue-driven. That is, the typical flow goes like this:
    - a chunk of work (bug, enhancement, etc.) is created as an issue in the issue tracker
    - the issue is assigned to a developer
    - the developer resolves the issue and commits their code changes to the trunk
    - at release time, the frozen, and heavily tested trunk or release branch or whatever is built in release mode and released
    The problem he's having is that a couple newbies made several bad commits that weren't caught due to an unfortunate chain of events. This was followed by a bad release with a rollback or flurry of hot fixes. One idea we're toying with: Revoke commit access to the trunk for newbies and make them develop on a per-developer branch (we're using SVN):
    - Good: newbies are isolated and can't hurt others
    - Good: committers merge newbie branches with the trunk frequently
    - Good: this enforces rigid code reviews
    - Bad: this is burdensome on the committers (but there's probably no way around it since the code needs reviewed!)
    - Bad: it might make traceability of trunk changes a little tougher since the reviewer would be doing the commit--not too sure on this.
    Update: Thank you, everyone, for your valuable input. I have concluded that this is far less a code/coder problem than I first presented. The root of the issue is that the release procedure failed to capture and test some poor quality changes to the trunk. Plugging that hole is most important. Relying on the false assumption that code in the trunk is "good" is not the solution. Once that hole--testing--is plugged, mistakes by everyone--newbie or senior--will be caught properly and dealt with accordingly. Next, a greater emphasis on code reviews and mentorship (probably driven by some systematic changes to encourage it) will go a long way toward improving code quality. With those two fixes in place, I don't think something as rigid or draconian as what I proposed above is necessary. Thanks!

    Read the article

  • Should I go for Arrays or Objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between array and object all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this is not disturbing my workflow so much. At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error"=>"not found","data"=>$dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) if necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors. My plan is now to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips?
    Edit: Kenaniah's answer suggests I should go the other way, which would mean I'd cast everything to arrays. For all the Ajax / JSON stuff and also for CouchDB I would use:

        $myarray = json_decode($json_data, $assoc = true); //EDIT: changed to true, which is what I really meant

    It's even more work to change all the CouchDB and Ajax functions, but in the end I have better code.
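
    For what it's worth, a short sketch of the "arrays everywhere" convention, normalizing object-shaped data once at the boundary ($json_data is the question's own variable; $couchResponse is a placeholder for whatever a CouchDB wrapper returns):

        // decode Ajax / CouchDB JSON straight to nested associative arrays
        $doc = json_decode($json_data, true);
        $doc['error'] = 'not found';

        // normalize an object graph handed back by a library into arrays, once, at the boundary
        $asArray = json_decode(json_encode($couchResponse), true);

        // and back out to the browser
        header('Content-Type: application/json');
        echo json_encode($doc);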

    Read the article

  • Spring 3.0 vs J2EE 6.0

    - by StudiousJoseph
    Hi everybody, I'm confronted with a situation... I've been asked to give advice regarding which approach to take for J2EE development: Spring 3.0 or J2EE 6.0. I was, and still am, a promoter of Spring 2.5 over classic J2EE 5 development, especially with JBoss. I even migrated old apps to Spring, influenced the redefinition of the development policy here to include Spring-specific APIs, and helped develop a strategic plan to foster more lightweight solutions like Spring + Tomcat instead of heavier ones like JBoss. Right now we're using JBoss merely as a Web container, having what I call the "container inside the container paradox", that is, Spring apps, with most of Spring's APIs, running inside JBoss, so we're in the process of migrating to Tomcat. However, with the coming of J2EE 6.0, many features that made Spring attractive at the time (easy deployment, less coupling, even some sort of D.I., etc.) seem to have been mimicked in one way or the other: JSF 2.0, JPA 2.0, WebBeans, Web Profiles, etc. So the question goes... From your point of view, how safe, and how logical, is it to continue to invest in a non-standard J2EE development framework like Spring, given the new perspectives offered by J2EE 6.0? Can we talk about maybe 3 or 4 more years of Spring development, or do you recommend early adoption of J2EE 6.0 APIs and its practices? I'd appreciate any insights on this...

    Read the article

  • Choosing a method for a webservice

    - by Wrikken
    I'm asked to set up a new webservice which should be easily usable in whatever language (PHP, .NET, Java, etc.) possible. Of course rolling my own can be done, accepting different content-types (xml / x-www-form-urlencoded (normal post) / json / etc.), but an existing method or mechanism would of course be preferred, cutting down time spent on development for the consumers of the service. The webservice does accept modifications / sets (it is not simply data retrieval), but those will most likely be quite a lot less frequent than gets (we estimate about 2.5% sets, 97.5% gets). The term webservice here indicates that the protocol should go over HTTP; it cannot be implemented totally client-side (JavaScript in the end-user's browser, etc.), as it needs specific user authentication. Both gets and sets are pretty light on the parameter count (usually 1 to 4). Methods like REST (which I'd prefer for gets only), XML-RPC & SOAP (might be a bit overkill, but has the advantage of explicitly defined methods and returns) are the usual suspects. What, in your opinion / experience, is the most widely 'spoken' and most easily implementable protocol in different languages (seen from the consumers' viewpoint) which could fulfill this need?

    Read the article

  • Is there a practical benefit to casting a NULL pointer to an object and calling one of its member functions?

    - by zdawg
    Ok, so I know that technically this is undefined behavior, but nonetheless, I've seen this more than once in production code. And please correct me if I'm wrong, but I've also heard that some people use this "feature" as a somewhat legitimate substitute for a lacking aspect of the current C++ standard, namely, the inability to obtain the address (well, offset really) of a member function. For example, this is out of a popular implementation of a PCRE (Perl-compatible Regular Expression) library:

        #ifndef offsetof
        #define offsetof(p_type,field) ((size_t)&(((p_type *)0)->field))
        #endif

    One can debate whether the exploitation of such a language subtlety in a case like this is valid or not, or even necessary, but I've also seen it used like this:

        struct Result
        {
            void stat()
            {
                if(this)
                    // do something...
                else
                    // do something else...
            }
        };

        // ...somewhere else in the code...
        ((Result*)0)->stat();

    This works just fine! It avoids a null pointer dereference by testing for the existence of this, and it does not try to access class members in the else block. So long as these guards are in place, it's legitimate code, right? So the question remains: Is there a practical use case where one would benefit from using such a construct? I'm especially concerned about the second case, since the first case is more of a workaround for a language limitation. Or is it? PS. Sorry about the C-style casts; unfortunately people still prefer to type less if they can.
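
    For contrast, a small sketch of the standard-conforming alternatives usually suggested for both cases: the offsetof macro already shipped in <cstddef>, and an explicit null check at the call site rather than inside the member function (the Packet struct and the stat_or_default name are made up for illustration):

        #include <cstddef>   // standard offsetof, for standard-layout types
        #include <cstdio>

        struct Packet { int options; void* field; };             // example standard-layout type
        const size_t kFieldOffset = offsetof(Packet, field);      // no hand-rolled null-pointer cast needed

        struct Result {
            void stat() { std::printf("have a result\n"); }
        };

        // the "if (this)" guard, moved out to where it is well-defined
        void stat_or_default(Result* r) {
            if (r) r->stat();
            else   std::printf("no result\n");   // do something else...
        }

        int main() {
            stat_or_default(nullptr);   // C++11 nullptr; use NULL on older compilers
            Result res;
            stat_or_default(&res);
        }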

    Read the article

  • handling large arrays with array_diff

    - by bigmac
    I have been trying to compare two arrays. Using array_intersect presents no problems. When using array_diff with arrays of ~5,000 values, it works. When I get to ~10,000 values, the script dies when I get to array_diff. Turning on error_reporting did not produce anything. I tried creating my own array_diff function:

        function manual_array_diff($arraya, $arrayb) {
            foreach ($arraya as $keya => $valuea) {
                if (in_array($valuea, $arrayb)) {
                    unset($arraya[$keya]);
                }
            }
            return $arraya;
        }

    source: http://stackoverflow.com/questions/2479963/how-does-array-diff-work
    I would expect it to be less efficient than the official array_diff, but it can handle arrays of ~10,000. Unfortunately, both array_diffs fail when I get to ~15,000. I tried the same code on a different machine and it runs fine, so it's not an issue with the code or PHP. There must be some limit set somewhere on that particular server. Any idea how I can get around that limit, alter it, or just find out what it is?
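
    If the hidden limit turns out to be memory or execution time rather than a hard cap, a hash-lookup version of the diff is a common workaround; a sketch (it assumes the array values are strings or integers, since array_flip needs scalar keys):

        function fast_array_diff(array $a, array $b) {
            $lookup = array_flip($b);          // O(1) membership tests instead of in_array()'s linear scan
            $diff = array();
            foreach ($a as $key => $value) {
                if (!isset($lookup[$value])) {
                    $diff[$key] = $value;
                }
            }
            return $diff;
        }

    Checking memory_limit and max_execution_time in that server's php.ini is also worth doing, since those are the usual per-server differences.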

    Read the article

  • jQuery arrays - newbie needs a kick start

    - by Jonny Wood
    I've only really started using this site and already I am very impressed by the community here! This is my third question in less than three days. Hopefully I'll be able to start answering questions soon instead of just asking them! I'm fairly new to jQuery and can't find a decent tutorial on arrays. I'd like to be able to create an array that targets several IDs on my page and performs the same effect for each. For example, I have tabs set up with the following:

        $('.tabs div.tab').hide();
        $('.tabs div:first').show();
        $('.tabs ul li:first a').addClass('current');

        $('.tabs ul li a').click(function(){
            $('.tabs ul li a').removeClass('current');
            $(this).addClass('current');
            var currentTab = $(this).attr('href');
            $('.tabs div.tab').hide();
            $(currentTab).show();
            return false;
        });

    I've used the class .tabs to target the tabs as there are several sets on the same page, but I've heard jQuery works much faster when targeting IDs. How would I add an array to the above code to target 4 different IDs? I've looked at

        var myArray = new Array('#id1', 'id2', 'id3', 'id4');

    and also

        var myValues = [ '#id1', 'id2', 'id3', 'id4' ];

    Which is correct, and how do I then use the array in the code for my tabs...?
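
    A sketch of how an array of IDs is typically looped over with $.each, scoping the existing tab logic to one container at a time. The IDs here are placeholders; both array forms above produce the same array, the [ ... ] literal is just the shorter way to write it:

        var tabContainers = ['#tabs-news', '#tabs-sport', '#tabs-weather', '#tabs-travel'];

        $.each(tabContainers, function (index, id) {
            var $container = $(id);
            $container.find('div.tab').hide();
            $container.find('div.tab:first').show();
            $container.find('ul li:first a').addClass('current');

            $container.find('ul li a').click(function () {
                $container.find('ul li a').removeClass('current');
                $(this).addClass('current');
                $container.find('div.tab').hide();
                $($(this).attr('href')).show();
                return false;
            });
        });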

    Read the article

  • Should Java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions. I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at the statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block. What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition?
    This one contains all the processing code within the try block:

        void processFile(File f) {
            try {
                // construction of FileReader can throw FileNotFoundException
                BufferedReader in = new BufferedReader(new FileReader(f));
                // call of readLine can throw IOException
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }

    This one contains only the methods that throw exceptions within try blocks:

        void processFile(File f) {
            FileReader reader;
            try {
                reader = new FileReader(f);
            } catch (FileNotFoundException ex) {
                handle(ex);
                return;
            }
            BufferedReader in = new BufferedReader(reader);
            String line;
            while (true) {
                try {
                    line = in.readLine();
                } catch (IOException ex) {
                    handle(ex);
                    break;
                }
                if (line == null) {
                    break;
                }
                process(line);
            }
        }
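
    For reference, a sketch of the same routine written with try-with-resources (Java 7+), which keeps the guarded region small without mangling the line-reading idiom, and also closes the reader, something both versions above omit (process and handle are the question's own helpers):

        void processFile(File f) {
            // FileNotFoundException is a subclass of IOException, so one catch covers both
            try (BufferedReader in = new BufferedReader(new FileReader(f))) {
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (IOException ex) {
                handle(ex);
            }
        }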

    Read the article

  • What Javascript graphing package will let me plot points against a user-selected coordinate system?

    - by wes
    My customer has some specific requirements for a graph to show in our web app. We use HighCharts elsewhere in the app for more traditional graphing, but it doesn't seem to work for this situation. Their requirements:
    - Allow the user to select a background image, set the scale and origin of the coordinate system. We'll graph our points against the user-defined coordinates.
    - Points can be color coded
    - Mouse-over boxes show more detail about the points
    - Support for zooming and panning, scaling the background appropriately
    - Less importantly: support for drawing vectors off the points
    Some of this seems basic, but looking around at different graph packages, I was unable to find any with an example of this kind of usage. I've entertained the thought of just hacking it together in canvas myself, but I've never worked with canvas before so I don't think it would be cost effective. The basics of plotting points with a scaled coordinate system against an image background wouldn't be too hard, but the mouse-over details, zooming and panning sound much more daunting to me.
    More info: Right now we use jQuery, HighCharts, and ExtJS for our app. We tried flot in the past but switched to HighCharts after flot didn't meet our needs.

    Read the article

  • Windows FTP batch script to read & download from external user list

    - by Will Sims
    I have several old, unused batches that I'm redoing. I have a batch file for an old network architecture from several years ago; the main thing I'd like it to do now is read a list of files. I'll explain the setup. The server updates a complete list [CurrentMediaStores.txt] twice a day. The laptops can set settings to download this list through their start.bat, which also runs add-ins and updates I apply to my PCs, to give my batches and myself a break from slavish folder assignments and add a little more dynamics and less admin work. The batches now call on a list the user makes by simply copying a line from the CMS.txt file and pasting it into their [Grab_List.txt]. My problem is that I have that branch commented out (::) right now, along with the code that detects whether the LAN is connected or not and switches to an FTP connection. I'd like the FTP batch to call/use the Grab_List as well, but I just don't know how to pass it in and do the FOR loop with an FTP session to loop through however many files are in the user's request list. Any help would be greatly appreciated.
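
    In case it helps, a sketch of the usual pattern for this: build a temporary ftp command script from Grab_List.txt with a FOR /F loop, then hand it to ftp -s. The host, credentials and file layout are placeholders, and file names containing spaces may need extra care depending on the server:

        @echo off
        setlocal
        set "CMDFILE=%TEMP%\grab_ftp.txt"

        rem write the ftp commands, one per line
        > "%CMDFILE%" echo open ftp.example.com
        >>"%CMDFILE%" echo user myuser mypassword
        >>"%CMDFILE%" echo binary
        for /f "usebackq delims=" %%F in ("Grab_List.txt") do >>"%CMDFILE%" echo get %%F
        >>"%CMDFILE%" echo bye

        rem -n suppresses auto-login so the "user" line above does the login
        ftp -n -s:"%CMDFILE%"
        del "%CMDFILE%"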

    Read the article

  • How much effort do you have to put in to get gains from using SSE?

    - by John
    Case One: Say you have a little class:

        class Point3D
        {
        private:
            float x, y, z;
        public:
            operator+=() // ...etc
        };

        Point3D &Point3D::operator+=(Point3D &other)
        {
            this->x += other.x;
            this->y += other.y;
            this->z += other.z;
        }

    A naive use of SSE would simply replace these function bodies with a few intrinsics. But would we expect this to make much difference? MMX used to involve costly state changes IIRC; does SSE, or are SSE instructions just like other instructions? And even if there's no direct "use SSE" overhead, would moving the values into SSE registers and back out again really make it any faster?
    Case Two: Instead, you're working with a less OO-based code base. Rather than an array/vector of Point3D objects, you simply have a big array of floats:

        float coordinateData[NUM_POINTS*3];

        void add(int i, int j) // yes it's unsafe, no overlap check... example only
        {
            for (int x = 0; x < 3; ++x)
            {
                coordinateData[i*3+x] += coordinateData[j*3+x];
            }
        }

    What about use of SSE here? Any better?
    In conclusion: Is trying to optimise single vector operations using SSE actually worthwhile, or is it really only valuable when doing bulk operations?
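
    As a point of reference, a sketch of what the flat-array version can look like with SSE intrinsics, under the assumption that the data is padded to four floats per point so a whole point fits in one register (unaligned loads are used so no alignment guarantee is needed):

        #include <xmmintrin.h>  // SSE1 intrinsics

        // assumes coordinateData stores 4 floats per point: x, y, z, padding
        void add_sse(float* coordinateData, int i, int j)
        {
            __m128 a = _mm_loadu_ps(&coordinateData[i * 4]);          // load point i
            __m128 b = _mm_loadu_ps(&coordinateData[j * 4]);          // load point j
            _mm_storeu_ps(&coordinateData[i * 4], _mm_add_ps(a, b));  // point i += point j
        }

    Whether a single add like this beats the scalar loop is exactly what the question asks; the more commonly reported wins come from processing many points per loop iteration rather than one at a time.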

    Read the article

  • POST variables to web server?

    - by OverTheRainbow
    Hello, I've been trying several things from Google to POST data to a web server, but none of them work. I'm still stuck on how to convert the variables into the request, considering that the second variable is an SQL query, so it has spaces. Does someone know the correct way to use a WebClient to POST data? I'd rather use WebClient because it requires less code than HttpWebRequest/HttpWebResponse. Here's what I tried so far:

        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Dim wc = New WebClient()

            'convert data
            wc.Headers.Add("Content-Type", "application/x-www-form-urlencoded")
            Dim postData = String.Format("db={0}&query={1}", _
                HttpUtility.UrlEncode("books.sqlite"), _
                HttpUtility.UrlEncode("SELECT id,title FROM boooks"))
            'Dim bytArguments As Byte() = Encoding.ASCII.GetBytes("db=books.sqlite|query=SELECT * FROM books")

            'POST query
            Dim bytRetData As Byte() = wc.UploadData("http://localhost:9999/get", "POST", postData)
            RichTextBox1.Text = Encoding.ASCII.GetString(bytRetData)
            Exit Sub

            Dim client = New WebClient()
            Dim nv As New Collection
            nv.Add("db", "books.sqlite")
            nv.Add("query", "SELECT id,title FROM books")
            Dim address As New Uri("http://localhost:9999/get")
            'Dim bytRetData As Byte() = client.UploadValues(address, "POST", nv)
            RichTextBox1.Text = Encoding.ASCII.GetString(bytRetData)
            Exit Sub

            'Dim wc As New WebClient()
            'convert data
            wc.Headers.Add("Content-Type", "application/x-www-form-urlencoded")
            Dim bytArguments As Byte() = Encoding.ASCII.GetBytes("db=books.sqlite|query=SELECT * FROM books")
            'POST query
            'Dim bytRetData As Byte() = wc.UploadData("http://localhost:9999/get", "POST", bytArguments)
            RichTextBox1.Text = Encoding.ASCII.GetString(bytRetData)
            Exit Sub
        End Sub

    Thank you.
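
    For reference, a minimal sketch of the NameValueCollection / UploadValues route, which form-encodes the values (spaces in the query included) and sets the content type for you; the URL and field names follow the question, and the Sub is assumed to live in the same form as Button1_Click:

        ' at the top of the file
        Imports System.Net
        Imports System.Text
        Imports System.Collections.Specialized

        Private Sub PostQuery()
            Dim wc As New WebClient()
            Dim fields As New NameValueCollection()
            fields.Add("db", "books.sqlite")
            fields.Add("query", "SELECT id, title FROM books")   ' spaces are fine, UploadValues encodes them

            Dim resp As Byte() = wc.UploadValues("http://localhost:9999/get", "POST", fields)
            RichTextBox1.Text = Encoding.ASCII.GetString(resp)
        End Sub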

    Read the article

  • c++ issues with cin.fail() in my program

    - by Wallace
    I want input y to trigger saving and r to trigger resuming, but when I write it as in the following code and then input y or r, I just get the "Please enter two positve numbers" message; the line if(x==(int)('y')) and the next line are ignored. How can this happen?

        int main(){
            cout << "It's player_" << player+1 << "'s turn please input a row and col,to save and exit,input 0,resume game input" << endl;
            while(true){
                cin >> x;
                if(x==(int)('y')) { save(); has_saved=true; break; }
                if(x==(int)('r')) { resume(); has_resumed=true; break; }
                cin >> y;
                if(cin.fail()){
                    cout << "Please enter two positve numbers" << endl;
                    cin.clear();
                    cin.sync();
                }
                else if(x>n||x<1||y<1||y>n) {
                    cout << "your must input a positive number less or equal than " << n << endl;
                    continue;
                }
                else if(chessboard[x][y]!=' ') {
                    cout << "Wrong input please try again!" << endl;
                    continue;
                }
                else {
                    chessboard[x][y]=player_symbol[player+1];
                    break;
                }
            }
        }
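
    A likely explanation and a sketch of one fix: if x is an int, cin >> x fails as soon as it sees 'y' or 'r', so x never receives the character code and the stream is already in a failed state before the comparisons run. Reading the token as a string first sidesteps that (names are loosely based on the question's code):

        #include <iostream>
        #include <sstream>
        #include <string>

        // returns false if the token is neither "y"/"r" nor a valid integer
        bool read_move(int &x, bool &wants_save, bool &wants_resume)
        {
            std::string token;
            if (!(std::cin >> token)) return false;

            if (token == "y") { wants_save = true;   return true; }
            if (token == "r") { wants_resume = true; return true; }

            std::istringstream iss(token);
            return !(iss >> x).fail();   // true only if the token parsed as an integer
        }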

    Read the article

  • Prevent two users from editing the same data

    - by Industrial
    Hi everyone, I have seen a feature in different web applications, including Wordpress (not sure?), that warns a user if he/she opens an article/post/page/whatever from the database while someone else is editing the same data simultaneously. I would like to implement the same feature in my own application and I have given this a bit of thought. Is the following example a good practice for how to do this? It goes a little something like this:
    1) User A enters the editing page for the mysterious article X. The database table Events is queried to make sure that no one else is editing the same page at the moment, which no one is at that point. A token is then randomly generated and inserted into a database table called Events.
    2) User B also wants to make updates to article X. Since our User A is already editing the article, the Events table is queried and looks like this:

        | timestamp  | owner  | Origin    | token      |
        --------------------------------------------------
        | 1273226321 | User A | article-x | uniqueid## |

    3) The timestamp is checked. If it's valid and less than, say, 100 seconds old, a message appears and the user cannot make any changes to the requested article X: "Warning: User A is currently working with this article. In the meantime, editing cannot be done. Please do something else with your life."
    4) If User A decides to go on and save his changes, the token is posted along with all the other data to update the database, and that triggers a query to delete the row with token uniqueid##. If he decides to do something else instead of committing his changes, article X will become available for editing again for User B after 100 seconds.
    Let me know what you think about this approach! Wish everyone a great weekend!

    Read the article

  • capturing CMD batch file parameter list; write to file for later processing

    - by BobB
    I have written a batch file that is launched as a post-processing utility by a program. The batch file reads ~24 parameters supplied by the calling program, stores them into variables, and then writes them to various text files. Since the max input variable in CMD is %9, it's necessary to use the 'shift' command to repeatedly read and store these individually to named variables. Because the program outputs several similar batch files, the result is opening several CMD windows sequentially, assigning variables and writing data files. This ties up the calling program for too long. It occurs to me that I could free up the calling program much faster if maybe there's a way to write a very simple batch file that can write all the command parameters to a text file, where I can process them later. Basically, just grab the parameter list, write it and done.
    Q: Is there some way to treat an entire series of parameter data as one big text string and write it to one big variable... and then echo the whole big thing to one text file? Then later read the string into %n variables when there's no program waiting to resume?
    The parameter list is something like 25 - 30 words, less than 200 characters. Sample parameter list:

        "First Name" "Lastname" "123 Steet Name Way" "Cityname" ST 12345 1004968 06/01/2010 "Firstname+Lastname" 101738 "On Account" 20.67 xy-1z 1 8.95 3.00 1.39 0 0 239 8.95

    Items in quotes are processed as string variables. The list is space delimited.
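
    A sketch of the quick-capture idea in batch (the file name is arbitrary; it assumes the calling program passes everything on one command line):

        @echo off
        rem Append the entire parameter list as one line for later processing,
        rem then return immediately so the calling program is not kept waiting.
        rem %* expands to every argument as passed; %~dp0 is the folder this script lives in.
        >>"%~dp0captured_params.txt" echo(%*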

    Read the article
