Search Results

Search found 3758 results on 151 pages for 'efficient'.


  • Real time embeddable http server library required

    - by Howard May
    Having looked at several available HTTP server libraries I have not yet found what I am looking for, and I am sure I can't be the first to have this set of requirements. I need a library which presents an API which is 'pipelined'. Pipelining is used to describe an HTTP feature where multiple HTTP requests can be sent across a TCP link without waiting for a response. I want a similar feature on the library API, where my application can receive all of those requests without having to send a response (I will respond, but I want the ability to process multiple requests at a time to reduce the impact of internal latency). So the web server library will need to support the following flow:

    1) HTTP Client transmits http request 1
    2) HTTP Client transmits http request 2 ...
    3) Web Server Library receives request 1 and passes it to My Web Server App
    4) My Web Server App receives request 1 and dispatches it to My System
    5) Web Server receives request 2 and passes it to My Web Server App
    6) My Web Server App receives request 2 and dispatches it to My System
    7) My Web Server App receives the response to request 1 from My System and passes it to the Web Server
    8) Web Server transmits HTTP response 1 to HTTP Client
    9) My Web Server App receives the response to request 2 from My System and passes it to the Web Server
    10) Web Server transmits HTTP response 2 to HTTP Client

    Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses outstanding. Additional requirements are:

    - Embeddable into an existing 'C' application
    - Small footprint; I don't need all the functionality available in Apache etc.
    - Efficient; will need to support thousands of requests a second
    - Allows asynchronous responses to requests; there is a small latency to responses, and given the required request throughput a synchronous architecture is not going to work for me
    - Support for persistent TCP connections
    - Support for use with Server-Push Comet connections
    - Open Source / GPL
    - Support for HTTPS
    - Portable across Linux and Windows; preferably more

    I will be very grateful for any recommendations. Best Regards


  • Return call from ggplot object

    - by aL3xa
    I've been using ggplot2 for a while now, and I can't find a way to get the call back from a ggplot object. Though I can get basic info with summary(<ggplot_object>), to recover the complete formula I have usually been combing up and down through the .Rhistory file, and this becomes frustrating when you experiment with new graphs, especially once the code gets a bit lengthy. Searching through the history file isn't a convenient way of doing this. Is there a more efficient way? Just an illustration:

        p <- qplot(data = mtcars, x = factor(cyl), geom = "bar", fill = factor(cyl)) +
             scale_fill_manual(name = "Cylinders", value = c("firebrick3", "gold2", "chartreuse3")) +
             stat_bin(aes(label = ..count..), vjust = -0.2, geom = "text", position = "identity") +
             xlab("# of cylinders") +
             ylab("Frequency") +
             opts(title = "Barplot: # of cylinders")

    I can get some basic info with summary:

        > summary(p)
        data: mpg, cyl, disp, hp, drat, wt, qsec, vs, am, gear, carb [32x11]
        mapping: fill = factor(cyl), x = factor(cyl)
        scales: fill
        faceting: facet_grid(. ~ ., FALSE)
        -----------------------------------
        geom_bar: stat_bin: position_stack: (width = NULL, height = NULL)
        mapping: label = ..count..
        geom_text: vjust = -0.2
        stat_bin: width = 0.9, drop = TRUE, right = TRUE
        position_identity: (width = NULL, height = NULL)

    But I want to get the code I typed in to get the graph. I reckon that I'm missing something essential here... it seems impossible that there's no way to get the call from a ggplot object!


  • Tail recursion and memoization with C#

    - by Jay
    I'm writing a function that finds the full path of a directory based on a database table of entries. Each record contains a key, the directory's name, and the key of the parent directory (it's the Directory table in an MSI, if you're familiar). I had an iterative solution, but it started looking a little nasty. I thought I could write an elegant tail recursive solution, but I'm not sure anymore. I'll show you my code and then explain the issues I'm facing.

        Dictionary<string, string> m_directoryKeyToFullPathDictionary = new Dictionary<string, string>();
        ...
        private string ExpandDirectoryKey(Database database, string directoryKey)
        {
            // check for terminating condition
            string fullPath;
            if (m_directoryKeyToFullPathDictionary.TryGetValue(directoryKey, out fullPath))
            {
                return fullPath;
            }
            // inductive step
            Record record = ExecuteQuery(database, "SELECT DefaultDir, Directory_Parent FROM Directory where Directory.Directory='{0}'", directoryKey);
            // null check
            string directoryName = record.GetString("DefaultDir");
            string parentDirectoryKey = record.GetString("Directory_Parent");
            return Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey), directoryName);
        }

    This is how the code looked when I realized I had a problem (with some minor validation/massaging removed). I want to use memoization to short circuit whenever possible, but that requires me to make a function call to the dictionary to store the output of the recursive ExpandDirectoryKey call. I realize that I also have a Path.Combine call there, but I think that can be circumvented with a ... + Path.DirectorySeparatorChar + .... I thought about using a helper method that would memoize the directory and return the value, so that I could call it like this at the end of the function above:

        return MemoizeHelper(
            m_directoryKeyToFullPathDictionary,
            Path.Combine(ExpandDirectoryKey(database, parentDirectoryKey)),
            directoryName);

    But I feel like that's cheating and not going to be optimized as tail recursion. Any ideas? Should I be using a completely different strategy? This doesn't need to be a super efficient algorithm at all, I'm just really curious. I'm using .NET 4.0, btw. Thanks!
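    One note on the shape of the problem: the Path.Combine happens after the recursive call, so the function above is not tail-recursive to begin with, and storing the result in the cache just before returning doesn't make that any worse. A minimal sketch of the memoize-then-return idea, in Python for brevity (the TABLE lookup is a hypothetical stand-in for the Directory-table query):

        # Hypothetical stand-in for the MSI Directory table:
        # key -> (parent_key, directory_name); None marks the root.
        TABLE = {"BINDIR": ("INSTALLDIR", "bin"), "INSTALLDIR": (None, "MyApp")}

        cache = {}

        def expand(key):
            # Terminating condition: the path was already computed.
            if key in cache:
                return cache[key]
            parent_key, name = TABLE[key]
            full_path = expand(parent_key) + "/" + name if parent_key else name
            cache[key] = full_path  # memoize just before returning
            return full_path

        print(expand("BINDIR"))  # MyApp/bin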


  • Inserting javascript with jQuery .html

    - by Andrew Appleby
    I'm experiencing an issue with a website I'm working on, where I originally believed I could simply replace the contents of $("#main").html('THIS'), which was originally code for a flash object, with a new and improved HTML5/Javascript version. The original code:

        $("#main").html('<div style="height: 100%; width: 100%; overflow: hidden;"><object width="100%" height="100%" codebase="http://fpdownload.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=8,0,0,0" classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000">\
            <param value="images/uploads/'+image_id+'.swf" name="movie">\
            <param value="true" name="allowFullScreen">\
            <param value="#737373" name="bgcolor">\
            <param value="" name="FlashVars">\
            <embed width="100%" height="100%" flashvars="" bgcolor="#737373" allowfullscreen="true" src="images/uploads/'+image_id+'.swf" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer">\
            </object></div>\
            ');
        });

    And my (failed) attempt at inserting my new code:

        $("#main").html('<script type="text/javascript">pano=new pano2vrPlayer("container");skin=new pano2vrSkin(pano);pano.readConfigUrl("xml/tablet_'+image_id+'.xml");hideUrlBar();</script>');

    It doesn't even work when I just put , so I know it's gotta be the javascript itself. I've looked at the solutions out there, but I can't make sense of how to properly implement them here in the most efficient way. Your help is much appreciated, thanks!


  • How to check if two records have a self-referencing relation?

    - by Machine
    Consider the following schema with users and their collegues (friends):

        User:
          columns:
            user_id:
              name: user_id as userId
              type: integer(8)
              unsigned: 1
              primary: true
              autoincrement: true
            first_name:
              name: first_name as firstName
              type: string(45)
              notnull: true
            last_name:
              name: last_name as lastName
              type: string(45)
              notnull: true
            email:
              type: string(45)
              notnull: true
              unique: true
          relations:
            Collegues:
              class: User
              local: invitor
              foreign: invitee
              refClass: CollegueStatus
              equal: true
              onDelete: CASCADE
              onUpdate: CASCADE

    Join table:

        CollegueStatus:
          columns:
            invitor:
              type: integer(8)
              unsigned: 1
              primary: true
            invitee:
              type: integer(8)
              unsigned: 1
              primary: true
            status:
              type: enum(8)
              values: [pending, accepted, denied]
              default: pending
              notnull: true

    Now, let's say I have two records: one for the user making the HTTP request (the logged-in user), and one record for a user he wants to send a message to. I want to check if these users are collegues. Questions:

    - Does Doctrine have any pre-built functionality to check if two records with self-relations are related?
    - If not, how would you write a method to check this?
    - Where would you put said method? (In the User class, the UserTable class, etc.)

    I could probably do something like this:

        public function areCollegues(User $user1, User $user2)
        {
            // Ensure we load collegues if $user1 was fetched with DQL that
            // doesn't load this relation
            $collegues = $user1->get('Collegues');
            $areCollegues = false;
            foreach ($collegues as $collegue) {
                if ($collegue['userId'] === $user2['userId']) {
                    $areCollegues = true;
                    break;
                }
            }
            return $areCollegues;
        }

    But this looks neither efficient nor pretty. I just feel that this should already be solved, for self-referencing relations to be nice to use.


  • Sorting by custom field and fetching whole tree from DB

    - by Niaxon
    Hello everyone, I am trying to build a file browser in tree form and have a problem sorting it. I use PHP and MySQL for this. I've created a mixed (nested set + adjacency) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder','file'), element_size. Let's not discuss right now whether it would be better to move the information about an element (name, type, size) into another table. The function that scans a specified directory and fills the table works correctly. Noteworthy, I am adding elements to the tree in a specific order: folders first and then files. After that I can easily fetch and display the whole table on the page using a simple query:

        SELECT * FROM element WHERE 1=1 ORDER BY left_key

    With the result of that query and another function I can generate the correct HTML code (<ul><li>... and so on) to display the tree. Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example, I want to order my result by size. Here I need to keep in mind the whole hierarchy of the tree and the rule: folders first, files later. I believe I can do that by generating a recursive query in PHP:

        SELECT * FROM element
        WHERE parent_id = {$parentId}
        ORDER BY element_type,    -- so folders would be first
                 size ASC/DESC    -- or name, for example

    After that, for each result which has type = 'folder', I would send another query to get its contents. It is also possible to fetch the whole tree by left_key and sort it in PHP as an array afterwards, but I guess that would be worse :) I wonder if there is a better and more efficient way to do such a thing?
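    For comparison, the fetch-once-and-sort-in-PHP alternative is less bad than it sounds when the whole subtree is needed anyway: fetch a single result set ordered by left_key, group the rows by parent_id, and sort each sibling group locally. A rough Python sketch of that idea (field names follow the table above; a root parent_id of 0 is an assumption):

        from collections import defaultdict

        def sort_tree(rows, key="element_size", reverse=False, root=0):
            # Group rows by parent so each folder's children sort locally.
            children = defaultdict(list)
            for row in rows:
                children[row["parent_id"]].append(row)

            out = []

            def emit(parent_id):
                kids = children.get(parent_id, [])
                kids.sort(key=lambda r: r[key], reverse=reverse)
                # Stable second sort: folders float to the top while the
                # key order within each group is preserved.
                kids.sort(key=lambda r: r["element_type"] != "folder")
                for kid in kids:
                    out.append(kid)
                    emit(kid["element_id"])

            emit(root)
            return out  # rows in display order, folders first per level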


  • Perl - CodeGolf - Nested loops & SQL inserts

    - by CheeseConQueso
    I had to make a really small and simple script that would fill a table with string values according to these criteria:

    - 2 characters long
    - 1st character is always numeric (0-9)
    - 2nd character is (0-9) but also includes "X"
    - Values need to be inserted into a table on a database

    The program would execute:

        insert into table (code) values ('01');
        insert into table (code) values ('02');
        insert into table (code) values ('03');
        insert into table (code) values ('04');
        insert into table (code) values ('05');
        insert into table (code) values ('06');
        insert into table (code) values ('07');
        insert into table (code) values ('08');
        insert into table (code) values ('09');
        insert into table (code) values ('0X');

    And so on, until all 110 values were inserted. My code (just to accomplish it, not to minimize it or make it efficient) was:

        use strict;
        use DBI;

        my ($db1,$sql,$sth,%dbattr);
        %dbattr=(ChopBlanks => 1,RaiseError => 0);
        $db1=DBI->connect('DBI:mysql:','','',\%dbattr);

        my @code;
        for(0..9) {
            $code[0]=$_;
            for(0..9) {
                $code[1]=$_;
                insert(@code);
            }
            insert($code[0],"X");
        }

        sub insert {
            my $skip=0;
            foreach(@_) {
                if($skip==0) {
                    $sql="insert into table (code) values ('".$_[0].$_[1]."');";
                    $sth=$db1->prepare($sql);
                    $sth->execute();
                    $skip++;
                } else {
                    $skip--;
                }
            }
        }
        exit;

    I'm just interested to see a really succinct & precise version of this logic.
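    Not Perl golf, but the shape of a succinct version is the same in any language: one nested loop over the two character sets, batched into a single prepared statement. A Python sketch of the idea (sqlite3 and the table 't' stand in for the real MySQL handle and table):

        import sqlite3

        conn = sqlite3.connect(":memory:")           # stand-in for the MySQL connection
        conn.execute("CREATE TABLE t (code TEXT)")   # 't' stands in for the real table

        codes = [a + b for a in "0123456789" for b in "0123456789X"]  # all 110 values
        conn.executemany("INSERT INTO t (code) VALUES (?)", ((c,) for c in codes))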


  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index - the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data, using fields other than the timestamp. Doing the query on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week. My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are:

    - Is there another way to do the queries?
    - If not, how do I update the data in the indexed table efficiently?

    So far I have found two approaches for updating aux_table:

    1. Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created.
    2. Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table. Occasionally drop older entries. Only copying entries with a strictly greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).


  • Recursion in prepared statements

    - by Rob
    I've been using PDO and preparing all my statements, primarily for security reasons. However, I have a part of my code that executes the same statement many times with different parameters, and I thought this would be where prepared statements really shine. But they actually break the code... The basic logic of the code is this:

        function someFunction($something) {
            global $pdo;
            $array = array();

            static $handle = null;
            if (!$handle) {
                $handle = $pdo->prepare("A STATEMENT WITH :a_param");
            }
            $handle->bindValue(":a_param", $something);

            if ($handle->execute()) {
                while ($row = $handle->fetch()) {
                    $array[] = someFunction($row['blah']);
                }
            }
            return $array;
        }

    It looked fine to me, but it was missing out a lot of rows. Eventually I realised that the statement handle was being changed (executed with a different param), which means the call to fetch in the while loop will only ever work once; then the function calls itself again, and the result set is changed. So I am wondering what the best way is of using PDO prepared statements recursively. One way could be to use fetchAll(), but the manual says that has substantial overhead, and the whole point of this is to make it more efficient. The other thing I could do is not reuse a static handle and instead make a new one every time. I believe that since the query string is the same, internally the MySQL driver will be using a prepared statement anyway, so there is just the small overhead of creating a new handle on each recursive call. Personally I think that defeats the point. Or is there some way of rewriting this?
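    For what it's worth, the fetch-everything-first variant can keep the single shared handle, because draining the result set before recursing leaves the handle free to be re-executed by the nested calls. A sketch of that shape in Python/sqlite3 (the node table is made up):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        # Hypothetical self-referential table: each row points at a parent.
        conn.execute("CREATE TABLE node (id INTEGER, parent INTEGER)")
        conn.executemany("INSERT INTO node VALUES (?, ?)",
                         [(1, None), (2, 1), (3, 1), (4, 2)])

        def descendants(parent_id):
            # fetchall() drains the cursor up front, so the recursive
            # calls below can safely re-run the same query.
            rows = conn.execute("SELECT id FROM node WHERE parent = ?",
                                (parent_id,)).fetchall()
            result = []
            for (node_id,) in rows:
                result.append(node_id)
                result.extend(descendants(node_id))
            return result

        print(descendants(1))  # depth-first: [2, 4, 3]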


  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and development servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server.

    Pushing to Staging Server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory.

    Pushing to Production Server: Here's my newest source of confusion... In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live development server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls?

    Config Files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions?

    Accessing Data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost? I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.


  • SQL Server 2008 - Keyword search using table Join

    - by Aaron Wagner
    Ok, I created a stored procedure that, among other things, searches 5 columns for a particular keyword. To accomplish this, I have the keywords parameter split out by a function and returned as a table. Then I do a LEFT JOIN on that table, using a LIKE constraint. I had this working beautifully, and then all of a sudden it stopped working: now it returns every row, instead of just the rows it needs. The other caveat is that if the keyword parameter is empty, it should be ignored. Given what's below, is there A) a glaring mistake, or B) a more efficient way to approach this? Here is what I have currently:

        ALTER PROCEDURE [dbo].[usp_getOppsPaged]
            @startRowIndex int,
            @maximumRows int,
            @city varchar(100) = NULL,
            @state char(2) = NULL,
            @zip varchar(10) = NULL,
            @classification varchar(15) = NULL,
            @startDateMin date = NULL,
            @startDateMax date = NULL,
            @endDateMin date = NULL,
            @endDateMax date = NULL,
            @keywords varchar(400) = NULL
        AS
        BEGIN
            SET NOCOUNT ON;

            ;WITH Results_CTE AS (
                SELECT opportunities.*, organizations.*,
                       departments.dept_name, departments.dept_address,
                       departments.dept_building_name, departments.dept_suite_num,
                       departments.dept_city, departments.dept_state,
                       departments.dept_zip, departments.dept_international_address,
                       departments.dept_phone, departments.dept_website,
                       departments.dept_gen_list,
                       ROW_NUMBER() OVER (ORDER BY opp_id) AS RowNum
                FROM opportunities
                JOIN departments ON opportunities.dept_id = departments.dept_id
                JOIN organizations ON departments.org_id = organizations.org_id
                LEFT JOIN Split(',', @keywords) AS kw
                    ON (title LIKE '%'+kw.s+'%'
                        OR [description] LIKE '%'+kw.s+'%'
                        OR tasks LIKE '%'+kw.s+'%'
                        OR requirements LIKE '%'+kw.s+'%'
                        OR comments LIKE '%'+kw.s+'%')
                WHERE (
                    (@city IS NOT NULL AND (city LIKE '%'+@city+'%' OR dept_city LIKE '%'+@city+'%' OR org_city LIKE '%'+@city+'%'))
                    OR (@state IS NOT NULL AND ([state] = @state OR dept_state = @state OR org_state = @state))
                    OR (@zip IS NOT NULL AND (zip = @zip OR dept_zip = @zip OR org_zip = @zip))
                    OR (@classification IS NOT NULL AND (classification LIKE '%'+@classification+'%'))
                    OR ((@startDateMin IS NOT NULL AND @startDateMax IS NOT NULL) AND ([start_date] BETWEEN @startDateMin AND @startDateMax))
                    OR ((@endDateMin IS NOT NULL AND @endDateMax IS NOT NULL) AND ([end_date] BETWEEN @endDateMin AND @endDateMax))
                    OR (
                        (@city IS NULL AND @state IS NULL AND @zip IS NULL
                         AND @classification IS NULL AND @startDateMin IS NULL
                         AND @startDateMax IS NULL AND @endDateMin IS NULL
                         AND @endDateMin IS NULL)
                    )
                )
            )
            SELECT * FROM Results_CTE
            WHERE RowNum >= @startRowIndex
              AND RowNum < @startRowIndex + @maximumRows;
        END


  • C++ - passing references to boost::shared_ptr

    - by abigagli
    If I have a function that needs to work with a shared_ptr, wouldn't it be more efficient to pass it a reference to it (so as to avoid copying the shared_ptr object)? What are the possible bad side effects? I envision two possible cases:

    1) inside the function a copy is made of the argument, like in

        ClassA::take_copy_of_sp(boost::shared_ptr<foo> &sp)
        {
            ...
            m_sp_member = sp; // This will copy the object, incrementing refcount
            ...
        }

    2) inside the function the argument is only used, like in

        Class::only_work_with_sp(boost::shared_ptr<foo> &sp) // Again, no copy here
        {
            ...
            sp->do_something();
            ...
        }

    I can't see in either case a good reason to pass the boost::shared_ptr by value instead of by reference. Passing by value would only "temporarily" increment the reference count due to the copying, and then decrement it when exiting the function scope. Am I overlooking something? Andrea.

    EDIT: Just to clarify, after reading several answers: I perfectly agree on the premature-optimization concerns, and I always try to first profile, then work on the hotspots. My question was more from a purely technical code point of view, if you know what I mean.


  • How can I get this week's dates in Perl?

    - by ABach
    I have the following loop to calculate the dates of the current week and print them out. It works, but I am swimming in the amount of date/time possibilities in Perl and want to get your opinion on whether there is a better way. Here's the code I've written:

        #!/usr/bin/env perl
        use warnings;
        use strict;
        use DateTime;

        # Calculate numeric value of today and the
        # target day (Monday = 1, Sunday = 7); the
        # target, in this case, is Monday, since that's
        # when I want the week to start
        my $today_dt = DateTime->now;
        my $today = $today_dt->day_of_week;
        my $target = 1;

        # Create DateTime copies to act as the "bookends"
        # for the date range
        my ($start, $end) = ($today_dt->clone(), $today_dt->clone());

        if ($today == $target) {
            # If today is the target, "start" is already set;
            # we simply need to set the end date
            $end->add( days => 6 );
        }
        else {
            # Otherwise, we calculate the Monday preceding today
            # and the Sunday following today
            my $delta = ($target - $today + 7) % 7;
            $start->add( days => $delta - 7 );
            $end->add( days => $delta - 1 );
        }

        # I clone the DateTime object again because, for some reason,
        # I'm wary of using $start directly...
        my $cur_date = $start->clone();

        while ($cur_date <= $end) {
            my $date_ymd = $cur_date->ymd;
            print "$date_ymd\n";
            $cur_date->add( days => 1 );
        }

    As mentioned, this works, but is it the quickest or most efficient? I'm guessing that quickness and efficiency may not necessarily go together, but your feedback is very appreciated.
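    The special-casing of Monday falls away if you subtract the zero-based weekday directly: the whole window is then two expressions. The same computation as a Python sketch, for comparison:

        from datetime import date, timedelta

        today = date.today()
        monday = today - timedelta(days=today.weekday())  # weekday(): Monday == 0
        week = [monday + timedelta(days=i) for i in range(7)]

        for day in week:
            print(day.isoformat())  # YYYY-MM-DD, like DateTime->ymd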


  • Which languages and techniques can I use to improve my coding practices?

    - by Danjah
    I've been offered the opportunity to upskill through study, while at work, which is great.

    My background: I am mostly self-taught, but have worked with many excellent people over the years - both self-taught and fully educated - and on many decent projects. I have mild experience in Actionscript, I'm getting better every day with my Javascript, and my CSS is angled at best practice but needs a bit of modernising. I'm a traditional interface developer; I'm not stupid and I like a challenge.

    My goal: I need to start seeing ways of applying better logic, optimising code, refactoring, different styles of development (agile, others?), and... well, I need to try to start thinking like a more solid programmer. It's hard to describe: I have good solutions and I'm efficient, but I KNOW that there's a bunch I am missing. I am already employed with a solid career, but I feel the need to fill gaps.

    My question/s:

    - Are there a set of guiding principles you can recommend I focus on to improve the points above?
    - Are there particular programming languages which I might focus on to get a broader overview?
    - Do you think I should avoid particular styles of development, or even languages, while solidifying what might end up being part 'the basics' but hopefully 'advanced programming'?

    -- Sorry if this appears off topic or something, but I figure you're probably some of the best people to ask.


  • Need Help finding an appropriate task assignment algorithm for a college project involving coordination

    - by Trif Mircea
    Hello. I am a long-time lurker here and have found many answers regarding jQuery and web development topics over time, so I decided to ask a question of my own. This time I have to create a C++ project for college which should help manage the workflow of a company that provides all kinds of services through in-the-field teams. The ideas I have so far:

    - A client-server application; the server is a dispatcher where all the orders from clients arrive, and the clients are mobile devices (PDAs), each team in the field having one.
    - An order from a client is a task. Each task is made up of a series of subtasks.
    - You have a database with estimations of how long a task should take to complete.
    - You also know which tasks or subtasks each team in the field can perform, based on what kind of specialists make up the team (not going to complicate the problem by adding needed materials; it is assumed that if a member of a team can perform a subtask, he has the stuff needed).

    Now, knowing these factors, what would a good task assignment algorithm be? The criteria: how many tasks can a team do, how many tasks they have in the queue; it could also be location - how far away they are from the place - but I don't think I can implement that. It needs to be efficient and also to adapt quickly if the human dispatcher manually assigns a task. A simple baseline is sketched below. Any help or leads would be really appreciated. Also, I'm not 100% sure about the idea, so if you have another way you would go about creating such an application, please share, even if it is just a quick outline. I have to write a theoretical part too, so even if the ideas are far more complex than what I outlined, that would be OK; I'd write those up and implement what I can.
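    One simple baseline worth writing up before anything fancier: greedy dispatch to the capable team with the shortest queue. A rough Python sketch of that heuristic (the task/team shapes are made up for illustration):

        # Greedy dispatcher: among teams qualified for the task, pick the
        # one with the least estimated work queued. Shapes are illustrative.
        def assign(task, teams):
            capable = [t for t in teams if task["skill"] in t["skills"]]
            if not capable:
                return None  # leave it for the human dispatcher
            team = min(capable, key=lambda t: sum(j["hours"] for j in t["queue"]))
            team["queue"].append(task)
            return team

        teams = [
            {"name": "A", "skills": {"plumbing", "electrical"}, "queue": []},
            {"name": "B", "skills": {"electrical"}, "queue": []},
        ]
        print(assign({"skill": "electrical", "hours": 2}, teams)["name"])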


  • Right way to return proxy model instance from a base model instance in Django ?

    - by sotangochips
    Say I have models:

        class Animal(models.Model):
            type = models.CharField(max_length=255)

        class Dog(Animal):
            def make_sound(self):
                print "Woof!"
            class Meta:
                proxy = True

        class Cat(Animal):
            def make_sound(self):
                print "Meow!"
            class Meta:
                proxy = True

    Let's say I want to do:

        animals = Animal.objects.all()
        for animal in animals:
            animal.make_sound()

    I want to get back a series of Woofs and Meows. Clearly, I could just define a make_sound in the original model that forks based on animal type, but then every time I add a new animal type (imagine they're in different apps), I'd have to go in and edit that make_sound function. I'd rather just define proxy models and have them define the behavior themselves. From what I can tell, there's no way of returning mixed Cat and Dog instances, but I figured maybe I could define a "get_proxy_model" method on the main class that returns a cat or a dog model. Surely you could do this and pass something like the primary key, and then just do Cat.objects.get(pk = passed_in_primary_key). But that would mean doing an extra query for data you already have, which seems redundant. Is there any way to turn an animal into a cat or a dog instance in an efficient way? What's the right way to do what I want to achieve?
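    One pattern that avoids the extra query: because a proxy model shares its parent's table and fields, an already-fetched instance can simply be retagged with the proxy class. A sketch of that idea (the PROXIES registry and the type values are assumptions; each app could add its own entry):

        # Retag fetched Animal instances with their proxy class in place.
        # No second query: proxies share the concrete model's data.
        PROXIES = {"dog": Dog, "cat": Cat}

        def as_proxy(animal):
            animal.__class__ = PROXIES.get(animal.type, Animal)
            return animal

        for animal in map(as_proxy, Animal.objects.all()):
            animal.make_sound()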


  • Determining polygon intersection and containment

    - by Victor Liu
    I have a set of simple (no holes, no self-intersections) polygons, and I need to check that they don't intersect each other (one can be entirely contained in another; that is okay). I can check this by simply checking the per-vertex inside-ness of one polygon versus other polygons.

    I also need to determine the containment tree, which is the set of relationships that say which polygon contains any given polygon. Since no polygon can intersect any other, any contained polygon has a unique container: the "next-bigger" one. In other words, if A contains B contains C, then A is the parent of B, and B is the parent of C, and we don't consider A the parent of C.

    The question: How do I efficiently determine the containment relationships and check the non-intersection criterion? I ask this as one question because maybe a combined algorithm is more efficient than solving each problem separately.

    The algorithm should take as input a list of polygons, given by a list of their vertices. It should produce a boolean B indicating whether none of the polygons intersect any other polygon, and, if B = true, a list of pairs (P, C) where polygon P is the parent of child C. This is not homework. This is for a hobby project I am working on.
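    For the containment half, once the non-intersection check has passed, the tree falls out of a sort by area: scanning containers smallest-first, the first polygon that contains any single vertex of P is P's immediate parent. A rough Python sketch under that assumption (the edge-edge intersection test itself would still need a separate pass, e.g. a sweep line):

        # Polygons are lists of (x, y) tuples, assumed not to cross.

        def area(poly):
            # Shoelace formula; absolute value, orientation doesn't matter.
            return abs(sum(x1 * y2 - x2 * y1
                           for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]))) / 2

        def contains_point(poly, pt):
            # Standard even-odd ray cast.
            x, y = pt
            inside = False
            for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
                if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                    inside = not inside
            return inside

        def containment_pairs(polys):
            order = sorted(range(len(polys)), key=lambda i: area(polys[i]))
            pairs = []
            for idx, i in enumerate(order):
                # The smallest larger polygon containing one vertex of
                # polys[i] is its immediate ("next-bigger") parent.
                for j in order[idx + 1:]:
                    if contains_point(polys[j], polys[i][0]):
                        pairs.append((j, i))  # (parent index, child index)
                        break
            return pairs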


  • Efficiently sending protocol buffer messages with http on an android platform

    - by Ben Griffiths
    I'm trying to send messages generated by Google Protocol Buffer code via a simple HTTP scheme to a server. What I have currently implemented is here (forgive the obvious incompleteness...):

        HttpClient client = new DefaultHttpClient();
        String url = "http://192.168.1.69:8888/sdroidmarshal";
        HttpPost postRequest = new HttpPost(url);

        String proto = offers.build().toString();

        List<NameValuePair> nameValuePairs = new ArrayList<NameValuePair>(1);
        nameValuePairs.add(new BasicNameValuePair("sdroidmsg", proto));
        postRequest.setEntity(new UrlEncodedFormEntity(nameValuePairs));

        try {
            ResponseHandler<String> responseHandler = new BasicResponseHandler();
            String responseBody = client.execute(postRequest, responseHandler);
        } catch (Throwable t) {
        }

    I'm not that experienced with communications over the internet, and no more so with HTTP, though I do understand the basics... So my question, before I blindly develop the rest of the application around this, is whether or not this is particularly efficient? I would ideally like to keep messages small, and I assume toString() adds some unnecessary formatting.
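    The instinct about toString() is right: on a protobuf message it produces the human-readable text format, whereas toByteArray() gives the compact binary wire encoding, which is what you would normally POST as a raw binary body rather than a URL-encoded form field. A sketch of that request shape, in Python for brevity (URL and header follow the snippet above; the payload bytes stand in for offers.build().toByteArray()):

        import urllib.request

        payload = b"\x08\x01"  # placeholder bytes; really offers.build().toByteArray()
        req = urllib.request.Request(
            "http://192.168.1.69:8888/sdroidmarshal",
            data=payload,
            headers={"Content-Type": "application/x-protobuf"},
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read()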


  • What's the difference between these LINQ queries?

    - by SnAzBaZ
    I use LINQ-SQL as my DAL; I then have a project called DB which acts as my BLL. Various applications then access the BLL to read/write data from the SQL database. I have these methods in my BLL for one particular table:

        public IEnumerable<SystemSalesTaxList> Get_SystemSalesTaxList()
        {
            return from s in db.SystemSalesTaxLists
                   select s;
        }

        public SystemSalesTaxList Get_SystemSalesTaxList(string strSalesTaxID)
        {
            return Get_SystemSalesTaxList().Where(s => s.SalesTaxID == strSalesTaxID).FirstOrDefault();
        }

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            return Get_SystemSalesTaxList().Where(s => s.ZipCode == strZipCode).FirstOrDefault();
        }

    All pretty straightforward, I thought. Get_SystemSalesTaxListByZipCode is always returning a null value, though, even when given a ZIP code that exists in that table. If I write the method like this, it returns the row I want:

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            var salesTax = from s in db.SystemSalesTaxLists
                           where s.ZipCode == strZipCode
                           select s;
            return salesTax.FirstOrDefault();
        }

    Why does the other method not return the same, as the query should be identical? Note that the overloaded Get_SystemSalesTaxList(string strSalesTaxID) returns a record just fine when I give it a valid SalesTaxID. Is there a more efficient way to write these "helper" type classes? Thanks!


  • Two radically different queries against 4 mil records execute in the same time - one uses brute force.

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop.

    The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches.

    Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance.

    Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you.

    Brute-force query: [execution plan image]

    Index-based exception query: [execution plan image]

    My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.


  • how to synchronize database table and directory with php

    - by twmulloy
    hello, I have a directory with files and a database table with what should be the same files. I would like to be able to synchronize the database table with the directory. What would be the most efficient way to do this, or would I realistically only be able to do it in a brute manner? Here's my approach:

    1. Retrieve all of the files in the directory as an array.
    2. Retrieve all of the filenames in the database table as an array.
    3. Loop through the file values in the directory array and use in_array() on the database table array to verify each filename is in that array; if not, build up an array of the missing filenames, then run a db query to add each missing file row to the database table.
    4. Loop through the database table array and use in_array() on the directory array; anything not found in the directory array will just be deleted from the table.

    Is there a better way to go about this, or something better for this in PHP than in_array()? One idea is sketched below.
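    The repeated in_array() scans are what make this brute force (each one is a linear search). Treating both listings as sets gives the two differences directly; in PHP that's array_diff() in both directions, and the idea looks like this as a Python sketch (the table and directory names are made up):

        import os
        import sqlite3

        conn = sqlite3.connect("files.db")
        conn.execute("CREATE TABLE IF NOT EXISTS files (filename TEXT)")

        on_disk = set(os.listdir("uploads"))  # hypothetical directory
        in_db = {row[0] for row in conn.execute("SELECT filename FROM files")}

        # Two set differences replace the nested in_array() loops.
        conn.executemany("INSERT INTO files (filename) VALUES (?)",
                         [(f,) for f in on_disk - in_db])
        conn.executemany("DELETE FROM files WHERE filename = ?",
                         [(f,) for f in in_db - on_disk])
        conn.commit()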


  • Oracle - UPSERT with update not executed for unmodified values

    - by Buthrakaur
    I'm using the following update-or-insert Oracle statement at the moment:

        BEGIN
            UPDATE DSMS
            SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
            WHERE DSM = :DSM;

            IF (SQL%ROWCOUNT = 0) THEN
                INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
                VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
            END IF;
        END;

    This runs fine, except that the update statement performs a dummy update if the data is the same as the parameter values provided. I would not mind the dummy update in a normal situation, but there is a replication/synchronization system built over this table, using triggers on the tables to capture updated records, and executing this statement frequently for many records simply means that I'd cause huge traffic in the triggers and the sync system. Is there any simple way to reformulate this code so that the update statement doesn't update the record when not necessary, without using the following IF-EXISTS check code, which I find not sleek enough and maybe also not the most efficient for this task?

        DECLARE
            CNT NUMBER;
        BEGIN
            SELECT COUNT(1) INTO CNT FROM DSMS WHERE DSM = :DSM;

            IF SQL%FOUND THEN
                UPDATE DSMS
                SET SURNAME = :SURNAME, FIRSTNAME = :FIRSTNAME, VALID = :VALID
                WHERE DSM = :DSM
                  AND (SURNAME != :SURNAME OR FIRSTNAME != :FIRSTNAME OR VALID != :VALID);
            ELSE
                INSERT INTO DSMS (DSM, SURNAME, FIRSTNAME, VALID)
                VALUES (:DSM, :SURNAME, :FIRSTNAME, :VALID);
            END IF;
        END;


  • Is there any benefit to my rather quirky character sizing convention?

    - by Paul Alan Taylor
    I love things that are a power of 2. I celebrated my 32nd birthday knowing it was the last time in 32 years I'd be able to claim that my age was a power of 2. I'm obsessed. It's like being some Z-list Batman villain, except without the colourful adventures and a face full of batarangs. I ensure that all my enum values are powers of 2, if only for future bitwise operations, and I'm reasonably assured that there is some purpose (even if latent) for doing it. Where I'm less sure is in how I define the lengths of database fields. Again, I can't help it. Everything ends up being a power of 2.

        CREATE TABLE Person
        (
            PersonID int IDENTITY PRIMARY KEY
            ,Firstname varchar(64)
            ,Surname varchar(128)
        )

    Can any SQL super-boffins who know about the internals of how stuff is stored and retrieved tell me whether there is any benefit to my inexplicable obsession? Is it more efficient to size character fields this way? Can anyone pop in with an "actually, what you're doing works because ....."? I suspect I'm just getting crazier in my older age, but it'd be nice to know that there is some method to my madness.


  • Noob - Cycle through stored names and skip blanks

    - by ActiveJimBob
    NOOB trying to make my code more efficient. On a scroll-button push, the function 'SetName' stores a number in the integer 'iName', which is an index against 5 names stored in memory. If a name is not set in memory, it skips to the next. The code works, but takes up a lot of room. Any advice appreciated. Code:

        #include <string.h>

        /* BYTE, memory and Button_pushed come from the surrounding firmware */

        int iName = 0;

        BYTE GetName()
        {
            return iName;
        }

        void SetName(int iNewName)
        {
            while (iName != iNewName) {
                switch (iNewName) {
                case 1:
                    if (strlen(memory.m_nameA) == 0) iNewName++;
                    else iName = iNewName;
                    break;
                case 2:
                    if (strlen(memory.m_nameB) == 0) iNewName++;
                    else iName = iNewName;
                    break;
                case 3:
                    if (strlen(memory.m_nameC) == 0) iNewName++;
                    else iName = iNewName;
                    break;
                case 4:
                    if (strlen(memory.m_nameD) == 0) iNewName++;
                    else iName = iNewName;
                    break;
                case 5:
                    if (strlen(memory.m_nameE) == 0) iNewName++;
                    else iName = iNewName;
                    break;
                default:
                    iNewName = 1;  /* wrap around past the last slot */
                    break;
                } /* end of case */
            } /* end of loop */
        } /* end of SetName function */

        void main()
        {
            while (1) {
                if (Button_pushed)
                    SetName(GetName() + 1);
            } /* end of infinite loop */
        } /* end of main */
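    The five near-identical cases collapse if the name slots sit in one table that a loop can walk. A sketch of that table-driven idea, in Python for brevity (the names list stands in for memory.m_nameA through m_nameE):

        # Step forward from the current slot, skipping empty names and
        # wrapping past the end; stay put if every slot is blank.
        names = ["Alice", "", "Carol", "", "Eve"]  # "" == unset slot

        def next_name(current):
            for step in range(1, len(names) + 1):
                candidate = (current + step) % len(names)
                if names[candidate]:
                    return candidate
            return current  # all slots empty

        print(next_name(0))  # 2 -> "Carol"

    In C the same shape would be an array of five char pointers indexed modulo 5, replacing the switch entirely.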


  • Bulletin board - Database optimisation

    - by andrew
    This question is a follow-on from this question.

    The project and problem: The project I am currently working on is a bulletin board for a large non-profit organisation. The bulletin board will be used to allow inter-office communication within the organisation. I am building the application and have been having trouble extracting the results that I need from my database, because I don't think it is properly normalized and because of limitations in my knowledge of relational database theory and MySQL. I would appreciate input into the design of the board in general and, in particular, ways that the database structure can be improved to facilitate efficient queries and help me develop this application (and future applications) faster.

    Business logic - the bulletin board will be used in the following ways:

    Posting bulletins and responses to bulletins: Employees, or 'users', in offices around the country will be able to post messages to the bulletin board. Bulletins must be posted to a location and categorised - I'll call these "bulletins". Users will be able to post any number of replies to any one bulletin, and users will be able to reply to their own bulletins - I'll call these 'replies'.

    Rating bulletins and replies: Users will be able to either 'like' or 'dislike' a bulletin or a reply, and the total number of likes or dislikes will be shown for each bulletin or reply.

    Viewing the bulletin board and responses: Bulletins can be displayed chronologically. Users can sort bulletins chronologically or chronologically by the latest reply to that bulletin (let me know if you need more explanation). When a particular bulletin is selected, replies to that bulletin will be displayed chronologically.

    @PerformanceDBA - edited 10:34 EST 28/12/10: I have begun implementing the data model. I assume that the 6th data model is the physical model, because it contains the associative tables. I am going to post any questions that I have below. I will put up a database dump once I am done. I will then put up a list of all the queries that I need to run on the database and begin writing them. I hope you had a good Christmas. I'm in Canada and there's snow!

    Implementation of Physical model

