Search Results

Search found 12439 results on 498 pages for 'wondering'.

  • dynamic module creation

    - by intuited
    I'd like to dynamically create a module from a dictionary, and I'm wondering if adding an element to sys.modules is really the best way to do this. E.g.:

        context = {'a': 1, 'b': 2}

        import types
        test_context_module = types.ModuleType('TestContext', 'Module created to provide a context for tests')
        test_context_module.__dict__.update(context)

        import sys
        sys.modules['TestContext'] = test_context_module

    My immediate goal in this regard is to be able to provide a context for timing test execution:

        import timeit
        timeit.Timer('a + b', 'from TestContext import *')

    It seems that there are other ways to do this, since the Timer constructor takes objects as well as strings. I'm still interested in learning how to do this, though, since a) it has other potential applications; and b) I'm not sure exactly how to use objects with the Timer constructor; doing so may prove to be less appropriate than this approach in some circumstances.

    EDITS/REVELATIONS/PHOOEYS/EUREKAE:

    I've realized that the example code relating to running timing tests won't actually work, because import * only works at the module level, and the context in which that statement is executed is that of a function in the testit module. In other words, the globals dictionary used when executing that code is that of __main__, since that's where I was when I wrote the code in the interactive shell. So that rationale for figuring this out is a bit botched, but it's still a valid question.

    I've discovered that the code run in the first set of examples has the undesirable effect that the namespace in which the newly created module's code executes is that of the module in which it was declared, not its own module. This is like way weird, and could lead to all sorts of unexpected rattlesnakeic sketchiness. So I'm pretty sure that this is not how this sort of thing is meant to be done, if it is in fact something that the Guido doth shine upon.

    The similar-but-subtly-different case of dynamically loading a module from a file that is not in Python's include path is quite easily accomplished using imp.load_source('NewModuleName', 'path/to/module/module_to_load.py'). This does load the module into sys.modules. However, this doesn't really answer my question, because really, what if you're running Python on an embedded platform with no filesystem? I'm battling a considerable case of information overload at the moment, so I could be mistaken, but there doesn't seem to be anything in the imp module that's capable of this.

    But the question, essentially, at this point is how to set the global (i.e. module) context for an object. Maybe I should ask that more specifically? And at a larger scope, how to get Python to do this while shoehorning objects into a given module?
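
    (A side note on the timing goal: on Python 3.5 and later, timeit accepts a globals argument, which sidesteps the module machinery entirely. A minimal sketch of both routes follows; the only assumption is the TestContext name from the question. Note that a star-import cannot be used in the setup string, since timeit compiles the setup inside a generated function body where star-imports are illegal.)

        import sys
        import timeit
        import types

        context = {'a': 1, 'b': 2}

        # Route 1: synthesize a module and register it so that import finds it.
        mod = types.ModuleType('TestContext', 'Module created to provide a context for tests')
        mod.__dict__.update(context)
        sys.modules['TestContext'] = mod
        # Star-imports are illegal inside timeit's generated function, so name the imports:
        print(timeit.Timer('a + b', 'from TestContext import a, b').timeit())

        # Route 2 (Python 3.5+): hand timeit the namespace directly, no module needed.
        print(timeit.Timer('a + b', globals=context).timeit())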

  • Cross-process marshalling problem with an array of points

    - by ElMagn
    Hi All, we have what we think is a marshalling problem with a renderer object when called across process boundaries. The renderer is an ATL COM server with a COM object that implements the IPoints interface defined below:

        typedef [uuid(B0E01719-005A-427c-B9DD-B42A18E969AE)]
        struct Point {
            double X;
            double Y;
        } Point;

        [
            object,
            uuid(3BFECFE3-B4FB-4f14-8257-6E065D02E3B3),
            helpstring("IPoints Interface"),
            dual,
        ]
        interface IPoints : IDispatch
        {
            HRESULT DrawPolyLine([in] long hDC, [in] short count, [in, size_is(count)] Point * points);
            // many more like DrawLine
        };

    The count parameter represents the number of points, and the points parameter represents an array of the actual points. We have two processes running: a graphical display process (GDP) and a tabular (grid) display process (TDP). A factory in the GDP, written in C#, creates the renderer and the clients of the renderer in the GDP. When the clients call into the renderer, everything displays correctly. (The renderer is created at start-up, BTW.) There is another factory in the TDP, written in VB6, that calls into the factory in the GDP to create the clients. When these clients call into the renderer, only the first point in the array is marshalled correctly; all the other points are garbage. It seems that the rendering works only when the client creation is started from the same process as the renderer.

    Now, I am not sure what the solution to this problem is. It seems that if we can guarantee that the clients are always created from a thread in the same GDP process as the renderer, then the points are marshalled correctly. We tried using a background thread from the thread pool in C# and it indeed worked. The problem is that the Windows Forms created from the clients stopped working, because accessing a form's controls from a thread other than the thread that created the control is not allowed. We might change the calls that access the forms, but we have quite a few of them, and are trying to look into a different solution that might involve making changes to the renderer. The other problem is that the renderer is legacy code and we can't just change the interface. I am wondering what we can do to the renderer's interface that would help with marshalling across process calls. Any ideas would be greatly appreciated. Regards, ElMagn

  • Getting 404 when attempting to POST file to Google Cloud Storage from service account

    - by klactose
    I'm wondering if anyone can tell me the proper syntax & formatting for a service account to send a POST Object to Bucket request? I'm attempting it programmatically using the HttpComponents library. I manage to get a token from my GoogleCredential, but every time I construct the POST request, I get:

        HTTP/1.1 403 Forbidden
        <?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message><Details>bucket-name</Details></Error>

    The Google documentation that describes the request methods mentions posting using HTML forms, but I'm hoping that wasn't suggesting the ONLY way to get the job done. I know that HttpComponents has a way to explicitly create form data by using UrlEncodedFormEntity, but it doesn't support multipart data, which is why I went with the MultipartEntity class. My code is below:

        MultipartEntity entity = new MultipartEntity(HttpMultipartMode.BROWSER_COMPATIBLE);

        String token = credential.getAccessToken();
        entity.addPart("Authorization", new StringBody("OAuth " + token));

        String date = formatDate(new Date());
        entity.addPart("Date", new StringBody(date));
        entity.addPart("Content-Encoding", new StringBody("UTF-8"));
        entity.addPart("Content-Type", new StringBody("multipart/form-data"));
        entity.addPart("bucket", new StringBody(bucket));
        entity.addPart("key", new StringBody("fileName"));
        entity.addPart("success_action_redirect", new StringBody("/storage"));

        File uploadFile = new File("pathToFile");
        FileBody fileBody = new FileBody(uploadFile, "text/xml");
        entity.addPart("file", fileBody);

        httppost.setEntity(entity);
        System.out.println("Posting URI = " + httppost.toString());
        HttpResponse response = client.execute(httppost);
        HttpEntity resp_entity = response.getEntity();

    As I mentioned, I am able to get an actual token, so I'm pretty sure the problem is in how I've formed the request, as opposed to not being properly authenticated. Keep in mind this is being performed by a service account, which means that it does have read/write access. Thanks for reading, and I appreciate any help!
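
    (One thing that stands out in posts like this: Authorization, Date and Content-Type are HTTP headers, not multipart form fields. A hedged sketch of the same upload in Python with the requests library, moving them into headers — the bucket URL, field names and token are placeholders lifted from the question, not a verified GCS recipe:)

        import requests  # third-party: pip install requests

        token = "ya29.example-access-token"  # hypothetical; obtain from your service-account credential
        bucket = "bucket-name"               # placeholder bucket from the error message

        # Authorization belongs in the HTTP headers, not in the multipart body.
        headers = {"Authorization": "OAuth " + token}

        # Policy/form fields of the POST-object request go in the body...
        data = {"key": "fileName", "success_action_redirect": "/storage"}

        # ...and the file itself is the one true multipart part.
        with open("path/to/file.xml", "rb") as fh:
            files = {"file": ("file.xml", fh, "text/xml")}
            resp = requests.post(f"https://{bucket}.storage.googleapis.com",
                                 headers=headers, data=data, files=files)

        print(resp.status_code, resp.text)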

  • UCA + Natural Sorting

    - by Alix Axel
    I recently learnt that PHP already supports the Unicode Collation Algorithm via the intl extension:

        $array = array(
            'al', 'be', 'Alpha', 'Beta', 'Álpha', 'Àlpha', 'Älpha', '????',
            'img10.png', 'img12.png', 'img1.png', 'img2.png',
        );

        if (extension_loaded('intl') === true) {
            collator_asort(collator_create('root'), $array);
        }

        Array
        (
            [0] => al
            [2] => Alpha
            [4] => Álpha
            [5] => Àlpha
            [6] => Älpha
            [1] => be
            [3] => Beta
            [11] => img1.png
            [9] => img10.png
            [8] => img12.png
            [10] => img2.png
            [7] => ????
        )

    As you can see, this seems to work perfectly, even with mixed-case strings! The only drawback I've encountered so far is that there is no support for natural sorting, and I'm wondering what would be the best way to work around that, so that I can merge the best of the two worlds. I've tried to specify the Collator::SORT_NUMERIC sort flag, but the result is way messier:

        collator_asort(collator_create('root'), $array, Collator::SORT_NUMERIC);

        Array
        (
            [8] => img12.png
            [7] => ????
            [9] => img10.png
            [10] => img2.png
            [11] => img1.png
            [6] => Älpha
            [5] => Àlpha
            [1] => be
            [2] => Alpha
            [3] => Beta
            [4] => Álpha
            [0] => al
        )

    However, if I run the same test with only the img*.png values, I get the ideal output:

        Array
        (
            [3] => img1.png
            [2] => img2.png
            [1] => img10.png
            [0] => img12.png
        )

    Can anyone think of a way to preserve the Unicode sorting while adding natural sorting capabilities?
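
    (Not PHP, but the shape of one workaround is easy to show: build a sort key that splits each string into digit and non-digit runs, compares the digit runs numerically, and folds accents and case on the text runs. A rough Python sketch of that idea — the unicodedata-based folding is a crude stand-in for a real UCA collator such as ICU's:)

        import re
        import unicodedata

        def natural_uca_key(s):
            """Split into (text, number) chunks; fold accents/case on the text chunks."""
            def fold(chunk):
                # Strip combining marks after NFKD decomposition: 'Álpha' -> 'alpha'.
                decomposed = unicodedata.normalize('NFKD', chunk.casefold())
                return ''.join(c for c in decomposed if not unicodedata.combining(c))
            return [int(chunk) if chunk.isdigit() else fold(chunk)
                    for chunk in re.split(r'(\d+)', s)]

        items = ['al', 'be', 'Alpha', 'Beta', 'Álpha', 'Àlpha', 'Älpha',
                 'img10.png', 'img12.png', 'img1.png', 'img2.png']
        print(sorted(items, key=natural_uca_key))
        # ['al', 'Alpha', 'Álpha', 'Àlpha', 'Älpha', 'be', 'Beta',
        #  'img1.png', 'img2.png', 'img10.png', 'img12.png']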

  • MySQL - Calculating fields on the fly vs storing calculated data

    - by Christian Varga
    Hi everyone, I apologise if this has been asked before, but I can't seem to find an answer to a question I have about calculating on the fly vs. storing calculated fields in a database. I read a few articles that suggested it was preferable to calculate when you can, but I would just like to know if that still applies to the following two examples.

    Example 1. Say you are storing data relating to a car. You store the fuel tank size in litres, and how many litres it uses per 100 km. You also want to know how many km it can travel, which can be calculated from the tank size and economy. I see two ways of doing this:

    1. When a car is added or updated, calculate the number of km and store this as a static field in the database.
    2. Every time a car is accessed, calculate the number of km on the fly.

    Because the car's economy/tank size doesn't change (although it could be edited), the km figure is a pretty static value. I don't see why we would calculate it every single time the car is accessed. Wouldn't this waste CPU time, as opposed to simply storing it in a separate field in the database and calculating only when a car is added or updated?

    My next example, which is almost an entirely different question (but on the same topic), relates to counting children. Let's say we have an app which has categories and items. We have a view where we display all the categories, and a count of all the items inside each category. Again, I'm wondering what's better: to perform a MySQL query to count all the items in each category every single time the page is accessed, or to store the count in a field in the categories table and update it when an item is added or deleted?

    I know it is redundant to store anything that can be calculated, but I worry that calculating fields or counting records might be slow, as opposed to storing the data in a field. If it's not, then please let me know; I just want to learn when to use either method. On a small scale I guess it wouldn't matter either way, but apps like Facebook — would they really count the number of friends you have every time someone views your profile, or would they just store it as a field?

    I'd appreciate any responses to both of these scenarios, and any resource that might explain the benefits of calculating vs. storing. Thanks in advance, Christian
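
    (For what the two options in Example 1 look like side by side: a sketch in Python rather than SQL, with invented field names, computing the car's range either on every read or once per write. Either way the arithmetic is the same — a 60 L tank at 6 L/100 km gives 1000 km.)

        class CarComputed:
            """Option 2: derive the range on every access."""
            def __init__(self, tank_litres, litres_per_100km):
                self.tank_litres = tank_litres
                self.litres_per_100km = litres_per_100km

            @property
            def range_km(self):
                return self.tank_litres / self.litres_per_100km * 100

        class CarStored:
            """Option 1: recompute only when the inputs change."""
            def __init__(self, tank_litres, litres_per_100km):
                self.update(tank_litres, litres_per_100km)

            def update(self, tank_litres, litres_per_100km):
                self.tank_litres = tank_litres
                self.litres_per_100km = litres_per_100km
                self.range_km = tank_litres / litres_per_100km * 100  # stored, read for free

        print(CarComputed(60, 6).range_km)  # 1000.0
        print(CarStored(60, 6).range_km)    # 1000.0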

  • CTE Join query issues

    - by Lee_McIntosh
    Hi everyone, this problem has my head going round in circles at the moment, and I'm wondering if anyone could give any pointers as to where I'm going wrong. I'm trying to produce a sproc that produces a dataset to be called by SSRS for graphs spanning the last 6 months. The data for example purposes uses three tables (there are more, but it won't change the issue at hand), and they are as follows:

        tbl_ReportList:
        Report    Site
        ----------------
        North     abc
        North     def
        East      bbb
        East      ccc
        East      ddd
        South     poa
        South     pob
        South     poc
        South     pod
        West      xyz

        tbl_TicketsRaisedThisMonth:
        Date                       Site    Type        NoOfTickets
        ---------------------------------------------------------
        2010-07-01 00:00:00.000    abc     Support     101
        2010-07-01 00:00:00.000    abc     Complaint   21
        2010-07-01 00:00:00.000    def     Support     6
        ...
        2010-12-01 00:00:00.000    abc     Support     93
        2010-12-01 00:00:00.000    xyz     Support     5

        tbl_FeedBackRequests:
        Date                       Site    NoOfFeedBackR
        ------------------------------------------------
        2010-07-01 00:00:00.000    abc     101
        2010-07-01 00:00:00.000    def     11
        ...
        2010-12-01 00:00:00.000    abc     63
        2010-12-01 00:00:00.000    xyz     4

    I'm using CTEs to simplify the code, which is as follows:

        DECLARE @ReportName VarChar(200)
        SET @ReportName = 'North';

        WITH TicketsRaisedThisMonth AS
        (
            SELECT [Date], Site, SUM(NoOfTickets) AS NoOfTickets
            FROM tbl_TicketsRaisedThisMonth
            WHERE [Date] >= DATEADD(mm, DATEDIFF(m, 0, GETDATE()) - 6, 0)
            GROUP BY [Date], Site
        ),
        FeedBackRequests AS
        (
            SELECT [Date], Site, SUM(NoOfFeedBackR) AS NoOfFeedBackR
            FROM tbl_FeedBackRequests
            WHERE [Date] >= DATEADD(mm, DATEDIFF(m, 0, GETDATE()) - 6, 0)
            GROUP BY [Date], Site
        )
        SELECT trtm.[Date],
               SUM(trtm.NoOfTickets) AS NoOfTickets,
               SUM(fbr.NoOfFeedBackR) AS NoOfFeedBackR
        FROM tbl_ReportList rpts
        LEFT OUTER JOIN TicketsRaisedThisMonth trtm ON rpts.Site = trtm.Site
        LEFT OUTER JOIN FeedBackRequests fbr ON rpts.Site = fbr.Site
        WHERE rpts.Report = @ReportName
        GROUP BY trtm.[Date]

    And the output, when the sproc is passed a parameter such as 'North', should be as follows:

        Date                       NoOfTickets              NoOfFeedBackR
        -----------------------------------------------------------------
        2010-07-01 00:00:00.000    128                      112
        2010-08-01 00:00:00.000    <data for that month>    <data for that month>
        2010-09-01 00:00:00.000    <data for that month>    <data for that month>
        2010-10-01 00:00:00.000    <data for that month>    <data for that month>
        2010-11-01 00:00:00.000    <data for that month>    <data for that month>
        2010-12-01 00:00:00.000    122                      63

    The issue I'm having is that when I execute the query, I'm given a repeated list of values for each month: 128 will repeat 6 times, then another value for the next month's value repeated 6 times, etc. Argh!

  • Methodology for understanding jQuery plugins & APIs developed by third parties

    - by Taoist
    I have a question about third-party JQuery plugins and APIs, and the methodology for understanding them. Recently I downloaded the JQuery Masonry/Infinite Scroll plugin and I couldn't figure out how to configure it based on the instructions. So I downloaded a fully developed demo, then manually deleted everything that wouldn't break the functionality. The code that was left allowed me to understand the plugin in much greater detail than the documentation.

    I'm now having a similar issue with a plugin called JQuery Knob: http://anthonyterrien.com/knob/

    If you look at the JQuery Knob readme file, it says this is working code:

        <input type="text" value="75" class="dial">

        $(function() {
            $('.dial').trigger('configure', {
                "min": 10,
                "max": 40,
                "fgColor": "#FF0000",
                "skin": "tron",
                "cursor": true
            });
        });

    But as far as I can tell, it isn't at all. The readme also says the plugin uses canvas. I am wondering if I am supposed to wrap this code in a canvas context, or if this functionality is already part of the plugin. I know this kind of "question" might not fit in here, but I'm a bit confused about the assumptions around reading these kinds of documentation and thought I would post the query regardless. Curious to see if this is due to my "newbie" programming experience or if this is something seasoned coders also fight with. Thank you.

    Edit

    In response to Tyanna's reply: I modified the code and it still doesn't work. I posted it below. I made sure that I checked the Google console to ensure the basics were taken care of, such as not getting a read error on the library.

        <!DOCTYPE html>
        <meta charset="UTF-8">
        <title>knob</title>
        <link rel="stylesheet" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/themes/hot-sneaks/jquery-ui.css" type="text/css" />
        <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.js" charset="utf-8"></script>
        <script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.8.21/jquery-ui.min.js"></script>
        <script src="js/jquery.knob.js"></script>

        <div id="button1">test</div>

        <script>
            $(function() {
                $("#button1").click(function () {
                    $('.dial').trigger('configure', {
                        "min": 10,
                        "max": 40,
                        "fgColor": "#FF0000",
                        "skin": "tron",
                        "cursor": true
                    });
                });
            });
        </script>

  • How to properly preload images, JS and CSS files?

    - by Kenny Bones
    Hi, I'm creating a website from scratch. I was really into this in the late 90's, but the web has changed a lot since then! And I'm more of a designer, so when I started putting this site together, I basically did a system of PHP includes to make the site more "dynamic".

    When you first visit the site, you'll be presented with a logon screen, if you're not already logged on (cookies). If you're not logged on, a page called access.php is introduced. I thought I'd preload the heaviest images at this point, so that when the user is done logging on, the images are already cached. And this is working as I want. But I still notice that the biggest image still isn't rendered immediately anyway. So it seems kind of pointless.

    All of this has made me rethink how the site is structured and how scripts and CSS files are loaded. Using Firebug and YSlow with Firefox, I see a few pointers like expires headers and reducing the size of each script. But is this really the culprit? For example, would this be really really stupid in the main index.php? The entire site is basically structured like this:

        <?php require("dbconnect.php"); ?>
        <?php include ("head.php"); ?>

    And below this is basically just the body and the content of the site. head.php, however, consists of the doctype, the head portions, the linking of two CSS style sheets, the jQuery library, the jQuery validation engine, Cufon and a Cufon font file, and then the small Cufon.Replace snippet. The rest of the body comes with the index.php file, but at the bottom of this again is an include of a file called footer.php, which basically consists of the loading of a couple of jsLoader scripts and a slidepanel, and then a JS function. All of this makes the end page source look like a typical complete web page, but I'm wondering if any of you can see immediately that "this is really really stupid" and "don't do that, do this instead" etc. :) Are includes a bad way to go?

    This site is also pretty image intensive and I can probably do a little more optimization. But I don't think that's the primary culprit. YSlow gives me a report of what takes up the most space:

        doc(1)      - 5.8K
        js(5)       - 198.7K
        css(2)      - 5.6K
        cssimage(8) - 634.7K
        image(6)    - 110.8K

    I know it looks like it's cssimage(8) that weighs the most, but I've already preloaded these images from before, and it doesn't really affect the rendering.

  • Why does std::map operator[] create an object if the key doesn't exist?

    - by n1ck
    Hi, I'm pretty sure I already saw this question somewhere (comp.lang.c++? Google doesn't seem to find it there either), but a quick search here doesn't seem to find it, so here it is:

    Why does the std::map operator[] create an object if the key doesn't exist? I don't know, but for me this seems counter-intuitive if you compare it to most other operator[]s (like std::vector's), where if you use it you must be sure that the index exists. I'm wondering what the rationale is for implementing this behavior in std::map. Like I said, wouldn't it be more intuitive to act more like an index in a vector and crash (well, undefined behavior, I guess) when accessed with an invalid key?

    Refining my question after seeing the answers:

    OK, so far I've got a lot of answers saying, basically, "it's cheap, so why not", or things similar. I totally agree with that, but why not use a dedicated function for it (I think one of the comments said that in Java there is no operator[] and the function is called put)? My point is: why doesn't map's operator[] work like a vector's? If I use operator[] on an out-of-range index on a vector, I wouldn't like it to insert an element even if it was cheap, because that probably means an error in my code. My point is, why isn't it the same thing with map? I mean, for me, using operator[] on a map would mean: I know this key already exists (for whatever reason: I just inserted it, I have redundancy somewhere, whatever). I think it would be more intuitive that way.

    That said, what are the advantages of the current behavior of operator[] (and only for that; I agree that a function with the current behavior should be there, just not operator[])? Maybe it gives clearer code that way? I don't know. Another answer was that it already existed that way, so why not keep it? But then, presumably, when they (the ones before the STL) chose to implement it that way, they found it provided an advantage or something? So my question is basically: why choose to implement it that way, meaning a somewhat lack of consistency with other operator[]s? What benefit does it give? Thanks
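
    (The split the poster asks for exists verbatim elsewhere, which makes it easy to illustrate by analogy — this is Python, not a claim about the C++ committee's rationale. The plain dict is the "missing key is an error" contract; collections.defaultdict reproduces std::map's insert-on-access:)

        from collections import defaultdict

        plain = {'a': 1}
        try:
            plain['missing']      # the std::vector-style contract: a missing key is an error
        except KeyError as e:
            print('KeyError:', e)

        auto = defaultdict(int)   # the std::map-style contract: value-initialize on access
        auto['missing'] += 1      # silently creates the entry with int() == 0, then increments
        print(dict(auto))         # {'missing': 1}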

  • PHP - JSON Steam API query

    - by Hunter
    First time using "JSON": I've just been working away at my dissertation, and I'm integrating a few features from the Steam API. Now I'm a little bit confused as to how to create arrays.

        function test_steamAPI() {
            $api = ('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key='.get_Steam_api().'&steamids=76561197960435530');
            $test = decode_url($api);
            var_dump($test['response']['players'][0]['personaname']['steamid']);
        }

        //Function to decode and return the data.
        function decode_url($url) {
            $decodeURL = $url;
            $data = file_get_contents($url);
            $data_output = json_decode($data, true);
            return $data_output;
        }

    So I've written a simple method to decode JSON, as I'll be doing a fair bit of it. But I'm just wondering the best way to print out arrays. I can't for the life of me get it to print more than one element without it returning an error, e.g.:

        Warning: Illegal string offset 'steamid' in /opt/lampp/htdocs/lan/lan-includes/scripts/class.steam.php on line 48 string(1) "R"

    So I can print one element, and if I add another it returns errors.

    EDIT -- Thanks for the help. This was my solution:

        function test_steamAPI() {
            $api = ('http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/?key='.get_Steam_api().'&steamids=76561197960435530,76561197960435530');
            $data = decode_url($api);

            foreach ($data['response']['players'] as $player) {
                echo "Steam id:" . $player['steamid'] . "\n";
                echo "Community visibility :" . $player['communityvisibilitystate'] . "\n";
                echo "Player profile" . $player['profileurl'] . "\n";
            }
        }

        //Function to decode and return the data.
        function decode_url($url) {
            $decodeURL = $url;
            $json = file_get_contents($decodeURL);
            $data_output = json_decode($json, true);
            return $data_output;
        }

    Worked this out by taking a look at the data and a couple of JSON examples. This returns an array based on the Steam API URL (it works for multiple queries, just FYI), and you can insert loops inside for items etc. (if anyone searches for this).
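
    (For comparison, the same GetPlayerSummaries call needs nothing beyond the standard library in Python; the endpoint and field names are the ones quoted in the post, and the API key is a placeholder:)

        import json
        import urllib.request

        API_KEY = "YOUR_STEAM_API_KEY"  # placeholder
        steam_ids = "76561197960435530,76561197960435530"
        url = ("http://api.steampowered.com/ISteamUser/GetPlayerSummaries/v0002/"
               f"?key={API_KEY}&steamids={steam_ids}")

        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)  # same shape as PHP's json_decode($json, true)

        for player in data["response"]["players"]:
            print("Steam id:", player["steamid"])
            print("Community visibility:", player["communityvisibilitystate"])
            print("Player profile:", player["profileurl"])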

  • CSS selectors: should I minimise my use of the class attribute in the HTML, or optimise for speed?

    - by Laurent Bourgault-Roy
    As I was working on a small website, I decided to use the PageSpeed extension to check if there was some improvement I could make to get the site to load faster. However, I was quite surprised when it told me that my use of CSS selectors was "inefficient". I was always told that you should keep the usage of the class attribute in the HTML to a minimum, but if I understand correctly what PageSpeed tells me, it's much more efficient for the browser to match directly against a class name. It makes sense to me, but it also means that I need to put more CSS classes in my HTML. It also makes my .css file a little harder to read. I usually tend to mark up my CSS like this:

        #mainContent p.productDescription em.priceTag { ... }

    Which makes it easy to read: I know this will affect the main content, and that it affects something in a paragraph tag (so I won't start putting all sorts of layout code in it) that describes a product, and it's something that needs emphasis. However, it seems I should rewrite it as:

        .priceTag { ... }

    Which removes all context information from the style. And if I want to use a differently formatted price tag (for example, one in a list in the sidebar and one in a paragraph), I need to use something like:

        .paragraphPriceTag { ... }
        .listPriceTag { ... }

    Which really annoys me, since I seem to be duplicating the semantics of the HTML in my classes. And that means I can't put common style in an unqualified .priceTag { ... }, and thus I need to replicate the style in both CSS rules, making it harder to make changes. (Although for that I could use multiple-class selectors, but IE6 doesn't support them.)

    I believe making code harder to read for the sake of speed has never really been considered a very good practice, except where it is critical, of course. This is why people use PHP/Ruby/C# etc. instead of C/assembly to code their sites: it's easier to write and debug. So I was wondering: should I stick with few CSS classes and complex selectors, or should I go the optimisation route and remove my fancy CSS selectors for the sake of speed? Does PageSpeed make over-the-top recommendations? On most modern computers, will it even make a difference?

  • How do I handle the Maybe result of at in Control.Lens.Indexed without a Monoid instance

    - by Matthias Hörmann
    I recently discovered the lens package on Hackage and have been trying to make use of it now in a small test project that might turn into a MUD/MUSH server one very distant day if I keep working on it. Here is a minimized version of my code illustrating the problem I am facing right now with the at lenses used to access key/value containers (Data.Map.Strict in my case):

        {-# LANGUAGE OverloadedStrings, GeneralizedNewtypeDeriving, TemplateHaskell #-}

        module World where

        import Control.Applicative ((<$>), (<*>), pure)
        import Control.Lens
        import Data.Map.Strict (Map)
        import qualified Data.Map.Strict as DM
        import Data.Maybe
        import Data.UUID
        import Data.Text (Text)
        import qualified Data.Text as T
        import System.Random (Random, randomIO)

        newtype RoomId = RoomId UUID
            deriving (Eq, Ord, Show, Read, Random)

        newtype PlayerId = PlayerId UUID
            deriving (Eq, Ord, Show, Read, Random)

        data Room = Room
            { _roomId          :: RoomId
            , _roomName        :: Text
            , _roomDescription :: Text
            , _roomPlayers     :: [PlayerId]
            } deriving (Eq, Ord, Show, Read)
        makeLenses ''Room

        data Player = Player
            { _playerId          :: PlayerId
            , _playerDisplayName :: Text
            , _playerLocation    :: RoomId
            } deriving (Eq, Ord, Show, Read)
        makeLenses ''Player

        data World = World
            { _worldRooms   :: Map RoomId Room
            , _worldPlayers :: Map PlayerId Player
            } deriving (Eq, Ord, Show, Read)
        makeLenses ''World

        mkWorld :: IO World
        mkWorld = do
            r1 <- Room <$> randomIO
                       <*> (pure "The Singularity")
                       <*> (pure "You are standing in the only place in the whole world")
                       <*> (pure [])
            p1 <- Player <$> randomIO
                         <*> (pure "testplayer1")
                         <*> (pure $ r1^.roomId)
            let rooms   = at (r1^.roomId)   ?~ (set roomPlayers [p1^.playerId] r1) $ DM.empty
                players = at (p1^.playerId) ?~ p1 $ DM.empty
             in do return $ World rooms players

        viewPlayerLocation :: World -> PlayerId -> RoomId
        viewPlayerLocation world playerId =
            view (worldPlayers.at playerId.traverse.playerLocation) world

    Since rooms, players and similar objects are referenced all over the code, I store them in my World state type as maps of ids (newtyped UUIDs) to their data objects. To retrieve those with lenses, I need to somehow handle the Maybe returned by the at lens (in case the key is not in the map, this is Nothing). In my last line I tried to do this via traverse, which does typecheck as long as the final result is an instance of Monoid, but this is not generally the case. Right here it is not, because playerLocation returns a RoomId, which has no Monoid instance:

        No instance for (Data.Monoid.Monoid RoomId)
          arising from a use of `traverse'
        Possible fix: add an instance declaration for (Data.Monoid.Monoid RoomId)
        In the first argument of `(.)', namely `traverse'
        In the second argument of `(.)', namely `traverse . playerLocation'
        In the second argument of `(.)', namely
          `at playerId . traverse . playerLocation'

    Since the Monoid is required by traverse only because traverse generalizes to containers of sizes greater than one, I am now wondering if there is a better way to handle this that does not require semantically nonsensical Monoid instances on all the types possibly contained in one of the objects I want to store in the map. Or maybe I misunderstood the issue here completely, and I need to use a completely different bit of the rather large lens package?

  • Building a list of child objects inside a main object

    - by Asdfg
    I have two tables like this:

        Category:
        Id    Name
        ------------------
        1     Cat1
        2     Cat2

        Feature:
        Id    Name    CategoryId
        --------------------------------
        1     F1      1
        2     F2      1
        3     F3      2
        4     F4      2
        5     F5      2

    In my .NET code, I have two POCO classes like this:

        public class Category
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public IList<Feature> Features { get; set; }
        }

        public class Feature
        {
            public int Id { get; set; }
            public int CategoryId { get; set; }
            public string Name { get; set; }
        }

    I am using a stored proc that returns a result set by joining these two tables:

        SELECT c.CategoryId, c.Name CategoryName, f.FeatureId, f.Name FeatureName
        FROM Category c
        INNER JOIN Feature f ON c.CategoryId = f.CategoryId
        ORDER BY c.Name

        --Resultset produced by the above query
        CategoryId    CategoryName    FeatureId    FeatureName
        ------------------------------------------------------
        1             Cat1            1            F1
        1             Cat1            2            F2
        2             Cat2            3            F3
        2             Cat2            4            F4
        2             Cat2            5            F5

    Now, if I want to build the list of categories in my .NET code, I have to loop through the result set and add features until the category changes. This is how my .NET code that builds the categories and features looks:

        List<Category> categories = new List<Category>();
        Int32 lastCategoryId = 0;
        Category c = new Category();

        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                //Check if the category id is the same as the previous one.
                //If not, add a new category; if yes, don't add the category.
                if (lastCategoryId != Convert.ToInt32(reader["CategoryId"]))
                {
                    c = new Category
                    {
                        Id = Convert.ToInt32(reader["CategoryId"]),
                        Name = reader["CategoryName"].ToString()
                    };
                    c.Features = new List<Feature>();
                    categories.Add(c);
                }
                lastCategoryId = Convert.ToInt32(reader["CategoryId"]);

                //Add feature
                c.Features.Add(new Feature
                {
                    Name = reader["FeatureName"].ToString(),
                    Id = Convert.ToInt32(reader["FeatureId"])
                });
            }
            return categories;
        }

    I was wondering if there is a better way to build the list of categories?
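
    (The "loop and watch for the key to change" pattern has a standard name: grouping an ordered join result by its parent key. A sketch of the same idea in Python with itertools.groupby, using rows shaped like the result set above — note the input must stay ordered by the grouping key for this to work:)

        from itertools import groupby
        from operator import itemgetter

        # (CategoryId, CategoryName, FeatureId, FeatureName), ordered by CategoryId
        rows = [
            (1, 'Cat1', 1, 'F1'),
            (1, 'Cat1', 2, 'F2'),
            (2, 'Cat2', 3, 'F3'),
            (2, 'Cat2', 4, 'F4'),
            (2, 'Cat2', 5, 'F5'),
        ]

        categories = []
        for (cat_id, cat_name), features in groupby(rows, key=itemgetter(0, 1)):
            categories.append({
                'Id': cat_id,
                'Name': cat_name,
                'Features': [{'Id': fid, 'Name': fname} for _, _, fid, fname in features],
            })

        print(categories)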

  • Strange behavior of std::cout & operator<<...

    - by themoondothshine
    Hey ppl, I came across something weird today, and I was wondering if any of you here could explain what's happening. Here's a sample:

        #include <iostream>
        #include <cassert>
        using namespace std;

        #define REQUIRE_STRING(s) assert(s != 0)
        #define REQUIRE_STRING_LEN(s, n) assert(s != 0 || n == 0)

        class String {
        public:
            String(const char *str, size_t len)
                : __data(__construct(str, len)), __len(len) {}

            ~String() { __destroy(__data); }

            const char *toString() const {
                return const_cast<const char *>(__data);
            }

            String &toUpper() {
                REQUIRE_STRING_LEN(__data, __len);
                char *it = __data;
                while (it < __data + __len) {
                    if (*it >= 'a' && *it <= 'z')
                        *it -= 32;
                    ++it;
                }
                return *this;
            }

            String &toLower() {
                REQUIRE_STRING_LEN(__data, __len);
                char *it = __data;
                while (it < __data + __len) {
                    if (*it >= 'A' && *it <= 'Z')
                        *it += 32;
                    ++it;
                }
                return *this;
            }

        private:
            char *__data;
            size_t __len;

        protected:
            static char *__construct(const char *str, size_t len) {
                REQUIRE_STRING_LEN(str, len);
                char *data = new char[len];
                std::copy(str, str + len, data);
                return data;
            }

            static void __destroy(char *data) {
                REQUIRE_STRING(data);
                delete[] data;
            }
        };

        int main() {
            String s("Hello world!", __builtin_strlen("Hello world!"));
            cout << s.toLower().toString() << endl;
            cout << s.toUpper().toString() << endl;
            cout << s.toLower().toString() << endl << s.toUpper().toString() << endl;
            return 0;
        }

    Now, I had expected the output to be:

        hello world!
        HELLO WORLD!
        hello world!
        HELLO WORLD!

    but instead I got this:

        hello world!
        HELLO WORLD!
        hello world!
        hello world!

    I can't really understand why the second toUpper didn't have any effect.

  • Filter Facebook Stream by Post privacy?

    - by fabian
    Hi there, I query some wall data within my Facebook tab. I was wondering how to filter the data (query) so as to show only posts which are visible to a certain country.

        $query = "
            SELECT post_id, created_time, attachment, action_links, privacy
            FROM stream
            WHERE source_id = ".$page_id."
              AND viewer_id = ".$user_id."
              AND actor_id = ".$actor_id."
            LIMIT 50";

    The output already shows Australia, but how do I filter for Australia only?

        Array
        (
            [posts] => Array
                (
                    [0] => Array
                        (
                            [post_id] => 123
                            [viewer_id] => 123
                            [source_id] => 123
                            [type] => 46
                            [app_id] =>
                            [attribution] =>
                            [actor_id] => 123
                            [target_id] =>
                            [message] => Only for Austria
                            [attachment] => Array
                                (
                                    [description] =>
                                )
                            [app_data] =>
                            [action_links] =>
                            [comments] => Array
                                (
                                    [can_remove] => 1
                                    [can_post] => 1
                                    [count] => 0
                                    [comment_list] =>
                                )
                            [likes] => Array
                                (
                                    [href] => http://www.facebook.com/social_graph.php?node_id=118229678189906&class=LikeManager
                                    [count] => 0
                                    [sample] =>
                                    [friends] =>
                                    [user_likes] => 0
                                    [can_like] => 1
                                )
                            [privacy] => Array
                                (
                                    [description] => Austria
                                    [value] => CUSTOM
                                    [friends] =>
                                    [networks] =>
                                    [allow] =>
                                    [deny] =>
                                )
                            [updated_time] => 1271520716
                            [created_time] => 1271520716
                            [tagged_ids] =>
                            [is_hidden] => 0
                            [filter_key] =>
                            [permalink] => http://www.facebook.com/pages/
                        )
                )
        )

  • HttpWebRequest possibly slowing website

    - by Steven Smith
    Using Visual Studio 2012, C# .NET 4.5, SQL Server 2008, Feefo, nopCommerce.

    Hey guys, I recently implemented a new review service into a current site we have. When the change went live, the first day everything worked fine. Since then, though, the sending of sales to Feefo hasn't been working, and there are no logs of anything going wrong either. In OrderProcessingService.cs in nopCommerce's services, I make an HttpWebRequest when an order has been confirmed as completed. Here is the code:

        var email = HttpUtility.UrlEncode(order.Customer.Email.ToString());
        var name = HttpUtility.UrlEncode(order.Customer.GetFullName().ToString());
        var description = HttpUtility.UrlEncode(productVariant.ProductVariant.Product.MetaDescription != null ? productVariant.ProductVariant.Product.MetaDescription.ToString() : "product");
        var orderRef = HttpUtility.UrlEncode(order.Id.ToString());
        var productLink = HttpUtility.UrlEncode(string.Format("myurl/p/{0}/{1}", productVariant.ProductVariant.ProductId, productVariant.ProductVariant.Name.Replace(" ", "-")));

        string itemRef = "";
        try
        {
            itemRef = HttpUtility.UrlEncode(productVariant.ProductVariant.ProductId.ToString());
        }
        catch
        {
            itemRef = "0";
        }

        var url = string.Format("feefo Url", login, password, email, name, description, orderRef, productLink, itemRef);
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.KeepAlive = false;
        request.Timeout = 5000;
        request.Proxy = null;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            if (response.StatusDescription == "OK")
            {
                var stream = response.GetResponseStream();
                if (stream != null)
                {
                    using (var reader = new StreamReader(stream))
                    {
                        var content = reader.ReadToEnd();
                    }
                }
            }
        }

    So, as you can see, it's a simple web request that is processed on an order, and all product variants are sent to Feefo. Now, this hasn't been happening all week since the 15th (the day of the implementation), and the site has been grinding to a halt recently. (The stream and reader around var content are there for debugging.) I'm wondering: does the code red-flag anything to you that could relate to the performance of the website? Also note that I have run some SQL statements to see if there are any deadlocks or large escalations; so far that seems fine, and the logs are also fine, just the usual logging of bots. Any help would be much appreciated!

    EDIT: Also note that this code is in a method that is called and wrapped in a try-catch.

    UPDATE: Well, forget about the "not sending" — that's because I was just told my code was rolled back last week.

  • How can a C/C++ program put itself into background?

    - by Larry Gritz
    What's the best way for a running C or C++ program that's been launched from the command line to put itself into the background, equivalent to if the user had launched it from the unix shell with '&' at the end of the command? (But the user didn't.) It's a GUI app and doesn't need any shell I/O, so there's no reason to tie up the shell after launch. But I want a shell command launch to be auto-backgrounded without the '&' (or on Windows).

    Ideally, I want a solution that would work on any of Linux, OS X, and Windows. (Or separate solutions that I can select with #ifdef.) It's ok to assume that this should be done right at the beginning of execution, as opposed to somewhere in the middle.

    One solution is to have the main program be a script that launches the real binary, carefully putting it into the background. But it seems unsatisfying to need these coupled shell/binary pairs.

    Another solution is to immediately launch another executed version (with 'system' or CreateProcess), with the same command line arguments, but putting the child in the background and then having the parent exit. But this seems clunky compared to the process putting itself into the background.

    Edited after a few answers: Yes, a fork() (or system(), or CreateProcess on Windows) is one way to sort of do this, that I hinted at in my original question. But all of these solutions make a SECOND process that is backgrounded, and then terminate the original process. I was wondering if there was a way to put the EXISTING process into the background. One difference is that if the app was launched from a script that recorded its process id (perhaps for later killing or another purpose), the newly forked or created process will have a different id, and so will not be controllable by any launching script, if you see what I'm getting at.

    Edit #2: fork() isn't a good solution for OS X, where the man page for 'fork' says that it's unsafe if certain frameworks or libraries are being used. I tried it, and my app complains loudly at runtime: "The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec()." I was intrigued by daemon(), but when I tried it on OS X, it gave the same error message, so I assume that it's just a fancy wrapper for fork() and has the same restrictions. Excuse the OS X centrism; it just happens to be the system in front of me at the moment. But I am indeed looking for a solution for all three platforms.
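
    (The fork-then-exit dance under discussion is easy to show concretely. A sketch in Python, whose os module exposes the same POSIX calls a C program would use — the classic double fork. Note it has exactly the drawbacks raised above: it produces a process with a new pid, and it is not safe on OS X once GUI frameworks are initialized.)

        import os
        import sys

        def daemonize():
            """Detach from the controlling terminal via the classic double fork."""
            if os.fork() > 0:
                sys.exit(0)   # parent returns to the shell immediately
            os.setsid()       # become session leader, drop the controlling tty
            if os.fork() > 0:
                sys.exit(0)   # first child exits; grandchild can never reacquire a tty
            os.chdir('/')     # don't pin the launch directory
            # A GUI app would now hand control to its event loop.

        daemonize()
        print('running in the background, pid', os.getpid())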

  • Make text in a <div> wrap around a child element.

    - by John
    In Word you can place an image on a page and have the text flow nicely around it. I was wondering how far one can get towards this using CSS, noting that it has to work in IE6. I already have something sort of close using float, but the floated child element still 'blocks' text above it, so it only partially wraps. Is it possible to put a child div at some arbitrary position in the parent and have text flow around it freely?

    The actual use case here is to put illustrations inside the main content div, where each illustration is implemented inside a child div. I repeat, it has to work on IE6. And I don't want to get too involved in browser-specific hacks... floating the child at least works on IE6 with no tweaking. Currently I have it like this:

        <div>
            <div class="illustration">
                <img src="image1.png" />
                <p>Illustration caption</p>
            </div>
            <p>Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam nonumy
            eirmod tempor invidunt ut labore et dolore magna aliquyam erat, sed diam
            voluptua. Atvero eos et accusam et justo duo dolores et ea rebum. Stet clita
            kasd gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.</p>
        </div>

        div.illustration {
            float: right;
            border-top: 1px solid #505050;
            border-left: 1px solid #505050;
            border-right: 1px solid #505050;
            border-bottom: 1px solid #505050;
            margin-right: 30px;
            margin-top: 100px;
            text-align: center;
            padding: 2px;
            background: #96C3FF;
        }

        div.illustration p {
            margin: 0;
            font-size: small;
            font-style: italic;
            padding: 0;
        }

  • Incompatible library when creating a new project with Aptana

    - by Phil Rice
    I am a Ruby and Rails newbie, so my ability to debug this is somewhat limited. I have just added the Eclipse plugin, which failed, then downloaded the latest Aptana Studio, which also failed. The failure was the same in both cases: when I create a new Rails project, I get an error message about an incompatible library version, "C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/http11.so". The project is actually created, along with directories and files. Google searches around this error message have only returned a couple of hits, which were not very helpful.

    I am wondering if this is about 64-bit libraries. My software stack is:

        Windows 7 Home Premium 64-bit
        Aptana RadRails, build: 2.0.5.1278709071
        Ruby 1.9.3
        gem 1.8.24

    The console shows:

        "4320"
        C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': iconv will be deprecated in the future, use String#encode instead.
        C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': incompatible library version - C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/http11.so (LoadError)
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `block in require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/mongrel.rb:12:in `<top (required)>'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:60:in `require'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:60:in `rescue in require'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:35:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `block in require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler/mongrel.rb:1:in `<top (required)>'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `const_get'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `block in get'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `each'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `get'
            from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rails-2.3.4/lib/commands/server.rb:45:in `<top (required)>'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
            from script/server:3:in `<top (required)>'
            from -e:2:in `load'
            from -e:2:in `<main>'

  • Need Google Map InfoWindow Hyperlink to Open Content in Overlay (Fusion Table Usage)

    - by McKev
    I have the following code established to render the map in my site. When the map is clicked, the info window pops up with a bunch of content, including a hyperlink to open up a website with a form in it. I would like to utilize a function like fancybox to open up this link "form" in an overlay. I have read that fancybox doesn't support calling the function from within an iframe, and was wondering if there was a way to pass the link data to the DOM and trigger the fancybox (or another overlay option) in another way? Maybe a callback trick — any tips would be much appreciated!

        <style>
            #map-canvas { width: 850px; height: 600px; }
        </style>
        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script>
        <script src="http://gmaps-utility-gis.googlecode.com/svn/trunk/fusiontips/src/fusiontips.js" type="text/javascript"></script>
        <script type="text/javascript">
            var map;
            var tableid = "1nDFsxuYxr54viD_fuH7fGm1QRZRdcxFKbSwwRjk";
            var layer;
            var initialLocation;
            var browserSupportFlag = new Boolean();
            var uscenter = new google.maps.LatLng(37.6970, -91.8096);

            function initialize() {
                map = new google.maps.Map(document.getElementById('map-canvas'), {
                    zoom: 4,
                    mapTypeId: google.maps.MapTypeId.ROADMAP
                });

                layer = new google.maps.FusionTablesLayer({
                    query: { select: "'Geometry'", from: tableid },
                    map: map
                });

                // http://gmaps-utility-gis.googlecode.com/svn/trunk/fusiontips/docs/reference.html
                layer.enableMapTips({
                    select: "'Contact Name','Contact Title','Contact Location','Contact Phone'",
                    from: tableid,
                    geometryColumn: 'Geometry',
                    suppressMapTips: false,
                    delay: 500,
                    tolerance: 8
                });

                // Try W3C Geolocation (preferred)
                if (navigator.geolocation) {
                    browserSupportFlag = true;
                    navigator.geolocation.getCurrentPosition(function(position) {
                        initialLocation = new google.maps.LatLng(position.coords.latitude, position.coords.longitude);
                        map.setCenter(initialLocation);

                        // Custom marker
                        var pinColor = "A83C0A";
                        var pinImage = new google.maps.MarkerImage(
                            "http://chart.apis.google.com/chart?chst=d_map_pin_letter&chld=%E2%80%A2|" + pinColor,
                            new google.maps.Size(21, 34),
                            new google.maps.Point(0, 0),
                            new google.maps.Point(10, 34));
                        var pinShadow = new google.maps.MarkerImage(
                            "http://chart.apis.google.com/chart?chst=d_map_pin_shadow",
                            new google.maps.Size(40, 37),
                            new google.maps.Point(0, 0),
                            new google.maps.Point(12, 35));
                        new google.maps.Marker({
                            position: initialLocation,
                            map: map,
                            icon: pinImage,
                            shadow: pinShadow
                        });
                    }, function() {
                        handleNoGeolocation(browserSupportFlag);
                    });
                }
                // Browser doesn't support geolocation
                else {
                    browserSupportFlag = false;
                    handleNoGeolocation(browserSupportFlag);
                }

                function handleNoGeolocation(errorFlag) {
                    if (errorFlag == true) {
                        // Geolocation service failed
                        initialLocation = uscenter;
                    } else {
                        // Browser doesn't support geolocation
                        initialLocation = uscenter;
                    }
                    map.setCenter(initialLocation);
                }
            }

            google.maps.event.addDomListener(window, 'load', initialize);
        </script>

  • Why did File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images into a different location while querying each file's name to retrieve some relevant info and store it elsewhere.

    The script that used File::Find finished at 20,462 files. Out of curiosity I wrote a tiny recursive function to count the items, which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test.

    The problem itself can be solved differently, by querying for file names first instead of traversing the directory. I'm just wondering what could have caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system.

    Here is the meat of the script; I don't think including the DBI stuff would be relevant, since I reran the script with nothing but a counter in process_img(), which returned the same number.

        find(\&process_img, $path_from);

        sub process_img {
            eval {
                return if ($_ eq "." or $_ eq "..");
                ## Omitted querying and composing new paths for brevity.
                make_path("$path_to\\img\\$dir_area\\$dir_address\\$type");
                copy($File::Find::name, "$path_to\\img\\$dir_area\\$dir_address\\$type\\$new_name");
            };
            if ($@) { print STDERR "eval barks: $@\n"; return }
        }

    And here is another method I used to count files:

        count_images($path_from);

        sub count_images {
            my $path = shift;
            opendir my $images, $path or die "died opening $path";
            while (my $item = readdir $images) {
                next if $item eq '.' or $item eq '..';
                $img_counter++ && next if -f "$path/$item";
                count_images("$path/$item") if -d "$path/$item";
            }
            closedir $images or die "died closing $path";
        }

        print $img_counter;
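
    (A traversal that quietly ends early is often swallowing errors somewhere. Not Perl, but the diagnostic idea transfers: walk the tree with an error hook so every unreadable directory is reported instead of silently skipped. A sketch in Python with a hypothetical path — note os.walk ignores listing errors by default unless you pass onerror:)

        import os

        def count_files(root):
            total, errors = 0, []
            # onerror receives the OSError for any directory that can't be listed.
            for _dirpath, _dirnames, filenames in os.walk(root, onerror=errors.append):
                total += len(filenames)
            for err in errors:
                print('skipped:', err)
            return total

        print(count_files(r'C:\path\to\images'))  # hypothetical path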

  • CGAffineTransformMakeRotation goes the other way after 180 degrees (-3.14)

    - by TheKillerDev
    So, I am trying to do a very simple 2D disc rotation, according to the user's touch on it, just like a DJ or something. It is working, but there is a problem: after a certain amount of rotation, it starts going backwards. This amount is after 180 degrees, or, as I saw while logging the angle, -3.14 (pi). I was wondering how I can achieve an infinite loop — I mean, so the user can keep rotating and rotating to either side, just sliding his finger. Also, a second question: is there any way to speed up the rotation? Here is my code right now:

        #import <UIKit/UIKit.h>

        @interface Draggable : UIImageView {
            CGPoint firstLoc;
            UILabel * fred;
            double angle;
        }
        @property (assign) CGPoint firstLoc;
        @property (retain) UILabel * fred;
        @end

        @implementation Draggable
        @synthesize fred, firstLoc;

        - (id)initWithFrame:(CGRect)frame
        {
            self = [super initWithFrame:frame];
            angle = 0;
            if (self) {
                // Initialization code
            }
            return self;
        }

        -(void)handleObject:(NSSet *)touches withEvent:(UIEvent *)event isLast:(BOOL)lst
        {
            UITouch *touch = [[[event allTouches] allObjects] lastObject];
            CGPoint curLoc = [touch locationInView:self];

            float fromAngle = atan2(firstLoc.y - self.center.y, firstLoc.x - self.center.x);
            float toAngle = atan2(curLoc.y - (self.center.y + 10), curLoc.x - (self.center.x + 10));
            float newAngle = angle + (toAngle - fromAngle);
            NSLog(@"%f", newAngle);

            CGAffineTransform cgaRotate = CGAffineTransformMakeRotation(newAngle);
            self.transform = cgaRotate;

            if (lst)
                angle = newAngle;
        }

        -(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            UITouch *touch = [[[event allTouches] allObjects] lastObject];
            firstLoc = [touch locationInView:self];
        }

        -(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [self handleObject:touches withEvent:event isLast:NO];
        }

        -(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
        {
            [self handleObject:touches withEvent:event isLast:YES];
        }
        @end

    And in the view controller:

        UIImage *tmpImage = [UIImage imageNamed:@"theDisc.png"];
        CGRect cellRectangle;
        cellRectangle = CGRectMake(-1, self.view.frame.size.height, tmpImage.size.width, tmpImage.size.height);
        dragger = [[Draggable alloc] initWithFrame:cellRectangle];
        [dragger setImage:tmpImage];
        [dragger setUserInteractionEnabled:YES];
        dragger.layer.anchorPoint = CGPointMake(.5, .5);
        [self.view addSubview:dragger];

    I am open to new/cleaner/more correct ways of doing this too. Thanks in advance.
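
    (The backwards jump is the classic atan2 wrap-around: the difference toAngle - fromAngle is only meaningful modulo 2π, and near ±π it flips sign. One standard fix — shown here as plain math in Python rather than Objective-C — is to normalize each incremental delta into (-π, π] before accumulating it; a gain factor then answers the speed-up question too:)

        import math

        def normalized_delta(from_angle, to_angle):
            """Smallest signed rotation taking from_angle to to_angle, in (-pi, pi]."""
            delta = to_angle - from_angle
            return math.atan2(math.sin(delta), math.cos(delta))

        # Accumulating normalized deltas lets the total angle grow without bound:
        angle, gain = 0.0, 1.0                        # gain > 1 would speed the disc up
        for frm, to in [(3.0, -3.0), (-3.0, -2.5)]:   # samples crossing the pi boundary
            angle += gain * normalized_delta(frm, to)
        print(angle)  # ~0.78, whereas naive subtraction would give -5.5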

  • Storing an NTFS Security Descriptor in C

    - by Doori Bar
    My goal is to store an NTFS security descriptor in its identical native state, the purpose being to restore it on demand. I managed to write the code for that purpose, and I was wondering if anybody would mind validating a sample of it? (The for loop represents the way I store the native descriptor.) This sample only contains the flag for "OWNER", but my intention is to apply the same method to all of the security descriptor flags. I'm just a beginner and would appreciate the heads-up. Thanks, Doori Bar

        #define _WIN32_WINNT 0x0501
        #define WINVER 0x0501

        #include <stdio.h>
        #include <windows.h>
        #include "accctrl.h"
        #include "aclapi.h"
        #include "sddl.h"

        int main(void)
        {
            DWORD lasterror;
            PSECURITY_DESCRIPTOR PSecurityD1, PSecurityD2;
            HANDLE hFile;
            PSID owner;
            LPTSTR ownerstr;
            BOOL ownerdefault;
            int ret = 0;
            unsigned int i;

            hFile = CreateFile("c:\\boot.ini", GENERIC_READ | ACCESS_SYSTEM_SECURITY,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING,
                               FILE_FLAG_BACKUP_SEMANTICS, NULL);
            if (hFile == INVALID_HANDLE_VALUE) {
                fprintf(stderr, "CreateFile() failed. Error: INVALID_HANDLE_VALUE\n");
                return 1;
            }

            lasterror = GetSecurityInfo(hFile, SE_FILE_OBJECT, OWNER_SECURITY_INFORMATION,
                                        &owner, NULL, NULL, NULL, &PSecurityD1);
            if (lasterror != ERROR_SUCCESS) {
                fprintf(stderr, "GetSecurityInfo() failed. Error: %lu;\n", lasterror);
                ret = 1;
                goto ret1;
            }

            ConvertSidToStringSid(owner, &ownerstr);
            printf("ownerstr of PSecurityD1: %s\n", ownerstr);

            /* The for loop represents the way I store the native descriptor */
            PSecurityD2 = malloc(GetSecurityDescriptorLength(PSecurityD1) * sizeof(unsigned char));
            for (i = 0; i < GetSecurityDescriptorLength(PSecurityD1); i++)
                ((unsigned char *) PSecurityD2)[i] = ((unsigned char *) PSecurityD1)[i];

            if (IsValidSecurityDescriptor(PSecurityD2) == 0) {
                fprintf(stderr, "IsValidSecurityDescriptor(PSecurityD2) failed.\n");
                ret = 2;
                goto ret2;
            }

            if (GetSecurityDescriptorOwner(PSecurityD2, &owner, &ownerdefault) == 0) {
                fprintf(stderr, "GetSecurityDescriptorOwner() failed.");
                ret = 2;
                goto ret2;
            }

            ConvertSidToStringSid(owner, &ownerstr);
            printf("ownerstr of PSecurityD2: %s\n", ownerstr);

        ret2:
            free(owner);
            free(ownerstr);
            free(PSecurityD1);
            free(PSecurityD2);
        ret1:
            CloseHandle(hFile);
            return ret;
        }

  • How to page multiple data sets in ASP.NET MVC

    - by REA_ANDREW
    On a single view I will have three sets of paged data, which means for each model I will have: the objects, the page index, and the page size. My initial thought was, for example:

        public class PagedModel<T> where T : class
        {
            public IList<T> Objects { get; set; }
            public int ModelPageIndex { get; set; }
            public int ModelPageSize { get; set; }
        }

    Then having a model which is to be supplied to the action, for example:

        public class TypesViewModel
        {
            public PagedModel<ObjectA> Types1 { get; set; }
            public PagedModel<ObjectB> Types2 { get; set; }
            public PagedModel<ObjectC> Types3 { get; set; }
        }

    So if I then, for example, have the Index view inherit from the type:

        System.Web.Mvc.ViewPage<uk.co.andrewrea.forum.Web.Models.TypesViewModel>

    Now, my initial action method for the Index is simply:

        public ActionResult Index()
        {
            var forDisplayPurposes = new TypesViewModel();
            return View(forDisplayPurposes);
        }

    If I then want to page, it is here that I am struggling to decide which action to take. Let's say that I select the next page of the Types2 PagedModel. What should the action look like in order to return the new view showing the second page of the Types2 PagedModel? I was thinking possibly to duplicate the action but use it with POST:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Index(TypesViewModel model)
        {
            return View(model);
        }

    Is this a good way to approach it? I understand there is always Session, but I was just wondering how such a thing is achieved currently out there, and whether any best methods have been mutually accepted. So, simply: one page with multiple paged models. How to persist the data for each using a wrapper model, which way should you pass in the model, and which way should you page the data, i.e. a form post?

    Lastly, I have seen routes take this into account, i.e.:

        {controller}/{action}/{id}/{pageindex}/{pagesize}

    But this only accounts for one model, and I do not really want to repeat the pagesize and pageindex values for the number of models I have inside the wrapper model. Thanks for your time!! Andrew

  • Python: (sampling with replacement): efficient algorithm to extract the set of UNIQUE N-tuples from a set

    - by Homunculus Reticulli
    I have a set of items, from which I want to select DISSIMILAR tuples (more on the definition of dissimilar tuples later). The set could contain potentially several thousand items, although typically it would contain only a few hundred. I am trying to write a generic algorithm that will allow me to select N items to form an N-tuple from the original set. The new set of selected N-tuples should be DISSIMILAR.

    An N-tuple A is said to be DISSIMILAR to another N-tuple B if and only if: every pair (2-tuple) that occurs in A does NOT appear in B.

    Note: for this algorithm, a 2-tuple (pair) is considered SIMILAR/IDENTICAL if it contains the same elements, i.e. (x,y) is considered the same as (y,x).

    This is a (possible variation on the) classic urn problem. A trivial (pseudocode) implementation of this algorithm would be something along the lines of:

        def fetch_unique_tuples(original_set, tuple_size):
            while True:
                # randomly select [tuple_size] items from the set to create first set
                # create a key or hash from the N elements and store in a set
                # store selected N-tuple in a container
                if end_condition_met:
                    break

    I don't think this is the most efficient way of doing this, and though I am no algorithm theorist, I suspect that the running time of this algorithm is NOT O(n) — in fact, it's probably more likely to be O(n!). I am wondering if there is a more efficient way of implementing such an algorithm, preferably reducing the time to O(n).

    Actually, as Mark Byers pointed out, there is a second variable m, which is the size of the number of elements being selected. This (i.e. m) will typically be between 2 and 5.

    Regarding examples, here is a typical (albeit shortened) example:

        original_list = ['CAGG', 'CTTC', 'ACCT', 'TGCA', 'CCTG', 'CAAA', 'TGCC', 'ACTT',
                         'TAAT', 'CTTG', 'CGGC', 'GGCC', 'TCCT', 'ATCC', 'ACAG', 'TGAA',
                         'TTTG', 'ACAA', 'TGTC', 'TGGA', 'CTGC', 'GCTC', 'AGGA', 'TGCT',
                         'GCGC', 'GCGG', 'AAAG', 'GCTG', 'GCCG', 'ACCA', 'CTCC', 'CACG',
                         'CATA', 'GGGA', 'CGAG', 'CCCC', 'GGTG', 'AAGT', 'CCAC', 'AACA',
                         'AATA', 'CGAC', 'GGAA', 'TACC', 'AGTT', 'GTGG', 'CGCA', 'GGGG',
                         'GAGA', 'AGCC', 'ACCG', 'CCAT', 'AGAC', 'GGGT', 'CAGC', 'GATG',
                         'TTCG']

    Selecting 3-tuples from the original list should produce a list (or set) similar to:

        [('CAGG', 'CTTC', 'ACCT'),
         ('CAGG', 'TGCA', 'CCTG'),
         ('CAGG', 'CAAA', 'TGCC'),
         ('CAGG', 'ACTT', 'ACCT'),
         ('CAGG', 'CTTG', 'CGGC'),
         ....
         ('CTTC', 'TGCA', 'CAAA')]

    [[Edit]] Actually, in constructing the example output, I realized that the earlier definition I gave for UNIQUENESS was incorrect. I have updated my definition and introduced a new metric of DISSIMILARITY instead, as a result of this finding.
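
    (One straightforward, greedy reading of the dissimilarity rule — accept a random m-tuple only if none of its unordered pairs has been seen before — is cheap to sketch. This is illustrative, not the O(n) algorithm being asked for; each accepted tuple costs O(m²) pair bookkeeping, and the retry bound is arbitrary:)

        import random
        from itertools import combinations

        def fetch_dissimilar_tuples(items, m, max_tries=10000):
            """Greedily collect m-tuples whose unordered pairs never repeat."""
            used_pairs = set()  # frozensets, so (x, y) counts the same as (y, x)
            result = []
            for _ in range(max_tries):
                candidate = random.sample(items, m)
                pairs = {frozenset(p) for p in combinations(candidate, 2)}
                if used_pairs.isdisjoint(pairs):
                    used_pairs |= pairs
                    result.append(tuple(candidate))
            return result

        tuples = fetch_dissimilar_tuples(['CAGG', 'CTTC', 'ACCT', 'TGCA', 'CCTG', 'CAAA'], 3)
        print(tuples)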
