Search Results

Search found 6008 results on 241 pages for 'chris pick'.


  • jQuery removeClass Not Working

    - by Wade D Ouellet
Hey, my site is here: http://treethink.com

What I have going is a news ticker on the right that lives inside a jQuery function. The function starts right away and extends the news ticker and then retracts it, as it should. When a navigation item is clicked, it adds a class to the div, which the function then checks for to see if it should stop extending/retracting. This all works great.

The problem I am having is that after the close button is clicked (in the content window), the removeClass won't work. This means it keeps thinking the window is open, and therefore the conditional statement inside the function won't let it extend and retract again. A simple Firebug check shows that the class isn't being removed, period. It's not that it can't find the cboxClose id, because I tried changing that to just "a" tags in general and it still wouldn't work, so it's something to do with the jQuery for sure. Someone also suggested a quick alert() to check if the callback is working, but I'm not sure what this is. Here is the code:

```javascript
/* News Ticker */
/* Initially hide all news items */
$('#ticker1').hide();
$('#ticker2').hide();
$('#ticker3').hide();

var randomNum = Math.floor(Math.random() * 3); /* Pick random number */

newsTicker();

function newsTicker() {
    if (!$("#ticker").hasClass("noTicker")) {
        $("#ticker").oneTime(2000, function(i) { /* Do the first pull out once */
            $('div#ticker div:eq(' + randomNum + ')').show(); /* Select div with random number */
            $("#ticker").animate({right: "0"}, {duration: 800}); /* Pull out ticker with random div */
        });
        $("#ticker").oneTime(15000, function(i) { /* Do the first retract once */
            $("#ticker").animate({right: "-450"}, {duration: 800}); /* Retract ticker */
            $("#ticker").oneTime(1000, function(i) { /* Afterwards */
                $('div#ticker div:eq(' + (randomNum) + ')').hide(); /* Hide that div */
            });
        });
        $("#ticker").everyTime(16500, function(i) { /* Every time timer gets to certain point */
            /* Show next div */
            randomNum = (randomNum + 1) % 3;
            $('div#ticker div:eq(' + (randomNum) + ')').show();
            $("#ticker").animate({right: "0"}, {duration: 800}); /* Pull out right away */
            $("#ticker").oneTime(15000, function(i) { /* Afterwards */
                $("#ticker").animate({right: "-450"}, {duration: 800}); /* Retract ticker */
            });
            $("#ticker").oneTime(16000, function(i) { /* Afterwards */
                /* Hide all divs */
                $('#ticker1').hide();
                $('#ticker2').hide();
                $('#ticker3').hide();
            });
        });
    } else {
        $("#ticker").animate({right: "-450"}, {duration: 800}); /* Retract ticker */
        $("#ticker").oneTime(1000, function(i) { /* Afterwards */
            $('div#ticker div:eq(' + (randomNum) + ')').hide(); /* Hide that div */
        });
        $("#ticker").stopTime();
    }
}

/* When a nav item is clicked, re-run the news ticker function but give it a new class to prevent activity */
$("#nav li").click(function() {
    $("#ticker").addClass("noTicker");
    newsTicker();
});

/* When the close button is clicked, re-run the news ticker function but take away the class so activity can start again */
$("#cboxClose").click(function() {
    $("#ticker").removeClass("noTicker");
    newsTicker();
});
```

Thanks, Wade
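Regarding the alert() suggestion mentioned above, a minimal debug sketch, assuming the #cboxClose element from the site's colorbox markup: if the alert never fires, the click handler was never attached (for example, because #cboxClose is created dynamically after this code runs), which would explain why the class never comes off.

```javascript
/* Hypothetical debug check for the callback mentioned above. */
$("#cboxClose").click(function () {
    alert("close handler fired");           // never firing => handler was never bound
    $("#ticker").removeClass("noTicker");   // same removal as in the code above
    newsTicker();
});
```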

  • Token replacement

    - by ClarkeyBoy
Hey,

I currently have a system on my website whereby I can put something like "[cfe]" anywhere in the site and, when the page is rendered, it is replaced with the root of the customer front end (the same goes for "[afe]" and the admin front end), so in the admin front end I can put "[cfe]/Default.aspx" to link to the homepage on the customer front end.

This is in place because I have a development version of the site, then a test and a live version too. All three may have different roots to each section. For example, the way the website is set up, the root to the admin front end in test is "/test/Administration/", but in live and development it is just "/Administration/". Which version it is depends on the URL: all my development sites are in a folder called "development", test is in a folder called "test", and any live URLs contain neither. There are also three different databases, one for each, and all three, obviously, require a different connection string.

I also have a string replacement function in place which can change, for example, "[Product:Cards]" to point to the Cards catalogue page. The problem is that for this I go through all the products and do a replacement on "[Product:" & Product.Name() & "]". I would like to take this further: pick up these custom strings when the page is rendered, so it spots "[Product:Cards]", goes off to find the product "Cards", and replaces the string with a link to the Cards page, rather than looping through all the products and doing a replace just on the off chance that there are any replacements to make.

One use for this, which I may start using in the future if I can figure out how to do it, is like on Wikipedia, where you put the title of the page you want to point to, then a divider (I think it's a pipe, from memory), then the link text. I would like to apply this to the above situation. This way broken links can also be picked up and reported to admin (a major advantage, as they can then locate them and remove the link or add the product / page that it refers to).

I would like to take this to the stage where the content of entire pages can be rearranged (kinda like web parts, but not as advanced as that). I mean so you can put [layout type="3columnImageTopRight" image="imageurl"]Content here[/layout]. This will display, as specified, an image in the top right (with padding at the left and bottom) and three columns, maybe with the image spanning one or two of them. The imageurl can be specified as another token, maybe like [Image:imagename.gif] or something; this gets replaced with the root of the image folder plus the specified filename. I have not really looked into how I am going to split the content into three columns yet, but this would be something to look at for my dissertation and then implement after my project deadline at least.

Does anyone have any ideas or pointers which could help me with this? Also, if this is not strictly token replacement then please point me to what it is, so I can develop it further.

Thanks in advance,

Regards,
Richard Clarke
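For reference, a minimal sketch of the render-time approach described above, in VB.NET since the site appears to be ASP.NET. Regex.Replace with a match evaluator visits only the tokens that actually occur in the page, instead of looping over every product; GetProductUrl and ReportBrokenLink are hypothetical helpers standing in for whatever lookup and admin reporting already exist. It also handles the Wikipedia-style "[Product:Cards|link text]" pipe syntax.

```vbnet
Imports System.Text.RegularExpressions

Function ReplaceProductTokens(ByVal html As String) As String
    Return Regex.Replace(html, "\[Product:([^\]|]+)(?:\|([^\]]+))?\]",
        Function(m As Match)
            Dim url As String = GetProductUrl(m.Groups(1).Value) ' hypothetical lookup
            If url Is Nothing Then
                ReportBrokenLink(m.Groups(1).Value)              ' hypothetical admin report
                Return m.Groups(1).Value                         ' leave plain text in place
            End If
            ' use the text after the pipe if present, otherwise the product name
            Dim text As String = If(m.Groups(2).Success, m.Groups(2).Value, m.Groups(1).Value)
            Return String.Format("<a href=""{0}"">{1}</a>", url, text)
        End Function)
End Function
```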

  • How do I use UIImagePickerController just to display the camera and not take a picture?

    - by Thomas
Hello All: I'd like to know how to open the camera inside of a pre-defined frame (not the entire screen). When the view loads, I have a box, and inside it I want to display what the camera sees. I don't want to snap a picture, just basically use the camera as a viewfinder. I have searched this site and have not yet found what I'm looking for. Please help. Thanks! Thomas

Update 1: Here is what I have tried so far.

1.) I added a UIImageView to my xib.

2.) Connected the following outlet to the UIImageView in IB:

```objc
IBOutlet UIImageView *cameraWindow;
```

3.) I put the following code in viewWillAppear:

```objc
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self;
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    [self presentModalViewController:picker animated:YES];
    NSLog(@"viewWillAppear ran");
}
```

But this method does not run, as evidenced by the absence of the NSLog statement from my console. Please help! Thanks, Thomas

Update 2: OK, I got it to run by putting the code in viewDidLoad, but my camera still doesn't show up... any suggestions? Anyone...? I've been reading the UIImagePickerController class reference, but am kinda unsure how to make sense of it. I'm still learning iPhone, so it's a bit of a struggle. Please help!

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    // Create a bool variable "camera" and call isSourceTypeAvailable to see if a camera exists on the device
    BOOL camera = [UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera];
    // If there is a camera, then display the world through the viewfinder
    if (camera) {
        UIImagePickerController *picker = [[UIImagePickerController alloc] init];
        // Since I'm not actually taking a picture, is a delegate function necessary?
        picker.delegate = self;
        picker.sourceType = UIImagePickerControllerSourceTypeCamera;
        [self presentModalViewController:picker animated:YES];
        NSLog(@"Camera is available");
    }
    // Otherwise, do nothing.
    else {
        NSLog(@"No camera available");
    }
}
```

Thanks! Thomas

Update 3: A-HA! Found this in the Apple class reference:

"Discussion: The delegate receives notifications when the user picks an image or movie, or exits the picker interface. The delegate also decides when to dismiss the picker interface, so you must provide a delegate to use a picker. If this property is nil, the picker is dismissed immediately if you try to show it."

Gonna play around with the delegate now. Then I'm going to read on wtf a delegate is. Backwards? Whatever :-p

Update 4: The two delegate functions for the class are – imagePickerController:didFinishPickingMediaWithInfo: and – imagePickerControllerDidCancel:, and since I don't actually want to pick an image or give the user the option to cancel, I am just defining the methods. They should never run, though... I think.
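A hedged sketch of the viewfinder-in-a-box idea from the original question, built on the cameraWindow outlet above. Instead of presenting the picker modally (which is always full-screen), it adds the picker's view as a subview confined to the cameraWindow frame and hides the shutter controls, since no picture is being taken. This is an unofficial technique from this era of the SDK, so treat it as an experiment rather than a guaranteed approach.

```objc
// Sketch: embed the camera preview inside the cameraWindow box.
if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.showsCameraControls = NO;        // viewfinder only, no shutter UI
    picker.view.frame = cameraWindow.frame; // confine the preview to the box
    [self.view addSubview:picker.view];     // subview instead of a modal present
}
```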

  • How to pre-load all deployed assemblies for an AppDomain

    - by Andras Zoltan
Given an App Domain, there are many different locations that Fusion (the .NET assembly loader) will probe for a given assembly. Obviously we take this functionality for granted and, since the probing appears to be embedded within the .NET runtime (the Assembly._nLoad internal method seems to be the entry point when reflect-loading, and I assume that implicit loading is probably covered by the same underlying algorithm), as developers we don't seem to be able to gain access to those search paths.

My problem is that I have a component that does a lot of dynamic type resolution, and which needs to be able to ensure that all user-deployed assemblies for a given AppDomain are pre-loaded before it starts its work. Yes, it slows down startup, but the benefits we get from this component totally outweigh this.

The basic loading algorithm I've already written is as follows. It deep-scans a set of folders for any .dll (.exes are being excluded at the moment), and uses Assembly.LoadFrom to load the dll if its AssemblyName cannot be found in the set of assemblies already loaded into the AppDomain (this is implemented inefficiently, but it can be optimized later):

```csharp
void PreLoad(IEnumerable<string> paths)
{
    foreach (string p in paths)
    {
        PreLoad(p);
    }
}

void PreLoad(string p)
{
    // all try/catch blocks are elided for brevity
    string[] files = null;
    files = Directory.GetFiles(p, "*.dll", SearchOption.AllDirectories);

    AssemblyName a = null;
    foreach (var s in files)
    {
        a = AssemblyName.GetAssemblyName(s);
        if (!AppDomain.CurrentDomain.GetAssemblies().Any(
                assembly => AssemblyName.ReferenceMatchesDefinition(
                    assembly.GetName(), a)))
            Assembly.LoadFrom(s);
    }
}
```

LoadFrom is used because I've found that using Load() can lead to duplicate assemblies being loaded by Fusion if, when it probes for one, it doesn't find it loaded from where it expects to find it.

So, with this in place, all I now have to do is get a list, in precedence order (highest to lowest), of the search paths that Fusion is going to use when it searches for an assembly. Then I can simply iterate through them. The GAC is irrelevant for this, and I'm not interested in any environment-driven fixed paths that Fusion might use - only those paths that can be gleaned from the AppDomain which contain assemblies expressly deployed for the app.

My first iteration of this simply used AppDomain.BaseDirectory. This works for services, forms apps and console apps. It doesn't work for an ASP.NET website, however, since there are at least two main locations: the AppDomain.DynamicDirectory (where ASP.NET places its dynamically generated page classes and any assemblies that the Aspx page code references), and then the site's Bin folder, which can be discovered from the AppDomain.SetupInformation.PrivateBinPath property.

So I now have working code for the most basic types of apps (SQL Server-hosted AppDomains are another story, since the filesystem is virtualised), but I came across an interesting issue a couple of days ago where this code simply doesn't work: the NUnit test runner. This uses both shadow copying (so my algorithm would need to be discovering and loading the assemblies from the shadow-copy drop folder, not from the bin folder) and it sets up the PrivateBinPath as relative to the base directory. And of course there are loads of other hosting scenarios that I probably haven't considered, but which must be valid, because otherwise Fusion would choke on loading the assemblies.

I want to stop feeling around and introducing hack upon hack to accommodate these new scenarios as they crop up. What I want is, given an AppDomain and its setup information, the ability to produce the list of folders I should scan in order to pick up all the DLLs that are going to be loaded, regardless of how the AppDomain is set up. If Fusion can see them all as the same, then so should my code. Of course, I might have to alter the algorithm if .NET changes its internals - that's just a cross I'll have to bear. Equally, I'm happy to consider SQL Server and any other similar environments as edge cases that remain unsupported for now. Any ideas!?
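In case it helps frame answers, a hedged sketch of the folder-list idea using only documented AppDomainSetup properties. It covers the base directory, ASP.NET's dynamic directory, the relative PrivateBinPath entries (the NUnit case), and the shadow-copy cache, but certainly not every host:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

static IEnumerable<string> GetProbingFolders(AppDomain domain)
{
    AppDomainSetup setup = domain.SetupInformation;
    string baseDir = setup.ApplicationBase ?? domain.BaseDirectory;
    yield return baseDir;

    // ASP.NET-style dynamically generated assemblies
    if (domain.DynamicDirectory != null)
        yield return domain.DynamicDirectory;

    // PrivateBinPath is a ';'-separated list of paths relative to the base
    if (!string.IsNullOrEmpty(setup.PrivateBinPath))
        foreach (string bin in setup.PrivateBinPath.Split(';'))
            yield return Path.Combine(baseDir, bin);

    // with shadow copying on, the loaded files live under the cache path instead
    if (string.Equals(setup.ShadowCopyFiles, "true", StringComparison.OrdinalIgnoreCase)
        && setup.CachePath != null)
        yield return setup.CachePath;
}
```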

  • WPF ComboBox binding: can't change selection.

    - by SteveCav
After wasting hours on this, following on the heels of my last problem, I'm starting to feel that Framework 4 is a master of subtle evil, or my PC is haunted. I have three comboboxes and a textbox on a WPF form, and I have an out-of-the-box Subsonic 3 ActiveRecord DAL. When I load this "edit record" form, the comboboxes fill correctly, they select the correct items, and the textbox has the correct text. I can change the TextBox text and save the record just fine, but the comboboxes CANNOT BE CHANGED. The lists drop down and highlight, but when you click on an item, the selected item stays the same. Here's my XAML:

```xml
<StackPanel Orientation="Horizontal" Margin="10,10,0,0">
    <TextBlock Width="80">Asset</TextBlock>
    <ComboBox Name="cboAsset" Width="180"
              DisplayMemberPath="AssetName"
              SelectedValuePath="AssetID"
              SelectedValue="{Binding AssetID}"></ComboBox>
</StackPanel>
<StackPanel Orientation="Horizontal" Margin="10,10,0,0">
    <TextBlock Width="80">Status</TextBlock>
    <ComboBox Name="cboStatus" Width="180"
              DisplayMemberPath="JobStatusDesc"
              SelectedValuePath="JobStatusID"
              SelectedValue="{Binding JobStatusID}"></ComboBox>
</StackPanel>
<StackPanel Orientation="Horizontal" Margin="10,10,0,0">
    <TextBlock Width="80">Category</TextBlock>
    <ComboBox Name="cboCategories" Width="180"
              DisplayMemberPath="CategoryName"
              SelectedValuePath="JobCategoryID"
              SelectedValue="{Binding JobCategoryID}"></ComboBox>
</StackPanel>
<StackPanel Orientation="Horizontal" Margin="10,10,0,0">
    <TextBlock Width="80">Reason</TextBlock>
    <TextBox Name="txtReason" Width="380" Text="{Binding Reason}"/>
</StackPanel>
```

Here are the relevant snips of my code (intJobID is passed in):

```csharp
SvcMgrDAL.Job oJob;
IQueryable<SvcMgrDAL.JobCategory> oCategories = SvcMgrDAL.JobCategory.All().OrderBy(x => x.CategoryName);
IQueryable<SvcMgrDAL.Asset> oAssets = SvcMgrDAL.Asset.All().OrderBy(x => x.AssetName);
IQueryable<SvcMgrDAL.JobStatus> oStatus = SvcMgrDAL.JobStatus.All();

cboCategories.ItemsSource = oCategories;
cboStatus.ItemsSource = oStatus;
cboAsset.ItemsSource = oAssets;

this.JobID = intJobID;
oJob = SvcMgrDAL.Job.SingleOrDefault(x => x.JobID == intJobID);
this.DataContext = oJob;
```

Things I've tried:

- Explicitly setting IsReadOnly="false" and IsSynchronizedWithCurrentItem="True".
- Changing the combobox ItemsSources from IQueryables to Lists.
- Building my own Job object (plain vanilla entity class).
- Every binding mode for the comboboxes.

The Subsonic DAL doesn't implement INotifyPropertyChanged, but I don't see that it'd need to for simple binding like this. I just want to be able to pick something from the dropdown and save it. Comparing it with my last problem (link at the top of this message), I seem to have something really weird with data sources going on. Maybe it's a Subsonic thing?
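On the "plain vanilla entity class" experiment mentioned above, one hedged variant worth trying is giving that class INotifyPropertyChanged, since without change notification WPF can visually revert a two-way-bound selection in some scenarios. Property names here are assumed from the XAML:

```csharp
using System.ComponentModel;

public class JobEditModel : INotifyPropertyChanged
{
    private int jobStatusID;
    public int JobStatusID
    {
        get { return jobStatusID; }
        set
        {
            jobStatusID = value;
            OnPropertyChanged("JobStatusID"); // tell WPF the new value "took"
        }
    }

    // AssetID, JobCategoryID and Reason would follow the same pattern.

    public event PropertyChangedEventHandler PropertyChanged;
    protected void OnPropertyChanged(string name)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}
```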

  • Oracle data warehouse design - fact table acting as a dimension?

    - by Elizabeth
THANKS: Both answers here are very helpful, but I could only pick one. I really appreciate the advice!

Our data warehouse will be used more for workflow reports than traditional analytical reports. Our users care about the "current picture" far more than history (though history matters too). We are a government entity that does not have costs or related calculations - mostly just counts of people within given locations, with related history. We are using Oracle, and I have found a distinct advantage in using the star join whenever possible, and I would like to rearchitect everything to resemble the star schema as closely as is reasonable for our business uses. Speed in this DW is vital, and a number of tests have already proven the star schema approach to me.

Our "person" table is key: it contains over 4 million records and will be the most frequently used source in queries. It can be seen at the center of a star with multiple dimensions (like age, gender, affiliation, location, etc.). It is a very LONG table, particularly when I join it to the address and contact information. However, it acts more like a dimension table when we start looking at history. For example, there are two different history tables that have a person key pointing to the person table; one has over 20 million records, and the other has almost 50 million and grows daily.

Is this table a fact table or a dimension table? Can one work as both? If so, is that going to be a big performance problem? Is it common to query more off a dimension than a fact? What happens if a DIFFERENT fact table that uses the person table as a dimension is actually only 60,000 records (much smaller)? I think my problem is that our data, and our use of it, does not fit the commonly used examples of star schemas.

CLARIFICATION: Some good thoughts have been added below, but perhaps I left too much out to explain well. Here's some more info: we handle a voter database. We don't have any measures except voter counts by various groups: voter counts by party, by age, by location; voter counts by ballot type and election, by ballot status and election, etc. We do have a "voting history" log as well as an activity audit log (change of address, party, etc.). We have information on which voters are election workers and all that related information. I figure I'll get to the peripheral stuff later. For now I'm focusing on our two major "business processes": voter registration (which IS a voter) and election turnout. In the first, the voter is a fact. In the second, the voter is a dimension, along with party, election, and type of ballot. (And in case anyone is worried - no, we don't know HOW people vote. Just that they do. LOL) I hope that clarifies things a bit.
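To make the two roles concrete, a hedged pair of example queries (all table and column names assumed) showing the same person table as the fact in registration counts, and as a dimension joined to the much larger turnout log:

```sql
-- Registration: person is the fact, one row per voter.
SELECT d_party.party_name, COUNT(*) AS voter_count
FROM   person p
JOIN   d_party ON d_party.party_id = p.party_id
GROUP  BY d_party.party_name;

-- Turnout: the voting-history log is the fact; person acts as a
-- (very large) dimension between the log and the smaller dimensions.
SELECT e.election_name, d_party.party_name, COUNT(*) AS ballots_cast
FROM   voting_history f
JOIN   person p      ON p.person_id = f.person_id
JOIN   d_party       ON d_party.party_id = p.party_id
JOIN   d_election e  ON e.election_id = f.election_id
GROUP  BY e.election_name, d_party.party_name;
```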

  • WinQual: Why would WER not accept code-signing certificates?

    - by Ian Boyd
In 2005 I tried to establish a WinQual account with Microsoft, so I could pick up our (if any) crash dump files submitted automatically through Windows Error Reporting (WER). I was not allowed to have my crash dumps, because I don't have a VeriSign certificate. Instead I have a cheaper one, generated by a VeriSign subsidiary: Thawte.

The method by which you join is: you digitally sign a sample exe they provide. This proves that you are the same signer that signed the apps they got crash dumps from in the wild. Cryptographically, the private key is needed to generate a digital signature on an executable. Only the holder of that private key can create a signature for the matching public key. It doesn't matter who generated that private key. That includes certificates generated from: self-signing, Wells Fargo, DigiCert, SecureTrust, Trustware, QuoVadis, GoDaddy, Entrust, Cybertrust, GeoTrust, GlobalSign, Comodo, Thawte, and VeriSign.

Yet Microsoft's WinQual only accepts digital certificates generated by VeriSign. Not even VeriSign's subsidiaries are good enough (Thawte). Can anyone think of any technical, legal or ethical reason why Microsoft doesn't want to accept code-signing certificates? The WinQual site says:

"Why Is a Digital Certificate Required for Winqual Membership? A digital certificate helps protect your company from individuals who seek to impersonate members of your staff or who would otherwise commit acts of fraud against your company. Using a digital certificate enables proof of an identity for a user or an organization."

Is a Thawte digital certificate somehow not secure? Two years later, I sent a reminder notice to WinQual that I've been waiting to be able to get at my crash dumps. The response from the WinQual team was:

"Hello, Thanks for the reminder. We have notified the appropriate people that this is still a request."

In 2008 I asked this question in a Microsoft support forum, and the response was:

"We are only setup to accept VeriSign Certificates at this point. We have not had an overwhelming demand to support other types of certificates."

What can it possibly mean to not be "setup" to accept other kinds of certificates? If the thumbprint of the key that signed the WinQual.exe test app is the same as the thumbprint that signed the executable whose crash dump they got in the wild, it is proven: they are my crash dumps, give them to me. And it's not as if there's a special API to check whether a VeriSign digital signature is valid, as opposed to all other digital signatures. A valid signature is valid no matter who generated the key. Microsoft is free not to trust the signer, but that's not the same as identity.

So that is my question: can anyone think of any practical reason why WinQual isn't set up to support digital signatures? One person theorized that the answer is that they're just lazy:

"Not that I know but I would assume that the team running the WinQual system is a live team and not a dev team - as in, personality and skillset geared towards maintenance of existing systems. I could be wrong though."

They don't want to do the work to change it. But can anyone think of anything that would actually need to be changed? It's the same logic no matter what generated the key: "does the thumbprint match?" What am I missing?

  • Is SQL Server DRI (ON DELETE CASCADE) slow?

    - by Aaronaught
I've been analyzing a recurring "bug report" (perf issue) in one of our systems related to a particularly slow delete operation. Long story short: it seems that the CASCADE DELETE keys were largely responsible, and I'd like to know (a) if this makes sense, and (b) why it's the case.

We have a schema of, let's say, widgets, those being at the root of a large graph of related tables and related-to-related tables and so on. To be perfectly clear, deleting from this table is actively discouraged; it is the "nuclear option" and users are under no illusions to the contrary. Nevertheless, it sometimes just has to be done. The schema looks something like this:

```
Widgets
|
+--- Anvils (1:1)
|    |
|    +--- AnvilTestData (1:N)
|
+--- WidgetHistory (1:N)
     |
     +--- WidgetHistoryDetails (1:N)
```

Nothing too scary, really. A Widget can be different types, an Anvil is a special type, so that relationship is 1:1 (or more accurately 1:0..1). Then there's a large amount of data - perhaps thousands of rows of AnvilTestData per Anvil collected over time, dealing with hardness, corrosion, exact weight, hammer compatibility, usability issues, and impact tests with cartoon heads. Then every Widget has a long, boring history of various types of transactions: production, inventory moves, sales, defect investigations, RMAs, repairs, customer complaints, etc. There might be 10-20k details for a single widget, or none at all, depending on its age.

So, unsurprisingly, there's a CASCADE DELETE relationship at every level here. If a Widget needs to be deleted, it means something's gone terribly wrong and we need to erase any records of that widget ever existing, including its history, test data, etc. Again, nuclear option. Relations are all indexed, statistics are up to date. Normal queries are fast. The system tends to hum along pretty smoothly for everything except deletes.

Getting to the point here, finally: for various reasons we only allow deleting one widget at a time, so a delete statement would look like this:

```sql
DELETE FROM Widgets
WHERE WidgetID = @WidgetID
```

Pretty simple, innocuous-looking delete... that takes over 2 minutes to run, for a widget with no data! After slogging through execution plans I was finally able to pick out the AnvilTestData and WidgetHistoryDetails deletes as the sub-operations with the highest cost. So I experimented with turning off the CASCADE (but keeping the actual FK, just setting it to NO ACTION) and rewriting the script as something very much like the following:

```sql
DECLARE @AnvilID int
SELECT @AnvilID = AnvilID
FROM Anvils
WHERE WidgetID = @WidgetID

DELETE FROM AnvilTestData
WHERE AnvilID = @AnvilID

DELETE FROM WidgetHistory
WHERE HistoryID IN (
    SELECT HistoryID
    FROM WidgetHistory
    WHERE WidgetID = @WidgetID)

DELETE FROM Widgets
WHERE WidgetID = @WidgetID
```

Both of these "optimizations" resulted in significant speedups, each one shaving nearly a full minute off the execution time, so that the original 2-minute deletion now takes about 5-10 seconds - at least for new widgets, without much history or test data. Just to be absolutely clear, there is still a CASCADE from WidgetHistory to WidgetHistoryDetails, where the fanout is highest; I only removed the one originating from Widgets. Further "flattening" of the cascade relationships resulted in progressively less dramatic but still noticeable speedups, to the point where deleting a new widget was almost instantaneous once all of the cascade deletes to larger tables were removed and replaced with explicit deletes.

I'm using DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE before each test. I've disabled all triggers that might be causing further slowdowns (although those would show up in the execution plan anyway). And I'm testing against older widgets too, and noticing a significant speedup there as well; deletes that used to take 5 minutes now take 20-40 seconds.

Now, I'm an ardent supporter of the "SELECT ain't broken" philosophy, but there just doesn't seem to be any logical explanation for this behaviour other than crushing, mind-boggling inefficiency of the CASCADE DELETE relationships. So, my questions are:

- Is this a known issue with DRI in SQL Server? (I couldn't seem to find any references to this sort of thing on Google or here in SO; I suspect the answer is no.)
- If not, is there another explanation for the behaviour I'm seeing?
- If it is a known issue, why is it an issue, and are there better workarounds I could be using?
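For reproducibility, a hedged sketch of the cold-cache test harness implied above, with timing and I/O statistics around the single-widget delete (the widget ID is a placeholder, and the inline DECLARE initializer assumes SQL Server 2008 or later):

```sql
DBCC DROPCLEANBUFFERS;   -- flush cached data pages
DBCC FREEPROCCACHE;      -- flush cached query plans

DECLARE @WidgetID int = 42;  -- placeholder ID

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

DELETE FROM Widgets WHERE WidgetID = @WidgetID;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```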

  • python: what are efficient techniques to deal with deeply nested data in a flexible manner?

    - by AlexandreS
My question is not about a specific code snippet but more general, so please bear with me: how should I organize the data I'm analyzing, and which tools should I use to manage it?

I'm using python and numpy to analyse data. Because the python documentation indicates that dictionaries are very optimized in python, and also due to the fact that the data itself is very structured, I stored it in a deeply nested dictionary. Here is a skeleton of the dictionary: the position in the hierarchy defines the nature of the element, and each new line defines the contents of a key in the preceding level:

```
[AS091209M02] [AS091209M01] [AS090901M06] ...
    [100113] [100211] [100128] [100121]
        [R16] [R17] [R03] [R15] [R05] [R04] [R07] ...
            [1263399103] ...
                [ImageSize] [FilePath] [Trials] [Depth] [Frames] [Responses] ...
                    [N01] [N04] ...
                        [Sequential] [Randomized]
                            [Ch1] [Ch2]
```

Edit: To explain my data set a bit better:

```
[individual]                      ex: [AS091209M02]
[imaging session (date string)]   ex: [100113]
[region imaged]                   ex: [R16]
[timestamp of file]               ex: [1263399103]
[properties of file]              ex: [Responses]
[regions of interest in image]    ex: [N01]
[format of data]                  ex: [Sequential]
[channel of acquisition: this key indexes an array of values]  ex: [Ch1]
```

The type of operations I perform is, for instance, computing properties of the arrays (listed under Ch1, Ch2), picking up arrays to make a new collection, for instance analyzing responses of N01 from region 16 (R16) of a given individual at different time points, etc. This structure works well for me and is very fast, as promised. I can analyze the full data set pretty quickly (and the dictionary is far too small to fill up my computer's RAM: half a gig).

My problem comes from the cumbersome manner in which I need to program the operations on the dictionary. I often have stretches of code that go like this:

```python
for mk in dic.keys():
    for rgk in dic[mk].keys():
        for nk in dic[mk][rgk].keys():
            for ik in dic[mk][rgk][nk].keys():
                for ek in dic[mk][rgk][nk][ik].keys():
                    # do something
```

which is ugly, cumbersome, non-reusable, and brittle (it needs to be recoded for any variant of the dictionary).

I tried using recursive functions, but apart from the simplest applications, I ran into some very nasty bugs and bizarre behaviors that caused a big waste of time (it does not help that I don't manage to debug with pdb in ipython when I'm dealing with deeply nested recursive functions). In the end, the only recursive function I use regularly is the following:

```python
def dicExplorer(dic, depth=-1, stp=0):
    '''prints the hierarchy of a dictionary.
    if depth not specified, will explore all the dictionary
    '''
    if depth - stp == 0:
        return
    try:
        list_keys = dic.keys()
    except AttributeError:
        return
    stp += 1
    for key in list_keys:
        print '+%s> [\'%s\']' % (stp * '---', key)
        dicExplorer(dic[key], depth, stp)
```

I know I'm doing this wrong, because my code is long, noodly and non-reusable. I need to either use better techniques to flexibly manipulate the dictionaries, or to put the data in some database format (sqlite?). My problem is that since I'm (badly) self-taught in regards to programming, I lack the practical experience and background knowledge to appreciate the options available. I'm ready to learn new tools (SQL, object-oriented programming), whatever it takes to get the job done, but I am reluctant to invest my time and efforts into something that will be a dead end for my needs. So what are your suggestions to tackle this issue, and to be able to code my tools in a more brief, flexible and re-usable manner?
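As a point of comparison for answers, here is a hedged sketch of one generic alternative to the five nested loops above: a recursive generator that walks the dictionary and yields each leaf together with the tuple of keys that addresses it (Python 2 syntax, to match the code above):

```python
def walk(d, path=()):
    """Yield (key_path, leaf) pairs for every leaf in a nested dict."""
    if isinstance(d, dict):
        for k in d:
            for pair in walk(d[k], path + (k,)):
                yield pair
    else:
        yield path, d

# usage: one flat loop instead of five nested ones
# for keys, value in walk(dic):
#     # keys is e.g. (individual, session, region, timestamp, ...)
#     # do something with value
```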

  • Java PHP posting using URLConnection, PHP file doesn't seem to receive parameters

    - by Emdiesse
Hi there, I am trying to post some simple string data to my php script via a java application. My PHP script works fine when I enter the data myself using a web browser (newvector.php?x1=&y1=...). However, using my java application, the php file does not seem to pick up these parameters; in fact, I don't even know if they are being sent at all, because if I comment out one of the parameters in the java code when I am writing to dta, it doesn't actually return "you must enter 6 parameters".

newvector.php:

```php
if(!isset($_GET['x1']) || !isset($_GET['y1']) || !isset($_GET['t1']) ||
   !isset($_GET['x2']) || !isset($_GET['y2']) || !isset($_GET['t2'])){
    die("You must include 6 parameters within the URL: x1, y1, t1, x2, y2, t2");
}

$x1 = $_GET['x1'];
$x2 = $_GET['x2'];
$y1 = $_GET['y1'];
$y2 = $_GET['y2'];
$t1 = $_GET['t1'];
$t2 = $_GET['t2'];

$insert = "
    INSERT INTO vectors( x1, x2, y1, y2, t1, t2 )
    VALUES ( '$x1', '$x2', '$y1', '$y2', '$t1', '$t2' )
";

if(!mysql_query($insert, $conn)){
    die('Error: ' . mysql_error());
}

echo "Submitted Data x1=".$x1." y1=".$y1." t1=".$t1." x2=".$x2." y2=".$y2." t2=".$t2;

include 'db_disconnect.php';
?>
```

The java code:

```java
else if (action.equals("Play")) {
    for (int i = 0; i < 5; i++) { // data.size()
        String x1, y1, t1, x2, y2, t2 = "";
        String date = "2010-04-03 ";
        String ms = ".0";
        x1 = data.elementAt(i)[1];
        y1 = data.elementAt(i)[0];
        t1 = date + data.elementAt(i)[2] + ms;
        x2 = data.elementAt(i)[4];
        y2 = data.elementAt(i)[3];
        t2 = date + data.elementAt(i)[5] + ms;

        try {
            // Create Post String
            String dta = URLEncoder.encode("x1", "UTF-8") + "=" + URLEncoder.encode(x1, "UTF-8");
            dta += "&" + URLEncoder.encode("y1", "UTF-8") + "=" + URLEncoder.encode(y1, "UTF-8");
            dta += "&" + URLEncoder.encode("t1", "UTF-8") + "=" + URLEncoder.encode(t1, "UTF-8");
            dta += "&" + URLEncoder.encode("x2", "UTF-8") + "=" + URLEncoder.encode(x2, "UTF-8");
            dta += "&" + URLEncoder.encode("y2", "UTF-8") + "=" + URLEncoder.encode(y2, "UTF-8");
            dta += "&" + URLEncoder.encode("t2", "UTF-8") + "=" + URLEncoder.encode(t2, "UTF-8");
            System.out.println(dta);

            // Send Data To Page
            URL url = new URL("http://localhost/newvector.php");
            URLConnection conn = url.openConnection();
            conn.setDoOutput(true);
            OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream());
            wr.write(dta);
            wr.flush();

            // Get The Response
            BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = rd.readLine()) != null) {
                System.out.println(line); // you can break the string down here
            }
            wr.close();
            rd.close();
        } catch (Exception exc) {
            System.out.println("Hmmm!!! " + exc.getMessage());
        }
    }
}
```
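One thing worth noting when comparing the two snippets: data written to the URLConnection's output stream is sent as a POST body, and PHP's $_GET only sees parameters in the URL's query string. A hedged sketch of sending the same dta string as GET parameters instead, so the existing PHP stays unchanged:

```java
// Send the already-encoded parameters in the query string (GET),
// which is where $_GET looks for them.
URL url = new URL("http://localhost/newvector.php?" + dta);
URLConnection conn = url.openConnection();

BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
String line;
while ((line = rd.readLine()) != null) {
    System.out.println(line);
}
rd.close();
```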

  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
I've got a large transaction comprising getting lots of data from database A, doing some manipulations with this data, then inserting the manipulated data into database B. I've only got permissions to select in database A, but I can create tables and insert/update etc. in database B.

The manipulation and insertion part is written in perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the perl classes.

How can I go about doing this so I can easily track back and pick up from where the error happened, if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for data manipulation and insertion into database B.

The data from database A can be grouped into around 1000 groups using unique keys, with each key containing thousands of rows. One way I thought I could do it is to write a script that commits per group, meaning I've got to track which group has already been inserted into database B. The only way I can think of to track the progress of which groups have been processed or not is either in a log file or in a table in database B.

A second way I thought could work is to dump all the necessary fields needed for loading the classes for manipulation and insertion into a flatfile, read the file to initialize the classes, and insert into database B. This also means that I've got to do some logging, but it should narrow it down to the exact row in the flatfile if any error occurs. The script will look something like this:

```perl
use strict;
use warnings;
use DBI;

# connect to database A
my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password,
                       { RaiseError => 1, AutoCommit => 0 });

# statement to get data based on group unique key
my $sth = $dbh->prepare($my_sql);

my @groups; # I have a list of this already

open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

eval {
    foreach my $g (@groups) {
        # subroutine to check if group has already been processed,
        # either from log file or from database table
        next if is_processed($g);

        $sth->execute($g);
        my $data = $sth->fetchall_arrayref;

        # manipulate $data, then use it to load perl classes for
        # insertion into database B
        # .
        # .
        # .

        print $fh "$g\n";  # log the group once it has been committed
    }
};
if ($@) {
    $dbh->rollback;
    die "something wrong...rollback";
}
```

So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue.

Both these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping the ones that have been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the perl classes for manipulating and inserting the data into the database.
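A hedged sketch of the is_processed() helper referenced in the script, using the log-file option (the simpler of the two tracking choices described): load the finished group keys once at startup, then test membership in memory.

```perl
# Hypothetical implementation of the progress check used above.
my %done;

sub load_processed {
    my ($logfile) = @_;
    open my $in, '<', $logfile or return;  # first run: no log yet, nothing done
    chomp( my @groups = <$in> );
    close $in;
    @done{@groups} = ();                   # hash slice: one key per finished group
}

sub is_processed {
    my ($group) = @_;
    return exists $done{$group};
}
```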

  • Optimizing a 3D World Javascript Animation

    - by johnny
Hi! I've recently come up with the idea to create a tag-cloud-like animation shaped like the earth. I've extracted the coastline coordinates from ngdc.noaa.gov and wrote a little script that displayed it in my browser. Now, as you can imagine, the whole coastline consists of about 48919 points, which my script would render individually (each coordinate being represented by one span). Obviously no browser is capable of rendering this fluently, but it would be nice if I could render as many as, let's say, 200 spans (twice as many as now) on my old P4 2.8 GHz (as a representative benchmark). Are there any javascript optimizations I could use in order to speed up the display of those spans?

One 'coordinate':

```html
<div id="world_pixels">
    <span id="wp_0"
          style="position:fixed; top:0px; left:0px; z-index:1; font-size:20px; cursor:pointer;cursor:hand;"
          onmouseover="magnify_world_pixel('wp_0');"
          onmouseout="shrink_world_pixel('wp_0');"
          onClick="set_askcue_bar('', 'new york')">new york</span>
</div>
```

The script:

```javascript
$(document).ready(function() {
    world_pixels = $("#world_pixels span");
    world_pixels.spin();
    setInterval("world_pixels.spin()", 1500);
});

z = new Array();

$.fn.spin = function () {
    for (i = 0; i < this.length; i++) {
        /* actual screen coordinates: x/y/z --> left/font-size/top
           300/13/0, 300/6/300, 0/13/300, 600/13/300, 300/20/300, 300/13/600 */
        /* scale font size */
        var resize_x = 1;
        /* scale width */
        var resize_y = 2.5;
        /* scale height */
        var resize_z = 2.5;
        var from_left = 300;
        var from_top = 20;
        /* actual math coordinates: 1 / -1 on each axis */
        //var get_element = document.getElementById();
        //var font_size = parseInt(this.style.fontSize);
        var font_size = parseInt($(this[i]).css("font-size"));
        var left = parseInt($(this[i]).css("left"));
        if (coast_line_array[i][1]) {
        } else {
            var top = parseInt($(this[i]).css("top"));
            z[i] = from_top + (top - (300 * resize_z)) / (300 * resize_z); // global because it's used in other functions later on
            var top_new = from_top + Math.round(Math.cos(coast_line_array[i][2] / 90 * Math.PI) * (300 * resize_z) + (300 * resize_z));
            $(this[i]).css("top", top_new);
            coast_line_array[i][3] = 1;
        }
        var x = resize_x * (font_size - 13) / 7;
        var y = from_left + (left - (300 * resize_y)) / (300 * resize_y);
        // note: "^" below is bitwise XOR in JavaScript, not exponentiation
        if (y >= 0) {
            this[i].phi = Math.acos(x / (Math.sqrt(x ^ 2 + y ^ 2)));
        } else {
            this[i].phi = 2 * Math.PI - Math.acos(x / (Math.sqrt(x ^ 2 + y ^ 2)));
        }
        this[i].theta = Math.acos(z[i] / Math.sqrt(x ^ 2 + y ^ 2 + z[i] ^ 2));
        var font_size_new = resize_x * Math.round(Math.sin(coast_line_array[i][4] / 90 * Math.PI) * Math.cos(coast_line_array[i][0] / 180 * Math.PI) * 7 + 13);
        var left_new = from_left + Math.round(Math.sin(coast_line_array[i][5] / 90 * Math.PI) * Math.sin(coast_line_array[i][0] / 180 * Math.PI) * (300 * resize_y) + (300 * resize_y));
        //coast_line_array[i][6] = coast_line_array[i][7]+1;
        if ((coast_line_array[i][0] + 1) > 180) {
            coast_line_array[i][0] = -180;
        } else {
            coast_line_array[i][0] = coast_line_array[i][0] + 0.25;
        }
        $(this[i]).css("font-size", font_size_new);
        $(this[i]).css("left", left_new);
    }
}

resize_x = 1;

function magnify_world_pixel(element) {
    $("#" + element).animate({ fontSize: resize_x * 30 + "px" }, { duration: 1000 });
}

function shrink_world_pixel(element) {
    $("#" + element).animate({ fontSize: resize_x * 6 + "px" }, { duration: 1000 });
}
```

I'd appreciate any suggestions to optimize my script; maybe there is even a totally different approach on how to go about this.

The whole .js file which stores the array for all the coordinates is available on my page; the file is about 2.9 MB, so you might consider pulling the .zip for local testing: metaroulette.com/files/31218.zip, metaroulette.com/files/31218.js

P.S. the php I use to create the spans:

```php
<?php
//$arbitrary_characters = array('a','b','c','ddsfsdfsdf','e','f','g','h','isdfsdffd','j','k','l','mfdgcvbcvbs','n','o','p','q','r','s','t','uasdfsdf','v','w','x','y','z','0','1','2','3','4','5','6','7','8','9',);
$arbitrary_characters = array('cat','table','cool','deloitte','askcue','what','more','less','adjective','nice','clinton','mars','jupiter','testversion','beta','hilarious','lolcatz','funny','obama','president','nice','what','misplaced','category','people','religion','global','skyscraper','new york','dubai','helsinki','volcano','iceland','peter','telephone','internet', 'dialer', 'cord', 'movie', 'party', 'chris', 'guitar', 'bentley', 'ford', 'ferrari', 'etc', 'de facto');

for ($i=0; $i<96; $i++) {
    $arb_digits = rand(0,45);
    $arbitrary_character = $arbitrary_characters[$arb_digits];
    //$arbitrary_character = ".";
    echo "<span id=\"wp_$i\" style=\"position:fixed; top:0px; left:0px; z-index:1; font-size:20px; cursor:pointer;cursor:hand;\" onmouseover=\"magnify_world_pixel('wp_$i');\" onmouseout=\"shrink_world_pixel('wp_$i');\" onClick=\"set_askcue_bar('', '$arbitrary_character')\">$arbitrary_character</span>\n";
}
?>
```
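For the JavaScript above, a hedged sketch of two common micro-optimizations for loops like spin(): cache the jQuery wrapper per element instead of constructing $(this[i]) repeatedly, and batch the style changes into one css() call so the browser recalculates styles once per element instead of three times. The math here is a placeholder standing in for the sphere projection above:

```javascript
$.fn.spin = function () {
    return this.each(function () {
        var $el = $(this);                          // cached wrapper, built once
        var fontSize = parseInt($el.css("font-size"), 10);
        var left = parseInt($el.css("left"), 10);
        var top = parseInt($el.css("top"), 10);
        // placeholder math; the real projection from the script goes here
        $el.css({
            fontSize: (fontSize + 1) + "px",
            left: (left + 1) + "px",
            top: (top + 1) + "px"
        });                                         // one style write per element
    });
};
```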

  • Sending emails in web applications

    - by Denise
Hi everyone, I'm looking for some opinions here. I'm building a web application which has the fairly standard functionality of:

1. Register for an account by filling out a form and submitting it.
2. Receive an email with a confirmation code link.
3. Click the link to confirm the new account and log in.

When you send emails from your web application, it's often (usually) the case that there will be some change to the persistence layer. For example:

1. A new user registers for an account on your site: the new user is created in the database and an email is sent to them with a confirmation link.
2. A user assigns a bug or issue to someone else: the issue is updated and email notifications are sent.

How you send these emails can be critical to the success of your application, and how you send them depends on how important it is that the intended recipient receives the email. We'll look at the following four strategies in relation to the case where the mail server is down, using example 1.

TRANSACTIONAL & SYNCHRONOUS
The sending of the email fails and the user is shown an error message saying that their account could not be created. The application will appear slow and unresponsive as it waits for the connection timeout. The account is not created in the database, because the transaction is rolled back.

TRANSACTIONAL & ASYNCHRONOUS
Transactional here means sending the email to a JMS queue, or saving it in a database table for another background process to pick up and send. The user account is created in the database, and the email is sent to a JMS queue for processing later. The transaction is successful and committed. The user is shown a message saying that their account was created and to check their email for a confirmation link. It's possible in this case that the email is never sent, due to some other error; however, the user is told that the email has been sent to them. There may be some delay in getting the email to the user if application support has to be called in to diagnose the email problem.

NON-TRANSACTIONAL & SYNCHRONOUS
The user is created in the database, but the application gets a timeout error when it tries to send the email with the confirmation link. The user is shown an error message saying that there was an error. The application is slow and unresponsive as it waits for the connection timeout. When the mail server comes back to life and the user tries to register again, they are told their account already exists but has not been confirmed, and they are given the option of having the email re-sent to them.

NON-TRANSACTIONAL & ASYNCHRONOUS
The only difference between this and transactional & asynchronous is that if there is an error sending the email to the JMS queue or saving it in the database, the user account is still created, but the email is never sent until the user attempts to register again.

What I'd like to know is: what have other people done here? Can you recommend any solutions other than the four I've mentioned above? What's a reasonable way of approaching this problem? I don't want to over-engineer a system that's dealing with the (hopefully) rare situation where my mail server goes down! The simplest thing to do is to code it synchronously, but are there any other pitfalls to that approach? I guess I'm wondering if there's a best practice; I couldn't find much out there by googling.
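To make the transactional & asynchronous option concrete, a hedged sketch of the database-table variant (an "outbox"); all table and column names are assumed. The email row commits or rolls back together with the user row, and a background job does the actual SMTP send:

```sql
BEGIN TRANSACTION;

INSERT INTO users (email, confirmation_code, confirmed)
VALUES ('user@example.com', 'abc123', 0);

-- queued in the same transaction: either both rows exist or neither does
INSERT INTO outbound_email (recipient, template, payload, sent)
VALUES ('user@example.com', 'confirm_account', 'abc123', 0);

COMMIT;

-- A background worker then polls and sends:
--   SELECT * FROM outbound_email WHERE sent = 0;
--   ... SMTP send ...
--   UPDATE outbound_email SET sent = 1 WHERE id = ...;  -- only after the
--   mail server accepts the message
```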

  • Vista/7: How to get glass color?

    - by Ian Boyd
How do you use DwmGetColorizationColor? The documentation says it returns two values:

- a 32-bit 0xAARRGGBB containing the color used for glass composition
- a boolean parameter that is true "if the color is an opaque blend" (whatever that means)

Here's a color that I like, a nice puke green. You can notice the color is greeny, and the translucent title bar (against a white background) shows the snot color very clearly. I try to get the color from Windows:

```
DwmGetColorizationColor(dwColorization, bIsOpaqueBlend);
```

And I get:

```
dwColorization: 0x0D0A0F04
bIsOpaqueBlend: false
```

According to the documentation this value is of the format AARRGGBB, and so contains:

```
AA: 0x0D (13)
RR: 0x0A (10)
GG: 0x0F (15)
BB: 0x04 (4)
```

This supposedly means that the color is (10, 15, 4), with an opacity of ~5.1%. But if you actually look at this RGB value, it's nowhere near my desired snot green. Here is (10, 15, 4) with zero opacity (the original color), and (10, 15, 4) with 5% opacity against a white/checkerboard background.

So the question is: how do you get the glass color in Windows Vista/7? I tried using DwmGetColorizationColor, but that doesn't work very well. A person with the same problem (but a nicer shiny picture to attract you squirrels) put it this way:

"So, it boils down to – DwmGetColorizationColor is completely unusable for applications attempting to apply the current color onto an opaque surface."

I love this guy's screenshots much better than mine. Using his screenshots as a template, I made up a few more sparklies. For the last two screenshots, the alpha-blended chip is a true partially transparent PNG, blending with your browser's background. Cool! (I'm such a geek)

Edit 2: Had to arrange them in rainbow color. (I'm such a geek)

Edit 3: Well now I of course have to add Yellow.

Undocumented/Unsupported/Fragile Workarounds

There is an undocumented export from DwmApi.dll at entry point 137, which we'll call DwmGetColorizationParameters:

```csharp
HRESULT GetColorizationParameters_Undocumented(out DWMCOLORIZATIONPARAMS params);

struct DWMCOLORIZATIONPARAMS
{
    public UInt32 ColorizationColor;
    public UInt32 ColorizationAfterglow;
    public UInt32 ColorizationColorBalance;
    public UInt32 ColorizationAfterglowBalance;
    public UInt32 ColorizationBlurBalance;
    public UInt32 ColorizationGlassReflectionIntensity;
    public UInt32 ColorizationOpaqueBlend;
}
```

We're interested in the first parameter: ColorizationColor. We can also read the value out of the registry:

```
HKEY_CURRENT_USER\Software\Microsoft\Windows\DWM
    ColorizationColor: REG_DWORD = 0x6614A600
```

So you pick your poison of creating appcompat issues. You can:

- rely on an undocumented API (which is bad, bad, bad, and can go away at any time)
- use an undocumented registry key (which is also bad, and can go away at any time)

See also:

- Is there a list of valid parameter combinations for GetThemeColor / Visual Styles API
- How does Windows change Aero Glass color?
- DWM - Colorization Color Handling
- Using DWMGetColorizationColor
- Retrieving Aero Glass base color for opaque surface rendering

I've been wanting to ask this question for over a year now. I always knew that it's nearly impossible to answer, and that the only way to get anyone to actually pay attention is to have colorful screenshots; developers are attracted to shiny things. But on the downside it means I had to put all kinds of work into making the lures.
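For completeness, a hedged sketch of calling the undocumented export described above by its ordinal via P/Invoke. Everything here is unsupported: the ordinal, the struct layout, and the behavior can change with any Windows update, which is exactly the appcompat poison the post warns about.

```csharp
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct DWMCOLORIZATIONPARAMS
{
    public uint ColorizationColor;
    public uint ColorizationAfterglow;
    public uint ColorizationColorBalance;
    public uint ColorizationAfterglowBalance;
    public uint ColorizationBlurBalance;
    public uint ColorizationGlassReflectionIntensity;
    public uint ColorizationOpaqueBlend;
}

static class Dwm
{
    // import by ordinal 137; PreserveSig=false turns a failed HRESULT
    // into an exception
    [DllImport("dwmapi.dll", EntryPoint = "#137", PreserveSig = false)]
    static extern void GetColorizationParameters(out DWMCOLORIZATIONPARAMS p);

    public static uint GetColorizationColor()
    {
        DWMCOLORIZATIONPARAMS p;
        GetColorizationParameters(out p);
        return p.ColorizationColor; // 0xAARRGGBB
    }
}
```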

  • Retrieving headers in all pages of Word

    - by udaya
Hi, I am exporting data from a php page to Word. There I get 'n' number of rows on each page. How do I set the maximum number of rows that a Word page can contain? I want only 20 rows on a single page.

This is the code I use to export the data to Word. I get the data in Word format, but the headers are not available on all the pages. For example, page 1 looks like:

```
slno  name   country  state      town
1     vivek  india    tamilnadu  trichy
2     uday   india    kerala     coimbatore
```

Like this I get many details, but on page 2 I don't get the headers like name, country, state and town. I still get the details, like "kumar america xxxx yyyy", but I want the result to be like:

```
slno  name   country     state  town
n     chris  newzealand  ghgg   jkgj
```

Can I get the headers? If that's not possible, is there any way to limit the number of details being displayed on each page?

```php
//EDIT YOUR MySQL Connection Info:
$DB_Server = "localhost"; //your MySQL Server
$DB_Username = "root";    //your MySQL User Name
$DB_Password = "";        //your MySQL Password
$DB_DBName = "cms";       //your MySQL Database Name
$DB_TBLName = "";         //your MySQL Table Name

$sql = "SELECT (SELECT COUNT(*) FROM tblentercountry t2 WHERE t2.dbName <= t1.dbName and t1.dbIsDelete='0') AS SLNO, dbName as Namee, t3.dbCountry as Country, t4.dbState as State, t5.dbTown as Town FROM tblentercountry t1 join tablecountry as t3, tablestate as t4, tabletown as t5 where t1.dbIsDelete='0' and t1.dbCountryId=t3.dbCountryId and t1.dbStateId=t4.dbStateId and t1.dbTownId=t5.dbTownId order by dbName limit 0,50";

//Optional: print out title to top of Excel or Word file with Timestamp
//for when file was generated:
//set $Use_Title = 1 to generate title, 0 not to use title
$Use_Title = 1;
//define date for title: EDIT this to create the time-format you need
//$now_date = DATE('m-d-Y H:i');
//define title for .doc or .xls file: EDIT this if you want
$title = "Country";

/*
Leave the connection info below as it is: just edit the above.
(Editing of code past this point recommended only for advanced users.)
*/
//create MySQL connection
$Connect = @MYSQL_CONNECT($DB_Server, $DB_Username, $DB_Password)
    or DIE("Couldn't connect to MySQL:" . MYSQL_ERROR() . "" . MYSQL_ERRNO());
//select database
$Db = @MYSQL_SELECT_DB($DB_DBName, $Connect)
    or DIE("Couldn't select database:" . MYSQL_ERROR() . "" . MYSQL_ERRNO());
//execute query
$result = @MYSQL_QUERY($sql, $Connect)
    or DIE("Couldn't execute query:" . MYSQL_ERROR() . "" . MYSQL_ERRNO());

//if this parameter is included ($w=1), file returned will be in word format ('.doc')
//if parameter is not included, file returned will be in excel format ('.xls')
IF (ISSET($w) && ($w==1)) {
    $file_type = "vnd.ms-excel";
    $file_ending = "xls";
} ELSE {
    $file_type = "msword";
    $file_ending = "doc";
}

//header info for browser: determines file type ('.doc' or '.xls')
HEADER("Content-Type: application/$file_type");
HEADER("Content-Disposition: attachment; filename=database_dump.$file_ending");
HEADER("Pragma: no-cache");
HEADER("Expires: 0");

/* Start of Formatting for Word or Excel */
IF (ISSET($w) && ($w==1)) { //check for $w again
    /* FORMATTING FOR WORD DOCUMENTS ('.doc') */
    //create title with timestamp:
    IF ($Use_Title == 1) {
        ECHO("$title\n\n");
    }
    //define separator (defines columns in excel & tabs in word)
    $sep = "\n"; //new line character
    WHILE($row = MYSQL_FETCH_ROW($result)) {
        //set_time_limit(60); // HaRa
        $schema_insert = "";
        FOR($j=0; $j<mysql_num_fields($result); $j++) {
            //define field names
            $field_name = MYSQL_FIELD_NAME($result,$j); //will show name of fields
            $schema_insert .= "$field_name:\t";
            IF(!ISSET($row[$j])) {
                $schema_insert .= "NULL".$sep;
            } ELSEIF ($row[$j] != "") {
                $schema_insert .= "$row[$j]".$sep;
            } ELSE {
                $schema_insert .= "".$sep;
            }
        }
        $schema_insert = STR_REPLACE($sep."$", "", $schema_insert);
        $schema_insert .= "\t";
        PRINT(TRIM($schema_insert));
        //end of each mysql row
        //creates line to separate data from each MySQL table row
        PRINT "\n----------------------------------------------------\n";
    }
} ELSE {
    /* FORMATTING FOR EXCEL DOCUMENTS ('.xls') */
    //create title with timestamp:
    IF ($Use_Title == 1) {
        ECHO("$title\n");
    }
    //define separator (defines columns in excel & tabs in word)
    $sep = "\t"; //tabbed character
    //start of printing column names as names of MySQL fields
    FOR ($i = 0; $i < MYSQL_NUM_FIELDS($result); $i++) {
        ECHO MYSQL_FIELD_NAME($result,$i) . "\t";
    }
    PRINT("\n");
    //end of printing column names
    //start while loop to get data
    WHILE($row = MYSQL_FETCH_ROW($result)) {
        //set_time_limit(60); // HaRa
        $schema_insert = "";
        FOR($j=0; $j<mysql_num_fields($result); $j++) {
            IF(!ISSET($row[$j]))
                $schema_insert .= "NULL".$sep;
            ELSEIF ($row[$j] != "")
                $schema_insert .= "$row[$j]".$sep;
            ELSE
                $schema_insert .= "".$sep;
        }
        $schema_insert = STR_REPLACE($sep."$", "", $schema_insert);
        //following fix suggested by Josue (thanks, Josue!)
        //this corrects output in excel when table fields contain \n or \r
        //these two characters are now replaced with a space
        $schema_insert = PREG_REPLACE("/\r\n|\n\r|\n|\r/", " ", $schema_insert);
        $schema_insert .= "\t";
        PRINT(TRIM($schema_insert));
        PRINT "\n";
    }
}
?>
```
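A hedged sketch of one way to get the requested effect: count rows as they are written and re-print the header line at the start of every block of 20, so each "page" of details carries its own headers ($result is the query result from the code above):

```php
<?php
$rows_per_page = 20;
$count = 0;

// build the header line once from the result's field names
$header = "";
for ($i = 0; $i < mysql_num_fields($result); $i++) {
    $header .= mysql_field_name($result, $i) . "\t";
}

while ($row = mysql_fetch_row($result)) {
    if ($count % $rows_per_page == 0) {
        print "\n" . trim($header) . "\n";  // headers for each new page of 20
    }
    print trim(implode("\t", $row)) . "\n";
    $count++;
}
?>
```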

  • Import Twitter XML into Visual Basic

    - by ct2k7
Hello, I'm trying to import an XML feed provided by Twitter into a readable format in Visual Basic. The XML looks like this:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<statuses type="array">
  <status>
    <created_at>Mon Jan 18 20:41:19 +0000 2010</created_at>
    <id>111111111</id>
    <text>thattext</text>
    <source><a href="http://www.seesmic.com/" rel="nofollow">Seesmic</a></source>
    <truncated>false</truncated>
    <in_reply_to_status_id>7916479948</in_reply_to_status_id>
    <in_reply_to_user_id>90978206</in_reply_to_user_id>
    <favorited>false</favorited>
    <in_reply_to_screen_name>radonsystems</in_reply_to_screen_name>
    <user>
      <id>20193170</id>
      <name>personname</name>
      <screen_name>screenname</screen_name>
      <location>loc</location>
      <description>desc</description>
      <profile_image_url>http://a3.twimg.com/profile_images/747012343/twitter_normal.png</profile_image_url>
      <url>myurl</url>
      <protected>false</protected>
      <followers_count>97</followers_count>
      <profile_background_color>ffffff</profile_background_color>
      <profile_text_color>333333</profile_text_color>
      <profile_link_color>0084B4</profile_link_color>
      <profile_sidebar_fill_color>ffffff</profile_sidebar_fill_color>
      <profile_sidebar_border_color>ababab</profile_sidebar_border_color>
      <friends_count>76</friends_count>
      <created_at>Thu Feb 05 21:54:24 +0000 2009</created_at>
      <favourites_count>1</favourites_count>
      <utc_offset>0</utc_offset>
      <time_zone>London</time_zone>
      <profile_background_image_url>http://a3.twimg.com/profile_background_images/76723999/754686.png</profile_background_image_url>
      <profile_background_tile>true</profile_background_tile>
      <notifications>false</notifications>
      <geo_enabled>true</geo_enabled>
      <verified>false</verified>
      <following>false</following>
      <statuses_count>782</statuses_count>
      <lang>en</lang>
      <contributors_enabled>false</contributors_enabled>
    </user>
    <geo />
    <coordinates />
    <place />
    <contributors />
  </status>
</statuses>
```

Now, I want to display it in a panel that automatically refreshes after a certain period; however, I only want to pick out certain bits of info from that XML, such as profile_image_url, text and created_at. You can guess how the data will be formatted: much like that presented in TweetDeck and other Twitter clients. I'm quite new to Visual Basic, so how could I do this? Thanks
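A hedged sketch of one way to pull out just those three elements with LINQ to XML, assuming the feed URL shown here; the periodic refresh could then be driven by a Timer that re-runs this and repopulates the panel:

```vbnet
Imports System.Xml.Linq

Public Sub LoadTimeline()
    ' load and parse the feed in one call
    Dim doc As XDocument = XDocument.Load("http://twitter.com/statuses/user_timeline.xml")

    For Each status As XElement In doc.Root.Elements("status")
        Dim text As String = status.Element("text").Value
        Dim createdAt As String = status.Element("created_at").Value
        Dim imageUrl As String = status.Element("user").Element("profile_image_url").Value

        ' add text / createdAt / imageUrl to the panel control here
    Next
End Sub
```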

  • Is the Unix Philosophy still relevant in the Web 2.0 world?

    - by David Titarenco
Introduction

Hello, let me give you some background before I begin. I started programming when I was 5 or 6 on my dad's PSION II (some primitive BASIC-like language), then I learned more and more, eventually inching my way up to C, C++, Java, PHP, JS, etc. I think I'm a pretty decent coder. I think most people would agree. I'm not a complete social recluse, but I do stuff like write a virtual machine for fun. I've never taken a computer course in college because I've been in and out for the past couple of years and have only been taking core classes; never having been particularly amazing at school, perhaps I'm missing some basic tenet that most learn in CS101. I'm currently reading Coders at Work, and this question is based on some ideas I read in there.

A Brief (Fictionalized) Example

So a certain sunny day I get an idea. I hire a designer and hammer away at some C/C++ code for a couple of months, soon thereafter releasing silvr.com, a website that transmutes lead into silver. Yep, I started my very own start-up and even gave it a clever web 2.0 name with a vowel missing. Mom and dad are proud. I come up with some numbers I should be seeing after 1, 2, 3, 6, 9, 12 months and set sail. Obviously, my transmuting server isn't perfect; sometimes it segfaults, sometimes it leaks memory. I fix it and keep truckin'. After all, gdb is my best friend.

Eventually, I'm at a position where a very small community of people are happily transmuting lead into silver on a semi-regular basis, but they want to let their friends on MySpace know how many grams of lead they transmuted today. And they want to post images of their lead and silver nuggets on flickr. I'm losing out on potential traffic unless I let them log in with their Yahoo, Google, and Facebook accounts. They want webcam support and live cock fighting, merry-go-rounds and Jabberwockies. All these things seem necessary.

The Aftermath

Of course, I have to re-write the transmuting server! After all, I've been losing money all these months. I need OAuth libraries and OpenID libraries, JSON support, and the only stable Jabberwocky API is for Java. C++ isn't even an option anymore. I'm just one guy! The Java binary just grows and grows, since I need some legacy Apache include for the JSON library, and some antiquated Sun dependency for OAuth support. Then I pick up a book like Coders at Work and read what people like jwz say about complexity... I think to myself: keep it simple, stupid. I like simple things. I've always loved the Unix Philosophy, but even after trying to keep the new server source modular and sleek, I loathe having to write one more line of code. It feels like I'm just piling crap on top of other crap. Maybe I'm naive thinking every piece of software can be simple and clever. Maybe it's just a phase... or is the Unix Philosophy basically dead when it comes to the current state of (web) development? I'm just kind of disheartened :(

    Read the article

  • Slow jQuery Animation

    - by Pyronaut
    I have this webpage: http://miloarc.pyrogenicmedia.com/

    At the moment it is nothing special. It has a few effects, but none that break the bank. If you mouse over a tile, it should change its opacity to give it a fade effect. This is done through the jQuery animate() function, not through CSS (I do this so it can fade, instead of being a straight change). Everything looks nice when the page loads, and the fades look perfect. In fact, if you drag your mouse all over the place, it gives you a "trail", almost like a snake.

    Anyway, my next problem is that you will see there is a box in the top left, which is going to tell you information about the tile you are hovering over. This seems to work fine. When you mouse over that information box, it switches its position (so that you can reach the tiles that were previously hidden underneath it). From my understanding, this is all working fine, and to the letter.

    However, after one move of the info box (one hover), viewing the page in Firefox turns sluggish. That is, after a successful move of the info box, the fade effects become very stuttered, and it doesn't pick up events as fast, meaning you can't draw a "trail" on the screen. Chrome doesn't have this issue; it seems to work fine no matter what. Safari also seems OK, although I do notice that if I move my mouse really fast it does jump a bit, but not as much as Firefox. Internet Explorer doesn't work at all. It seems to crash the browser... and there is an error with the rounded-corner plugin I'm using (not sure why...).

    All in all, I think whatever I'm doing inside my code must be heavily sluggish, but I don't know where it is happening. Here is the full code, but I would advise going to the link to view everything.

    <script type="text/javascript">
    $(document).ready(function() {
        var WindowWidth = $(window).width();
        var WindowMod = WindowWidth % 20;
        WindowWidth = WindowWidth - WindowMod;
        var BoxDimension = WindowWidth / 20;
        var BoxDimensionFixed = BoxDimension - 12;
        var dimensions = BoxDimensionFixed + 'px';
        $('.gamebutton').each(function(i) {
            $(this).css('width', dimensions);
            $(this).css('height', dimensions);
        });
        var OuterDivHeight = BoxDimension * 10;
        var TopMargin = ($(window).height() - OuterDivHeight) / 2;
        var OuterDivWidth = BoxDimension * 20;
        var LeftMargin = ($(window).width() - OuterDivWidth) / 2;
        $('#gamePort').css('margin-top', TopMargin).css('margin-left', LeftMargin).css('margin-right', LeftMargin);
        $('.gamebutton img').each(function(i) {
            $(this).hover(
                function () { $(this).animate({'opacity': 0.65}); },
                function () { $(this).animate({'opacity': 1}); }
            );
        });
        $('.rounded').corners();
        $('.gamebutton').each(function(i) {
            $(this).hover(function() {
                $('.gameTitlePopup').html($(this).attr('title'));
                FadeActive();
            });
        });
        function FadeActive() {
            $('.activeInfo').fadeIn('slow');
        }
        $('#gameInfoLeft').hover(function() {
            $(this).removeClass('activeInfo');
            $(this).fadeOut('slow', function() {
                $('#gameInfoRight').addClass('activeInfo');
                FadeActive();
            });
        });
        $('#gameInfoRight').hover(function() {
            $(this).removeClass('activeInfo');
            $(this).fadeOut('slow', function() {
                $('#gameInfoLeft').addClass('activeInfo');
                FadeActive();
            });
        });
    });
    </script>

    Thanks for any help :).
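    A guess at one likely culprit, sketched below: every mouseover queues a new animate() call, and once the browser falls behind, the queue only grows, which would match the stutter described. jQuery's stop() clears an element's animation queue before the next fade starts. The selectors are taken from the code above; whether this is the actual bottleneck would need profiling.

    // Clear any queued animations on this tile before starting a new fade,
    // so rapid mouse movement cannot pile up hundreds of pending effects.
    $('.gamebutton img').hover(
        function () { $(this).stop(true).animate({'opacity': 0.65}); },
        function () { $(this).stop(true).animate({'opacity': 1}); }
    );

    The same stop-before-animate pattern would apply to the #gameInfoLeft and #gameInfoRight fades.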

    Read the article

  • PHP: How do I loop through every XML file in a directory?

    - by celebritarian
    Hi! I'm building a simple application: a user interface to an online order system. Basically, the system is going to work like this: other companies upload their purchase orders to our FTP server. These orders are simple XML files (containing things like customer data, address information, ordered products and the quantities…). I've built a simple user interface in HTML5, jQuery and CSS, all powered by PHP. PHP reads the content of an order (using the built-in features of SimpleXML) and displays it on the page. So, it's a web app, supposed to always be running in a browser at the office. The PHP app will display the content of all orders, and every fifteen minutes or so it will check for new orders.

    How do I loop through all XML files in a directory? Right now, my app is able to read the content of a single XML file and display it in a nice way on the page. My current code looks like this:

    // pick a random order that I know exists in the Order directory:
    $xml_file = file_get_contents("Order/6366246.xml", FILE_TEXT);
    $xml = new SimpleXMLElement($xml_file);

    // start by echoing basic order information, like the order number:
    echo $xml->OrderHead->ShopPO;

    // more information about the order and the customer goes here…

    echo "<ul>";
    // loop through each order line, and echo all quantities and products:
    foreach ($xml->OrderLines->OrderLine as $orderline) {
        echo "<tr>\n".
             "<li>".$orderline->Quantity." st.</li>\n".
             "<li>".$orderline->SKU."</li>\n";
    }
    echo "</ul>";

    // more information about delivery options, address information etc. goes here…

    So, that's my code. Pretty simple. It only needs to do one thing, print out the content of all order files on the screen, so my colleagues and I can see each order, confirm it and deliver it. That's it. But right now, as you can see, I'm selecting one single order at a time, located in the Order directory. How do I loop through the entire Order directory, and read and display the content of each order (like above)? I'm stuck. I don't really know how you get all (XML) files in a directory and then do something with each file (like reading it and echoing out the data, as I want to).

    I'd really appreciate some help. I'm not very experienced with PHP/server-side programming, so if you could help me out here I'd be very grateful. Thanks a lot in advance!

    // Björn (celebritarian at me dot com)
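    A minimal sketch of one way to do this: glob() returns every path matching a pattern, and simplexml_load_file() parses each file directly, after which the echo logic from the single-file version applies unchanged.

    // Loop over every XML order file in the Order directory.
    foreach (glob("Order/*.xml") as $filename) {
        $xml = simplexml_load_file($filename);
        if ($xml === false) {
            continue; // skip files that are not valid XML
        }
        echo $xml->OrderHead->ShopPO;
        echo "<ul>";
        foreach ($xml->OrderLines->OrderLine as $orderline) {
            echo "<li>".$orderline->Quantity." st.</li>\n".
                 "<li>".$orderline->SKU."</li>\n";
        }
        echo "</ul>";
    }

    scandir() or DirectoryIterator would work just as well; glob() simply keeps the pattern matching in one call.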

    Read the article

  • Why are there so many man-made edge cases in IT, and is there any hope for simplification / unification?

    - by Hamish Grubijan
    This question is meant to generate discussion, and so it is marked as community wiki.

    My observation is that the field of information technology grows so rapidly and randomly that for many it takes a lot of time to learn the intricacies of tools that will be obsolete in just a short 3 years. If you look at the questions asked on StackOverflow, at least half of them stem from the fact that some language / tool / API / protocol was poorly designed, is backwards, and has gotchas. There are so many things which distract developers from converting English into machine code; instead they spend their time configuring stuff and gluing together things that do not really fit. How many times do you pick up somebody else's project (or someone picks up yours :) ) and realize that this program does not need half of the dialogs that it has, and that the logic can be simplified a great deal? But it had to be made and sold here before a better thing was made and sold elsewhere, and hence all this rush.

    I often wish that things would just slow down. I do not want Microsoft Windows to run on my car's computer, my watch, my table, my toaster oven, and my toilet seat. I'd rather have Windows that DOES NOT HAVE WINDOWS REGISTRY; I'd rather have Windows that allows two different programs to work on the same file at the same time, the way it works on Unix systems. Microsoft is just an example. I am looking forward to the day when I do not have to worry about Windows vs. Unix line breaks, when System32 actually means that the directory contains 32-bit binaries and not 64-bit ones, the day when DLL hell and manifest hell are no longer an issue, the day when it takes me a lot less than 3 months on a new job to learn the system. I do not mean learning the entire code base of a product (depending on its size, that can take a long time). I mean remembering which build-assisting scripts are written in Perl (and which version of it) and which ones are done through .bat files, or when I need to manually make every file in some directory writable before running a script, or else a critical step of a home-grown database maintenance tool will bomb and it will take 2 days to clean up. Makes me wonder if humans enslaved computers, or if it is the other way around.

    The key is that improving those things will not bring extra revenue, and hence those taking the time to fix crap like that are not "business focused". However, these imperfections irritate me immensely, particularly because my memory is limited: I can hold only a small portion of that useless knowledge of a system in my head at any given point in time. I must not be alone. Did you also happen to notice that a programmer can waste a lot of time on things that should have been a lot more straightforward? Is there hope? Will things get better/simpler in the future, or will there be a lot more IT crap floating around? I suppose I see diversity of tools, protocols, etc. as a bad thing. Thank you for participating.

    Read the article

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quickly. Here's a run-down of what I have, how I'm querying it, and how long it's taking:

    I'm running SQL Server 2008 Standard, so partitioning isn't currently an option. I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table:

    CREATE TABLE [dbo].[LogInvSearches_Daily](
        [ID] [bigint] IDENTITY(1,1) NOT NULL,
        [Inv_ID] [int] NOT NULL,
        [Site_ID] [int] NOT NULL,
        [LogCount] [int] NOT NULL,
        [LogDay] [smalldatetime] NOT NULL,
        CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED
        (
            [ID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
    ) ON [PRIMARY]

    This table has 132,000,000 records and is over 4 GB. A sample of 10 rows from the table:

    ID    Inv_ID   Site_ID  LogCount  LogDay
    ----  -------  -------  --------  -------------------
    1     486752   48       14        2009-07-21 00:00:00
    2     119314   51       16        2009-07-21 00:00:00
    3     313678   48       25        2009-07-21 00:00:00
    4     298863   0        1         2009-07-21 00:00:00
    5     119996   0        2         2009-07-21 00:00:00
    6     463777   534      7         2009-07-21 00:00:00
    7     339976   503      2         2009-07-21 00:00:00
    8     333501   570      4         2009-07-21 00:00:00
    9     453955   0        12        2009-07-21 00:00:00
    10    443291   0        4         2009-07-21 00:00:00

    (10 row(s) affected)

    I have the following index on LogInvSearches_Daily:

    CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay] ON [dbo].[LogInvSearches_Daily]
    (
        [LogDay] ASC
    )
    INCLUDE ([Inv_ID], [LogCount])
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    I need to pull inventory only for a specific account ID, and I have an index on the Inventory table as well. I'm using the following query to aggregate the data and give me the top 5 records. This query is currently taking 24 seconds to return the 5 rows:

    SELECT TOP 5
           Sum(LogCount) AS Views,
           DENSE_RANK() OVER (ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank,
           Inv_ID
    FROM   LogInvSearches_Daily D (NOLOCK)
    WHERE  LogDay > DateAdd(d, -30, getdate())
           AND EXISTS ( SELECT NULL
                        FROM   propertyControlCenter.dbo.Inventory (NOLOCK)
                        WHERE  Acct_ID = 18731
                               AND Inv_ID = D.Inv_ID )
    GROUP BY Inv_ID

    The execution plan:

    |--Top(TOP EXPRESSION:((5)))
         |--Sequence Project(DEFINE:([Expr1007]=dense_rank))
              |--Segment
                   |--Segment
                        |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC))
                             |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
                                  |--Sort(ORDER BY:([D].[Inv_ID] ASC))
                                       |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                                            |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010]))
                                            |    |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                                            |    |    |--Constant Scan
                                            |    |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD)
                                            |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA

    (13 row(s) affected)

    I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster and gives me essentially the same execution plan:

    --SET SHOWPLAN_TEXT ON;
    WITH getSearches AS (
        SELECT LogCount
               -- , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
               , D.Inv_ID
        FROM   LogInvSearches_Daily D (NOLOCK)
               INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK)
                       ON Acct_ID = 18731
                          AND I.Inv_ID = D.Inv_ID
        WHERE  LogDay > DateAdd(d, -30, getdate())
        -- GROUP BY Inv_ID
    )
    SELECT Sum(LogCount) AS Views,
           Inv_ID
    FROM   getSearches
    GROUP BY Inv_ID

    ...and its plan:

    |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
         |--Sort(ORDER BY:([D].[Inv_ID] ASC))
              |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                   |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007]))
                   |    |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                   |    |    |--Constant Scan
                   |    |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD)
                   |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD)

    (8 row(s) affected)

    So, given that I'm getting good Index Seeks in my execution plan, what can I do to get this running faster?

    Thanks,
    Dan
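    One idea to try, offered as a sketch rather than a verified fix: pre-aggregate the 30-day window by Inv_ID into a temp table, so the large index range is summed exactly once and the join against Inventory then touches one row per item. Table and column names come from the question; the temp-table name is an illustrative assumption.

    -- Aggregate the date range once, independent of any account filter.
    SELECT Inv_ID, SUM(LogCount) AS Views
    INTO   #Last30
    FROM   LogInvSearches_Daily WITH (NOLOCK)
    WHERE  LogDay > DATEADD(d, -30, GETDATE())
    GROUP BY Inv_ID;

    -- Then rank only the pre-aggregated rows for the account in question.
    SELECT TOP 5
           T.Views,
           DENSE_RANK() OVER (ORDER BY T.Views DESC, T.Inv_ID DESC) AS Rank,
           T.Inv_ID
    FROM   #Last30 T
           INNER JOIN propertyControlCenter.dbo.Inventory I WITH (NOLOCK)
                   ON I.Inv_ID = T.Inv_ID
    WHERE  I.Acct_ID = 18731
    ORDER BY T.Views DESC, T.Inv_ID DESC;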

    Read the article

  • Dynamically specify the type in C#

    - by Lirik
    I'm creating a custom DataSet and I'm under some constraints:

    I want the user to specify the type of the data which they want to store. I want to reduce type-casting because I think it will be VERY expensive, and I will use the data VERY frequently in my application. I don't know what type of data will be stored in the DataSet, so my initial idea was to make it a List of objects, but I suspect that the frequent use of the data and the need to type-cast will be very expensive.

    The basic idea is this:

    class DataSet : IDataSet
    {
        private Dictionary<string, List<Object>> _data;

        /// <summary>
        /// Constructs the data set given the user-specified labels.
        /// </summary>
        /// <param name="labels">
        /// The labels of each column in the data set.
        /// </param>
        public DataSet(List<string> labels)
        {
            _data = new Dictionary<string, List<object>>();

            foreach (string label in labels)
            {
                _data.Add(label, new List<object>());
            }
        }

        #region IDataSet Members

        public List<string> DataLabels
        {
            get { return _data.Keys.ToList(); }
        }

        public int Count
        {
            get { return _data[_data.Keys.First()].Count; }
        }

        public List<object> GetValues(string label)
        {
            return _data[label];
        }

        public object GetValue(string label, int index)
        {
            return _data[label][index];
        }

        public void InsertValue(string label, object value)
        {
            _data[label].Insert(0, value);
        }

        public void AddValue(string label, object value)
        {
            _data[label].Add(value);
        }

        #endregion
    }

    A concrete example of where the DataSet will be used is to store data obtained from a CSV file where the first row contains the labels. When the data is being loaded from the CSV file I'd like to specify the type rather than casting to object. The data could contain columns such as dates, numbers, strings, etc. Here is what it could look like:

    "Date","Song","Rating","AvgRating","User"
    "02/03/2010","Code Monkey",4.6,4.1,"joe"
    "05/27/2009","Code Monkey",1.2,4.5,"jill"

    The data will be used in a Machine Learning/Artificial Intelligence algorithm, so it is essential that I make the reading of data very fast. I want to eliminate type-casting as much as possible, since I can't afford to cast from 'object' to whatever data type is needed on every read. I've seen applications that allow the user to pick the specific data type for each item in the CSV file, so I'm trying to make a similar solution where a different type can be specified for each column. I want to create a generic solution so I don't have to return a List<object> but a List<DateTime> (if it's a DateTime column) or a List<double> (if it's a column of doubles). Is there any way this can be achieved? Perhaps my approach is wrong; is there a better approach to this problem?
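    A minimal sketch of one generic approach, assuming the caller can name each column's type when the column is created: keep every column as a typed List<T> behind the non-generic IList interface, so the only cast is one per column lookup rather than one per element. The class and member names here are illustrative, not an existing API.

    using System;
    using System.Collections;
    using System.Collections.Generic;

    class TypedDataSet
    {
        // Each column is really a List<T>; the non-generic IList lets all
        // columns share one dictionary regardless of element type.
        private readonly Dictionary<string, IList> _columns = new Dictionary<string, IList>();

        public void AddColumn<T>(string label)
        {
            _columns.Add(label, new List<T>());
        }

        public List<T> GetValues<T>(string label)
        {
            // One cast per column lookup, not one per element.
            return (List<T>)_columns[label];
        }

        public void AddValue<T>(string label, T value)
        {
            GetValues<T>(label).Add(value);
        }

        static void Main()
        {
            // Usage, matching the CSV sample above.
            var ds = new TypedDataSet();
            ds.AddColumn<DateTime>("Date");
            ds.AddColumn<double>("Rating");
            ds.AddValue("Date", DateTime.Parse("02/03/2010"));
            ds.AddValue("Rating", 4.6);
            List<double> ratings = ds.GetValues<double>("Rating");
            Console.WriteLine(ratings[0]);
        }
    }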

    Read the article

  • Retrieving dynamic post values

    - by Pankaj Khurana
    Hi, I am using TinyTable with some input fields for posting on a page. I want to retrieve the data which the user fills in for a particular instrument number. My code:

    <form name="frmDeposit" action="paymentdeposited.php" method="post">
    <table cellpadding="0" cellspacing="0" border="0" id="table" class="tinytable" style="width:700px;">
        <thead>
            <tr>
                <th><h3>Email</h3></th>
                <th><h3>Amount Paid</h3></th>
                <th><h3>Instrument Type</h3></th>
                <th><h3>Instrument No.</h3></th>
                <th><h3>Date Paid</h3></th>
                <th class="nosort"><h3>Date Deposited</h3></th>
                <th class="nosort"><h3>Bank Name</h3></th>
                <th class="nosort"><h3>Slip No.</h3></th>
                <th class="nosort"><h3>Submit</h3></th>
            </tr>
        </thead>
        <tbody>
        <?php foreach($paymentsdeposited as $paymentdeposited) { ?>
            <tr>
                <td><?php echo $paymentdeposited[email];?></td>
                <td><?php echo $paymentdeposited[amount];?></td>
                <td><?php echo $paymentdeposited[instrument];?></td>
                <td><?php echo $paymentdeposited[instrumentnumber];?></td>
                <td><?php echo $paymentdeposited[dated];?></td>
                <td><input type="text" name="txtDateDeposited_<?php echo $paymentdeposited[pk_paymentinstrumentid];?>" class="field date-pick"/></td>
                <td><input type="text" name="txtBankName_<?php echo $paymentdeposited[pk_paymentinstrumentid];?>" class="field"/></td>
                <td><input type="text" name="txtSlipNo_<?php echo $paymentdeposited[pk_paymentinstrumentid];?>" class="field"/><input type="hidden" name="txtPaymentInstrumentNo_<?php echo $paymentdeposited[pk_paymentinstrumentid];?>" value="<?php echo $paymentdeposited[pk_paymentinstrumentid];?>" class="field"/></td>
                <td><input type="submit" name="btnSubmit1" value="Submit"/></td>
            </tr>
        <?php } ?>
        </tbody>
    </table>

    The print_r command outputs:

    Array
    (
        [txtDateDeposited_57] => 2010-05-07
        [txtBankName_57] => pnb
        [txtSlipNo_57] => 121
        [txtPaymentInstrumentNo_57] => 57
        [btnSubmit1] => Submit
        [txtDateDeposited_51] =>
        [txtBankName_51] =>
        [txtSlipNo_51] =>
        [txtPaymentInstrumentNo_51] => 51
        [txtDateDeposited_52] =>
        [txtBankName_52] =>
        [txtSlipNo_52] =>
        [txtPaymentInstrumentNo_52] => 52
        [txtDateDeposited_45] =>
        [txtBankName_45] =>
        [txtSlipNo_45] =>
        [txtPaymentInstrumentNo_45] => 45
        [txtDateDeposited_47] =>
        [txtBankName_47] =>
        [txtSlipNo_47] =>
        [txtPaymentInstrumentNo_47] => 47
    )

    I want to retrieve the values for id 57, for which the user has entered values, but I am unable to construct the logic for retrieving them. I want to make it dynamic. Please help me with this. Thanks
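    A minimal sketch of the retrieval logic, keyed off the hidden txtPaymentInstrumentNo_* fields that already carry each id: extract the id with a regular expression, then index the matching fields directly. Field names match the form above; treating an empty deposited date as "not filled in" is an assumption about how rows like id 57 should be detected.

    // Walk the posted fields and process only the rows the user filled in.
    foreach ($_POST as $key => $value) {
        if (preg_match('/^txtPaymentInstrumentNo_(\d+)$/', $key, $m)) {
            $id = $m[1];
            $dateDeposited = $_POST["txtDateDeposited_$id"];
            $bankName      = $_POST["txtBankName_$id"];
            $slipNo        = $_POST["txtSlipNo_$id"];
            if ($dateDeposited !== '') {
                // Only rows with data entered, e.g. id 57 in the dump above.
                // ... update the database record for this instrument ...
            }
        }
    }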

    Read the article

  • How to Force an Exception from a Task to be Observed in a Continuation Task?

    - by Richard
    I have a task to perform an HttpWebRequest using

    Task<WebResponse>.Factory.FromAsync(req.BeginGetResponse, req.EndGetResponse)

    which can obviously fail with a WebException. To the caller I want to return a Task<HttpResult>, where HttpResult is a helper type to encapsulate the response (or not). In this case a 4xx or 5xx response is not an exception.

    Therefore I've attached two continuations to the request task: one with TaskContinuationOptions.OnlyOnRanToCompletion and the other with OnlyOnFaulted, and then wrapped the whole thing in a Task<HttpResult> to pick up the one result, whichever continuation completes. Each of the three child tasks (the request plus two continuations) is created with the AttachedToParent option.

    But when the caller waits on the returned outer task, an AggregateException is thrown if the request failed. I want to observe the WebException in the on-faulted continuation, so the client code can just look at the result. Adding a Wait in the on-fault continuation throws, but a try-catch around this doesn't help. Nor does looking at the Exception property (as the section "Observing Exceptions By Using the Task.Exception Property" hints here). I could install an UnobservedTaskException event handler to filter, but as the event offers no direct link to the faulted task, this would likely interact with code outside this part of the application; it's a case of a sledgehammer to crack a nut.

    Given an instance of a faulted Task<T>, is there any means of flagging it as "fault handled"?

    Simplified code:

    public static Task<HttpResult> Start(Uri url)
    {
        var webReq = BuildHttpWebRequest(url);
        var result = new HttpResult();

        var taskOuter = Task<HttpResult>.Factory.StartNew(() =>
        {
            var tRequest = Task<WebResponse>.Factory.FromAsync(
                webReq.BeginGetResponse,
                webReq.EndGetResponse,
                null,
                TaskCreationOptions.AttachedToParent);

            var tError = tRequest.ContinueWith<HttpResult>(
                t => HandleWebRequestError(t, result),
                TaskContinuationOptions.AttachedToParent
                | TaskContinuationOptions.OnlyOnFaulted);

            var tSuccess = tRequest.ContinueWith<HttpResult>(
                t => HandleWebRequestSuccess(t, result),
                TaskContinuationOptions.AttachedToParent
                | TaskContinuationOptions.OnlyOnRanToCompletion);

            return result;
        });

        return taskOuter;
    }

    with:

    private static HttpResult HandleWebRequestError(
        Task<WebResponse> respTask,
        HttpResult result)
    {
        Debug.Assert(respTask.Status == TaskStatus.Faulted);
        Debug.Assert(respTask.Exception.InnerException is WebException);

        // Try and observe the fault: doesn't help.
        try
        {
            respTask.Wait();
        }
        catch (AggregateException e)
        {
            Log("HandleWebRequestError: waiting on antecedent task threw inner: "
                + e.InnerException.Message);
        }

        // ... populate result with details of the failure for the client ...

        return result;
    }

    (HandleWebRequestSuccess will eventually spin off further tasks to get the content of the response...)

    The client should be able to wait on the task and then look at its result, without it throwing due to a fault that is expected and already handled.
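    For what it's worth, a sketch of one line of attack, based on the assumption that it is the AttachedToParent option on the request task (not the continuations) that propagates the fault to the outer task: create the request task detached, and mark its exception observed inside the on-faulted continuation with AggregateException.Handle. This is a hypothesis to test against the code above, not a confirmed fix.

    // Create the request task WITHOUT AttachedToParent, so its fault does not
    // bubble to the outer task on its own; only the continuations are attached.
    var tRequest = Task<WebResponse>.Factory.FromAsync(
        webReq.BeginGetResponse, webReq.EndGetResponse, null);

    var tError = tRequest.ContinueWith<HttpResult>(
        t =>
        {
            // Handle() marks each matching inner exception as observed.
            t.Exception.Handle(e => e is WebException);
            return HandleWebRequestError(t, result);
        },
        TaskContinuationOptions.AttachedToParent
        | TaskContinuationOptions.OnlyOnFaulted);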

    Read the article

  • How to find minimum weight with maximum cost in 0-1 Knapsack algorithm?

    - by Nitin9791
    I am trying to solve the SPOJ problem "Party Schedule". The problem statement is:

    You just received another bill which you cannot pay because you lack the money. Unfortunately, this is not the first time this has happened, and now you decide to investigate the cause of your constant monetary shortness. The reason is quite obvious: the lion's share of your money routinely disappears at the entrance of party localities. You make up your mind to solve the problem where it arises, namely at the parties themselves. You introduce a limit for your party budget and try to have the most possible fun with regard to this limit. You inquire beforehand about the entrance fee to each party and estimate how much fun you might have there. The list is readily compiled, but how do you actually pick the parties that give you the most fun and do not exceed your budget? Write a program which finds this optimal set of parties that offer the most fun. Keep in mind that your budget need not necessarily be reached exactly. Achieve the highest possible fun level, and do not spend more money than is absolutely necessary.

    Input: The first line of the input specifies your party budget and the number n of parties. The following n lines contain two numbers each. The first number indicates the entrance fee of each party. Parties cost between 5 and 25 francs. The second number indicates the amount of fun of each party, given as an integer number ranging from 0 to 10. The budget will not exceed 500 and there will be at most 100 parties. All numbers are separated by a single space. There are many test cases. Input ends with 0 0.

    Output: For each test case your program must output the sum of the entrance fees and the sum of all fun values of an optimal solution. Both numbers must be separated by a single space.

    Sample input:
    50 10
    12 3
    15 8
    16 9
    16 6
    10 2
    21 9
    18 4
    12 4
    17 8
    18 9

    50 10
    13 8
    19 10
    16 8
    12 9
    10 2
    12 8
    13 5
    15 5
    11 7
    16 2

    0 0

    Sample output:
    49 26
    48 32

    Now, I know that this is an advanced version of the 0/1 knapsack problem, where along with the maximum fun value we also have to find the minimum cost that achieves it within the given budget. I have used DP to solve it, but I still get a wrong answer on submission, while the given test cases come out fine. My code is:

    #include <iostream>
    #include <vector>
    using namespace std;

    typedef vector<int> vi;
    #define pb push_back
    #define FOR(i,n) for(int i=0;i<n;i++)

    int main()
    {
        //freopen("input.txt","r",stdin);
        while(1)
        {
            int W,n;
            cin>>W>>n;
            if(W==0 && n==0) break;
            int K[n+1][W+1];
            vi val,wt;
            FOR(i,n)
            {
                int x,y;
                cin>>x>>y;
                wt.pb(x);
                val.pb(y);
            }
            FOR(i,n+1)
            {
                FOR(w,W+1)
                {
                    if(i==0 || w==0)
                    {
                        K[i][w]=0;
                    }
                    else if (wt[i-1] <= w)
                    {
                        if(val[i-1] + K[i-1][w-wt[i-1]] >= K[i-1][w])
                        {
                            K[i][w]=val[i-1] + K[i-1][w-wt[i-1]];
                        }
                        else
                        {
                            K[i][w]=K[i-1][w];
                        }
                    }
                    else
                    {
                        K[i][w] = K[i-1][w];
                    }
                }
            }
            int a1=K[n][W], a2=W;
            // Scan the full budget range (j <= W, not j < W) for the cheapest
            // budget that still reaches the maximum fun value; since K[n][j]
            // is non-decreasing in j, the first match is the minimum cost.
            for(int j=0;j<=W;j++)
            {
                if(K[n][j]==a1)
                {
                    a2=j;
                    break;
                }
            }
            cout<<a2<<" "<<a1<<"\n";
        }
        return 0;
    }

    Could anyone suggest what I am missing?

    Read the article

< Previous Page | 230 231 232 233 234 235 236 237 238 239 240 241  | Next Page >