Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.

Page 755/1024

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me, since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend so the built-in LINQ to SQL is out; I also need to use production-level libraries so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay) so NHibernate/Castle Project are out. I would prefer - if at all possible - to avoid dishing out money but I am more concerned about implementing the right solution. To summarize, these are my requirements: Oracle backend, rapid development, (L)GPL-free, free. I'm reasonably happy with DataSets but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like to have a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions: Are there any problems I overlooked with this solution? Are there any other approaches you would recommend to convert DataSets to POCOs? Thanks in advance.
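    A minimal sketch of the kind of generic, reflection-based DataTable-to-POCO mapping described above. It assumes column names match property names and that Convert.ChangeType is good enough for the value conversions; both are assumptions for illustration, not part of the original question:

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        public static class DataTableMapper
        {
            // Maps each DataRow to a new T, copying columns whose names match
            // writable property names. Columns without a matching property are ignored.
            public static List<T> ToPocoList<T>(DataTable table) where T : new()
            {
                var props = typeof(T).GetProperties()
                    .Where(p => p.CanWrite && table.Columns.Contains(p.Name))
                    .ToList();

                var result = new List<T>();
                foreach (DataRow row in table.Rows)
                {
                    var item = new T();
                    foreach (var prop in props)
                    {
                        object value = row[prop.Name];
                        if (value == DBNull.Value) continue;   // leave the property at its default
                        var targetType = Nullable.GetUnderlyingType(prop.PropertyType)
                                         ?? prop.PropertyType;
                        prop.SetValue(item, Convert.ChangeType(value, targetType), null);
                    }
                    result.Add(item);
                }
                return result;
            }
        }

    The same method works for any POCO whose property names line up with the DataSet columns, so no per-class query code is needed; the cost is a little reflection overhead per row, which can be cached if it ever shows up in profiling.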

    Read the article

  • Does this sound like a stack overflow?

    - by Jordan S
    I think I might be having a stack overflow problem or something similar in my embedded firmware code. I am a new programmer and have never dealt with a stack overflow, so I'm not sure if that is what's happening or not. The firmware controls a device with a wheel that has magnets evenly spaced around it, and the board has a hall effect sensor that senses when a magnet is over it. My firmware operates the stepper and also counts steps while monitoring the magnet sensor in order to detect if the wheel has stalled. I am using a timer interrupt on my chip (8-bit, 8051 arch.) to set output ports to control the motor and for the stall detection. The stall detection code looks like this...
        // Enter ISR
        // Change the ports to the appropriate value for the next step
        // ...
        StallDetector++; // Increment the stall detector
        if (PosSensor != LastPosMagState) {
            StallDetector = 0;
            LastPosMagState = PosSensor;
        } else {
            if (PosSensor == ON) {
                if (StallDetector > (MagnetSize + 10)) {
                    HandleStallEvent();
                }
            } else if (PosSensor == OFF) {
                if (StallDetector > (GapSize + 10)) {
                    HandleStallEvent();
                }
            }
        }
    This code is called every time the ISR is triggered. PosSensor is the magnet sensor. MagnetSize is the number of stepper steps that it takes to get through the magnet field. GapSize is the number of steps between two magnets. So I want to detect if the wheel gets stuck either with the sensor over a magnet or not over a magnet. This works great for a long time, but then after a while the first stall event will occur because StallDetector > (MagnetSize + 10) - yet when I look at the value of StallDetector it is always around 220! This doesn't make sense, because MagnetSize is always around 35, so the stall event should have been triggered at around 46; somehow it got all the way up to 220. And I don't set the value of StallDetector anywhere else in my code. Do you have any advice on how I can track down the root of this problem? The ISR looks like this:
        void Timer3_ISR(void) interrupt 14
        {
            OperateStepper();   // This is the function shown above
            TMR3CN &= ~0x80;    // Clear Timer3 interrupt flag
        }
    HandleStallEvent just sets a few variables back to their default values so that it can attempt another move...
        #pragma save
        #pragma nooverlay
        void HandleStallEvent()
        {
            ///*
            PulseMotor = 0;     // Stop the wheel from moving
            SetMotorPower(0);   // Set motor power low
            MotorSpeed = LOW_SPEED;
            SetSpeedHz();
            ERROR_STATE = 2;
            DEVICE_IS_HOMED = FALSE;
            DEVICE_IS_HOMING = FALSE;
            DEVICE_IS_MOVING = FALSE;
            HOMING_STATE = 0;
            MOVING_STATE = 0;
            CURRENT_POSITION = 0;
            StallDetector = 0;
            return;
            //*/
        }
        #pragma restore

    Read the article

  • Evaluate or Show HTML using Javascript

    - by Anita
    Hello, I have this portion of code in Javascript:
        var description = document.createElement('P');
        description.className = 'rssBoxDescription';
        description.innerHTML = itemTokens[2];
        div.appendChild(description);
    I see that description gets the HTML value, but it is displayed as plain HTML markup rather than being rendered as HTML, as it should be. How can I convert the HTML value in description so that, when it is appended to the div element, it is shown as rendered HTML? At the moment it is not showing. The value is something like:
        <table><tr><td style="padding: 0 5px"><ahref="xttp://picasaweb.google.com/linktoimagepage"><imgtag style="border:1px solid #5C7FB9" s_rc="xttp://lh3.ggpht.com/imagepath" alt="image.jpg"/></a></td><td valign="top"><font color="#6B6B6B">Date: </font><font color="#333333">Jun 28, 2007 9:00 AM</font><br/><font color=\"#6B6B6B\">Number of Comments on Photo:</font><font color=\"#333333\">0</font><br/><p><ahref="xttp://picasaweb.google.com/linktoimage"><font color="#3964C2">View Photo</font></a></p></td></tr></table>
    Please help, Anita

    Read the article

  • Analyze log files from many languages using a single tool, and recommendations for logging frameworks

    - by Binary255
    We have a system built on lots of languages. The ones we are interested in logging, in order of priority, are: C/C++, PHP, C#, Bash, Java. Wish list: If it is possible, we would like logging to be achieved from the above languages in such a way that we may use a single log viewing tool for all of them. Ideally they would be in the same format, but failing that, in as few formats as possible and readable from as many log file viewers as possible. If possible, logging to a single log file or a set of log files would be nice, with a possibility to filter based on the source language that is being logged. We would like to copy the log files (or should we log to a database and copy that instead?) from multiple servers to a single location, so that we can analyze the log files from many servers at the same time (to see if any of our servers execute a certain piece of legacy code, for example). Being able to change the logging level at runtime would be nice. Thank you for reading! It's quite a complex problem; I hope someone has wrestled with it before and has some valuable information!

    Read the article

  • Fluently setting C# properties and chaining methods

    - by John Feminella
    I'm using .NET 3.5. We have some complex third-party classes which are automatically generated and out of my control, but which we must work with for testing purposes. I see my team doing a lot of deeply-nested property getting/setting in our test code, and it's getting pretty cumbersome. To remedy the problem, I'd like to make a fluent interface for setting properties on the various objects in the hierarchical tree. There are a large number of properties and classes in this third-party library, and it would be too tedious to map everything manually. My initial thought was to just use object initializers. Red, Blue, and Green are properties, and Mix() is a method that sets a fourth property Color to the closest RGB-safe color with that mixed color. Paints must be homogenized with Stir() before they can be used.
        Bucket b = new Bucket() {
            Paint = new Paint() { Red = 0.4, Blue = 0.2, Green = 0.1 }
        };
    That works to initialize the Paint, but I need to chain Mix() and other methods to it. Next attempt:
        Create<Bucket>(Create<Paint>()
            .SetRed(0.4)
            .SetBlue(0.2)
            .SetGreen(0.1)
            .Mix().Stir()
        )
    But that doesn't scale well, because I'd have to define a method for each property I want to set, and there are hundreds of different properties in all the classes. Also, C# doesn't have a way to dynamically define methods prior to C# 4, so I don't think I can hook into things to do this automatically in some way. Third attempt:
        Create<Bucket>(Create<Paint>().Set(p => {
            p.Red = 0.4;
            p.Blue = 0.2;
            p.Green = 0.1;
        }).Mix().Stir()
        )
    That doesn't look too bad, and seems like it'd be feasible. Is this an advisable approach? Is it possible to write a Set method that works this way? Or should I be pursuing an alternate strategy?
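    For what it's worth, a Set method in that style can be written as a simple generic extension method. This is only a sketch of the idea; the Paint/Bucket types come from the question, not from any real library, and whether Mix()/Stir() themselves chain is left open:

        using System;

        public static class FluentExtensions
        {
            // Runs an arbitrary block of property assignments against the target
            // and hands the same instance back so further calls can be chained.
            public static T Set<T>(this T target, Action<T> assign)
            {
                assign(target);
                return target;
            }
        }

        // Hypothetical usage against the generated third-party types:
        // Bucket b = new Bucket {
        //     Paint = new Paint().Set(p => { p.Red = 0.4; p.Blue = 0.2; p.Green = 0.1; })
        // };
        // b.Paint.Mix();
        // b.Paint.Stir();

    Because the lambda is just an Action<T>, this covers every property of every generated class without any per-property mapping code.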

    Read the article

  • How to make a wxFrame behave like a modal wxDialog object

    - by dagorym
    Is it possible to make a wxFrame object behave like a modal dialog box, in that the window creating the wxFrame object stops execution until the wxFrame object exits? I'm working on a small game and have run into the following problem. I have a main program window that hosts the main application (strategic portion). Occasionally, I need to transfer control to a second window for resolution of part of the game (tactical portion). While in the second window, I want the processing in the first window to stop and wait for completion of the work being done in the second window. Normally a modal dialog would do the trick, but I want the new window to have some functionality that I can't seem to get with a wxDialog, namely a status bar at the bottom and the ability to resize/maximize/minimize the window (this should be possible but doesn't work, see this question: How to get the minimize and maximize buttons to appear on a wxDialog object). As an additional note, the second window's functionality needs to stay completely decoupled from the primary window, as it will be spun off into a separate program eventually. Has anyone done this or have any suggestions?

    Read the article

  • _dl_runtime_resolve -- When do the shared objects get loaded in to memory?

    - by windfinder
    We have a message processing system with high performance demands. Recently we have noticed that the first message takes many times longer than subsequent messages. A bunch of transformation and message augmentation happens as this goes through our system, much of it done by way of external libs. I just profiled this issue (using callgrind), comparing a "run" of just one message with a "run" of many messages (providing a baseline of comparison). The main difference I see is the function "do_lookup_x" taking up a huge amount of time. Looking at the various calls to this function, they all seem to be called by the common function _dl_runtime_resolve. Not sure what this function does, but to me this looks like the first time the various shared libraries are being used, and they are then being loaded into memory by the dynamic linker. Is this a correct assumption? That the binary will not load the shared libraries into memory until they are being prepped for use, therefore we will see a massive slowdown on the first message, but on none of the subsequent ones? How do we go about avoiding this? Note: We operate on the microsecond scale.

    Read the article

  • Bash script to insert code from one file at a specific location in another file?

    - by Kurtosis
    I have a fileA with a snippet of code, and I need a script to insert that snippet into fileB on the line after a specific pattern. I'm trying to make the accepted answer in this thread work, but it's not, and is not giving an error so not sure why not: sed -e '/pattern/r text2insert' filewithpattern Any suggestions? pattern (insert snippet on line after): def boot { also tried escaped pattern but no luck: def\ boot\ { def\ boot\ \{ fileA snippet: LiftRules.htmlProperties.default.set((r: Req) => new Html5Properties(r.userAgent)) fileB (Boot.scala): package bootstrap.liftweb import net.liftweb._ import util._ import Helpers._ import common._ import http._ import sitemap._ import Loc._ /** * A class that's instantiated early and run. It allows the application * to modify lift's environment */ class Boot { def boot { // where to search snippet LiftRules.addToPackages("code") // Build SiteMap val entries = List( Menu.i("Home") / "index", // the simple way to declare a menu // more complex because this menu allows anything in the // /static path to be visible Menu(Loc("Static", Link(List("static"), true, "/static/index"), "Static Content"))) // set the sitemap. Note if you don't want access control for // each page, just comment this line out. LiftRules.setSiteMap(SiteMap(entries:_*)) // Use jQuery 1.4 LiftRules.jsArtifacts = net.liftweb.http.js.jquery.JQuery14Artifacts //Show the spinny image when an Ajax call starts LiftRules.ajaxStart = Full(() => LiftRules.jsArtifacts.show("ajax-loader").cmd) // Make the spinny image go away when it ends LiftRules.ajaxEnd = Full(() => LiftRules.jsArtifacts.hide("ajax-loader").cmd) // Force the request to be UTF-8 LiftRules.early.append(_.setCharacterEncoding("UTF-8")) } }

    Read the article

  • Should java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exception within a try block to handle the possible exception, it is good practice performance-wise to limit the size of the try block to contain only those operations that could throw exceptions. I'm not so sure that this is a sensible conclusion. Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at statements, but the comments clearly show which statements are responsible. The second one is much longer and complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block. What is the best practice for handling exceptions in a funcion where multiple exceptions could be thrown in its definition? This one contains all the processing code within the try block: void processFile(File f) { try { // construction of FileReader can throw FileNotFoundException BufferedReader in = new BufferedReader(new FileReader(f)); // call of readLine can throw IOException String line; while ((line = in.readLine()) != null) { process(line); } } catch (FileNotFoundException ex) { handle(ex); } catch (IOException ex) { handle(ex); } } This one contains only the methods that throw exceptions within try blocks: void processFile(File f) { FileReader reader; try { reader = new FileReader(f); } catch (FileNotFoundException ex) { handle(ex); return; } BufferedReader in = new BufferedReader(reader); String line; while (true) { try { line = in.readLine(); } catch (IOException ex) { handle(ex); break; } if (line == null) { break; } process(line); } }

    Read the article

  • jQuery hold form submit until "continue" button pressed

    - by Seán McCabe
    I am trying to submit a form, which I have had working, but have now modified it to include a modal jQuery UI box, so that it won't submit until the user presses "continue". I've had various problems with this, including getting the form to hold until that button is pressed, but I think I have found a solution to that, but implementing it, I am getting a SyntaxError which I can't find the source of. With the help of kevin B managed to find the answer was the form was submitting, but the returned JSON response wasn't quite formatted right. The response was that the form wasn't being submitted, so that problem is still occurring. So updated the code with the provided feedback, now need to find out why the form isnt submitting. I know its something to do with the 2nd function isnt recognising the submit button has been pressed, so need to know how to submit that form data without the form needing to be submitted again. Below is the new code: function submitData() { $("#submitProvData").submit(function(event) { event.preventDefault(); var gTotal, sTotal, dfd; var dfd = new $.Deferred(); $('html,body').animate({ scrollTop: 0 }, 'fast'); $("#submitProvData input").css("border", "1px solid #aaaaaa"); $("#submitProvData input[readonly='readonly']").css("border", "none"); sTotal = $('#summaryTotal').val(); gTotal = $('#gptotal').val(); if(gTotal !== 'sTotal'){ $("#newsupinvbox").append('<div id="newsupinvdiagbox" title="Warning - Totals do not match" class="hidden"><p>Press "Continue", to submit the invoice flagged for attention.</p> <br /><p class="italic">or</p><br /> <p>Press "Correct" to correct the discrepancy.</p></div>') //CREATE DIV //SET $("#newsupinvdiagbox").dialog({ resizable: false, autoOpen:false, modal: true, draggable: false, width:380, height:240, closeOnEscape: false, position: ['center',20], buttons: { 'Continue': function() { $(this).dialog('close'); reData(); }, // end continue button 'Correct': function() { $(this).dialog('close'); return false; } //end cancel button }//end buttons });//end dialog $('#newsupinvdiagbox').dialog('open'); } return false; }); } function reData() { console.log('submitted'); $("#submitProvData").submit(function(resubmit){ console.log('form submit'); var formData; formData = new FormData($(this)[0]); $.ajax({ type: "POST", url: "functions/invoicing_upload_provider.php", data: formData, async: false, success: function(result) { $.each($.parseJSON(result), function(item, value){ if(item == 'Success'){ $('#newsupinv_window_message_success_mes').html('The provider invoice was uploaded successfully.'); $('#newsupinv_window_message_success').fadeIn(300, function (){ reset(); }).delay(2500).fadeOut(700); } else if(item == 'Error'){ $('#newsupinv_window_message_error_mes').html(value); $('#newsupinv_window_message_error').fadeIn(300).delay(3000).fadeOut(700); } else if(item == 'Warning'){ $('#newsupinv_window_message_warning_mes').html(value); $('#newsupinv_window_message_warning').fadeIn(300, function (){ reset(); }).delay(2500).fadeOut(700); } }); }, error: function() { $('#newsupinv_window_message_error_mes').html("An error occured, the form was not submitted"); $('#newsupinv_window_message_error').fadeIn(300); $('#newsupinv_window_message_error').delay(3000).fadeOut(700); }, cache: false, contentType: false, processData: false }); }); }

    Read the article

  • File based caching under PHP

    - by azatoth
    I've been using http://code.google.com/p/phpbrowscap/ for a project, and it usually works nicely. But a few times its cache, which is plain PHP files (see http://code.google.com/p/phpbrowscap/source/browse/trunk/browscap/Browscap.php#372 et al.), has been "zeroed", i.e. the whole cache file has become a large blob of NULLs. Instead of trying to find out why the files become NULL, I thought perhaps it might be better to change the caching strategy to something more resilient. So I wonder if you have any good ideas about what would be a good solution; I've been looking at http://www.jongales.com/blog/2009/02/18/simple-file-based-php-cache-class/ and http://www.phpclasses.org/package/313-PHP-Cache-arbitrary-data-in-files-.html and I also thought of just saving a serialized array to the file instead of pure PHP as it does now; but I'm uncertain which approach I should target here. I'm grateful for any insight into this area of technology, as I know it's complex from a performance point of view.

    Read the article

  • Cannot set property X of undefined - programmatically created element

    - by Dave
    I am working on a class to create a fairly complex custom element. I want to be able to define element attributes (mainly CSS) as I am creating the elements. I came up with the below function to quickly set attrs for me given an input array: // Sample array structure of apply_settings //var _button_div = { // syle : { // position: 'absolute', // padding : '2px', // right : 0, // top : 0 // } //}; function __apply_settings(__el,apply_settings){ try{ for(settingKey in apply_settings){ __setting = apply_settings[settingKey]; if(typeof(__setting)=='string'){ __el[settingKey] = __setting; }else{ for(i in __setting){ __el[settingKey][i] = apply_settings[settingKey][i]; } } } }catch(ex){ alert(ex + " " + __el); } } The function works fine except here: function __draw_top(){ var containerDiv = document.createElement('div'); var buttonDiv = document.createElement('div'); __apply_settings(containerDiv,this.settings.top_bar); if(global_settings.show_x) buttonDiv.appendChild(__create_img(global_settings.images.close)); containerDiv.appendChild(buttonDiv); __apply_settings(buttonDiv,this.settings.button_div); return containerDiv; } The specific section that is failing is __apply_settings(buttonDiv,this.settings.button_div); with the error "Cannot set property 'position' of undefined". So naturally, I am assuming that buttonDiv is undefined.. I put alert(ex + " " + __el); in __apply_settings() to verify what I was working with. Surprisingly enough, the element is a div. Is there any reason why this function would be working for containerDiv and not buttonDiv? Any help would be greatly appreciated :) {EDIT} http://jsfiddle.net/Us8Zw/2/

    Read the article

  • What is the cleanest way to use anonymous functions?

    - by Fletcher Moore
    I've started to use Javascript a lot more, and as a result I am writing things complex enough that organization is becoming a concern. However, this question applies to any language that allows you to nest functions. Essentially, when should you use an anonymous function over a named global or inner function? At first I thought it was the coolest feature ever, but I think I am going overboard. Here's an example I wrote recently, omitting all the variable declarations and conditionals so that you can see the structure. function printStream() { return fold(function (elem, acc) { ... var comments = (function () { return fold(function (comment, out) { ... return out + ...; }, '', elem.comments); return acc + ... + comments; }, '', data.stream); } I realize though that, while (I think) there's some kind of beauty in being so compact, it probably isn't a good idea to do this, in the same way you wouldn't want a ton of code in a double for loop.

    Read the article

  • Saving "heavy" image to PDF in MATLAB - rendering problem

    - by yuk
    I generate a figure in MATLAB with a large number of points (100000+) and want to save it into a PDF file. With the zbuffer or painters renderer I get a very large file that opens slowly (over 4 MB) - all points are in vector format. Using the OpenGL renderer rasterizes the figure in the PDF, which is OK for the plot but not good for the text labels. The file size is about 150 KB. Try this simplified code, for example:
        x=linspace(1,10,100000);
        y=sin(x)+randn(size(x));
        plot(x,y,'.')
        set(gcf,'Renderer','zbuffer')
        print -dpdf -r300 testpdf_zb
        set(gcf,'Renderer','painters')
        print -dpdf -r300 testpdf_pa
        set(gcf,'Renderer','opengl')
        print -dpdf -r300 testpdf_op
    The actual figure is much more complex, with several axes and different types of plots. Is there a way to rasterize the figure but keep the text labels as vectors? Another problem with OpenGL is that it does not work in terminal mode (-nosplash -nodesktop) under Mac OSX; it looks like OpenGL is not supported, and I have to use terminal mode for automation. The MATLAB version I run is 2007b, on Mac OSX Server 10.4.

    Read the article

  • Recommend me an architecture for this Facebook application

    - by andybaird
    Firstly, this question is subjective. There is not a right answer for this question and it really depends on what works for you. I'm hoping to use this thread as a breeding ground for ideas. I hope this is acceptable in this medium. I'm working on building a Facebook app that will be replacing an already popular app that gets ~50k hits a day. The original app is using a very typical LAMP setup with help from some Zend libraries for database layer extraction. For the most part the app worked well, except to solve a lot of issues I ended up fragmenting tables to speed things up. As a result, I couldn't do a lot of things with the app that I wanted to (namely any processing using aggregate data that needed to be returned quickly) So I'm starting to design plans for the next version of this application, and I have a whole bunch of new and cool features that I know would choke my current setup. I'm looking for technological recommendations of data storage methods that scale well. The database does not necessarily need to be relational, simple key/value storage would suffice (although at present time I know little to nothing about KV stores) What's your recommendation? How would you tackle this? I'd like to take a completely free approach to this -- although I am most familiar and comfortable using PHP, I want to leave all technical options open.

    Read the article

  • Php template caching design

    - by Thomas
    Hello to all, I want to include caching in my app design - caching templates, for starters. The design I have used so far is very modular. I have created an ORM implementation for all my tables and each table is represented by the corresponding class. All the requests are handled by one controller which routes them to the appropriate webmethod functions. I am using a template class for handling UI parts. What I have in mind for caching includes the implementation of a separate Cache class for handling caching, with the flexibility to store either in files, APC or memcache. Right now I am testing with file caching.
    Some thoughts: Should I include the logic of checking for cached versions in the Template class, or in the webmethods which handle the incoming requests and which eventually call the Template class? In the first case, things are pretty simple, as I will not have to change anything more than passing the template class an extra argument (whether to load from cache or not). In the second case, however, I am thinking of checking for a cached version immediately in the webmethod and, if found, returning it. This will save all the processing done until the logic reaches the template (first-case scenario). Both scenarios, however, rely on an accurate mechanism for invalidating caches, which brings us to invalidating caches.
    As I see it (and you can add your input freely) a cached template file becomes invalid if:
    a. the expiration set is reached.
    b. the template file itself is updated (i.e. by the developer when adding a new line)
    c. the webmethod that handles the request changes (i.e. the developer adds/deletes something in the code)
    d. content coming from the db and ending up in the template file is modified
    I am thinking of storing a JSON-encoded array inside the cached file. The first value will be the expiration timestamp of the cache. The second value will be the modification time of the PHP file with the code handling the request (to cope with option c above). The third will be the content itself. The validation process I am considering, according to the above scenarios, is:
    a. If the expiration of the cached file (stored in the array) is reached, delete the cache file
    b. if the cached file's mod time is smaller than the template skeleton file's mod time, delete the cached file
    c. if the mod time of the PHP file is greater than the one stored in the cache, delete the cached file.
    d. This is tricky. In the ORM implementation I have added event handlers (which fire when adding, updating or deleting objects). I could delete the cache file every time an object that provides content to the template is modified. The problem is how to keep track of which cached files correspond to each schema object. Take this example: a user has his short profile page and a full profile page (2 templates). These templates can be cached. Now, every time the user modifies his profile, the event handler would need to know which templates or cached files correspond to the User, so that these files can be deleted. I could store them in the db, but I am looking for a better approach.

    Read the article

  • Remove Empty Attributes from XML

    - by er4z0r
    Hi, I have a buggy XML that contains empty attributes, and I have a parser that coughs on empty attributes. I have no control over the generation of the XML nor over the parser that coughs on empty attrs. So what I want to do is a pre-processing step that simply removes all empty attributes. I have managed to find the empty attributes, but now I don't know how to remove them:
        XPathFactory xpf = XPathFactory.newInstance();
        XPath xpath = xpf.newXPath();
        XPathExpression expr = xpath.compile("//@*");
        Object result = expr.evaluate(d, XPathConstants.NODESET);
        if (result != null) {
            NodeList nodes = (NodeList) result;
            for (int node = 0; node < nodes.getLength(); node++) {
                Node n = nodes.item(node);
                if (isEmpty(n.getTextContent())) {
                    this.log.warn("Found empty attribute declaration " + n.toString());
                    NamedNodeMap parentAttrs = n.getParentNode().getAttributes();
                    parentAttrs.removeNamedItem(n.getNodeName());
                }
            }
        }
    This code gives me an NPE when accessing n.getParentNode().getAttributes(). But how can I remove the empty attribute from an element when I cannot access the element?

    Read the article

  • LINQ to SQL - How to efficiently do either an AND or an OR search for multiple criteria

    - by Dan Diplo
    I have an ASP.NET MVC site (which uses LINQ to SQL for the ORM) and a situation where a client wants a search facility against a bespoke database whereby they can choose to either do an 'AND' search (all criteria match) or an 'OR' search (any criteria match). The query is quite complex and long, and I want to know if there is a simple way I can make it do both without having to create and maintain two different versions of the query. For instance, the current 'AND' search looks something like this (but this is a much simplified version):
        private IQueryable<SampleListDto> GetSampleSearchQuery(SamplesCriteria criteria)
        {
            var results = from r in Table
                          where (r.Id == criteria.SampleId) &&
                                (r.Status.SampleStatusId == criteria.SampleStatusId) &&
                                (r.Job.JobNumber.StartsWith(criteria.JobNumber)) &&
                                (r.Description.Contains(criteria.Description))
                          select r;
            return results;
        }
    I could copy this and replace the && with || operators to get the 'OR' version, but feel there must be a better way of achieving this. Does anybody have any suggestions how this can be achieved in an efficient and flexible way that is easy to maintain? Thanks.
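    One way to keep a single query is to build the criteria as a list of expression-tree predicates and fold them together with either AndAlso or OrElse. The sketch below is illustrative only: it assumes LINQ to SQL can translate invoked expressions (Entity Framework, for instance, would need them expanded first), and the Combine helper and MatchAll flag are made-up names:

        using System;
        using System.Collections.Generic;
        using System.Linq.Expressions;

        public static class PredicateCombiner
        {
            // Folds a set of predicates into one, using AND when matchAll is true
            // and OR otherwise. An empty set yields "always true".
            public static Expression<Func<T, bool>> Combine<T>(
                IEnumerable<Expression<Func<T, bool>>> predicates, bool matchAll)
            {
                var param = Expression.Parameter(typeof(T), "r");
                Expression body = null;
                foreach (var predicate in predicates)
                {
                    Expression invoked = Expression.Invoke(predicate, param);
                    if (body == null)
                        body = invoked;
                    else if (matchAll)
                        body = Expression.AndAlso(body, invoked);
                    else
                        body = Expression.OrElse(body, invoked);
                }
                return Expression.Lambda<Func<T, bool>>(
                    body ?? Expression.Constant(true), param);
            }
        }

    The search method would then add one lambda per non-empty criterion (for example r => r.Description.Contains(criteria.Description)) to a list and finish with Table.Where(PredicateCombiner.Combine(list, criteria.MatchAll)), where MatchAll is a hypothetical flag on the criteria object.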

    Read the article

  • Get Json and output to text file undecoded

    - by Gary
    Hi, I want to fetch a JSON script and write it to a txt file undecoded, exactly how it was originally. I do have a script that I use and am modifying, but I'm unsure which commands to use. This script decodes, which is what I want to avoid.
        //Get Age
        list($bstat,$bage,$bdata) = explode("\t",check_file('./advise/roadsnow.txt',60*2+15));
        //Test Age
        if ( $bage > $CacheMaxAge ) {
            //echo "The if statement evaluated to true so get new file and reset $bage";
            $bage="0";
            $file = file_get_contents('http://somesite.jsontxt');
            $out = (json_decode($file));
            $report = wordwrap($out->mainText, 100, "\n");
            //$valid = $out->validTo;
            //write the data to a text file called roadsnow.txt
            $myFile = "./advise/roadsnow.txt";
            $fh = fopen($myFile, 'w') or die("can't open file");
            $stringData = $report;
            fwrite($fh, $stringData);
        } else {
            //echo the test evaluated to false; file is not stale so read local cache
            //print "we are at the read local cache";
            $stringData = file_get_contents("./advise/roadsnow.txt");
        }
        // if/else is done carry on with processing
        //Format file
        $data = $stringData

    Read the article

  • Integrating Magento with a simple static website.

    - by ExtraLean
    Magento is an awesomely powerful ecommerce platform. That said, it is also very complex, and I'd like to know if there is a relatively simple way to utilize Magento as our mISV site's backend to fulfill orders without actually "using" Magento's framework to build the site, run the site, etc. In other words, I don't want to use the built-in CMS, etc., since we have a static website already built. I'd just like our Buy Now buttons to utilize the checkout stuff, and would like to be able to use the back-end part to keep track of orders etc. I was able to accomplish this "fairly" easily with osCommerce, but Magento is proving to be a little more difficult to wrap my head around, since I've only been looking at it for a few days now. I found another person asking this same exact question on the Magento wiki (along with several others in the forum), and none of them ever received a reply for some reason. I noticed that there are many Magento experts on Stack Overflow, so I thought I'd give it a go here. This is an example of one question asked by someone on their wiki, and it captures the essence of what I'm trying to accomplish: Hi, as far as I understand, all shopping cart/eCommerce solutions I see are full-featured PHP-driven web sites. This means that all the pages the user interacts with are server-generated, and thus the experience is tied to the Magento framework/workflow. I'd like to integrate bits and pieces of eCommerce/shopping cart in my existing website. Effectively, I'd like to have: 1) on a product information page, a "buy now/add to cart" button that adds to a cart 2) on every page, a view cart/checkout option 3) on a checkout page, with additional content already in place, having the Magento "checkout" block integrated in the page (and not the entire page generated from Magento). Have any of you done this with Magento? This is for a simple one-product website so any advice you could share would be highly appreciated.

    Read the article

  • stripping default.aspx and //www from the url

    - by b0x0rz
    the code to strip /Default.aspx and //www is not working (as expected): protected void Application_BeginRequest(object sender, EventArgs e) { HttpContext context = HttpContext.Current; string url = context.Request.RawUrl.ToString(); bool doRedirect = false; // remove > default.aspx if (url.EndsWith("/default.aspx", StringComparison.OrdinalIgnoreCase)) { url = url.Substring(0, url.Length - 12); doRedirect = true; } // remove > www if (url.Contains("//www")) { url = url.Replace("//www", "//"); doRedirect = true; } // redirect if necessary if (doRedirect) { context.Response.Redirect(url); } } it works usually, but when submitting a form (sign in for example) the code above INTERCEPTS the request and then does a redirect to the same page. example: try to arrive at page: ~/SignIn/Default.aspx the requests gets intercepted and fixed to: ~/SignIn/ fill the form, click sign in the current page url goes from: ~/SignIn/ to ~/SignIn/Default.aspx and gets fixed again, thus voiding the processing of the method SignIn (which would have redirected the browser to /SignIn/Success/) and the page is reloaded as ~/SignIn/ and no sign in was done. please help. not sure what / how to fix here. the main REQUIREMENT here is: remove /Default.aspx and //www from url's thnx
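    A possible fix, sketched under the assumption that the canonicalization should only apply to GET requests (so the POST to ~/SignIn/Default.aspx is left alone and the sign-in method still runs); the 301 status and the host-based www check are suggestions, not part of the original code:

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            HttpContext context = HttpContext.Current;

            // Never rewrite a POST: redirecting it would discard the submitted
            // form data, which is exactly the sign-in problem described above.
            if (!string.Equals(context.Request.HttpMethod, "GET", StringComparison.OrdinalIgnoreCase))
                return;

            string url = context.Request.RawUrl;
            bool doRedirect = false;

            // remove > default.aspx
            if (url.EndsWith("/default.aspx", StringComparison.OrdinalIgnoreCase))
            {
                url = url.Substring(0, url.Length - "default.aspx".Length);
                doRedirect = true;
            }

            // RawUrl only contains the path and query, so the www check has to
            // look at the host instead (assumed: a single canonical host name).
            if (context.Request.Url.Host.StartsWith("www.", StringComparison.OrdinalIgnoreCase))
            {
                url = "http://" + context.Request.Url.Host.Substring(4) + url;
                doRedirect = true;
            }

            if (doRedirect)
            {
                context.Response.StatusCode = 301;          // permanent, cache-friendly
                context.Response.AddHeader("Location", url);
                context.Response.End();
            }
        }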

    Read the article

  • Creating parallel selenium tests in C# and using Nunit as the runner application

    - by damianmartin
    I am writing a new test suite for the company to test a very complex ASP.NET application which is heavily AJAX driven. We have decided to use Selenium (Grid & Remote Control) and NUnit to run these tests. The actual tests are created dynamically at run time from a spreadsheet. Each column in an Excel spreadsheet relates to a new test, and each row relates to a Selenium command (but in plain English; the dll converts this into Selenium code). The problem I have at the moment is getting the tests running in parallel. There will be 1000+ tests, so it is too time consuming to have 1 test run at a time. Selenium Grid and the Selenium Remote Control(s) are set up correctly, because I can run their demo. From what I have read I need to use Punit, but I cannot find any documentation on what a test in Punit should look like. NUnit tests are [SetUp] [TearDown] [Test]. Can anyone point me in the right direction? Thanks in advance.
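    In the meantime, a rough sketch of one way to fan tests out against the grid without a special runner: each worker thread drives its own Selenium RC session, so several spreadsheet-defined tests execute at once. The hub host/port, browser string and the idea of replaying spreadsheet commands are assumptions for illustration:

        using System.Collections.Generic;
        using System.Threading;
        using Selenium;

        public class ParallelSpreadsheetRunner
        {
            // Runs one Selenium RC session per test definition, all in parallel.
            public void RunAll(IEnumerable<string> testStartUrls)
            {
                var threads = new List<Thread>();
                foreach (string startUrl in testStartUrls)
                {
                    string url = startUrl;   // avoid capturing the loop variable
                    var worker = new Thread(delegate()
                    {
                        ISelenium selenium = new DefaultSelenium("grid-hub", 4444, "*firefox", url);
                        selenium.Start();
                        try
                        {
                            selenium.Open("/");
                            // ... replay the commands read from the spreadsheet ...
                        }
                        finally
                        {
                            selenium.Stop();
                        }
                    });
                    threads.Add(worker);
                    worker.Start();
                }
                foreach (var t in threads) t.Join();
            }
        }

    This bypasses NUnit's one-test-at-a-time execution entirely, so pass/fail reporting would have to be collected by the runner itself.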

    Read the article

  • Does a CS PhD Help for Software Engineering Career?

    - by SiLent SoNG
    I would like to seek advice on whether or not to take a PhD offer from a good university. My only concern is that the PhD will take at least 4 years' commitment. During that period I won't have a good monetary income. I am also concerned whether the PhD will help my future career development. My career goal is software engineering only. Some of the PhD info: The PhD is CS related. The research area is Information Retrieval, Machine Learning, and Natural Language Processing. More specifically, the research topic is Deep Web search. Some of my background: I worked at Oracle for 3 years in database development after obtaining a CS degree from a good university. Last year I received an email from a professor describing an interesting project, and thereafter I was lured into his research team. The team consists of 4 PhD students; those students have little or no industry experience and their coding skills are really, really bad. By saying bad I mean they do not know some common patterns, and they do not know the pitfalls of the programming languages or the idioms for doing things right. I guess a commitment of at least 4 years is worthy of serious consideration. I am 27 at this moment. If I take the offer, that implies I will be 31+ upon graduation. Wah... becoming... what to say, no longer young. Hence, here I am seeking advice on whether it is good or not to take the PhD offer, and whether a CS PhD is good for my future career growth as a software engineer. I do not intend to go into academia.

    Read the article

  • Copying a subset of data to an empty database with the same schema

    - by user193655
    I would like to export part of a database full of data to an empty database. Both databases have the same schema. I want to maintain referential integrity. To simplify, my case is like this: MainTable has the following fields: 1) MainID integer PK 2) Description varchar(50) 3) ForeignKey integer FK to MainID of SecondaryTable. SecondaryTable has the following fields: 4) MainID integer PK (referenced by (3)) 5) AnotherDescription varchar(50). The goal I'm trying to accomplish is "export all records from MainTable using a WHERE condition", for example all records where MainID < 100. To do it manually I should first export all data from SecondaryTable contained in this select:
        select * from SecondaryTable ST outer join PrimaryTable PT on ST.MainID=PT.MainID
    then export the needed records from MainTable:
        select * from MainTable where MainID < 100
    This is manual, OK. Of course my real case is much, much more complex: I have 200+ tables, so doing it manually is painful/impossible, and I have many cascading FKs. Is there a way to force the copy of the main table only while "enforcing referential integrity", so that my query is something like:
        select * from MainTable where MainID < 100 WITH "COPYING ALL FK sources"
    In this case field (5) will also be copied. Is there a syntax or a tool to do this? Table per table I'd like to insert conditions (like MainID < 100 is only for MainTable, but I have also other tables).

    Read the article

  • ASP.NET CacheDependency out of ThreadPool

    - by Stephen
    In an async http handler, we add items to the ASP.NET cache, with dependencies on some files. If the async method executes on a thread from the ThreadPool, all is fine:
        AsyncResult result = new AsyncResult(context, cb, extraData);
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoProcessRequest), result);
    But as soon as we try to execute on a thread out of the ThreadPool:
        AsyncResult result = new AsyncResult(context, cb, extraData);
        Runner runner = new Runner(result);
        Thread thread = new Thread(new ThreadStart(runner.Run));
    ... where Runner.Run just invokes DoProcessRequest, the dependencies trigger right after the thread exits. I.e. the items are immediately removed from the cache, the removal reason being the dependencies. We want to use an out-of-pool thread because the processing might take a long time. So obviously something's missing when we create the thread. We might need to propagate the call context, the http context... Has anybody already encountered this issue? Note: off-the-shelf custom threadpools probably solve this. Writing our own threadpool is probably a bad idea (think NIH syndrome). Yet I'd like to understand this in detail.
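    For experimenting, a hedged sketch of the "propagate the context" idea mentioned above: it restores HttpContext.Current on the dedicated thread before the processing runs. Whether the missing context is really what makes the file dependencies fire immediately is an assumption; it is simply the most visible difference between the two code paths:

        AsyncResult result = new AsyncResult(context, cb, extraData);
        HttpContext capturedContext = HttpContext.Current;

        Thread thread = new Thread(delegate()
        {
            HttpContext.Current = capturedContext;   // flow the request context manually
            try
            {
                DoProcessRequest(result);
            }
            finally
            {
                HttpContext.Current = null;          // don't leak the context past the request
            }
        });
        thread.IsBackground = true;
        thread.Start();

    If that makes no difference, comparing the two threads' identity/ExecutionContext state would be the next experiment.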

    Read the article
