Search Results

Search found 2706 results on 109 pages for 'jason mock'.

Page 66/109

  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ): SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn FROM (SELECT /*+ FIRST_ROWS(1) */ mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member, mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn, mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches, mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin, mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd, mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd, mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name, mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time, mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins, mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn, ROWNUM AS ora_rn FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt, mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind, mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member, mbr.employment_start_dt AS mbr_employment_start_dt, mbr.employment_term_dt AS mbr_employment_term_dt, mbr.entity_active AS mbr_entity_active, mbr.ethnicity_idn AS mbr_ethnicity_idn, mbr.general_health_status_code AS mbr_general_health_status_code, mbr.hand_dominant_code AS mbr_hand_dominant_code, mbr.hgt_feet AS mbr_hgt_feet, mbr.hgt_inches AS mbr_hgt_inches, mbr.highest_edu_level AS mbr_highest_edu_level, mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id, mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin, mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip, mbr.lmbr_first_name AS mbr_lmbr_first_name, mbr.lmbr_last_name AS mbr_lmbr_last_name, mbr.marital_status_cd AS mbr_marital_status_cd, mbr.mbr_birth_dt AS mbr_mbr_birth_dt, mbr.mbr_death_dt AS mbr_mbr_death_dt, mbr.mbr_expired AS mbr_mbr_expired, mbr.mbr_first_name AS mbr_mbr_first_name, mbr.mbr_gender_cd AS mbr_mbr_gender_cd, mbr.mbr_idn AS mbr_mbr_idn, mbr.mbr_ins_type AS mbr_mbr_ins_type, mbr.mbr_isreadonly AS mbr_mbr_isreadonly, mbr.mbr_last_name AS mbr_mbr_last_name, mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name, mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id, mbr.preferred_am_pm AS mbr_preferred_am_pm, mbr.preferred_time AS mbr_preferred_time, mbr.prv_innetwork AS mbr_prv_innetwork, mbr.rep_addr_idn AS mbr_rep_addr_idn, mbr.rep_name AS 
mbr_rep_name, mbr.rp_mbr_id AS mbr_rp_mbr_id, mbr.same_mbr_ins AS mbr_same_mbr_ins, mbr.special_needs_cd AS mbr_special_needs_cd, mbr.timezone AS mbr_timezone, mbr.upd_dt AS mbr_upd_dt, mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt, mbr.work_status_idn AS mbr_work_status_idn FROM mbr JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn WHERE mbr_identfn.mbr_idn = mbr.mbr_idn AND mbr_identfn.identfd_type = :identfd_type_1 AND mbr_identfn.identfd_number = :identfd_number_1 AND mbr_identfn.entity_active = :entity_active_1) WHERE ROWNUM <= :ROWNUM_1) WHERE ora_rn > :ora_rn_1 call count cpu elapsed disk query current rows ------- ------ -------- ---------- ---------- ---------- ---------- ---------- Parse 9936 0.46 0.49 0 0 0 0 Execute 9936 0.60 0.59 0 0 0 0 Fetch 9936 329.87 404.00 0 136966922 0 0 ------- ------ -------- ---------- ---------- ---------- ---------- ---------- total 29808 330.94 405.09 0 136966922 0 0 Misses in library cache during parse: 0 Optimizer mode: FIRST_ROWS Parsing user id: 36 (JIVA_DEV) Rows Row Source Operation ------- --------------------------------------------------- 0 VIEW (cr=102 pr=0 pw=0 time=2180 us) 0 COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us) 0 NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us) 0 INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us)(object id 341053) 0 TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us) 0 INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us)(object id 334044) Rows Execution Plan ------- --------------------------------------------------- 0 SELECT STATEMENT MODE: HINT: FIRST_ROWS 0 VIEW 0 COUNT (STOPKEY) 0 NESTED LOOPS 0 INDEX MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE)) 0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE) 0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE)) ******************************************************************************** Based on my reading of Oracle's documentation of skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first index of this column is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query? EDIT: I should also point out that the query's where clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.

    Read the article

  • (partial apply str) and apply-str in clojure's ->

    - by Jason Baker
    If I do the following: user=> (-> ["1" "2"] (partial apply str)) #<core$partial__5034$fn__5040 clojure.core$partial__5034$fn__5040@d4dd758> ...I get a partial function back. However, if I bind it to a variable: user=> (def apply-str (partial apply str)) #'user/apply-str user=> (-> ["1" "2" "3"] apply-str) "123" ...the code works as I intended it. I would assume that they are the same thing, but apparently that isn't the case. Can someone explain why this is to me?

    Read the article

  • Unit Testing a Django Form with a FileField

    - by Jason Christa
    I have a form like: #forms.py from django import forms class MyForm(forms.Form): title = forms.CharField() file = forms.FileField() #tests.py from django.test import TestCase from forms import MyForm class FormTestCase(TestCase): def test_form(self): upload_file = open('path/to/file', 'r') post_dict = {'title': 'Test Title'} file_dict = {} #?????? form = MyForm(post_dict, file_dict) self.assertTrue(form.is_valid()) How do I construct the file_dict to pass upload_file to the form?
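
    A minimal sketch of one way to build the file_dict using Django's SimpleUploadedFile; the 'file' key matches the FileField named file in MyForm above, and the file name and contents below are arbitrary test bytes, so no real file on disk is needed:

        from django.core.files.uploadedfile import SimpleUploadedFile
        from django.test import TestCase

        from forms import MyForm


        class FormTestCase(TestCase):
            def test_form(self):
                post_dict = {'title': 'Test Title'}
                # The key must match the form field name; the content is arbitrary bytes.
                file_dict = {'file': SimpleUploadedFile('test.txt', b'file contents')}
                form = MyForm(post_dict, file_dict)
                self.assertTrue(form.is_valid())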

    Read the article

  • When should I use git pull --rebase?

    - by Jason Baker
    I know of some people who use git pull --rebase by default and others who insist never to use it. I believe I understand the difference between merging and rebasing, but I'm trying to put this in the context of git pull. Is it just about not wanting to see lots of merge commit messages? Or are there other issues?

    Read the article

  • php recursive list help

    - by Jason
    Hi all, I am trying to display a recursive list in PHP for a site I am working on. I am really having trouble trying to get the second level to display. I have a function that displays the contents to the page as follows. function get_menu_entries($content,$which=0) { global $tbl_prefix, $sys_explorer_vars, $sys_config_vars; // INIT LIBRARIES $db = new DB_Tpl(); $curr_time = time(); $db->query("SELECT * FROM ".$tbl_prefix."sys_explorer WHERE preid = '".$which."' && config_id = '".$sys_explorer_vars['config_id']."' && blocked = '0' && startdate < '".$curr_time."' && (enddate > '".$curr_time."' || enddate = '') ORDER BY preid,sorting"); while($db->next_record()){ $indent = $db->f("level") * 10 - 10; $sitemap_vars['break'] = ""; $sitemap_vars['bold'] = ""; if($db->f("level") == 2) { $sitemap_vars['ul_start'] = ""; $sitemap_vars['bold'] = "class='bold'"; $sitemap_vars['ul_end'] = ""; } switch($db->f("link_type")) { case '1': // External Url $sitemap_vars['hyperlink'] = $db->f("link_url"); $sitemap_vars['target'] = ""; if($db->f("link_target") != "") { $sitemap_vars['target'] = "target=\"".$db->f("link_target")."\""; } break; case '2': // Shortcut $sitemap_vars['hyperlink'] = create_url($db->f("link_eid"),$db->f("name"),$sys_config_vars['mod_rewrite']); $sitemap_vars['target'] = ""; break; default: $sitemap_vars['hyperlink'] = create_url($db->f("eid"),$db->f("name"),$sys_config_vars['mod_rewrite']); $sitemap_vars['target'] = ""; break; } if($db->f("level") > 1) { $content .= "<div style=\"text-indent: ".$indent."px;\" ".$sitemap_vars['bold']."><a href=\"".$sitemap_vars['hyperlink']."\" ".$sitemap_vars['target'].">".$db->f("name")."</a></div>\n"; } $content = get_menu_entries($content,$db->f("eid")); } return(''.$content.''); } At the moment the content displays properly, however I want to turn this function into a DHTML dropdown menu. At present what happens with the level 2 elements is that using CSS the contents are indented using CSS. What I need to happen is to place the UL tag at the beginning and /UL tag at the end of the level 2 elements. I hope this makes sense. Any help would be greatly appreciated.
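
    The nesting itself is independent of PHP; below is a small illustrative sketch (in Python, not the site's code) of the usual shape: open a list tag before recursing into a node's children and close it afterwards, rather than simulating depth with CSS indentation. The data layout (a dict mapping a parent id to its child rows) and all names here are assumptions made up for the example:

        def build_menu(children_by_parent, parent_id=0):
            rows = children_by_parent.get(parent_id, [])
            if not rows:
                return ''
            html = '<ul>'
            for row in rows:
                html += '<li><a href="%s">%s</a>' % (row['url'], row['name'])
                # Any children of this row become a nested <ul> inside its <li>.
                html += build_menu(children_by_parent, row['id'])
                html += '</li>'
            html += '</ul>'
            return html

        menu = {
            0: [{'id': 1, 'name': 'Home', 'url': '/'},
                {'id': 2, 'name': 'Products', 'url': '/products'}],
            2: [{'id': 3, 'name': 'Widgets', 'url': '/products/widgets'}],
        }
        print(build_menu(menu))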

    Read the article

  • using Swing components in javafx if they're not in the NetBeans javafx palette

    - by Jason S
    I'm just getting started with javafx in NetBeans, and I have it doing simple stuff (windows with buttons + the like) but would like to try something slightly more realistic. The "Swing/AWT Components" palette has a whole bunch of stuff that the "JavaFX Script Code Clips" palette does not (it has Button, CheckBox, ComboBox, ComboBoxItem, Label, RadioButton, Slider, TextField, and ToggleButton). How do I add stuff to this palette? I would like to try using some of the components in org-netbeans-swing-outline.jar edit: Aha. I was missing the point somewhat: there are only some javafx-wrapped Swing components available in javafx.ext.Swing.*, so if you want one of the other Swing components you have to wrap them yourself.

    Read the article

  • Check to See if File is in Repository with SharpSVN

    - by Jason
    How do I check if a file is already in a repository (or NOT in the repository) so I can determine whether I need to 'add' it first before doing the check in? (For the record, I have check-in working, but I get an exception when I try to check in a file that has not yet been added to the repository.) Thanks

    Read the article

  • TDD test data loading methods

    - by Dave Hanson
    I am a TDD newb and I would like to figure out how to test the following code. I am trying to write my tests first, but I am having trouble creating a test that touches my DataAccessor. I can't figure out how to fake it. I've extended the Shipment class and overridden the Load() method in order to keep testing the object. I feel as though I end up unit testing my mock objects/stubs and not my real objects. I thought in TDD the unit tests were supposed to hit ALL of the methods on the object; however, I can never seem to test the real Load() code, only the overridden mock Load(). The task was to write an object that contains a list of orders based off of a shipment number. I have an object that loads itself from the database. public class Shipment { //member variables protected List<string> _listOfOrders = new List<string>(); protected string _id = ""; //public properties public List<string> ListOrders { get{ return _listOfOrders; } } public Shipment(string id) { _id = id; Load(); } //PROBLEM METHOD // whenever I write code that needs this Shipment object, this method tries // to hit the DB and fubars my tests // the only way to get around is to have all my tests run on a fake Shipment object. protected void Load() { _listOfOrders = DataAccessor.GetOrders(_id); } } I create my fake Shipment class to test the rest of the class's methods. I can't ever test the real Load() method without having an actual DB connection: public class FakeShipment : Shipment { protected new void Load() { _listOfOrders = new List<string>(); } } Any thoughts? Please advise. Dave
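
    One common way out of this trap is to stop calling the static DataAccessor inside Load() and instead hand the accessor to the Shipment, so the real Load() logic runs in tests against a fake accessor and the subclass-and-override workaround becomes unnecessary. The idea is language-agnostic; the sketch below is in Python rather than C#, and every name in it is illustrative, not taken from the original code:

        class Shipment:
            def __init__(self, shipment_id, accessor):
                self._id = shipment_id
                self._accessor = accessor
                self._orders = self._load()       # the real load path always runs

            def _load(self):
                # Testable now: it just delegates to whatever accessor was supplied.
                return self._accessor.get_orders(self._id)

            @property
            def orders(self):
                return list(self._orders)


        class FakeAccessor:
            def get_orders(self, shipment_id):
                return ['order-1', 'order-2']     # canned data instead of a DB hit


        def test_shipment_loads_orders():
            shipment = Shipment('S123', FakeAccessor())
            assert shipment.orders == ['order-1', 'order-2']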

    Read the article

  • UIView Subview Does not autoresize on orientation change

    - by Jason
    In my iPhone app, I have a view controller with two views (essentially, a front & back view). The front view is the main UIView, and the back view is a secondary UIView which is added as a subview using [self.view addSubview:backView] when showing the back and [backView removeFromSuperview] when hiding it. However, when the orientation changes, I have the following issue: the main UIView (frontView) rotates & all of its elements resize properly, but the secondary/subview UIView (backView) does not rotate & all of its elements do not resize properly. Does anyone have suggestions on how to make the secondary UIView autoresize properly according to the rules I have set in Interface Builder?

    Read the article

  • Adding database results to array

    - by Jason
    Hi all, I am having a mental blank here and cannot for the life of me figure out a solution. My scenario is that I am programming in PHP and MySQL. I have a database table returning the results for a specific orderid. The query can return a maximum of 4 rows per order and a minimum of 1 row. Here is an image of how I want to return the results. I have all the order details (name, address, etc.) stored in a table named "orders". I have all the packages for that order stored in a table named "packages". What I need to do, using a loop, is get access to each specific element of the database results (i.e. package1, itemtype1, package2, itemtype2, etc.). I am using a query like this to try and get hold of just the "number of items": $sql = "SELECT * FROM bookings_onetime_packages WHERE orderid = '".$orderid."' ORDER BY packageid DESC"; $total = $db->database_num_rows($db->database_query($sql)); $query = $db->database_query($sql); $noitems = ''; while($info = $db->database_fetch_assoc($query)){ $numberitems = $info['numberofitems']; for($i=0; $i $noitems .= $numberitems[$i]; } } print $noitems; I need to have access to each specific element because I then need to fill out a PDF template using "fpdf". I hope this makes sense. Any direction would be greatly appreciated.

    Read the article

  • How do I introspect things in Ruby?

    - by Jason Baker
    For instance, in Python, I can do things like this if I want to get all attributes on an object: >>> import sys >>> dir(sys) ['__displayhook__', '__doc__', '__excepthook__', '__name__', '__package__', '__stderr__', '__stdin__', '__stdout__', '_clear_type_cache', '_current_frames', '_getframe', 'api_version', 'argv', 'builtin_module_names', 'byteorder', 'call_tracing', 'callstats', 'copyright', 'displayhook', 'dont_write_bytecode', 'exc_clear', 'exc_info', 'exc_type', 'excepthook', 'exec_prefix', 'executable', 'exit', 'flags', 'float_info', 'getcheckinterval', 'getdefaultencoding', 'getdlopenflags', 'getfilesystemencoding', 'getprofile', 'getrecursionlimit', 'getrefcount', 'getsizeof', 'gettrace', 'hexversion', 'maxint', 'maxsize', 'maxunicode', 'meta_path', 'modules', 'path', 'path_hooks', 'path_importer_cache', 'platform', 'prefix', 'ps1', 'ps2', 'py3kwarning', 'pydebug', 'setcheckinterval', 'setdlopenflags', 'setprofile', 'setrecursionlimit', 'settrace', 'stderr', 'stdin', 'stdout', 'subversion', 'version', 'version_info', 'warnoptions'] Or if I want to view the documentation of something, I can use the help function: >>> help(str) Is there any way to do similar things in Ruby?

    Read the article

  • Securely erasing a file using simple methods?

    - by Jason
    Hello, I am using C# .NET Framework 2.0. I have a question relating to file shredding. My target operating systems are Windows 7, Windows Vista, and Windows XP. Possibly Windows Server 2003 or 2008, but I'm guessing they should be the same as the first three. My goal is to securely erase a file. I don't believe using File.Delete is secure at all. I read somewhere that the operating system simply marks the raw hard-disk data for deletion when you delete a file - the data is not erased at all. That's why there exist so many working methods to recover supposedly "deleted" files. I also read that this is why it's much more useful to overwrite the file, because then the data on disk actually has to be changed. Is this true? Is this generally what's needed? If so, I believe I can simply write the file full of 1's and 0's a few times. I've read: http://www.codeproject.com/KB/files/NShred.aspx http://blogs.computerworld.com/node/5756 http://blogs.computerworld.com/node/5687 http://stackoverflow.com/questions/4147775/securely-deleting-a-file-in-c-net
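
    For what it's worth, the overwrite-before-delete idea described above looks roughly like the sketch below (in Python rather than C#, purely as an illustration of the steps). Note the caveat that on journaling file systems, SSDs with wear levelling, or volumes with shadow copies, an in-place overwrite is not guaranteed to hit the original sectors, so this is best-effort only:

        import os

        def shred(path, passes=3):
            length = os.path.getsize(path)
            with open(path, 'r+b') as f:
                for _ in range(passes):
                    f.seek(0)
                    f.write(os.urandom(length))   # overwrite with random bytes (one shot, for brevity)
                    f.flush()
                    os.fsync(f.fileno())          # push this pass to disk before the next one
            os.remove(path)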

    Read the article

  • Creating Dependencies Only to be able to Unit Test

    - by arin
    I just created a Manager that deals with a SuperClass that is extended all over the code base and registered with some sort of SuperClassManager (SCM). Now I would like to test my Manager that is aware of only the SuperClass. I tried to create a concrete SCM, however, that depends on a third party library and therefore I failed to do that in my jUnit test. Now the option is to mock all instances of this SCM. All is good until now, however, when my Manager deals with the SCM, it returns children of the SuperClass that my Manager does not know or care about. Nevertheless, the identities of these children are vital for my tests (for equality, etc.). Since I cannot use the concrete SCM, I have to mock the results of calls to the appropriate functions of the SCM, however, this means that my tests and therefore my Manager need to know and care about the children of the SuperClass. Checking the code base, there does not seem to be a more appropriate location for my test (that already maintains the appropriate real dependencies). Is it worth it to introduce unnecessary dependencies for the sake of unit testing?

    Read the article

  • Large public datasets?

    - by Jason
    I am looking for some large public datasets, in particular: Large sample web server logs that have been anonymized. Datasets used for database performance benchmarking. Any other links to large public datasets would be appreciated. I already know about Amazon's public datasets at: http://aws.amazon.com/publicdatasets/

    Read the article

  • Trouble getting started with Spring Roo and GWT

    - by Abdel Olakara
    Hi all, I am trying to get started with SpringRoo and GWT after seeing the keynote.. unfortunately I am stuck at this issue. I successfully created the project using Roo and added the persistence, the entities and when I perform the command "perform package" I get this error: 23/5/10 12:10:13 AM AST: [ERROR] ApplicationEntityTypesProcessor cannot be resolved 23/5/10 12:10:13 AM AST: [ERROR] ApplicationEntityTypesProcessor cannot be resolved to a type 23/5/10 12:10:13 AM AST: [WARN] advice defined in org.springframework.mock.staticmock.AnnotationDrivenStaticEntityMockingControl has not been applied [Xlint:adviceDidNotMatch] 23/5/10 12:10:13 AM AST: [WARN] advice defined in org.springframework.mock.staticmock.AbstractMethodMockingControl has not been applied [Xlint:adviceDidNotMatch] 23/5/10 12:10:13 AM AST: Build errors for helloroo; org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:aspectj-maven-plugin:1.0:compile (default) on project helloroo: Compiler errors : error at import tp.gwt.request.ApplicationEntityTypesProcessor; I see this in the Maven console and cannot complete the build..I know there is some jar missing but how and why? because I downloaded all the latest version including GWT milestone release. Any idea why this error is occurring? How do I resolve this issue? Thanks in Advance, Abdel Olakara

    Read the article

  • How do I unit test a finalizer?

    - by GraemeF
    I have the following class which is a decorator for an IDisposable object (I have omitted the stuff it adds) which itself implements IDisposable using a common pattern: public class DisposableDecorator : IDisposable { private readonly IDisposable _innerDisposable; public DisposableDecorator(IDisposable innerDisposable) { _innerDisposable = innerDisposable; } #region IDisposable Members public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } #endregion ~DisposableDecorator() { Dispose(false); } protected virtual void Dispose(bool disposing) { if (disposing) _innerDisposable.Dispose(); } } I can easily test that innerDisposable is disposed when Dispose() is called: [Test] public void Dispose__DisposesInnerDisposable() { var mockInnerDisposable = new Mock<IDisposable>(); new DisposableDecorator(mockInnerDisposable.Object).Dispose(); mockInnerDisposable.Verify(x => x.Dispose()); } But how do I write a test to make sure innerDisposable does not get disposed by the finalizer? I want to write something like this but it fails, presumably because the finalizer hasn't been called by the GC thread: [Test] public void Finalizer__DoesNotDisposeInnerDisposable() { var mockInnerDisposable = new Mock<IDisposable>(); new DisposableDecorator(mockInnerDisposable.Object); GC.Collect(); mockInnerDisposable.Verify(x => x.Dispose(), Times.Never()); }

    Read the article

  • quartz: preventing concurrent instances of a job in jobs.xml

    - by Jason S
    This should be really easy. I'm using Quartz running under Apache Tomcat 6.0.18, and I have a jobs.xml file which sets up my scheduled job that runs every minute. What I would like to do, is if the job is still running when the next trigger time rolls around, I don't want to start a new job, so I can let the old instance complete. Is there a way to specify this in jobs.xml (prevent concurrent instances)? If not, is there a way I can share access to an in-memory singleton within my application's Job implementation (is this through the JobExecutionContext?) so I can handle the concurrency myself? (and detect if a previous instance is running) update: After floundering around in the docs, here's a couple of approaches I am considering, but either don't know how to get them to work, or there are problems. Use StatefulJob. This prevents concurrent access... but I'm not sure what other side-effects would occur if I use it, also I want to avoid the following situation: Suppose trigger times would be every minute, i.e. trigger#0 = at time 0, trigger #1 = 60000msec, #2 = 120000, #3 = 180000, etc. and the trigger#0 at time 0 fires my job which takes 130000msec. With a plain Job, this would execute triggers #1 and #2 while job trigger #0 is still running. With a StatefulJob, this would execute triggers #1 and #2 in order, immediately after #0 finishes at 130000. I don't want that, I want #1 and #2 not to run and the next trigger that runs a job should take place at #3 (180000msec). So I still have to do something else with StatefulJob to get it to work the way I want, so I don't see much of an advantage to using it. Use a TriggerListener to return true from vetoJobExecution(). Although implementing the interface seems straightforward, I have to figure out how to setup one instance of a TriggerListener declaratively. Can't find the docs for the xml file. Use a static shared thread-safe object (e.g. a semaphore or whatever) owned by my class that implements Job. I don't like the idea of using singletons via the static keyword under Tomcat/Quartz, not sure if there are side effects. Also I really don't want them to be true singletons, just something that is associated with a particular job definition. Implement my own Trigger which extends SimpleTrigger and contains shared state that could run its own TriggerListener. Again, I don't know how to setup the XML file to use this trigger rather than the standard <trigger><simple>...</simple></trigger>.
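
    The third approach described above (a static shared thread-safe object the job checks itself) boils down to a guard that is acquired without blocking: if the previous run still holds it, the new firing returns immediately, so overlapping firings are skipped rather than queued the way StatefulJob would queue them. Sketched here in Python only to show the behaviour, not as Quartz configuration:

        import threading
        import time

        _running = threading.Lock()

        def scheduled_job():
            if not _running.acquire(blocking=False):
                print('previous run still active - skipping this firing')
                return
            try:
                time.sleep(130)          # stand-in for the real 130-second unit of work
            finally:
                _running.release()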

    Read the article

  • CAML query with nonexistent field

    - by Jason Hocker
    We need to have a web service that queries a SharePoint list using CAML, but we do not know which version of the list we are using. A newer version of the list introduced a field that we want to use in the query when it is present, and simply ignore otherwise. If I put it in the query on the old version, we get no results. How should I check whether the field exists before setting up the query?

    Read the article

  • How do you concat multiple rows into one column in SQL Server?

    - by Jason
    I've searched high and low for the answer to this, but I can't figure it out. I'm relatively new to SQL Server and don't quite have the syntax down yet. I have this datastructure (simplified): Table "Users" | Table "Tags": UserID UserName | TagID UserID PhotoID 1 Bob | 1 1 1 2 Bill | 2 2 1 3 Jane | 3 3 1 4 Sam | 4 2 2 ----------------------------------------------------- Table "Photos": | Table "Albums": PhotoID UserID AlbumID | AlbumID UserID 1 1 1 | 1 1 2 1 1 | 2 3 3 1 1 | 3 2 4 3 2 | 5 3 2 | I'm looking for a way to get the all the photo info (easy) plus all the tags for that photo concatenated like CONCAT(username, ', ') AS Tags of course with the last comma removed. I'm having a bear of a time trying to do this. I've tried the method in this article but I get an error when I try to run the query saying that I can't use DECLARE statements... do you guys have any idea how this can be done? I'm using VS08 and whatever DB is installed in it (I normally use MySQL so I don't know what flavor of DB this really is... it's an .mdf file?)

    Read the article

  • 3 or 4 monitors with Nvidia and Ubuntu

    - by Jason
    I saw that you are (were?) running 4 monitors with Ubuntu 8.10 and two Nvidia cards (http://stackoverflow.com/questions/27113/how-to-use-3-monitors). I was curious if you were doing this with Xinerama, a hacked up TwinView config, or multiple X screens, or some other method? Does it work with compiz? I intend to run my Dell 30" in the middle with two 1280x1024 on the sides and continue to use one X screen, and run compiz, on Ubuntu 9.04. Currently, I am using 2 monitors with twinview and compiz, which runs fantastic. I just can't get the third monitor running (unless I enable it in its own X screen, and then enable Xinerama to enable windows to be dragged as if all one X screen, but this breaks compiz, and I don't care much for having separate X screen). I am very interested in knowing how you set up 4 monitors with 2 GPU's. Thanks!

    Read the article

  • php regex for strong password validation

    - by Jason
    Hello, I've seen around the web the following regex (?=^.{8,}$)((?=.*\d)|(?=.*\W+))(?![.\n])(?=.*[A-Z])(?=.*[a-z]).*$ which validates only if the string: * contains at least (1) upper case letter * contains at least (1) lower case letter * contains at least (1) number or special character * contains at least (8) characters in length I'd like to know how to convert this regex so that it checks that the string: * contains at least (2) upper case letters * contains at least (2) lower case letters * contains at least (2) digits * contains at least (2) special characters * contains at least (8) characters in length If it contains at least 2 each of upper case letters, lower case letters, digits and special chars, then I wouldn't need the 8-character length requirement. Special characters include: `~!@#$%^&*()_-+=[]\|{};:'".,/<? Thanks in advance.
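
    One way to express "at least two of each" is to repeat each character class inside its lookahead with a {2} count. The snippet below shows the pattern through Python's re module just to make it testable; the same lookahead idea carries over to PHP's preg_match, though delimiters and escaping differ there. The special-character class is taken from the list in the question:

        import re

        specials = re.escape("`~!@#$%^&*()_-+=[]\\|{};:'\".,/<?")
        pattern = re.compile(
            r"^"
            r"(?=(?:.*[A-Z]){2})"                # at least 2 upper case letters
            r"(?=(?:.*[a-z]){2})"                # at least 2 lower case letters
            r"(?=(?:.*\d){2})"                   # at least 2 digits
            r"(?=(?:.*[" + specials + r"]){2})"  # at least 2 special characters
            r".{8,}$"                            # 8+ chars (already implied by 2+2+2+2)
        )

        print(bool(pattern.match("aaBB11!!")))   # True
        print(bool(pattern.match("aaBB11!x")))   # False - only one special character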

    Read the article

  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD-based system that needs to access both Active Directory and a SQL database as persistence. Initially this wasn't a problem, because our design was set up so that we had a unit of work that looked like this: public interface IUnitOfWork { void BeginTransaction() void Commit() } and our repositories looked like this: public interface IRepository<T> { T GetByID() void Save(T entity) void Delete(T entity) } In this setup our load and save would handle the mapping between both data stores because we wrote it ourselves. The unit of work would handle transactions and would contain the Linq To SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations. Now we are trying to adapt it to Entity Framework and take advantage of POCO. Ideally we would not need the Save() method, because the domain objects are being tracked by the object context; we would just need to add a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this: public interface IUnitOfWork { void BeginTransaction() void Save() void Commit() } public interface IRepository<T> { T GetByID() void Add(T entity) void Delete(T entity) } This solves the data access problem with Entity Framework, but does not solve the problem with our Active Directory integration. Before, it was in the Save() method on the repository, but now it has no home. The unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I argue this design only works if you have just one data store using Entity Framework. Any ideas on how best to approach this issue? Where should I put this logic?
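
    Not a definitive answer, but one shape this sometimes takes is to let the unit of work coordinate both stores: repositories queue the directory work on the unit of work instead of performing it in Save(), and the unit of work flushes the object context first and the Active Directory operations second. The sketch below is generic Python, not Entity Framework code, and context, ad_service and create_account are placeholders rather than a real API:

        class UnitOfWork:
            def __init__(self, context, ad_service):
                self._context = context          # EF-like object context / change tracker
                self._ad_service = ad_service    # gateway to Active Directory
                self._pending_ad_ops = []

            def register_new(self, entity):
                self._context.add(entity)        # let the context track the new object

            def enqueue_ad_operation(self, operation):
                self._pending_ad_ops.append(operation)

            def save(self):
                self._context.save_changes()              # flush tracked entity changes
                for operation in self._pending_ad_ops:    # then apply the directory side
                    operation(self._ad_service)
                self._pending_ad_ops = []


        class UserRepository:
            def __init__(self, unit_of_work):
                self._uow = unit_of_work

            def add(self, user):
                self._uow.register_new(user)
                # The directory work that used to live in Save() is deferred to commit time.
                self._uow.enqueue_ad_operation(lambda ad: ad.create_account(user))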

    Read the article

  • How can I read a DBF file with incorrectly defined column data types using ADO.NET?

    - by Jason
    I have several DBF files generated by a third party that I need to be able to query. I am having trouble because all of the column types have been defined as characters, but the data within some of these fields actually contains binary data. If I try to read these fields using an OleDbDataReader as anything other than a string or character array, I get an InvalidCastException thrown, but I need to be able to read them as a binary value or at least cast/convert them after they are read. The columns that actually DO contain text are being returned as expected. For example, the very first column is defined as a character field with a length of 2 bytes, but the field contains a 16-bit integer. I have written the following test code to read the first column and convert it to the appropriate data type, but the value is not coming out right. The first row of the database has a value of 17365 (0x43D5) in the first column. Running the following code, what I end up getting is 17215 (0x433F). I'm pretty sure it has to do with using the ASCII encoding to get the bytes from the string returned by the data reader, but I'm not sure of another way to get the value into the format that I need, other than to write my own DBF reader and bypass ADO.NET altogether, which I don't want to do unless I absolutely have to. Any help would be greatly appreciated. byte[] c0; int i0; string con = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\ASTM;Extended Properties=dBASE III;User ID=Admin;Password=;"; using (OleDbConnection c = new OleDbConnection(con)) { c.Open(); OleDbCommand cmd = c.CreateCommand(); cmd.CommandText = "SELECT * FROM astm2007"; OleDbDataReader dr = cmd.ExecuteReader(); while (dr.Read()) { c0 = Encoding.ASCII.GetBytes(dr.GetValue(0).ToString()); i0 = BitConverter.ToInt16(c0, 0); } dr.Dispose(); }
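
    The observed value is a strong hint at the culprit: ASCII encoding replaces every byte above 0x7F with '?' (0x3F), which turns the stored little-endian bytes 0xD5 0x43 into 0x3F 0x43 - exactly the 17215 (0x433F) seen instead of 17365 (0x43D5). The demonstration below is in Python purely to show that byte-level mechanism; whether a single-byte code page preserves the values in the C# code also depends on how the Jet provider decoded the DBF before handing back the string, so treat it as an illustration, not a fix:

        raw = bytes([0xD5, 0x43])                 # little-endian bytes of 17365 (0x43D5)

        text = raw.decode('latin-1')              # 1:1 single-byte decode, nothing lost yet
        print(text.encode('ascii', 'replace'))    # b'?C' -> 0x3F 0x43, i.e. 17215 (0x433F)
        print(text.encode('latin-1'))             # b'\xd5C' -> the original bytes survive
        print(int.from_bytes(text.encode('latin-1'), 'little'))   # 17365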

    Read the article
