Search Results

Search found 2110 results on 85 pages for 'priority'.


  • How can I use Google-o-Meter or the Google Visualization API with Jenkins

    - by kamal
    Here is a sample that displays a static chart:

        google.load("visualization", "1.0", {packages:["imagechart"]});
        google.setOnLoadCallback(drawChart);

        function drawChart() {
          var dataTable = new google.visualization.DataTable();
          dataTable.addColumn('string');
          dataTable.addColumn('number');
          dataTable.addColumn('string');
          // Row data is [chl, data point, point label]
          dataTable.addRows([
            ['January', 40, undefined],
            ['February', 60, 'Initial recall'],
            ['March', 60, 'Product withdrawn'],
            ['April', 45, undefined],
            ['May', 47, 'Relaunch'],
            ['June', 75, undefined],
            ['July', 70, undefined],
            ['August', 72, undefined]
          ]);
          var options = {
            cht: 'lc',
            chds: '0,160',
            annotationColumns: [{column: 2, size: 12, type: 'flag', priority: 'high'}]
          };
          var chart = new google.visualization.ImageChart(document.getElementById('line_div'));
          chart.draw(dataTable, options);
        }

    How can I replace the static values and variables in dataTable.addRows([...]) with real, live data? In case the complete code is not visible, refer to: http://code.google.com/apis/visualization/documentation/gallery/genericimagechart.html When this JavaScript is copied into the "Description" it renders a chart; what I want to know is how to replace the names/values in dataTable.addRows with names/values coming from Jenkins.
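    A hedged sketch of one way to get live values: Jenkins exposes a remote JSON API (typically JENKINS_URL/job/JOB_NAME/api/json), so a small script can pull recent build data and emit the rows that would replace the static addRows block. The Jenkins URL, job name and field choices below are assumptions for illustration, not part of the poster's setup:

        import json
        import urllib.request

        # Assumed Jenkins instance and job name; adjust for the real installation.
        JENKINS = "http://localhost:8080"
        JOB = "my-job"

        # Ask Jenkins for the last builds' number, result and duration only.
        url = f"{JENKINS}/job/{JOB}/api/json?tree=builds[number,result,duration]"
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)

        # Build rows shaped like the chart expects: [label, value, annotation].
        rows = []
        for build in reversed(data.get("builds", [])):   # Jenkins lists newest first
            label = f"#{build['number']}"
            minutes = round(build.get("duration", 0) / 60000.0, 1)
            note = "failed" if build.get("result") == "FAILURE" else None
            rows.append([label, minutes, note])

        # JSON "null" stands in for the JavaScript "undefined" used above.
        print("dataTable.addRows(%s);" % json.dumps(rows))

    The printed literal could then be pasted (or templated) into the Description field in place of the hard-coded month data.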

    Read the article

  • Delphi dbExpress and Interbase: UTF8 migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

    - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
    - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
    - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
    - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every DataModule (dfm and source code) too?
    - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Read the article

  • DAL layer: EF 4.0 or a normal data access layer with stored procedures

    - by Harryboy
    Hello experts. Application: I am working on a mid-to-large size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure. So patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order:

    1. Speed of development with above-average performance
    2. Maintenance
    3. Future support and stability of the technology
    4. Performance

    Limitations:

    1. As we need to strictly stay with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0.
    2. We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there would not be any licensing issue on a per-copy basis.

    Questions: I read so many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use a code generation tool like CodeSmith or whatever you feel is best? I also need to answer questions like how long it would take to port the application from EF 4.0 to ADO.NET if in the future we get stuck with EF 4.0 over some feature or run into a serious performance issue. In the reverse case, if we go ahead and choose ADO.NET, how long would it take to switch to EF 4.0? Lastly, as I was going through the articles I found the code-only approach (with POCO classes) seems best suited to our requirement, as switching from one technology to the other is really easy. Please share your thoughts on the same and please advise on the above questions.

    Read the article

  • Caching sitemaps in Django

    - by michuk
    I implemented a simple sitemap class using Django's default sitemap app. As it was taking a long time to execute, I added manual caching:

        class ShortReviewsSitemap(Sitemap):
            changefreq = "hourly"
            priority = 0.7

            def items(self):
                # try to retrieve from cache
                result = get_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews")
                if result != None:
                    return result
                result = ShortReview.objects.all().order_by("-created_at")
                # store in cache
                set_cache(CACHE_SITEMAP_SHORT_REVIEWS, "sitemap_short_reviews", result)
                return result

            def lastmod(self, obj):
                return obj.updated_at

    The problem is that memcached allows objects of at most 1 MB. This one was bigger than 1 MB, so storing it in the cache failed:

        >7 SERVER_ERROR object too large for cache

    The thing is that Django has an automated way of deciding when it should divide the sitemap file into smaller ones. According to the docs (http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/): "You should create an index file if one of your sitemaps has more than 50,000 URLs. In this case, Django will automatically paginate the sitemap, and the index will reflect that."

    What do you think would be the best way to enable caching of sitemaps?

    - Hacking into the Django sitemaps framework to restrict a single sitemap size to, let's say, 10,000 records seems like the best idea. Why was 50,000 chosen in the first place? Google's advice? A random number?
    - Or maybe there is a way to let memcached store bigger files?
    - Or perhaps, once saved, the sitemaps should be made available as static files? This would mean that instead of caching with memcached I'd have to manually store the results in the filesystem and retrieve them from there the next time the sitemap is requested (perhaps cleaning the directory daily in a cron job).

    All of those seem very low level and I'm wondering if an obvious solution exists...
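    One hedged workaround for the 1 MB memcached limit is to pickle the cached value and store it as several sub-1 MB chunks under derived keys. The sketch below assumes Django's low-level cache API (django.core.cache.cache); the chunk size, key scheme and timeout are illustrative, and caching list(result) rather than a lazy queryset makes the pickled size more predictable:

        import pickle
        from django.core.cache import cache

        CHUNK = 900 * 1024  # stay safely below memcached's 1 MB object limit

        def set_chunked(key, value, timeout=3600):
            """Pickle value and store it as several small cache entries."""
            blob = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)
            parts = [blob[i:i + CHUNK] for i in range(0, len(blob), CHUNK)]
            cache.set(key, len(parts), timeout)              # part count under the main key
            for n, part in enumerate(parts):
                cache.set("%s:%d" % (key, n), part, timeout)

        def get_chunked(key):
            """Reassemble the chunks; returns None if any piece has expired."""
            count = cache.get(key)
            if count is None:
                return None
            parts = [cache.get("%s:%d" % (key, n)) for n in range(count)]
            if any(p is None for p in parts):
                return None
            return pickle.loads(b"".join(parts))

    items() could then call get_chunked/set_chunked in place of get_cache/set_cache; the trade-off is a few extra round trips to memcached per sitemap request.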

    Read the article

  • Proving that the distance values extracted in Dijkstra's algorithm are non-decreasing?

    - by Gail
    I'm reviewing my old algorithms notes and have come across this proof. It was from an assignment I had and I got it correct, but I feel that the proof certainly lacks rigour. The question is to prove that the distance values taken from the priority queue in Dijkstra's algorithm form a non-decreasing sequence. My proof goes as follows:

    Proof by contradiction. First, assume that we pull a vertex from Q with d-value 'i'. The next time, we pull a vertex with d-value 'j'. When we pulled i, we had finalised its d-value and computed the shortest path from the start vertex, s, to i. Since we have positive edge weights, it is impossible for our d-values to shrink as we add vertices to our path. If, after pulling i from Q, we pull j with a smaller d-value, we may not have a shortest path to i, since we may be able to reach i through j. However, we have already computed the shortest path to i. We did not check a possible path. We no longer have a guaranteed shortest path. Contradiction.
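    The claim is easy to sanity-check empirically. Below is a minimal heapq-based Dijkstra sketch (not the course's code; the toy graph is made up) that asserts every distance popped from the priority queue is greater than or equal to the previous one:

        import heapq

        def dijkstra_pop_order(graph, source):
            """Run Dijkstra and return distances in the order vertices are settled."""
            dist = {source: 0}
            done = set()
            heap = [(0, source)]
            settled = []
            while heap:
                d, u = heapq.heappop(heap)
                if u in done:
                    continue
                done.add(u)
                settled.append(d)
                for v, w in graph.get(u, []):          # requires w >= 0
                    if v not in dist or d + w < dist[v]:
                        dist[v] = d + w
                        heapq.heappush(heap, (d + w, v))
            return settled

        # Assumed toy graph: adjacency list of (neighbour, weight) pairs.
        g = {'s': [('a', 2), ('b', 5)], 'a': [('b', 1), ('c', 4)], 'b': [('c', 1)], 'c': []}
        order = dijkstra_pop_order(g, 's')
        assert all(x <= y for x, y in zip(order, order[1:])), "pop order must be non-decreasing"
        print(order)   # [0, 2, 3, 4]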

    Read the article

  • Setting up Mercurial/TortoiseHg to work with UltraCompare

    - by Tim Pietzcker
    Hi, I'm trying to get my favorite Windows diff/merge tool, UltraCompare (v7.00), to work with Mercurial/TortoiseHg. I have set up UltraCompare in my Mercurial.ini like this (only the relevant bits shown):

        [merge-tools]
        UltraCompare.executable = C:\Programme\IDM Computer Solutions\UltraCompare\uc.com
        UltraCompare.args = $base $local $other
        UltraCompare.priority = 1
        UltraCompare.gui = True
        UltraCompare.binary = True
        UltraCompare.checkconflicts = True
        UltraCompare.checkchanged = True

    However, the three-way merge fails. The path names get messed up if the path of the repository being merged into contains a space. I have done some more testing, and I've found out (using Process Explorer) that uc.com is called with a broken command line if there is a space in the repository's path. Compare

        "C:\Programme\IDM Computer Solutions\UltraCompare\uc.exe" " "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~base.akr6au" "E:\Eigene Dateien\test\test-merge\test.txt" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~other.b92442"

    and

        "C:\Programme\IDM Computer Solutions\UltraCompare\uc.com" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~base.e7vryp" "E:\test\test-merge\test.txt" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~other.u_qxme"

    There is an extraneous " after the path of the executable in the first example - not in the second (which works fine). To me, it seems as if UltraCompare is doing everything right, and Mercurial/TortoiseHg is passing a defective command line to it. Would you say so, too? Is there a workaround? I've just updated to Mercurial 1.5/TortoiseHg 1.0, and the problem persists. Support for other merge tools (Beyond Compare and others) has been added, sadly not UltraCompare...

    Read the article

  • SQL with Regular Expressions vs Indexes with Logical Merging Functions

    - by geeko
    Hello lads, I am trying to develop a complex textual search engine. I have thousands of textual pages from many books. I need to search for pages that match specified complex logical criteria. These criteria can contain virtually any combination of the following:

    A: Full words.
    B: Word roots (similar to stems; i.e. all words with certain key letters).
    C: Word templates (in some languages words are filled into certain templates to form various parts of speech such as adjectives or past/present verbs...).
    D: Logical connectives: AND/OR/XOR/NOT/IF/IFF and parentheses to state priorities.

    Now, would it be faster to have the pages' full text in the database (not indexed) and search through them all using SQL and regular expressions? Or would it be better to construct indexes of word/root/template-page-location tuples? That way we can speed up searching for individual words/roots/templates. However, it gets tricky as we introduce logical connectives into our query. I thought of doing the following steps in such cases:

    1: Separately search for each individual word/root/template in the specified query.
    2: On a priority basis, merge two result lists (from step 1) at a time, depending on the logical connective.

    For example, if we are searching for "he AND (is OR was)":

    1: We search for "he", "is" and "was" separately and get a result list for each word.
    2: Merge the result lists of "is" and "was" using the merging function OR-MERGE.
    3: Merge the merged result list from the OR-MERGE function with the one for "he" using the merging function AND-MERGE.

    The result of step 3 is then returned as the result of the specified query. What do you think, gurus? Which is faster? Any better ideas? Thank you all in advance.
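    For comparison, the index-plus-merge approach is straightforward to prototype. A minimal sketch with posting lists of page ids and the OR-MERGE/AND-MERGE functions described above; the index contents and names are illustrative only:

        def or_merge(a, b):
            """Union of two sorted posting lists."""
            return sorted(set(a) | set(b))

        def and_merge(a, b):
            """Intersection of two sorted posting lists."""
            return sorted(set(a) & set(b))

        # Assumed index: word/root/template -> sorted list of page ids containing it.
        index = {
            'he':  [1, 2, 5, 9],
            'is':  [2, 3, 9],
            'was': [1, 5, 7],
        }

        # Query: he AND (is OR was) -- merge according to the stated priorities.
        inner = or_merge(index['is'], index['was'])     # pages with "is" or "was"
        result = and_merge(index['he'], inner)          # ...that also contain "he"
        print(result)   # [1, 2, 5, 9]

    With posting lists kept sorted on disk, the set operations can be replaced by linear merges so each connective costs O(len(a) + len(b)).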

    Read the article

  • What am I doing wrong?

    - by Erik Sapir
    I have the following code. I need class B to have a min priority queue of AToTime objects. AToTime has an operator>, and yet I receive an error telling me that there is no operator matching the operands...

        #include <queue>
        #include <functional>
        using namespace std;

        class B {
        //public functions
        public:
            B();
            virtual ~B();
        //private members
        private:
            log4cxx::LoggerPtr m_logger;

            class AToTime {
            //public functions
            public:
                AToTime(const ACE_Time_Value& time, const APtr a) : m_time(time), m_a(a) {}
                bool operator >(const AToTime& other) { return m_time > other.m_time; }
            //public members - no point using any private members here
            public:
                ACE_Time_Value m_time;
                APtr m_a;
            };

            priority_queue<AToTime, vector<AToTime>, greater<AToTime> > m_myMinHeap;
        };

    Read the article

  • FitNesse test framework: arbitrary properties for tests and queries/test runs based on them?

    - by Marcel
    Hi, our testers have the requirement to store multiple properties for a test that are not present in the "properties" page, e.g. they want to store a priority, a description (not in the wiki page itself) and so on. They don't want to use the tagging mechanism. Is there a way to store any kind of new XML node in the properties.xml for a test? These properties should then be used to:

    - query the fields via the search screen
    - run tests based on the "SuiteResponder": ?suite=xxx&TAGx=abc&TAGy=cde
    - be returned by the "?properties" responder
    - appear in the test history of the test run

    In essence they want to store any kind of "meta" information in the properties.xml and work with it in all kinds of ways: search, run, etc. Does anybody here know if there is already something available in that direction? If not, I think we will have to "pimp" these features into FitNesse to make our testers happy. Thanks a lot, any help appreciated. Marcel

    PS: I've also posted the question in the Yahoo FitNesse group.
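    As a rough sketch of the "pimp it ourselves" route, extra nodes can at least be written into a page's properties.xml with a few lines of Python. Whether FitNesse preserves unknown nodes across page saves is an assumption to verify, and the file path and element name below are made up for illustration:

        import xml.etree.ElementTree as ET

        path = "FitNesseRoot/MySuite/MyTest/properties.xml"   # assumed page location

        tree = ET.parse(path)
        root = tree.getroot()

        # Add (or update) a custom metadata node, e.g. a test priority.
        node = root.find("Priority")
        if node is None:
            node = ET.SubElement(root, "Priority")
        node.text = "high"

        tree.write(path, encoding="utf-8", xml_declaration=True)

    Making the search screen, the ?properties responder and the test history actually honour such nodes would still require changes on the FitNesse side.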

    Read the article

  • Model login constraints based on time

    - by DaDaDom
    Good morning, for an existing web application I need to implement "time-based login constraints". That means that for each user, and later maybe each group, I can define timeslots when they are (not) allowed to log in to the system. As all data for the application is stored in database tables, I need to model this idea in that way. My first approach, which I will try to explain here:

    - Create a tree of login constraints (called "timeslots") with the main "categories", like "workday", "weekend", "public holiday", etc. on the top level, which are in a "sorted" order (meaning "public holiday" has a higher priority than "weekday").
    - For each top-level node create subnodes which have a finer timespan, like "monday", "tuesday", ...
    - Below that, create an "hour" level: 0, 1, 2, ..., 23. No further detail is necessary.
    - Set every member to "allowed" by default.
    - For every member of the system create a 1:n relationship member:timeslots which defines constraints, e.g. member A may have A:monday-forbidden and A:tuesday-forbidden.
    - Do a depth-first search at every login and check whether the member has a constraint.

    Why a depth-first search? Well, I thought that a member might have the rules A:monday->forbidden, A:monday-10->allowed, A:monday-11->allowed, so a login on Monday at 12:30 would fail, but one at 10:30 would succeed.

    For performance reasons I could break the relational database paradigm and set a flag on every entry in the member-to-timeslots table which is true if the member has information set for "finer" timeslots, but that's a second step. Is this model in principle a good idea? Are there existing models? Thanks.
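    A hedged sketch of the "most specific rule wins" lookup the tree is meant to support, using a flat dict of rules instead of database rows; the key structure, category names and default are illustrative only:

        # Rules keyed by (category, day, hour); True = allowed, False = forbidden.
        # None acts as a wildcard, so a day-level rule has hour=None.
        rules = {
            ('workday', 'monday', 10): True,     # A:monday-10 -> allowed
            ('workday', 'monday', 11): True,     # A:monday-11 -> allowed
            ('workday', 'monday', None): False,  # A:monday    -> forbidden
        }

        DEFAULT_ALLOWED = True

        def may_log_in(category, day, hour):
            """Check the most specific matching rule first, then fall back."""
            for key in ((category, day, hour), (category, day, None), (category, None, None)):
                if key in rules:
                    return rules[key]
            return DEFAULT_ALLOWED

        print(may_log_in('workday', 'monday', 10))  # True  (hour-level rule wins)
        print(may_log_in('workday', 'monday', 12))  # False (day-level rule applies)
        print(may_log_in('workday', 'tuesday', 9))  # True  (no rule -> default)

    In SQL the same idea becomes a query ordered by specificity (hour, then day, then category) taking the first matching row, which avoids walking the whole tree at login time.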

    Read the article

  • Overriding Node Paths

    - by fighella
    Can a PAGE override the priority of a NODE? I want to use the power of "Pages" to override my links from the nodes... For example: a node's link is /content/202/hello-world. I want to use my Panels to use the URL to create a "PAGE"... The panels can use the arguments from the URL to create a pretty cool page around the content of a node... So the PAGE path I've made is /content/%argument1/%title... I need the links created to the node, from the node itself, to go to this panel page and not to the content created by the node on its own... So I make the path alias do the same as the PAGE settings: /content/nid/title... A hand-typed link to this PAGE works fine when I don't have this as the path alias... It does exactly what I need... but as soon as I make this the path alias, it's as if it goes to the single NODE before it goes to the PAGE... Anyone have any clues? Is there an order in which Drupal looks for the correct URL? I'm sure I have done this before. JD

    Read the article

  • Automatically grow document view of NSScrollView using auto layout?

    - by Monolo
    Is there a simple way to get an NSScrollView to adapt to its document view changing size when using auto layout (the Lion feature)? I have tried to call both setNeedsUpdateConstraints: and setNeedsLayout: on the document view, the clip view and the scroll view, without any results. fittingSize of the document view reports the correct size. An NSPopover in conjunction with an NSViewController handles this nicely, with the popover growing and shrinking as needed, and I was hoping to get similarly simple and robust behaviour with the scroll view. I have checked the documentation for scroll views, but it doesn't seem to be updated for auto layout.

    Edited to clarify: The problem I experience is that the document view, which holds subviews, is not resized when the subviews change their size, even if they call invalidateIntrinsicContentSize. The contents of the document view are hence clipped to the original size of the document view as they grow. The document view is created in a nib and set as the scroll view's document view in an awakeFromNib method. What I hoped for was that the document view frame would automatically be adjusted when its fittingSize changes, and the scroll bars updated accordingly. NSPopover does something similar - provided that the subviews of the content controller's view have the right constraints and the various content hugging values are high enough (higher than the hidden popover window's height constraint priority, for one).

    Read the article

  • XSD any element

    - by ofer shwartz
    Hi! I'm trying to create a list in which some of the elements are defined and some are not, without any priority given to order. I tried it this way, with an any element:

        <?xml version="1.0"?>
        <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:complexType name="object" mixed="true">
            <xs:choice>
              <xs:element name="value" minOccurs="1" maxOccurs="1">
                <xs:simpleType>
                  <xs:restriction base="xs:integer">
                    <xs:enumeration value="1"/>
                  </xs:restriction>
                </xs:simpleType>
              </xs:element>
              <xs:any namespace="##any" processContents="skip"/>
            </xs:choice>
          </xs:complexType>
          <xs:element name="object" type="object"/>
        </xs:schema>

    And it gives me this error:

        :0:0: error: complex type 'object' violates the unique particle attribution rule in its components 'value' and '##any'

    Can someone help me solve the problem? Ofer

    Read the article

  • Custom webserver caching

    - by Mark Kinsella
    I'm working with a custom webserver on an embedded system and having some problems correctly setting my HTTP headers for caching. Our webserver generates all dynamic content as XML and we're using semi-static XSL files to display it, with some dynamic JSON requests thrown in for good measure along with semi-static images. I say "semi-static" because the problems occur when we need to do a firmware update, which might change the XSL and image files.

    Here's what needs to be done: cache the XSL and image files, and do not cache the XML and JSON responses. I have full control over the HTTP response and am currently:

    - using ETags with the XSL and image files, using the modified time and size to generate the ETag
    - setting Cache-Control: no-cache on the XML and JSON responses

    As I said, everything works dandy until a firmware update, when the XSL and image files are sometimes cached. I've seen it work fine with the latest versions of Firefox and Safari but have had some problems with IE. I know one solution to this problem would be simply to rename the XSL and image files after each version (e.g. logo-v1.1.png, logo-v1.2.png) and set the Expires header to a date in the future, but this would be difficult with the XSL files and I'd like to avoid it.

    Note: There is a clock on the unit, but it requires the user to set it and might not be 100% reliable, which is what might be causing my caching issues when using ETags. What's the best practice that I should employ? I'd like to avoid as many webserver requests as possible, but invalidating old XSL and image files after a software update is the #1 priority.
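    For reference, a minimal sketch of the header logic described above (mtime+size ETag, 304 on a matching If-None-Match, no-cache for the dynamic responses). The helper names are invented and this is not the embedded server's actual code:

        import os

        def make_etag(path):
            """ETag built from the file's modification time and size."""
            st = os.stat(path)
            return '"%x-%x"' % (int(st.st_mtime), st.st_size)

        def static_headers(path, if_none_match=None):
            """Return (status, headers) for a semi-static file such as an XSL or image."""
            etag = make_etag(path)
            if if_none_match == etag:
                return 304, {"ETag": etag}
            return 200, {"ETag": etag, "Cache-Control": "max-age=0, must-revalidate"}

        def dynamic_headers():
            """Headers for XML/JSON responses that must never be cached."""
            return 200, {"Cache-Control": "no-cache, no-store", "Pragma": "no-cache"}

    With max-age=0, must-revalidate the browser re-asks on every load but only re-downloads the body when the ETag changes after a firmware update, which tends to be more reliable across IE than heuristic caching without explicit freshness information.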

    Read the article

  • Problem configuring JBoss to work with JNDI

    - by Spiderman
    I am trying to bind a connection to the DB using JNDI in my application, which runs on JBoss. I did the following:

    1. I created the datasource file oracle-ds.xml and filled it with the relevant XML elements:

        <datasources>
          <local-tx-datasource>
            <jndi-name>bilby</jndi-name>
            ...
          </local-tx-datasource>
        </datasources>

    and put it in the folder \server\default\deploy.

    2. Added the relevant Oracle jar file.

    3. Then in my application I performed:

        JndiObjectFactoryBean factory = new JndiObjectFactoryBean();
        factory.setJndiName("bilby");
        try {
            factory.afterPropertiesSet();
            dataSource = factory.getObject();
        } catch (NamingException ne) {
            ne.printStackTrace();
        }

    and this caused the error:

        javax.naming.NameNotFoundException: bilby not bound

    Then in the output, after this error occurred, I saw the line:

        18:37:56,560 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=bilby' to JNDI name 'java:bilby'

    So what is my configuration problem? I think it may be that JBoss first loads and runs the .war file of my application and only then loads the oracle-ds.xml that contains my data-source definition. The problem is that they are both located in the same folder. Is there a way to define the priority of loading them, or maybe this is not the problem at all? Any idea?

    Read the article

  • Rails running multiple delayed_job - lock tables

    - by pepernik
    Hey. I use delayed_job for background processing. I have an 8 CPU server, MySQL, and I start 7 delayed_job processes:

        RAILS_ENV=production script/delayed_job -n 7 start

    Q1: I'm wondering whether it is possible that 2 or more delayed_job processes start processing the same job (the same record/row in the delayed_jobs table). I checked the code of the delayed_job plugin but cannot find the lock directive in the way it should be. I think each process should lock the database table before executing an UPDATE on the locked_by column. They lock the record simply by updating the locked_by field (UPDATE delayed_jobs SET locked_by...). Is that really enough? No locking needed? Why? I know that UPDATE has a higher priority than SELECT, but I think this does not have an effect in this case. My understanding of the multi-threaded situation is:

        Process1: Get waiting job X. [OK]
        Process2: Get waiting job X. [OK]
        Process1: Update locked_by field. [OK]
        Process2: Update locked_by field. [OK]
        Process1: Get waiting job X. [Already processed]
        Process2: Get waiting job X. [Already processed]

    I think in some cases more than one worker can get the same record and start processing the same job.

    Q2: Are 7 delayed_job processes a good number for an 8 CPU server? Why or why not? Thanks!
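    The pattern delayed_job leans on is an atomic conditional UPDATE (optimistic locking): the UPDATE only matches a row whose lock column is still empty, so exactly one worker sees an affected-row count of 1 and the others see 0. A hedged, self-contained illustration of that idea using sqlite3, not delayed_job's actual SQL:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE delayed_jobs (id INTEGER PRIMARY KEY, locked_by TEXT)")
        db.execute("INSERT INTO delayed_jobs (id, locked_by) VALUES (1, NULL)")
        db.commit()

        def try_lock(worker):
            """Atomically claim job 1; returns True only for the first worker."""
            cur = db.execute(
                "UPDATE delayed_jobs SET locked_by = ? WHERE id = 1 AND locked_by IS NULL",
                (worker,),
            )
            db.commit()
            return cur.rowcount == 1   # 0 rows affected means someone else won

        print(try_lock("worker-1"))  # True  - the row was unlocked
        print(try_lock("worker-2"))  # False - the guard condition no longer matches

    Because the guard (locked_by IS NULL) and the write happen in a single statement, no explicit table lock is needed; a second worker that read the job before the first worker's UPDATE simply fails to claim it and moves on.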

    Read the article

  • Call to a member function num_rows() on a non-object

    - by Patrick
    I need to get the number of rows of a query (so I can paginate the results). As I'm learning CodeIgniter (and OO PHP) I wanted to try chaining ->num_rows() onto the query, but it doesn't work:

        // this works:
        $data['count'] = count($this->events->findEvents($data['date'], $data['keyword']));

        // the following doesn't work and generates
        // Fatal Error: Call to a member function num_rows() on a non-object
        $data['count2'] = $this->events->findEvents($data['date'], $data['keyword'])->num_rows();

    The model returns an array of objects, and I think this is the reason why I can't call a method on it:

        function findEvents($date, $keyword, $limit = NULL, $offset = NULL)
        {
            $data = array();
            $this->db->select('events.*, venues.*, events.venue AS venue_id');
            $this->db->join('venues', 'events.venue = venues.id');
            if ($date) {
                $this->db->where('date', $date);
            }
            if ($keyword) {
                $this->db->like('events.description', $keyword);
                $this->db->or_like('venues.description', $keyword);
                $this->db->or_like('band', $keyword);
                $this->db->or_like('venues.venue', $keyword);
                $this->db->or_like('genre', $keyword);
            }
            $this->db->order_by('date', 'DESC');
            $this->db->order_by('events.priority', 'DESC');
            $this->db->limit($limit, $offset); // for pagination purposes
            $Q = $this->db->get('events');
            if ($Q->num_rows() > 0) {
                foreach ($Q->result() as $row) {
                    $data[] = $row;
                }
            }
            $Q->free_result();
            return $data;
        }

    Is there anything I can do to make this work? E.g., instead of $data[] = $row; should I use some other (OO) syntax?

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to the array that stores the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    2. Pre-allocate temporary arrays and store them as final fields in the algorithm object - the big downside is that this would mean only one thread could run the algorithm at any one time.
    3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
    4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space...
    5. Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array so that different threads use a different area of the array independently. This will obviously require some code to manage the offsets and the allocation of the array ranges.

    Any thoughts on which approach would be best (and why)?
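    Option 3 is the one that keeps the API clean while staying allocation-free in steady state. A minimal sketch of the idea, written here in Python with threading.local standing in for Java's ThreadLocal<double[]>; the buffer size, names and the toy arithmetic are illustrative only:

        import threading
        from array import array

        _scratch = threading.local()

        def get_scratch(n=100_000):
            """Return this thread's reusable buffer, allocating it on first use."""
            buf = getattr(_scratch, "buf", None)
            if buf is None or len(buf) < n:
                buf = array("d", bytes(8 * n))   # zero-filled array of n doubles
                _scratch.buf = buf
            return buf

        def run_my_algorithm(input_data):
            tmp = get_scratch(len(input_data))   # no per-call allocation after warm-up
            out = array("d", bytes(8 * len(input_data)))
            for i, x in enumerate(input_data):
                tmp[i] = x * x                   # stand-in for real numerical work
                out[i] = tmp[i] + x
            return out

        print(list(run_my_algorithm(array("d", [1.0, 2.0, 3.0]))))  # [2.0, 6.0, 12.0]

    The Java equivalent would hold a ThreadLocal<double[]> initialised lazily per thread, so each worker reuses its own scratch array across millions of calls without any shared-state contention.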

    Read the article

  • Does SetFileBandwidthReservation affect memory-mapped file performance?

    - by Ghostrider
    Does this function affect memory-mapped file performance? Here's the problem I need to solve: I have two applications competing for disk access, a "reader" and an "updater". The whole system runs on Windows Server 2008 R2 x64.

    The "updater" constantly accesses the disk in a linear manner, updating data. The system is set up in such a way that the updater always has infinite data to update. Consider that it is constantly approximating a solution to a huge set of equations that takes up an entire 2 TB disk drive. The updater uses ReadFile and WriteFile to process data in a linear fashion.

    The "reader" is occasionally invoked by the user to get some pieces of data. Usually the user reads several 4 KB blocks from the drive and stops. Occasionally the user needs to read up to 100 MB sequentially, and in exceptional cases up to several gigabytes. The reader maps files to memory to get the data it needs.

    What I would like to achieve is for the "reader" to have absolute priority, so that the "updater" would completely stop if needed and the "reader" could get the data the user needs ASAP. Is this problem solvable by using the SetPriorityClass and SetFileBandwidthReservation calls? I would really hate to put synchronization logic in the "reader" and "updater" and would rather have the OS take care of the priorities.

    Read the article

  • How to control the order of module initialization in Prism

    - by Robert Taylor
    I'm using Prism V2 with a DirectoryModuleCatalog and I need the modules to be initialized in a certain order. The desired order is specified with an attribute on each IModule implementation. This is so that as each module is initialized, it adds its View to a TabControl region, and the order of the tabs needs to be deterministic and controlled by the module author. The order does not imply a dependency, but rather just an order that they should be initialized in. In other words: modules A, B, and C may have priorities of 1, 2, and 3 respectively. B does not have a dependency on A - it just needs to get loaded into the TabControl region after A, so that we have a deterministic and controllable order of tabs. Also, B might not exist at runtime, so the modules would load as A, C because the priority should determine the order (1, 3). If I used ModuleDependency, then module C would not be able to load without all of its dependencies. I can manage the logic of how to sort the modules, but I can't figure out where to put that logic.

    Read the article

  • In C++, what is the scope resolution ("order of precedence") for shadowed variable names?

    - by Emile Cormier
    In C++, what is the scope resolution ("order of precedence") for shadowed variable names? I can't seem to find a concise answer online. For example:

        #include <iostream>

        int shadowed = 1;

        struct Foo
        {
            Foo() : shadowed(2) {}

            void bar(int shadowed = 3)
            {
                std::cout << shadowed << std::endl; // What does this output?
                {
                    int shadowed = 4;
                    std::cout << shadowed << std::endl; // What does this output?
                }
            }

            int shadowed;
        };

        int main()
        {
            Foo().bar();
        }

    I can't think of any other scopes where a variable might conflict. Please let me know if I missed one. What is the order of priority for all four shadow variables when inside the bar member function?

    Read the article

  • jQuery Dragging Outside Parent

    - by Andy
    I'm using jQuery's draggable() to make li elements draggable. The only problem is that when an element is dragged outside its parent, it doesn't show up; that is, it doesn't leave the confines of its parent div. An example is here: http://imgur.com/N2N1L.png - it's as if the z-index for everything else has greater priority (and the element slides under everything). Here's the code:

        $('.photoList li').draggable({
            distance: 20,
            snap: theVoteBar,
            revert: true,
            containment: 'document'
        });

    And the li elements are placed in DIVs like this:

        <div class="coda-slider preload" id="coda-slider-1">
          <div class="panel">
            <div class="panel-wrapper">
              <h2 class="title" style="display:none;">Page 1</h2>
              <ul class="photoList">
                <li>
                  <img class="ui-widget-content" src="photos/1.jpg" style="width:100px;height:100px;" />
                </li>
              </ul>
            </div>
          </div>

    I'm positive the problem is that it won't leave its parent container, but I'm not sure what to do to get it out. Any direction or help would be appreciated GREATLY!

    Read the article

  • How do I improve this design for dealing with intersecting date ranges and resolving date range conflicts?

    - by derdo
    I am working on some code that deals with date ranges. I have pricing activities that have a start date and an end date, setting a certain price for that range. There are multiple pricing activities with intersecting date ranges. What I ultimately need is the ability to query valid prices for a date range: I pass in (jan1, jan31) and I get back a list that says jan1-jan10 $4, jan11-jan19 $3, jan20-jan31 $4. There are priorities between pricing activities. Some types of pricing activities have high priority, so they override other pricing activities, and for certain types of pricing activities the lowest price wins, etc.

    I currently have a class that holds these pricing activities and keeps a resolved pricing calendar. As I add new pricing activities I update the resolved calendar. As I write more tests/code, it has started to get very complicated, with all the different cases of pricing activities intersecting in different ways. I am ending up with very complicated code where I resolve a newly added pricing activity; see the AddPricingActivity() method below. Can anybody think of a simpler way to deal with this? Could there be similar code somewhere out there?

        public class PricingActivity
        {
            DateTime startDate;
            DateTime endDate;
            Double price;

            public bool StartsBeforeEndsAfter(PricingActivity pAct)
            {
                // pAct covers this pricing activity
            }

            public bool StartsMiddleEndsAfter(PricingActivity pAct)
            {
                // early part of pAct intersects with later part of this pricing activity
            }

            // more similar methods to cover all the combinations of intersecting
        }

        public class PricingActivityList
        {
            List<PricingActivity> activities;
            SortedDictionary<Date, PricingActivity> resolvedPricingCalendar;

            public void AddPricingActivity(PricingActivity pAct)
            {
                // update the resolvedCalendar
                // go over each activity and find intersecting ones
                // update the resolved calendar correctly
                // depending on the type of the intersection
                // this part is getting out of hand...
            }
        }
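    One way to avoid enumerating every pairwise intersection case is to stop resolving ranges against each other and instead resolve per day: collect all activities covering a date, let the priority/lowest-price rules pick the winner, then merge equal-priced neighbouring days back into ranges. A hedged sketch of that approach; the dates, prices and tie-break rule are illustrative, not the poster's actual business rules:

        from datetime import date, timedelta

        # (start, end, price, priority): higher priority overrides; ties take the lowest price.
        activities = [
            (date(2010, 1, 1), date(2010, 1, 31), 4.0, 1),
            (date(2010, 1, 11), date(2010, 1, 19), 3.0, 2),
        ]

        def price_for(day):
            covering = [a for a in activities if a[0] <= day <= a[1]]
            if not covering:
                return None
            top = max(a[3] for a in covering)
            return min(a[2] for a in covering if a[3] == top)

        def resolve(start, end):
            """Return a list of (from, to, price) segments covering [start, end]."""
            segments = []
            day = start
            while day <= end:
                p = price_for(day)
                if segments and segments[-1][2] == p:
                    segments[-1] = (segments[-1][0], day, p)    # extend the current segment
                else:
                    segments.append((day, day, p))
                day += timedelta(days=1)
            return segments

        for seg in resolve(date(2010, 1, 1), date(2010, 1, 31)):
            print(seg)   # jan1-jan10 4.0, jan11-jan19 3.0, jan20-jan31 4.0

    For long calendars the per-day loop can be replaced by iterating only over the distinct start/end boundary points, but the resolution rule stays a single function instead of a family of intersection cases.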

    Read the article

  • How can I work out what events are being waited for with WinDBG in a kernel debug session

    - by Benj
    I'm a complete WinDbg newbie and I've been trying to debug a Windows XP problem that a customer has sent me, where our software and some third-party software prevent Windows from logging off. I've reproduced the problem and have verified that the logoff problem only occurs when our software and the customer's software are both installed (although not necessarily running at logoff). I've observed that WM_ENDSESSION messages are not reaching the running windows when the user tries to log off, and I know that the third-party software uses a kernel driver. I've been looking at the processes in WinDbg and I know that csrss.exe would normally send all the windows a WM_ENDSESSION message. When I ran:

        !process 82356020 6

    to look at csrss.exe's stack, I can see:

        WARNING: Frame IP not in any known module. Following frames may be wrong.
        00000000 00000000 00000000 00000000 00000000 0x7c90e514

        THREAD 8246d998  Cid 0248.02a0  Teb: 7ffd7000  Win32Thread: e1627008  WAIT: (WrUserRequest) UserMode Non-Alertable
            8243d9f0  SynchronizationEvent
            81fe0390  SynchronizationEvent
        Not impersonating
        DeviceMap                 e1004450
        Owning Process            82356020       Image:         csrss.exe
        Attached Process          N/A            Image:         N/A
        Wait Start TickCount      1813           Ticks: 20748 (0:00:05:24.187)
        Context Switch Count      3              LargeStack
        UserTime                  00:00:00.000
        KernelTime                00:00:00.000
        Start Address 0x75b67cdf
        Stack Init f80bd000 Current f80bc9c8 Base f80bd000 Limit f80ba000 Call 0
        Priority 14 BasePriority 13 PriorityDecrement 0 DecrementCount 0
        Kernel stack not resident.
        ChildEBP RetAddr  Args to Child
        f80bc9e0 80500ce6 00000000 8246d998 804f9af2 nt!KiSwapContext+0x2e (FPO: [Uses EBP] [0,0,4])
        f80bc9ec 804f9af2 804f986e e1627008 00000000 nt!KiSwapThread+0x46 (FPO: [0,0,0])
        f80bca24 bf80a4a3 00000002 82475218 00000001 nt!KeWaitForMultipleObjects+0x284 (FPO: [Non-Fpo])
        f80bca5c bf88c0a6 00000001 82475218 00000000 win32k!xxxMsgWaitForMultipleObjects+0xb0 (FPO: [Non-Fpo])
        f80bcd30 bf87507d bf9ac0a0 00000001 f80bcd54 win32k!xxxDesktopThread+0x339 (FPO: [Non-Fpo])
        f80bcd40 bf8010fd bf9ac0a0 f80bcd64 00bcfff4 win32k!xxxCreateSystemThreads+0x6a (FPO: [Non-Fpo])
        f80bcd54 8053d648 00000000 00000022 00000000 win32k!NtUserCallOneParam+0x23 (FPO: [Non-Fpo])
        f80bcd54 7c90e514 00000000 00000022 00000000 nt!KiFastCallEntry+0xf8 (FPO: [0,0] TrapFrame @ f80bcd64)

    This KeWaitForMultipleObjects call looks interesting, because I'm wondering if csrss.exe is waiting on some event which isn't arriving to allow the logoff. Can anyone tell me how I might find out what event it's waiting for, and anything else I might do to further investigate the problem?

    Read the article

  • create an independent hidden process

    - by Jessica
    I'm creating an application with its main window hidden by using the following code:

        STARTUPINFO siStartupInfo;
        PROCESS_INFORMATION piProcessInfo;
        memset(&siStartupInfo, 0, sizeof(siStartupInfo));
        memset(&piProcessInfo, 0, sizeof(piProcessInfo));
        siStartupInfo.cb = sizeof(siStartupInfo);
        siStartupInfo.dwFlags = STARTF_USESHOWWINDOW | STARTF_FORCEOFFFEEDBACK | STARTF_USESTDHANDLES;
        siStartupInfo.wShowWindow = SW_HIDE;

        if (CreateProcess(MyApplication, "", 0, 0, FALSE, 0, 0, 0, &siStartupInfo, &piProcessInfo) == FALSE)
        {
            // blah
            return 0;
        }

    Everything works correctly except that my main application's window (the one calling this code) loses focus when I open the new program. I tried lowering the priority of the new process, but the focus problem is still there. Is there any way to avoid this? Furthermore, is there any way to create another process without using CreateProcess (or any of the APIs that call CreateProcess, like ShellExecute)? My guess is that my app is losing focus because focus was given to the new process, even though it's hidden.

    To those of you curious out there who will certainly ask the usual "why do you want to do this": my answer is that I have a watchdog process that cannot be a service and it gets started whenever I open my main application. Satisfied? Thanks for the help. Code will be appreciated. Jess.

    Read the article
