Search Results

Search found 24879 results on 996 pages for 'prime number'.

Page 333/996 | < Previous Page | 329 330 331 332 333 334 335 336 337 338 339 340  | Next Page >

  • How can I increment a counter every N loops in JMeter?

    - by Dave Hunt
    I want to test concurrency, and reliably replicate an issue that JMeter brought to my attention. What I want to do is set a unique identifier (currently the time in milliseconds with a counter appended) and increment the counter between loops but not between threads. The idea is that the number of threads I have set up is the number of identical identifiers produced before the counter increments and a new identifier is used. If I had 3 threads with a loop count of 2, I would want:

    1. Unique ID: <current-time-in-millis>000000
    2. Unique ID: <current-time-in-millis>000000
    3. Unique ID: <current-time-in-millis>000000
    4. Unique ID: <current-time-in-millis>000001
    5. Unique ID: <current-time-in-millis>000001
    6. Unique ID: <current-time-in-millis>000001

    I've tried using Throughput Controllers to increment a counter, as well as several other approaches that seemed like they should work, but had no luck. This seems like something JMeter should be able to do. Is there any way to get the value of the loop count?
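
    One possible route is a scripted pre-processor. Below is a minimal JSR223 (Groovy) sketch, not a tested plan fragment; the property name test.start.millis and the variable uniqueId are illustrative names, not anything JMeter defines:

        // JSR223 PreProcessor (Groovy) - a sketch, assuming one Thread Group.
        // Capture the test start time once, shared across all threads via props.
        def start = props.computeIfAbsent('test.start.millis', { k -> System.currentTimeMillis() })

        // vars.getIteration() is this thread's current Thread Group iteration
        // (1-based), so every thread produces the same counter on the same pass.
        def counter = vars.getIteration() - 1

        vars.put('uniqueId', String.format('%s%06d', start, counter))

    Samplers can then reference ${uniqueId}.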

    Read the article

  • Common Lisp condition system for transfer of control

    - by Ken
    I'll admit right up front that the following is a pretty terrible description of what I want to do. Apologies in advance. Please ask questions to help me explain. :-)

    I've written ETLs in other languages that consist of individual operations that look something like:

        // in class CountOperation
        IEnumerable<Row> Execute(IEnumerable<Row> rows) {
            var count = 0;
            foreach (var row in rows) {
                row["record number"] = count++;
                yield return row;
            }
        }

    Then you string a number of these operations together and call The Dispatcher, which is responsible for calling Operations and pushing data between them.

    I'm trying to do something similar in Common Lisp, and I want to use the same basic structure, i.e., each operation is defined like a normal function that inputs a list and outputs a list, but lazily. I can define-condition a condition (have-value) to use for yield-like behavior, and I can run it in a single loop, and it works great. I'm defining the operations the same way, looping through the inputs:

        (defun count-records (rows)
          (loop for count from 0
                for row in rows
                do (signal 'have-value :value `(:count ,count ,@row))))

    The trouble is if I want to string together several operations and run them. My first attempt at writing a dispatcher for these looks something like:

        (let ((next-op ...)) ;; pick an op from the set of all ops
          (loop
            (handler-bind ((have-value (...))) ;; records output from operation
              (setq next-op ...) ;; pick a new next-op
              (call next-op))))

    But restarts have only dynamic extent: each operation will have the same restart names. The restart isn't a Lisp object I can store to hold the state of a function: it's something you call by name (symbol) inside the handler block, not a continuation you can store for later use.

    Is it possible to do something like I want here? Or am I better off just making each operation function explicitly look at its input queue, and explicitly place values on the output queue?
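
    On the last point: a pull-based pipeline needs no condition system at all. A minimal sketch under that assumption (list-source and the rewritten count-records are illustrative names), where each operation wraps its upstream producer in a closure and yields one row per call, NIL when exhausted:

        (defun list-source (rows)
          (lambda () (pop rows)))

        (defun count-records (next)
          (let ((count -1))
            (lambda ()
              (let ((row (funcall next)))
                (when row
                  (incf count)
                  (list* :count count row))))))

        ;; Usage: compose, then pull until NIL.
        (let ((op (count-records (list-source '((:name "a") (:name "b"))))))
          (loop for row = (funcall op)
                while row
                collect row))
        ;; => ((:COUNT 0 :NAME "a") (:COUNT 1 :NAME "b"))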

    Read the article

  • Factorial in Prolog and C++

    - by Joshua Green
    I would like to work out a number's factorial. My factorial rule is in a Prolog file and I am connecting it to C++ code. Can someone tell me what is wrong with my C++ interface, please?

        % factorial.pl
        factorial( 1, 1 ) :- !.
        factorial( X, Fac ) :-
            X > 1,
            Y is X - 1,
            factorial( Y, New_Fac ),
            Fac is X * New_Fac.

        // factorial.cpp
        # headerfiles

        term_t t1;
        term_t t2;
        term_t goal_term;
        functor_t goal_functor;

        int main( int argc, char** argv )
        {
            argc = 4;
            argv[0] = "libpl.dll";
            argv[1] = "-G32m";
            argv[2] = "-L32m";
            argv[3] = "-T32m";

            PL_initialise(argc, argv);
            if ( !PL_initialise(argc, argv) )
                PL_halt(1);

            PlCall( "consult(swi('plwin.rc'))" );
            PlCall( "consult('factorial.pl')" );

            cout << "Enter your factorial number: ";
            long n;
            cin >> n;

            PL_put_integer( t1, n );
            t1 = PL_new_term_ref();
            t2 = PL_new_term_ref();
            goal_term = PL_new_term_ref();
            goal_functor = PL_new_functor( PL_new_atom("factorial"), 2 );
            PL_put_atom( t1, t2 );
            PL_cons_functor( goal_term, goal_functor, t1, t2 );

            PL_halt( PL_toplevel() ? 0 : 1 );
        }
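
    For comparison, here is a hedged sketch (untested) of the call sequence the SWI-Prolog foreign interface expects. It highlights two issues in the code above: t1 is written before any term refs exist, and the goal is never actually called:

        // Sketch only; error handling omitted.
        t1 = PL_new_term_ref();
        t2 = PL_new_term_ref();
        goal_term = PL_new_term_ref();
        goal_functor = PL_new_functor( PL_new_atom("factorial"), 2 );

        PL_put_integer( t1, n );                            // bind the input argument
        PL_cons_functor( goal_term, goal_functor, t1, t2 ); // factorial(N, Fac)

        if ( PL_call( goal_term, NULL ) )                   // run the query
        {
            long fac;
            PL_get_long( t2, &fac );                        // read the answer back
            cout << "Factorial: " << fac << endl;
        }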

    Read the article

  • Storing float numbers as strings in android database

    - by sandis
    So I have an app where I put arbitrary strings in a database and later extract them like this:

        Cursor DBresult = myDatabase.query(false, Constant.DATABASE_NOTES_TABLE_NAME,
                new String[] {"myStuff"}, where, null, null, null, null, null);
        DBresult.getString(0);

    This works fine in all cases except for when the string looks like a float number, for example "221.123123123". After saving it to the database I can extract the database to my computer and look inside it with a DB-viewer, and the saved number is correct. However, when using cursor.getString() the string "221.123" is returned. I can't for the life of me understand how I can prevent this. I guess I could do a cursor.getDouble() on every single string to see if this gives a better result, but that feels so ugly and inefficient. Any suggestions? Cheers

    edit: I just made a small test program. This program prints "result: 123.123", when I would like it to print "result: 123.123123123":

        SQLiteDatabase database = openOrCreateDatabase("databas", Context.MODE_PRIVATE, null);
        database.execSQL("create table if not exists tabell (nyckel string primary key);");

        ContentValues value = new ContentValues();
        value.put("nyckel", "123.123123123");
        database.insert("tabell", null, value);

        Cursor result = database.query("tabell", new String[]{"nyckel"}, null, null, null, null, null);
        result.moveToFirst();
        Log.d("TAG", "result: " + result.getString(0));
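
    One likely culprit, offered as a hedged suggestion: SQLite assigns column affinity from the declared type, and "string" matches none of its affinity keywords (TEXT, CHAR, CLOB, INT, REAL, ...), so the column falls back to NUMERIC affinity and "123.123123123" is quietly stored as a floating-point number. Declaring the column as TEXT should preserve the exact string:

        // The test program again, with TEXT affinity on the column.
        SQLiteDatabase database = openOrCreateDatabase("databas", Context.MODE_PRIVATE, null);
        database.execSQL("create table if not exists tabell (nyckel TEXT primary key);");

        ContentValues value = new ContentValues();
        value.put("nyckel", "123.123123123");
        database.insert("tabell", null, value);
        // getString(0) should now return the full "123.123123123".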

    Read the article

  • Drawing a honeycomb with AS3

    - by vitto
    Hi, I'm trying to create a honeycomb with AS3, but I have a problem with cell positioning. I've already created the cells (not with code) and looped them through a function, passing the parameters I thought were needed (the honeycomb cell is already in a sprite container in the center of the stage). To see the structure of the loop and which parameters it passes, please see the example below; the only thing I calculate in placeCell is the angle, which I could instead obtain directly inside the called function.

    Note: the angle is reversed, but that isn't important, and the colors in the example are only there to visually divide the cases.

    My for loop calls placeCell and passes cell, current_case, counter (index) and the honeycomb cell_lv (cell level). I thought this was what I needed, but I'm not skilled in geometry and trigonometry, so I don't know how to position the cells correctly:

        function placeCell (cell:Sprite, current_case:int, counter:int, cell_lv:int):void {
            var margin:int = 2;
            var angle:Number = (360 / (cell_lv * 6)) * (current_case + counter);
            var radius:Number = (cell.width + margin) * cell_lv;
            cell.x = radius * Math.cos (angle);
            cell.y = radius * Math.sin (angle);
            trace ("LV " + cell_lv + " current_case " + current_case + " counter " + counter + " angle " + angle + " radius " + radius)
        }

    How can I solve this?
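
    One concrete issue worth flagging as a hedged observation: Math.cos() and Math.sin() take radians, while the angle above is computed in degrees, so converting first should at least place the cells on the intended circle:

        var radians:Number = angle * Math.PI / 180;
        cell.x = radius * Math.cos(radians);
        cell.y = radius * Math.sin(radians);

    Note also that on a hexagonal ring only the six corner cells are equidistant from the center, so a constant radius per level will still drift for the cells along the ring's edges.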

    Read the article

  • Any diff/merge tool that provides a report (metrics) of conflicts?

    - by cad
    CONTEXT: I am preparing a big C# merge using Visual Studio 2008 and TFS. I need to create a report with the files and the number of collisions (total changes and conflicts) for each file (and in total, of course).

    PROBLEM: I cannot do it, for two reasons (the first one is solved):

    1. Using TFS merge I can access the file comparison, but I cannot export the list of conflicting files... I can only try to resolve the conflicts. (I have solved problem 1 using Beyond Compare. It allows me to export the file list.)

    2. Using TFS merge I can only get the number of conflicts manually, file by file... but I have more than 800 files (and will probably have to repeat this in the near future, so doing it manually is not an option).

    There are dozens of file comparison tools (http://en.wikipedia.org/wiki/Comparison_of_file_comparison_tools), but I am not sure which one could (if any) give me these metrics. I have also read several forums and questions here, but they are more general questions (which diff tool is better) and I am looking for a very specific report. So my questions are:

    Is Visual Studio 2010 (still using TFS 2008) capable of producing such reports/exports? Is there any tool that provides this kind of metrics? (Right now I am trying Beyond Compare.)

    Read the article

  • while loop in c#

    - by Nave Tseva
    I have this code:

        using System;

        namespace _121119_zionAVGfilter
        {
            class Program
            {
                static void Main(string[] args)
                {
                    int cnt = 0, zion, sum = 0;
                    double avg;
                    Console.Write("Enter first zion \n");
                    zion = int.Parse(Console.ReadLine());
                    while (zion != -1)
                    {
                        while (zion < -1 || zion > 100)
                        {
                            Console.Write("zion can be between 0 to 100 only! \nyou can rewrite the zion here, or Press -1 to see the avg\n");
                            zion = int.Parse(Console.ReadLine());
                        }
                        cnt++;
                        sum = sum + zion;
                        Console.Write("Enter next zion, if you want to exit tap -1 \n");
                        zion = int.Parse(Console.ReadLine());
                        if (cnt != 0) { }
                    }
                    if (cnt == 0)
                    {
                        Console.WriteLine("something doesn't make sence");
                    }
                    else
                    {
                        avg = (double)sum / cnt;
                        Console.Write("the AVG is {0}", avg);
                    }
                    Console.ReadLine();
                }
            }
        }

    The problem here is that if at the beginning I enter a negative or greater-than-100 number, I get this message: "zion can be between 0 to 100 only! \nyou can rewrite the zion here, or Press -1 to see the avg\n". If I then enter -1, this is what shows up instead of the AVG: "Enter next zion, if you want to exit tap -1 \n". How can I solve this problem so that when the number is negative or greater than 100 and I then tap -1, I will see the AVG and not another message?
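
    A hedged sketch of one restructuring: check for the sentinel right after the re-prompt, so -1 jumps straight to the outer loop's exit test instead of being counted as a grade:

        while (zion != -1)
        {
            while (zion < 0 || zion > 100)
            {
                Console.Write("zion can be between 0 to 100 only! \nyou can rewrite the zion here, or Press -1 to see the avg\n");
                zion = int.Parse(Console.ReadLine());
                if (zion == -1) break;   // sentinel entered during re-prompt
            }
            if (zion == -1) continue;    // let the outer condition end the loop

            cnt++;
            sum = sum + zion;
            Console.Write("Enter next zion, if you want to exit tap -1 \n");
            zion = int.Parse(Console.ReadLine());
        }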

    Read the article

  • jQuery accordion link issue

    - by Josh
    Slight problem. I've been working with a multi-accordion for a News panel. Everything is working fine, but an issue has just recently come up. Underneath the headline I have information on who posted the headline+article and when, and also whether there are any comments. I intended to make the author and the number of comments into links. The author link would most likely bring the user to their contact page or maybe an email; the number-of-comments link would just expand the article directly to the "View comments" section, which the user could also reach by expanding the article and then expanding the comments. A shortcut, basically. Now, the issue is that I'm having to put this "Posted by..." information inside the a class which allows the user to expand the headline into the article. If I do this, however, it breaks the entire accordion field for that headline, because there are multiple A HREF links inside the original A link. I really don't know how to get around this; if anyone has a tip or a solution I would much appreciate it, thanks. You can view the demo here: http://www.notedls.com/demo
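
    For what it's worth, nested <a> elements are invalid HTML and browsers split them apart while parsing, which matches the breakage described. A hedged sketch of one workaround: keep the author/comments links as siblings of the accordion's trigger link (for example in a "meta" row), and stop their clicks from reaching the accordion handler. The selectors are illustrative:

        // Illustrative selectors; adjust to the actual markup.
        $('.headline .meta a').click(function (e) {
            e.stopPropagation();  // keep the accordion from toggling on these clicks
        });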

    Read the article

  • Multiple IntenseDebate Comment Counts

    - by Aristotle
    I just set up IntenseDebate on my blog this evening and am, for the most part, pleased with it. One thing I did see is that they offer a small snippet to show the current number of comments:

        <script>
        var idcomments_acct = 'abcdefgef12345678mykey8675309acdc';
        var idcomments_post_id;
        var idcomments_post_url;
        </script>
        <script type="text/javascript" src="http://www.intensedebate.com/js/genericLinkWrapperV2.js"></script>

    This is nice, but what I would like to do is have something similar on my archives page, where many posts are listed - not just one. Presently the page looks like this:

        Some Post Title
        Author Name
        Short abstract from this post...

        Some Post Title
        Author Name
        Short abstract from this post...

    I would like it to look like this:

        Some Post Title
        Author Name
        Short abstract from this post...
        7 Comments

        Some Post Title
        Author Name
        Short abstract from this post...
        3 Comments

    But I'm not exactly sure how I can do this with IntenseDebate. Do they offer any sort of method to gather the total number of comments for multiple posts from a single page?

    Read the article

  • [Database] How to model this one-to-one relation?

    - by pbean
    I have several entities which represent different types of users who need to be able to log in to a particular system. Additionally, they have different types of information associated with them. For example: a "general user", which has an e-mail address, and an "admin user", which has a workstation number (note that this is a hypothetical case). Both entities also share common properties like first name, surname, address and telephone number. Finally, they naturally need to have a (unique) user name and a password to log in.

    In the application, the user just has to fill in his user name and password, and the functionality of the application changes slightly according to the type of the user. You can imagine that the user name needs to be unique for this to work. How should I model this effectively?

    I can't just create two tables, because then I can't enforce a unique constraint on the user name. I also can't put them all in just one table, because they have different types of specific information associated with them. I think I might need 3 separate tables: one for "users" (with user name and password), one for the "general users" and another one for the "admin users" - but how would the relations between these work? Or is there another solution?

    (By the way, the target DBMS is MySQL, so I don't think generalization is supported in the database system itself.)
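
    A hedged sketch of the three-table layout described above (all names illustrative): each subtype table reuses the parent's primary key as its own, making the relationship one-to-one, while the unique constraint lives once in users:

        CREATE TABLE users (
            user_id    INT PRIMARY KEY AUTO_INCREMENT,
            username   VARCHAR(50) NOT NULL UNIQUE,  -- uniqueness enforced once
            password   VARCHAR(255) NOT NULL,
            first_name VARCHAR(50),
            surname    VARCHAR(50),
            address    VARCHAR(255),
            telephone  VARCHAR(30)
        );

        CREATE TABLE general_users (
            user_id INT PRIMARY KEY,                 -- PK doubles as FK: one-to-one
            email   VARCHAR(255) NOT NULL,
            FOREIGN KEY (user_id) REFERENCES users(user_id)
        );

        CREATE TABLE admin_users (
            user_id     INT PRIMARY KEY,
            workstation INT NOT NULL,
            FOREIGN KEY (user_id) REFERENCES users(user_id)
        );

    At login, the application looks the user up in users, then joins to whichever subtype table has a matching row to decide which behaviour to enable.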

    Read the article

  • KODO: how to set up a fetch plan for bidirectional relationships?

    - by BestPractices
    Running KODO 4.2 and having an issue with inefficient queries being generated by KODO. This happens when fetching an object that contains a collection, where that collection has a bidirectional relationship back to the first object:

        class Classroom {
            List<Student> _students;
        }

        class Student {
            Classroom _classroom;
        }

    If we create a fetch plan to get a list of Classrooms and their corresponding Students by setting up the following fetch plan:

        fetchPlan.addField(Classroom.class, "_students");

    this will result in two queries (get the classrooms and then get all students that are in those classrooms), which is what we would expect. However, if we include the reference back to the classroom in our fetch plan in order for the _classroom field to get populated, by doing fetchPlan.addField(Student.class, "_classroom"), this will result in X additional queries, where X is the number of students in each classroom.

    Can anyone explain how to fix this? KODO already has the original Classroom objects at the point where it executes the queries to retrieve them and set them in each Student object's _classroom field. So I would expect KODO to simply set those objects in the _classroom field of each Student accordingly and not go back to the database. Once again, the documentation is sorely lacking with Kodo/JDO/OpenJPA, but from what I've read it should be able to do this more efficiently.

    Note - EAGER_FETCH.PARALLEL is turned on, and I have tried this with caching (query and data caches) turned on and off; there is no difference in the resultant queries.

    Read the article

  • RichFaces a4j:support parameter passing

    - by Mark Lewis
    Hello. I have a number of rich:inplaceInput tags in RichFaces which represent numbers in an array. The validator allows integers only. When a user clicks in an input and changes a value, how can I get the bean to sort the array given the new number, and reRender the list of rich:inplaceInput tags so that they're in numerical order? E.g.:

        <a4j:region>
          <rich:dataTable value="#{MyBacking.config}" var="feed" cellpadding="0"
              cellspacing="0" width="100%" border="0" columns="5" id="Admin">
            ...
            <a4j:repeat ...>
              <a4j:region id="MsgCon">
                <rich:inplaceInput value="#{h.id}" validator="#{MyBacking.validateID}"
                    id="andID" showControls="true">
                  <a4j:support event="onviewactivated" action="#{MyBacking.sort}"
                      reRender="Admin" />
                </rich:inplaceInput>
              </a4j:region>
            </a4j:repeat>
          </rich:dataTable>
        </a4j:region>

    Note that I do NOT want to use dataTable sort functions. The table is complicated, and I've specified id="Admin" (i.e. the whole table) as the reRender target because I've not found a way to send more localised values to the backing bean through the inplaceInput. This question is about how to use the a4j:support action attribute to call the sort method, so that when the reRender happens, the component outputs the list in sorted order. I have the sort method working OK when I click a button to sort, but I want the list sorted automatically as soon as a new valid value is entered into the inplaceInput component. Thanks

    Read the article

  • Connecting an overloaded PyQT signal using new-style syntax

    - by Claudio
    I am designing a custom widget which is basically a QGroupBox holding a configurable number of QCheckBox buttons, where each one of them should control a particular bit in a bitmask represented by a QBitArray. In order to do that, I added the QCheckBox instances to a QButtonGroup, with each button given an integer ID:

        def populate(self, num_bits, parent = None):
            """ Adds check boxes to the GroupBox according to the bitmask size """
            self.bitArray.resize(num_bits)
            layout = QHBoxLayout()
            for i in range(num_bits):
                cb = QCheckBox()
                cb.setText(QString.number(i))
                self.buttonGroup.addButton(cb, i)
                layout.addWidget(cb)
            self.setLayout(layout)

    Then, each time a user clicks a checkbox contained in self.buttonGroup, I'd like self.bitArray to be notified so I can set/unset the corresponding bit in the array. For that I intended to connect QButtonGroup's buttonClicked(int) signal to QBitArray's toggleBit(int) method and, to be as pythonic as possible, I wanted to use new-style signal syntax, so I tried this:

        self.buttonGroup.buttonClicked.connect(self.bitArray.toggleBit)

    The problem is that buttonClicked is an overloaded signal, so there is also the buttonClicked(QAbstractButton*) signature. In fact, when the program is executing I get this error when I click a check box:

        The debugged program raised the exception unhandled TypeError
        "QBitArray.toggleBit(int): argument 1 has unexpected type 'QCheckBox'"

    which clearly shows the toggleBit method received the buttonClicked(QAbstractButton*) signal instead of the buttonClicked(int) one. So, the question is: how can we specify, using new-style syntax, that self.buttonGroup emits the buttonClicked(int) signal instead of the default overload, buttonClicked(QAbstractButton*)?
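
    For reference, PyQt's new-style syntax selects an overload by subscripting the signal with the argument type, so the int overload can be requested explicitly:

        self.buttonGroup.buttonClicked[int].connect(self.bitArray.toggleBit)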

    Read the article

  • start-stop-daemon quoted arguments misinterpreted

    - by Martin Westin
    Hi, I have been trying to make an init script using start-stop-daemon. I am stuck on the arguments to the daemon. I want to keep these in a variable at the top of the script, but I can't get the quoting to filter down correctly. I'll use ls here so we don't have to look at binaries and arguments that most people won't know or care about. The end result I am looking for is for start-stop-daemon to run:

        ls -la "/folder with space/"

    My script:

        DAEMON=/usr/bin/ls
        DAEMON_OPTS='-la "/folder with space/"'

        start-stop-daemon --start --make-pidfile --pidfile $PID --exec $DAEMON -- $DAEMON_OPTS

    Double-escaping the options and trying innumerable variations of quoting do not help: however they end up at the daemon, they are always messed up. Enclosing $DAEMON_OPTS in quotes changes things: then they are seen as one single quoted argument - never the right number, though. :)

    Echoing the command line (start-stop-daemon ...) prints exactly the right stuff to screen, but the daemon (the real one, not ls) complains about the wrong number of arguments. How do I specify a variable so that quotes inside it are carried along to the daemon correctly? Thanks, Martin
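
    This is the classic word-splitting problem: quotes inside a variable's value are not re-parsed when the variable expands. A hedged sketch, assuming the script runs under bash (POSIX sh has no arrays; there, set -- -la "/folder with space/" followed by "$@" is the usual workaround):

        DAEMON=/usr/bin/ls
        DAEMON_OPTS=(-la "/folder with space/")   # each element stays one word

        start-stop-daemon --start --make-pidfile --pidfile "$PID" \
            --exec "$DAEMON" -- "${DAEMON_OPTS[@]}"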

    Read the article

  • Why are my connections not closed even if I explicitly dispose of the DataContext?

    - by Chris Simpson
    I encapsulate my LINQ to SQL calls in a repository class which is instantiated in the constructor of my overloaded controller. The constructor of my repository class creates the data context, so that for the life of the page load only one data context is used. In the destructor of the repository class I explicitly call Dispose on the DataContext, though I do not believe this is necessary.

    Using Performance Monitor, if I watch my User Connections count and repeatedly load a page, the number increases once per page load. Connections do not get closed or reused (for about 20 minutes). I tried putting Pooling=false in my config to see if this had any effect, but it did not. In any case, with pooling I wouldn't expect a new connection for every load; I would expect it to reuse connections. I've tried putting a break point in the destructor to make sure the dispose is being hit, and sure enough it is. So what's happening?

    Some code to illustrate what I said above. The controller:

        public class MyController : Controller
        {
            protected MyRepository rep;

            public MyController()
            {
                rep = new MyRepository();
            }
        }

    The repository:

        public class MyRepository
        {
            protected MyDataContext dc;

            public MyRepository()
            {
                dc = getDC();
            }

            ~MyRepository()
            {
                if (dc != null)
                {
                    //if (dc.Connection.State != System.Data.ConnectionState.Closed)
                    //{
                    //    dc.Connection.Close();
                    //}
                    dc.Dispose();
                }
            }

            // etc
        }

    Note: I add a number of hints and context information to the DC for auditing purposes. This is essentially why I want one connection per page load.
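
    A hedged observation: a C# destructor is a finalizer, and finalizers run only when the garbage collector eventually collects the object, so the DataContext (and its connection) can outlive the request by minutes. A sketch of deterministic disposal instead, tied to MVC's per-request Dispose hook:

        public class MyRepository : IDisposable
        {
            protected MyDataContext dc;

            public MyRepository()
            {
                dc = getDC();
            }

            public void Dispose()
            {
                if (dc != null)
                {
                    dc.Dispose();   // releases the underlying connection deterministically
                    dc = null;
                }
            }
        }

        public class MyController : Controller
        {
            protected MyRepository rep = new MyRepository();

            // MVC calls this at the end of each request.
            protected override void Dispose(bool disposing)
            {
                if (disposing) rep.Dispose();
                base.Dispose(disposing);
            }
        }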

    Read the article

  • Deeply nested subqueries for traversing trees in MySQL

    - by nickf
    I have a table in my database where I store a tree structure using the hybrid Nested Set (MPTT) model (the one which has lft and rght values) and the Adjacency List model (storing parent_id on each node):

        my_table (id, parent_id, lft, rght, alias)

    This question doesn't relate to any of the MPTT aspects of the tree, but I thought I'd leave it in in case anyone had a good idea about how to leverage that.

    I want to convert a path of aliases to a specific node. For example: "users.admins.nickf" would find the node with alias "nickf" which is a child of one with alias "admins", which is a child of "users", which is at the root. There is a unique index on (parent_id, alias).

    I started out by writing the function so it would split the path into its parts, then query the database one by one:

        SELECT `id` FROM `my_table` WHERE `parent_id` IS NULL AND `alias` = 'users'; -- 1
        SELECT `id` FROM `my_table` WHERE `parent_id` = 1 AND `alias` = 'admins';    -- 8
        SELECT `id` FROM `my_table` WHERE `parent_id` = 8 AND `alias` = 'nickf';     -- 37

    But then I realised I could do it with a single query, using a variable amount of nesting:

        SELECT `id` FROM `my_table`
        WHERE `parent_id` = (
            SELECT `id` FROM `my_table`
            WHERE `parent_id` = (
                SELECT `id` FROM `my_table`
                WHERE `parent_id` IS NULL AND `alias` = 'users'
            ) AND `alias` = 'admins'
        ) AND `alias` = 'nickf';

    Since the number of sub-queries is dependent on the number of steps in the path, am I going to run into issues with having too many subqueries? (If there even is such a thing.) Are there any better/smarter ways to perform this query?
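
    An alternative shape for the same lookup, offered as a hedged sketch: a chain of self-joins. It still grows with path depth when generated, but it stays flat rather than nested:

        SELECT t3.id
        FROM my_table AS t1
        JOIN my_table AS t2 ON t2.parent_id = t1.id AND t2.alias = 'admins'
        JOIN my_table AS t3 ON t3.parent_id = t2.id AND t3.alias = 'nickf'
        WHERE t1.parent_id IS NULL
          AND t1.alias = 'users'; -- 37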

    Read the article

  • Pairs from single list

    - by Apalala
    Often enough, I've found the need to process a list by pairs. I was wondering which would be the pythonic and efficient way to do it, and found this on Google:

        pairs = zip(t[::2], t[1::2])

    I thought that was pythonic enough, but after a recent discussion involving idioms versus efficiency, I decided to do some tests:

        import time
        from itertools import islice, izip

        def pairs_1(t):
            return zip(t[::2], t[1::2])

        def pairs_2(t):
            return izip(t[::2], t[1::2])

        def pairs_3(t):
            return izip(islice(t, None, None, 2), islice(t, 1, None, 2))

        A = range(10000)
        B = xrange(len(A))

        def pairs_4(t):
            # ignore value of t!
            t = B
            return izip(islice(t, None, None, 2), islice(t, 1, None, 2))

        for f in pairs_1, pairs_2, pairs_3, pairs_4:
            # time the pairing
            s = time.time()
            for i in range(1000):
                p = f(A)
            t1 = time.time() - s

            # time using the pairs
            s = time.time()
            for i in range(1000):
                p = f(A)
                for a, b in p:
                    pass
            t2 = time.time() - s

            print t1, t2, t2 - t1

    These were the results on my computer:

        1.48668909073    2.63187503815    1.14518594742
        0.105381965637   1.35109519958    1.24571323395
        0.00257992744446 1.46182489395    1.45924496651
        0.00251388549805 1.70076990128    1.69825601578

    If I'm interpreting them correctly, that should mean that the implementation of lists, list indexing, and list slicing in Python is very efficient. It's a result both comforting and unexpected.

    Is there another, "better" way of traversing a list in pairs? Note that if the list has an odd number of elements then the last one will not be in any of the pairs. Which would be the right way to ensure that all elements are included?

    I added these two suggestions from the answers to the tests:

        def pairwise(t):
            it = iter(t)
            return izip(it, it)

        def chunkwise(t, size=2):
            it = iter(t)
            return izip(*[it]*size)

    These are the results:

        0.00159502029419 1.25745987892 1.25586485863
        0.00222492218018 1.23795199394 1.23572707176

    Results so far. Most pythonic and very efficient:

        pairs = izip(t[::2], t[1::2])

    Most efficient and very pythonic:

        pairs = izip(*[iter(t)]*2)

    It took me a moment to grok that the first answer uses two iterators while the second uses a single one. To deal with sequences with an odd number of elements, the suggestion has been to augment the original sequence by adding one element (None) that gets paired with the previous last element, something that can be achieved with itertools.izip_longest().
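
    For completeness, a sketch of the izip_longest() variant mentioned above, which pads an odd-length sequence so the last element still gets a pair:

        from itertools import izip_longest  # zip_longest in Python 3

        def pairwise_padded(t, fillvalue=None):
            it = iter(t)
            return izip_longest(it, it, fillvalue=fillvalue)

        list(pairwise_padded([1, 2, 3, 4, 5]))
        # => [(1, 2), (3, 4), (5, None)]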

    Read the article

  • How to add a new Stage to my default stage?

    - by Raigomaru
    I want to add a new Stage called field to the default stage (I need to place different elements on it later), and then I want to add a bitmap called myBitmap to the field. But nothing happens, and I don't understand what I should do...

        var field:Stage = new Stage();
        field.x = 200;
        field.y = 200;
        field.width = 300;
        field.height = 300;
        stage.addChild(field);

        var bdWidth:Number = 100;
        var bdHeight:Number = 100;
        var bdTransparent:Boolean = true;
        var bdFillColorARGB:uint = 0xFF007090;

        var myBitmapData:BitmapData = new BitmapData(bdWidth, bdHeight, bdTransparent, bdFillColorARGB);
        var myBitmap:Bitmap = new Bitmap(myBitmapData);
        myBitmap.x = 10;
        myBitmap.y = 10;
        field.addChild(myBitmap);
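
    A hedged note on the likely culprit: in AS3 there is only one Stage, created by the player, and Stage cannot be instantiated with new. A plain Sprite works as the "field" container instead:

        var field:Sprite = new Sprite();
        field.x = 200;
        field.y = 200;
        stage.addChild(field);

        var myBitmapData:BitmapData = new BitmapData(100, 100, true, 0xFF007090);
        var myBitmap:Bitmap = new Bitmap(myBitmapData);
        myBitmap.x = 10;
        myBitmap.y = 10;
        field.addChild(myBitmap);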

    Read the article

  • Can't see *all* databases in a remote SQL Server instance

    - by George
    Yesterday I posted a related question on Stack Overflow. That problem involved not being able to see a SQL Server 2008 instance on another PC. I am not sure why adding the port number enabled me to see a SQL Server that I could not otherwise see, since the port number that I specified was, after all, the default port.

    Now I notice that I have another problem. While I can connect to the remote SQL Server 2008 instance, I cannot see all the databases in the instance. I am trying to connect to the 2008 instance from another PC using SQL Server 2008 Management Studio. I am connecting from a Windows 7 Ultimate PC to a Windows XP Pro PC.

    I suspect that my problem has something to do with not all databases in the remote instance having the same version. For example, I "upgraded" a SQL 2005 database to 2008 by doing a backup from 2005 and importing it into 2008. When I realized that this was not one of the databases that I could see from my other PC, I noticed that the compatibility level of the imported database was still 2005, so I changed it to 2008. Still I could not see the database.

    I am sure that this is relevant: I just noticed that on my remote server, the SQL instance node, named "sql2008", says "version 10" when I am on the remote server, but when I connect to the sql2008 remote instance from my local PC, the connection is shown locally as being a "SQL Server version 8.0" instance. I suspect that locally I am only being shown databases that are somehow in the remote 2008 instance but have not been upgraded. I guess I don't know what constitutes an upgraded database, and I do not know how to connect to see all the databases, even if this requires multiple connections from the source PC.

    Read the article

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc.

    Let's say that one user runs a report - 5/1/2010 to now - over and over, several times. On the webpage, the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the SQL proc as the end date. So the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question.

    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, is the web app missing out on huge performance gains?

    Possible solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc - "5/5/2010" in this case. Please speak to this as well.

    Sample proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both procs, since the current time is 2010-05-04 15:41:27.
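
    For what it's worth, a hedged sketch of the rounded-end-date call (cmd, conn and startDate are assumed, illustrative names). Note that for a parameterized call, the plan cache is keyed on the query text, not on parameter values, so the changing value by itself should not prevent plan reuse; it would only matter if the date were inlined into the SQL text:

        // Assumed SqlCommand calling the proc; requires System.Data.SqlClient.
        DateTime endDate = DateTime.Today.AddDays(1);   // midnight tomorrow, stable all day

        using (var cmd = new SqlCommand("GetFooData", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@StartDate", startDate);
            cmd.Parameters.AddWithValue("@EndDate", endDate);
            // ... ExecuteReader, etc.
        }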

    Read the article

  • Calculating rows retrieved from a database in Visual C#

    - by Tanya Lertwichaiworawit
    I am new to Visual C# and would like to know how to do calculations on data retrieved from a database. Using the above GUI, when "Calculate" is clicked, the program should display the number of students in textBox1 and the average GPA of all students in textBox2. Here is my database table "Students":

    I was able to display the number of students, but I'm still confused about how to calculate the average GPA. Here's my code:

        private void button1_Click(object sender, EventArgs e)
        {
            string connection = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Database1.accdb";
            OleDbConnection connect = new OleDbConnection(connection);
            string sql = "SELECT * FROM Students";
            connect.Open();

            OleDbCommand command = new OleDbCommand(sql, connect);
            DataSet data = new DataSet();
            OleDbDataAdapter adapter = new OleDbDataAdapter(command);
            adapter.Fill(data, "Students");

            textBox1.Text = data.Tables["Students"].Rows.Count.ToString();

            double gpa;
            for (int i = 0; i < data.Tables["Students"].Rows.Count; i++)
            {
                gpa = Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
            }

            connect.Close();
        }
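
    A hedged sketch of the missing piece: the loop above overwrites gpa on every row instead of accumulating it, so sum first and divide once at the end (guarding against an empty table). This assumes GPA is the third column, as in the original loop:

        double total = 0;
        int rows = data.Tables["Students"].Rows.Count;

        for (int i = 0; i < rows; i++)
        {
            total += Convert.ToDouble(data.Tables["Students"].Rows[i][2]);
        }

        if (rows > 0)
        {
            textBox2.Text = (total / rows).ToString("0.00");
        }

    The database can also do the work directly with SELECT COUNT(*), AVG(GPA) FROM Students, if the column is named GPA.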

    Read the article

  • How to index a table with a Type 2 slowly changing dimension for optimal performance

    - by The Lazy DBA
    Suppose you have a table with a Type 2 slowly-changing dimension. Let's express this table as follows, with the following columns:

        * [Key]
        * [Value1]
        * ...
        * [ValueN]
        * [StartDate]
        * [ExpiryDate]

    In this example, let's suppose that [StartDate] is effectively the date on which the values for a given [Key] become known to the system. So our primary key would be composed of both [StartDate] and [Key].

    When a new set of values arrives for a given [Key], we assign [ExpiryDate] to some pre-defined high surrogate value such as '12/31/9999'. We then set the existing "most recent" records for that [Key] to have an [ExpiryDate] that is equal to the [StartDate] of the new value. A simple update based on a join.

    So if we always wanted to get the most recent records for a given [Key], we know we could create a clustered index that is:

        * [ExpiryDate] ASC
        * [Key] ASC

    Although the keyspace may be very wide (say, a million keys), we can minimize the number of pages between reads by initially ordering them by [ExpiryDate]. And since we know the most recent record for a given key will always have an [ExpiryDate] of '12/31/9999', we can use that to our advantage.

    However... what if we want to get a point-in-time snapshot of all [Key]s at a given time? Theoretically, the entirety of the keyspace isn't all being updated at the same time. Therefore, for a given point in time, the window between [StartDate] and [ExpiryDate] is variable, so ordering by either [StartDate] or [ExpiryDate] would never yield a result in which all the records you're looking for are contiguous. Granted, you can immediately throw out all records in which the [StartDate] is greater than your defined point in time.

    In essence, in a typical RDBMS, what indexing strategy affords the best way to minimize the number of reads to retrieve the values for all keys for a given point in time? I realize I can at least maximize IO by partitioning the table by [Key], however this certainly isn't ideal. Alternatively, is there a different type of slowly-changing dimension that solves this problem in a more performant manner?
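
    For concreteness, the snapshot in question is the set of rows satisfying this range predicate (table and parameter names illustrative); the point above is that no single sort order on either date column makes these rows contiguous:

        SELECT [Key], [Value1] /* ... */, [StartDate], [ExpiryDate]
        FROM DimTable
        WHERE [StartDate] <= @PointInTime
          AND [ExpiryDate] > @PointInTime;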

    Read the article

  • How to make increasing numbers in filenames in C?

    - by zaplec
    Hi, I have a little problem. I need to do some small operations on quite a few files in one little program. So far I have decided to handle them in a single loop where I just change the number after the name. The files are all named TFxx.txt, where xx is an increasing number from 1 to 80. So how can I open them all in a single loop, one after another? I have tried this:

        for (i = 0; i <= 80; i++) {
            char name[8] = "TF" + i + ".txt";
            FILE = open(name, r);
            /* Do something */
        }

    As you can see, the second line would work in Python but not in C. I have tried to do similar running numbering in C in this program, but I haven't found out yet how to do it. The format doesn't need to be as it is on the second line, but I'd like some advice on how I can solve this problem. All I need to do is open many files and do the same operations on them.
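
    A minimal sketch of the usual C approach: build each name into a buffer with snprintf() and open it with fopen(). The loop runs 1 to 80 to match the file names described above:

        #include <stdio.h>

        int main(void)
        {
            char name[16];      /* room for "TF80.txt" and the terminator */
            FILE *fp;
            int i;

            for (i = 1; i <= 80; i++) {
                snprintf(name, sizeof name, "TF%d.txt", i);

                fp = fopen(name, "r");
                if (fp == NULL)
                    continue;   /* or report the error */

                /* do something with the file */

                fclose(fp);
            }
            return 0;
        }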

    Read the article

  • setBit java method using bit shifting and hexadecimal code - question

    - by somewhat_confused
    I am having trouble understanding what is happening in the two lines with the 0xFF7F and the one below it. There is a link here that explains it to some degree: http://www.herongyang.com/java/Bit-String-Set-Bit-to-Byte-Array.html

    I don't know if ((0xFF7F>>posBit) & oldByte) & 0x00FF is supposed to be 3 values 'AND'ed together, or how this is supposed to be read. If anyone can clarify what is happening here a little better, I would greatly appreciate it.

        private static void setBit(byte[] data, final int pos, final int val)
        {
            int posByte = pos / 8;
            int posBit = pos % 8;
            byte oldByte = data[posByte];
            oldByte = (byte) (((0xFF7F >> posBit) & oldByte) & 0x00FF);
            byte newByte = (byte) ((val << (8 - (posBit + 1))) | oldByte);
            data[posByte] = newByte;
        }

    This is called from a selectBits method as setBit(out, i, val), where:

        out = byte[] out = new byte[numOfBytes]; (numOfBytes can be 7 in this situation)
        i   = the number [57], the original number from the PC1 int array holding the 56 integers
        val = the bit taken from the byte array by the getBit() method
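
    A hedged walkthrough, since the expression parses as two nested ANDs rather than three values AND'ed at once: 0xFF7F is 1111 1111 0111 1111 in binary, so 0xFF7F >> posBit slides the single 0 onto the target bit (counted from the most significant bit of the low byte). AND-ing with oldByte clears just that bit, and the final & 0x00FF discards everything above the low 8 bits:

        int posBit = 2;                          // example bit position within the byte
        int mask = (0xFF7F >> posBit) & 0x00FF;  // 0xDF == 1101_1111: bit 2 cleared
        byte oldByte = (byte) 0xFF;
        byte cleared = (byte) (mask & oldByte);  // 1101_1111: the target bit is now 0
        int val = 1;
        byte newByte = (byte) ((val << (8 - (posBit + 1))) | cleared);
        // newByte == 1111_1111: val shifted into the cleared slot and OR-ed back in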

    Read the article

  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired work flow:

    1. New files are placed in an 'Incoming' directory.
    2. Files are picked up using a file:inbound-channel-adapter.
    3. The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses each line into an intermediary (shared) representation.
    4. Each parsed line is routed to multiple 'Stage 2' channels.
    5. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others.
    6. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4.

    *'N' above stands for any reasonable number of lines to read at a time, from [1, whatever I can fit into memory reasonably], but is guaranteed to always be less than the number of lines in the full file.

    How can I accomplish streaming (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read the entire file into memory.

    As a side note, I have a working implementation of this work flow without Spring Integration, but since we're using Spring Integration in other places in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.

    Read the article
