Search Results

Search found 13151 results on 527 pages for 'performance counters'.


  • Common Properties: Consolidating Loan, Purchase, Inventory and Sale tables into one Transaction table

    - by Frank Computer
    Pawnshop Application: I have separate tables for Loan, Purchase, Inventory & Sales transactions. Each table's rows are joined to their respective customer rows by: customer.pk [serial] = loan.fk [integer] = purchase.fk [integer] = inventory.fk [integer] = sale.fk [integer]. Since there are so many common properties within the four tables, I consolidated them into one table called "transaction", with a discriminator column transaction.trx_type char(1) {L=Loan, P=Purchase, I=Inventory, S=Sale}. Scenario: a customer initially pawns merchandise, makes a couple of interest payments, then decides he wants to sell the merchandise to the pawnshop, which then places the merchandise in Inventory and eventually sells it to another customer. I designed a generic transaction table where, for example, transaction.main_amount DECIMAL(7,2) holds the pawn amount in a loan transaction, the purchase price in a purchase, and the sale price in inventory and sale transactions. This is clearly a denormalized design, but it has made programming a lot easier and improved performance. Any type of transaction can now be performed from within one screen, without the need to switch between different tables.
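
    As a rough sketch of the consolidated table described above (trx_type and main_amount come from the post; the key and date columns and their exact types are assumptions added for illustration):

        -- Sketch only: key/date column names are assumptions, not from the post.
        CREATE TABLE transaction (
            trx_id       SERIAL,                  -- surrogate key, matching the serial/integer pattern in the post
            customer_fk  INTEGER NOT NULL,        -- joins back to customer.pk
            trx_type     CHAR(1) NOT NULL,        -- L=Loan, P=Purchase, I=Inventory, S=Sale
            main_amount  DECIMAL(7,2),            -- pawn amount, purchase price, or sale price, depending on trx_type
            trx_date     DATE
        );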

  • Good C string library

    - by chamakits
    Hello all. I recently got inspired to start up a project I've been wanting to code for a while. I want to do it in C, because memory handling is key in this application. I was searching around for a good implementation of strings in C, since I know that doing it myself could lead to some messy buffer overflows, and I expect to be dealing with a fairly large number of strings. I found this article which gives details on each, but each one seems to have a good number of cons (don't get me wrong, the article is EXTREMELY helpful, but it still worries me that even if I were to choose one of those, I wouldn't be using the best I can get). I also don't know how up to date the article is, hence my current plea. What I'm looking for is something that can hold a large number of characters and simplifies the process of searching through the string. If it allows me to tokenize the string in any way, even better. Also, it should have some pretty good I/O performance. Printing, and formatted printing, isn't quite a top priority. I know I shouldn't expect a library to do all the work for me, but I was just wondering if there was a well-documented string library out there that could save me some time and some work. Any help is greatly appreciated. Thanks in advance! EDIT: I was asked about the license I prefer. Any sort of open source license will do, but preferably GPL (v2 or v3). EDIT 2: I found the betterString (bstring) library and it looks pretty good. Good documentation, a small yet versatile set of functions, and easy to mix with C strings. Anyone have any good or bad stories about it? The only downside I've read about is that it lacks Unicode support (again, I've only read about this, haven't seen it face to face just yet), but everything else seems pretty good. EDIT 3: Also, it's preferable that it be pure C.

  • MySQL Multiple Table Join

    - by hitman001
    I have 3 tables that I'm trying to join and get distinct results. CREATE TABLE `car` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `name` varchar(255) NOT NULL DEFAULT '', PRIMARY KEY (`id`) ) ENGINE=InnoDB mysql> select * from car; +----+-------+ | id | name | +----+-------+ | 1 | acura | +----+-------+ CREATE TABLE `tires` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `tire_desc` varchar(255) DEFAULT NULL, `car_id` int(10) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `new_fk_constraint` (`car_id`), CONSTRAINT `new_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB mysql> select * from tires; +----+-------------+--------+ | id | tire_desc | car_id | +----+-------------+--------+ | 1 | front_right | 1 | | 2 | front_left | 1 | +----+-------------+--------+ CREATE TABLE `lights` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `lights_desc` varchar(255) NOT NULL, `car_id` int(10) unsigned NOT NULL, PRIMARY KEY (`id`), KEY `new1_fk_constraint` (`car_id`), CONSTRAINT `new1_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE ) ENGINE=InnoDB mysql> select * from lights; +----+-------------+--------+ | id | lights_desc | car_id | +----+-------------+--------+ | 1 | right_light | 1 | | 2 | left_light | 1 | +----+-------------+--------+ Here is my query. mysql> SELECT name, group_concat(tire_desc), group_concat(lights_desc) FROM car left join tires on car.id = tires.car_id left join lights on car.id = lights.car_id group by car.id; +-------+-----------------------------------------------+-----------------------------------------------+ | name | group_concat(tire_desc) | group_concat(lights_desc) | +-------+-----------------------------------------------+-----------------------------------------------+ | acura | front_right,front_right,front_left,front_left | right_light,left_light,right_light,left_light | +-------+-----------------------------------------------+-----------------------------------------------+ I get duplicate entries, and this is what I would like to get instead. +-------+-------------------------+---------------------------+ | name | group_concat(tire_desc) | group_concat(lights_desc) | +-------+-------------------------+---------------------------+ | acura | front_right,front_left | right_light,left_light | +-------+-------------------------+---------------------------+ I cannot use distinct in group_concat because I might have legitimate duplicates which I would like to keep. Is there any way to do this query using joins and not using inner selects like the statement below? SELECT name, (select group_concat(tire_desc) from tires where car.id = tires.car_id), (select group_concat(lights_desc) from lights where car.id = lights.car_id) FROM car Also, if I use inner selects, will there be any performance issues compared to joins?
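
    One commonly suggested workaround (a sketch, not from the post itself) is to aggregate each child table in a derived table first, so the two one-to-many joins never multiply each other's rows:

        SELECT c.name, t.tires, l.lights
        FROM car c
        LEFT JOIN (SELECT car_id, GROUP_CONCAT(tire_desc)   AS tires  FROM tires  GROUP BY car_id) t ON t.car_id = c.id
        LEFT JOIN (SELECT car_id, GROUP_CONCAT(lights_desc) AS lights FROM lights GROUP BY car_id) l ON l.car_id = c.id;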

  • Grouping by date, with 0 when count() yields no lines

    - by SCO
    I'm using PostgreSQL 9 and I'm fighting with counting and grouping when no rows are counted. Let's assume the following schema: create table views ( date_event timestamp with time zone, event_id integer ); Let's imagine the following content: 2012-01-01 00:00:05 2; 2012-01-01 01:00:05 5; 2012-01-01 03:00:05 8; 2012-01-01 03:00:15 20. I want to group by hour and count the number of rows. I wish I could retrieve the following: 2012-01-01 00:00:00 1; 2012-01-01 01:00:00 1; 2012-01-01 02:00:00 0; 2012-01-01 03:00:00 2; 2012-01-01 04:00:00 0; 2012-01-01 05:00:00 0; ... 2012-01-07 23:00:00 0. I mean that for each time slot I count the number of rows in my table whose date corresponds; otherwise I return a row with a count of zero. The following will definitely not work (it yields only the hours that actually have rows): SELECT extract ( hour from date_event ), count(*) FROM views where date_event > '2012-01-01' and date_event < '2012-01-07' GROUP BY extract ( hour from date_event ); Please note I might also need to group by minute, by hour, by day, by month, or by year (multiple queries are possible of course). I can only use plain old SQL, and since my views table can be very big (100M records), I try to keep performance in mind. How can this be achieved? Thank you!
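
    For reference, one standard technique (a sketch, not taken from the post) is to generate the full series of slots with generate_series() and LEFT JOIN the data onto it, so that empty slots come back with a count of zero:

        SELECT gs.slot_start, count(v.date_event) AS cnt
        FROM generate_series('2012-01-01'::timestamptz,
                             '2012-01-07'::timestamptz,
                             interval '1 hour') AS gs(slot_start)
        LEFT JOIN views v
               ON v.date_event >= gs.slot_start
              AND v.date_event <  gs.slot_start + interval '1 hour'
        GROUP BY gs.slot_start
        ORDER BY gs.slot_start;

    The same pattern works for minutes, days, or months by changing the interval step.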

  • Using a single texture image unit with multiple sampler uniforms

    - by bcrist
    I am writing a batching system which tracks currently bound textures in order to avoid unnecessary glBindTexture() calls. I'm not sure if I need to keep track of which textures have already been used by a particular batch so that if a texture is used twice, it will be bound to a different TIU for the second sampler which requires it. Is it acceptable for an OpenGL application to use the same texture image unit for multiple samplers within the same shader stage? What about samplers in different shader stages? For example: Fragment shader: ... uniform sampler2D samp1; uniform sampler2D samp2; void main() { ... } Main program: ... glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, tex_id); glUniform1i(samp1_location, 0); glUniform1i(samp2_location, 0); ... I don't see any reason why this shouldn't work, but what about if the shader program also included a vertex shader like this: Vertex shader: ... uniform sampler2D samp1; void main() { ... } In this case, OpenGL is supposed to treat both instances of samp1 as the same variable, and exposes a single location for them. Therefore, the same texture unit is being used in the vertex and fragment shaders. I have read that using the same texture in two different shader stages counts doubly against GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS but this would seem to contradict that. In a quick test on my hardware (HD 6870), all of the following scenarios worked as expected: 1 TIU used for 2 sampler uniforms in same shader stage 1 TIU used for 1 sampler uniform which is used in 2 shader stages 1 TIU used for 2 sampler uniforms, each occurring in a different stage. However, I don't know if this is behavior that I should expect on all hardware/drivers, or if there are performance implications.

  • Elegant methods for caching search results from a RESTful service?

    - by Paul
    I have a RESTful web service which I access from the browser using JavaScript. As an example, say that this web service returns a list of all the Message resources assigned to me when I send a GET request to /messages/me. For performance reasons, I'd like to cache this response so that I don't have to re-fetch it every time I visit my Manage Messages web page. The cached response would expire after 5 minutes. If a Message resource is created "behind my back", say by the system admin, it's possible that I won't know about it for up to 5 minutes, until the cached search response expires and is re-fetched. This is acceptable, because it creates no confusion for me. However if I create a new Message resource which I know should be part of the search response, it becomes confusing when it doesn't appear on my Manage Messages page immediately. In general, when I knowingly create/delete/update a resource that invalidates a cached search response, I need that cached response to be expired/flushed immediately. The core problem which I can't figure out: I see no simple way of connecting the task of creating/deleting/updating a resource with the task of expiring the appropriate cached responses. In this example it seems simple, I could manually expire the cached search response whenever I create/delete/update a(ny) Message resource. But in a more complex system, keeping track of which search responses to expire under what circumstances will get clumsy quickly. If someone could suggest a simple solution or some clarifying thoughts, I'd appreciate it.

  • (PHP) How do I get XMLReader to behave like SimpleXML with XPath? (Large directory-like XML file)

    - by AESM
    So, I've got this huge XML file (10MB and up) that I want to parse, and I figured that instead of using SimpleXML, I'd better use XMLReader, since the performance should be way better, right? But XMLReader doesn't work with XPath... The XML is like this: <root name="bookmarks"> <dir name="A directory"> <link name="blablabla"> <dir name="Sub directory"> ... </dir> </dir> <link name="another link"> </root> With SimpleXML combined with XPath, I would simply do: $xml = simplexml_load_file('/xmlFile.xml'); $xml->xpath('/root[@name]/dir[@name="A directory"]/dir[@name="Sub directory"]'); Which is so simple. But how do I do this using just XMLReader? P.S. I'm converting the resulting node to DOM/SimpleXML to get its inner contents, like this: $node = $xr->expand(); $dom = new DomDocument(); $n = $dom->importNode($node, true); $dom->appendChild($n); $selectedRoot = simplexml_import_dom($dom); Is this ok? ... Thanks!

  • Is there an Easier way to Get a 3 deep Panel Control from a Form in order to add a new Control to it programmatically?

    - by Mark Sweetman
    I have a VB Windows program created by someone else; it was written so that anyone could add to the functionality of the program through Class Libraries, which the program calls (i.e. the Class Libraries, DLL files) Plugins. The plugin I am creating is a C# Class Library, i.e. a .dll. This specific plugin I'm working on adds a simple DateTime clock in the form of a Label and inserts it into a Panel that is nested 3 deep. I have tested the code below and it works. My question is this: is there a better way to do it? For instance, I use Controls.Find 3 different times; each time I know which Panel I am looking for, and there will only be a single Panel added to the Control[] array. So I'm doing a foreach over an array that only holds a single element, 3 different times. Now, like I said, the code works and does what I expected it to. It just seems overly redundant, and I'm wondering if there could be a performance issue. Here is the code: foreach (Control p0 in mDesigner.Controls) if (p0.Name == "Panel1") { Control panel1 = (Control)p0; Control[] controls = panel1.Controls.Find("Panel2", true); foreach (Control p1 in controls) if (p1.Name == "Panel2") { Control panel2 = (Control)p1; Control[] controls1 = panel2.Controls.Find("Panel3", true); foreach (Control p2 in controls1) if (p2.Name == "Panel3") { Control panel3 = (Control)p2; panel3.Controls.Add(clock); } } }

  • What is the best way to create a running integer id on the AppEngine data storage?

    - by Freed
    For various reasons, I need a unique running integer id for my entities stored on the Google AppEngine. The automatically generated key sort of has this behaviour, but it doesn't start from 1 (or 0) and doesn't guarantee that the generated integer part will come from a continuous sequence. What would be the best way to efficiently implement this on AppEngine? Is there any support from the storage system? To add to the complexity, I might need to do this over entities from different entity groups, meaning I can't just get the highest id right now and save an entity with the next id in a transaction. Might memcache be the way to go? Edit: I haven't implemented this yet, but to clarify the memcache idea: I know memcache is unreliable, but in practice it probably won't lose data often enough to hurt performance. Basically, I would have a memcache entry for the last used id, update it (somehow atomically) whenever I create a new entity, and use that id. In the case of memcache not having a value for this entry, I'd get the highest id so far by doing a query on my entities sorted by the id and update memcache (unless someone else had already done so). The only problem I can see with this right now would be atomicity of the operation as a whole if the save of my new entity was also part of a transaction. Thoughts?

  • How to handle product ratings in a database

    - by Mel
    Hello, I would like to know what is the best approach to storing product ratings in a database. I have in mind the following two (simplified, and assuming a MySQL db) scenarios: Scenario 1: Create two columns in the product table to store the number of votes and the sum of all votes. Use the columns to compute an average on the product display page: products(productID, productName, voteCount, voteSum) Pros: I will only need to access one table, and thus execute one query to display product data and ratings. Cons: Write operations will be executed in a table whose original purpose is only to furnish product data. Scenario 2: Create an additional table to store ratings. products(productID, productName) ratings(productID, voteCount, voteSum) Pros: Isolate ratings into a separate table, leaving the products table to furnish data on available products. Cons: I will have to execute two separate queries on product page requests (one for data and another for ratings). In terms of performance, which of the two approaches is best: allow users to execute an occasional write query to a table that will handle hundreds of read requests, or execute two queries on every product page but isolate the write query into a separate table? I'm a novice to database development, and often find myself struggling with simple questions such as these. Many thanks,
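
    Worth noting as a sketch (not from the post): even with the separate ratings table of scenario 2, the product page can still be served with a single query by joining the two tables, e.g.:

        SELECT p.productID,
               p.productName,
               r.voteSum / NULLIF(r.voteCount, 0) AS avgRating   -- NULLIF avoids division by zero
        FROM products p
        LEFT JOIN ratings r ON r.productID = p.productID
        WHERE p.productID = 42;                                  -- 42 is a placeholder id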

  • Value types of variable size

    - by YellPika
    I'm trying to code a small math library in C#. I wanted to create a generic vector structure where the user could define the element type (int, long, float, double, etc.) and dimensions. My first attempt was something like this... public struct Vector<T> { public readonly int Dimensions; public readonly T[] Elements; // etc... } Unfortunately, Elements, being an array, is also a reference type. Thus, doing this, Vector<int> a = ...; Vector<int> b = a; a[0] = 1; b[0] = 2; would result in both a[0] and b[0] equaling 2. My second attempt was to define an interface IVector<T>, and then use Reflection.Emit to automatically generate the appropriate type at runtime. The resulting types would look roughly like this: public struct Int32Vector3 : IVector<int> { public int Element0; public int Element1; public int Element2; public int Dimensions { get { return 3; } } // etc... } This seemed fine until I found out that interfaces seem to act like references to the underlying object. If I passed an IVector to a function, changes to the elements made inside the function would be reflected in the original vector. I think my problem here is that I need to be able to create types that have a user-specified number of fields. I can't use arrays, and I can't use inheritance. Does anyone have a solution? EDIT: This library is going to be used in performance-critical situations, so reference types are not an option.

  • What db fits me?

    - by afvasd
    Dear everyone, I am currently using MySQL. I am finding that my schema is getting incredibly complicated, and I am looking for a new DB that will suit my needs. Let's assume I am building a news aggregator (which collects news from multiple websites). I then run algorithms to determine whether two news items from different sites are actually referring to the same topic, and use this to cluster news together. The relationship is depicted below: cluster \--news1 \--word1 \--word2 \--news2 \--word3 \--news3 \--word1 \--word3 And then I apply some magic and determine the importance of each word. Summing the importance of each word gives me the importance of a news article. Summing the importance of each news article gives me the importance of a cluster. Note that above cluster there are also subgroups (like splits by region, etc.) and categories (like sports, etc.) whose importance I have to determine for a particular day. I have used views in the past to do so, but I realized that views are very slow, so I will normally insert into an actual table and index it for better performance. As you can see, this leads to multiple derived tables like (cluster, importance), (news, importance), (words, importance), etc., which can get pretty messy. Also, the "importance" metric will change. It has become increasingly difficult to alter tables and update data (I am using TRUNCATE TABLE and then re-inserting from scratch). I am currently looking into something schemaless like MongoDB. I do not need distribution. I very much want something that is reasonably fast (and can be indexed) and a lot more flexible than a traditional RDBMS. Also, I need something that has some kind of ORM because I personally like ORMs a lot; I am currently using SQLAlchemy. Please help!

  • dm_exec_query_stats returning stale data?

    - by VoiceOfUnreason
    I've been testing my app on a SQL Server 2005 database, and am trying to establish a preliminary picture of the query performance using sys.dm_exec_query_stats. Problem: there's a particular query that I'm interested in, because total_elapsed_time and last_elapsed_time are both large numbers. When I tickle my app to invoke that query (this runs successfully), then refresh my view of the stats, I find that 1) execution_count has incremented (expected) 2) last_execution_time has updated to now (expected) 3) last_elapsed_time is still a large value (not expected - I anticipated a new value) 4) total_elapsed_time is unchanged (contradiction?) If last_elapsed_time refers to the execution that happened @ last_execution_time, then the total_elapsed_time should have increased? This documentation: http://msdn.microsoft.com/en-us/library/ms189741(SQL.90).aspx tells me that last_execution_time is the last time the plan was executed, and last_elapsed_time comes from the "most recently executed plan", but doesn't tell me why those might be different. The query itself is uncomplicated (SELECT/WHERE/ORDER BY - parameters appearing in the where clause, but no clever operations), the table has maybe 25 rows in it right now. Questions: 1) What's the real relationship between execution_count, last_execution_time, and last_elapsed_time? 2) Where is the documentation of this relationship (manual, third party book, blog, bug ticket, stone tablets...) ?
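
    For context, a sketch of the kind of query used to read these columns from the DMV (not the poster's exact query; the ordering and the joined SQL text are just for illustration):

        SELECT qs.execution_count,
               qs.last_execution_time,
               qs.last_elapsed_time,
               qs.total_elapsed_time,
               st.text AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.last_execution_time DESC;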

  • Implementing search functionality with multiple optional parameters against a database table

    - by quarkX
    Hello, I would like to check whether there is a preferred design pattern for implementing search functionality with multiple optional parameters against a database table, where access to the database should be only via stored procedures. The targeted platform is .NET with a SQL Server 2005/2008 backend, but I think this is a pretty generic problem. For example, we have a customer table and we want to provide search functionality to the UI for different parameters, like customer type, customer state, customer zip, etc., and all of them are optional and can be selected in any combination. In other words, the user can search by customerType only, or by customerType and customerZip, or any other possible combination. There are several available design approaches, but all of them have some disadvantages, and I would like to ask if there is a preferred design among them or another approach entirely. 1. Generate the SQL WHERE clause dynamically in the business tier, based on the search request from the UI, and pass it to a stored procedure as a parameter, something like @Where = 'where CustomerZip = 111111'. Inside the stored procedure, build a dynamic SQL statement and execute it with sp_executesql. Disadvantage: dynamic SQL, SQL injection. 2. Implement a stored procedure with multiple input parameters representing the search fields from the UI, and use the following construction in the WHERE clause so that only the supplied fields filter the records: WHERE (CustomerType = @CustomerType OR @CustomerType is null) AND (CustomerZip = @CustomerZip OR @CustomerZip is null) AND ... Disadvantage: possible performance issues for the SQL. 3. Implement a separate stored procedure for each combination of search parameters. Disadvantage: the number of stored procedures grows rapidly as search parameters are added, and code is repeated.
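
    A minimal sketch of option 2 as a stored procedure (the procedure, table, and column names/types are assumptions for illustration, not from the post):

        CREATE PROCEDURE dbo.SearchCustomers   -- hypothetical name
            @CustomerType  INT         = NULL,
            @CustomerState CHAR(2)     = NULL,
            @CustomerZip   VARCHAR(10) = NULL
        AS
        BEGIN
            SELECT CustomerID, CustomerType, CustomerState, CustomerZip
            FROM dbo.Customer
            WHERE (@CustomerType  IS NULL OR CustomerType  = @CustomerType)
              AND (@CustomerState IS NULL OR CustomerState = @CustomerState)
              AND (@CustomerZip   IS NULL OR CustomerZip   = @CustomerZip);
        END

    On SQL Server 2008, adding OPTION (RECOMPILE) to the SELECT is a commonly cited way to mitigate the plan-quality issue this pattern can cause.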

  • What guarantees are there on the run-time complexity (Big-O) of LINQ methods?

    - by tzaman
    I've recently started using LINQ quite a bit, and I haven't really seen any mention of run-time complexity for any of the LINQ methods. Obviously, there are many factors at play here, so let's restrict the discussion to the plain IEnumerable LINQ-to-Objects provider. Further, let's assume that any Func passed in as a selector / mutator / etc. is a cheap O(1) operation. It seems obvious that all the single-pass operations (Select, Where, Count, Take/Skip, Any/All, etc.) will be O(n), since they only need to walk the sequence once; although even this is subject to laziness. Things are murkier for the more complex operations; the set-like operators (Union, Distinct, Except, etc.) work using GetHashCode by default (afaik), so it seems reasonable to assume they're using a hash-table internally, making these operations O(n) as well, in general. What about the versions that use an IEqualityComparer? OrderBy would need a sort, so most likely we're looking at O(n log n). What if it's already sorted? How about if I say OrderBy().ThenBy() and provide the same key to both? I could see GroupBy (and Join) using either sorting, or hashing. Which is it? Contains would be O(n) on a List, but O(1) on a HashSet - does LINQ check the underlying container to see if it can speed things up? And the real question - so far, I've been taking it on faith that the operations are performant. However, can I bank on that? STL containers, for example, clearly specify the complexity of every operation. Are there any similar guarantees on LINQ performance in the .NET library specification?

  • More efficient way to update multiple elements in JavaScript and/or jQuery?

    - by Seiverence
    Say I have 6 divs with IDs "#first", "#second" ... "#sixth". If I want to execute functions on each of those divs, I set up an array of strings containing the names of the divs I want to update, e.g. ["first", "second", "third"]. To apply a function, I set up a for loop that iterates through each element in the array; say I wanted to change the background color to red: function updateAllDivsInTheList() { for (var i = 0; i < array.length; i++) { $("#" + array[i]).changeCssFunction(); } } Whenever I create a new div, I add it to the array. The issue is, if I have a large number of divs that need to get updated, say 1000 out of 1200 divs, it may be a pain/performance tank to sequentially iterate through every single element in the array. Is there some alternative, more efficient way of updating multiple divs without having to sequentially iterate through every element in an array, maybe with some other, more efficient data structure besides an array? Or is what I am doing the most efficient way to do it? If you can provide some example, that would be great.

  • Stream buffering issue

    - by Kolyunya
    The mod_rewrite documentation states that it is a strict requirement to disable in(out)put buffering in a rewrite program. Keeping that in mind, I've written a simple program (I do know that it lacks the EOF check, but this is not an issue and it saves one condition check per loop): #include <stdio.h> #include <stdlib.h> int main ( void ) { setvbuf(stdin,NULL,_IONBF,0); /* fully unbuffered, as the docs require */ setvbuf(stdout,NULL,_IONBF,0); int character; while ( 42 ) { character = getchar(); if ( character == '-' ) { character = '_'; } putchar(character); } return 42 - 42; } After making some measurements I was shocked - it was over 9,000 times slower than the demo Perl script provided by the documentation: #!/usr/bin/perl $| = 1; # Turn off I/O buffering while (<STDIN>) { s/-/_/g; # Replace dashes with underscores print $_; } Now I have two related questions: Question 1. I believe that the streams may be line buffered, since Apache sends a newline after each path. Am I correct? Switching my program to setvbuf(stdin,NULL,_IOLBF,4200); setvbuf(stdout,NULL,_IOLBF,4200); makes it twice as fast as the Perl one. This should not hit Apache's performance, should it? Question 2. How can one write a program in C which uses unbuffered streams (like the Perl one) and performs as fast as the Perl one?

  • Codeigniter xss_clean dilemma

    - by Henson
    I know this question has been asked over and over again, but I still haven't found the perfect answer for my liking, so here it goes again... I've been reading lots and lots of polarizing comments about CI's xss_filter. Basically the majority says that it's bad. Can someone elaborate on how it's bad, or at least give the most probable scenario where it can be exploited? I've looked at the Security class in CI 2.1 and I think it's pretty good, as it doesn't allow malicious strings like document.cookie, document.write, etc. If the site has a basically non-HTML presentation, is it safe to use the global xss_filter (or, if it's REALLY affecting performance that much, use it on a per-form-post basis) before inserting into the database? I've been reading pros and cons about whether to escape on input or output, with the majority saying we should escape on output only. But then again, why allow strings like <a href="javascript:stealCookie()">Click Me</a> to be saved in the database at all? The one thing I don't like is that javascript: and such will be converted to [removed]. Can I extend CI's core Security class $_never_allowed_str array so that never-allowed strings are replaced with an empty string rather than [removed]? The most reasonable example of harm I've read is that if a user has a password of javascript:123 it will be cleaned into [removed]123, which means a string like document.write123 would also pass as the user's password. Then again, what are the odds of that happening, and even if it happens, I can't think of any real harm it could do to the site. Thanks

  • How do I grab hold of a pop-up that is opened from a frame?

    - by KLA
    I am testing a website using WatiN. On one of the pages I get a "report" in an iframe; within this iframe there is a link to download and save the report. But since the only way to get to the link is to use frame.Link(...), the pop-up closes immediately after opening. Code snippet below: //Click the create graph button ie.Button(Find.ById("ctl00_ctl00_ContentPlaceHolder1_TopBoxContentPlaceHolder_btnCreateGraph")).Click(); //Lets export the data ie.Div(Find.ById("colorbox")); ie.Div(Find.ById("cboxContent")); ie.Div(Find.ById("cboxLoadedContent")); Thread.Sleep(1000);//Used to cover performance issues Frame frame = ie.Frame(Find.ByName(frameNameRegex)); for (int Count = 0; Count < 10000000; Count++) {double nothing = (Count/12); }//Do nothing, I just need a short pause //SelectList waits for a postback which does not occur. try { frame.SelectList(Find.ById("rvReport_ctl01_ctl05_ctl00")).SelectByValue("Excel"); } catch (Exception) { //Do nothing } //Now click export frame.Link(Find.ById("rvReport_ctl01_ctl05_ctl01")).ClickNoWait(); IE ieNewBrowserWindow = IE.AttachTo<IE>(Find.ByUrl(urlRegex)); fileDownloadHandler.WaitUntilFileDownloadDialogIsHandled(150); fileDownloadHandler.WaitUntilDownloadCompleted(200); I have tried using ie instead of frame, which is why all those ie.Div's are present. If I use frame, the pop-up window opens and closes instantly. If I use ie, I get a link not found error. If I click on the link manually, while the test is "trying to find the link", the file downloads correctly.

  • Questions about shifting from mysql to PDO

    - by Scarface
    Hey guys, I have recently decided to switch all my current plain MySQL queries performed with PHP's mysql_query to PDO-style queries to improve performance, portability and security. I just have some quick questions for any experts in this database interaction tool. Will it prevent injection if all statements are prepared? (I noticed php.net says 'however, if other portions of the query are being built up with unescaped input, SQL injection is still possible', and I was not exactly sure what this meant.) Does this just mean that if all variables are run through a prepare() call it is safe, and if some are directly inserted then it is not? Currently I have a connection at the top of my page and queries performed during the rest of the page. I took a look at PDO in more detail and noticed that there is a try/catch block for every query, involving a connection and the closing of that connection. Is there a straightforward way to connect and then reuse that connection without having to put everything in a try or constantly repeat the procedure of connecting, querying and closing? Can anyone briefly explain in layman's terms what purpose set_exception_handler serves? I appreciate any advice from more experienced individuals.

  • Recommendations for Creating a Picture Slide Show with Super-Smooth Transitions (For Live Presentation)

    - by Nick
    Hi everyone, I'm doing a theatrical performance, and I need a program that can read images from a folder and display them full screen on one of the computer's VGA outputs, in a predetermined order. All it needs to do is start with the first image, and when a key is pressed (space bar, right arrow), smoothly cross-fade to the next image. Sounds just like PowerPoint, right? The only reason I can't use PowerPoint/OpenOffice is that the "fade smoothly" transition isn't smooth enough, or configurable enough. It tends to be fast and choppy, where I would like to see a perfectly smooth fade over, say, 30 seconds. So the question is: what is the best (cheap and fast) way to accomplish this? Is there a program that already does this well (for cheap or free)? Or should I try to hack at OpenOffice's transition code? Or would it be easier to create this from scratch? Are there frameworks that might make it easier? I have web programming experience (PHP), but no desktop or real-time rendering experience. Any suggestions are appreciated!

  • for loop vs std::for_each with lambda

    - by Andrey
    Let's consider a template function written in C++11 which iterates over a container. Please exclude the range-based for loop syntax from consideration, because it is not yet supported by the compiler I'm working with. template <typename Container> void DoSomething(const Container& i_container) { // Option #1 for (auto it = std::begin(i_container); it != std::end(i_container); ++it) { // do something with *it } // Option #2 std::for_each(std::begin(i_container), std::end(i_container), [] (typename Container::const_reference element) { // do something with element }); } What are the pros/cons of a for loop vs std::for_each in terms of: a) performance? (I don't expect any difference) b) readability and maintainability? Here I see many disadvantages to for_each. It wouldn't accept a C-style array, while the loop would. The declaration of the lambda's formal parameter is verbose, and it's not possible to use auto there. It is not possible to break out of for_each. In pre-C++11 days the arguments against for were the need to specify the iterator type (which no longer holds) and the easy possibility of mistyping the loop condition (I've never made such a mistake in 10 years). In conclusion, my thoughts about for_each contradict the common opinion. What am I missing here?

  • How to optimize this MySQL table?

    - by Lost_in_code
    This is for an upcoming project. I have two tables - the first one keeps track of photos, and the second one keeps track of each photo's rank. Photos: +-------+-----------+------------------+ | id | photo | current_rank | +-------+-----------+------------------+ | 1 | apple | 5 | | 2 | orange | 9 | +-------+-----------+------------------+ The photo rank keeps changing on a regular basis and this is the table that tracks it: Ranks: +-------+-----------+----------+-------------+ | id | photo_id | ranks | timestamp | +-------+-----------+----------+-------------+ | 1 | 1 | 8 | * | | 2 | 2 | 2 | * | | 3 | 1 | 3 | * | | 4 | 1 | 7 | * | | 5 | 1 | 5 | * | | 6 | 2 | 9 | * | +-------+-----------+----------+-------------+ * = current timestamp Every rank is tracked for reporting/analysis purposes. I talked to someone who has experience in this field and he told me that storing ranks like above is the way to go. But I'm not so sure yet. The problem here is data redundancy. There are going to be tens of thousands of photos. The photo rank changes on an hourly basis (many times within minutes) for recent photos but less frequently for older photos. At this rate the table will have millions of records within months. And since I do not have experience in working with large databases, this makes me a little nervous. I thought of this: Ranks: +-------+-----------+--------------------+ | id | photo_id | ranks | +-------+-----------+--------------------+ | 1 | 1 | 8:*,3:*,7:*,5:* | | 2 | 2 | 2:*,9:* | +-------+-----------+--------------------+ * = current timestamp That means some extra code in PHP to split the rank/time (and sorting), but that looks OK to me. Is this a correct way to optimize the table for performance? What would you recommend? Any suggestions would be great.
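
    As a sketch of the first (one-row-per-rank-change) design with an index that keeps per-photo history lookups cheap (the composite index and the exact column types are assumptions, not from the post):

        CREATE TABLE ranks (
            id          INT UNSIGNED NOT NULL AUTO_INCREMENT,
            photo_id    INT UNSIGNED NOT NULL,
            ranks       INT NOT NULL,
            `timestamp` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (id),
            KEY idx_photo_time (photo_id, `timestamp`)   -- assumed index for per-photo history queries
        ) ENGINE=InnoDB;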

  • Which way of declaring a variable is fastest?

    - by ADB
    For a variable used in a function that is called very often, and for an implementation in J2ME on a BlackBerry (if that changes anything, can you explain)? class X { int i; public void someFunc(int j) { i = 0; while( i < j ){ [...] i++; } } } or class X { static int i; public void someFunc(int j) { i = 0; while( i < j ){ [...] i++; } } } or class X { public void someFunc(int j) { int i = 0; while( i < j ){ [...] i++; } } } I know there is a difference in how a static versus non-static class variable is accessed, but I don't know whether it would affect the speed. I also remember reading somewhere that local (in-function) variables may be accessed faster, but I don't know why, or where I read that. Background on the question: some painting functions in games are called excessively often, and even a small difference in access time can affect the overall performance when a variable is used in a largish loop.

  • MySQL: Is it reasonable to use a view, or would I be better off denormalizing my DB?

    - by Budda
    There is a 'team_sector' table with the following fields: Id, team_id, sect_id, size, level. It contains a few records for each 'team' entity (referenced via the 'team_id' field). Each record represents one sector of the team's stadium (8 sectors in total). Now it is necessary to implement a few searches: by overall stadium size (SUM(size)) and by best quality (SUM(level)/COUNT(*)). I could create a query something like this: SELECT TS.team_id, SUM(TS.size) as OverallSize, SUM(TS.Level)/COUNT(TS.Id) AS QualityLevel FROM team_sector TS GROUP BY TS.team_id ORDER BY OverallSize DESC / ORDER BY QualityLevel DESC. But my concern here is that the calculation for each team will be done every time the query is performed. It is not too big an overhead (at least for now), but I would like to avoid performance issues later. I see 2 options here. The 1st one is to create 2 additional fields in the 'team' table (for example) and store the OverallSize and QualityLevel values there. If information in the 'sector' table is changed, update those fields too (it would probably be good to do that with triggers, as the sector table doesn't change too often). The 2nd option is to create a view that provides the required data. The 2nd option seems much easier to me, but I don't have a lot of experience/knowledge of working with views. Q1: What is the best option from your perspective here, and why? Perhaps you could suggest other options? Q2: Can I create a view in such a way that it does the calculations rarely (at least once per day)? If yes, how? Q3: Is it reasonable to use triggers for such a purpose (1st option)? P.S. MySQL 5.1 is used; the overall number of teams is around 1-2 thousand, and the overall number of records in the sector table is 6-8 thousand. I understand those numbers are pretty small, but I would like to implement the best practice here.
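
    For reference, a plain view over the query in the post would look roughly like this (the view name is an assumption):

        CREATE VIEW team_stadium_stats AS
        SELECT TS.team_id,
               SUM(TS.size)                 AS OverallSize,
               SUM(TS.level) / COUNT(TS.id) AS QualityLevel
        FROM team_sector TS
        GROUP BY TS.team_id;

    Note that MySQL has no materialized views: a view like this is re-evaluated on every query against it, so it does not by itself achieve the "calculate rarely" behaviour asked about in Q2.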
