Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

Page 493 of 545

  • Using map() on a _set in a template?

    - by Stuart Grimshaw
    I have two models like this:

        class KPI(models.Model):
            """KPI model to hold the basic info on a Key Performance Indicator"""
            title = models.CharField(blank=False, max_length=100)
            description = models.TextField(blank=True)
            target = models.FloatField(blank=False, null=False)
            group = models.ForeignKey(KpiGroup)
            subGroup = models.ForeignKey(KpiSubGroup, null=True)
            unit = models.TextField(blank=True)
            owner = models.ForeignKey(User)
            bt_measure = models.BooleanField(default=False)

        class KpiHistory(models.Model):
            """A historical log of previous KPI values."""
            kpi = models.ForeignKey(KPI)
            measure = models.FloatField(blank=False, null=False)
            kpi_date = models.DateField()

    I'm using RGraph to display the stats on internal wallboards. Handily, Python lists are output in a format that JavaScript sees as an array, so I map all the values into a list like this:

        def f(x):
            return float(x.measure)

        stats = map(f, KpiHistory.objects.filter(kpi=1))

    Then in the template I can simply use {{ stats }} and the RGraph code sees it as an array, which is exactly what I want:

        [87.0, 87.5, 88.5, 90]

    So my question is this: is there any way I can achieve the same effect using Django's _set functionality, to keep down the amount of data I'm passing into the template? Up until now I've been passing in a single KPI object to be graphed, but now I want to pass in a whole bunch, so is there anything I can do with _set? {{ kpi.kpihistory_set }} dumps the whole model out, but I just want the measure field. I can't see any built-in template method that will let me pull out just the single field I want. How have other people handled this situation?
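
    One approach, sketched below on the assumption that the view can shape the data before it reaches the template: Django's values_list() pulls a single column straight from the database, so no mapping function is needed and only the measure values get passed along.

        # a minimal sketch: flat=True yields plain floats rather than 1-tuples
        stats = list(KpiHistory.objects.filter(kpi=1).values_list('measure', flat=True))

    The list() call forces the query, so {{ stats }} renders as a plain Python list, which RGraph again reads as a JavaScript array.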

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I’m working on optimizing some of my queries and I have a query that states:

        select * from SC where c_id = "+c_id"

    The schema of SC looks like this:

        SC (
          c_id int not null,
          date_start date not null,
          date_stop date not null,
          r_t_id int not null,
          nt int,
          t_p decimal,
          PRIMARY KEY (c_id, r_t_id, date_start, date_stop)
        );

    My immediate bid on how the index should be created is a covering index in this order:

        INDEX (c_id, date_start, date_stop, nt, r_t_id, t_p)

    My reasoning for this order:

    1. The WHERE clause selects on c_id, thus making it the first column.
    2. Next, date_start and date_stop, to specify a sort of "range" defined by these parameters.
    3. Next, nt, because it will select the nt.
    4. Next, r_t_id, because it is an ID for a specific type in my r_t table.
    5. And last, t_p, because it is just information.

    I don't know if it is at all necessary to order the index in a specific way when it is a SELECT * statement. I should say that SC is not the biggest table. I can't say exactly how many rows it contains, but an estimate would be between <10 and 1000. The next thing to add is that other queries insert data into SC, and I know that indexes on tables with frequent insertions can be cost-ineffective, but is there a golden middle way that keeps this performing well? Don't know if it makes a difference, but I'm using an IBM DB2 version 9.7 database. Sincerely, Mestika
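
    Since the WHERE clause filters only on c_id and the query selects every column, a long covering index buys little here; something like the hedged sketch below may be enough. Note also that the existing primary key already leads with c_id, so DB2 may satisfy the lookup from that index alone.

        -- a minimal sketch; the index name is made up for illustration
        CREATE INDEX sc_cid_ix ON SC (c_id);

    On a table of under 1000 rows the optimizer may simply scan regardless, so checking the access plan (e.g. with db2expln) before paying insert-time overhead is worthwhile.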

    Read the article

  • R optimization: How can I avoid a for loop in this situation?

    - by chrisamiller
    I'm trying to do a simple genomic track intersection in R, and running into major performance problems, probably related to my use of for loops. In this situation, I have pre-defined windows at intervals of 100bp and I'm trying to calculate how much of each window is covered by the annotations in mylist. Graphically, it looks something like this:

        0    100   200   300   400   500   600
        windows: |-----|-----|-----|-----|-----|-----|
        mylist:  |-|       |-----------|

    So I wrote some code to do just that, but it's fairly slow and has become a bottleneck in my code:

        ## window for each 100-bp segment
        windows <- numeric(6)

        ## second track
        mylist = vector("list")
        mylist[[1]] = c(1,20)
        mylist[[2]] = c(120,320)

        ## do the intersection
        for(i in 1:length(mylist)){
          st <- floor(mylist[[i]][1]/100)+1
          sp <- floor(mylist[[i]][2]/100)+1
          for(j in st:sp){
            b <- max((j-1)*100, mylist[[i]][1])
            e <- min(j*100, mylist[[i]][2])
            windows[j] <- windows[j] + e - b + 1
          }
        }

        print(windows)
        [1]  20  81 101  21   0   0

    Naturally, this is being used on data sets that are much larger than the example I provide here. Through some profiling, I can see that the bottleneck is in the for loops, but my clumsy attempt to vectorize it using *apply functions resulted in code that runs an order of magnitude more slowly. I suppose I could write something in C, but I'd like to avoid that if possible. Can anyone suggest another approach that will speed this calculation up?
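
    One fully vectorized route, sketched under the assumption that intervals never extend past the last window: expand every interval into one row per overlapped window with rep() and sequence(), compute all the overlaps at once with pmin()/pmax(), and sum them per window.

        # a hedged sketch; reproduces c(20, 81, 101, 21, 0, 0) on the example data
        starts <- sapply(mylist, `[`, 1)
        ends   <- sapply(mylist, `[`, 2)
        st <- floor(starts / 100) + 1           # first window per interval
        sp <- floor(ends / 100) + 1             # last window per interval
        n  <- sp - st + 1                       # windows touched per interval
        j  <- sequence(n) + rep(st, n) - 1      # window index for every overlap
        s  <- rep(starts, n)
        e  <- rep(ends, n)
        olap <- pmin(j * 100, e) - pmax((j - 1) * 100, s) + 1
        windows <- numeric(6)
        agg <- tapply(olap, j, sum)             # sum overlaps per window
        windows[as.integer(names(agg))] <- agg

    For genome-scale data, the Bioconductor IRanges package solves exactly this problem with optimized interval containers and may be worth a look as well.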

    Read the article

  • cython setup.py gives .o instead of .dll

    - by alok1974
    Hi, I am a newbie to cython, so pardon me if I am missing something obvious here. I am trying to build C extensions to be used in Python for enhanced performance. I have an fc.py module with a bunch of functions, and I'm trying to generate a .dll through Cython using distutils, running on win64:

        c:\python26\python c:\cythontest\setup.py build_ext --inplace

    I have distutils.cfg in C:\Python26\Lib\distutils, with the required config settings:

        [build]
        compiler = mingw32

    My setup.py looks like this:

        from distutils.core import setup
        from distutils.extension import Extension
        from Cython.Distutils import build_ext

        ext_modules = [Extension('fc', [r'C:\cythonTest\fc.pyx'])]

        setup(
            name = 'FC Extensions',
            cmdclass = {'build_ext': build_ext},
            ext_modules = ext_modules
        )

    I have the latest version of mingw for target/host amd64 (win64) builds, and the latest version of Cython for Python 2.6 on win64. Cython does give me an fc.c without errors, only a few warnings about type conversions, which I will handle once I have it right. Further, it produces fc.def and fc.o files instead of giving a .dll. I get no errors. I find in threads that it should create the .so or .dll automatically as required, which is not happening.
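
    It may be worth ruling out the config file not being picked up; one hedged guess is to force the compiler on the command line, which makes the link step run under mingw32 and, when it succeeds, produces fc.pyd (the Windows equivalent of a .dll for Python extension modules) rather than stopping at fc.o:

        c:\python26\python c:\cythontest\setup.py build_ext --inplace --compiler=mingw32

    Note also that stock mingw32 emits 32-bit objects; a 64-bit Python 2.6 needs a mingw-w64 toolchain, so stopping silently after the compile step can simply be the link being skipped over a toolchain/interpreter mismatch.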

    Read the article

  • Do I need to use locking on integers in C++ threads?

    - by Shane MacLaughlin
    The title says it all really. If I am accessing a single integer type (e.g. long, int, bool, etc.) from multiple threads, do I need to use a synchronisation mechanism such as a mutex to lock it? My understanding is that, being atomic types, I don't need to lock access to a single integer, but I see a lot of code out there that does use locking. Profiling such code shows that there is a significant performance hit for using locks, so I'd rather not. So if the item I'm accessing corresponds to a bus-width integer (e.g. 4 bytes on a 32 bit processor), do I need to lock access to it when it is being used across multiple threads? Put another way, if thread A is writing to integer variable X at the same time as thread B is reading from the same variable, is it possible that thread B could end up with a few bytes of the previous value mixed in with a few bytes of the value being written? Is this architecture dependent, e.g. OK for 4-byte integers on 32 bit systems but unsafe for 8-byte integers on 64 bit systems? Edit: Just saw this related post which helps a fair bit.
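
    If a C++11 compiler is available (an assumption, given the question's vintage), std::atomic makes the guarantee explicit instead of relying on architecture details; a minimal sketch:

        // std::atomic<int> guarantees tear-free reads and writes on every
        // architecture, without a mutex
        #include <atomic>
        #include <iostream>
        #include <thread>

        std::atomic<int> x{0};

        int main() {
            std::thread writer([] { for (int i = 0; i < 100000; ++i) x.store(i); });
            std::thread reader([] {
                long sum = 0;
                for (int i = 0; i < 100000; ++i) sum += x.load();   // never a torn value
                std::cout << "reader saw sum " << sum << "\n";
            });
            writer.join();
            reader.join();
        }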

    Read the article

  • Why do I get this exception? ("An item with the same key has already been added.")

    - by Alan
    @aknittel: NewSellerID is the result of a lookup on tblSellers. These tables (tblSellerListings and tblSellers) are not "officially" joined with a foreign key relationship, either in the model or in the database, but I want some referential integrity maintained for the future. So my issue remains: why do I get the exception ("An item with the same key has already been added.") with this code, unless I begin each iteration of the foreach loop with a new ObjectContext and end it with SaveChanges, which I think will hurt performance? Also, could you tell me why ORCSolutionsDataService.tblSellerListings (an ADO.NET Data Services/WCF object) is not IDisposable, like LINQ to Entities?

        // Add listings to previous seller
        int NewSellerID = 0;

        // Look up existing Seller key using SellerUniqueEBAYID
        var qryCurrentSeller = from s in service.tblSellers
                               where s.SellerEBAYUserID == SellerUserID
                               select s;
        foreach (var s in qryCurrentSeller)
            NewSellerID = s.SellerID;

        // Save the selected listings for this seller
        foreach (DataGridViewRow dgr in dgvRows)
        {
            ORCSolutionsDataService.tblSellerListings NewSellerListing =
                new ORCSolutionsDataService.tblSellerListings();
            NewSellerListing.ItemID = dgr.Cells["txtSellerItemID"].Value.ToString();
            NewSellerListing.Title = dgr.Cells["txtSellerItemTitle"].Value.ToString();
            NewSellerListing.CurrentPrice = Convert.ToDecimal(dgr.Cells["txtSellerItemPrice"].Value);
            NewSellerListing.QuantitySold = Convert.ToInt32(dgr.Cells["txtSellerItemSold"].Value);
            NewSellerListing.EndTime = Convert.ToDateTime(dgr.Cells["txtSellerItemEnds"].Value);
            NewSellerListing.CategoryName = dgr.Cells["txtSellerItemCategory"].Value.ToString();
            NewSellerListing.ExtendedPrice = Convert.ToDecimal(dgr.Cells["txtExtendedReceipts"].Value);
            NewSellerListing.RetrievedDtime = Convert.ToDateTime(dtSellerDataRetrieved.ToString());
            NewSellerListing.SellerID = NewSellerID;
            service.AddTotblSellerListings(NewSellerListing);
        }
        service.SaveChanges();
        }
        catch (Exception ex)
        {
            MessageBox.Show("Unable to add a new case. Exception: " + ex.Message);
        }

    Read the article

  • Cannot disable index during PL/SQL procedure

    - by nw
    I've written a PL/SQL procedure that would benefit if indexes were first disabled, then rebuilt upon completion. An existing thread suggests this approach:

        alter session set skip_unusable_indexes = true;
        alter index your_index unusable;
        [do import]
        alter index your_index rebuild;

    However, I get the following error on the first alter index statement:

        SQL Error: ORA-14048: a partition maintenance operation may not be combined with other operations
        ORA-06512: [...]
        14048. 00000 - "a partition maintenance operation may not be combined with other operations"
        *Cause:  ALTER TABLE or ALTER INDEX statement attempted to combine a
                 partition maintenance operation (e.g. MOVE PARTITION) with some
                 other operation (e.g. ADD PARTITION or PCTFREE), which is illegal
        *Action: Ensure that a partition maintenance operation is the sole
                 operation specified in ALTER TABLE or ALTER INDEX statement;
                 operations other than those dealing with partitions, default
                 attributes of partitioned tables/indices or specifying that a
                 table be renamed (ALTER TABLE RENAME) may be combined at will

    The problem index is defined so:

        CREATE INDEX A11_IX1 ON STREETS ("SHAPE")
          INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX"
          PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2');

    This is a custom index type from a 3rd-party vendor, and it causes chronic performance degradation during high-volume update/insert/delete operations. Any suggestions on how to work around this error? By the way, this error only occurs within a PL/SQL block.
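
    One hedged workaround, given that domain indexes often reject being marked UNUSABLE: drop and recreate the index around the import via dynamic SQL, since DDL inside PL/SQL has to go through EXECUTE IMMEDIATE anyway.

        -- a sketch only; rebuilding a large spatial index has its own cost
        BEGIN
          EXECUTE IMMEDIATE 'DROP INDEX A11_IX1';
          -- [do import]
          EXECUTE IMMEDIATE q'[CREATE INDEX A11_IX1 ON STREETS ("SHAPE")
            INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX"
            PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2')]';
        END;
        /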

    Read the article

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program and compiled it under the Solaris and Linux platforms to measure the performance of applying this code to my application. The program works like this: initially, using the sbrk(0) system call, I take the base address of the heap region. After that I allocate 1.5GB of memory using malloc, then I use memcpy to copy 1.5GB of content into the allocated area. Then I free the allocated memory. After freeing, I use the sbrk(0) system call again to view the heap size. This is where I am a little confused. On Solaris, even though I freed the memory allocated (nearly 1.5GB), the heap size of the process remains huge. But when I run the same application on Linux, after freeing, I find that the heap size of the process is equal to the size of the heap before the 1.5GB allocation. I know Solaris does not free memory immediately, but I don't know how to tune the Solaris kernel to free the memory immediately after the free() call. Also, please explain why the same problem does not come up under Linux? Can anyone help me out of this? Thanks, Santhosh.
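
    For what it's worth, the Linux behaviour is likely glibc's doing rather than the kernel's: allocations above the mmap threshold (128KB by default) are served by mmap() and really are returned to the OS on free(), while the Solaris libc malloc keeps freed space on its free list. A hedged way to get the same guarantee on both platforms is to bypass malloc for very large buffers:

        /* a sketch: munmap() provably returns the pages to the kernel,
           independent of either platform's malloc policy */
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        int main(void) {
            size_t len = (size_t)1500 * 1024 * 1024;
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANON, -1, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }
            memset(p, 'x', len);   /* stand-in for the 1.5GB memcpy */
            munmap(p, len);        /* pages go back to the OS immediately */
            return 0;
        }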

    Read the article

  • Slow query. Wrong database structure?

    - by Tin
    I have a database with a table that contains tasks. Tasks have a lifecycle, and the status of a task's lifecycle can change. These state transitions are stored in a separate table, tasktransitions. Now I wrote a query to find all open/reopened tasks plus recently changed tasks, but I already see with a rather small number of tasks (<1000) that execution time has become very long (0.5s).

    Tasks:

        +-------------+---------+------+-----+---------+----------------+
        | Field       | Type    | Null | Key | Default | Extra          |
        +-------------+---------+------+-----+---------+----------------+
        | taskid      | int(11) | NO   | PRI | NULL    | auto_increment |
        | description | text    | NO   |     | NULL    |                |
        +-------------+---------+------+-----+---------+----------------+

    Tasktransitions:

        +------------------+-----------+------+-----+-------------------+----------------+
        | Field            | Type      | Null | Key | Default           | Extra          |
        +------------------+-----------+------+-----+-------------------+----------------+
        | tasktransitionid | int(11)   | NO   | PRI | NULL              | auto_increment |
        | taskid           | int(11)   | NO   | MUL | NULL              |                |
        | status           | int(11)   | NO   | MUL | NULL              |                |
        | description      | text      | NO   |     | NULL              |                |
        | userid           | int(11)   | NO   |     | NULL              |                |
        | transitiondate   | timestamp | NO   |     | CURRENT_TIMESTAMP |                |
        +------------------+-----------+------+-----+-------------------+----------------+

    Query:

        SELECT tasks.taskid, tasks.description, tasklaststatus.status
        FROM tasks
        LEFT OUTER JOIN (
            SELECT tasktransitions.taskid, tasktransitions.transitiondate, tasktransitions.status
            FROM tasktransitions
            INNER JOIN (
                SELECT taskid, MAX(transitiondate) AS lasttransitiondate
                FROM tasktransitions
                GROUP BY taskid
            ) AS tasklasttransition
            ON tasklasttransition.lasttransitiondate = tasktransitions.transitiondate
            AND tasklasttransition.taskid = tasktransitions.taskid
        ) AS tasklaststatus ON tasklaststatus.taskid = tasks.taskid
        WHERE tasklaststatus.status IS NULL
           OR tasklaststatus.status = 0
           OR tasklaststatus.transitiondate > '2013-09-01';

    I'm wondering if the database structure is the best choice performance-wise. Could adding indexes help? I already tried to add some but I don't see great improvements:

        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+
        | Table           | Non_unique | Key_name       | Seq_in_index | Column_name      | Collation | Cardinality |
        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+
        | tasktransitions |          0 | PRIMARY        |            1 | tasktransitionid | A         |         896 |
        | tasktransitions |          1 | taskid_date_ix |            1 | taskid           | A         |         896 |
        | tasktransitions |          1 | taskid_date_ix |            2 | transitiondate   | A         |         896 |
        | tasktransitions |          1 | status_ix      |            1 | status           | A         |           3 |
        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+

    (Sub_part, Packed, Null, Index_type and the comment columns are omitted above; all were NULL, BTREE or empty.) Any other suggestions?
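
    One structural alternative that usually helps this latest-row-per-group pattern: cache the latest transition on the task itself, so the aggregate disappears from the hot path. A hedged sketch (the new column names are made up):

        -- maintained by the application or a trigger whenever a transition is inserted
        ALTER TABLE tasks
          ADD COLUMN laststatus int NULL,
          ADD COLUMN lasttransitiondate timestamp NULL;

        SELECT taskid, description, laststatus
        FROM tasks
        WHERE laststatus IS NULL
           OR laststatus = 0
           OR lasttransitiondate > '2013-09-01';

    Short of that, extending taskid_date_ix to (taskid, transitiondate, status) would at least let MySQL resolve both the GROUP BY subquery and the status lookup from the index alone.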

    Read the article

  • Type-safe generic data structures in plain-old C?

    - by Bradford Larsen
    I have done far more C++ programming than "plain old C" programming. One thing I sorely miss when programming in plain C is type-safe generic data structures, which are provided in C++ via templates. For the sake of concreteness, consider a generic singly linked list. In C++, it is a simple matter to define your own template class and then instantiate it for the types you need. In C, I can think of a few ways of implementing a generic singly linked list:

    1. Write the linked list type(s) and supporting procedures once, using void pointers to go around the type system.
    2. Write preprocessor macros taking the necessary type names, etc., to generate a type-specific version of the data structure and supporting procedures.
    3. Use a more sophisticated, stand-alone tool to generate the code for the types you need.

    I don't like option 1, as it subverts the type system and would likely have worse performance than a specialized type-specific implementation. Using a uniform representation of the data structure for all types, and casting to/from void pointers, so far as I can see, necessitates an indirection that would be avoided by an implementation specialized for the element type. Option 2 doesn't require any extra tools, but it feels somewhat clunky and could give bad compiler errors when used improperly. Option 3 could give better compiler error messages than option 2, as the specialized data structure code would reside in expanded form that could be opened in an editor and inspected by the programmer (as opposed to code generated by preprocessor macros). However, this option is the most heavyweight, a sort of "poor man's templates". I have used this approach before, using a simple sed script to specialize a "templated" version of some C code. I would like to program my future "low-level" projects in C rather than C++, but have been frightened by the thought of rewriting common data structures for each specific type. What experience do people have with this issue? Are there good libraries of generic data structures and algorithms in C that do not go with option 1 (i.e. casting to and from void pointers, which sacrifices type safety and adds a level of indirection)?
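
    For what it's worth, option 2 can stay fairly small; a minimal sketch of the macro approach:

        /* a macro that stamps out a type-specific list node and push function */
        #include <stdlib.h>

        #define DEFINE_LIST(T)                                         \
            typedef struct node_##T {                                  \
                T value;                                               \
                struct node_##T *next;                                 \
            } node_##T;                                                \
                                                                       \
            static node_##T *push_##T(node_##T *head, T value) {       \
                node_##T *n = malloc(sizeof *n);                       \
                if (!n) return head;                                   \
                n->value = value;                                      \
                n->next = head;                                        \
                return n;                                              \
            }

        DEFINE_LIST(int)      /* generates node_int and push_int */
        DEFINE_LIST(double)   /* generates node_double and push_double */

        int main(void) {
            node_int *xs = NULL;
            xs = push_int(xs, 42);   /* type-checked: push_int(xs, "a") won't compile */
            return 0;
        }

    Each instantiation is checked by the compiler against the concrete element type, which is exactly the property void pointers give up.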

    Read the article

  • Good Hosting Providers With Zend Framework Support

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got like 20 domains hosted in my account, none of them are high volume, and they work JUST good enough - until today. This isn't meant to be a condemnation of ixwh though. Their prices are very low for what they do offer, and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money-making websites - they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, multiple domains - not much. Who should I look at, and who should I look out for? Also, I put this on SO instead of SF because of the Zend Framework-specific requirement. If I'm wrong, do as you wish.

    Read the article

  • Ruby integer to string key

    - by Gene
    A system I'm building needs to convert non-negative Ruby integers into shortest-possible UTF-8 string values. The only requirement on the strings is that their lexicographic order be identical to the natural order on the integers. What's the best Ruby way to do this? We can assume the integers are 32 bits and the sign bit is 0. This is successful:

        (i >> 24).chr + ((i >> 16) & 0xff).chr + ((i >> 8) & 0xff).chr + (i & 0xff).chr

    But it appears to be 1) garbage-intense and 2) ugly. I've also looked at pack solutions, but these don't seem portable due to byte order. FWIW, the application is Redis hash field names. Building keys may be a performance bottleneck, but probably not. This question is mostly about the "Ruby way".
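
    On the byte-order worry: pack has explicitly sized, explicitly ordered directives, so one hedged suggestion is the "N" directive, which is defined as 32-bit big-endian (network order) on every platform and therefore sorts the same way the integers do:

        # a minimal sketch: one 4-byte string per integer, portable byte order
        def int_key(i)
          [i].pack("N")
        end

        int_key(5) < int_key(300)   # => true, matching 5 < 300

    The result is a raw binary string rather than validated UTF-8, but so is the .chr concatenation above, and Redis treats field names as binary-safe bytes either way.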

    Read the article

  • What's the best practice for handling system-specific information under version control?

    - by Joe
    I'm new to version control, so I apologize if there is a well-known solution to this. For this problem in particular, I'm using git, but I'm curious about how to deal with this for all version control systems. I'm developing a web application on a development server. I have defined the absolute path name to the web application (not the document root) in two places. On the production server, this path is different. I'm confused about how to deal with this. I could either:

    1. Reconfigure the development server to share the same path as production.
    2. Edit the two occurrences each time production is updated.

    I don't like #1 because I'd rather keep the application flexible for any future changes. I don't like #2 because if I start developing on a second development server with a third path, I would have to change this for every commit and update. What is the best way to handle this? I thought of:

    1. Using custom keywords and variable expansion (such as setting the property $PATH$ in the version control properties and having it expanded in all the files). Git doesn't support this because it would be a huge performance hit.
    2. Using post-update and pre-commit hooks. Possibly the likely solution for git, but every time I looked at the status, it would report the two files as being changed. Not really clean.
    3. Pulling the path from a config file outside of version control. Then I would have to have the config file in the same location on all servers. Might as well just have the same path to begin with.

    Is there an easy way to deal with this? Am I over-thinking it?
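
    A common refinement of the third idea is to version a template instead of the config itself, so the per-server copy can live anywhere the working tree does; a hedged sketch with made-up file names:

        # once, in the repository
        git mv config.php config.php.example
        echo "config.php" >> .gitignore

        # once per server (dev, prod, any future box)
        cp config.php.example config.php   # then edit the path inside

    Status stays clean because the real config.php is ignored, and each server's path survives every pull and update.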

    Read the article

  • Why would one want to use the public constructors on Boolean and similar immutable classes?

    - by Robert J. Walker
    (For the purposes of this question, let us assume that one is intentionally not using auto(un)boxing, either because one is writing pre-Java 1.5 code, or because one feels that autounboxing makes it too easy to create NullPointerExceptions.)

    Take Boolean, for example. The documentation for the Boolean(boolean) constructor says:

        Note: It is rarely appropriate to use this constructor. Unless a new
        instance is required, the static factory valueOf(boolean) is generally
        a better choice. It is likely to yield significantly better space and
        time performance.

    My question is, why would you ever want to get a new instance in the first place? It seems like things would be simpler if constructors like that were private. For example, if they were, you could write this with no danger (even if myBoolean were null):

        if (myBoolean == Boolean.TRUE)

    It'd be safe because all true Booleans would be references to Boolean.TRUE and all false Booleans would be references to Boolean.FALSE. But because the constructors are public, someone may have used them, which means that you have to write this instead:

        if (Boolean.TRUE.equals(myBoolean))

    But where it really gets bad is when you want to check two Booleans for equality. Something like this:

        if (myBooleanA == myBooleanB)

    ...becomes this:

        if ( (myBooleanA == null && myBooleanB == null)
          || (myBooleanA != null && myBooleanA.equals(myBooleanB)) )

    I can't think of any reason to have separate instances of these objects which is more compelling than not having to do the nonsense above. What say you?
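
    For illustration, the caching behaviour the question leans on can be seen directly (a minimal sketch; new Boolean(...) is exactly the escape hatch being complained about):

        Boolean a = Boolean.valueOf(true);            // canonical cached instance
        Boolean b = new Boolean(true);                // a distinct instance
        System.out.println(a == Boolean.TRUE);        // true
        System.out.println(b == Boolean.TRUE);        // false
        System.out.println(b.equals(Boolean.TRUE));   // true

    One commonly cited reason the constructor exists at all is history: it predates valueOf(boolean), which only appeared in Java 1.4, so it survives for backwards compatibility.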

    Read the article

  • Best way of showing more results with javascript/css

    - by Ricardo Neves
    I'm developing a website and I'm having trouble showing the search results to the user the way I want. Basically, after the user searches, the page makes a couple of AJAX requests, and as soon as a response arrives it appends the info to a specific element on the page. Each result is shown as a line. The problem is that in most cases there are going to be more than 1000 results, and this would give the page a very long scroll. My idea was to show only the first 15 results and, when the user clicks "show more", the element would expand and show the next 15 results, and so on. This would be easier to do if the website weren't responsive, but because it is, I can't find the proper way of implementing what I want without lowering the website's performance. I have two ideas:

    1. The first is by using something like #element .div:nth-child(-n+15) in my CSS and figuring out a way of changing the "15" to however many results I want to show. I don't know if this can be done. Is it possible to call CSS rules with parameters? Maybe with LESS CSS?
    2. The second option is probably a bad one if I don't want to lower the website's performance: using JavaScript, I would check if a specific CSS class exists (like .show-15, .show-30, .show-45) and add that class to my element, and if it doesn't exist, create it somehow.

    Any help would be appreciated.
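
    A third route, sketched below with made-up element names, avoids generating CSS at runtime altogether: keep every appended row hidden by one static rule and reveal them in batches of 15 with a class toggle.

        // assumes two static CSS rules exist:
        //   #results .row          { display: none; }
        //   #results .row.visible  { display: block; }
        var shown = 0;
        var BATCH = 15;
        function showMore() {
          var rows = document.querySelectorAll('#results .row');
          var limit = Math.min(shown + BATCH, rows.length);
          for (var i = shown; i < limit; i++) {
            rows[i].classList.add('visible');   // reveal the next batch
          }
          shown = limit;
        }
        document.getElementById('show-more').onclick = showMore;

    Only a class list changes per click, so layout cost stays proportional to the 15 newly revealed rows rather than the full result set.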

    Read the article

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that: in situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use.

    So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods: I'll get a list and then loop through it, getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time.

    Of course, the mileage will vary depending on the project, but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others have done who have already been through this. What are some good approaches to parallelizing this type of thing?
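
    For reference, a hedged sketch of the partitioned-loop idea on .NET 4's Task Parallel Library (the type and method names are placeholders); the key constraint is that a LINQ to SQL DataContext is not thread-safe, so each worker needs its own:

        // requires System.Threading.Tasks and System.Collections.Concurrent;
        // ids is the list of entity keys to hydrate
        var results = new ConcurrentBag<Item>();
        Parallel.ForEach(ids, id =>
        {
            using (var db = new MyDataContext())          // one context per worker
            {
                results.Add(GetItemWithChildren(db, id)); // the existing GetItem cascade
            }
        });

    Parallel.ForEach handles the partitioning that was previously done by hand, and the degree of parallelism can be capped with ParallelOptions if the connection pool becomes the bottleneck.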

    Read the article

  • JMS message. Model to include data or pointers to data?

    - by John
    I am trying to resolve a design difference of opinion where neither of us has experience with JMS. We want to use JMS to communicate between a J2EE application and a stand-alone application when a new event occurs. We would be using a single point-to-point queue. Both sides are Java-based. The question is whether to send the event data itself in the JMS message body or to send a pointer to the data so that the stand-alone program can retrieve it. Details below.

    I have a J2EE application that supports data entry of new and updated persons and related events. The person records and associated events are written to an Oracle database. There are also stand-alone, separate programs that contribute new person and event records to the database. When a new event occurs through any of 5-10 different application functions, I need to notify remote systems through an outbound interface using an industry-specific standard messaging protocol. The outbound interface has been designed as a stand-alone application to support scalability through asynchronous operation and by moving it to a separate server.

    The J2EE application currently has most of the data in memory at the time the event is entered. The data would consist of approximately 6 different objects: a person object and some with multiple instances, for an average size in the range of 3,000 to 20,000 bytes. Some special cases could be many times this amount. From a performance and reliability perspective, should I model the JMS message to pass all the data needed to create the interface message, or model the JMS message to contain record keys for the data and have the stand-alone Java application retrieve the data to create the interface message?

    Read the article

  • Advice on Minimizing Stored Procedure Parameters

    - by RPM1984
    Hi guys, I have an ASP.NET MVC web application that interacts with a SQL Server 2008 database via Entity Framework 4.0. On a particular page, I call a stored procedure in order to pull back some results based on selections in the UI. Now, the UI has around 20 different input selections, ranging from a textbox to dropdown lists, checkboxes, etc. Each of those inputs is "grouped" into logical sections. Example:

        Search box:  "Foo"
        Checkbox A1: ticked    Checkbox A2: unticked
        Dropdown A:  option 3 selected
        Checkbox B1: ticked    Checkbox B2: ticked    Checkbox B3: unticked

    So I need to call the SPROC like this:

        exec SearchPage_FindResults
            @SearchQuery = 'Foo',
            @IncludeA1 = 1,
            @IncludeA2 = 0,
            @DropDownSelection = 3,
            @IncludeB1 = 1,
            @IncludeB2 = 1,
            @IncludeB3 = 0

    The UI is not too important to this question; I just wanted to give some perspective. Essentially, I'm pulling back results for a search query, filtering those results based on a bunch of (optional) selections the user can filter on. My questions/queries:

    1. What's the best way to pass these parameters to the stored procedure? Are there any tricks/new ways (e.g. SQL Server 2008) to do this? Special "table" parameters/arrays - can we pass through user-defined types? Keep in mind I'm using Entity Framework 4.0, but I could always use classic ADO.NET for this if required.
    2. What about XML? What are the serialization/de-serialization costs here? Is it worth it?
    3. How about a parameter for each logical section? Comma-separated perhaps? Just thinking out loud.

    This page is particularly important from a user point of view and needs to perform really well. The stored procedure is already heavy in logic, so I want to minimize the performance implications - keep that in mind. With that said, what is the best approach here?
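
    On point 1: SQL Server 2008 does support exactly this via table-valued parameters. A hedged sketch (the type and column names are invented):

        -- one TVP carries every optional checkbox selection
        CREATE TYPE dbo.FilterSelection AS TABLE
        (
            FilterName nvarchar(50) NOT NULL,
            IsSelected bit          NOT NULL
        );
        GO
        CREATE PROCEDURE dbo.SearchPage_FindResults_Tvp
            @SearchQuery nvarchar(100),
            @Filters     dbo.FilterSelection READONLY
        AS
        BEGIN
            -- join @Filters into the existing search logic; placeholder body:
            SELECT FilterName, IsSelected FROM @Filters;
        END

    From .NET, the parameter is passed as a DataTable with SqlDbType.Structured; Entity Framework 4.0 does not expose TVPs directly, so this one call would go through classic ADO.NET, which the question already allows for.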

    Read the article

  • How to give new life into a five years old, simple but reliable PHP form?

    - by Sam
    Hi all. I have a script in PHP 5.2. I want to use a simple form. I found something a programmer made for me about 5 years ago. When I use it, PHP outputs an error now unless I set register_long_arrays = On; then it works fine. On the PHP website, however, it says:

        Warning: This feature has been DEPRECATED as of PHP 5.3.0. Relying on
        this feature is highly discouraged. It's recommended to turn them off,
        for performance reasons. Instead, use the superglobal arrays, like $_GET.

    Should I listen to PHP's warning, or just enable the option and keep using my old form happily? If the former, then how/where do I change this simple form so it does not rely on the deprecated setting? Your answer is much appreciated.

    form.htm:

        <html><body>
        <form method="POST" action="form_sent.php">
        ...
        </form>
        </body></html>

    form_sent.php:

        <html><body>
        <?php
        $email = $HTTP_POST_VARS[email];
        $mailto = "[email protected]";
        $mailsubj = "A Form was Sent from Website!";
        $mailhead = "From: $email\n";
        reset ($HTTP_POST_VARS);
        $mailbody = "Values submitted from web site form:\n";
        while (list($key, $val) = each ($HTTP_POST_VARS)) { $mailbody .= "$key : $val\n"; }
        if (!eregi("\n",$HTTP_POST_VARS[email])) {
            mail($mailto, $mailsubj, $mailbody, $mailhead);
        }
        ?>
        <b>Form Sent. Thank you.</b>
        </body></html>
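
    Listening to the warning only takes a few line changes; a hedged sketch of form_sent.php (the recipient address is left as the placeholder from the question):

        <html><body>
        <?php
        // $_POST replaces the long-deprecated $HTTP_POST_VARS, and
        // preg_match replaces eregi(), which is itself deprecated in 5.3
        $email    = isset($_POST['email']) ? $_POST['email'] : '';
        $mailto   = "[email protected]";
        $mailsubj = "A Form was Sent from Website!";
        $mailhead = "From: $email\n";
        $mailbody = "Values submitted from web site form:\n";
        foreach ($_POST as $key => $val) {
            $mailbody .= "$key : $val\n";
        }
        if (!preg_match("/\n/", $email)) {   // same header-injection check
            mail($mailto, $mailsubj, $mailbody, $mailhead);
        }
        ?>
        <b>Form Sent. Thank you.</b>
        </body></html>

    With this, register_long_arrays can stay off and the form keeps working on PHP 5.3 and later.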

    Read the article

  • Using array of Action() in a lambda expression

    - by Sean87
    I want to do some performance measurement for a method that does some work with int arrays, so I wrote the following class:

        public class TimeKeeper
        {
            public TimeSpan Measure(Action[] actions)
            {
                var watch = new Stopwatch();
                watch.Start();
                foreach (var action in actions)
                {
                    action();
                }
                return watch.Elapsed;
            }
        }

    But I cannot call the Measure method as in the example below:

        var elpased = new TimeKeeper();
        elpased.Measure(
            () => new Action[]
            {
                FillArray(ref a, "a", 10000),
                FillArray(ref a, "a", 10000),
                FillArray(ref a, "a", 10000)
            });

    I get the following errors:

        Cannot convert lambda expression to type 'System.Action[]' because it is not a delegate type
        Cannot implicitly convert type 'void' to 'System.Action'
        Cannot implicitly convert type 'void' to 'System.Action'
        Cannot implicitly convert type 'void' to 'System.Action'

    Here is the method that works with the arrays:

        private void FillArray(ref int[] array, string name, int count)
        {
            array = new int[count];
            for (int i = 0; i < array.Length; i++)
            {
                array[i] = i;
            }
            Console.WriteLine("Array {0} is now filled up with {1} values", name, count);
        }

    What am I doing wrong?
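
    The errors point at two things: Measure takes an Action[] directly (not a lambda returning one), and each FillArray call is being invoked immediately, producing void. Wrapping each call in its own lambda fixes both; a hedged sketch, assuming a is a local int[] variable:

        var keeper = new TimeKeeper();
        int[] a = null;
        TimeSpan elapsed = keeper.Measure(new Action[]
        {
            () => FillArray(ref a, "a", 10000),   // deferred: runs inside Measure
            () => FillArray(ref a, "a", 10000),
            () => FillArray(ref a, "a", 10000)
        });

    Each array element is now a delegate the Stopwatch loop can invoke, rather than the void result of an already-finished call.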

    Read the article

  • What collection object is appropriate for fixed ordering of values?

    - by makerofthings7
    Scenario: I am tracking several performance counters and have a CounterDescription[] correlated to a DataSnapshot[], where CounterDescription[n] describes the data loaded within DataSnapshot[n]. I want to expose an easy-to-use API within C# that will allow for the easy and efficient expansion of the arrays. For example:

        CounterDescription[0] = Humidity;  DataSnapshot[0] = .9;
        CounterDescription[1] = Temp;      DataSnapshot[1] = 63;

    My upload object is defined like this. Note how my intent is to correlate many DataSnapshots with a datetime reference, using the offset of the data to refer to its meaning. This was determined to be the most efficient way to store the data on the back-end, and has now reflected itself in the following structure:

        public class myDataObject
        {
            [DataMember]
            public SortedDictionary<DateTime, float[]> Pages { get; set; }

            /// <summary>
            /// An array that identifies what each position in the array is supposed to be
            /// </summary>
            [DataMember]
            public CounterDescription[] Counters { get; set; }
        }

    I will need to expand each of these arrays (float[] and CounterDescription[]), but whatever data already exists must stay at the same relative offset. Which .NET objects support this? I think Array, LinkedList<T>, and List<T> are able to keep the data fixed in the right locations. What do you think?
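
    Of those three, List<T> is the usual fit: it preserves insertion order (so existing offsets never move), appends in amortized O(1), and still gives O(1) access by index, which LinkedList<T> does not. A minimal sketch with made-up variable names:

        var counters  = new List<CounterDescription>();
        var snapshots = new List<float>();

        counters.Add(humidity); snapshots.Add(0.9f);   // offset 0
        counters.Add(temp);     snapshots.Add(63f);    // offset 1
        // counters[n] still describes snapshots[n] after any number of Adds

    If the DataContract must keep exposing arrays, List<T>.ToArray() can produce them at serialization time.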

    Read the article

  • Domain model for an online WYSIWYG webpage generator / runtime

    - by CharlieBrown
    Hi all, I'm using C#, MVC, NHibernate and StructureMap as my IoC container, and need some ideas regarding my domain model. The application I'm working on has two parts: an Authoring part and a Runtime part. The idea is to allow the user to create a webpage in Authoring (mostly a form, actually) by choosing from a set of predefined controls. That webpage will later be used as a form in a call center environment (the Runtime part), or may be used in an intranet portal, etc. Basically something similar to what a CMS would do. The difference is, of course, that the webpage/form the author generates will be used and fulfilled at runtime, and that authors should be able to freely create the webpage they want without limitations. I have a draft working model that allows a RunController to iterate over the ScriptPage (my class for the "generated webpage") Controls collection and uses partial views to render each of them. Works kind of fine. Basically I have a common ScriptControl class, and then I can create for example a TextInputControl or a DropDownControl by inheriting from that base class. I can also figure out the Authoring part of the app, although that will surely be fun in itself. :) The biggest problem I have now is persistence. In order to be flexible, I want to be able to add more controls, and template controls (think of an Address composite control), in separate DLLs, so I think having a relational model that handles every possible control is not the way to go. My current thinking is using a kind of ObjectStore: binary-serializing the ScriptPage object that contains the List collection and deserializing at Runtime, but I'm not sure how well it will work with NHibernate and how good the performance will be. Serializing a small "page" with 10 controls results in 7964 bytes, for example. Any ideas out there? Thanks in advance; excuse the length. ;)

    Read the article

  • Best way to keep a .net client app updated with status of another application

    - by rwmnau
    I have a Windows service that's running all the time and takes some action every 15 minutes. I also have a client WinForms app that displays some information about what the service is doing. I'd like the forms application to keep itself updated with a recent status, but I'm not sure if polling every second is a good move performance-wise. When it starts, my Windows service opens a WCF named pipe to receive queries (from my client form). Every second, a timer on the WinForm sends a query to the pipe and then displays the results. If the pipe isn't there, the form displays that the service isn't running. Is that the best way to do this? If my service opens the pipe when it starts, will it always stay open (until I close it or my service stops)? In addition to polling the service, maybe there's some way for the service to notify any watching applications of certain events, like starting and stopping processing? That way, I could poll less, since I'd presumably know about big events already, and would only be polling for progress. Anything that I'm missing?
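
    On the notification idea: WCF supports this pattern directly with a duplex (callback) contract over the named pipe binding, so the service can push events to subscribed clients instead of being polled. A hedged sketch of the contract shape (the names are invented):

        [ServiceContract(CallbackContract = typeof(IStatusCallback))]
        public interface IStatusService
        {
            [OperationContract]
            string GetStatus();   // existing pull-style query

            [OperationContract]
            void Subscribe();     // client registers for pushed events
        }

        public interface IStatusCallback
        {
            [OperationContract(IsOneWay = true)]
            void OnProcessingEvent(string description);   // service -> client
        }

    NetNamedPipeBinding supports duplex out of the box, so the existing pipe can carry both the occasional queries and the pushed start/stop events, letting the one-second timer be relaxed considerably.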

    Read the article

  • C++ AMP, for loops to parallel_for_each loop

    - by user1430335
    I'm converting an algorithm to make use of the massive acceleration that C++ AMP provides. The stage I'm at is putting the for loops into the known parallel_for_each loop. Normally this should be a straightforward task, but it appears more complex than I first thought. It's a nested loop which I increment in steps of 4 per iteration:

        for(int j = 0; j < height; j += 4, data += width * 4 * 4)
        {
            for(int i = 0; i < width; i += 4)
            {

    The trouble I'm having is the use of the index. I can't seem to find a way to properly fit this into the parallel_for_each loop. Using an index of rank 2 is the way to go, but manipulating it via branching will harm the performance gain. I found a similar post: Controlling the index variables in C++ AMP. It also deals with index manipulation, but the increment aspect doesn't cover my issue. With kind regards, Forcecast
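
    One hedged way around the stepping, assuming height and width are multiples of 4: shrink the compute domain by the step so each thread owns one 4x4 block, then scale the index back up inside the kernel; no branching on the index is needed.

        // a sketch only; the real kernel would capture an array_view over data
        using namespace concurrency;

        extent<2> blocks(height / 4, width / 4);
        parallel_for_each(blocks, [=](index<2> idx) restrict(amp)
        {
            int j = idx[0] * 4;   // same values the outer loop produced
            int i = idx[1] * 4;   // same values the inner loop produced
            // ... process the 4x4 block at (i, j), computing the data
            //     offset as j * width * 4 instead of advancing a pointer
        });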

    Read the article

  • Is there a set based solution for this problem?

    - by NYSystemsAnalyst
    We have a table set up as follows:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|2.0  |
        |2 |2         |2/12/2010|Vacation Earned|3.0  |
        |3 |1         |2/4/2010 |Vacation Used  |1.0  |
        |4 |2         |5/18/2010|Vacation Earned|2.0  |
        |5 |2         |7/23/2010|Vacation Used  |4.0  |

    The business rules are:

    1. Vacation balance is calculated as vacation earned minus vacation used.
    2. Vacation used is always applied against the oldest vacation earned amount first.

    We need to return the rows for vacation earned that have not been offset by vacation used. If vacation used has only offset part of a vacation earned record, we need to return that record showing the difference. For example, using the above table, the result set would look like:

        |ID|EmployeeID|Date     |Category       |Hours|
        |1 |1         |1/1/2010 |Vacation Earned|1.0  |
        |4 |2         |5/18/2010|Vacation Earned|1.0  |

    Note that record 2 was eliminated because it was completely offset by used time, but records 1 and 4 were only partially used, so they were calculated and returned as such. The only way we have thought of to do this is to get all of the vacation earned records into a temporary table, then get the total vacation used and loop through the temporary table, deleting the oldest record and subtracting its value from the total vacation used until the total vacation used is zero. We could clean that up for the case where the remaining vacation used covers only part of the oldest vacation earned record. This would leave us with just the outstanding vacation earned records. It works, but it is very inefficient and performs poorly, and the performance will only degrade over time as more and more records are added. Are there any suggestions for a better solution, preferably set based? If not, we'll just have to go with this.
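
    A set-based approach does exist for this first-in-first-out pattern: compare each earned row's running total of earned hours against the employee's total used hours. A hedged T-SQL sketch (the table name VacationLog is invented):

        WITH earned AS (
            SELECT ID, EmployeeID, [Date], Hours,
                   (SELECT SUM(e2.Hours)
                      FROM VacationLog e2
                     WHERE e2.EmployeeID = e.EmployeeID
                       AND e2.Category = 'Vacation Earned'
                       AND e2.[Date] <= e.[Date]) AS RunningEarned
              FROM VacationLog e
             WHERE e.Category = 'Vacation Earned'
        ),
        used AS (
            SELECT EmployeeID, SUM(Hours) AS TotalUsed
              FROM VacationLog
             WHERE Category = 'Vacation Used'
             GROUP BY EmployeeID
        )
        SELECT e.ID, e.EmployeeID, e.[Date], 'Vacation Earned' AS Category,
               CASE WHEN e.RunningEarned - COALESCE(u.TotalUsed, 0) < e.Hours
                    THEN e.RunningEarned - COALESCE(u.TotalUsed, 0)
                    ELSE e.Hours END AS Hours
          FROM earned e
          LEFT JOIN used u ON u.EmployeeID = e.EmployeeID
         WHERE e.RunningEarned - COALESCE(u.TotalUsed, 0) > 0;

    Rows whose running total never exceeds the used total drop out entirely; the first row that crosses it is returned with just the difference, which reproduces the sample result set above.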

    Read the article
