Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.

  • WCF with MANY database connections

    - by Jorge Dominguez
    I'm working on the development of an ERP-type .NET WinForms application consuming a WCF service. It's to be used by many small companies (in the range of 100-200). The database is SQL Server 2008 and the service will be hosted as a Windows service. Even though there will be a single DB server, our customer insists on having separate databases for each company. That is because of stability/support concerns (like a DB being damaged or taken offline for some reason, thus affecting all clients), concerns coming from previous experiences (not necessarily with the same platform). With a single database, connections to the DB would be opened at service startup and pooling used, but I'm not sure how connections could be managed in a multiple-DB scenario: Could a connection to the corresponding DB be opened and closed for each service request? Would performance be acceptable? If a connection is opened and maintained for each company accessing the system, what's the practical limit of open connections (to different databases)? It would be very interesting to hear your opinions and suggestions for this situation. Thanks
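
    A side note on pooling: ADO.NET keeps a separate pool per distinct connection string, so opening and closing per request can still reuse physical connections. A minimal sketch of that per-database pooling idea, written in Python with hypothetical names (the driver, class names and pool sizes are placeholders, not the asker's actual stack):

        import queue

        class PerCompanyPools:
            """Per-database pool keyed by company id (names hypothetical)."""
            def __init__(self, conn_factory, max_idle_per_db=5):
                self._factory = conn_factory  # callable: company_id -> new connection
                self._pools = {}              # company_id -> queue of idle connections
                self._max = max_idle_per_db

            def acquire(self, company_id):
                pool = self._pools.setdefault(company_id, queue.Queue(self._max))
                try:
                    return pool.get_nowait()          # reuse an idle connection
                except queue.Empty:
                    return self._factory(company_id)  # or open a fresh one

            def release(self, company_id, conn):
                pool = self._pools.setdefault(company_id, queue.Queue(self._max))
                try:
                    pool.put_nowait(conn)  # keep for reuse
                except queue.Full:
                    conn.close()           # pool already full: drop the extra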

    Read the article

  • How to do "map chunks", like terraria or minecraft maps?

    - by O'poil
    Due to performance issues, I have to cut my maps into chunks. I manage the maps this way: listMap[x][y] = new Tile(x, y); I tried in vain to cut this list into several chunks to avoid loading the whole map, because the fps are not very high with a large map. And yet, when I update or draw I only do it for a small tile range. Here is how I proceed:

        foreach (List<Tile> list in listMap)
        {
            foreach (Tile leTile in list)
            {
                if ((leTile.Position.X < screenWidth + hero.Pos.X)
                    && (leTile.Position.X > hero.Pos.X - tileSize)
                    && (leTile.Position.Y < screenHeight + hero.Pos.Y)
                    && (leTile.Position.Y > hero.Pos.Y - tileSize))
                {
                    leTile.Draw(spriteBatch, gameTime);
                }
            }
        }

    (and the same thing for the Update method). So I am trying to learn from games like Minecraft or Terraria, both of which manage maps much larger than mine without the slightest drop in fps. And apparently, they load "chunks". What I would like to understand is how to cut my list into chunks, and how to display them depending on the position of my character. I have tried many things without success. Thank you in advance for putting me on the right track! Ps: Again, sorry for my English :'( Pps: I'm not an experienced developer ;)
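
    A sketch of the chunking idea, in Python for brevity (the chunk size and all names are assumptions): partition the tile grid once up front, then each frame convert the camera rectangle into a chunk index range and touch only those chunks.

        CHUNK = 16  # tiles per chunk side (assumption)

        def build_chunks(tiles):
            """tiles[x][y] -> dict mapping (cx, cy) to the tiles in that chunk."""
            w, h = len(tiles), len(tiles[0])
            chunks = {}
            for cx in range((w + CHUNK - 1) // CHUNK):
                for cy in range((h + CHUNK - 1) // CHUNK):
                    chunks[(cx, cy)] = [tiles[x][y]
                                        for x in range(cx * CHUNK, min((cx + 1) * CHUNK, w))
                                        for y in range(cy * CHUNK, min((cy + 1) * CHUNK, h))]
            return chunks

        def visible_chunks(chunks, cam_x, cam_y, screen_w, screen_h, tile_size):
            """Yield only the chunks overlapping the camera rectangle (pixels)."""
            span = CHUNK * tile_size  # chunk size in pixels
            for cx in range(cam_x // span, (cam_x + screen_w) // span + 1):
                for cy in range(cam_y // span, (cam_y + screen_h) // span + 1):
                    if (cx, cy) in chunks:
                        yield chunks[(cx, cy)]

    The draw loop then iterates visible_chunks(...) each frame instead of every tile on the map.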

    Read the article

  • How to model in J2EE / JEE?

    - by Harry
    Let's say I have decided to go with the J(2)EE stack for my enterprise application. Now, for domain modelling (or: for designing the M of MVC), which APIs can I safely assume and use, and which should I stay away from... say, via a layer of abstraction? For example: Should I go ahead and litter my Model with calls to the Hibernate/JPA API? Or should I build an abstraction... a persistence layer of my own to avoid hard-coding against these two specific persistence APIs? Why I ask this: A few years ago, there was the Kodo API, which was superseded by Hibernate. If one had designed a persistence layer and coded the rest of the model against this layer (instead of littering the Model with calls to a specific vendor API), it would have allowed one to (relatively) easily switch from Kodo to Hibernate to xyz. Is it recommended to make aggressive use of the *QL provided by your persistence vendor in your domain model? I'm not aware of any real-world issues (like performance, scalability, portability, etc.) arising out of heavy use of an HQL-like language. Why I ask this: I would like to avoid, as much as possible, writing custom code when the same could be accomplished via a query language that is more portable than SQL. Sorry, but I'm a complete newbie to this area. Where could I find more info on this topic? Many thanks in advance. /HS
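
    For the abstraction question, the usual shape is a repository interface owned by the domain, with one adapter per persistence API. A minimal sketch in Python (the domain type and the session calls are hypothetical placeholders, not a real JPA API):

        from abc import ABC, abstractmethod

        class OrderRepository(ABC):  # hypothetical domain type
            @abstractmethod
            def find_by_id(self, order_id): ...

            @abstractmethod
            def save(self, order): ...

        class JpaOrderRepository(OrderRepository):
            # One adapter per vendor API: swapping Kodo/Hibernate/JPA means
            # writing another subclass, not touching the domain model.
            def __init__(self, session):
                self._session = session  # placeholder for the vendor session

            def find_by_id(self, order_id):
                return self._session.get("Order", order_id)  # placeholder call

            def save(self, order):
                self._session.persist(order)  # placeholder call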

    Read the article

  • What's the most trivial function that would benefit from being computed on a GPU?

    - by hanDerPeder
    Hi. I'm just starting out learning OpenCL. I'm trying to get a feel for what performance gains to expect when moving functions/algorithms to the GPU. The most basic kernel given in most tutorials is a kernel that takes two arrays of numbers, sums the values at the corresponding indexes, and writes them to a third array, like so:

        __kernel void add(__global float *a, __global float *b, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = a[gid] + b[gid];
        }

        __kernel void sub(__global float *n, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = n[gid] - 2;
        }

        __kernel void ranksort(__global const float *a, __global float *answer)
        {
            int gid = get_global_id(0);
            int gSize = get_global_size(0);
            int x = 0;
            for (int i = 0; i < gSize; i++) {
                if (a[gid] > a[i])
                    x++;
            }
            answer[x] = a[gid];
        }

    I am assuming that you could never justify computing this on the GPU; the memory transfer would outweigh the time it would take to compute this on the CPU by orders of magnitude (I might be wrong about this, hence this question). What I am wondering is: what would be the most trivial example where you would expect a significant speedup when using an OpenCL kernel instead of the CPU?
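
    A back-of-envelope way to see why the element-wise add loses: compare the PCIe transfer time against the CPU loop. A rough worked example in Python (all throughput numbers are assumptions, round figures only):

        N = 10_000_000              # floats per array
        bytes_moved = 3 * N * 4     # two inputs + one output, 4 bytes per float
        pcie_bw = 8e9               # ~8 GB/s effective transfer rate (assumption)
        cpu_rate = 4e9              # ~4e9 simple adds/s on one core (assumption)

        transfer_ms = bytes_moved / pcie_bw * 1e3
        cpu_ms = N / cpu_rate * 1e3
        print(f"transfer ~{transfer_ms:.1f} ms vs CPU add ~{cpu_ms:.1f} ms")
        # With these round numbers the transfer alone costs ~6x the CPU loop,
        # which is why a bare element-wise add rarely wins on the GPU.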

    Read the article

  • Is it acceptable to wrap PHP library functions solely to change the names?

    - by Carson Myers
    I'm going to be starting a fairly large PHP application this summer, on which I'll be the sole developer (so I don't have any coding conventions to conform to aside from my own). PHP 5.3 is a decent language IMO, despite the stupid namespace token. But one thing that has always bothered me about it is the standard library and its lack of a naming convention. So I'm curious, would it be seriously bad practice to wrap some of the most common standard library functions in my own functions/classes to make the names a little better? I suppose it could also add or modify some functionality in some cases, although at the moment I don't have any examples (I figure I will find ways to make them OO or make them work a little differently while I am working). If you saw a PHP developer do this, would you think "Man, this is one shoddy developer"? Additionally, I don't know much (or anything) about if/how PHP is optimized, and I know that usually PHP performance doesn't matter. But would doing something like this have a noticeable impact on the performance of my application?

    Read the article

  • How much does an InnoDB table benefit from having fixed-length rows?

    - by Philip Eve
    I know that, depending on the database storage engine in use, a performance benefit can be found if all of the rows in the table can be guaranteed to be the same length (by avoiding nullable columns and not using any VARCHAR, TEXT or BLOB columns). I'm not clear on how far this applies to InnoDB, with its funny table arrangements. Let's give an example: I have the following table

        CREATE TABLE `PlayerGameRcd` (
            `User` SMALLINT UNSIGNED NOT NULL,
            `Game` MEDIUMINT UNSIGNED NOT NULL,
            `GameResult` ENUM('Quit', 'Kicked by Vote', 'Kicked by Admin',
                              'Kicked by System', 'Finished 5th', 'Finished 4th',
                              'Finished 3rd', 'Finished 2nd', 'Finished 1st',
                              'Game Aborted', 'Playing', 'Hide')
                NOT NULL DEFAULT 'Playing',
            `Inherited` TINYINT NOT NULL,
            `GameCounts` TINYINT NOT NULL,
            `Colour` TINYINT UNSIGNED NOT NULL,
            `Score` SMALLINT UNSIGNED NOT NULL DEFAULT 0,
            `NumLongTurns` TINYINT UNSIGNED NOT NULL DEFAULT 0,
            `Notes` MEDIUMTEXT,
            `CurrentOccupant` TINYINT UNSIGNED NOT NULL DEFAULT 0,
            PRIMARY KEY (`Game`, `User`),
            UNIQUE KEY `PGR_multi_uk` (`Game`, `CurrentOccupant`, `Colour`),
            INDEX `Stats_ind_PGR` (`GameCounts`, `GameResult`, `Score`, `User`),
            INDEX `GameList_ind_PGR` (`User`, `CurrentOccupant`, `Game`, `Colour`),
            CONSTRAINT `Constr_PlayerGameRcd_User_fk`
                FOREIGN KEY `User_fk` (`User`) REFERENCES `User` (`UserID`)
                ON DELETE CASCADE ON UPDATE CASCADE,
            CONSTRAINT `Constr_PlayerGameRcd_Game_fk`
                FOREIGN KEY `Game_fk` (`Game`) REFERENCES `Game` (`GameID`)
                ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=INNODB CHARACTER SET utf8 COLLATE utf8_general_ci

    The only column that is nullable is Notes, which is MEDIUMTEXT. This table presently has 33097 rows (which I appreciate is small as yet). Of these rows, only 61 have values in Notes. How much of an improvement might I see from, say, adding a new table to store the Notes column in and performing LEFT JOINs when necessary?
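
    The proposed split itself is straightforward to prototype. A sketch using Python's sqlite3 as a stand-in engine (the row-format effects being asked about are InnoDB-specific, so this only demonstrates the schema split and the LEFT JOIN, with columns trimmed for brevity):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE PlayerGameRcd (
            User  INTEGER NOT NULL,
            Game  INTEGER NOT NULL,
            Score INTEGER NOT NULL DEFAULT 0,
            PRIMARY KEY (Game, User)
        );
        CREATE TABLE PlayerGameNotes (   -- holds only the ~61 rows that have notes
            User  INTEGER NOT NULL,
            Game  INTEGER NOT NULL,
            Notes TEXT NOT NULL,
            PRIMARY KEY (Game, User)
        );
        """)
        db.execute("INSERT INTO PlayerGameRcd VALUES (1, 1, 10)")
        db.execute("INSERT INTO PlayerGameNotes VALUES (1, 1, 'left early')")
        rows = db.execute("""
            SELECT r.User, r.Game, r.Score, n.Notes
            FROM PlayerGameRcd r
            LEFT JOIN PlayerGameNotes n ON n.Game = r.Game AND n.User = r.User
        """).fetchall()
        print(rows)  # Notes comes back NULL for rows without a note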

    Read the article

  • Ruby integer to string key

    - by Gene
    A system I'm building needs to convert non-negative Ruby integers into shortest-possible UTF-8 string values. The only requirement on the strings is that their lexicographic order be identical to the natural order on integers. What's the best Ruby way to do this? We can assume the integers are 32 bits and the sign bit is 0. This is successful:

        (i >> 24).chr + ((i >> 16) & 0xff).chr + ((i >> 8) & 0xff).chr + (i & 0xff).chr

    But it appears to be 1) garbage-intense and 2) ugly. I've also looked at pack solutions, but these don't seem portable due to byte order. FWIW, the application is Redis hash field names. Building keys may be a performance bottleneck, but probably not. This question is mostly about the "Ruby way".
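
    One note on the byte-order worry: pack directives with an explicit width and endianness, such as Ruby's "N" (32-bit big-endian, network order), do not depend on the host. A quick check of the ordering property, shown in Python for illustration:

        import struct

        # Big-endian fixed-width encoding preserves numeric order
        # lexicographically, independent of host byte order.
        keys = [struct.pack(">I", i) for i in (0, 1, 255, 256, 2**31 - 1)]
        assert keys == sorted(keys)  # lexicographic order == numeric order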

    Read the article

  • What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
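
    One standard workaround, since the mainstream Merkle-Damgård hashes (MD5, SHA-1, SHA-2) are sequential by construction, is a hash tree: hash fixed-size blocks in parallel, then hash the concatenated block digests. A sketch in Python (the block size is an assumption, and the result is a different digest than hashing the file in one pass, so both sides must agree on the construction):

        import hashlib
        from concurrent.futures import ProcessPoolExecutor

        CHUNK = 8 * 1024 * 1024  # 8 MiB blocks (tunable assumption)

        def _hash_block(block):
            return hashlib.sha256(block).digest()

        def tree_hash(path):
            # A production version would bound the in-flight blocks; map()
            # here submits them all up front, so memory grows with file size.
            with open(path, "rb") as f, ProcessPoolExecutor() as pool:
                blocks = iter(lambda: f.read(CHUNK), b"")
                digests = pool.map(_hash_block, blocks)  # hashed in parallel
                top = hashlib.sha256()
                for d in digests:
                    top.update(d)  # root = hash of concatenated block digests
                return top.hexdigest()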

    Read the article

  • Data structure choices for high-speed, memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways: I have a function that takes in a string. If this function has never seen this string before, it needs to perform some processing. If the function has seen the string before, it needs to skip processing. After a specified amount of time, the function should accept duplicate strings again. This function may be called thousands of times per second, and the string data may be very large. This is a highly abstracted explanation of the real application, just trying to get down to the core concept for the purpose of the question. The function will need to store state in order to detect duplicates. It also will need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough. The naive implementation would be simply (in C#): Dictionary<String,DateTime> though in the interest of lowering the memory footprint and potentially increasing performance I'm evaluating custom data structures to handle this instead of a basic hashtable. So, given these constraints, what would you use? EDIT, some additional information that might change proposed implementations: 99% of the strings will not be duplicates. Almost all of the duplicates will arrive back to back, or nearly sequentially. In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
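
    A minimal sketch of the dictionary-of-digests approach described above, in Python (TTL handling and synchronization included; all names are illustrative):

        import hashlib
        import threading
        import time

        class ExpiringSeenSet:
            """Store 32-byte SHA-256 digests with a last-seen time; a lock
            covers multi-threaded callers. (A real version would also purge
            expired entries periodically to cap memory.)"""
            def __init__(self, ttl_seconds):
                self._ttl = ttl_seconds
                self._seen = {}  # digest -> last-seen monotonic time
                self._lock = threading.Lock()

            def is_duplicate(self, s):
                digest = hashlib.sha256(s.encode()).digest()
                now = time.monotonic()
                with self._lock:
                    last = self._seen.get(digest)
                    self._seen[digest] = now
                    return last is not None and now - last < self._ttl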

    Read the article

  • Slow T-SQL query, convert to LINQ to Object

    - by yimbot
    I have a T-SQL query which populates a DataSet from an MSSQL database:

        string qry = "SELECT * FROM EvnLog AS E WHERE TimeDate = (SELECT MAX(TimeDate) From EvnLog WHERE Code = E.Code) AND (Event = 8) AND (TimeDate BETWEEN '" + Start + "' AND '" + Finish + "')";

    The database is quite large and, being the type of nested query it is, the DataAdapter can take a number of minutes to fill the DataSet. I have extended the DataAdapter's timeout value to 480 seconds to combat it, but the client still complains about slow performance and occasional timeouts. To combat this, I was considering executing a simpler query (i.e. just taking the date range) and then populating a generic List which I could then execute a LINQ query against. The simple query executes very quickly, which is great. However, I cannot seem to build a LINQ query which generates the same results as the T-SQL query above. Is this the best solution to this problem? Can anyone provide tips on rewriting the above T-SQL into LINQ? I have also considered using a DataView, but cannot seem to get the results from that either.
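
    The query's logic, restated as plain in-memory operations (shown in Python; the field names are taken from the SQL, everything else is an assumption): keep the max-TimeDate row per Code in one pass, then filter. The same two-step shape should translate to LINQ to Objects. Note one difference from the SQL: on tied timestamps this keeps a single row.

        def latest_event8_per_code(rows, start, finish):
            latest = {}  # Code -> the row carrying MAX(TimeDate) for that code
            for r in rows:
                cur = latest.get(r["Code"])
                if cur is None or r["TimeDate"] > cur["TimeDate"]:
                    latest[r["Code"]] = r
            return [r for r in latest.values()
                    if r["Event"] == 8 and start <= r["TimeDate"] <= finish]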

    Read the article

  • What collection object is appropriate for fixed ordering of values?

    - by makerofthings7
    Scenario: I am tracking several performance counters and have a CounterDescription[] that correlates to a DataSnapshot[]... where CounterDescription[n] describes the data loaded within DataSnapshot[n]. I want to expose an easy-to-use API within C# that will allow for the easy and efficient expansion of the arrays. For example:

        CounterDescription[0] = Humidity;
        DataSnapshot[0] = .9;
        CounterDescription[1] = Temp;
        DataSnapshot[1] = 63;

    My upload object is defined like this (note how my intent is to correlate many DataSnapshots with a datetime reference, using the offset of the data to refer to its meaning; this was determined to be the most efficient way to store the data on the back-end, and has now reflected itself in the following structure):

        public class myDataObject
        {
            [DataMember]
            public SortedDictionary<DateTime, float[]> Pages { get; set; }

            /// <summary>
            /// An array that identifies what each position in the array is supposed to be
            /// </summary>
            [DataMember]
            public CounterDescription[] Counters { get; set; }
        }

    I will need to expand each of these arrays (float[] and CounterDescription[]), but whatever data already exists must stay at the same relative offset. Which .NET objects support this? I think Array[], LinkedList<T>, and List<T> are able to keep the data fixed in the right locations. What do you think?

    Read the article

  • Inline form editing on client side

    - by bykasif
    I see some web sites use dynamic forms (I am not sure what to call them!) to edit a group of data. For example: there is a group of data such as name, last name, city, country, etc. When the user clicks an EDIT button, instead of doing a postback, a form consisting of 2 textboxes + 2 comboboxes dynamically opens for editing, and then when you click the Save button, the edit form disappears and all data updates. Now, I know what happens here is Ajax for the server calls and some JavaScript for DOM manipulation. I even found some jQuery plugins for textbox editing. However, I could not find anything for a full implementation of form fields. Therefore I have implemented it in ASP.NET with jQuery Ajax calls and manual DOM manipulation. Here is my process: 1) When the Edit button is clicked: make an Ajax call to the server to retrieve the necessary formedit.aspx. 2) It returns editable form fields with values assigned. 3) When the Save button is clicked: make an Ajax call to the server to retrieve the formupdateprocess.aspx page. It basically does the database updates and then returns the necessary DOM snippet (...) to insert into the current page. Well, it works, but MY PROBLEM is performance. The result seems slower than samples I see on other sites :(( Is there anything that I don't know? A better way to implement this?

    Read the article

  • Slow query. Wrong database structure?

    - by Tin
    I have a database with a table that contains tasks. Tasks have a lifecycle, and the status of a task's lifecycle can change. These state transitions are stored in a separate table, tasktransitions. Now I wrote a query to find all open/reopened tasks and recently changed tasks, but I already see with a rather small number of tasks (<1000) that execution time is becoming very long (0.5s).

    Tasks:

        +-------------+---------+------+-----+---------+----------------+
        | Field       | Type    | Null | Key | Default | Extra          |
        +-------------+---------+------+-----+---------+----------------+
        | taskid      | int(11) | NO   | PRI | NULL    | auto_increment |
        | description | text    | NO   |     | NULL    |                |
        +-------------+---------+------+-----+---------+----------------+

    Tasktransitions:

        +------------------+-----------+------+-----+-------------------+----------------+
        | Field            | Type      | Null | Key | Default           | Extra          |
        +------------------+-----------+------+-----+-------------------+----------------+
        | tasktransitionid | int(11)   | NO   | PRI | NULL              | auto_increment |
        | taskid           | int(11)   | NO   | MUL | NULL              |                |
        | status           | int(11)   | NO   | MUL | NULL              |                |
        | description      | text      | NO   |     | NULL              |                |
        | userid           | int(11)   | NO   |     | NULL              |                |
        | transitiondate   | timestamp | NO   |     | CURRENT_TIMESTAMP |                |
        +------------------+-----------+------+-----+-------------------+----------------+

    Query:

        SELECT tasks.taskid, tasks.description, tasklaststatus.status
        FROM tasks
        LEFT OUTER JOIN (
            SELECT tasktransitions.taskid, tasktransitions.transitiondate, tasktransitions.status
            FROM tasktransitions
            INNER JOIN (
                SELECT taskid, MAX(transitiondate) AS lasttransitiondate
                FROM tasktransitions
                GROUP BY taskid
            ) AS tasklasttransition
                ON tasklasttransition.lasttransitiondate = tasktransitions.transitiondate
                AND tasklasttransition.taskid = tasktransitions.taskid
        ) AS tasklaststatus ON tasklaststatus.taskid = tasks.taskid
        WHERE tasklaststatus.status IS NULL
           OR tasklaststatus.status = 0
           OR tasklaststatus.transitiondate > '2013-09-01';

    I'm wondering if the database structure is the best choice performance-wise. Could adding indexes help? I already tried to add some but I don't see great improvements:

        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+------------+
        | Table           | Non_unique | Key_name       | Seq_in_index | Column_name      | Collation | Cardinality | Index_type |
        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+------------+
        | tasktransitions | 0          | PRIMARY        | 1            | tasktransitionid | A         | 896         | BTREE      |
        | tasktransitions | 1          | taskid_date_ix | 1            | taskid           | A         | 896         | BTREE      |
        | tasktransitions | 1          | taskid_date_ix | 2            | transitiondate   | A         | 896         | BTREE      |
        | tasktransitions | 1          | status_ix      | 1            | status           | A         | 3           | BTREE      |
        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+------------+

    Any other suggestions?

    Read the article

  • How to control the "flow" of an ASP.NET MVC (3.0) web app that relies on Facebook membership, with Facebook C# SDK?

    - by Chad
    I want to totally remove the standard ASP.NET membership system and use Facebook only for my web app's membership. Note, this is not a Facebook canvas app question. Typically, in an ASP.NET app you have some key properties & methods to control the "flow" of an app. Notably: Request.IsAuthenticated, [Authorize] (in MVC apps), Membership.GetUser() and Roles.IsUserInRole(), among others. It looks like [FacebookAuthorize] is equivalent to [Authorize]. Also, there's some standard work I do across all controllers in my site. So I built a BaseController that overrides OnActionExecuting(FilterContext). Typically, I populate ViewData with the user's profile within this action. Would performance suffer if I made a call to fbApp.Get("me") in this action? I use the Facebook Javascript SDK to do registration, which is nice and easy. But that's all client-side, and I'm having a hard time wrapping my mind around when to use client-side facebook calls versus server-side. There will be a point when I need to grab the user's facebook uid and store it in a "profile" table along with a few other bits of data. That would probably be best handled on the return url from the registration plugin... correct? On a side note, what data is returned from fbApp.Get("me")?

    Read the article

  • C++ Virtual Methods for Class-Specific Attributes or External Structure

    - by acanaday
    I have a set of classes which are all derived from a common base class. I want to use these classes polymorphically. The interface defines a set of getter methods whose return values are constant across a given derived class, but vary from one derived class to another. e.g.:

        enum AVal { A_VAL_ONE, A_VAL_TWO, A_VAL_THREE };
        enum BVal { B_VAL_ONE, B_VAL_TWO, B_VAL_THREE };

        class Base {
            //...
            virtual AVal getAVal() const = 0;
            virtual BVal getBVal() const = 0;
            //...
        };

        class One : public Base {
            //...
            AVal getAVal() const { return A_VAL_ONE; }
            BVal getBVal() const { return B_VAL_ONE; }
            //...
        };

        class Two : public Base {
            //...
            AVal getAVal() const { return A_VAL_TWO; }
            BVal getBVal() const { return B_VAL_TWO; }
            //...
        };

    etc. Is this a common way of doing things? If performance is an important consideration, would I be better off pulling the attributes out into an external structure, e.g.:

        struct Vals {
            AVal a_val;
            BVal b_val;
        };

    storing a Vals* in each instance, and rewriting Base as follows?

        class Base {
            //...
        public:
            AVal getAVal() const { return _vals->a_val; }
            BVal getBVal() const { return _vals->b_val; }
            //...
        private:
            Vals* _vals;
        };

    Is the extra dereference essentially the same as the vtable lookup? What is the established idiom for this type of situation? Are both of these solutions dumb? Any insights are greatly appreciated.

    Read the article

  • Freetype2 (error-)return value documentation

    - by Awaki
    In short, I'm looking for documentation that would limit the error situations to check for after a Freetype library function failed, much like the OpenGL and Win32 APIs document the error codes generated by their respective functions. I can't seem to find such documentation though, so I was wondering how to best handle translation of Freetype errors to typed exceptions. Background: I am currently in the process of implementing font-rendering capability (using Freetype) for my GUI framework, which makes strong use of typed exceptions to indicate error situations. However, the Freetype docs seem to completely omit what errors can be expected from what functions. That, if such documentation does indeed not exist, would basically leave me with two options: either guessing which errors make sense for a certain Freetype function (obviously prone to mistakes on my part), or considering every error code for translation into appropriate exceptions (less verbose since I would have to write the translation only once). Performance isn't really critical in the code that calls the Freetype library, so even the latter option would probably be acceptable, but surely there must be some kind of documentation on which library calls may return what Freetype error? Is there any such documentation which I just somehow managed to not find? Should I go the route of generically expecting every error code for translation? Or are there other ways to approach this problem? By the way, I wanted to avoid introducing some kind of generic FreetypeException (containing a description of the Freetype error) since I intended to completely hide what libraries I'm using (not from a legal point-of-view, mind you), but I guess I can be convinced to do this anyway if the consensus is that it would be the best option. I don't think it matters for this question, but I'm writing in C++.
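
    A sketch of the generic translate-every-code route, in Python for brevity (all codes and exception names here are hypothetical placeholders, not real FreeType constants):

        # Map each error code to a typed exception once; unknown codes fall
        # back to the base class, so no call site needs per-function knowledge.
        class FontError(Exception): pass
        class GlyphNotFound(FontError): pass
        class BadFontFile(FontError): pass

        ERROR_MAP = {
            0x10: GlyphNotFound,  # hypothetical code
            0x02: BadFontFile,    # hypothetical code
        }

        def raise_for(code, context=""):
            """Call after every library function; 0 means success."""
            if code == 0:
                return
            exc = ERROR_MAP.get(code, FontError)
            raise exc(f"freetype error 0x{code:02x} {context}".strip())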

    Read the article

  • Best way of showing more results with javascript/css

    - by Ricardo Neves
    I'm developing a website and I'm having trouble showing search results to the user the way I want. Basically, after the user searches, the page makes a couple of Ajax requests, and as soon as a response arrives it appends the info to a specific element on my page. Each result is shown as a line... The problem is that in most cases there are going to be more than 1000 results, and this would give the page a very long scroll. My idea was to show only the first 15 results, and when the user clicks "show more" the element would expand and show the next 15 results, and so on... This would be easier to do if the website wasn't responsive, but because it is, I can't find the proper way of implementing what I want without lowering the website's performance. I have 2 ideas: The first is by using something like #element .div:nth-child(-n+15) in my CSS and figuring out a way of changing the "15" to how many results I want to show... I don't know if this can be done. Is it possible to call CSS rules with parameters? Maybe with LESS? The second option is probably a bad one if I don't want to lower the website's performance: using JavaScript I would check if there is a specific CSS class (like .show-15, .show-30, .show-45) and add that class to my element, and if it doesn't exist, create it somehow... Any help would be appreciated.

    Read the article

  • C++ design question, container of instances and pointers

    - by Tom
    Hi all, I'm wondering something. I have a class Polygon, which composes a vector of Line (another class):

        class Polygon {
            std::vector<Line> lines;
        public:
            const_iterator begin() const;
            const_iterator end() const;
        };

    On the other hand, I have a function that calculates a vector of pointers to lines and, based on those lines, should return a pointer to a Polygon:

        Polygon* foo(Polygon& p) {
            std::vector<Line> lines = bar(p.begin(), p.end());
            return new Polygon(lines);
        }

    Here's the question: I can always add a Polygon(vector<Line>) constructor, but is there a better way than dereferencing each element of the vector and assigning it to the existing vector container?

        // for each line in vector<Line*> v
        // vcopy is an instance of vector<Line>
        vcopy.push_back(*(v.at(i)));

    I think not, but I don't really like that approach. Hopefully, I will be able to convince the author of the class to change it, but I can't base my coding right now on that fact (and I'm scared of a performance hit). Thanks in advance.

    Read the article

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is, I'm getting to the point where there are just too many records, my insert/update is taking too long, and my session is getting killed by the governor. Here's pseudocode for my update process:

        sqlsel := 'SELECT col1, col2, col3, sysdate
                   FROM table2@remote_location dpi
                   WHERE (col1, col2, col3) IN
                       ( SELECT col1, col2, col3
                         FROM table2@remote_location
                         MINUS
                         SELECT DISTINCT col1, col2, col3
                         FROM table1 mpc
                         WHERE facility = '''||load_facility||''' )';

        EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1;

    I've tried the MERGE statement:

        MERGE INTO table1 t1
        USING ( SELECT col1, col2, col3 FROM table2@remote_location ) t2
        ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 )
        WHEN NOT MATCHED THEN
            INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
            VALUES (t2.col1, t2.col2, t2.col3, sysdate);

    But there seems to be a confirmed bug on versions prior to Oracle 10.2.0.4 in the MERGE statement when merging against a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or to write it another way, to get the best performance? Thanks.
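
    One more variant worth benchmarking is an anti-join (NOT EXISTS) insert, which avoids both MINUS and MERGE. A sketch using Python's sqlite3 purely to show the statement shape (the remote table and dblink are simulated with local tables; names are placeholders):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE remote_t (col1, col2, col3);
        CREATE TABLE hist_t   (col1, col2, col3, update_dttm);
        INSERT INTO remote_t VALUES (1, 'a', 'x'), (2, 'b', 'y');
        INSERT INTO hist_t   VALUES (1, 'a', 'x', '2010-01-01');
        """)
        # Anti-join instead of MINUS/MERGE: insert only keys the history lacks.
        db.execute("""
            INSERT INTO hist_t (col1, col2, col3, update_dttm)
            SELECT r.col1, r.col2, r.col3, datetime('now')
            FROM remote_t r
            WHERE NOT EXISTS (SELECT 1 FROM hist_t h
                              WHERE h.col1 = r.col1
                                AND h.col2 = r.col2
                                AND h.col3 = r.col3)
        """)
        print(db.execute("SELECT * FROM hist_t").fetchall())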

    Read the article

  • Using an embedded DB (SQLite / SQL Compact) for Message Passing within an app?

    - by wk1989
    Hello, just out of curiosity: for applications that have a fairly complicated module tree, would something like SQLite / SQL Compact Edition work well for message passing? So if I have modules containing data such as \SubsystemA\SubSubSysB\ModuleB\ModuleDataC and \SubSystemB\SubSubSystemC\ModuleA\ModuleDataX — using traditional message passing/routing, you have to go through intermediate modules in order to pass a message to ModuleB to request, say, ModuleDataC. Instead of doing that, if we simply store "\SubsystemA\SubSubSysB\ModuleB\ModuleDataC" in an SQLite database, getting that data is as simple as a SQL query and needs no routing and passing things around. Has anyone done this before? Even if you haven't, do you foresee any issues or performance impact? The only concern I have right now would be the passing of custom types, e.g. if ModuleDataC is a custom data structure or a pointer, I'll need some way of storing the data structure into the DB or storing the pointer into the DB. Thanks, JW

    EDIT: One usage case I haven't thought about is when you want to send a message from ModuleA to ModuleB to get ModuleB to do something, rather than just getting/setting data. Is it possible to do this using an embedded DB? I believe a callback from the DB would be needed; how feasible is this?
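
    A minimal sketch of the path-keyed idea using Python's sqlite3 (the paths come from the question; the serialization is a placeholder):

        import sqlite3

        db = sqlite3.connect(":memory:")  # or an on-disk file shared by modules
        db.execute("CREATE TABLE module_data (path TEXT PRIMARY KEY, value BLOB)")

        # Writer: ModuleB publishes its data under its path.
        db.execute("INSERT OR REPLACE INTO module_data VALUES (?, ?)",
                   (r"\SubsystemA\SubSubSysB\ModuleB\ModuleDataC", b"serialized-bytes"))

        # Reader: any module fetches by path, no routing through intermediates.
        row = db.execute("SELECT value FROM module_data WHERE path = ?",
                         (r"\SubsystemA\SubSubSysB\ModuleB\ModuleDataC",)).fetchone()
        print(row)
        # Custom types need serialization before storage; raw pointers cannot
        # meaningfully cross this boundary.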

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I’m working on optimizing some of my queries and I have a query that states:

        "select * from SC where c_id = " + c_id

    The schema of SC looks like this:

        SC (
            c_id int not null,
            date_start date not null,
            date_stop date not null,
            r_t_id int not null,
            nt int,
            t_p decimal,
            PRIMARY KEY (c_id, r_t_id, date_start, date_stop)
        );

    My immediate bid on how the index should be created is a covering index in this order:

        INDEX (c_id, date_start, date_stop, nt, r_t_id, t_p)

    The reasons for this order are: the WHERE clause filters on c_id, thus making it the first sort column; next, date_start and date_stop specify a sort of “range” defined by these parameters; next, nt because the query selects it; next, r_t_id because it is an ID for a specific type in my r_t table; and last t_p because it is just information. I don’t know if it is at all necessary to order the columns in a specific way when it is a SELECT * statement. I should say that SC is not the biggest table. I can’t say exactly how many rows it contains, but an estimate would be between <10 and 1000. The next thing to add is that other queries insert data into SC, and I know that indexes on tables which see many insertions can be cost-ineffective, but can I somehow find a golden middle way to make this perform well? Don’t know if it makes a difference, but I’m using an IBM DB2 version 9.7 database. Sincerely, Mestika
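
    The covering-index idea is easy to sanity-check on a small engine. A sketch with Python's sqlite3 (DB2's optimizer differs, so this only illustrates the concept; the schema mirrors SC above):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
        CREATE TABLE SC (c_id INT, date_start DATE, date_stop DATE,
                         r_t_id INT, nt INT, t_p REAL);
        -- index contains every selected column, so the table is never touched
        CREATE INDEX sc_cover ON SC (c_id, date_start, date_stop, r_t_id, nt, t_p);
        """)
        plan = db.execute("EXPLAIN QUERY PLAN SELECT * FROM SC WHERE c_id = ?", (1,))
        for row in plan:
            print(row)  # on this engine: SEARCH ... USING COVERING INDEX sc_cover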

    Read the article

  • Array of Arrays - writing to File problem

    - by iFloh
    Hi, and again my array of arrays... I am trying to improve my app's performance by buffering arrays to file for later reuse. I have an NSMutableArray that contains about 30 NSMutableArrays with NSNumber, NSDate and NSString objects. I try to write the file using this call:

        BOOL result = [myArray writeToFile:[fileMethods getFullPath:
                          [NSString stringWithFormat:@"iEts%@.arr", [aDate shortDateString]]]
                                atomically:NO];

    which gives result == FALSE. The path method is:

        + (NSString *) getFullPath:(NSString *)forFileName {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            return [documentsDirectory stringByAppendingPathComponent:forFileName];
        }

    and the aDate call returns a shortDateString in ddMMyy format. An NSLog on the path generation,

        NSLog(@"%@", [fileMethods getFullPath:[NSString stringWithFormat:@"iEts%@.arr", [aDate shortDateString]]]);

    returns: /Users/me/Library/Application Support/iPhone Simulator/User/Applications/86729620-EC1D-4C10-A799-0C638BB27933/Documents/iEts010510.arr

    FURTHER: It must have something to do with the array of arrays, since I also successfully write 3 further simple arrays (containing NSStrings). The array of arrays gets generated using the addObject method. Any ideas what could cause the trouble?

    Read the article

  • cython setup.py gives .o instead of .dll

    - by alok1974
    Hi, I am a newbie to Cython, so pardon me if I am missing something obvious here. I am trying to build C extensions to be used in Python for enhanced performance. I have an fc.py module with a bunch of functions and am trying to generate a .dll through Cython using distutils, running on win64:

        c:\python26\python c:\cythontest\setup.py build_ext --inplace

    I have distutils.cfg in C:\Python26\Lib\distutils. As required, distutils.cfg has the following config settings:

        [build]
        compiler = mingw32

    My setup.py looks like this:

        from distutils.core import setup
        from distutils.extension import Extension
        from Cython.Distutils import build_ext

        ext_modules = [Extension('fc', [r'C:\cythonTest\fc.pyx'])]

        setup(
            name = 'FC Extensions',
            cmdclass = {'build_ext': build_ext},
            ext_modules = ext_modules
        )

    I have the latest version of MinGW for target/host amd64 win64-type builds, and the latest version of Cython for Python 2.6 on win64. Cython does give me an fc.c without errors, only a few warnings about type conversions, which I will handle once I have it right. Further, it produces fc.def and fc.o files instead of giving a .dll. I get no errors. I find in threads that it should create the .so or .dll automatically as required, which is not happening.

    Read the article
