Search Results

Search found 9542 results on 382 pages for 'row'.


  • Multithreading: Communication from Parent thread to child thread

    - by Dennis Nowland
    I have a List of threads, normally 3. Each thread references a WebBrowser control that communicates with the parent control to populate a DataGridView. What I need to do is: when the user clicks the button in a DataGridViewButtonCell, the corresponding data should be sent back to the WebBrowser control in the child thread that originally communicated with the main thread. But when I try to do this I receive the following error message: 'COM object that has been separated from its underlying RCW cannot be used.' My problem is that I cannot figure out how to reference the relevant WebBrowser control. I would appreciate any help that anyone can give me. The language used is C# WinForms, targeting .NET 4.0.

    Code sample: the following code is executed when the user clicks the Start button in the main thread.

        private void StartSubmit(object idx)
        {
            // Method used by the new thread to initialise a 'myBrowser' control.
            // 'myBrowser' is a custom control inherited from WebBrowser which
            // holds details about the function of the object, e.g.:

            // index: an integer value which represents the thread's id
            int index = (int)idx;
            // submitters[index] is an instance of the 'myBrowser' control
            submitters[index] = new myBrowser();
            // the thread's integer id
            submitters[index]._ThreadNum = index;
            // naming convention used: 'browser' + the thread index
            submitters[index].Name = "browser" + index;
            // set the list in the 'myBrowser' class to hold a copy of the list found in the main thread
            submitters[index]._dirs = dirLists[index];
            // suppress any JavaScript errors that may occur in the 'myBrowser' control
            submitters[index].ScriptErrorsSuppressed = true;
            // attach the event handler
            submitters[index].DocumentCompleted += new WebBrowserDocumentCompletedEventHandler(DocumentCompleted);
            // advance to the next unopened address in the DataGridView, then navigate to
            // that address in the 'myBrowser' control
            SetNextDir(submitters[index]);
        }

        private void btnStart_Click(object sender, EventArgs e)
        {
            // used to fill the List<string> for use in each thread
            fillDirs();
            // connections holds the threads that have been opened, 1 to 10 maximum
            for (int n = 0; n < (connections.Length); n++)
            {
                // initialise a new thread with the StartSubmit method, passing parameters
                connections[n] = new Thread(new ParameterizedThreadStart(StartSubmit));
                // naming convention used: 'conn' + the thread index, i.e. 'conn1' to 'conn10'
                connections[n].Name = "conn" + n.ToString();
                // the WebBrowser control needs to run in the single-threaded apartment state
                connections[n].SetApartmentState(ApartmentState.STA);
                // start the thread, passing the thread index
                connections[n].Start(n);
            }
        }

    Once the 'myBrowser' control is fully loaded, I insert form data into web forms found in the web pages it loads, using data entered into rows of the DataGridView. Once a user has entered the relevant details into the different areas of a row, they can click a DataGridViewButtonCell that collects the data entered; that data then has to be sent back to the corresponding 'myBrowser' object found on a child thread. Thank you.
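
    Setting the WinForms/COM specifics aside, the usual cure for "owned by another thread" errors is to never touch the object from a foreign thread at all: keep a mapping from grid row to worker, and post the button's data onto that worker's own queue so the thread that owns the browser applies it itself. A minimal, language-neutral sketch of that pattern (shown in Java for illustration; every name here is hypothetical, not the asker's API):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // Each worker owns its browser-like resource and is the only thread
        // that ever touches it; other threads just enqueue messages for it.
        class BrowserWorker implements Runnable {
            private final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

            void post(String formData) { inbox.add(formData); }  // safe from any thread

            @Override public void run() {
                try {
                    while (true) {
                        String data = inbox.take();   // wait for work
                        applyToBrowser(data);         // runs only on the owning thread
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }

            private void applyToBrowser(String data) { /* fill the web form here */ }
        }

    In the UI layer you would keep a map from row index to BrowserWorker and call post() from the button handler, which is exactly the row-to-thread association the question is missing.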

  • Adding Column to a SQL Server Table

    - by Dinesh Asanka
    Adding a column to a table is a common task for DBAs. You can add a column as a nullable column, or with a default value. But are these two operations similar internally, and which method is optimal? Let us start with an example. I created a database and a table using the following script:

        USE master
        GO
        -- Drop the database if it exists
        IF EXISTS (SELECT 1 FROM SYS.databases WHERE name = 'AddColumn')
        DROP DATABASE AddColumn
        -- Create the database
        CREATE DATABASE AddColumn
        GO
        USE AddColumn
        GO
        -- Drop the table if it exists
        IF EXISTS (SELECT 1 FROM sys.tables WHERE Name = 'ExistingTable')
        DROP TABLE ExistingTable
        GO
        -- Create the table
        CREATE TABLE ExistingTable
        (ID BIGINT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
         DateTime1 DATETIME DEFAULT GETDATE(),
         DateTime2 DATETIME DEFAULT GETDATE(),
         DateTime3 DATETIME DEFAULT GETDATE(),
         DateTime4 DATETIME DEFAULT GETDATE(),
         Gendar CHAR(1) DEFAULT 'M',
         STATUS1 CHAR(1) DEFAULT 'Y')
        GO
        -- Insert 100,000 records with default values
        INSERT INTO ExistingTable DEFAULT VALUES
        GO 100000

    Before adding a column

    Before adding a column, let us look at some details of the database.

        DBCC IND (AddColumn, ExistingTable, 1)

    Running the above query shows 637 pages for the created table.

    Adding a column

    You can add a column to the table with the following statement, which adds the column with a NULL value for the existing records:

        ALTER TABLE ExistingTable ADD NewColumn INT NULL

    Alternatively, you can add a column with a default value; the following statement adds the column with the value 1 for the existing records:

        ALTER TABLE ExistingTable ADD NewColumn INT NOT NULL DEFAULT 1

    In the table below I measured the performance difference between the two statements.

        Parameter   Nullable Column   Default Value
        CPU         31                702
        Duration    129 ms            6,653 ms
        Reads       38                116,397
        Writes      6                 1,329
        Row Count   0                 100,000

    If you look at the Row Count parameter, you can clearly see the difference. Though a column is added in the first case, none of the rows are affected, while in the second case all the rows are updated. That is why adding a column with a default value takes more duration and CPU. We can verify this in several ways.

    Number of pages

    The number of data pages can be obtained using the DBCC IND command. Though this is an undocumented DBCC command, many experts are comfortable using it in production. However, since there is no official word from Microsoft, use it "at your own risk".

        DBCC IND (AddColumn, ExistingTable, 1)

        Before adding the columns              637
        Adding a column with NULL              637
        Adding a column with DEFAULT value     1,270

    This clearly shows that the pages are physically modified. Please note, the high value for the "Adding a column with DEFAULT value" case is also partly a result of page splits. Continues…

  • Coordinate based travel through multi-line path over elapsed time

    - by Chris
    I have implemented A* pathfinding to decide the course of a sprite through multiple waypoints. I have done this for point-A-to-point-B movement, but am having trouble with multiple waypoints: on slower devices, when the FPS drops and the sprite travels PAST a waypoint, I am lost as to the math for switching direction at the proper place.

    EDIT: To clarify, my pathfinding code runs separately in a game thread; this onUpdate method lives in a sprite-like class and runs in the UI thread for sprite updating. To be even more clear, the path is only updated when objects block the map; at any given point the current path could change, but that should not affect the design of the algorithm, if I am not mistaken. I believe all components involved are well designed and accurate, aside from this piece :-) Here is the scenario:

        public void onUpdate(float pSecondsElapsed) {
            // This could be 4x speed, so on slow devices the distance travelled
            // between frames could be very large. What happens with my original
            // algorithm is that the sprite starts doing circles around the next
            // waypoint...
            pSecondsElapsed *= SomeSpeedModificationValue;

            final int spriteCurrentX = this.getX();
            final int spriteCurrentY = this.getY();

            // getCoords contains a large array of the coordinates to each waypoint.
            // A waypoint is a destination on the map, defined by tile column/row. The
            // path finder converts these waypoints to X,Y coords.
            //
            // I.E:
            // Given a set of waypoints of 0,0 to 12,23 to 23,0 on a 23x23 tile map,
            // each tile being 32x32 pixels, this would translate in the path finder to:
            // -> 0,0 to 12,23
            // Coord : x=16  y=16
            // Coord : x=16  y=48
            // Coord : x=16  y=80
            // ...
            // Coord : x=336 y=688
            // Coord : x=336 y=720
            // Coord : x=368 y=720
            //
            // -> 12,23 to 23,0  - NOTE: this direction change gives me trouble specifically
            // Coord : x=400 y=752
            // Coord : x=400 y=720
            // Coord : x=400 y=688
            // ...
            // Coord : x=688 y=16
            // Coord : x=688 y=0
            // Coord : x=720 y=0
            //
            // The current update index; the index specifies the coordinate listed above,
            // i.e. final int[] coords = getCoords( 2 );  -> x=16 y=80
            final int[] coords = getCoords( ... );

            // Now I have the coords - how do I detect where to set the position? The
            // tricky part for me is when a direction changes: how do I calculate, based
            // on the elapsed time, how far to go along the new direction... I just
            // can't wrap my head around this.
            this.setPosition(newX, newY);
        }
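
    The standard way out is to treat each frame's movement as a distance budget and spend it segment by segment: walk toward the next coordinate; if the budget exceeds the remaining distance, snap to that coordinate, subtract that distance, and keep going along the following segment within the same frame. That way a direction change happens exactly at the waypoint no matter how large pSecondsElapsed gets. A minimal sketch under those assumptions (field and method names are illustrative, not the asker's API):

        /** Minimal sketch: carry leftover travel distance across waypoint boundaries. */
        class PathWalker {
            float x, y;            // current sprite position
            float speed = 64f;     // pixels per second (example value)
            int   next = 1;        // index of the next waypoint in coords
            float[][] coords;      // waypoint coordinates from the path finder

            PathWalker(float[][] coords) {
                this.coords = coords;
                this.x = coords[0][0];
                this.y = coords[0][1];
            }

            void onUpdate(float secondsElapsed) {
                float remaining = speed * secondsElapsed;   // distance owed this frame
                while (remaining > 0f && next < coords.length) {
                    float dx = coords[next][0] - x;
                    float dy = coords[next][1] - y;
                    float dist = (float) Math.sqrt(dx * dx + dy * dy);
                    if (dist > remaining) {
                        x += dx / dist * remaining;   // partial step on this segment
                        y += dy / dist * remaining;
                        remaining = 0f;
                    } else {
                        x = coords[next][0];          // snap to the waypoint and spend
                        y = coords[next][1];          // the leftover on the next segment,
                        remaining -= dist;            // so direction changes land exactly
                        next++;                       // where they should
                    }
                }
            }
        }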

  • Extracting the Layout of all the Data Forms from the Relational Database

    - by RahulS
    Today I came across a question from one of our clients: "What members are used on each data form, WITHOUT having to go through the report generated out of our Planning app?" We worked with the client on this and arrived at a simple query. All the form-related information is stored in the following tables:

        HSP_FORM
        HSP_FORMOBJ_DEF
        HSP_FORMOBJ_DEF_MBR
        HSP_FORM_ATTRIBUTES
        HSP_FORM_CALCS
        HSP_FORM_DV_CONDITION
        HSP_FORM_DV_PM_RULE
        HSP_FORM_DV_RULE
        HSP_FORM_DV_USER_IN_PM_RULE
        HSP_FORM_LAYOUT
        HSP_FORM_MENUS
        HSP_FORM_VARIABLES

    If we want to retrieve just the members included, we can concentrate on these:

    HSP_OBJECT: gets the Object_ID for a form; Object_Type is 7 for forms (e.g. SELECT * FROM HSP_OBJECT WHERE OBJECT_TYPE = 7).

    HSP_FORMOBJ_DEF: finds the OBJDEF_ID for a particular form.

    HSP_FORMOBJ_DEF_MBR: uses the above OBJDEF_ID to find the members. Here Mbr_ID is the id of the member, Query_Type is the function (like Idesc, Level0, etc.), and Sequence is your sequence.

    The final table we can use is HSP_FORM_LAYOUT: Layout_Type is 0 for POV, 1 for Page, 2 for Row, 3 for Column; DIM_ID is the dimension id and Ordinal is the position. Here is the query:

        SELECT HSP_OBJECT.OBJECT_NAME AS 'Form',
               HSP_OBJECT_2.OBJECT_NAME AS 'Dimension',
               HSP_OBJECT_1.OBJECT_NAME AS 'Member',
               HSP_FORMOBJ_DEF_MBR.QUERY_TYPE
        FROM
               <DatabaseName>.dbo.HSP_FORM_LAYOUT HSP_FORM_LAYOUT,
               <DatabaseName>.dbo.HSP_FORMOBJ_DEF HSP_FORMOBJ_DEF,
               <DatabaseName>.dbo.HSP_FORMOBJ_DEF_MBR HSP_FORMOBJ_DEF_MBR,
               <DatabaseName>.dbo.HSP_MEMBER HSP_MEMBER,
               <DatabaseName>.dbo.HSP_OBJECT HSP_OBJECT,
               <DatabaseName>.dbo.HSP_OBJECT HSP_OBJECT_1,
               <DatabaseName>.dbo.HSP_OBJECT HSP_OBJECT_2
        WHERE
               HSP_OBJECT.OBJECT_ID = HSP_FORMOBJ_DEF.FORM_ID AND
               HSP_FORMOBJ_DEF_MBR.OBJDEF_ID = HSP_FORMOBJ_DEF.OBJDEF_ID AND
               HSP_MEMBER.MEMBER_ID = HSP_FORMOBJ_DEF_MBR.MBR_ID AND
               HSP_OBJECT_1.OBJECT_ID = HSP_MEMBER.MEMBER_ID AND
               HSP_OBJECT_2.OBJECT_ID = HSP_MEMBER.DIM_ID AND
               HSP_FORM_LAYOUT.DIM_ID = HSP_MEMBER.DIM_ID AND
               HSP_FORM_LAYOUT.FORM_ID = HSP_FORMOBJ_DEF.FORM_ID AND
               ((HSP_OBJECT.OBJECT_TYPE = 7))
        ORDER BY HSP_OBJECT.OBJECT_NAME

    Concentrate on the Test1 data form and its actual layout. Corresponding Query_Type values for a few of the functions:

        9   for Idesc
        3   for Ancestors
        -9  for ILvl0Des
        8   for Desc
        4   for IAncestors

    It's just a basic idea; you can do a lot on the basis of this. Cheers..!!! Rahul S. http://www.facebook.com/pages/HyperionPlanning/117320818374228

  • how to update child records when updating the Master table using Linq [closed]

    - by user20358
    I currently use a general repository class that can update only a single table, like so:

        public abstract class MyRepository<T> : IRepository<T> where T : class
        {
            protected IObjectSet<T> _objectSet;
            protected ObjectContext _context;

            public MyRepository(ObjectContext Context)
            {
                _objectSet = Context.CreateObjectSet<T>();
                _context = Context;
            }

            public IQueryable<T> GetAll()
            {
                return _objectSet.AsQueryable();
            }

            public IQueryable<T> Find(Expression<Func<T, bool>> filter)
            {
                return _objectSet.Where(filter);
            }

            public void Add(T entity)
            {
                _objectSet.AddObject(entity);
                _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Added);
                _context.SaveChanges();
            }

            public void Update(T entity)
            {
                _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Modified);
                _context.SaveChanges();
            }

            public void Delete(T entity)
            {
                _objectSet.Attach(entity);
                _context.ObjectStateManager.ChangeObjectState(entity, System.Data.EntityState.Deleted);
                _objectSet.DeleteObject(entity);
                _context.SaveChanges();
            }
        }

    For every table class generated by my EDMX designer I create another class like this:

        public class CustomerRepo : MyRepository<Customer>
        {
            public CustomerRepo(ObjectContext context) : base(context)
            {
            }
        }

    For any updates that I need to make to a particular table I do this:

        Customer CustomerObj = new Customer();
        CustomerObj.Prop1 = ...
        CustomerObj.Prop2 = ...
        CustomerObj.Prop3 = ...
        CustomerRepo.Update(CustomerObj);

    This works perfectly well when I am updating just the specific table called Customer. Now suppose I also need to update each row of another table called Orders, which is a child of Customer. What changes do I need to make to the MyRepository class? The Orders table will have multiple records for a Customer record, and multiple fields too, say Field1, Field2, Field3. So my questions are:

    1.) If I only need to update Field1 of the Orders table for some rows based on a condition, and Field2 for some other rows based on a different condition, what changes do I need to make?

    2.) If there is no such condition, and all child rows need to be updated with the same value for all rows, what changes do I need to make?

    Thanks for taking the time. Looking forward to your inputs...

  • Sometimes keyboard & touchpad work... sometimes not

    - by Voyagerfan5761
    When I first ran Ubuntu from CD on this Dell Inspiron 2650, it worked for about ten to fifteen minutes, then it hung (I was probably trying to do too much at once from a live CD). The next time, my mouse and keyboard didn't work. I rebooted three times and finally got them working. I then installed Ubuntu alongside Windows XP. After installing, selecting the OS in GRUB worked, but my touchpad and keyboard were again not working. I rebooted, and they worked. (I fortunately had a USB mouse with which to reboot.)

    Booting Ubuntu and then rebooting to enable my keyboard and touchpad has become a routine ever since. Often several reboots are required; at one point I had to reboot over a dozen times in a row before getting a session where everything worked properly. (My installation has been in place for about a week now.) I've looked around for a device manager equivalent to no avail. Sometimes the hardware is properly detected, and sometimes it's not. Once or twice I've had the keyboard detected properly but not the touchpad. Plugging in my wireless card also sometimes requires a plug, unplug, and plug again to get it working.

    So is there some solution? I'm without an Internet connection at home, and this "laptop" is really a wall wart on my desk, so suggestions for packages may take a while to test.

    Xorg logs: I captured four sample Xorg logs: one from a startup where the devices worked; one from when they didn't; one from a session where Ubuntu thought my touchpad was a normal mouse; and one from a session where my keyboard worked but the touchpad didn't. See this gist.

    Updated 2010-12-15 01:50 UTC with the Xorg.0.log.keyboardonly file illustrating the case where the keyboard worked but not the touchpad.

    Updated 2011-01-11 04:10 UTC with Xorg.0.log.touchpadregmouse to illustrate a case where the touchpad was detected as a regular mouse (no "Touchpad" tab in mouse prefs).

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one CSV file that has every URL with a 404, but also has every URL that links to each one of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

        URL,Response Code,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

        URL,Response Code,Linked From,News Error,Detected,Category
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
        http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are still only 2 unique URLs (like before), but all of the "Linked To" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.

  • How do I make this rendering thread run together with the main one?

    - by funk
    I'm developing an Android game and need to show an animation of an exploding bomb. It's a spritesheet with 1 row and 13 different images. Each image should be displayed in sequence, 200 ms apart. There is one thread running for the entire game:

        package com.android.testgame;

        import android.graphics.Canvas;

        public class GameLoopThread extends Thread {
            static final long FPS = 10; // 10 frames per second
            private final GameView view;
            private boolean running = false;

            public GameLoopThread(GameView view) {
                this.view = view;
            }

            public void setRunning(boolean run) {
                running = run;
            }

            @Override
            public void run() {
                long ticksPS = 1000 / FPS;
                long startTime;
                long sleepTime;
                while (running) {
                    Canvas c = null;
                    startTime = System.currentTimeMillis();
                    try {
                        c = view.getHolder().lockCanvas();
                        synchronized (view.getHolder()) {
                            view.onDraw(c);
                        }
                    } finally {
                        if (c != null) {
                            view.getHolder().unlockCanvasAndPost(c);
                        }
                    }
                    sleepTime = ticksPS - (System.currentTimeMillis() - startTime);
                    try {
                        if (sleepTime > 0) {
                            sleep(sleepTime);
                        } else {
                            sleep(10);
                        }
                    } catch (Exception e) {}
                }
            }
        }

    As far as I know, I would have to create a second thread for the bomb:

        package com.android.testgame;

        import android.graphics.Bitmap;
        import android.graphics.Canvas;
        import android.graphics.Rect;

        public class Bomb {
            private final Bitmap bmp;
            private final int width;
            private final int height;
            private int currentFrame = 0;
            private static final int BMPROWS = 1;
            private static final int BMPCOLUMNS = 13;
            private int x = 0;
            private int y = 0;

            public Bomb(GameView gameView, Bitmap bmp) {
                this.width = bmp.getWidth() / BMPCOLUMNS;
                this.height = bmp.getHeight() / BMPROWS;
                this.bmp = bmp;
                x = 250;
                y = 250;
            }

            private void update() {
                currentFrame++;
                new BombThread().start();
            }

            public void onDraw(Canvas canvas) {
                update();
                int srcX = currentFrame * width;
                int srcY = height;
                Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
                Rect dst = new Rect(x, y, x + width, y + height);
                canvas.drawBitmap(bmp, src, dst, null);
            }

            class BombThread extends Thread {
                @Override
                public void run() {
                    try {
                        sleep(200);
                    } catch (InterruptedException e) {
                    }
                }
            }
        }

    The threads would then have to run simultaneously. How do I do this?
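
    A second thread is usually unnecessary here: the game loop already ticks at a known rate, so the bomb can simply accumulate elapsed time and advance its frame every 200 ms, marking itself finished after the 13th frame. A minimal sketch of that approach (names are illustrative; the delta time would come from the existing GameLoopThread):

        /** Minimal sketch: drive the explosion from the existing game loop
            instead of a second thread. The elapsed-time plumbing is assumed. */
        class BombAnimation {
            private static final int FRAMES = 13;
            private static final long FRAME_MS = 200;
            private long elapsedMs = 0;
            private int currentFrame = 0;
            private boolean finished = false;

            /** Call once per game-loop tick with the time since the last tick. */
            void update(long deltaMs) {
                if (finished) return;
                elapsedMs += deltaMs;
                while (elapsedMs >= FRAME_MS) {     // catch up if a tick ran long
                    elapsedMs -= FRAME_MS;
                    currentFrame++;
                    if (currentFrame >= FRAMES) {   // one-shot effect: stop at the end
                        currentFrame = FRAMES - 1;
                        finished = true;
                    }
                }
            }

            int frameToDraw() { return currentFrame; }   // use in onDraw for srcX
            boolean isFinished() { return finished; }
        }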

  • Function Folding in #PowerQuery

    - by Darren Gosbell
    Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/16/function-folding-in-powerquery.aspx

    Looking at a typical Power Query query, you will notice that it's made up of a number of small steps. As an example, take a look at the query I did in my previous post about joining a fact table to a slowly changing dimension. It was roughly built up of the following steps:

    1. Get all records from the fact table
    2. Get all records from the dimension table
    3. Do an outer join between these two tables on the business key (resulting in an increase in the row count, as there are multiple records in the dimension table for each business key)
    4. Filter out the excess rows introduced in step 3
    5. Remove extra columns that are not required in the final result set

    If Power Query were to execute a query like this literally, following the same steps in the same order, it would not be overly efficient, particularly if your two source tables were quite large. However, Power Query has a feature called function folding, where it can take a number of these small steps and push them down to the data source. The degree of function folding that can be performed depends on the data source. As you might expect, relational data sources like SQL Server, Oracle and Teradata support folding, but so do some of the other sources like OData, Exchange and Active Directory.

    To explore how this works, I took the data from my previous post and loaded it into a SQL database, then converted my Power Query expression to source its data from that database. Below is the resulting Power Query, which I edited by hand so that the whole thing can be shown as a single expression:

        let
            SqlSource = Sql.Database("localhost", "PowerQueryTest"),
            BU = SqlSource{[Schema="dbo",Item="BU"]}[Data],
            Fact = SqlSource{[Schema="dbo",Item="fact"]}[Data],
            Source = Table.NestedJoin(Fact,{"BU_Code"},BU,{"BU_Code"},"NewColumn"),
            LeftJoin = Table.ExpandTableColumn(Source, "NewColumn",
                                {"BU_Key", "StartDate", "EndDate"},
                                {"BU_Key", "StartDate", "EndDate"}),
            BetweenFilter = Table.SelectRows(LeftJoin, each (([Date] >= [StartDate]) and ([Date] <= [EndDate]))),
            RemovedColumns = Table.RemoveColumns(BetweenFilter,{"StartDate", "EndDate"})
        in
            RemovedColumns

    If the above query were run step by step in a literal fashion, you would expect it to run two queries against the SQL database, doing "SELECT * …" from both tables. However, a profiler trace shows just the following single SQL query:

        select [_].[BU_Code],
            [_].[Date],
            [_].[Amount],
            [_].[BU_Key]
        from
        (
            select [$Outer].[BU_Code],
                [$Outer].[Date],
                [$Outer].[Amount],
                [$Inner].[BU_Key],
                [$Inner].[StartDate],
                [$Inner].[EndDate]
            from [dbo].[fact] as [$Outer]
            left outer join
            (
                select [_].[BU_Key] as [BU_Key],
                    [_].[BU_Code] as [BU_Code2],
                    [_].[BU_Name] as [BU_Name],
                    [_].[StartDate] as [StartDate],
                    [_].[EndDate] as [EndDate]
                from [dbo].[BU] as [_]
            ) as [$Inner] on ([$Outer].[BU_Code] = [$Inner].[BU_Code2]
                or [$Outer].[BU_Code] is null and [$Inner].[BU_Code2] is null)
        ) as [_]
        where [_].[Date] >= [_].[StartDate] and [_].[Date] <= [_].[EndDate]

    The resulting query is a little strange; you can probably tell that it was generated programmatically. But if you look closely, you'll notice that every single part of the Power Query formula has been pushed down to SQL Server. Power Query itself ends up just constructing the query and passing the results back to Excel; it does not do any of the data transformation steps itself. So now you can feel a bit more comfortable showing Power Query to your less technical colleagues, knowing that the tool will do its best to fold all the small steps in Power Query down into the most efficient query it can against the source systems.

  • Writing or extending existing emacs packages: is it worth or should I move to Netbeans/Eclipse?

    - by Andrea
    I'm finishing my master's degree course in CS and I've almost become addicted to Emacs. I've used it to write in C, LaTeX, Java, JSP, XML, Common Lisp, Ada and other languages no other editor supported, like AMPL. I'd like to improve the packages I've been using the most or create new ones, but, in practice, I find that the implementation of Emacs leaves a lot to be desired. There are a lot of poorly-featured/poorly-maintained packages with either overlapping functionalities or obscure incompatibilities, and Elisp just seems to foster the situation by lacking the common features modern Lisps have. In contrast, Eclipse and NetBeans are actively improved, and it does seem they can be effective for non-mainstream languages. I tried Hibachi for Ada in Eclipse and it worked well; there's Cusp for Lisp in Eclipse and LambdaBeans built using NetBeans components. On the other hand, those plugins seem to be less active than their Emacs counterparts; for example, Hibachi was archived last year. What's your opinion on this? Which editor should I write extensions for?

    EDIT: To answer Larry Coleman (see comment below): I like Emacs as a user because it is efficient both for me and the computer I'm using. It's fast, and the textual interface (i.e. the minibuffer) allows for quick interaction. It's solid, and packages are usually small and easy to manage. If I need to correct or remove something, I usually just have to change a row in my .emacs or an elisp file, or delete a directory. Eclipse plugins rely on a more complicated process that screwed up my Eclipse configuration a couple of times, forcing me to do a clean reinstall. Emacs works as long as I use the basic packages. If I need something more complicated, the situation gets pretty hairy. As a "power user" I think that the best I can hope for is to write a severely crippled version of the extensions I'd actually like to have; in other words, that it's not worth the trouble. I'd like to write extensions for the things I'd like to have automated in Emacs, for example project support with automated tag-table updates on file writing. There are a few projects on this that lack integration, documentation, extensibility and so forth. The best one is probably CEDET, to which I believe Greenspun's tenth rule can be applied.

    EDIT: To comment on Larry Coleman's answer: I'm pretty sure I can pick up elisp programming, but the extensions I have in mind don't exist yet despite their relative simplicity and the effort more knowledgeable people have poured into related projects. This makes me wonder whether that is because of the way Emacs is developed, i.e. people tend to write their own little extensions without coordination, or because of its implementation, its extension language not being able to keep up with the growing complexity.

  • PeopleSoft New Design Solves Navigation Problem

    - by Applications User Experience
    Anna Budovsky, User Experience Principal Designer, Applications User Experience

    In PeopleSoft we strive to improve the user experience on all levels. Simplifying navigation and streamlining access to the most important pages is always an important goal. No one likes to waste time waiting for pages to load and watching a spinning hourglass going on and on. Those performance-affecting server trips, page-load waits and just-too-many clicks were complained about for a long time. Something had to be done.

    A few new designs came in PeopleSoft 9.2 to help users access their everyday work areas more easily and quickly. For example, Dashboard and Work Center aggregate the most-accessed information sections on a single page; Related Information allows users to complete transaction-related research without interrupting a transaction; and Secure Search takes users to a specific page directly. Today we'll talk about the Actions menu.

    Most PeopleSoft pages are shared between individual products and product lines. This means changing the content on a single page involves Oracle development and quality assurance time for making and testing the changes. In order to streamline navigation and cut down on accessing PeopleSoft pages one page at a time, we introduced a new menu design. The new menu allows accessing shared pages without the Oracle development team making any local changes, and it works as an additional one-click path to specific high-traffic actionable pages.

    Let's look at how many steps it took to Change Salary for an employee in HCM 9.1 before:

    Figure 1. BEFORE: The 6 steps a user would take to Change Salary in PeopleSoft HCM 9.1

    In PeopleSoft 9.1 it took 5 steps, plus page loading time, plus additional verification time for making sure the correct employee is selected from the table. In PeopleSoft 9.2 it only takes 2 steps. To complete an Ad Hoc Change Salary action, the user can start from the HCM Manager's Dashboard, click the Actions menu within a table, choose a menu option, and access the correct employee's details page to take the action.

    Figure 2. AFTER: The 2 steps a user would take to Change Salary in PeopleSoft HCM 9.2

    The new menu is placed at row level, which ensures the user accesses the correct employee's details page. The Actions menu separates menu options into hierarchical sections, which helps users scan and access the correct option quickly. The new menu's small size and structure enable users to access high-traffic pages from any page and from any part of the page. No more spinning hourglass, no more multiple page loads. The flexible design fits anywhere on a page and provides a fast and reliable path to the correct destination within the product. Now users can:

    - Access any target page no matter how far it is buried from the starting point;
    - Reduce navigation and page-load time;
    - Improve productivity and reduce errors.

    The new menu design is available and widely used in all PeopleSoft 9.2 product lines.

  • algorithm for project euler problem no 18

    - by Valentino Ru
    Problem number 18 from Project Euler's site is as follows: by starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.

           3
          7 4
         2 4 6
        8 5 9 3

    That is, 3 + 7 + 4 + 9 = 23. Find the maximum total from top to bottom of the triangle below:

        75
        95 64
        17 47 82
        18 35 87 10
        20 04 82 47 65
        19 01 23 75 03 34
        88 02 77 73 07 63 67
        99 65 04 28 06 16 70 92
        41 41 26 56 83 40 80 70 33
        41 48 72 33 47 32 37 16 94 29
        53 71 44 65 25 43 91 52 97 51 14
        70 11 33 28 77 73 17 78 39 68 17 57
        91 71 52 38 17 14 91 43 58 50 27 29 48
        63 66 04 68 89 53 67 30 73 16 69 87 40 31
        04 62 98 27 23 09 70 98 73 93 38 53 60 04 23

    NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67 is the same challenge with a triangle containing one hundred rows; it cannot be solved by brute force, and requires a clever method! ;o)

    The formulation of this problem does not make clear whether:

    - the "Traversor" is greedy, meaning that it always chooses the child with the higher value, or
    - the maximum over every single walkthrough is asked.

    The NOTE says that it is possible to solve this problem by trying every route. This means to me that it is also possible without! This leads to my actual question: assuming the greedy walk is not the maximum, is there any algorithm that finds the maximum walkthrough value without trying every route and that doesn't act like the greedy algorithm?

    I implemented an algorithm in Java, putting the values first into a node structure, then applying the greedy algorithm. The result, however, is considered wrong by Project Euler.

        sum = 0;

        void findWay(Node node) {
            sum += node.value;
            if (node.nodeLeft != null && node.nodeRight != null) {
                if (node.nodeLeft.value > node.nodeRight.value) {
                    findWay(node.nodeLeft);
                } else {
                    findWay(node.nodeRight);
                }
            }
        }
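
    The non-greedy, non-brute-force method the NOTE hints at is dynamic programming: collapse the triangle from the bottom up, keeping for each cell the best sum achievable through it. Each cell is visited once, so the same code handles the 100-row Problem 67. A minimal sketch (a standard technique, not tied to the asker's Node structure):

        /** Minimal sketch: bottom-up maximum path sum in O(rows^2). */
        class TrianglePath {
            static int maxTotal(int[][] tri) {
                int[] best = tri[tri.length - 1].clone();   // start from the last row
                for (int r = tri.length - 2; r >= 0; r--) {
                    for (int c = 0; c <= r; c++) {
                        // Best path through (r,c) uses the better of its two children.
                        best[c] = tri[r][c] + Math.max(best[c], best[c + 1]);
                    }
                }
                return best[0];
            }

            public static void main(String[] args) {
                int[][] demo = { {3}, {7, 4}, {2, 4, 6}, {8, 5, 9, 3} };
                System.out.println(maxTotal(demo));   // prints 23
            }
        }

    Note how this avoids the greedy trap: the example's greedy walk picks 3 + 7 + 4 + 9 = 23 by luck, but the bottom-up pass compares whole subpaths, not single children.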

  • How can I diagnose/debug "maximum number of clients reached" X errors?

    - by jmtd
    Hi, I'm hitting a problem whereby X prevents processes from creating windows, uttering something like the following into ~/.xsession-errors:

        cannot open display: :0.0
        Maximum number of clients reached

    Searching around, there are lots of examples of people facing this problem, and sometimes people identify which program they are running is using up all the client slots. See e.g. LP 70872 (Firefox), LP 263211 (gnome-screensaver). For what it's worth, I run gnome-terminal, thunderbird, chromium-browser, empathy, tomboy and virtualbox nearly all the time, on top of the normal stuff you get with the GNOME desktop, and occasionally some other bits and pieces. However, my question is not "which of my programs is causing this problem" but rather: how can one go about diagnosing this problem? In the above (and other) bugs, forum reports, etc., a number of tools are suggested:

    - xlsclients: lists the client applications for the given display, but I don't think that corresponds to 'X clients'
    - xrestop: a top-style X resources tool, one row per X client. Lots of '' clients, not shown in xlsclients output
    - xwininfo -root -children: lists X window objects

    From what I can gather, the problem might not be too many clients at all, but rather resources kept around in the X server for clients who have long since detached. But it would also appear that you cannot (easily?) relate X resources back to their client. Can one effectively diagnose this issue once it has started to occur, or is a tedious divide-and-conquer approach for the apps I run the only approach open to me?

    Update Jan 2011: I think I have resolved this issue. For the benefit of anyone stumbling across this: nautilus and/or compiz or something in that chain of software was segfaulting due to a wallpaper I had. I had chosen an XML file as my wallpaper, which defined a rotating gallery of images. It was hand-made, but based on /usr/share/backgrounds/contest/background-1.xml or similar. Since disabling the wallpaper I have not had a crash. I'm not marking this as answered yet, since the actual specific problem was not my question; how to diagnose it was. Unfortunately this was mostly trial and error, which sucks.

  • Why does my code dividing a 2D array into chunks fail?

    - by Borog
    I have a 2D array representing my world. I want to divide this huge thing into smaller chunks to make collision detection easier. I have a Chunk class that consists only of another 2D array with a specific width and height, and I want to iterate through the world, create new chunks, and add them to a list (or maybe a map with coordinates as the key; we'll see about that).

        world = new World(8192, 1024);
        Integer[][] chunkArray;

        for (int a = 0; a < map.getHeight() / Chunk.chunkHeight; a++) {
            for (int b = 0; b < map.getWidth() / Chunk.chunkWidth; b++) {
                Chunk chunk = new Chunk();
                chunkArray = new Integer[Chunk.chunkWidth][Chunk.chunkHeight];
                for (int x = Chunk.chunkHeight * a; x < Chunk.chunkHeight * (a + 1); x++) {
                    for (int y = Chunk.chunkWidth * b; y < Chunk.chunkWidth * (b + 1); y++) {
                        // Yes, the tileMap actually is [height][width]; I'll have
                        // to fix that somewhere down the line -.-
                        chunkArray[y][x] = map.getTileMap()[x * a][y * b]; // TODO: attach to chunk
                    }
                }
                chunkList.add(chunk);
            }
        }
        System.out.println(chunkList.size());

    The two outer loops get a new chunk in a specific row and column. I do that by dividing the overall size of the map by the chunk size. The inner loops then fill a new chunkArray and attach it to the chunk. But somehow my maths is broken here. Let's assume chunkHeight = chunkWidth = 64. For the first array I want to start at [0][0] and go until [63][63]. For the next I want to start at [64][64] and go until [127][127], and so on. But I get an out-of-bounds exception and can't figure out why. Any help appreciated!

    Actually, I think I know where the problem lies: chunkArray[y][x] can't work, because y goes from 0-63 only in the first iteration. Afterwards it goes from 64-127, so of course it is out of bounds. Still no nice solution, though :/

    EDIT:

        if (y < Chunk.chunkWidth && x < Chunk.chunkHeight)
            chunkArray[y][x] = map.getTileMap()[y][x];

    This works for the first iteration... now I need the commonly accepted formula.
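
    The fix that follows from that observation is to keep the loop variables local to the chunk (0..63) and add the chunk's origin only when indexing into the world map, rather than clamping with an if. A minimal sketch along those lines (names follow the question; the surrounding classes are assumed):

        /** Minimal sketch of the intended copy loop: iterate local chunk
            coordinates and add the chunk's world-space origin when reading
            from the map, so indices never run past the chunk array. The
            [width][height] ordering mirrors the question's chunkArray. */
        Integer[][] copyChunk(Integer[][] tileMap, int chunkCol, int chunkRow,
                              int chunkWidth, int chunkHeight) {
            Integer[][] chunk = new Integer[chunkWidth][chunkHeight];
            for (int x = 0; x < chunkHeight; x++) {          // local row index
                for (int y = 0; y < chunkWidth; y++) {       // local column index
                    // World coordinate = chunk origin + local offset.
                    chunk[y][x] = tileMap[chunkHeight * chunkRow + x]
                                         [chunkWidth * chunkCol + y];
                }
            }
            return chunk;
        }

    Note that the original `tileMap[x * a][y * b]` multiplies a world index by the chunk index, which is why it both skips tiles and overruns the map; the origin must be added, not multiplied.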

  • Help identify the pattern for reacting on updates

    - by Mike
    There's an entity that gets updated from external sources. Update events come at random intervals, and the entity has to be processed once updated. Multiple updates may be multiplexed; in other words, the most current state of the entity needs to be processed. There's a point of no return during processing, where the current state of the entity (and the state is consistent, i.e. no partial update is made) is saved somewhere else, and processing goes on independently of any arriving updates.

    Every subsequent set of updates has to trigger processing, i.e. the system should not forget about updates. And for each entity there should be no more than one running processing job (before the point of no return), i.e. the entity state should not be processed more than once.

    So what I'm looking for is a pattern to cancel the current processing before the point of no return, or abandon the processing results if an update arrives. The main challenge is to minimize race conditions and maintain integrity. The entity sits mainly in a database with some files on disk, and the system is in .NET with web services and message queues.

    What comes to my mind is a database queue-like table. An arriving update inserts a row in that table and processing is launched. The processing gathers the necessary data before the point of no return, and once it reaches this barrier it looks into the queue table and checks whether there are more recent updates for the entity. If there are new updates, the processing simply shuts down and its data is discarded. Otherwise the processing data is persisted and it goes beyond the point of no return.

    Though it looks like a solution to me, it is not quite elegant, and I believe this scenario may be supported by some sort of middleware. If I were to use message queues for this, there would be a need to access the queue API at the point of no return to check for the existence of new messages. And this approach also lacks elegance. Is there a name for this pattern and an existing solution?
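
    The queue-table check described above is essentially an optimistic version check: stamp every update with a monotonically increasing version, snapshot the version when processing starts, and commit only if the snapshot is still current, rescheduling otherwise. A minimal in-process sketch of that idea for a single entity (all names are illustrative; a real system would keep the version in the database row or queue table described above):

        import java.util.concurrent.atomic.AtomicLong;

        /** Minimal sketch: at most one active run; stale results are
            discarded and processing reruns while newer updates exist. */
        class EntityProcessor {
            private final AtomicLong latest = new AtomicLong();      // bumped per update
            private final AtomicLong processing = new AtomicLong(-1);

            /** Called whenever an external update lands. */
            void onUpdate() {
                latest.incrementAndGet();
                tryProcess();
            }

            private void tryProcess() {
                long snapshot = latest.get();
                // Guarantee "no more than one running processing" per entity.
                if (!processing.compareAndSet(-1, snapshot)) return;
                try {
                    doExpensiveWork();                 // before the point of no return
                    if (latest.get() == snapshot) {
                        commit();                      // the point of no return
                    }                                  // else: discard stale results
                } finally {
                    processing.set(-1);
                    if (latest.get() != snapshot) tryProcess();  // updates not forgotten
                }
            }

            void doExpensiveWork() { /* gather data */ }
            void commit()          { /* persist the consistent state */ }
        }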

  • Any ideas on reducing lag in terrain generation?

    - by l5p4ngl312
    OK, so here's the deal. I've written an isometric engine that generates terrain based on camera values using 2D Perlin noise. I planned on doing 3D, but first I need to work out the lag issues I'm having. I will try to explain how I am doing this so that maybe someone can spot where I am going wrong. I know it should not be this laggy.

    There is the abstract class Block, which right now just contains render(). BlockGrass, etc. extend this class, and each has code in the render function to create a textured quad at the given position. Then there is the class Chunk, which has the functions Generate() and setBlocksInArea(). Generate uses 2D Perlin noise to make a height map and stores the heights in a 2D array. It stores the positions of each block it generates in blockarray[x][y][z]. The chunks are 8x8x128.

    In the main game class there is a 3D array called blocksInArea. The blocks in this array are what gets rendered. When a chunk generates, it adds its blocks to this array at the correct index. It is like this so chunks can be saved to the hard drive (even though they aren't yet), but there can still be optimization in the rendering that you wouldn't have if you rendered each chunk separately.

    Here's where the laggy part comes in: when the camera moves to a new chunk, a row of chunks generates on the end of the axis that the camera moved on. But it still has to move the other chunks up/down in the blocksInArea (render) array. It does this by calculating the new position in the array and calling Chunk.setBlocksInArea():

        for (int x = 0; x < 8; x++) {
            for (int y = 0; y < 8; y++) {
                nx = x + (coordX - camCoordX) * 8;
                ny = y + (coordY - camCoordY) * 8;
                for (int z = 0; z < height[x][y]; z++) {
                    blockarray[x][y][z] = Game.blocksInArea[nx][ny][z];
                }
            }
        }

    My reasoning was that this would be much faster than doing the Perlin noise all over again, but there are still little spikes of lag when you move between chunks.

    Edit: Would it be possible to create a three-dimensional array list so that shifting chunks within the array would not be necessary? (See the sketch below for one way to avoid the shift entirely.)
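
    One common answer to that closing question is to stop shifting at all: keep the chunks in a fixed-size grid addressed with wrapped (modular) indices, so a camera move only changes which world coordinate maps to which slot, and only the strip of chunks that scrolled into view gets regenerated. A minimal sketch of the idea (names are illustrative, not the asker's engine):

        /** Minimal sketch: a 2D ring buffer of chunks. Nothing is ever
            copied; world chunk coordinates are wrapped into grid slots. */
        class ChunkRing {
            final Chunk[][] grid;
            final int size;          // grid is size x size chunks
            int originY;             // world chunk Y of the visible area's first row

            ChunkRing(int size) {
                this.size = size;
                this.grid = new Chunk[size][size];
            }

            Chunk get(int worldChunkX, int worldChunkY) {
                // Wrapped indexing: the same slot is reused as the world scrolls.
                int gx = Math.floorMod(worldChunkX, size);
                int gy = Math.floorMod(worldChunkY, size);
                return grid[gx][gy];
            }

            /** When the camera crosses into a new chunk column to the east,
                only the column that scrolled off the west edge is replaced;
                no other chunk moves in memory. */
            void shiftEast(int newWorldChunkX) {
                int gx = Math.floorMod(newWorldChunkX, size);
                for (int y = 0; y < size; y++) {
                    grid[gx][y] = generateChunk(newWorldChunkX, originY + y);
                }
            }

            Chunk generateChunk(int cx, int cy) { return new Chunk(); } // Perlin here
            static class Chunk { }
        }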

  • PL/SQL to delete invalid data from token Strings

    - by Jie Chen
    The previous article describes how to delete duplicated values from a token string in bulk mode. This one extends it and shows a way to delete invalid data.

    Scenario: suppose we have the tables page_two and manufacturers in the database, with the following DDL:

        SQL> desc page_two;
         Name                    Null?    Type
         ----------------------- -------- ------------------------
         MULTILIST04                      VARCHAR2(765)

        SQL> desc manufacturers;
         Name                    Null?    Type
         ----------------------- -------- ------
         ID                      NOT NULL NUMBER
         NAME                             VARCHAR

    In table page_two, the column multilist04 stores a token string split by commas. Each token represents a valid ID in the manufacturers table. My goal is to delete invalid tokens from page_two.multilist04, i.e. those that have no matching id in manufacturers.id. For example, in the value

        ,6295728,33,6295729,6295730,6295731,22,

    the tokens 33 and 22 are invalid data, because there is no ID equal to 33 or 22 in the manufacturers table. So I need to delete 33 and 22.

        SQL> col rowid format a20;
        SQL> col multilist04 format a50;
        SQL> select rowid, multilist04 from page_two;

        ROWID                MULTILIST04
        -------------------- --------------------------------------------------
        AAB+UrADfAAAAhUAAI   ,6295728,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAJ   ,1111,6295728,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAK   ,6295728,111,6295729,6295730,6295731,
        AAB+UrADfAAAAhUAAL   ,6295728,6295729,6295730,6295731,22,
        AAB+UrADfAAAAhUAAM   ,6295728,33,6295729,6295730,6295731,22,

        SQL> select id, encode_name from manufacturers where id in (1111,11,22,33);

        No rows selected

    Solution: as there is no built-in SPLIT function or similar in PL/SQL, I had to program it myself. I coded an intermediate Split function that returns the token value between the current delimiter and the next one. The main entry point then gets each multilist04 value from page_two and processes each row through a cursor. For each multilist04 value it uses the Split function to extract each token into the singValue variable, then checks whether that token exists in manufacturers.id. If not found, it sets fixFlag to 1, marking the row as pending deletion.

  • How to get lookahead symbol when constructing LR(1) NFA for parser?

    - by greenoldman
    I am reading an explanation (in the awesome "Parsing Techniques" by D. Grune and C.J.H. Jacobs; p. 292 in the 2nd edition) of how to construct an LR(1) parser, and I am at the stage of building the initial NFA. What I don't understand is how to get/compute the lookahead symbol. Here is the example grammar from the book:

        S -> E
        E -> E - T
        E -> T
        T -> ( E )
        T -> n

    n is a terminal. The "weird" transitions for me are this sequence (the state number is in front, the lookahead symbol at the end):

        1) S -> . E          eof
        2) E -> . E - T      eof
        3) E -> . E - T      -
        4) E -> E . - T      -
        5) E -> E - . T      -

    What puzzles me is that the transition from (4) to (5) means reading a - token, right? So how is it that - is still a lookahead symbol, and, even more important, why is eof no longer a lookahead symbol? After all, in an input such as

        n - n eof

    there is only one - symbol. My naive thinking tells me (5) should be written as:

        5) E -> E - . T      - eof

    And another thing: n is a terminal, so why is it not used at all as a lookahead symbol? I mean, we expect to see - or (, that's OK, but does the lack of n mean we are sure it won't appear in the input?

    Update: after more reading I am only more confused ;-) I.e., what really is a lookahead? Because I see a state like this (p. 292, 2nd column, 2nd row):

        E -> E . - T     eof

    The lookahead says eof, but the incoming input says -. Isn't that a contradiction? And it is not only in this book.

  • Speed up your Silverlight Debugging for large projects

    - by Aligned
    I'm working on a 5+ year old ASP.NET project that has 74+ projects, and we've been adding new Silverlight applications to run in the ASP.NET page islands. My machine at work isn't the most powerful, so I find myself waiting a lot for the whole thing to build. I'm using Visual Studio 2010, so that takes up a lot of resources as well. This causes me to get distracted and I start looking at the news... I need to combat that more :-). I can't get a new machine (that's up to someone else), so I've found a few tricks to help.

    1. Only build the Silverlight project you're working with. This will build all referenced projects (you can see these by right-clicking and clicking Project Dependencies) and package a new XAP (you can see all the actions in your build output window). Then refresh the page with the Silverlight app and it's up to date.

    2. I was working with a co-worker (thanks Jordan) who was using the Debug -> Attach to Process window. In the "Attach to:" row there is a "Select..." button. In the dialog, click "Debug these code types:" and select Silverlight. Hit OK. Then all you need to do is find your process (you might need to click the refresh button). I'm usually debugging in IE, so I select the first one and push "i" on the keyboard. That brings me to the open IE windows. Find the one with type Silverlight, x86. It is usually directly above one with type x86 that has the page title for "title". Click Attach and watch your output window spit out messages about loading debug symbols and your breakpoints becoming enabled (if this doesn't happen, you chose the wrong process; hit stop and try again). Now you can debug the client code as normal; server code requires a full F5 or attaching to the correct process.

    To improve this even further, bind the menu item to a keystroke. I chose Ctrl+X, X. (Tools -> Options -> Keyboard, search for Debug.AttachToProcess, set the shortcut keys globally and assign.) Most of the time I build the project, then hit Ctrl+X, X, then "i", then Enter, and I'm debugging. The process I want is usually the first IE in the list.

  • Validation and Error Generation when using the Data Mapper Pattern

    - by AndyPerlitch
    I am working on saving the state of an object to a database using the data mapper pattern, but I am looking for suggestions/guidance on the validation and error-message generation step (step 4 below). Here are the general steps as I see them for doing this:

    (1) The data mapper is used to get the current info (assoc array) about the object in the db:

        +=====================================================+
        | person_id | name | favorite_color | age             |
        +=====================================================+
        | 1         | Andy | Green          | 24              |
        +-----------------------------------------------------+

    The mapper returns an associative array, e.g. Person_Mapper::getPersonById($id):

        $person_row = array(
            'person_id'      => 1,
            'name'           => 'Andy',
            'favorite_color' => 'Green',
            'age'            => '24',
        );

    (2) The Person object constructor takes this array as an argument, populating its fields:

        class Person
        {
            protected $person_id;
            protected $name;
            protected $favorite_color;
            protected $age;

            function __construct(array $person_row)
            {
                $this->person_id = $person_row['person_id'];
                $this->name = $person_row['name'];
                $this->favorite_color = $person_row['favorite_color'];
                $this->age = $person_row['age'];
            }

            // getters and setters...

            public function toArray()
            {
                return array(
                    'person_id'      => $this->person_id,
                    'name'           => $this->name,
                    'favorite_color' => $this->favorite_color,
                    'age'            => $this->age,
                );
            }
        }

    (3a) (GET request) The inputs of an HTML form that is used to change info about the person are populated using the Person getters:

        <form>
            <input type="text" name="name" value="<?=$person->getName()?>" />
            <input type="text" name="favorite_color" value="<?=$person->getFavColor()?>" />
            <input type="text" name="age" value="<?=$person->getAge()?>" />
        </form>

    (3b) (POST request) The Person object is altered with the POST data using the Person setters:

        $person->setName($_POST['name']);
        $person->setFavColor($_POST['favorite_color']);
        $person->setAge($_POST['age']);

    (4) Validation and error-message generation on a per-field basis.
        - Should this take place in the Person object or the Person mapper object?
        - Should data be validated BEFORE being placed into fields of the Person object?

    (5) The data mapper saves the Person object (updates the row in the database):

        $person_mapper->savePerson($person);
        // the savePerson method uses $person->toArray()
        // to get data in a more digestible format for the
        // db gateway used by person_mapper

    Any guidance, suggestions, criticism, or name-calling would be greatly appreciated.
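
    For step (4), one common arrangement is to validate the raw input in a dedicated validator before it ever reaches the entity, so per-field error messages are generated in one place and both the entity and the mapper can assume clean data. A minimal, language-neutral sketch of that shape (shown here in Java; all names and rules are illustrative):

        import java.util.HashMap;
        import java.util.Map;

        /** Minimal sketch: a validator that turns raw form input into a
            per-field error map; an empty map means the entity may be hydrated. */
        class PersonValidator {
            Map<String, String> validate(Map<String, String> input) {
                Map<String, String> errors = new HashMap<>();

                String name = input.getOrDefault("name", "");
                if (name.isEmpty() || name.length() > 100) {
                    errors.put("name", "Name is required and must be under 100 characters.");
                }

                try {
                    int age = Integer.parseInt(input.getOrDefault("age", ""));
                    if (age < 0 || age > 150) {
                        errors.put("age", "Age must be between 0 and 150.");
                    }
                } catch (NumberFormatException e) {
                    errors.put("age", "Age must be a whole number.");
                }

                return errors;
            }
        }

    With this split, the entity's setters stay simple, the mapper never sees bad data, and the error map lines up field-for-field with the HTML form in step (3a).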

  • Building a Store Locator ASP.NET Application Using Google Maps API

    The past couple of projects I've been working on have included the use of the Google Maps API and geocoding service in websites for various reasons. I decided to tie together some of the lessons learned, build an ASP.NET store locator demo, and write about it on 4Guys. Last week I published the first article in what I think will be a three-part series: Building a Store Locator ASP.NET Application Using Google Maps (Part 1).

    Part 1 walks through creating a demo where a user can type in an address and any stores within a (roughly) 15-mile area will be displayed in a grid. The article begins with a look at the database used to power the store locator (namely, a single table that contains one row for every location, with each location storing its store number, address, and, most important, latitude and longitude coordinates) and then turns to using Google's geocoding service to translate a user-entered address into latitude and longitude coordinates. The latitude and longitude coordinates are used to find nearby stores, which are then displayed in a grid.

    Part 2 looks at enhancing the search results to include a map with markers indicating the position of each nearby store location. The Google Maps API, along with a bit of client-side script and server-side logic, makes this actually pretty straightforward and easy to implement. Here's a screen shot of the improved store locator results.

    Part 3, which I plan on publishing next week, looks at how to enhance the map by using information windows to display address information when clicking a marker. Additionally, I'll show how to use custom icons for the markers so that instead of having the same marker for each nearby location, the markers will be images numbered 1, 2, 3, and so on, corresponding to the number assigned to each search result in the grid. The idea here is that by numbering the search results in the grid and the markers on the map, visitors will quickly be able to see which marker corresponds to which search result.

    This article and demo have been a lot of fun to write and create, and I hope you enjoy reading them, too.

    Building a Store Locator ASP.NET Application Using Google Maps API (Part 1)
    Building a Store Locator ASP.NET Application Using Google Maps API (Part 2)

    Happy Programming!

  • Storing a looong lookup table

    - by inquisitive
    Background: The product I am working on has a very long lookup table. The table contains static data and cannot be auto-generated. There are about 500 rows and 10 columns. The columns hold mostly integers and strings. To complicate matters, there are actually two such tables; every row in table-1 maps to zero or more rows in table-2. We use an SQLite database with the two tables. The product installer places the SQLite file in the installation directory. The application is written in .NET, and we use ADO to load the data once on startup. Now, the lookup table grows: in each monthly release we add about 10 new entries and adjust existing entries; every release we fine-tune existing entries.

    The problem: A team of (10) developers works on the lookup table. Code goes into SVN, but the little devil the SQLite file does not. This prevents multiple developers from working on it. We do take regular backups of the file, but proper versioning is not possible. We never know who made the breaking change. Worse, we don't even know if there is any change at all. Diffing databases is tedious, if not impossible. The tables are expected to grow quite large in the years to come, and we will need developers to work on them in parallel. The data is business-critical; we need to be able to audit changes made to it.

    Question: What would be a solution for the problems outlined above? One idea was to transform the whole thing to XML and treat it like just another source file. That way SVN can do the versioning and we can work in parallel. But the data shows relational behavior: with XML we lose the unique and foreign-key constraints, and we also can't query it with SQL-like ease. Any help here will be appreciated.
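
    One middle ground between "binary blob in SVN" and "give up on SQL" is to make a text form canonical: export both tables to a deterministic, sorted, one-line-per-row dump that lives in SVN (diffable, blameable, mergeable), and rebuild the SQLite file from that dump at build time, so the application keeps its constraints and queries. A minimal export sketch (shown in Java, assuming the Xerial sqlite-jdbc driver; table and file names are illustrative):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.ResultSetMetaData;
        import java.sql.Statement;

        /** Minimal sketch: dump a table as tab-separated rows in a stable order. */
        class LookupExport {
            public static void main(String[] args) throws Exception {
                try (Connection c = DriverManager.getConnection("jdbc:sqlite:lookup.db");
                     Statement s = c.createStatement();
                     // Stable ordering is the key: two exports of identical data
                     // are byte-identical, so SVN diffs show exactly what changed.
                     ResultSet rs = s.executeQuery("SELECT * FROM table1 ORDER BY id")) {
                    ResultSetMetaData md = rs.getMetaData();
                    while (rs.next()) {
                        StringBuilder line = new StringBuilder();
                        for (int i = 1; i <= md.getColumnCount(); i++) {
                            if (i > 1) line.append('\t');
                            line.append(rs.getString(i));
                        }
                        System.out.println(line);   // redirect to table1.tsv for SVN
                    }
                }
            }
        }

    The reverse step (rebuilding lookup.db from the dumps during the build) restores the unique and foreign-key constraints that a plain XML/text approach would lose, while SVN history gives the audit trail.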

  • MOSSt 2010 Hosting :: Dialog Platform in SharePoint 2010 & How to Open the Edit Form Dialog for List Item

    - by mbridge
    One of the new user interface platforms in SharePoint 2010 is the Dialog Platform. A dialog is essentially a <div> which is made visible on demand and renders its HTML over a background overlay, creating a modal-dialog-like user experience. We can show an existing div from within the page, or a different page via its URL, inside the dialog. When we pass a URL to the dialog, it looks for the query string parameter "IsDlg=1". If this parameter exists, it dynamically loads the /_layouts/styles/dlgframe.css file. This file overrides the "s4-notdlg" class items with "display:none", which means that all items with this class are not displayed in dialog mode. So if we go to the v4.master page, we can see that this class is used by the Ribbon control to hide the ribbon when in dialog mode.

    How to open the Edit Form dialog for a list item: in SharePoint 2010, the URL for opening the edit form of any list item looks something like this:

        http://intranet.contoso.com/<SiteName>/Lists/<ListName>/EditForm.aspx?ID=1&IsDlg=1

    ID is the list item row identifier, and, as discussed above, IsDlg is for the dialog mode. To open a dialog we use the SP.UI.ModalDialog.showModalDialog method from the ECMAScript client object model and pass in the URL of the page, the width and height of the dialog, and also a callback function in case we want some code to run after the dialog is closed.

        <script type="text/javascript">
            // Handle the dialog callback
            function DialogCallback(dialogResult, returnValue) {
            }

            // Open the dialog
            function OpenEditDialog(id) {
                var options = {
                    url: "http://intranet.contoso.com/<SiteName>/Lists/<ListName>/EditForm.aspx?ID=" + id + "&IsDlg=1",
                    width: 700,
                    height: 700,
                    dialogReturnValueCallback: DialogCallback
                };
                SP.UI.ModalDialog.showModalDialog(options);
            }
        </script>

    The .js files for the ECMAScript object model (SP.js, SP.Core.js, SP.Ribbon.js, and SP.Runtime.js) are installed in the %ProgramFiles%\Common Files\Microsoft Shared\web server extensions\14\TEMPLATE\LAYOUTS directory. Here is a good MSDN link explaining the client object model distribution and deployment options available in SharePoint 2010.

  • systemstate dump

    - by JaneZhang
    When the database hangs and it is unclear which session is blocking the others, a systemstate dump records the state of every process in the instance, so the offender can be identified afterwards. A typical trigger is the error "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK" in the alert log.

    Note that taking the dump takes a while to complete, and the resulting trace file can reach several MB. To take a systemstate dump:

    1. Connect as sysdba:

        $ sqlplus / as sysdba
        or
        $ sqlplus -prelim / as sysdba   <== use -prelim if the database is already hung

        SQL> oradebug setmypid
        SQL> oradebug unlimit;
        SQL> oradebug dump systemstate 266;
        (wait 1-2 minutes)
        SQL> oradebug dump systemstate 266;
        (wait 1-2 minutes)
        SQL> oradebug dump systemstate 266;
        SQL> oradebug tracefile_name;   ==> shows where the trace file was written

    2. Along with the systemstate dump, also take a hanganalyze so the wait chains can be analyzed:

        $ sqlplus / as sysdba
        or
        $ sqlplus -prelim / as sysdba   <== use -prelim if the database is already hung

        SQL> oradebug setmypid
        SQL> oradebug unlimit;
        SQL> oradebug dump hanganalyze 3
        (wait 1-2 minutes)
        SQL> oradebug dump hanganalyze 3
        (wait 1-2 minutes)
        SQL> oradebug dump hanganalyze 3
        SQL> oradebug tracefile_name;   ==> shows where the trace file was written

    On RAC, take the dump on all instances at once (the -g all option dumps on every instance):

        SQL> oradebug setmypid
        SQL> oradebug unlimit
        SQL> oradebug -g all dump systemstate 266
        (wait 1-2 minutes)
        SQL> oradebug -g all dump systemstate 266
        (wait 1-2 minutes)
        SQL> oradebug -g all dump systemstate 266

    RAC hanganalyze:

        SQL> oradebug setmypid
        SQL> oradebug unlimit
        SQL> oradebug -g all hanganalyze 3
        (wait 1-2 minutes)
        SQL> oradebug -g all hanganalyze 3
        (wait 1-2 minutes)
        SQL> oradebug -g all hanganalyze 3

    The generated trace files can be found in background_dump_dest (the diag trace directory).

    Common systemstate dump levels:

        10:  dump
        11:  dump + global cache of RAC
        256: short stack (function call stack)
        258: dump (without lock element) + short stack
        266: 256+10 --> short stack + dump
        267: 256+11 --> short stack + dump + global cache of RAC

    Levels 11 and 267 dump the global cache and produce very large trace files, so use them only when the global cache needs to be analyzed; in general, level 266 is enough, since it includes the short stacks from which you can tell what each process is executing. Collecting a short stack takes time for every process, however: if there are many processes (say 2000, which could take on the order of 30 minutes), consider level 10 or level 258 instead. Level 258 adds short stacks to level 10, but contains less lock element data than level 10.

    For comparison, trace file sizes at different levels from a test instance with 37 processes:

        -rw-r----- 1 oracle oinstall    72721 Aug 31 21:50 rac10g2_ora_31092.trc ==> 256 (short stack, about 2K per process)
        -rw-r----- 1 oracle oinstall  2724863 Aug 31 21:52 rac10g2_ora_31654.trc ==> 10  (dump, about 72K per process)
        -rw-r----- 1 oracle oinstall  2731935 Aug 31 21:53 rac10g2_ora_32214.trc ==> 266 (dump + short stack, about 72K per process)

    RAC:

        -rw-r----- 1 oracle oinstall 55873057 Aug 31 21:49 rac10g2_ora_30658.trc ==> 11  (dump + global cache, about 1.4M per process)
        -rw-r----- 1 oracle oinstall 55879249 Aug 31 21:48 rac10g2_ora_28615.trc ==> 267 (dump + global cache + short stack, about 1.4M per process)

    In short: unless you specifically need to analyze the global cache, there is no need to dump it (levels 11 and 267).

  • Help with DB Structure, vOD site

    - by Chud37
    I have a video-on-demand style site that hosts series of videos under different modules. However, with the way I have designed the database, it is proving to be very slow. I have asked this question before, and someone suggested indexing, but I cannot seem to get my head around it. I would like someone to help with the structure of the database here to see if it can be improved. The core table is Videos:

        ID       bigint(20)     (primary key, auto-increment)
        pID      text
        airdate  text
        title    text
        subject  mediumtext
        url      mediumtext
        mID      int(11)
        vID      int(11)
        sID      int(11)

    pID is a unique 5-digit string for each video, a shorthand identifier. Airdate is the timestamp (stored in text format; right there, maybe I should change that to TIMESTAMP with auto-update). Title is self-explanatory, subject is self-explanatory, url is the hard link on the site to the video, mID is joined to another table for the module title, vID is joined to another table for the language of the video (English, Russian, etc.), and sID points to the summary for the module, a paragraph stored in an external database.

    The slowest part of the website is the logging part of it. I store the data in another table called Hits:

        id      mediumint(10)  (primary key, auto-increment)
        progID  text
        ts      int(10)

    Again (this was all made a while ago), my timestamp (ts) is an INT instead of ON UPDATE CURRENT_TIMESTAMP, which I guess it should be. However, this table is now 47,492 rows long, and the script that I wrote to process it is very, very slow; so slow, in fact, that it times out. A row is added to this table each time a user clicks 'Play' on the website, so progID is the same as the pID, and ts logs the PHP time() timestamp.

    Basically, I load the entire Hits table into an array and count the hits in each day using the ts column. I am guessing (I'm quite slow at all this, but I had no idea this would happen when I built the thing) that this is possibly the worst way to go about it. So my questions are as follows:

    1. Is there a better way of structuring the Videos table? If so, what do you suggest?
    2. Is there a better way of structuring Hits? If so, please help/tell me!
    3. Or is it the fact that my tables are fine and the PHP coding is crappy?
