
  • Binding ListBox ItemCount to IValueConverter

    - by Ben
    Hi all, I am fairly new to WPF, so forgive me if I am missing something obvious. I have a collection of AggregatedLabels, and I am trying to bind the ItemCount of each AggregatedLabel to the FontSize in my DataTemplate, so that an AggregatedLabel with a large ItemCount is displayed at a larger font size in my ListBox. The part I am struggling with is the binding to the value converter. Can anyone assist? Many thanks!

    XAML snippet:

        <DataTemplate x:Key="TagsTemplate">
            <WrapPanel>
                <TextBlock Text="{Binding Name, Mode=Default}"
                           TextWrapping="Wrap"
                           FontSize="{Binding ItemCount, Converter={StaticResource CountToFontSizeConverter}, Mode=Default}"
                           Foreground="#FF0D0AF7"/>
            </WrapPanel>
        </DataTemplate>

        <ListBox x:Name="tagsList"
                 ItemsSource="{Binding AggregatedLabels, Mode=Default}"
                 ItemTemplate="{StaticResource TagsTemplate}"
                 Style="{StaticResource tagsStyle}"
                 Margin="200,10,16.171,11.88" />

    AggregatedLabel collection:

        using (DB2DataReader dr = command.ExecuteReader())
        {
            while (dr.Read())
            {
                AggregatedLabelModel aggLabel = new AggregatedLabelModel();
                aggLabel.ID = Convert.ToInt32(dr["LABEL_ID"]);
                aggLabel.Name = dr["LABEL_NAME"].ToString();
                LabelData.Add(aggLabel);
            }
        }

    Converter:

        public class CountToFontSizeConverter : IValueConverter
        {
            #region IValueConverter Members

            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                const int minFontSize = 6;
                const int maxFontSize = 38;
                const int increment = 3;
                int count = (int)value;

                if ((minFontSize + count + increment) < maxFontSize)
                {
                    return minFontSize + count + increment;
                }
                return maxFontSize;
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                throw new NotImplementedException();
            }

            #endregion
        }

    AggregatedLabel class:

        public class AggregatedLabelModel
        {
            public int ID { get; set; }
            public string Name { get; set; }
        }

    CollectionView:

        ListCollectionView labelsView = new ListCollectionView(LabelData);
        labelsView.GroupDescriptions.Add(new PropertyGroupDescription("Name"));
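    One thing worth noting while reading the snippet above: AggregatedLabelModel exposes only ID and Name, while the grouping happens on the CollectionView. When items are grouped, each group header's DataContext is a CollectionViewGroup, which does expose an ItemCount property that the converter can bind to. A minimal sketch of wiring the converter at the group level (the GroupStyle below is illustrative, not from the original post):

        <ListBox.GroupStyle>
            <GroupStyle>
                <GroupStyle.HeaderTemplate>
                    <DataTemplate>
                        <!-- DataContext here is a CollectionViewGroup, so ItemCount is bindable -->
                        <TextBlock Text="{Binding Name}"
                                   FontSize="{Binding ItemCount, Converter={StaticResource CountToFontSizeConverter}}" />
                    </DataTemplate>
                </GroupStyle.HeaderTemplate>
            </GroupStyle>
        </ListBox.GroupStyle>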

  • How to determine the CPU and RAM needed for a Rails app?

    - by Ben
    What is the most accurate way to determine the amount of CPU speed and RAM needed to run my Rails app? I believe there are stress-testing tools like Tsung, but how do I determine, for example, that I need X more RAM or X more CPU? I would like some way to roughly gauge the performance needs of my application so I can anticipate future needs. This data will also help me decide whether to upgrade one machine, or get another dedicated machine and put all the databases on it. Essentially, I am concerned about scaling issues and how to anticipate them. Thanks in advance for the help!
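    One rough way to ground these numbers, sketched below under the assumption of a local app server on port 3000: drive the app with a fixed load and watch throughput and resident memory, then extrapolate. The commands are generic (ApacheBench and ps on Linux); a tool like Tsung would give more realistic scenarios.

        # Baseline throughput: 2000 requests, 50 concurrent (adjust to taste)
        ab -n 2000 -c 50 http://localhost:3000/

        # While the test runs, watch resident memory (RSS, in KB) and CPU of the Ruby processes
        watch -n 1 'ps -C ruby -o pid,rss,pcpu,command'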

  • Simplest way to match an array of strings against a search string in Perl?

    - by Ben Dauphinee
    What I want to do is check an array of strings against my search string and get the corresponding key so I can store it. Is there a magical way of doing this with Perl, or am I doomed to using a loop? If so, what is the most efficient way to do this? I'm relatively new to Perl (I've only written 2 other scripts), so I don't know a lot of the magic yet, just that Perl is magic =D

    Reference array: (1 = 'Canon', 2 = 'HP', 3 = 'Sony')
    Search string: Sony's Cyber-shot DSC-S600
    End result: 3
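    For what it's worth, a minimal sketch of one common approach using a hash plus List::Util's first (a loop still runs under the hood, it's just hidden):

        use strict;
        use warnings;
        use List::Util qw(first);

        my %brands = (1 => 'Canon', 2 => 'HP', 3 => 'Sony');
        my $search = "Sony's Cyber-shot DSC-S600";

        # First key whose brand name occurs anywhere in the search string
        my $key = first { index($search, $brands{$_}) >= 0 } keys %brands;
        print "$key\n";    # prints 3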

  • What's the difference between train, validation and test sets in neural networks?

    - by Daniel
    I'm using this library http://pastebin.com/raw.php?i=aMtVv4RZ to implement a learning agent. I have generated the training cases, but I don't know for sure what the validation and test sets are. The teacher says: 70% should be training cases, 10% will be test cases and the remaining 20% should be validation cases. Thanks.

    Edit: I have this code for training, but I have no idea when to stop training.

        def train(self, train, validation, N=0.3, M=0.1):
            # N: learning rate
            # M: momentum factor
            accuracy = list()
            while(True):
                error = 0.0
                for p in train:
                    input, target = p
                    self.update(input)
                    error = error + self.backPropagate(target, N, M)
                print "validation"
                total = 0
                for p in validation:
                    input, target = p
                    output = self.update(input)
                    # calculates sum of absolute difference between target and output
                    total += sum([abs(target - output) for target, output in zip(target, output)])
                accuracy.append(total)
                print min(accuracy)
                print sum(accuracy[-5:])/5
                #if i % 100 == 0: print 'error %-14f' % error
                if ? < ?: break
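    On the "when to stop" question, one common recipe is early stopping: track the best validation error seen so far and stop once it has not improved for a fixed number of epochs. A sketch only; train_one_epoch and validation_error are hypothetical stand-ins for the loops above:

        def train_with_early_stopping(train_one_epoch, validation_error, patience=10):
            # Stop when validation error has not improved for `patience` epochs.
            best = float('inf')
            stale = 0
            while stale < patience:
                train_one_epoch()
                err = validation_error()
                if err < best:
                    best, stale = err, 0
                else:
                    stale += 1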

  • Programmer tendency to preach [closed]

    - by Daniel
    I've run across several SO posts that come across as preachy or condescending. Do pedagogical programmers feel plagued by thoughtless questions? Or, do programmers count self-sufficiency such a virtue that any perceived lack of ambition merits scolding? These are some theories, admittedly negative ones. Can anyone offer some insight?

  • What libraries or platforms should I use to build web apps that provide real-time, asynchronous data synchronization?

    - by Daniel Sterling
    This is less a question with a simple, practical answer and more a question to foster discussion on the real-time data exchange topic. I'll begin with an example: Google Wave is, at its core, a real-time asynchronous data synchronization engine. Wave supports (or plans to support) concurrent (real-time) document collaboration, disconnected (offline) document editing, conflict resolution, document history and playback with attribution, and server federation.

    A core part of Wave is the Operational Transformation engine: http://www.waveprotocol.org/whitepapers/operational-transform

    The OT engine manages document state. Changes between clients are merged, and each client has a sane and consistent view of the document at all times; the final document is eventually consistent between all connected clients.

    My questions are: Is this system abstract or general enough to be used as a library or generic framework upon which to build web apps that synchronize real-time, asynchronous state in each client? Is the Wave protocol directly used by any current web applications (besides Google's client)? Would it make sense to use it directly for generic state synchronization in a web app? What other existing libraries or frameworks would you consider using when building such a web app?

    How much code in such an app might be domain-specific logic vs. generic state synchronization logic? Or, put another way, how leaky might the state synchronization abstractions be? Comments and discussion welcome!
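    To make the OT idea concrete, here is a toy sketch (illustrative only, nothing like Wave's actual engine) of transforming two concurrent inserts so both sites converge:

        type Insert = { pos: number; text: string; site: number };

        // Rewrite op `a` so it applies correctly after concurrent op `b`;
        // ties at the same position are broken by site id.
        function transform(a: Insert, b: Insert): Insert {
          if (b.pos < a.pos || (b.pos === a.pos && b.site < a.site)) {
            return { ...a, pos: a.pos + b.text.length };
          }
          return a;
        }

        // Convergence: site 1 applies a then transform(b, a); site 2 applies
        // b then transform(a, b); both end with the same string.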

  • Detecting whether user stayed after prompting onBeforeUnload

    - by Daniel Magliola
    In a web app I'm working on, I'm capturing onBeforeUnload to ask the user whether he really wants to exit. Now, if he decides to stay, there are a number of things I'd like to do. What I'm trying to figure out is how to detect that he actually chose to stay.

    I can of course set a timeout for "x" seconds, and if that fires, it would mean the user is still there (because we didn't get unloaded). The problem is that the user can take any amount of time to decide whether to stay or not... I was first hoping that while the dialog was showing, setTimeout calls would not fire, so I could set a timeout for a short time and it would only fire if the user chose to stay. However, timeouts do fire while the dialog is shown, so that doesn't work.

    Another idea I tried is capturing mouseMove on the window/document. While the dialog is shown, mouseMove events indeed don't fire, except for one weird exception that really applies to my case, so that won't work either. Can anyone think of another way to do this? Thanks!

    (In case you're curious, the reason capturing mouseMove doesn't work is that I have an iframe in my page, containing a site from another domain. If, at the time of unloading the page, the focus is within the iframe, then while the dialog shows I get the mouseMove event firing ONCE when the mouse moves from inside the iframe to the outside (at least in Firefox). That's probably a bug, but still, it's very likely to happen in our case, so I can't use this method.)
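    For the record, one variant that is sometimes suggested (a sketch, with the same timing caveat raised above: the timer must outlast the user's decision or be re-armed) pairs the timeout with an unload flag, so a fired timer plus an unset flag at least reliably means the page survived:

        var leaving = false;
        window.onunload = function () { leaving = true; };

        window.onbeforeunload = function () {
            setTimeout(function () {
                if (!leaving) {
                    // Page is still alive: the user (presumably) chose to stay.
                    handleUserStayed(); // hypothetical callback
                }
            }, 2000);
            return "Are you sure you want to leave?";
        };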

  • Drawing only part of a texture

    - by Ben Reeves
    ...Continued on from my previous question. I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone.

        - (void)setupView {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320});
            glEnable(GL_TEXTURE_2D);
        }

        // Updates the OpenGL view when the timer fires
        - (void)drawView {
            // Make sure that you are drawing to the current context
            [EAGLContext setCurrentContext:context];

            // Get the 320*480 buffer
            const int8_t * frameBuf = [source getNextBuffer];

            // Create enough storage for a 512x512 power-of-2 texture
            int8_t lBuf[2*512*512];
            memcpy(lBuf, frameBuf, 320*480*2);

            // Upload the texture
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf);

            // Draw it
            glDrawTexiOES(0, 0, 1, 480, 320);

            [context presentRenderbuffer:GL_RENDERBUFFER_OES];
        }

    If I produce the original texture in 512*512, the output is cropped incorrectly but otherwise looks fine. However, using the required output size of 320*480, everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine:

        int8_t lBuf[512][512][2];
        const char * frameDataP = frameData;
        for (int ii = 0; ii < 480; ++ii) {
            memcpy(lBuf[ii], frameDataP, 320);
            frameDataP += 320;
        }

    which is better, but the width appears to be stretched and the height is messed up. Any help appreciated.
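    As a reference point: RGB565 is 2 bytes per pixel, so each 320-pixel source row is 640 bytes and each row of the 512-wide texture is 1024 bytes, whereas the routine above copies only 320 bytes per row and advances by a stride that matches neither buffer. A sketch of a copy that keeps both strides in sync (variable names as in the post):

        const int bytesPerPixel = 2;                 /* RGB565 */
        const int srcW = 320, srcH = 480, texW = 512;

        for (int row = 0; row < srcH; ++row) {
            memcpy(lBuf + row * texW * bytesPerPixel,      /* dest: 1024-byte rows */
                   frameBuf + row * srcW * bytesPerPixel,  /* src:   640-byte rows */
                   srcW * bytesPerPixel);
        }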

  • jQuery each loop with JSON array

    - by Ben
    I'm trying to use jQuery's each loop to go through this JSON and add it to a div named #contentHere. The JSON is as follows:

        {
            "justIn": [
                { "textId": "123", "text": "Hello", "textType": "Greeting" },
                { "textId": "514", "text": "What's up?", "textType": "Question" },
                { "textId": "122", "text": "Come over here", "textType": "Order" }
            ],
            "recent": [
                { "textId": "1255", "text": "Hello", "textType": "Greeting" },
                { "textId": "6564", "text": "What's up?", "textType": "Question" },
                { "textId": "0192", "text": "Come over here", "textType": "Order" }
            ],
            "old": [
                { "textId": "5213", "text": "Hello", "textType": "Greeting" },
                { "textId": "9758", "text": "What's up?", "textType": "Question" },
                { "textId": "7655", "text": "Come over here", "textType": "Order" }
            ]
        }

    I'm getting this JSON through use of this code:

        $.get("data.php", function(data){
        })

    Any solutions?
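    A minimal sketch of one way to walk this structure (the appended markup is just illustrative): use $.getJSON (or pass "json" as the dataType to $.get) so data arrives parsed, then nest two $.each calls, one over the categories and one over the entries:

        $.getJSON("data.php", function (data) {
            $.each(data, function (category, items) {        // justIn, recent, old
                $.each(items, function (i, item) {
                    $("#contentHere").append(
                        "<p>" + category + ": " + item.text +
                        " (" + item.textType + ")</p>");
                });
            });
        });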

  • Server is on or off

    - by Daniel
    Curious. I'm starting to broadcast high school football games online, using a program that broadcasts the feeds off my computer. However, when I shut the program or the computer down, the server goes offline and guests can't access the feeds. Is there any kind of code out there that I can post on my website to indicate to my guests whether the server is on or off? I figure it would be simple: a PHP script or something that periodically checks whether a site is online and then displays ON or OFF.
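    A minimal sketch of the kind of PHP check described (host, port and timeout are placeholders for wherever the feed actually lives):

        <?php
        $host = 'stream.example.com';  // placeholder: your broadcast host
        $port = 8000;                  // placeholder: your broadcast port

        // Try to open a TCP connection; success means the server is up.
        $conn = @fsockopen($host, $port, $errno, $errstr, 2);
        if ($conn) {
            echo 'ON';
            fclose($conn);
        } else {
            echo 'OFF';
        }
        ?>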

  • How can I temporarily redirect printf output to a c-string?

    - by Ben S
    I'm writing an assignment which involves adding some functionality to PostgreSQL on a Solaris box. As part of the assignment, we need to print some information on the client side (i.e., using elog). PostgreSQL already has lots of helper methods which print out the required information; however, the helper methods are packed with hundreds of printf calls, and the elog method only works with C-style strings. Is there a way I could temporarily redirect printf calls to a buffer so I could easily send it over elog to the client? If that's not possible, what would be the simplest way to modify the helper methods to end up with a buffer as output?
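    One generic POSIX approach, sketched below (not PostgreSQL-specific, and print_helper_info is a stand-in for the real helper): temporarily point the stdout file descriptor at a temp file with dup2, run the helper, restore stdout, and read the file back into a buffer for elog:

        #include <stdio.h>
        #include <unistd.h>
        #include <fcntl.h>

        extern void print_helper_info(void);  /* stand-in for the real helper */

        /* Capture everything the helper printf()s into buf. */
        static void capture_helper_output(char *buf, size_t buflen)
        {
            int saved = dup(STDOUT_FILENO);             /* remember real stdout */
            int fd = open("/tmp/pg_capture", O_RDWR | O_CREAT | O_TRUNC, 0600);

            fflush(stdout);
            dup2(fd, STDOUT_FILENO);                    /* printf now goes to the file */
            print_helper_info();
            fflush(stdout);

            dup2(saved, STDOUT_FILENO);                 /* restore stdout */
            close(saved);

            ssize_t n = pread(fd, buf, buflen - 1, 0);  /* read back what was captured */
            buf[n > 0 ? n : 0] = '\0';
            close(fd);
        }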

  • Why does the rename() syscall prohibit moving a directory that I can't write to a different directory?

    - by Daniel Papasian
    I am trying to understand why this design decision was made with the rename() syscall in 4.2BSD. There's nothing I'm trying to solve here; I just want to understand the rationale for the behavior itself.

    4.2BSD saw the introduction of the rename() syscall for the purpose of allowing atomic renames/moves of files. From 4.3BSD-Reno/src/sys/ufs/ufs_vnops.c:

        /*
         * If ".." must be changed (ie the directory gets a new
         * parent) then the source directory must not be in the
         * directory heirarchy above the target, as this would
         * orphan everything below the source directory. Also
         * the user must have write permission in the source so
         * as to be able to change "..". We must repeat the call
         * to namei, as the parent directory is unlocked by the
         * call to checkpath().
         */
        if (oldparent != dp->i_number)
            newparent = dp->i_number;
        if (doingdirectory && newparent) {
            VOP_LOCK(fndp->ni_vp);
            error = ufs_access(fndp->ni_vp, VWRITE, tndp->ni_cred);
            VOP_UNLOCK(fndp->ni_vp);

    So clearly this check was added intentionally. My question is: why? Is this behavior supposed to be intuitive?

    The effect of this is that one cannot atomically move a directory (located in a directory that one can write) that one cannot write to another directory that one can write. You can, however, create a new directory, move the links over (assuming one has read access to the directory), and then remove one's write bit on the directory. You just can't do so atomically.

        % cd /tmp
        % mkdir stackoverflow-question
        % cd stackoverflow-question
        % mkdir directory-1
        % mkdir directory-2
        % mkdir directory-1/directory-i-cant-write
        % echo "foo" > directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write/contents
        % chmod 000 directory-1/directory-i-cant-write
        % mv directory-1/directory-i-cant-write directory-2
        mv: rename directory-1/directory-i-cant-write to directory-2/directory-i-cant-write: Permission denied

    We now have a directory I can't write, with contents I can't read, that I can't move atomically. I can, however, achieve the same effect non-atomically by changing permissions, making the new directory, using ln to create the new links, and changing permissions again. (Left as an exercise to the reader.)

    . and .. are special-cased already, so I don't particularly buy that it is intuitive that if I can't write a directory I can't "change ..", which is what the source suggests. Is there any reason for this besides it being the perceived correct behavior by the author of the code? Is there anything bad that can happen if we let people atomically move directories (that they can't write) between directories that they can write?

  • How to store (and use) the current mouse position?

    - by Ben Packard
    What is the best way to store the current mouse position (system-wide) and then (later) put the mouse back at that stored point? [NSEvent mouseLocation] gets me the position, and I can move the mouse with a CGEventMouseMoved, but they each use a different coordinate system (I believe y=0 is the bottom for NSEvent and the top for a CGEvent). I'm worried about the robustness of capturing the screen height and using it to convert between the two - or is this the best approach?
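    For reference, a sketch of the flip using the height of the primary screen (fine on a single display; multi-monitor setups need more care, which is exactly the robustness worry above):

        #import <Cocoa/Cocoa.h>

        NSPoint saved = [NSEvent mouseLocation];            // Cocoa: origin bottom-left

        // Later: convert to the CG coordinate space (origin top-left) and move.
        CGFloat screenHeight = [[NSScreen mainScreen] frame].size.height;
        CGPoint cgPoint = CGPointMake(saved.x, screenHeight - saved.y);

        CGEventRef move = CGEventCreateMouseEvent(NULL, kCGEventMouseMoved,
                                                  cgPoint, kCGMouseButtonLeft);
        CGEventPost(kCGHIDEventTap, move);
        CFRelease(move);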

  • Why does Microsoft Windows' performance appear to degrade over time?

    - by Ben Aston
    Windows XP/2k3 and earlier (I can't attest to Vista, but suspect it's the same) all appear to become more sluggish over time as applications are installed and uninstalled. This is not a scientifically tested observation, but more a piece of wisdom learned through experience. (I've always suspected the registry of being behind the issue.) Does anyone have any concrete evidence of this degradation occurring, or is it just an invalid perception of mine?

  • SQL query listing fathers and children with joins; how to distinguish them?

    - by DaNieL
    Given these tables:

        table_n1:
        | t1_id | t1_name |
        |     1 | foo     |

        table_n2:
        | t2_id | t1_id | t2_name |
        |     1 |     1 | bar     |

    I need a query that gives me two results:

        | names     |
        | foo       |
        | foo / bar |

    But I can't figure out the right way. I wrote this one:

        SELECT CONCAT_WS(' / ', table_n1.t1_name, table_n2.t2_name) AS names
        FROM table_n1
        LEFT JOIN table_n2 ON table_n2.t1_id = table_n1.t1_id

    but it only works by half: it returns just the second row from the example above:

        | names     |
        | foo / bar |

    This query returns the 'father' (table_n1) name alone only when it doesn't have 'children' (table_n2). How can I fix it?
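    A sketch of one common fix: select the parents on their own, then UNION in the joined parent/child pairs, so each father appears both alone and with each of its children:

        SELECT t1_name AS names
        FROM table_n1
        UNION ALL
        SELECT CONCAT_WS(' / ', table_n1.t1_name, table_n2.t2_name)
        FROM table_n1
        INNER JOIN table_n2 ON table_n2.t1_id = table_n1.t1_id;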

  • Misalignment in the output Bitmap created from a byte array

    - by Daniel
    I am trying to understand why I have trouble creating a Bitmap from a byte array. I post this after a careful scrutiny of the existing posts about Bitmap creation from byte arrays, like the following: Creating a bitmap from a byte[], Working with Image and Bitmap in c#?, C#: Bitmap Creation using bytes array.

    My code is meant to execute a filter on a digital 8bppIndexed image, writing the pixel values to a byte[] buffer that is then converted (after some processing to manage gray levels) back into an 8bppIndexed Bitmap. My input image is a trivial image created by means of specific Perl code: https://www.box.com/shared/zqt46c4pcvmxhc92i7ct

    Of course, after executing the filter the output image has lost the first and last rows and the first and last columns, due to the way the filter manages borders, so from the original 256 x 256 image I get a 254 x 254 image. Just to stay focused on the issue, I have commented out the code responsible for executing the filter, so that the operation really performed is an obvious:

        ComputedPixel = InputImage.GetPixel(myColumn, myRow).R;

    I know, I should use lock and unlock, but I prefer one headache at a time. Anyway, this code should be a sort of identity transform, and at the end I use:

        private unsafe void FillOutputImage()
        {
            OutputImage = new Bitmap(OutputImageCols, OutputImageRows, PixelFormat.Format8bppIndexed);

            ColorPalette ncp = OutputImage.Palette;
            for (int i = 0; i < 256; i++)
                ncp.Entries[i] = Color.FromArgb(255, i, i, i);
            OutputImage.Palette = ncp;

            Rectangle area = new Rectangle(0, 0, OutputImageCols, OutputImageRows);
            var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
            Marshal.Copy(byteBuffer, 0, data.Scan0, byteBuffer.Length);
            OutputImage.UnlockBits(data);
        }

    The output image I get is the following: https://www.box.com/shared/p6tubyi6dsf7cyregg9e

    It is quite clear that I am losing a pixel per row, but I cannot understand why. I have carefully controlled all the parameters: OutputImageCols, OutputImageRows, and the byte[] byteBuffer length and content, even writing known values as a way to test. The code is nearly identical to other code posted on Stack Overflow and elsewhere. Could someone help to identify where the problem is? Thanks a lot.
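    The per-row drift is the classic signature of a stride mismatch: LockBits pads each row to a 4-byte boundary, so a 254-pixel-wide 8bpp row occupies 256 bytes, and a single Marshal.Copy of a tightly packed 254-byte-per-row buffer shifts a little more each row. A sketch of a row-by-row copy that honors data.Stride (same variable names as above):

        var data = OutputImage.LockBits(area, ImageLockMode.WriteOnly, OutputImage.PixelFormat);
        for (int row = 0; row < OutputImageRows; row++)
        {
            Marshal.Copy(byteBuffer,
                         row * OutputImageCols,                     // source offset: packed rows
                         IntPtr.Add(data.Scan0, row * data.Stride), // dest: padded rows
                         OutputImageCols);
        }
        OutputImage.UnlockBits(data);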

  • Performing Aggregate Functions on Multi-Million Row Tables

    - by Daniel Short
    I'm having some serious performance issues with a multi-million row table that I feel I should be able to get results from fairly quickly. Here's a rundown of what I have, how I'm querying it, and how long it's taking:

    I'm running SQL Server 2008 Standard, so partitioning isn't currently an option. I'm attempting to aggregate all views for all inventory for a specific account over the last 30 days. All views are stored in the following table:

        CREATE TABLE [dbo].[LogInvSearches_Daily](
            [ID] [bigint] IDENTITY(1,1) NOT NULL,
            [Inv_ID] [int] NOT NULL,
            [Site_ID] [int] NOT NULL,
            [LogCount] [int] NOT NULL,
            [LogDay] [smalldatetime] NOT NULL,
            CONSTRAINT [PK_LogInvSearches_Daily] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
        ) ON [PRIMARY]

    This table has 132,000,000 records and is over 4 gigs. A sample of 10 rows from the table:

        ID   Inv_ID   Site_ID  LogCount  LogDay
        ---- -------- -------- --------- -----------------------
        1    486752   48       14        2009-07-21 00:00:00
        2    119314   51       16        2009-07-21 00:00:00
        3    313678   48       25        2009-07-21 00:00:00
        4    298863   0        1         2009-07-21 00:00:00
        5    119996   0        2         2009-07-21 00:00:00
        6    463777   534      7         2009-07-21 00:00:00
        7    339976   503      2         2009-07-21 00:00:00
        8    333501   570      4         2009-07-21 00:00:00
        9    453955   0        12        2009-07-21 00:00:00
        10   443291   0        4         2009-07-21 00:00:00

    I have the following index on LogInvSearches_Daily:

        CREATE NONCLUSTERED INDEX [IX_LogInvSearches_Daily_LogDay]
        ON [dbo].[LogInvSearches_Daily]
        (
            [LogDay] ASC
        )
        INCLUDE ([Inv_ID], [LogCount])
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
              IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

    I need to pull inventory only from the Inventory table for a specific account id, and I have an index on Inventory as well. I'm using the following query to aggregate the data and give me the top 5 records. This query is currently taking 24 seconds to return the 5 rows:

        SELECT TOP 5 Sum(LogCount) AS Views
             , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
             , Inv_ID
        FROM LogInvSearches_Daily D (NOLOCK)
        WHERE LogDay > DateAdd(d, -30, getdate())
          AND EXISTS( SELECT NULL
                      FROM propertyControlCenter.dbo.Inventory (NOLOCK)
                      WHERE Acct_ID = 18731
                        AND Inv_ID = D.Inv_ID )
        GROUP BY Inv_ID

    Execution plan:

        |--Top(TOP EXPRESSION:((5)))
             |--Sequence Project(DEFINE:([Expr1007]=dense_rank))
                  |--Segment
                       |--Segment
                            |--Sort(ORDER BY:([Expr1006] DESC, [D].[Inv_ID] DESC))
                                 |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1006]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
                                      |--Sort(ORDER BY:([D].[Inv_ID] ASC))
                                           |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                                                |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1011], [Expr1012], [Expr1010]))
                                                |    |--Compute Scalar(DEFINE:(([Expr1011],[Expr1012],[Expr1010])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                                                |    |    |--Constant Scan
                                                |    |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1011] AND [D].[LogDay] < [Expr1012]) ORDERED FORWARD)
                                                |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID]), SEEK:([propertyControlCenter].[dbo].[Inventory].[Acct_ID]=(18731) AND [propertyControlCenter].[dbo].[Inventory].[Inv_ID]=[LOA

    I tried using a CTE to pick up the rows first and aggregate them, but that didn't run any faster and gives essentially the same execution plan:

        WITH getSearches AS (
            SELECT LogCount
                 -- , DENSE_RANK() OVER(ORDER BY Sum(LogCount) DESC, Inv_ID DESC) AS Rank
                 , D.Inv_ID
            FROM LogInvSearches_Daily D (NOLOCK)
            INNER JOIN propertyControlCenter.dbo.Inventory I (NOLOCK)
                ON Acct_ID = 18731
               AND I.Inv_ID = D.Inv_ID
            WHERE LogDay > DateAdd(d, -30, getdate())
            -- GROUP BY Inv_ID
        )
        SELECT Sum(LogCount) AS Views, Inv_ID
        FROM getSearches
        GROUP BY Inv_ID

        |--Stream Aggregate(GROUP BY:([D].[Inv_ID]) DEFINE:([Expr1004]=SUM([LOALogs].[dbo].[LogInvSearches_Daily].[LogCount] as [D].[LogCount])))
             |--Sort(ORDER BY:([D].[Inv_ID] ASC))
                  |--Nested Loops(Inner Join, OUTER REFERENCES:([D].[Inv_ID]))
                       |--Nested Loops(Inner Join, OUTER REFERENCES:([Expr1008], [Expr1009], [Expr1007]))
                       |    |--Compute Scalar(DEFINE:(([Expr1008],[Expr1009],[Expr1007])=GetRangeWithMismatchedTypes(dateadd(day,(-30),getdate()),NULL,(6))))
                       |    |    |--Constant Scan
                       |    |--Index Seek(OBJECT:([LOALogs].[dbo].[LogInvSearches_Daily].[IX_LogInvSearches_Daily_LogDay] AS [D]), SEEK:([D].[LogDay] > [Expr1008] AND [D].[LogDay] < [Expr1009]) ORDERED FORWARD)
                       |--Index Seek(OBJECT:([propertyControlCenter].[dbo].[Inventory].[IX_Inventory_Acct_ID] AS [I]), SEEK:([I].[Acct_ID]=(18731) AND [I].[Inv_ID]=[LOALogs].[dbo].[LogInvSearches_Daily].[Inv_ID] as [D].[Inv_ID]) ORDERED FORWARD)

    So given that I'm getting good index seeks in my execution plan, what can I do to get this running faster?

    Thanks,
    Dan
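    One small thing worth trying, suggested by the GetRangeWithMismatchedTypes step in both plans: GETDATE() returns a datetime while LogDay is smalldatetime, so the seek range is computed through a type-mismatch helper. Pre-computing the boundary in the column's own type is a cheap experiment (a sketch, not a guaranteed fix):

        DECLARE @since smalldatetime;
        SET @since = DATEADD(d, -30, GETDATE());

        SELECT TOP 5 SUM(LogCount) AS Views
             , DENSE_RANK() OVER(ORDER BY SUM(LogCount) DESC, Inv_ID DESC) AS Rank
             , Inv_ID
        FROM LogInvSearches_Daily D (NOLOCK)
        WHERE D.LogDay > @since
          AND EXISTS ( SELECT NULL
                       FROM propertyControlCenter.dbo.Inventory
                       WHERE Acct_ID = 18731
                         AND Inv_ID = D.Inv_ID )
        GROUP BY Inv_ID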

  • NFS (NetApp) question: how to map the device shown in sar -d to the data volume?

    - by Daniel
    Using sar I can see device nfs24 is very busy, but how do I know which data volume (file system) the device maps to?

        sar -d 1 10 | egrep "busy|nfs"
        SunOS phxdbnfs11 5.10 Generic_141414-07 sun4v    04/14/2010

        19:03:27   device   %busy      avque    r+w/s   blks/s     avwait   avserv
        19:03:28   nfs23        0        0.0        0        0        0.0      0.0
                   nfs24      100  1484053.4      504    32123  2944300.0     47.5
                   nfs25        0        0.0        0        0        0.0      0.0
                   nfs26        0        0.0        0        0        0.0      0.0
                   nfs27      107        1.1        0        0        0.0      0.0
                   nfs28      107       17.0        0        0        0.0      0.0
                   nfs29      100    13109.5      460    29435    28451.7     52.0
                   nfs30        0        0.0        0        0        0.0      0.0
                   nfs31      107        9.6        0        0        0.0      0.0
                   nfs32        0        0.0        0        0        0.0      0.0
                   nfs33      107        1.1        0        0        0.0      0.0

        19:03:29   nfs23        0        0.0        0        0        0.0      0.0
                   nfs24      100  1483762.8      530    33709  2797054.5     45.1
                   nfs25        0        0.0        0        0        0.0      0.0
                   nfs26        0        0.0        0        0        0.0      0.0
                   nfs27      107        1.1        0        0        0.0      0.0
                   nfs28      107       17.0        0        0        0.0      0.0
                   nfs29      100    12800.8      511    32732    25016.0     46.8
                   nfs30        0        0.0        0        0        0.0      0.0
                   nfs31      107        9.6        0        0        0.0      0.0
                   nfs32        0        0.0        0        0        0.0      0.0
                   nfs33      107        1.1        0        0        0.0      0.0

        19:03:30   nfs23        0        0.0        0        0        0.0      0.0
                   nfs24      100  1483080.4      761    48162  1950073.8     31.4
                   nfs25        0        0.0        0        0        0.0      0.0
                   nfs26        0        0.0        0        0        0.0      0.0
                   nfs27      107        1.1        0        0        0.0      0.0
                   nfs28      107       17.0        0        0        0.0      0.0
                   nfs29      100    12406.7      737    46855    16800.7     32.4
                   nfs30        0        0.0        0        0        0.0      0.0
                   nfs31      107        9.6        0        0        0.0      0.0
                   nfs32        0        0.0        0        0        0.0      0.0
                   nfs33      107        1.1        0        0        0.0      0.0
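    One avenue to try (an assumption on my part, not verified on this box): on Solaris the NN in sar's nfsNN is the mount's minor device number, which also appears in the dev= field of /etc/mnttab, so the two can be matched up; nfsstat -m then shows the server and volume behind each mount point.

        # List NFS mounts with their dev= ids; the low bits are the minor number
        grep nfs /etc/mnttab

        # Show server, mount point and options for every NFS mount
        nfsstat -m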
