Search Results

Search found 4603 results on 185 pages for 'lower bound'.


  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases that see a lot of end-user data updates, incremental data loads, lock-and-send activity, and/or frequent calculations. If an Essbase database starts to experience performance slow-downs, that is an indication there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html

    Fragmentation is likely to occur in the following situations:
      - Read/write databases where users are constantly updating data
      - Databases that execute calculations around the clock
      - Databases that frequently update and recalculate dense members
      - Data loads that are poorly designed
      - Databases that contain a significant number of Dynamic Calc and Store members
      - Databases that use an isolation level of uncommitted access with commit block set to zero

    There are two types of data block fragmentation:
      - Free space tracking, which is measured using the Average Fragmentation Quotient statistic.
      - Block order on disk, which is measured using the Average Cluster Ratio statistic.

    Average Fragmentation Quotient: this ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and is either appended at the end of the file or placed in another empty space that is large enough. These empty spaces take up room in the .PAG files. The higher the number, the more empty spaces you have, therefore the bigger the .PAG file and the longer it takes to traverse the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is about 3% fragmented with free space.

    Average Cluster Ratio: this statistic describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are stored in the correct sequence, in outline order, and indicates no fragmentation. As you load and calculate data, the sequence can fall out of order, because a rewritten block may not fit back into the exact spot it occupied before. The lower the number, the more out of order the blocks become and the more performance is affected; a value lower than 1, e.g. 0.01032828, means the data blocks are getting further out of outline order.

    Eliminating Data Block Fragmentation: both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure:
      1. Implicit restructures happen when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase creates new .PAG files, restructuring the data blocks within them, and regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed by implicit restructures.
      2. Explicit restructures happen when a database restructure is initiated manually. An explicit dense restructure is a full restructure: it comprises the dense restructure described above plus the removal of empty blocks.

    Empty Blocks vs. Fragmentation: the existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block adds to the existing database block count and is included in the block counts shown in the database properties. There are no statistics for empty blocks. The only way to determine whether empty blocks exist in an Essbase database is to record the current block count, export the entire database, clear the database, then import the exported data. If the block count decreased, the difference is the number of empty blocks that had existed in the database.
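    For reference, both the statistics and the fixes above can be scripted in MaxL. The statements below are only a sketch against the standard Sample.Basic demo application; the file name is made up, and you should adapt names, paths and error handling to your environment and check the MaxL reference for your release.

        /* report data block statistics, including the fragmentation statistics discussed above */
        query database Sample.Basic get dbstats data_block;

        /* explicit dense restructure (also removes empty blocks) */
        alter database Sample.Basic force restructure;

        /* or the export / clear / import route */
        export database Sample.Basic data to data_file 'sample_exp.txt';
        alter database Sample.Basic reset data;
        import database Sample.Basic data from data_file 'sample_exp.txt' on error abort;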

    Read the article

  • “Apparently, you signed a software services agreement without fully understanding it.”

    - by Dave Ballantyne
    I am not a lawyer. Let me say that again: I am not a lawyer. Today's Dilbert has prompted me to post about my recent experience with SQL Server licensing.

    I'm in the technical realm and rarely have much to do with purchasing and licensing. I say "I need"; budget realities will dictate whether I actually get. However, I do keep my ear to the ground and, due to my community involvement, I know, or at least have an understanding of, some licensing restrictions.

    Due to a misunderstanding, Microsoft Licensing stated that we needed licences for our standby servers. I knew that was not the case, and a quick tweet confirmed this. So after composing an email stating exactly what the machines in question were used for, i.e. log shipped to and used in a disaster recovery scenario only, and attaching several TechNet articles to back this up, we saved two Enterprise Edition licences, a not inconsiderable cost.

    However, during this discussion I was made aware of another 'legalese' document that could completely override the referenced articles, and anything I knew, or thought I knew, about SQL Server licensing. Personally, I had no knowledge of this. The "Product Use Rights" agreement would appear to be the volume licensing equivalent of the "End User License Agreement" click-throughs we all know and ignore. Here is a direct quote from Microsoft Licensing, when asked for clarification:

    "Thanks for your email. Just to give some background on the Product Use Rights (PUR), licenses acquired through volume licensing are bound by the most recent PUR at the time of license acquisition. The link for the current PUR and PUR archive is http://www.microsoft.com/licensing/about-licensing/product-licensing.aspx. Further to this, products acquired through boxed product or pre-installed on hardware (OEM) are bound by the End User License Agreement (EULA). The PUR will explain limitations, license requirements and rulings on areas like multiplexing, virtualization, processor licensing, etc. When an article will appear on a Microsoft site or blog describing the licensing of a product, it will be using the PUR as a base. Due to the writing style or language used by the person writing areas of the website or technical blogs, the PUR is what you should use as a rule and not any of the other media. The PUR is updated quarterly and will reference every product available at that time working on the latest version unless otherwise stated. The crux of this is that the PUR is written after extensive discussions between the different branches of Microsoft (legal, technical, etc) and the wording is then approved. This is not always the case for some pages explaining licensing as they are merely intended to advise and not subject to the intense scrutiny as the PUR."

    So, exactly what does that mean? My take: this is a living document, "updated quarterly", though presumably this could be done on a whim and a fancy. It could state that you are only licensed if, during install, you stand in a corner juggling, and that photographic evidence is required. A plainly ridiculous demand, but what else could it override, or what new requirements could it state, that change your existing understanding of the product or your legal usage of it?

    As I say, I'm not a lawyer, but are you checking the PUR prior to purchase?

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

      - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
      - Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
      - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization. In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity. The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics: in this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks: Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • ADO and Two Way Storage Tiering

    - by Andy-Oracle
    We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit: can you tier back to the original location if the data gets hot again? The answer is yes, but not with standard Automatic Data Optimization policies, at least not reliably. That's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM) where data will be very volatile when first created, will become less active or cool, and then will eventually cease to be accessed at all (i.e. cold).

    I think the reason this question gets asked is that customers realize many of their business processes are cyclical, and the thinking goes that those segments that only get used during month-end or year-end cycles could sit on lower cost storage when not being used. Unfortunately this doesn't fit very well with the ADO storage tiering model. ADO storage tiering is based on the amount of free and used space in the source tablespace. There are two parameters that control this behavior, TBS_PERCENT_USED and TBS_PERCENT_FREE. When the space used in the tablespace exceeds the TBS_PERCENT_USED value, segments specified in storage tiering clause(s) can be moved until the percent of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace.

    Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is ensuring that there is enough space to move the segment back to tier 1 storage, assuming that that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage, because there is typically more tier 2 storage available. At least that's the premise, since it is supposed to be less costly, lower performing and higher capacity storage. In either case though, if there isn't enough space then the operation fails. In the case of a customized function, the question becomes: do you attempt to free the space so the move can be made, or do you just stop and return false so that the move cannot take place? This is really the crux of the issue. Once you cross into this territory you're really going to have to implement two-way hierarchical storage, and the whole point of ADO was to provide automatic storage tiering. You're probably better off using heat map and/or business access requirements and building your own hierarchical storage management infrastructure if you really want two-way storage tiering.
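    For orientation, the two parameters mentioned above live in the DBMS_ILM_ADMIN package, and the tiering clause itself is part of the table DDL. The snippet below is only a sketch: the threshold values, the sales table and the low_cost_ts tablespace are illustrative placeholders, not recommendations.

        -- adjust the space thresholds that trigger tiering (illustrative values)
        BEGIN
          DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 85);
          DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 25);
        END;
        /

        -- one-way policy: once the source tablespace crosses the threshold,
        -- eligible segments become candidates to move to the lower-cost tablespace
        ALTER TABLE sales ILM ADD POLICY TIER TO low_cost_ts;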

    Read the article

  • I.T. Chargeback: Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill. Consolidation and virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin-driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand.

    Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free.

    A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost, they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing chargeback successfully creates a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure, as I.T. resources that are not needed are not left running idle.

    Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager target:
      - Oracle VM
      - Host
      - Oracle Database
      - Oracle WebLogic Server

    Using Enterprise Manager Chargeback, administrators are able to create a set of Charge Plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (e.g. $10/month/database), configuration-based costs (e.g. $10/month if the OS is Windows) and utilization-based costs (e.g. $0.05/GB of memory/hour). The self-service user provisioning these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine if the resources are delivering adequate business value based on what is being charged. [Figure 1: Chargeback in Self-Service Portal]

    Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports. Summary reports allow the administrator to drill down through the cost center hierarchy to identify, for example, the top resource consumers across the organization. [Figure 2: Charge Summary Report] Trending reports can be used for I.T. planning and budgeting as they show utilization and charge trends over a period of time. [Figure 3: CPU Trend Report]

    We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as Line of Business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM). Further information on Enterprise Manager 12c's integrated metering and chargeback: White Paper, Screenwatch, Cloud Management on OTN.

    Read the article

  • The new direction of the gaming industry

    - by raccoon_tim
    Just recently I read a great blog post by David Darling, the founder of Codemasters: http://www.develop-online.net/blog/347/Jurassic-consoles-could-become-extinct. In the blog post he talks about how traditional retail games are experiencing a downfall thanks to the increasing popularity of digital distribution. I personally think of retail games as relics of the past. It does not really make much sense to keep distributing boxed games when the same game can be elegantly downloaded and updated over the air through a digital distribution channel.

    The world is not all rainbows, however. One big issue with mixing digital distribution and boxed retail games is that resellers will not condone you selling your game for 10€ digitally while they're selling the same game for 70€. The only way to get around this issue is to move to full digital distribution. This has the added benefit of minimizing piracy, as the game can be tightly bound to the service you downloaded it from. Many players are, however, complaining about not being able to play the games offline. Having games tightly bound to the internet is a problem when games are bought from a retailer, as we tend to expect that once we have the product we can use it anywhere because we physically own it. The truth is that we don't actually own the product. Instead, the typical EULA actually states that we only have a license to use the product. We're not, for instance, allowed to disassemble the product, which the owner is indeed permitted to do.

    Digital distribution allows us to provide games as services, instead of selling them as standalone products. This means that for a service to work you have to be connected to the internet, but you still have the same rights to use the product. It's really straightforward: if you downloaded a client from the internet, you are expected to have an internet connection so you're able to connect to the server. A game distributed digitally that is built using a client-server architecture has the added benefit of allowing you to play anywhere, as long as you have the client installed and you are able to log in with your user information. Your save games can be backed up and your game can continue anywhere.

    Another development we're seeing in the gaming industry is the increasing popularity of free-to-play games. These are games that let you play for free but allow you to boost your gaming experience with real-world money. The nature of these games is that players are constantly rewarded with new content, the game can evolve according to the way they play, and their wishes can be incorporated into the product. Free-to-play games can quickly gain a large player base, and monetization is done by providing players valuable things to buy that make their gaming experience more fun. I am personally very excited about free-to-play games, as it's possible to start building the game together with your players, and there is no need to work on the game for 5 years from start to finish and only then see if it's actually something the players like. This is a typical problem with big movie-like retail games, and recent news about Radical Entertainment practically closing its doors paints a clear picture of what can happen when the risk does not pay off: http://news.teamxbox.com/xbox/25874/Prototype-Developer-Radical-Entertainment-Closes/.

    Read the article

  • Could not determine metatable error binding list to asp.net datagridview

    - by Scott Vercuski
    I am working with the following block of code:

        List<ThemeObject> themeList = (from theme in database.Themes
                                       join image in database.DBImages on theme.imageID equals image.imageID into resultSet
                                       from item in resultSet
                                       select new ThemeObject { Name = theme.Name, ImageID = item.imageID }).ToList();
        dgvGridView.DataSource = themeList;
        dgvGridView.DataBind();

    The list object populates fine. The datagrid is set up with two columns: a textbox column for the "Name", which is bound to "Name", and an image column, which is bound to the "ImageID" field. When I execute the code I receive the following error on the DataBind():

        Could not determine a MetaTable. A MetaTable could not be determined for the data source '' and one could not be inferred from the request URL. Make sure that the table is mapped to the data source, or that the data source is configured with a valid context type and table name, or that the request is part of a registered DynamicDataRoute.

    I'm not using any DynamicDataRoutes as far as I can tell. Has anyone experienced this error before?

    Read the article

  • MVC 2: Html.TextBoxFor, etc. in VB.NET 2010

    - by Brian
    Hello, I have this sample ASP.NET MVC 2.0 view in C#, bound to a strongly typed model that has a first name, last name, and email:

        <div>
            First: <%= Html.TextBoxFor(i => i.FirstName) %>
            <%= Html.ValidationMessageFor(i => i.FirstName, "*") %>
        </div>
        <div>
            Last: <%= Html.TextBoxFor(i => i.LastName) %>
            <%= Html.ValidationMessageFor(i => i.LastName, "*") %>
        </div>
        <div>
            Email: <%= Html.TextBoxFor(i => i.Email) %>
            <%= Html.ValidationMessageFor(i => i.Email, "*") %>
        </div>

    I converted it to VB.NET, using the equivalent constructs in VB.NET 10, as:

        <div>
            First: <%= Html.TextBoxFor(Function(i) i.FirstName) %>
            <%= Html.ValidationMessageFor(Function(i) i.FirstName, "*") %>
        </div>
        <div>
            Last: <%= Html.TextBoxFor(Function(i) i.LastName) %>
            <%= Html.ValidationMessageFor(Function(i) i.LastName, "*") %>
        </div>
        <div>
            Email: <%= Html.TextBoxFor(Function(i) i.Email) %>
            <%= Html.ValidationMessageFor(Function(i) i.Email, "*") %>
        </div>

    No luck. Is this right, and if not, what syntax do I need to use? Again, I'm using ASP.NET MVC 2.0 and this is a view bound to a strongly typed model... does MVC 2 still not support the new language constructs in Visual Studio 2010? Thanks.

    Read the article

  • Why is the UpdatePanel Response size changing on alternate requests?

    - by Decker
    We are using an UpdatePanel in a small portion of a large page and have noticed a performance problem where IE7 becomes CPU bound and the control within the UpdatePanel takes a long time (upwards of 30 seconds) to render. We also noticed that Firefox does not seem to suffer from these delays. We ran both Fiddler (for IE) and Firebug (for Firefox) and noticed that the real problem lay with the amount of data being returned in UpdatePanel responses. Within the UpdatePanel control there is a table that contains a number of ListBox controls. The real problem is that EVERY OTHER TIME the response (from making ListBox selections) alternates from 30K to 430K. Firefox handles the 400+K response in a reasonable amount of time. For whatever reason, IE7 goes CPU bound while it is presumably processing this data. So, irrespective of whether we should be using an UpdatePanel at all, we'd like to figure out why every other async postback response is larger by a factor of more than 10 than the previous one. When the response is in the 30K range, IE updates the display within a second. On the alternate requests, the response time is well over 10 times longer. Any idea why this alternating behavior should be happening with an UpdatePanel?

    Read the article

  • GridView doesn't remember state between postbacks

    - by Ryan
    Hi, I have a simple ASP page with a databound grid (bound to an object source). The grid is within the page of a wizard and has a 'select' checkbox for each row. In one stage of the wizard, I bind the GridView:

        protected void Wizard1_NextButtonClick(object sender, WizardNavigationEventArgs e)
        {
            ...
            // Bind and display matches
            GridViewMatches.EnableViewState = true;
            GridViewMatches.DataSource = getEmailRecipients();
            GridViewMatches.DataBind();

    And when the finish button is clicked, I iterate through the rows and check what's selected:

        protected void Wizard1_FinishButtonClick(object sender, WizardNavigationEventArgs e)
        {
            // Set the selected values, depending on the checkboxes on the grid.
            foreach (GridViewRow gr in GridViewMatches.Rows)
            {
                Int32 personID = Convert.ToInt32(gr.Cells[0].Text);
                CheckBox selected = (CheckBox) gr.Cells[1].FindControl("CheckBoxSelectedToSend");

    But at this stage GridViewMatches.Rows.Count = 0! I don't re-bind the grid; I shouldn't need to, right? I expect the view state to maintain the state. (Also, if I do rebind the grid, my selection checkboxes will be cleared.)

    NB: This page also dynamically adds user controls in the OnInit method. I have heard that it might mess with the view state, but as far as I can tell I am doing it correctly, and the view state for those added controls seems to work (values are persisted between postbacks).

    Thanks a lot in advance for any help! Ryan

    UPDATE: Could this be to do with the fact that I am setting the data source programmatically? I wondered if the ASP engine was databinding the grid during the page lifecycle to a data source that was not yet defined. (In a test page, the GridView is 'automatically' databound.) I don't want the grid to be re-bound; I just want the values from the view state from the previous post! Also, I have this in the ASP page header: ViewStateEncryptionMode="Never" - this was to resolve an occasional 'Invalid Viewstate Validation MAC' message.

    Thanks again, Ryan

    Read the article

  • MVVM Binding Selected RadOutlookBarItem

    - by Christian
    Imagine: [RadOutlookBarItem1] [RadOutlookBarItem2] [RadOutlookBar] [CONTENTCONTROL]

    What I want to achieve is: the user selects one of the RadOutlookBarItems. The item's Tag is bound like:

        Tag="{Binding SelectedControl, Mode=TwoWay}"

    MVVM property:

        public string SelectedControl
        {
            get { return _showControl; }
            set
            {
                _showControl = value;
                OnNotifyPropertyChanged("ShowControl");
            }
        }

    The ContentControl has multiple CustomControls, and the Visibility of those is bound like:

        <UserControl.Resources>
            <Converters:BoolVisibilityConverter x:Key="BoolViz"/>
        </UserControl.Resources>
        <Grid x:Name="LayoutRoot" Background="White">
            <Views:ViewDocumentSearchControl Visibility="{Binding SelectedControl, Converter={StaticResource BoolViz}, ConverterParameter='viewDocumentSearchControl'}"/>
            <Views:ViewStartControl Visibility="{Binding SelectedControl, Converter={StaticResource BoolViz}, ConverterParameter='viewStartControl'}"/>
        </Grid>

    Converter:

        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            // here comes the logic part... should return Visibility.Collapsed / Visibility.Visible based on the 'object value' value
            System.Diagnostics.Debugger.Break();
            return Visibility.Collapsed;
        }

    Now, logically, the object value is always set to null. So here comes my question: how can I put a value into the SelectedControl variable for the RadOutlookBarItem's Tag? I mean something like:

        Tag="{Binding SelectedControl, Mode=TwoWay, VALUE='i.e.ControlName'}"

    so that I can decide, using the Convert method, whether a specific control's visibility is set to Collapsed or Visible? Help's appreciated, Christian
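    Edit: for what it's worth, if the idea is that ConverterParameter carries the target control's name, the Convert body could just compare the bound string with that parameter. This is only a sketch of that comparison (my assumption about the intent), not a fix for getting the Tag value into SelectedControl:

        public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
        {
            // visible only when the currently selected control name matches this view's parameter
            return string.Equals(value as string, parameter as string, StringComparison.OrdinalIgnoreCase)
                ? Visibility.Visible
                : Visibility.Collapsed;
        }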

    Read the article

  • Microsoft Surface - Flip & Scatter View Items

    - by Angelus
    Hi guys, I'm currently trying to get flip to work with ScatterView items, and I'm having some trouble conceptually with it, using a plugin called Thriple (http://thriple.codeplex.com/). Essentially, a two-sided Thriple control looks like this:

        <thriple:ContentControl3D
            xmlns:thriple="http://thriple.codeplex.com/"
            Background="LightBlue"
            BorderBrush="Black"
            BorderThickness="2"
            MaxWidth="200" MaxHeight="200">
            <thriple:ContentControl3D.Content>
                <Button Content="Front Side" Command="thriple:ContentControl3D.RotateCommand" Width="100" Height="100" />
            </thriple:ContentControl3D.Content>
            <thriple:ContentControl3D.BackContent>
                <Button Content="Back Side" Command="thriple:ContentControl3D.RotateCommand" Width="100" Height="100" />
            </thriple:ContentControl3D.BackContent>
        </thriple:ContentControl3D>

    What I'm struggling to grasp is whether I should be making two separate ScatterView templates to bind to the data I want, and then each one would be the "front" and "back" of a ScatterViewItem, OR should I make two separate ScatterView items which are bound to the data I want, which are then bound to the "back" and "front" of a main ScatterViewItem? If there is a better way of doing flip animations with ScatterViewItems, that'd be cool too! Thanks!

    Read the article

  • Arrrg! MovieClip object refuses to move in any rational way in AS3?

    - by Aaron H.
    I have a MovieClip object, which is exported for ActionScript (AS3) in an .swc file. When I place an instance of the clip on the stage without any modifications, it appears in the upper left corner, about half off stage (i.e. only the lower right quadrant of the object is visible). I understand that this is because the clip has a registration point which is not the upper left corner. If you call getBounds() on the MovieClip you can get the bounds of the clip (presumably from the "point" that it's aligned on), which looks something like (left: -303, top: -100, right: 303, bottom: 100). You can subtract the left and top values from the clip x and y:

        clip.x -= bounds.left;
        clip.y -= bounds.top;

    This seems to properly align the clip fully on stage, with the top left of the clip squarely in the corner of the stage. But! Following that logic doesn't seem to work when aligning it on the center of the stage!

        clip.x = (stage.stageWidth / 2);
        etc...

    This creates the crazy parallel universe where the clip is now down in the lower right corner of the stage. The only clue I have is from looking at clip.transform.matrix and clip.transform.concatenatedMatrix: matrix has a tx value of 748 (half of stage width) and a ty value of 426 (half of stage height), while concatenatedMatrix has a tx value of 1699.5 and a ty value of 967.75. That's also obviously where the MovieClip is getting positioned, but why? Where is this additional translation coming from?

    Read the article

  • How to avoid reallocation using the STL (C++)

    - by Tue Christensen
    This question is derived from the topic: http://stackoverflow.com/questions/2280655/vector-reserve-c

    I am using a data structure of the type vector<vector<vector<double> > >. It is not possible to know the size of each of these vectors (except the outer one) before items (doubles) are added. I can get an approximate size (upper bound) on the number of items in each "dimension". A solution with shared pointers might be the way to go, but I would like to try a solution where the vector<vector<vector<double> > > simply has .reserve()'ed enough space (or in some other way has allocated enough memory).

    Will A.reserve(500) (assuming 500 is the size or, alternatively, an upper bound on the size) be enough to hold "2D" vectors of large size, say [1000][10000]? The reason for my question is mainly that I cannot see any way of reasonably estimating the size of the interior of A at the time of .reserve(500). An example of my question:

        vector<vector<vector<double> > > A;
        A.reserve(500+1);
        vector<vector<double> > temp2;
        vector<double> temp1(666, 666);
        for (int i = 0; i < 500; i++)
        {
            A.push_back(temp2);
            for (int j = 0; j < 10000; j++)
            {
                A.back().push_back(temp1);
            }
        }

    Will this ensure that no reallocation is done for A? If temp2.reserve(100000) and temp1.reserve(1000) were added at creation, will this ensure that no reallocation at all occurs? In the above, please disregard the fact that memory could be wasted due to conservative .reserve() calls. Thank you all in advance!
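    Edit: one small check I sketched for myself (not an answer to the allocation strategy, just an observation): reserving capacity on the local temp2 does not guarantee that the copy pushed into A keeps that capacity, since the standard does not require copies to preserve capacity. Printing the capacities after the pushes shows what you actually got:

        #include <iostream>
        #include <vector>

        int main()
        {
            std::vector<std::vector<std::vector<double> > > A;
            A.reserve(501);                        // outer vector will not reallocate for up to 501 pushes

            std::vector<std::vector<double> > temp2;
            temp2.reserve(10000);                  // capacity of the local object...

            A.push_back(temp2);                    // ...but A stores a copy, which may start with capacity 0

            std::cout << A.capacity() << " "
                      << A.back().capacity() << std::endl;
            return 0;
        }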

    Read the article

  • Explicit behavior with checks vs. implicit behavior

    - by Silviu
    I'm not sure how to phrase the question, but I'm interested to know what you think of the following situations and which one you would prefer. We're working on a client-server application with WinForms, and we have a control that has some fields automatically calculated upon filling another field. So we have a currency field which, when filled by the user, triggers the automatic filling of another field, maybe more fields.

    When the user fills the currency field, a Currency object is retrieved from a cache based on the string entered by the user. If the entered currency is not found in the cache, a null reference is returned by the cache object. Further down, when asking the application layer to compute the other fields based on the currency, given a null currency a null specific field would be returned. This way the default, implicit behavior is to clear all fields, which is the expected behavior.

    What I would call the explicit implementation would be to verify that the Currency object is null, in which case the dependent fields are cleared explicitly. I think that the latter version is clearer, less error prone and more testable, but it implies a form of redundancy. The former version is not as clear, and it implies a certain behavior from the application layer which is not expressed in the tests. Maybe in the lower layer tests, but when the need arises to modify the lower layers, so that given a null currency something else should be returned, I don't think a test that says just that, without a motivation, is going to be an impediment to introducing a bug in upper layers. What do you guys think?

    Read the article

  • JBoss 5.1 binds to the host address when run in a vserver with -b <guest address>

    - by bart cane
    Hello, while starting JBoss 5.1.0.GA in a virtual server machine on Debian (Linux-VServer technology) I get the following error:

        ERROR [org.jboss.kernel.plugins.dependency.AbstractKernelController] (main) Error installing to Start: name=jboss.remoting:protocol=rmi,service=JMXConnectorServer state=Create mode=Manual requiredState=Installed
        java.io.IOException: Cannot bind to URL [rmi://10.1.2.11:1090/jmxconnector]: javax.naming.NoPermissionException [Root exception is java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
        java.rmi.AccessException: Registry.Registry.bind disallowed; origin /AA.BB.CC.DD is non-local host]

    where AA.BB.CC.DD is the host machine address, 10.1.2.11 is the vserver guest with JBoss, and JBoss is started with -b 10.1.2.11 (I also tried -Djboss.bind.address=10.1.2.11 - the same result). 10.1.2.11 is bound to the dummy2 interface on the host (serving the 10.1.2.1 network). The root exception is strange - why does JBoss want to bind to the host address AA.BB.CC.DD?

    There were no problems with 4.2.3.GA on the same machine, also started with -b 10.1.2.11. It starts correctly when no params are present - it binds to localhost and everything is OK, but it MUST be bound to 10.1.2.11 to be visible to Apache on another vserver guest, acting as a proxy. I thought it could be fixed by setting net.ipv4.conf.all.promote_secondaries=1 via sysctl (it was 0), but it didn't help much. Has anyone had such a problem? Regards, bart

    Read the article

  • Mixing NIO with IO

    - by Steffen Heil
    Hi, usually you have a single bound TCP port and several connections on it. At least, there are usually more connections than bound ports. My case is different: I want to bind a lot of ports and usually have no (or at least very few) connections. So I want to use NIO to accept the incoming connections. However, I need to pass the accepted connections to the existing JSch SSH library. That requires IO sockets instead of NIO sockets, and it spawns one (or two) thread(s) per connection. But that's fine for me.

    Now, I thought that the following lines would deliver the very same result:

        Socket a = serverSocketChannel.accept().socket();

        Socket b = serverSocketChannel.socket().accept();

        SocketChannel channel = serverSocketChannel.accept();
        channel.configureBlocking( true );
        Socket c = channel.socket();

        Socket d = serverSocket.accept();

    However, the getInputStream() and getOutputStream() functions of the returned sockets seem to work differently. Only if the socket was accepted using the last call can JSch work with it. In the first three cases, it fails (and I am sorry: I don't know why). So is there a way to convert such a socket? Regards, Steffen

    Read the article

  • Pythonic mapping of an array (Beginner)

    - by scott_karana
    Hey StackOverflow, I've got a question related to a beginner Python snippet I've written to introduce myself to the language. It's an admittedly trivial early effort, but I'm still wondering how I could have written it more elegantly. The program outputs NATO phonetic readable versions of an argument, such as "H2O" -> "Hotel 2 Oscar", or (lacking an argument) just outputs the whole alphabet. I mainly use it for calling in MAC addresses and IQNs, but it's useful for other phone support too. Here's the body of the relevant portion of the program:

        #!/usr/bin/env python
        import sys

        nato = {
            "a": 'Alfa',    "b": 'Bravo',    "c": 'Charlie',  "d": 'Delta',
            "e": 'Echo',    "f": 'Foxtrot',  "g": 'Golf',     "h": 'Hotel',
            "i": 'India',   "j": 'Juliet',   "k": 'Kilo',     "l": 'Lima',
            "m": 'Mike',    "n": 'November', "o": 'Oscar',    "p": 'Papa',
            "q": 'Quebec',  "r": 'Romeo',    "s": 'Sierra',   "t": 'Tango',
            "u": 'Uniform', "v": 'Victor',   "w": 'Whiskey',  "x": 'Xray',
            "y": 'Yankee',  "z": 'Zulu',
        }

        if len(sys.argv) < 2:
            for n in nato.keys():
                print nato[n]
        else:
            # if sys.argv[1] == "-i"  # TODO
            for char in sys.argv[1].lower():
                if char in nato:
                    print nato[char],
                else:
                    print char,

    As I mentioned, I just want to see suggestions for a more elegant way to code this. My first guess was to use a list comprehension along the lines of [nato[x] for x in sys.argv[1].lower() if x in nato], but that doesn't allow me to output any non-alphabetic characters. My next guess was to use map, but I couldn't format any lambdas that didn't suffer from the same corner case. Any suggestions? Maybe something with first-class functions? Messing with Array's guts? This seems like it could almost be a Code Golf question, but I feel like I'm just overthinking :)
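    Edit: for comparison, the compact variant I had in mind would need dict.get so that non-alphabetic characters fall through unchanged - an untested sketch along those lines:

        arg = sys.argv[1].lower() if len(sys.argv) > 1 else ""
        if arg:
            print " ".join(nato.get(char, char) for char in arg)
        else:
            print "\n".join(nato[key] for key in sorted(nato))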

    Read the article

  • Referencing a Tomcat JNDI datasource in persistence.xml

    - by binary_runner
    In server.xml I've defined a global resource (I'm using Tomcat 6):

        <GlobalNamingResources>
            <Resource name="jdbc/myds" auth="Container" type="javax.sql.DataSource"
                      maxActive="10" maxIdle="3" maxWait="10000"
                      username="sa" password="" driverClassName="org.h2.Driver"
                      url="jdbc:h2:~/.myds/data/db" />
        </GlobalNamingResources>

    I see in catalina.out that this is bound, so I suppose it's OK. In my web app I have the link to the datasource; I'm not sure it's OK:

        <Context>
            <ResourceLink global='jdbc/myds' name='jdbc/myds' type="javax.sql.Datasource"/>
        </Context>

    and in the application there is persistence.xml:

        <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd" version="2.0">
            <persistence-unit name="oam" transaction-type="RESOURCE_LOCAL">
                <provider>org.hibernate.ejb.HibernatePersistence</provider>
                <non-jta-data-source>jdbc/myds</non-jta-data-source>
                <!-- class definitions here, nothing else -->
                <properties>
                    <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
                </properties>
            </persistence-unit>
        </persistence>

    It should be OK, but most probably this or the ResourceLink definition is wrong, because I'm getting:

        javax.naming.NameNotFoundException: Name jdbc is not bound in this Context

    What's wrong, and why does this not work?
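    Edit: for reference, resources linked into a web application by Tomcat are exposed under the java:comp/env namespace, so a persistence unit would normally point at that JNDI path rather than the bare resource name. A sketch of what I mean, not a confirmed fix for this setup:

        <non-jta-data-source>java:comp/env/jdbc/myds</non-jta-data-source>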

    Read the article

  • Format date from SQLCE to display in DataGridView

    - by Ruben Trancoso
    Hi folks, I have a DataGridView bound to a table from an .sdf database through a BindingSource. The date column displays dates like "d/M/yyyy HH:mm:ss", e.g. "27/2/1971 00:00:00". I want to make it display just "27/02/1971" instead. I tried to apply a DataGridViewCellStyle {format=dd/MM/yyyy}, but nothing happens, even with other pre-built formats.

    On the other side, there's a Form with a MaskedTextBox with a "dd/MM/yyyy" mask on its input that is bound to the same column and uses a Parse and a Format event handler before displaying it and sending it to the db:

        Binding dataNascimentoBinding = new Binding("Text", this.source, "Nascimento", true);
        dataNascimentoBinding.Format += new ConvertEventHandler(Util.formatDateConvertEventHandler);
        dataNascimentoBinding.Parse += new ConvertEventHandler(Util.parseDateConvertEventHandler);
        this.dataNascimentoTxt.DataBindings.Add(dataNascimentoBinding);

        public static string convertDateString2DateString(string dateString, string inputFormat, string outputFormat)
        {
            DateTime date = DateTime.ParseExact(dateString, inputFormat, DateTimeFormatInfo.InvariantInfo);
            return String.Format("{0:" + outputFormat + "}", date);
        }

        public static void formatDateConvertEventHandler(object sender, ConvertEventArgs e)
        {
            if (e.DesiredType != typeof(string)) return;
            if (e.Value.GetType() != typeof(string)) return;
            String dateString = (string)e.Value;
            e.Value = convertDateString2DateString(dateString, "d/M/yyyy HH:mm:ss", "dd/MM/yyyy");
        }

        public static void parseDateConvertEventHandler(object sender, ConvertEventArgs e)
        {
            if (e.DesiredType != typeof(string)) return;
            if (e.Value.GetType() != typeof(string)) return;
            string value = (string)e.Value;
            try
            {
                e.Value = DateTime.ParseExact(value, "dd/MM/yyyy", DateTimeFormatInfo.InvariantInfo);
            }
            catch
            {
                return;
            }
        }

    As you can see from the code, it was expected that the date coming from SQL would be a DateTime value, as is its column, but my event handler is receiving a string instead. Likewise, the result of the parse should be a DateTime, but it's a string also. I'm puzzled dealing with this datetime column.
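    Edit: for context, the workaround I'm experimenting with (only a sketch - the grid and handler names are placeholders, and "Nascimento" is taken from the binding above) is to parse the string value in the grid's CellFormatting event, since DefaultCellStyle.Format only applies when the cell value really is a DateTime:

        private void dataGridView1_CellFormatting(object sender, DataGridViewCellFormattingEventArgs e)
        {
            DataGridView grid = (DataGridView)sender;
            if (grid.Columns[e.ColumnIndex].DataPropertyName != "Nascimento" || e.Value == null)
                return;

            DateTime date;
            // the bound value arrives as a string, so parse it before formatting the display text
            if (DateTime.TryParseExact(e.Value.ToString(), "d/M/yyyy HH:mm:ss",
                    DateTimeFormatInfo.InvariantInfo, DateTimeStyles.None, out date))
            {
                e.Value = date.ToString("dd/MM/yyyy");
                e.FormattingApplied = true;
            }
        }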

    Read the article

  • WPF Combobox Updates list but not the selected item

    - by JoshReedSchramm
    I have a combo box on a WPF form. On this form the user selects a record from the combo box, which populates the rest of the fields on the form so the user can update that record. When they click save, I re-retrieve the combo box source, which updates the combo box list. The problem is that the selected item keeps the original label even though the data behind it is different. When you expand the combo box, the selected item shows the right label. I am using a command binding mechanism. Here is some of the relevant code:

        private void SaveSalesRep()
        {
            BindFromView();
            if (_salesRep.Id == 0)
                SalesRepRepository.AddAndSave(_salesRep);
            else
                SalesRepRepository.DataContext.SaveChanges();
            int originalId = _salesRep.Id;
            InitSalesRepDropDown();
            SalesRepSelItem = ((List<SalesRep>) SalesRepItems.SourceCollection).Find(x => x.Id == originalId);
        }

        private void InitSalesRepDropDown()
        {
            var salesRepRepository = IoC.GetRepository<ISalesRepRepository>();
            IEnumerable<SalesRep> salesReps = salesRepRepository.GetAll();
            _salesRepItems = new CollectionView(salesReps);
            NotifyPropertyChanged("SalesRepItems");
            SalesRepSelItem = SalesRepItems.GetItemAt(0) as SalesRep;
        }

    The SelectedItem property on the combo box is bound to the SalesRepSelItem property, and the ItemsSource is bound to SalesRepItems, which is backed by _salesRepItems. The SalesRepSelItem property calls NotifyPropertyChanged("SalesRepSelItem"), which raises a PropertyChanged event. All told, the binding of new items seems to work and the list updates, but the label on the selected item doesn't. Any ideas? Thanks all.

    Read the article

  • VB.Net Custom Object Master-Detail Data Binding

    - by clawson
    Since beginning to use VB.Net some years ago I have slowly become familiar with using the data binding features of .Net; however, I often find myself bewildered by its behavior, and instead of discovering the correct way it should work I find some dirty workaround to suit my needs and continue on. Needless to say, my problems continue to arise.

    I am using custom objects as the data sources for my controls and often for entire forms. I find it frustrating to separate business logic from the graphical interface. (That could be a new question entirely.) So for a lot of objects I generate a form which has the BindingSource for the object. When I create each form with the New constructor I explicitly pass it the object to which it should be bound, and then set this passed object as the DataSource of the BindingSource.

    Now the master object (say, bound to each form) often contains a List of objects which I like to have displayed in a DataGridView. I (sometimes) create and modify these child objects in their own form (again creating a data binding the same way as the master form), but when I add them to the List in the master object, the DataGridView won't update with the new items.

    So my question really has a few layers:
      1. How can I easily/efficiently/correctly update this DataGridView with the list of detail objects when I add them to the list of the master object?
      2. Is this approach to data binding good/viable?
      3. What's the best way to separate business logic from the graphical interface?

    Thanks for the help!
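    Update: from what I've read since posting, part of the answer to the first question may be the list type itself: a plain List(Of T) raises no change notifications, whereas BindingList(Of T) does, so a grid bound through the BindingSource is told when detail items are added. A sketch of what I mean (DetailObject is a placeholder name, and I haven't verified this against my actual forms):

        Imports System.ComponentModel

        Public Class MasterObject
            ' Exposing the children as a BindingList(Of T) means a DataGridView bound
            ' to this property is notified when items are added or removed.
            Private _details As New BindingList(Of DetailObject)

            Public ReadOnly Property Details() As BindingList(Of DetailObject)
                Get
                    Return _details
                End Get
            End Property
        End Class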

    Read the article

  • custom C++ boost::lambda expression help

    - by aaa
    Hello. A little bit of background: I have some strange multiply nested loops which I converted to a flat work queue (basically collapsing single-index loops into a single multi-index loop). Right now each loop is hand coded. I am trying to generalize the approach to work with any bounds using lambda expressions. For example:

        // RANGE(i,I,N) is basically a macro to generate `int i = I; i < N; ++i`
        // for (RANGE(lb, N)) {
        //     for (RANGE(jb, N)) {
        //         for (RANGE(kb, max(lb, jb), N)) {
        //             for (RANGE(ib, jb, kb+1)) {
        // is equivalent to something like (overload , to produce range)
        flat<1, 3, 2, 4>((_2, _3+1), (max(_4,_3), N), N, N)

    The internals of flat are something like:

        template<size_t I1, size_t I2, ..., class L1_, class L2_, ...>
        boost::array<int,4> flat(L1_ L1, L2_ L2, ...) {
            // boost::array<int,4> current; class variable
            bool advance;
            L2_ l2 = L2.bind(current); // bind current value to lambda
            {
                L1_ l1 = L1.bind(current); // bind current value to innermost lambda
                l1.next();
                advance = !(l1 < l1.upper()); // some internal logic
                if (advance) {
                    l2.next();
                    current[0] = l1.lower();
                }
            }
            // ...
        }

    My question is: can you give me some ideas how to write a lambda (derived from Boost) which can be bound to an index array reference to return upper and lower bounds according to the lambda expression? Thank you very much.

    Read the article

  • Best fit curve for trend line

    - by Dave Jarvis
    Problem constraints:
      - The size of the data set, but not the data itself, is known.
      - The data set grows by one data point at a time.
      - The trend line is graphed one data point at a time (using a spline/Bezier curve).

    Graphs. The collage below shows data sets with reasonably accurate trend lines. The graphs are:
      - Upper-left: by hour, with ~24 data points.
      - Upper-right: by day for one year, with ~365 data points.
      - Lower-left: by week for one year, with ~52 data points.
      - Lower-right: by month for one year, with ~12 data points.

    User inputs. The user can select: the type of time series (hourly, daily, monthly, quarterly, annual); and the start and end dates for the time series. For example, the user could select a daily report for 30 days in June.

    Trend weight. To calculate the window size (i.e., the number of data points to average when calculating the trend line), the following expression is used:

        data points / trend weight

    where data points is derived from the user inputs and trend weight is 6.4. For example, a daily series over one year gives a window of roughly 365 / 6.4 ≈ 57 points. Even though a trend weight of 6.4 produces good fits, it is rather arbitrary and might not be appropriate for different user inputs.

    Question: how should trend weight be calculated given the constraints of this problem?

    Read the article

  • Where is my Drupal View pager?

    - by anotherthink
    Hi, I have a Drupal 6 site where I've created a view that shows a list of nodes. Nothing complicated -- except that when I choose "use pager" -- "yes" (and choose the "full pager" option), the pager doesn't show up on the page. The first page of nodes shows up, but there's no way to get to other pages. Through googling, I saw that some people had an issue with the "Pager Element" item, so I changed that from 0 to 1 -- no luck. This shouldn't be very complicated, but I've been at it for a while! Help!?

    ETA: I've tracked it down to the following lines in /modules/views/theme/theme.inc:

        $pager_theme = views_theme_functions($pager_type, $view, $view->display_handler->display);
        $vars['pager'] = theme($pager_theme, $exposed_input, $view->pager['items_per_page'], $view->pager['element']);

    The first line returns an array; the second line returns nothing. I suspect now that this is a theming problem with the custom theme I'm using, which may not have been fully and correctly updated for Drupal 6 -- like, maybe I'm missing a pager template somehow? -- however, I'm quite new to Drupal and don't really understand how to further track down and fix the issue. Any advice would be much appreciated!

    ETA yet again: The pager also doesn't show up when using Garland, so it's not a theme issue after all. ALSO: I have a copy of this site set up on a development server as well, and that copy has working pagination! I've checked what I thought might be different -- files in the theme, what modules are enabled -- and it seems like pretty much everything is the same. The one thing that I know is different, however, is that the production server has a lower version of MySQL (lower than recommended for Drupal 6 -- we're waiting on the hosting company being able to change this later). Would it make sense that the old version of MySQL is unable to do pagination correctly in Drupal 6? If so, does anyone know a workaround I can use until we are able to update MySQL?

    Read the article
