Search Results

Search found 6342 results on 254 pages for 'behavior'.


  • Initially Unselected DropDownList

    - by Ricardo Peres
    One of the most annoying (IMHO) things about the DropDownList is its inability to show an unselected value at load time, which is something that HTML does permit. I decided to change the DropDownList to add this behavior. All that was needed was some JavaScript and reflection. See the result for yourself:

    public class CustomDropDownList : DropDownList
    {
        public CustomDropDownList()
        {
            this.InitiallyUnselected = true;
        }

        [DefaultValue(true)]
        public Boolean InitiallyUnselected { get; set; }

        protected override void OnInit(EventArgs e)
        {
            this.Page.RegisterRequiresControlState(this);
            this.Page.PreRenderComplete += this.OnPreRenderComplete;
            base.OnInit(e);
        }

        protected virtual void OnPreRenderComplete(Object sender, EventArgs args)
        {
            FieldInfo cachedSelectedValue = typeof(ListControl).GetField("cachedSelectedValue", BindingFlags.NonPublic | BindingFlags.Instance);

            if (String.IsNullOrEmpty(cachedSelectedValue.GetValue(this) as String) == true)
            {
                if (this.InitiallyUnselected == true)
                {
                    if ((ScriptManager.GetCurrent(this.Page) != null) && (ScriptManager.GetCurrent(this.Page).IsInAsyncPostBack == true))
                    {
                        ScriptManager.RegisterStartupScript(this, this.GetType(), "unselect" + this.ClientID, "$get('" + this.ClientID + "').selectedIndex = -1;", true);
                    }
                    else
                    {
                        this.Page.ClientScript.RegisterStartupScript(this.GetType(), "unselect" + this.ClientID, "$get('" + this.ClientID + "').selectedIndex = -1;", true);
                    }
                }
            }
        }
    }
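    As a rough usage sketch (not from the original post - the page, form and item names are hypothetical), the control can be added from code-behind like any other DropDownList; note that the $get() call emitted above assumes the ASP.NET AJAX client script is available, e.g. via a ScriptManager on the page:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        // Hypothetical page code-behind; DemoPage, form1 and the list items are illustrative only.
        public partial class DemoPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (IsPostBack) return;

                // InitiallyUnselected defaults to true, so the list renders with no selection
                var countries = new CustomDropDownList { ID = "countries" };
                countries.Items.Add(new ListItem("Portugal", "PT"));
                countries.Items.Add(new ListItem("Spain", "ES"));

                form1.Controls.Add(countries); // form1 is the page's form with runat="server"
            }
        }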

    Read the article

  • Storing SCA Metadata in the Oracle Metadata Services Repository by Nicolás Fonnegra Martinez and Markus Lohn

    - by JuergenKress
    This article covers the advantages of using the Oracle Metadata Services Repository as central storage for metadata. SCA has been available since the release of the Oracle SOA Suite 11g. This technology combines and orchestrates several SOA components inside an SCA composite, making design, development, deployment, and maintenance easier. SCA development is metadata-driven, meaning that metadata artifacts, such as Web Services Description Language (WSDL), XML Schema Definition (XSD), XML, and others, define the composite's behavior. With the increased number of composites and the dependencies among them, it became necessary to manage all the metadata in an adequate way. This article will address the advantages of using the Oracle Metadata Services (MDS) repository as central storage for the metadata. The MDS repository is a central part of the Oracle Fusion Middleware landscape, managing the metadata for several technologies, such as Oracle Application Development Framework (Oracle ADF), Oracle WebCenter, and the Oracle SOA Suite. This article is divided into three parts. The first part provides an overview of SCA and MDS. The second part describes some MDS tasks that help in the management of the SCA metadata files inside the repository. The third part shows how to develop SCA composites in combination with an MDS repository. Read the full article here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases. There is a reason it’s called Default The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms; this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, which ever comes first. Aside: What’s up with the 1.3 MB, I thought you said the buffer was 4 MB?The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later. Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (eg. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved since most targets support bulk processing of the events so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full. I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases… What event just fired?! How about now?! Now?! If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on You Tube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule. 
This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant. Avoid forgotten events by increasing your memory Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, events that can’t be published to a buffer because the buffer is full are dropped. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field. Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment. If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected. Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section. Some events are too big to fail A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.) 
Partitioning: moving your events to a sub-division Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science… You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used: None: 3 buffers (fixed); Node: 3 * number_of_nodes; CPU: 2.5 * number_of_cpus. Here are some examples of what this means for different Node/CPU counts: 2 CPUs, 1 Node – None: 3 buffers, Node: 3 buffers, CPU: 5 buffers; 6 CPUs, 2 Nodes – None: 3 buffers, Node: 6 buffers, CPU: 15 buffers; 40 CPUs, 5 Nodes – None: 3 buffers, Node: 15 buffers, CPU: 100 buffers. Aside: Buffer size on multi-processor computers: As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration. The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things: Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory. Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session. Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff. Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (eg. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace. 
You will likely only care about the impact if you are trying to set up a long-running event session that will be part of your everyday workload – sessions used for short-term troubleshooting will likely fall into the “reasonably expected impact” category. Hey buddy – I think you forgot something OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike
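As a hedged illustration of the options discussed above (this sketch is not from the original post - the session name, event, target path and connection string are all invented), the DDL can simply be executed from C# via ADO.NET, overriding the default buffer size and dispatch latency:

    using System.Data.SqlClient;

    // Sketch only: creates an event session with MAX_MEMORY and MAX_DISPATCH_LATENCY overridden.
    class CreateXeSession
    {
        static void Main()
        {
            const string ddl = @"
    CREATE EVENT SESSION [LowLatencyDemo] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.asynchronous_file_target (SET filename = N'C:\temp\LowLatencyDemo.xel')
    WITH (MAX_MEMORY = 8 MB,                              -- larger total event buffer (default is 4 MB)
          MAX_DISPATCH_LATENCY = 5 SECONDS,               -- flush buffers at least every 5 seconds (default is 30)
          EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, -- the recommended default
          STARTUP_STATE = ON);                            -- start the session automatically with the server
    ";

            using (var conn = new SqlConnection("Server=.;Integrated Security=true"))
            using (var cmd = new SqlCommand(ddl, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }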

    Read the article

  • DDD Melbourne - lessons learnt

    - by Michael Freidgeim
    I've attended DDD Melbourne and want to list the interesting points that I've learnt and want to follow. To read more:
    * Moles - Mocking Isolation framework for .NET. Documentation is here. (See also Mocking frameworks comparison, created October 4, 2009)
    * WebFormsMVP
    * PluralSight http://www.pluralsight-training.net/offers/default.aspx?cc=trial
    * ELMAH: Error Logging Modules and Handlers
    * Rhino.Mocks
    * VS UI Test Recorder - see the post Visual Studio 2010 Coded UI Test User Guide. Note that the Microsoft Test Manager (MTM) tool is a separate application that can be started from the Program Files/VS 2010 menu. It is not a menu inside Visual Studio.
    * CodeContract - seems great in Debug. It would be good if runtime configuration were possible in production, e.g. the ability to log instead of throwing an exception. The current recommendation for customizing Debug.Assert is not trivial: the programmer is free to use the customization provided by Debug.Assert, using assert listeners to obtain whatever runtime behavior they desire (e.g., ignoring the error, logging it, or throwing an exception).

        // Clears the existing list of assert listeners (the default pop-up box)
        System.Diagnostics.Debug.Listeners.Clear();
        // Install your own listener
        System.Diagnostics.Debug.Listeners.Add(MyTraceListener);

    Note that you can't catch a specific ContractException, but you can catch a generic Exception (see How come you cannot catch Code Contract exceptions?).
    Books recommended: "Working Effectively with Legacy Code" by Michael Feathers (corresponding article); Martin Fowler, Refactoring: Improving the Design of Existing Code, slides: http://jaoo.dk/jaoo1999/schedule/MartinFowlerRefractoring.pdf
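    As a hedged sketch of the Debug.Assert customization mentioned above (the listener class and log path are my own illustration, not from the post or the Code Contracts documentation), a custom TraceListener can log failures instead of showing the assert dialog:

        using System;
        using System.Diagnostics;
        using System.IO;

        // Illustrative listener that appends assertion failures to a log file.
        public class LoggingTraceListener : TraceListener
        {
            private readonly string _logPath;

            public LoggingTraceListener(string logPath) { _logPath = logPath; }

            // Debug.Assert failures end up here; per the recommendation quoted above,
            // contract failures can be routed the same way.
            public override void Fail(string message, string detailMessage)
            {
                File.AppendAllText(_logPath, "Assert/contract failure: " + message + " " + detailMessage + Environment.NewLine);
            }

            public override void Write(string message) { File.AppendAllText(_logPath, message); }
            public override void WriteLine(string message) { File.AppendAllText(_logPath, message + Environment.NewLine); }
        }

        // Wiring it up, as in the snippet above:
        // Debug.Listeners.Clear();
        // Debug.Listeners.Add(new LoggingTraceListener(@"C:\logs\contracts.log"));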

    Read the article

  • Formatting Columns in Excel created by af:exportCollectionActionListener

    - by Duncan Mills
    The af:exportCollectionActionListener behavior in ADF Faces Rich client provides a very simple way of quickly dumping out the contents or selected rows in a table or treeTable to Excel. However, that simplicity comes at a price, as it is pretty much left up to Excel how to format the data. A common use case where you have a problem is that of ID columns, which are often long numerics. You probably want to represent this data as a string; Excel, however, will probably have other ideas and render it as an exponent - not what you intended. In earlier releases of the framework you could sort of work around this by taking advantage of a bug which would allow you to surround the outputText in question with invisible outputText components which provided formatting hints to Excel. Something like this:

    <af:column headertext="Some wide label">
      <af:panelgrouplayout layout="horizontal">
        <af:outputtext value="=TEXT(" visible="false">
        <af:outputtext value="#{row.bigNumberValue}" rendered="true"/>
        <af:outputtext value=",0)" visible="false">
      </af:panelgrouplayout>
    </af:column>

    However, this bug was fixed and so it can no longer be used as a trick; the export now ignores invisible columns. So, if you really need control over the formatting there are several alternatives. First, the more powerful ADF Desktop Integration (ADFdi) package, which allows you to build fully transactional spreadsheets that "pull" the data and can update it. This gives you all the control you might need over formatting, but it does need specific Excel add-ins on the client to work. For more information about ADFdi have a look at this tutorial on OTN. Or you can of course look at BI Publisher or Apache POI if you're happy with output-only spreadsheets.

    Read the article

  • The Debut of Oracle Database Firewall at RSA 2011

    - by Troy Kitch
    We're very proud of the coverage and headlines Oracle Database Firewall made this past week during RSA Conference 2011 in San Francisco. In case you missed our previous post, we announced the availability of this latest addition to the Oracle Defense-in-Depth database security solutions. The announcement was picked up by many publications, including eWeek, CRN, InformationWeek and more. Here is just some of the press on this very important security solution: "It's rare to find a new product category these days, but I think a new product from Oracle fills the bill. In the crowded enterprise security field, that's saying something." Enterprise System Journal: A New Approach to Database Security By James E. Powell "Databases and the content they store are among the most valuable IT assets - and the most targeted by hackers. In an effort to help secure databases, Oracle today is launching the new Oracle Database Firewall as an approach to defend databases against SQL injection and other database attacks." Database Journal: Oracle Debuts Database Firewall (also appeared in InternetNews.com) By Sean Michael Kerner "Oracle Database Firewall understands SQL-statement formats, and can be configured to blacklist and whitelist traffic based on source. When it detects suspicious statements within SQL traffic -- ones that might indicate SQL injection attacks, for example -- it can replace them with neutral statements that will keep the session running without allowing potentially harmful traffic through." Network World: Oracle Database Firewall defuses SQL injection attacks By Tim Green "The firewall uses "SQL grammar analysis" to prevent SQL injection attacks and other attempts to grab information. The Oracle Database Firewall features white and black lists policies, exceptions and rules that mark the time of day, IP address, application and user." ZDNet: RSA Roundup: Oracle Database Firewall By Larry Dignan "The database giant announced Oracle Database Firewall on Feb. 14 at the RSA Conference in San Francisco. The firewall application establishes a "defensive perimeter" around databases by monitoring and enforcing normal application behavior in real-time, the company said." eWEEK: Oracle Database Firewall Delivers Vendor-Agnostic Security By Fahmida Y. Rashid

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature to address this for dimensional objects entitled Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example will be provided to show you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need. Then we take some time to examine all the objects. After that, we prepare the source data, deploy the target table and dimension/cube loading map. Finally, we run the loading maps, and check the data in target dimension/cube tables. OK, let’s start… 1. Import MDL file and examine sample project First, download zip file from here, which includes a MDL file and three source data files. Then we open OWB design center, import orphan_management.mdl by using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in BI_DEMO project as below: Mapping LOAD_CHANNELS_OM: The mapping for dimension loading. Mapping LOAD_SALES_OM: The mapping for cube loading. Dimension CHANNELS_OM: The dimension that contains channels data. Cube SALES_OM: The cube that contains sales data. Table CHANNELS_OM: The star implementation table of dimension CHANNELS_OM. Table SALES_OM: The star implementation table of cube SALES_OM. Table SRC_CHANNELS: The source table of channels data, that will be loaded into dimension CHANNELS_OM. Table SRC_ORDERS and SRC_ORDER_ITEMS: The source tables of sales data that will be loaded into cube SALES_OM. Sequence CLASS_OM_DIM_SEQ: The sequence used for loading dimension CHANNELS_OM. Dimension CHANNELS_OM This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots: The orphan management policy options that you can set for loading are: Reject Orphan: The record is not inserted. Default Parent: You can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates the default parent record. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates this record. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. While removing data from a dimension, you can select one of the following orphan management policies: Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records. No Maintenance: This is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1) Cube SALES_OM This cube is references to dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST. 
The orphan management policy settings are shown in the following screenshot: The orphan management policy options that you can set for loading are: No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows. Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row. Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record. (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG) Mapping LOAD_CHANNELS_OM This mapping loads source data from table SRC_CHANNELS to dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used for generating a constant value for the top level in the dimension. The CLASS_FILTER operator is used to filter out the “invalid” class name, so that we can see what will happen when those channel records with an “invalid” parent are loaded into the dimension. Some properties of the dimension operator in this mapping are important to orphan management. See the screenshot below: Create Default Level Records: If YES, then default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is “Default Parent” or “Default Dimension Record”. This property is set to NO by default, so the user may need to set this to YES manually. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the dimension editor. The values are set to the same as the dimension values when the user drops the dimension into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into the error table when loading the dimension. REMOVE Orphan Policy: This property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled. Mapping LOAD_SALES_OM This mapping loads source data from tables SRC_ORDERS and SRC_ORDER_ITEMS to cube SALES_OM. This mapping seems a little bit complicated, but the operators in the red rectangle are used to filter out and generate the records with “invalid” or “null” dimension keys. Some properties of the cube operator in a mapping are important to orphan management. See the screenshot below: Enable Source Aggregation: Should be checked in this example. If the default dimension record orphan policy is set for the cube operator, then it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an “unstable rowset” execution error in the database, since the dimension refs are used as update match attributes for updating the fact table. LOAD policy for INVALID keys/ LOAD policy for NULL keys: These two properties have the same meaning as in the cube editor. The values are set to the same as in the cube editor when the user drops the cube into the mapping. The user does not need to modify these properties. Record Error Rows: If YES, error rows will be inserted into the error table when loading the cube. 2. Deploy objects and mappings We can now deploy the objects. First, make sure location SALES_WH_LOCAL has been correctly configured. 
Then open Control Center Manager by using the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL, click SALES_WH node on the project tree. We can see the following objects: Deploy all the objects in the following order: Sequence CLASS_OM_DIM_SEQ Table CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS Dimension CHANNELS_OM Cube SALES_OM Mapping LOAD_CHANNELS_OM, LOAD_SALES_OM Note that we deployed source tables as well. Normally, we import source table from database instead of deploying them to target schema. However, in this example, we designed the source tables in OWB and deployed them to database for the purpose of this demonstration. 3. Prepare and examine source data Before running the mappings, we need to populate and examine the source data first. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as target user. Then we check the data in these three tables. Table SRC_CHANNELS SQL> select rownum, id, class, name from src_channels; Records 1~5 are correct; they should be loaded into dimension without error. Records 6,7 and 8 have null parents; they should be loaded into dimension with a default parent value, and should be inserted into error table at the same time. Records 9, 10 and 11 have “invalid” parents; they should be rejected by dimension, and inserted into error table. Table SRC_ORDERS and SRC_ORDER_ITEMS SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id; Record 178 has null dimension reference; it should be loaded into cube with a default dimension reference, and should be inserted into error table at the same time. Record 179 has “invalid” dimension reference; it should be rejected by cube, and inserted into error table. Other records should be aggregated and loaded into cube correctly. 4. Run the mappings and examine the target data In the Control Center Manager, expand BI_DEMO-> SALES_WH_LOCAL-> SALES_WH-> Mappings, right click on LOAD_CHANNELS_OM node, click Start. Use the same way to run mapping LOAD_SALES_OM. When they successfully finished, we can check the data in target tables. Table CHANNELS_OM SQL> select rownum, total_id, total_name, total_source_id, class_id,class_name, class_source_id, channel_id, channel_name,channel_source_id from channels_om order by abs(dimension_key); Records 1,2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally have null parents. We see their parents name (class_name) is set to DEF_CLASS_NAME. Those records whose CHANNEL_NAME are Special_4, Special_5 and Special_6 are not loaded to this table because of the invalid parent. Error Table CHANNELS_OM_ERR SQL> select rownum, class_source_id, channel_id, channel_name,channel_source_id, err$$$_error_reason from channels_om_err order by channel_name; We can see all the record with null parent or invalid parent are inserted into this error table. Error reason is “Default parent used for record” for the first three records, and “No parent found for record” for the last three. Table SALES_OM SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id; We can see the order record with null channel_name has been loaded into target table with a default channel_name. The one with “invalid” channel_name are not loaded. 
Error Table SALES_OM_ERR SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+); We can see the order records with null or invalid channel_name are inserted into error table. If the dimension reference column is null, the error reason is “Default dimension record used for fact”. If it is invalid, the error reason is “Dimension record not found for fact”. Summary In summary, this article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • CodePlex Daily Summary for Wednesday, May 05, 2010

    CodePlex Daily Summary for Wednesday, May 05, 2010New Projects2010微软精英大挑战Heritage of Dragon项目: 我们来自上海市同济大学,兴趣相投,集聚于此共同构建一个开放的网络平台。致力于运用构建在云端基于地图的服务,使用文字、图片、视频、互动动画等形式来展示全国各地的传统手工艺。并且充分发挥网络的优势,通过开放协作的维基平台人人都可以参与到内容的添加修改与完善中来。目的在于记录、展示、挖掘、传承中国古...AutoArchive: Auto archive your "my documents" to a remote machine. I'm writing this so my wife can put things in "my documents" and it'll automaticly archive i...BigDoor .NET Client: A .NET client for the BigDoor Media API. The API enables secured virtual transactions with support for any number of currencies, transactions, awar...bubujie: Dreamweaver LibraryGeckoGit: GeckoGit is a combination of TortoiseSVN and AnkhSVN, but for Git repositories, and built on the GitSharp library.Global: global, config, mail, http, rest, xml, serialization, helper, path, ioIndustrial Dashboard Connected Grid webpart: This Sharepoint 2007/10 webpart provides a simple way to display grid based reports populated with data that comes from a SQL Server stored procedu...IpControls: "IpControls" contains IPv4 and IPv6 text boxes, both as Windows Forms and WPF version. The IPv6 control automatically detects the older hybrid for...LiteME: LiteME is short for LiteMapleStoryEmulator... it is v75, open-source, and still going through it's alpha stages. It is still in development!Meditel PHP Class: Une classe PHP qui vous permet de d'envoyer des SMS vers tous les numeros Meditel en utilisant leservice des SMS gratuits depuis le site Meditel.maMoneySafe: Help people.Mouse Zoom - Visual Studio Extension: Mouse Zoom is a Visual Studio 2010 extension that will cause the mouse zoom functionality to zoom at the mouse's cursor instead of at the top of th...Multi-Language Words Memorizer: This .net application is designed for learning words and help foreign language learners by lots of automatic features. After you select a list of ...Navigation for ASP.NET Web Forms: Navigation for ASP.NET Web Forms manages movement and data passing between aspx Pages in a unit testable manner. There is no Client-side logic, so ...NazTek.Extension.Clr4: CLR 4.0 extensions and utility APIOpalis Community Releases: Sample workflows, objects, code and other items related to System Center's Opalis Integration Server, published by the Opalis team.Power Video Player: Power Video Player is a slim feature-rich video/dvd player that meets everyday needs in video playback on PC with a bunch of advanced features on b...SchemeEditor: <WPF> <.NET> <Editor> <Silverlight> <Scheme> <Graphics> <simulink> <schematic>StyleCop+: StyleCop+ is a plug-in that extends original StyleCop features.timemanager2010: Just another work time managerTweetTunes: Updates Twitter with current song playing in iTunes - if your Twitter account is linked to Facebook - it will update that too The twittervb2 down...WCF Discovery Library: WCF Discovery Library is a small collection of utilities that makes it easy to add WCF 4.0 Discovery features into your projects.New ReleasesAjaxControlToolkit additional extenders: ControlToolkitExtended: this build contains web example with BreadCrumbsAnyCAD: AnyCAD Free Beta1: AnyCAD Free Beta1Baccarat: Single player practice baccarat: This is a simple baccarat game for Windows Mobile. It is single player and is only a practice version, which will help users familiarize themselve...BigDoor .NET Client: BigDoor .NET 2.0 Client (Alpha): Our first iteration of the .NET client. 
Please fork and or ask to be added if you want to make any contributions.CBM-Command: 2010-05-04: Release NotesNew Features Panel navigation now complete. Scroll up and down through directories using the up and down cursor keys. Switch between...Directory Linker: Directory Linker 2.1: This release introduces XP support, more information about all features can be found at http://www.humblecoder.co.uk/?p=141Extend SmallBasic: Teaching Extensions v.015: added high low quizGoogle AJAX Search Services for jQuery: jquery.gss-0.1.3.js: First official release - use at your own discretion. Thanks, AndrewIndustrial Dashboard Connected Grid webpart: Filtered Industrial Grid: Filtered Industrial Grid web part for SharePoint 2007/2010, First Release.jQuery Library for SharePoint Web Services: SPServices 0.5.5: IMPORTANT NOTE: This release is in an alpha state. You should only download it if you know what you are getting and are interested in testing it f...Meditel PHP Class: Meditel PHP Class: Zipped File : Example : exemplemeditel.php PHP Class : meditel.class.phpMulti-Language Words Memorizer: Memorizer 1.0: First release.mwNSPECT: mwNSPECT Plugin DLL: mwNSPECT Mapwindow plugin dll. Place in your MapWindow or BASINS plugins directory. Presently only for testing form functionality (not including...mwNSPECT: mwNSPECT Simple Installer: Simplistic mwNSPECT Mapwindow plugin installer using Inno setup. Installs all the files you'll need for NSPECT into the C:\NSPECT folder and insta...MyWSAT - ASP.NET Membership Administration Tool: MyWSAT v3.5.3: MyWSAT 3.5.3 Update Notes - May 4th 2010 1.) Added the user search box and a-z navigation menu to all relevant user gridviews. 2.) Added a membersh...Object/Relational Mapper & Code Generator in Net 2.0 for Relational & XML Schema: 2.7: Upgraded UI-generation templates for special case of associative tables (2-column primary keys). Minor bugfix with template-editor.Open NFSe: Open NFSe 2.0: Versao para Belo Horizonte utilizando Windows Services.Power Video Player: PVP 1.1.3776: v1.1.3776 This is mainly a rebuild of version 1.1 under Ms-PL license and is the 1st version available at CodePlex.PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT-3.1: The following error has been corrected: PCG ERROR: srcproj -- 3933 PCG ERROR: srcproj -- 2943 PCG ERROR: devproj -- 1474 PCG ERROR: mainprj -- 128...Rehost Image: 1.3.9: Fixed locations saving for mac and linux platforms.Robot Shootans: Robot Shootans 0.5.1 (Windows): This is the first public release of this game. Instructions on how to play are included in the game itself Known issues: Changing control style wh...SchemeEditor: SchemeEditor Beta: First release. Wait for documentation & update for some new functionSharePoint Rsync List: SharePoint Rsync 0.9.0.0: Initial release of sprsync. Comments, questions, feedback, and code enhancements are welcome!Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+01: Sw. Is Hw. Lib. 3.0.0.x+01 UNSUPPORTED, UNTESTED ALPHA RELEASE Code may disappear. This is just a preview of code that was in progress. Code is s...Software Localization Tool: SharpSLT 1.0.1: Minor release: bug fixes slight changes in the UIStyleCop+: StyleCop+ 0.6: Several important improvements made for Advanced Naming Rules: - Added new entities for fields and constants - Added new entities for methods (incl...turing machine simulator: First version of turing machine: Overview: First version of turing simulator with example script (transaction function). 
Files: SimulatorGui.exe - main GUI of simulator TuringMach...VCC: Latest build, v2.1.30504.0: Automatic drop of latest buildVocabulary Training Center: Basic Edition 1.1: A release with medium large changes: New functionality: Multiple-choice questions added Grammatical questions added Evaluation changed accordin...Web Service Software Factory: Web Service Software Factory 2010 RC: To use the Web Service Software Factory 2010, you need the following software installed on your computer: • Microsoft Visual Studio 2010 (Ultima...Web Service Software Factory: WSSF2010 Guide: This is the help and guidance for Web Service Software Factory 2010Windows Phone 7 Panorama control: panorama control v0.6 + samples: IMPORTANT NOTE: Please read the following bug + suggested workaround. I'll fix this in a new release shortly. Panorama Control source code + sampl...WPF Behavior Library: WPF Behavior Library 0.2 Release: Drag & Drop Took away the ItemType and DataTemplate requirements Added functions for inheritors to be able to provide custom logic to handle movi...Most Popular ProjectsRawrWBFS ManagerAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight Toolkitpatterns & practices – Enterprise LibraryWindows Presentation Foundation (WPF)iTuner - The iTunes CompanionDotNetNuke® Community EditionASP.NETMost Active Projectspatterns & practices – Enterprise LibraryAJAX Control FrameworkHydroServer - CUAHSI Hydrologic Information System ServerIonics Isapi Rewrite Filterpatterns & practices: Azure Security GuidanceRawrBlogEngine.NETTinyProjectNB_Store - Free DotNetNuke Ecommerce Catalog ModuleAll-In-One Code Framework

    Read the article

  • How do I repeat a texture with GLKit?

    - by Synopfab
    I am using GLKit in order to show textures in my project. The code is like this:

    -(void)setTextureImage:(UIImage *)image {
        NSError *error;
        texture = [GLKTextureLoader textureWithCGImage:image.CGImage options:nil error:&error];
        if (error) {
            NSLog(@"Error loading texture from image: %@", error);
        }
    }

    effect.texture2d0.envMode = GLKTextureEnvModeReplace;
    effect.texture2d0.target = GLKTextureTarget2D;
    effect.texture2d0.name = texture.name;
    glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
    glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, self.textureCoordinates);

    Now I want to repeat this texture on a rectangle. Is there any way to use GLKit for this behavior? I've tried to use OpenGL functions in addition to the GLKit ones, but it raises errors:

    glEnable(GL_TEXTURE_2D);
    glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
    glBindTexture( GL_TEXTURE_2D, texture.name );

    2011-11-09 20:10:28.614 **[16309:207] GL ERROR: 0x0500
    2011-11-09 20:10:30.840 **[16309:207] Error loading texture from image: Error Domain=GLKTextureLoaderErrorDomain Code=8 "The operation couldn’t be completed. (GLKTextureLoaderErrorDomain error 8.)" UserInfo=0x68545c0 {GLKTextureLoaderGLErrorKey=1280, GLKTextureLoaderErrorKey=OpenGL error}

    Read the article

  • Workshop in Holland - and open questions

    - by Mike Dietrich
    Thanks to everybody who visited the Upgrade Workshop in Maarsen yesterday. I had lots of fun - and I hope you enjoyed it, too :-) The slides, as always, can be downloaded from: http://apex.oracle.com/folien Use the Schluesselwort/Keyword: upgrade112 And thanks to all those of you sending feedback regarding "traget/destination" (will change it in the slides) and other topics such as Enterprise Manager Grid Control 11g. Enterprise Manager 11g will be launched on 22-APR-2010 - and you can join the event live if you happen to be in New York: http://www.oracle.com/enterprisemanager11g/index.html Thanks for this hint!!! Regarding the open questions: Will there be PSUs available for Intel Solaris? PSUs will be made available on nearly all platforms including Intel Solaris. Please see Note:882604.1 for platform information and Note:854428.1 for direct links to the PSU download location. Is COMMIT_WRITE=NOWAIT the default in patch set 10.2.0.4? I tried to verify this and could find neither a bug entry nor any documentation saying that 10.2.0.4 has a different default setting (the default behaviour is WAIT). I checked it in my 10.2.0.4 instances as well, and there it is set to WAIT. If this parameter is not explicitly specified, then database commit behavior defaults to writing commit records to disk before control is returned to the client. If only IMMEDIATE or BATCH is specified, but not WAIT or NOWAIT, then WAIT mode is assumed. If only WAIT or NOWAIT is specified, but not IMMEDIATE or BATCH, then IMMEDIATE mode is assumed. Please send me feedback if you have different experiences. Service Request escalation by telephone? Thanks for this update - I didn't realize that ;-) Now I know why it didn't help last month when I updated an SR ... here's the official information on that: Note:199389.1 - the note has been updated on 24-FEB-2010. See the telephone number for Oracle Support to request an escalation here: http://www.oracle.com/support/contact.html

    Read the article

  • SonicFileFinder 2.2 Released

    - by WeigeltRo
    My colleague Jens Schaller has released a new version of his free Visual Studio add-in SonicFileFinder, adding support for Visual Studio 2010. Announcement on his blog Download on the SonicFileFinder website As far as I can tell, there are no new features compared to version 2.1, but good to know that this add-in is now available for VS2010. For those who are wondering what SonicFileFinder is about: SonicFileFinder implements a command for searching and opening files in a Visual Studio solution, which is very nice especially in large projects. This may sound familiar to users of JetBrains’ ReSharper, which has a “Go To File” feature. But in my opinion SonicFileFinder does a better job overall: While ReSharper (4.5) does a prefix search by default, SonicFileFinder searches for any occurrence of the entered text inside a file name. In a long list of file names (e.g. all starting with “Page…”), this allows me to focus on the part that makes the difference (e.g. “Render” in PageRenderBuffer.cs). In ReSharper I would have to type “*Render*”, which can be shortened to “*Render” (which isn’t even correct). Note that SonicFileFinder does support wildcards, of course.   SonicFileFinder remembers the last input (and thus the last result list) without a noticeable delay of the popup. If I want to search for something different, I can type right away, so this behavior doesn’t slow me down. But where it really shines is when I’m not even sure what file exactly I was looking for – I open one file, notice that it’s not the one I want, re-open the pop-up dialog and now I can choose another one from the result list without re-entering the search text. SonicFileFinder allows me to open multiple files at once (nice for service interfaces and implementations). SonicFileFinder lets me open either a Windows Explorer or Command Line window in the directory containing a specific file.

    Read the article

  • New Bundling and Minification Support (ASP.NET 4.5 Series)

    - by ScottGu
    This is the sixth in a series of blog posts I'm doing on ASP.NET 4.5. The next release of .NET and Visual Studio includes a ton of great new features and capabilities. With ASP.NET 4.5 you'll see a bunch of really nice improvements with both Web Forms and MVC - as well as in the core ASP.NET base foundation that both are built upon. Today’s post covers some of the work we are doing to add built-in support for bundling and minification into ASP.NET - which makes it easy to improve the performance of applications. This feature can be used by all ASP.NET applications, including both ASP.NET MVC and ASP.NET Web Forms solutions. Basics of Bundling and Minification As more and more people use mobile devices to surf the web, it is becoming increasingly important that the websites and apps we build perform well with them. We’ve all tried loading sites on our smartphones – only to eventually give up in frustration as they load slowly over a slow cellular network. If your site/app loads slowly like that, you are likely losing potential customers because of bad performance. Even with powerful desktop machines, the load time of your site and perceived performance can make an enormous difference in customer perception. Most websites today are made up of multiple JavaScript and CSS files to separate the concerns and keep the code base tight. While this is a good practice from a coding point of view, it often has some unfortunate consequences for the overall performance of the website. Multiple JavaScript and CSS files require multiple HTTP requests from a browser – which in turn can slow down the page load time. Simple Example Below I’ve opened a local website in IE9 and recorded the network traffic using IE’s built-in F12 developer tools. As shown below, the website consists of 5 CSS and 4 JavaScript files which the browser has to download. Each file is currently requested separately by the browser and returned by the server, and the process can take a significant amount of time proportional to the number of files in question. Bundling ASP.NET is adding a feature that makes it easy to “bundle” or “combine” multiple CSS and JavaScript files into fewer HTTP requests. This causes the browser to request a lot fewer files and in turn reduces the time it takes to fetch them. Below is an updated version of the above sample that takes advantage of this new bundling functionality (making only one request for the JavaScript and one request for the CSS): The browser now has to send fewer requests to the server. The content of the individual files has been bundled/combined into the same response, but the content of the files remains the same - so the overall file size is exactly the same as before the bundling. But notice how even on a local dev machine (where the network latency between the browser and server is minimal), the act of bundling the CSS and JavaScript files together still manages to reduce the overall page load time by almost 20%. Over a slow network the performance improvement would be even better. Minification The next release of ASP.NET is also adding a new feature that makes it easy to reduce or “minify” the download size of the content as well. This is a process that removes whitespace, comments and other unneeded characters from both CSS and JavaScript. The result is smaller files, which will download and load in a browser faster. 
The graph below shows the performance gain we are seeing when both bundling and minification are used together: Even on my local dev box (where the network latency is minimal), we now have a 40% performance improvement from where we originally started.  On slow networks (and especially with international customers), the gains would be even more significant. Using Bundling and Minification inside ASP.NET The upcoming release of ASP.NET makes it really easy to take advantage of bundling and minification within projects and see performance gains like in the scenario above. The way it does this allows you to avoid having to run custom tools as part of your build process –  instead ASP.NET has added runtime support to perform the bundling/minification for you dynamically (caching the results to make sure perf is great).  This enables a really clean development experience and makes it super easy to start to take advantage of these new features. Let’s assume that we have a simple project that has 4 JavaScript files and 6 CSS files: Bundling and Minifying the .css files Let’s say you wanted to reference all of the stylesheets in the “Styles” folder above on a page.  Today you’d have to add multiple CSS references to get all of them – which would translate into 6 separate HTTP requests: The new bundling/minification feature now allows you to instead bundle and minify all of the .css files in the Styles folder – simply by sending a URL request to the folder (in this case “styles”) with an appended “/css” path after it.  For example:    This will cause ASP.NET to scan the directory, bundle and minify the .css files within it, and send back a single HTTP response with all of the CSS content to the browser.  You don’t need to run any tools or pre-processor to get this behavior.  This enables you to cleanly separate your CSS into separate logical .css files and maintain a very clean development experience – while not taking a performance hit at runtime for doing so.  The Visual Studio designer will also honor the new bundling/minification logic as well – so you’ll still get a WYSWIYG designer experience inside VS as well. Bundling and Minifying the JavaScript files Like the CSS approach above, if we wanted to bundle and minify all of our JavaScript into a single response we could send a URL request to the folder (in this case “scripts”) with an appended “/js” path after it:   This will cause ASP.NET to scan the directory, bundle and minify the .js files within it, and send back a single HTTP response with all of the JavaScript content to the browser.  Again – no custom tools or builds steps were required in order to get this behavior.  And it works with all browsers. Ordering of Files within a Bundle By default, when files are bundled by ASP.NET they are sorted alphabetically first, just like they are shown in Solution Explorer. Then they are automatically shifted around so that known libraries and their custom extensions such as jQuery, MooTools and Dojo are loaded before anything else. So the default order for the merged bundling of the Scripts folder as shown above will be: Jquery-1.6.2.js Jquery-ui.js Jquery.tools.js a.js By default, CSS files are also sorted alphabetically and then shifted around so that reset.css and normalize.css (if they are there) will go before any other file. 
So the default sorting of the bundling of the Styles folder as shown above will be: reset.css, content.css, forms.css, globals.css, menu.css, styles.css. The sorting is fully customizable, though, and can easily be changed to accommodate most use cases and any common naming pattern you prefer. The goal with the out of the box experience, though, is to have smart defaults that you can just use and be successful with. Any number of directories/sub-directories supported In the example above we just had a single “Scripts” and “Styles” folder for our application. This works for some application types (e.g. single page applications). Often, though, you’ll want to have multiple CSS/JS bundles within your application – for example: a “common” bundle that has core JS and CSS files that all pages use, and then page specific or section specific files that are not used globally. You can use the bundling/minification support across any number of directories or sub-directories in your project – this makes it easy to structure your code so as to maximize the bundling/minification benefits. Each directory by default can be accessed as a separate URL addressable bundle. Bundling/Minification Extensibility ASP.NET’s bundling and minification support is built with extensibility in mind and every part of the process can be extended or replaced. Custom Rules In addition to enabling the out of the box - directory-based - bundling approach, ASP.NET also supports the ability to register custom bundles using a new programmatic API we are exposing. The code in the original post demonstrates how you can register a “customscript” bundle using code within an application’s Global.asax class. The API allows you to add/remove/filter files that go into the bundle on a very granular level. The custom bundle can then be referenced anywhere within the application using a <script> reference to its bundle URL. Custom Processing You can also override the default CSS and JavaScript bundles to support your own custom processing of the bundled files (for example: custom minification rules, support for Sass, LESS or CoffeeScript syntax, etc). In the original post's example we are indicating that we want to replace the built-in minification transforms with a custom MyJsTransform and MyCssTransform class. They both subclass the CSS and JavaScript minifier respectively and can add extra functionality. The end result of this extensibility is that you can plug into the bundling/minification logic at a deep level and do some pretty cool things with it. 2 Minute Video of Bundling and Minification in Action Mads Kristensen has a great 90 second video that shows off using the new Bundling and Minification feature. You can watch the 90 second video here. Summary The new bundling and minification support within the next release of ASP.NET will make it easier to build fast web applications. It is really easy to use, and doesn’t require major changes to your existing dev workflow. It also supports a rich extensibility API that enables you to customize it however you want. You can easily take advantage of this new support within ASP.NET MVC, ASP.NET Web Forms and ASP.NET Web Pages based applications. Hope this helps, Scott P.S. In addition to blogging, I use Twitter to do quick posts and share links. My Twitter handle is: @scottgu
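Since the code screenshots from the original post are not reproduced here, the following is a hedged sketch of what such a custom bundle registration looks like using the System.Web.Optimization API that accompanies this feature (the bundle virtual paths and file names are illustrative, and the pre-release API shown in the post may differ slightly in naming):

    using System.Web.Optimization;

    // Sketch of registering custom bundles, typically called from Application_Start in Global.asax.
    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            // A custom script bundle, referenced from pages via "~/bundles/customscript"
            bundles.Add(new ScriptBundle("~/bundles/customscript")
                .Include("~/Scripts/jquery-1.6.2.js",
                         "~/Scripts/jquery-ui.js",
                         "~/Scripts/a.js"));

            // A CSS bundle covering the Styles folder
            bundles.Add(new StyleBundle("~/bundles/css")
                .Include("~/Styles/reset.css",
                         "~/Styles/content.css",
                         "~/Styles/styles.css"));
        }
    }

    // In Global.asax: BundleConfig.RegisterBundles(BundleTable.Bundles);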

    Read the article

  • Ubuntu 14.04: After login, top- and sidepanel don't load and system settings don't open

    - by Löwe Simon
    I have Ubuntu 14.04 on a Medion Erazer with the Nvidia GTX570M card. At first, before I had installed the latest nvidia driver, each time I tried to suspend my PC it would freeze on the login screen. Once I managed to install the nvidia drivers, everything seemed to work up until I noticed that suddenly the system settings would not open anymore. I then rebooted my PC, but when I logged in I just ended up with my cursor but no top panel or side panel. Thankfully, I had installed the GNOME desktop, which works, except that the system settings do not open in GNOME either. I have tried the following: Problems after upgrading to 14.04 (only background and pointer after login), except for reinstalling nvidia, since then I would be back at square one with suspend not working. When I try to launch system settings through the terminal, I get: simon@simon-X681X:~$ unity-control-center libGL error: No matching fbConfigs or visuals found libGL error: failed to load driver: swrast (unity-control-center:2806): Gdk-ERROR **: The program 'unity-control-center' received an X Window System error. This probably reflects a bug in the program. The error was 'BadLength (poly request too large or internal Xlib length erro'. (Details: serial 229 error_code 16 request_code 155 (GLX) minor_code 1) (Note to programmers: normally, X errors are reported asynchronously; that is, you will receive the error a while after causing it. To debug your program, run it with the GDK_SYNCHRONIZE environment variable to change this behavior. You can then get a meaningful backtrace from your debugger if you break on the gdk_x_error() function.) Trace/breakpoint trap (core dumped) I know the 2 problems seem unrelated, but since they occurred together, I am wondering if that is really the case. Your help is much appreciated, Simon

    Read the article

  • Reminder: True WCF Asynchronous Operation

    - by Sean Feldman
    A true asynchronous service operation is not one that simply returns void, but one that is marked as IsOneWay=true and implemented using BeginX/EndX asynchronous operations (thanks Krzysztof). To support this sort of fire-and-forget invocation, Windows Communication Foundation offers one-way operations. After the client issues the call, Windows Communication Foundation generates a request message, but no correlated reply message will ever return to the client. As a result, one-way operations can't return values, and any exception thrown on the service side will not make its way to the client. One-way calls do not equate to asynchronous calls. When one-way calls reach the service, they may not be dispatched all at once, and may be queued up on the service side to be dispatched one at a time, all according to the service's configured concurrency mode behavior and session mode. How many messages (whether one-way or request-reply) the service is willing to queue up is a product of the configured channel and the reliability mode. If the number of queued messages has exceeded the queue's capacity, then the client will block, even when issuing a one-way call. However, once the call is queued, the client is unblocked and can continue executing while the service processes the operation in the background. This usually gives the appearance of asynchronous calls.
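
    As a minimal sketch of what this looks like on a contract (the service and operation names here are invented for illustration, not taken from the post):

    using System.ServiceModel;

    [ServiceContract]
    public interface IAuditService   // hypothetical contract
    {
        // Fire-and-forget: no reply message is ever correlated with this call,
        // so it cannot return a value and service-side exceptions never reach the client.
        [OperationContract(IsOneWay = true)]
        void LogEvent(string message);
    }

    public class AuditService : IAuditService
    {
        public void LogEvent(string message)
        {
            // Runs when the dispatcher gets to the queued message; by then the
            // client has typically already returned from its call.
            System.Diagnostics.Trace.WriteLine(message);
        }
    }

    As the post notes, the call is only apparently asynchronous: if the channel's queue is full, even a one-way call blocks until the message is accepted.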

    Read the article

  • Two 12.04 machines have the same display settings but different results

    - by durron597
    I have one machine that has a relatively fresh install of 12.04 and one that I inherited. The terminal window on the inherited machine has a really weird font, and the regular one is what I would expect. Especially the behavior of the "m" character is messed up. Note: both of these machines are on the same KVM switch. Here is what I've tried:
    - MyUnity on both machines seems the same
    - .bashrc on both machines seems similar in all the ways that would matter for this issue
    - The terminal profiles on both machines are the default
    Here are the xrandr outputs:
    Good xrandr:
    Screen 0: minimum 320 x 200, current 1280 x 1024, maximum 4096 x 4096
    VGA1 connected 1280x1024+0+0 (normal left inverted right x axis y axis) 376mm x 301mm
       1280x1024      60.0 +   76.0     75.0*    72.0     70.0
       1152x864       75.0
       1024x768       75.1     70.1     60.0
       832x624        74.6
       800x600        72.2     75.0     60.3
       640x480        72.8     75.0     66.7     60.0
       720x400        70.1
    Bad xrandr:
    Screen 0: minimum 8 x 8, current 1280 x 1024, maximum 8192 x 8192
    DP-0 disconnected (normal left inverted right x axis y axis)
    DP-1 disconnected (normal left inverted right x axis y axis)
    DP-2 disconnected (normal left inverted right x axis y axis)
    DP-3 connected 1280x1024+0+0 (normal left inverted right x axis y axis) 376mm x 301mm
       1280x1024      60.0*+   76.0     75.0     72.0     70.0
       1152x864       75.0
       1024x768       75.0     70.1     60.0
       800x600        75.0     72.2     60.3
       640x480        75.0     72.8     59.9
    Finally, here are screenshots of both machines (good and bad, not reproduced in this excerpt); it seems to really only affect Terminal. I have askubuntu behind the terminal window for comparison. Any thoughts as to what this might be?

    Read the article

  • Overriding GetHashCode in a mutable struct - What NOT to do?

    - by Kyle Baran
    I am using the XNA Framework to make a learning project. It has a Point struct which exposes an X and Y value; for the purpose of optimization, it breaks the rules for proper struct design, since it's a mutable struct. As Marc Gravell, Jon Skeet, and Eric Lippert point out in their respective posts about GetHashCode() (which Point overrides), this is a rather bad thing, since if an object's values change while it's contained in a hashmap (i.e., in LINQ queries), it can become "lost". However, I am making my own Point3D struct, following the design of Point as a guideline. Thus, it too is a mutable struct which overrides GetHashCode(). The only difference is that mine exposes an int for X, Y, and Z values, but it is fundamentally the same. The signatures are below:
    public struct Point3D : IEquatable<Point3D>
    {
        public int X;
        public int Y;
        public int Z;
        public static bool operator !=(Point3D a, Point3D b) { }
        public static bool operator ==(Point3D a, Point3D b) { }
        public Point3D Zero { get; }
        public override int GetHashCode() { }
        public override bool Equals(object obj) { }
        public bool Equals(Point3D other) { }
        public override string ToString() { }
    }
    I have tried to break my struct in the way they describe, namely by storing it in a List<Point3D>, as well as changing the value via a method using ref, but I did not encounter the behavior they warn about (maybe a pointer might allow me to break it?). Am I being too cautious in my approach, or should I be okay to use it as is?
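
    As an aside, here is a small sketch of the failure mode those posts describe (a simplified stand-in, not the XNA type, with arbitrary hash-code math and field values); a List<Point3D> never exposes it because List<T> does not call GetHashCode, whereas a hash-based collection does:

    using System;
    using System.Collections.Generic;

    // Simplified stand-in for a mutable struct that overrides GetHashCode.
    struct MutablePoint : IEquatable<MutablePoint>
    {
        public int X;
        public int Y;

        public bool Equals(MutablePoint other) { return X == other.X && Y == other.Y; }
        public override bool Equals(object obj) { return obj is MutablePoint && Equals((MutablePoint)obj); }
        public override int GetHashCode() { return (X * 397) ^ Y; }
    }

    class Program
    {
        static void Main()
        {
            var lookup = new Dictionary<MutablePoint, string>();
            var key = new MutablePoint { X = 1, Y = 2 };
            lookup.Add(key, "payload");

            key.X = 99; // mutate the copy we still hold after using it as a key

            // Our copy now hashes differently than it did when the entry was stored,
            // so the entry can no longer be reached through it.
            Console.WriteLine(lookup.ContainsKey(key)); // False
            Console.WriteLine(lookup.Count);            // 1 - the entry is still in the dictionary
        }
    }

    The dictionary itself is not corrupted (it holds its own unchanged copy of the key), but any code that only has the mutated copy has effectively lost the entry, which is the hazard Gravell, Skeet, and Lippert warn about.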

    Read the article

  • Implement Budget Allocation in DAX for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    Comparing sales and budget, or costs and budget, is a very common operation. However, it is often the case that you have different granularities for the tables containing the budget and the data to compare it with. There are two ways to handle that: you can limit the comparison to the granularity that is common to the two tables, or you can allocate the budget where it's not defined. For example, if you have a budget defined by quarter and category, you might want to allocate it by month and product. In this way, you can do the comparison as if you had a more granular definition of the budget, without actually having to do the manual job of allocating data (usually in an Excel worksheet!). If you want to do budget allocation in DAX, you can use the Budget Patterns we published on DAX Patterns. If you come from an MDX/OLAP background, at first you might find it hard to solve the problem of not having attribute hierarchies that help you in propagating the budget values to lower hierarchical levels. However, I think that once you get used to DAX, you will find the behavior very predictable and easy to "debug", even for more complex allocation formulas. You just have to be careful in writing the DAX formula, but the pattern we wrote should help you design the right data model, without creating physical relationships to the budget table! This pattern is also based on the Handling Different Granularities scenario I discussed a couple of weeks ago.

    Read the article

  • Documentation in Oracle Retail Analytics, Release 13.3

    - by Oracle Retail Documentation Team
    The 13.3 Release of Oracle Retail Analytics is now available on the Oracle Software Delivery Cloud and from My Oracle Support. The Oracle Retail Analytics 13.3 release introduced significant new functionality with its new Customer Analytics module. The Customer Analytics module enables you to perform retail analysis of customers and customer segments. Market basket analysis (part of the Customer Analytics module) provides insight into which products have strong affinity with one another. Customer behavior information is obtained from mining sales transaction history, and it is correlated with customer segment attributes to inform promotion strategies. The ability to understand market basket affinities allows marketers to calculate, monitor, and build promotion strategies based on critical metrics such as customer profitability.
    Highlighted End User Documentation Updates
    With the addition of Oracle Retail Customer Analytics, the documentation set addresses both modules under the single umbrella name of Oracle Retail Analytics. Note, however, that the modules, Oracle Retail Merchandising Analytics and Oracle Retail Customer Analytics, are licensed separately. To accommodate new functionality, the Retail Analytics suite of documentation has been updated in the following areas, among others: The User Guide has been updated with an overview of Customer Analytics. It also contains a list of metrics associated with Customer Analytics. The Operations Guide provides details on Market Basket Analysis as well as an updated list of APIs. The program reference list now also details the module (Merchandising Analytics or Customer Analytics) to which each program applies. The Data Model was updated to include new information related to Customer Analytics, and a new section, Market Basket Analysis Module, was added to the document with its own entity relationship diagrams and data definitions.
    List of Documents
    The following documents are included in Oracle Retail Analytics 13.3:
    - Oracle Retail Analytics Release Notes
    - Oracle Retail Analytics Installation Guide
    - Oracle Retail Analytics User Guide
    - Oracle Retail Analytics Implementation Guide
    - Oracle Retail Analytics Operations Guide
    - Oracle Retail Analytics Data Model

    Read the article

  • Ask the Readers: Which Search Engine Do You Use?

    - by Mysticgeek
    While Google dominates the search engine market, there are certainly other alternatives out there such as Bing and Yahoo. Today we’re curious about which one you use, and would you ever consider another one? Believe it or not…not everyone uses Google (surprising indeed), there are several other alternatives out there that some of you may be using and we’re interested in hearing about it. One of the more unique and interesting ones we previously covered is ixquick, which doesn’t save your IP or any information and can be customized quite nicely if you’re the paranoid type. We’re interested in hearing about which search engine you currently use. Would you ever switch to a different one? Have you ever tried to experiment and not use Google (or your favorite engine) for a week? Leave a comment below and join in the discussion!

    Read the article

  • Why does Ubuntu reset brightness settings at the loading screen?

    - by leugim
    Since I first installed Ubuntu 11.10, I noticed that volume and screen brightness get reset every time Ubuntu starts. Why is this so? And what ways are there to keep brightness and volume levels after rebooting? I have found some scripts that change the screen brightness at login. But this is not a good solution, since login is slower because it seems to wait until the screen brightness is at the level specified by the script. After entering the password I see the screen brightness go down gradually. Only after this is complete (~1 or 2 seconds) does the background disappear and Unity come up. The screen brightness is not remembered but instead redefined at login. So it gets remembered for the first part of the boot, then set to MAX, and then re-set to the normal value by the script. My boot process is as follows (desired brightness: 2 (13,33%) / max brightness: 15 (100%)):
    - BIOS / brightness: OK
    - GRUB (violet background color, white text) / brightness: OK
    - Ubuntu loading screen with the dots / brightness: MAX (Win7 loads with OK brightness)
    - User login / brightness: MAX
    - Unity starts / brightness: OK
    It seems to be more like a temporary patch than an actual solution. I'm looking for solutions that set the desired brightness permanently and consistently throughout the whole boot process. After updating to 12.04 the behavior is the same. I tried setpci -s 02:00.0 F4.B=XX. The value of F4.B is always '0' regardless of what value I try to set it to (tried 0, ff, f, 5, etc). The solution in this answer does not have any noticeable effect: Desktop doesn't remember brightness settings after a reboot. The variables at /sys/class/backlight/acpi_video0/ get changed if I use Fn+UP and Fn+DOWN. Any help is appreciated. Thanks!

    Read the article

  • ArchBeat Link-o-Rama for December 11, 2012

    - by Bob Rhubart
    Good To Know - Conflicting View Objects and Shared Entity | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis shares his thoughts—and a sample application—dealing with an "interesting ADF behavior" encountered over the weekend.
    Patching Oracle Exalogic - Updating Linux on the Compute Nodes - Part 1 | Jos Nijhoff
    Jos Nijhoff launches a series of posts that deal with "patching the operating system on the modified Sun Fire X4170 M2 servers...dubbed compute nodes in Exalogic terminology."
    Expanding on requestaudit - Tracing who is doing what...and for how long | Kyle Hatlestad
    "One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing," says Oracle Fusion Middleware A-Team architect Kyle Hatlestad. Get up close and technical in his post.
    Oracle Data Integrator Presentation from NYOUG Webinar | Gurcan Orhan
    Oracle ACE Director and award-winning data warehouse architect Gurcan Orhan shares his presentation from the recent NYOUG LI SIG.
    SOA 11g Technology Adapters – ECID Propagation | Greg Mally
    "Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few," says Oracle Fusion Middleware A-Team member Greg Mally. "Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective." Greg's post focuses on technical tips for dealing with one of these challenges.
    Missing Duties for RUP3 upgrade in Fusion Applications
    Richard from the Oracle Fusion Middleware A-Team explains how to safely apply policy store changes in thirteen easy steps.
    Thought for the Day
    "Well over half of the time you spend working on a project (on the order of 70 percent) is spent thinking, and no tool, no matter how advanced, can think for you." — Frederick P. Brooks
    Source: SoftwareQuotes.com

    Read the article

  • This is the End of Business as Usual...

    - by Michael Snow
    This week, we'll be hosting our last Social Business Thought Leader Series Webcast for 2012. Our featured guest this week will be Brian Solis of Altimeter Group. As we've been going through the preparations for Brian's webcast, it became very clear that an hour's time is barely scraping the surface of the depth of Brian's insights and analysis. Accordingly, in the spirit of sharing Brian's perspective for all of our readers, we'll be featuring guest posts all this week pulled from Brian's larger collection of blog postings on his own website. If you like what you've read here this week, we highly recommend digging deeper into his tome of wisdom. Guest Post by Brian Solis, Analyst, Altimeter Group as originally featured on his site with the minor change of the video addition at the beginning of the post. This is the End of Business as Usual and the Beginning of a New Era of Relevance - Brian Solis, Principal Analyst, Altimeter Group The Times They Are A-Changin’ Come gather ’round people Wherever you roam And admit that the waters Around you have grown And accept it that soon You’ll be drenched to the bone If your time to you Is worth savin’ Then you better start swimmin’ Or you’ll sink like a stone For the times they are a-changin’. - Bob Dylan I’m sure you are wondering why I chose lyrics to open this article. If you skimmed through them, stop here for a moment. Go back through the Dylan’s words and take your time. Carefully read, and feel, what it is he’s saying and savor the moment to connect the meaning of his words to the challenges you face today. His message is as important and true today as it was when they were first written in 1964. The tide is indeed once again turning. And even though the 60s now live in the history books, right here, right now, Dylan is telling us once again that this is our time to not only sink or swim, but to do something amazing. This is your time. This is our time. But, these times are different and what comes next is difficult to grasp. How people communicate. How people learn and share. How people make decisions. Everything is different now. Think about this…you’re reading this article because it was sent to you via email. Yet more people spend their online time in social networks than they do in email. Duh. According to Nielsen, of the total time spent online 22.5% are connecting and communicating in social networks. To put that in perspective, the time spent in the likes of Facebook, Twitter, and Youtube is greater than online gaming at 9.8%, email at 7.6% and search at 4%. Imagine for a moment if you and I were connected to one another in Facebook, which just so happens to be the largest social network in the world. How big? Well, Facebook is the size today of the entire Internet in 2004. There are over 1 billion people friending, Liking, commenting, sharing, and engaging in Facebook…that’s roughly 12% of the world’s population. Twitter has over 200 million users. Ever hear of tumblr? More time is spent on this popular microblogging community than Twitter. The point is that the landscape for communication and all that’s affected by human interaction is profoundly different than how you and I learned, shared or talked to one another yesterday. This transformation is only becoming more pervasive and, it’s not going back. Survival of the Fitting But social media is just one of the channels we can use to reach people. I must be honest. I’m as much a part of tomorrow as I am of yesteryear. 
It’s why I spend all of my time researching the evolution of media and its impact on business and culture. Because of you, I share everything I learn in newsletters, emails, blogs, Youtube videos, and also traditional books. I’m dedicated to helping everyone not only understand, but grasp the change that’s before you. Technologies such as social, mobile, virtual, augmented, et al compel us to adapt our story and value proposition and extend our reach to be part of communities we don’t realize exist. The people who will keep you in business or running tomorrow are the very people you’re not reaching today. Before you continue to read on, allow me to clarify my point of view. My inspiration for writing this is to help you augment, not necessarily replace, the programs you’re running today. We must still reach those who matter to us in the ways they prefer to be engaged. To reach what I call the connected consumer of Generation-C, we too must reach them in the ways they wish to be engaged. And in all of my work, how they connect, talk to one another, influence others, and make decisions is not at all like the traditional consumers of the past. Nor are they merely the kids…the Millennials. Connected consumers are representative across every age group and demographic. As you can see, use of social networks, media sharing sites, microblogs, blogs, etc. spans equally across Gen Y, Gen X, and Baby Boomers. The DNA of connected customers is indiscriminate of age or any other demographic for that matter. This is more about psychographics (the linkage of people through common interests) than it is about their age, gender, education, nationality, or level of income. Once someone is introduced to the marvels of connectedness, the sensation becomes a contagion. It touches and affects everyone. And, that’s why this isn’t going anywhere but normalcy. Social networking isn’t just about telling people what you’re doing. Nor is it just about generic, meaningless conversation. Today’s connected consumer is incredibly influential. They’re connected to hundreds and even thousands of other like-minded people. What they experience, what they support, is shared throughout these networks, and as information travels, it shapes and steers impressions, decisions, and experiences of others. For example, if we revisit the Nielsen research, we get an idea of just how big this is becoming. 75% spend heavily on music. How does that translate to the arts? I’d imagine the number is equally impressive. If 53% follow their favorite brand or organization, imagine what’s possible. Just like this email list that connects us, connections in social networks are powerful. The difference, however, is that people spend more time in social networks than they do in email. Everything begins with an understanding of the “5 W’s and H.E.” – Who, What, When, Where, How, and to What Extent? The data that comes back tells you which networks are important to the people you’re trying to reach, how they connect, what they share, what they value, and how to connect with them. From there, your next steps are to create a community strategy that extends your mission, vision, and value and align it with the interests, behavior, and values of those you wish to reach and galvanize. To help, I’ve prepared an action list for you, otherwise known as the 10 Steps Toward New Relevance: 1. Answer why you should engage in social networks and why anyone would want to engage with you 2. 
Observe what brings them together and define how you can add value to the conversation 3. Identify the influential voices that matter to your world, recognize what’s important to them, and find a way to start a dialogue that can foster a meaningful and mutually beneficial relationship 4. Study the best practices of not just organizations like yours, but also those who are successfully reaching the type of people you’re trying to reach – it’s benchmarking against competitors and benchmarking against undefined opportunities 5. Translate all you’ve learned into a convincing presentation written to demonstrate tangible opportunity to your executive board, make the case through numbers, trends, data, insights – understanding they have no idea what’s going on out there and you are both the scout and the navigator (start with a recommended pilot so everyone can learn together) 6. Listen to what they’re saying and develop a process to learn from activity and adapt to interests and steer engagement based on insights 7. Recognize how they use social media and innovate based on what you observe to captivate their attention 8. Align your objectives with their objectives. If you’re unsure of what they’re looking for…ask 9. Invest in the development of content and engagement 10. Build a community, invest in values, spark meaningful dialogue, and offer tangible value…the kind of value they can’t get anywhere else. Take advantage of the medium and the opportunity! The reality is that we live and compete in a perpetual era of Digital Darwinism, the evolution of consumer behavior when society and technology evolve faster than our ability to adapt. This is why it’s our time to alter our course. We must connect with those who are defining the future of engagement, commerce, business, and how the arts are appreciated and supported. Even though it is the end of business as usual, it is the beginning of a new age of opportunity. The consumer revolution is already underway, and the question is: How do you better understand the role you play in this production as a connected or social consumer as well as business professional? Again, this is your time to define a new era of engagement and relevance. Originally written for The National Arts Marketing Project Connect with Brian via: Twitter | LinkedIn | Facebook | Google+ --- Note from Michael: If you really like this post above - check out Brian's TEDTalk and his thought process for preparing it in this post: http://www.briansolis.com/2012/10/tedtalk-reinventing-consumer-capitalism-screw-business-as-usual/

    Read the article

  • Progressive Enhancement vs. Single Page Apps

    - by SeanPlusPlus
    I just got back from a conference in Boston called An Event Apart. A really popular theme amongst the speakers was the idea of progressive enhancement - a site's content should go in the HTML, and JavaScript should only be used to enhance behavior. The arguments that the speakers gave for progressive enhancement were very compelling. Not only is it a solid pattern for supporting older browsers and devices on low-bandwidth networks, but HTML fails much more gracefully than JavaScript (i.e. markup that is not supported is just ignored, while if a browser throws an exception while executing your script, you are hosed). Jeremy Keith gave a particularly insightful talk about this. But what about single page web apps like Backbone and Angular? The whole design behind these frameworks seems to push the developer toward moving content out of the HTML and into something like a JSON API. I cannot seem to gel these two design patterns: progressive enhancement vs. single page web apps. Are there instances when one is better than the other? Or are they not even antagonistic technologies, and am I missing something here with my mental model?

    Read the article

  • ext4 jbd2 journaling active even on empty filesystem

    - by Paul
    I have been having several issues with my ext4 filesystems that seem to be due to jbd2 journaling. I made a related post here and am rephrasing it with the hope that someone may be able to help. For a minimal example, I start with an empty 8 GB USB stick and use gparted to create one ext4 partition. The command used by gparted when creating the ext4 file system is:
    mkfs.ext4 -j -O extent -L DataTraveler8gb /dev/sde1
    I check the filesystem:
    e2fsck -f -y -v /dev/sde1
    and I mount it:
    sudo mount /dev/sde1 /media/test
    The disk is empty, but the journaling is very active on this disk (/dev/sde1). The other disks are ext4 SSDs formatted similarly. A snapshot of iotop:
    % sudo iotop -oPa
    Total DISK READ: 0.00 B/s | Total DISK WRITE: 2027.21 K/s
      PID  PRIO  USER   DISK READ  DISK WRITE  SWAPIN    IO      COMMAND
      262  be/3  root     0.00 B     56.00 K   0.00 %   0.18 %   [jbd2/sda1-8]
    29069  be/3  root     0.00 B      0.00 B   0.00 %   0.16 %   [jbd2/sde1-8]
      891  be/3  root     0.00 B      4.00 K   0.00 %   0.03 %   [jbd2/sdc1-8]
    What is jbd2 doing with /dev/sde1? If I follow the same steps with a larger 2 TB disk, iotop indicates this empty disk is constantly being written to by jbd2, at a rate of MB/s, as soon as I mount it. On the other disks, which have the OS and /home, I have tried to find whether any files are being modified by processes to cause this behavior but couldn't find any. I also moved many of the disk-intensive processes to use a tmpfs, and used noatime. I have another non-SSD hard disk on this machine, /dev/sdb, that is also ext4 but was not formatted by gparted (given to me by a coworker). It does not appear in iotop. So I am assuming there is an issue with gparted. Any suggestions are appreciated. Also, any tips on how to modify existing partitions to fix the issue without having to start from scratch would be great. There are some posts related to jbd2 but they didn't help (e.g. here).

    Read the article

  • LibGDX Boid Seek Behaviour

    - by childonline
    I'm trying to make a swarm of boids which seek out the mouse position and move towards it, but I'm having a bit of a problem. The boids just seem to want to go to upper-right corner of the game window. The mouse position seems to influence the behavior a bit, but not enough to make the boid turn towards it. I suspect there is a problem with the way LibGDX handles its coordinate system, but I'm not sure how to fix it. I've uploaded the eclipse project here! Also, here are the relevant bits of my code, in case you see something obviously wrong:
    public Agent(){
        _texture = GdxGame.TEX_AGENT;
        TextureRegion region = new TextureRegion(_texture, 0, 0, 32, 32);
        TextureRegion region2 = new TextureRegion(GdxGame.TEX_TARGET, 0, 0, 32, 32);
        _sprite = new Sprite(region);
        _sprite.setSize(.05f, .05f);
        _sprite_target = new Sprite(region2);
        _sprite_target.setSize(.1f, .1f);
        _max_velocity = 0.05f;
        _max_speed = 0.005f;
        _velocity = new Vector2(0, 0);
        _desired_velocity = new Vector2(0, 0);
        _steering = new Vector2(0, 0);
        _position = new Vector2(-_sprite.getWidth()/2, -_sprite.getHeight()/2);
        _mass = 10f;
    }

    public void Update(float deltaTime){
        _target = new Vector2(Gdx.input.getX(), Gdx.input.getY());
        _desired_velocity = ((_target.sub(_position)).nor()).scl(_max_velocity, _max_velocity);
        _steering = ((_desired_velocity.sub(_velocity)).limit(_max_speed)).div(_mass);
        _velocity = (_velocity.add(_steering)).limit(_max_speed);
        _position = _position.add(_velocity);
        _sprite.setPosition(_position.x, _position.y);
        _sprite_target.setPosition(Gdx.input.getX(), Gdx.input.getY());
    }
    I've used this tutorial here. Thanks!

    Read the article

< Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >