Search Results

Search found 264018 results on 10561 pages for 'stack based'.


  • SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component

    - by Brian
    Using Oracle's SPARC T3-1 server for the application tier and Oracle's SPARC Enterprise M3000 server for the database tier, a world record result was produced running Oracle's JD Edwards EnterpriseOne applications Day in the Life benchmark concurrently with a batch workload. The SPARC T3-1 server based result has 25% better performance than the IBM Power 750 POWER7 server even though the IBM result did not include running a batch component. The SPARC T3-1 server based result has 25% better space/performance than the IBM Power 750 POWER7 server as measured by the online component. The SPARC T3-1 server based result is 5x faster than the x86-based IBM x3650 M2 server system when executing the online component of the JD Edwards EnterpriseOne 9.0.1 Day in the Life benchmark; the IBM result did not include a batch component. The SPARC T3-1 server based result has 2.5x better space/performance than the x86-based IBM x3650 M2 server as measured by the online component.

    The combination of SPARC T3-1 and SPARC Enterprise M3000 servers delivered a Day in the Life benchmark result of 5000 online users with 0.875 seconds of average transaction response time running concurrently with 19 Universal Batch Engine (UBE) processes at 10 UBEs/minute. The solution exercises various JD Edwards EnterpriseOne applications while running Oracle WebLogic Server 11g Release 1 and Oracle Web Tier Utilities 11g HTTP server in Oracle Solaris Containers, together with Oracle Database 11g Release 2. The SPARC T3-1 server showed that it could handle the additional workload of batch processing while maintaining the same number of online users for the JD Edwards EnterpriseOne Day in the Life benchmark. This was accomplished with minimal loss in response time.

    JD Edwards EnterpriseOne 9.0.1 takes advantage of the large number of compute threads available in the SPARC T3-1 server at the application tier and achieves excellent response times. The SPARC T3-1 server consolidates the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server, leaving processing capacity for additional growth. A number of advanced Oracle technologies and features were used to obtain this result: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java HotSpot Server VM, Oracle WebLogic Server 11g Release 1, Oracle Web Tier Utilities 11g, Oracle Database 11g Release 2, and the SPARC T3 and SPARC64 VII+ based servers.

    This is the first published result running both online and batch workloads concurrently on the JD Enterprise Application server. No published results are available from IBM running the online component together with a batch workload. The 9.0.1 version of the benchmark saw some minor performance improvements relative to 9.0; when comparing 9.0.1 and 9.0 results, the reader should take this into account where the difference between results is small.

    Performance Landscape

    JD Edwards EnterpriseOne Day in the Life Benchmark, Online with Batch Workload

    This is the first publication of the Day in the Life benchmark run concurrently with batch jobs. The batch workload was provided by Oracle's Universal Batch Engine.
    System: SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10 + M3000, 1 x SPARC64 VII+ (2.86 GHz), Solaris 10 | Rack Units: 4 | Online Users: 5000 | Resp Time (sec): 0.88 | Batch Concur (# of UBEs): 19 | Batch Rate (UBEs/m): 10 | Version: 9.0.1

    Resp Time (sec) - response time of online jobs, reported in seconds. Batch Concur (# of UBEs) - batch concurrency, presented as the number of UBEs. Batch Rate (UBEs/m) - batch transaction rate in UBEs/minute.

    JD Edwards EnterpriseOne Day in the Life Benchmark, Online Workload Only

    These results are for the Day in the Life benchmark run without any batch workload.

    System | Rack Units | Online Users | Response Time (sec) | Version
    SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10 + M3000, 1 x SPARC64 VII (2.75 GHz), Solaris 10 | 4 | 5000 | 0.52 | 9.0.1
    IBM Power 750, 1 x POWER7 (3.55 GHz), IBM i7.1 | 4 | 4000 | 0.61 | 9.0
    IBM x3650 M2, 2 x Intel X5570 (2.93 GHz), OVM | 2 | 1000 | 0.29 | 9.0

    IBM result from http://www-03.ibm.com/systems/i/advantages/oracle/; IBM used WebSphere.

    Configuration Summary

    Hardware Configuration: 1 x SPARC T3-1 server (1 x 1.65 GHz SPARC T3, 128 GB memory, 16 x 300 GB 10000 RPM SAS, 1 x Sun Flash Accelerator F20 PCIe Card with 92 GB, 1 x 10 GbE NIC) and 1 x SPARC Enterprise M3000 server (1 x 2.86 GHz SPARC64 VII+, 64 GB memory, 1 x 10 GbE NIC, 2 x StorageTek 2540 + 2501).

    Software Configuration: JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3, Oracle Database 11g Release 2, Oracle WebLogic Server 11g Release 1 (version 10.3.2), Oracle Web Tier Utilities 11g, Oracle Solaris 10 9/10, Mercury LoadRunner 9.10 with the Oracle Day in the Life kit for JD Edwards EnterpriseOne 9.0.1, and Oracle's Universal Batch Engine (short and long UBEs).

    Benchmark Description

    JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and other manufacturing processes such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and a UBE workload of 15 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE workload runs from the JD Enterprise Application server. Oracle's UBE processes come in three flavors: short UBEs (under 1 minute) engage in Business Report and Summary Analysis, mid UBEs (over 1 minute) create a large report of Account, Balance, and Full Address, and long UBEs (over 2 minutes) simulate Payroll, Sales Order and night-only jobs. The UBE workload generates large numbers of PDF report files and log files. The UBE queues are categorized as QBATCHD, a single-threaded queue for large UBEs, and the QPROCESS queue for short UBEs run concurrently. One Oracle Solaris Container ran 4 long UBEs, while another Container ran 15 short UBEs concurrently. The mixed-size UBEs ran concurrently on the SPARC T3-1 server with the 5000 online users driven by LoadRunner. Oracle's UBE process performance metric is the number of maximum concurrent UBE processes at a given transaction rate, in UBEs/minute.
    Key Points and Best Practices

    Two JD Edwards EnterpriseOne Application Servers and two Oracle Fusion Middleware WebLogic Servers 11g R1 coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances on the SPARC T3-1 server were hosted in four separate Oracle Solaris Containers to demonstrate consolidation of multiple application and web servers.

    See Also: SPARC T3-1 (oracle.com), SPARC Enterprise M3000 (oracle.com), Oracle Solaris (oracle.com), JD Edwards EnterpriseOne (oracle.com), Oracle Database 11g Release 2 Enterprise Edition (oracle.com).

    Disclosure Statement: Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 6/27/2011.

    Read the article

  • Responsive Inline Elements with Twitter Bootstrap

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/12/responsive-inline-elements-with-twitter-bootstrap.aspx

    Twitter Bootstrap is a responsive css platform created by some dudes affiliated with Twitter and since supported and maintained by an open source following. I absolutely love the new version of this css toolkit. They rebuilt it with a mobile-first strategy and it’s very easy to lay out pages once you get the hang of it. Using a css / javascript framework like bootstrap is certainly much easier than coding your layout by hand. And, you get a “leg up” when it comes to adding responsive features to your site. Bootstrap includes column layout classes that let you specify size and placement based upon the viewport width. In addition, there are a handful of responsive helpers to hide and show content based upon the user’s device size. Most notably, the visible-xs, visible-sm, visible-md, and visible-lg classes let you show content for devices corresponding to those sizes (they are listed in the bootstrap docs), and hidden-xs, hidden-sm, hidden-md, and hidden-lg let you hide content for devices with those respective sizes. These helpers work great for showing and hiding block elements. Unfortunately, there isn’t a provision yet in Twitter Bootstrap (as of the time of this writing) for inline elements. We are using the navbar classes to create a navigation bar at the top of our website, www.crowdit.com. When you shrink the width of the screen to tablet or phone size, the tools in the navbar are turned into a drop down menu, and a button appears on the right side of the navbar. This is great! But, we wanted different content to display based upon whether the items were on the navbar versus when they were in the dropdown menu. The visible-?? and hidden-?? classes make this easy for images and block elements. In our case, we wanted our anchors to show different text depending upon whether they’re in the navbar or in the dropdown. A span is inherently inline, though it can be made to behave like a block element. My first approach was to create two anchors for each option, one set visible when the navbar is on a desktop or laptop with a wide display and another set visible when the elements converted to a dropdown menu. That works fine with the visible-?? and hidden-?? classes, but it just doesn’t seem that clean to me. I put up with that for about a week…last night I created the following classes to augment the block-based classes provided by bootstrap.
    .cdt-hidden-xs, .cdt-hidden-sm, .cdt-hidden-md, .cdt-hidden-lg { display: inline !important; }
    @media (max-width:767px) { .cdt-hidden-xs, .cdt-hidden-sm.cdt-hidden-xs, .cdt-hidden-md.cdt-hidden-xs, .cdt-hidden-lg.cdt-hidden-xs { display: none !important; } }
    @media (min-width:768px) and (max-width:991px) { .cdt-hidden-xs.cdt-hidden-sm, .cdt-hidden-sm, .cdt-hidden-md.cdt-hidden-sm, .cdt-hidden-lg.cdt-hidden-sm { display: none !important; } }
    @media (min-width:992px) and (max-width:1199px) { .cdt-hidden-xs.cdt-hidden-md, .cdt-hidden-sm.cdt-hidden-md, .cdt-hidden-md, .cdt-hidden-lg.cdt-hidden-md { display: none !important; } }
    @media (min-width:1200px) { .cdt-hidden-xs.cdt-hidden-lg, .cdt-hidden-sm.cdt-hidden-lg, .cdt-hidden-md.cdt-hidden-lg, .cdt-hidden-lg { display: none !important; } }
    .cdt-visible-xs, .cdt-visible-sm, .cdt-visible-md, .cdt-visible-lg { display: none !important; }
    @media (max-width:767px) { .cdt-visible-xs, .cdt-visible-sm.cdt-visible-xs, .cdt-visible-md.cdt-visible-xs, .cdt-visible-lg.cdt-visible-xs { display: inline !important; } }
    @media (min-width:768px) and (max-width:991px) { .cdt-visible-xs.cdt-visible-sm, .cdt-visible-sm, .cdt-visible-md.cdt-visible-sm, .cdt-visible-lg.cdt-visible-sm { display: inline !important; } }
    @media (min-width:992px) and (max-width:1199px) { .cdt-visible-xs.cdt-visible-md, .cdt-visible-sm.cdt-visible-md, .cdt-visible-md, .cdt-visible-lg.cdt-visible-md { display: inline !important; } }
    @media (min-width:1200px) { .cdt-visible-xs.cdt-visible-lg, .cdt-visible-sm.cdt-visible-lg, .cdt-visible-md.cdt-visible-lg, .cdt-visible-lg { display: inline !important; } }

    I created these by looking at the example provided by bootstrap and consolidating the styles. "cdt" is just a prefix that I'm using to distinguish these classes from the block-based classes in bootstrap. You are welcome to change the prefix to whatever feels right for you. These classes can be applied to spans in textual content to hide and show text based upon the browser width. Applying the styles is simple…

    <span class="cdt-visible-xs">This text is visible in extra small</span>
    <span class="cdt-visible-sm">This text is visible in small</span>

    Why would you want to do this? Here are a couple of examples, shown in screen shots. This is the CrowdIt navbar on larger displays. Notice how the text is two lines and certain words are capitalized? Now, check this out! Here is a screen shot showing the dropdown menu that's displayed when the browser window is tablet or phone sized. The markup to make this happen is quite simple…take a look.

    <li>
        <a href="@Url.Action("what-is-crowdit","home")" title="Learn about what CrowdIt can do for your Small Business">
            <span class="cdt-hidden-xs">WHAT<br /><small>is CrowdIt?</small></span>
            <span class="cdt-visible-xs">What is CrowdIt?</span>
        </a>
    </li>

    There is a single anchor tag in this example and only the spans change visibility based on browser width. I left them separate for readability and because I wanted to use the small tag; however, you could just as easily hide the "WHAT" and the br tag on small displays and replace them with "What ", consolidating this even further to text containing a single span.

    <span class="cdt-hidden-xs">WHAT<br /></span><span class="cdt-visible-xs">What </span>is CrowdIt?

    You might be a master of css and have a better method of handling this problem.
If so, I’d love to hear about your solution…leave me some feedback! You’ll be entered into a drawing for a chance to win an autographed picture of ME! Yay!

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will talk about various features, the general roadmap, descriptions of functionality and implementation steps related to Oracle BI applications. In this first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback on that.

    The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role. This post starts with the key benefits and characteristics of Oracle BI applications; in a series of subsequent posts, each of these points will be explained in detail.

    Why BI apps? Demonstrate the value of BI to a business user: show reports / dashboards / models that can answer their business questions as part of the sales cycle. Demonstrate technical feasibility of a BI project, significantly lower risk and improve success. Build-vs-buy benefit: you don't have to start with a blank sheet of paper. Help consolidate disparate systems. Data integration in M&A situations. Insulate BI consumers from changes in the OLTP. Present OLTP data and highlight issues of poor data / missing data – and improve data quality and accuracy.

    Prebuilt Integrations: BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP. Co-developed with input from functional experts in the BI and Applications teams. Out-of-the-box dimensional model to source model mappings. Multi-source and multi-instance support.

    Rich Data Model: BI apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective as well as reflecting the source system complexities. A conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer / supplier 360. Over 360 fact tables across 7 product areas (CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM – 56), supported by 300 physical dimensions. Support for extensive calendars: Gregorian, enterprise and ledger based. Conformed data model and metrics for real-time vs. warehouse-based reporting. Multi-tenant enabled.

    Extensive BI-related transformations: BI apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic which can be readily reused across different modules: Slowly Changing Dimension support, hierarchy flattening support, row / column hybrid hierarchy flattening, As Is vs.
    As Was hierarchy support, currency conversion (support for 3 corporate, CRM, ledger and transaction currencies), UOM conversion, internationalization / localization, dynamic data translations, code standardization (domains), historical snapshots, cycle and process lifecycle computations, balance facts, equalization of GL accounting chartfields/segments, standardized values for categorizing GL accounts, reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL, and materialization of data only available through costly and complex APIs (e.g. Fusion Payroll, EBS / Fusion Accruals). Complex event interpretation of source data, e.g. what constitutes a transfer; deriving supervisors via position hierarchy; deriving primary assignment in PSFT; categorizing and transposing Payroll Balances into specific metrics to support side-by-side comparison of measures such as Fixed Salary, Variable Salary, Tax, Bonus and Overtime Payments; and counting of events, e.g. converting events to fact counters so that, for example, the number of hires can easily be added up and compared alongside total transfers and terminations. Multi-pass processing of multiple sources (e.g. headcount, salary, promotion, performance) to allow side-by-side comparison. Adding value to data to aid analysis through banding, additional domain classifications and groupings to allow higher-level analytical reporting and data discovery. Calculation of complex measures, for example COGS, DSO, DPO, inventory turns, etc., and transfers within a hierarchy or out of / into a hierarchy relative to a viewpoint in the hierarchy.

    Configurability and Extensibility support: BI apps offer support for extensibility of various entities, either as automated extensibility or as part of the extension methodology. Key Flexfield and Descriptive Flexfield support, extensible attribute support (JDE), conformed domains.

    ETL Architecture: BI apps offer a modular adapter architecture which allows support of multiple product lines in a single conformed model. Multi-source, multi-technology. Orchestration creates a load plan taking into account task dependencies and the customer's deployment, generating a plan for the customer's set of multiple complex ETL tasks. Plan optimization allows parallel ETL tasks. Oracle: bitmap indexes and partition management. High availability support; follow-the-sun support.
    TCO: BI apps support several utilities / capabilities that help with the overall total cost of ownership and ensure a rapid implementation: improved cost of ownership (lower cost to deploy), ongoing support for new versions of the source application, task-based setup flows, data lineage, functional setup performed in a web UI by a functional person, configuration, and test-to-production support.

    Security: BI apps support both data and object security, enabling implementations to quickly configure the application as per the reporting security needs: fine-grained object security at the report / dashboard and presentation catalog level, data security integration with source systems, and extensibility to support external data security rules.

    Extensive set of KPIs: over 7000 base and derived metrics across all modules, time-series calculations (YoY, % growth, etc.), common currency and UOM reporting, and cross subject-area KPIs (analyzing HR vs. GL data, drill from GL to AP/AR, etc.).

    Prebuilt reports and dashboards: 3000+ prebuilt reports supporting a large number of industries, hundreds of role-based dashboards, and dynamic currency conversion at the dashboard level.

    Highly tuned performance: The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance: an optimized data model for BI and analytic queries; prebuilt aggregates and the ability for customers to easily create their own aggregates on warehouse facts, allowing for scalable end-user performance; incremental extracts and loads; incremental aggregate build; automatic table index and statistics management; parallel ETL loads; source system delete handling; low-latency extract with GoldenGate; micro ETL support; bitmap indexes; partitioning support; and modularized deployment, so you can start small and add other subject areas seamlessly.

    Source-specific staging and real-time schema: support for a source-specific operational reporting schema for EBS, PSFT, Siebel and JDE.

    Application integrations: The BI apps also allow for integration with source systems as well as other applications that provide value-add through BI and enable BI consumption during operational decision making: embedded dashboards for Fusion, EBS and Siebel applications, Action Link support, marketing segmentation, the Sales Predictor dashboard, and territory management.

    External integrations: The BI apps data integration choices include support for loading external data, external data enrichment choices (UNSPSC, item class, etc.), and extensible spend classification.

    Broad deployment choices: Exalytics support; databases: Oracle, Exadata, Teradata, DB2, MSSQL; ETL tool of choice: ODI (coming), Informatica.

    Extensible and customizable: extensible architecture and methodology to add custom and external content, upgradable across releases.

    Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics as well as suggestions on what other topics you would like us to cover.

    Read the article

  • How to Control Screen Layouts in LightSwitch

    - by ChrisD
    Visual Studio LightSwitch has a bunch of screen templates that you can use to quickly generate screens. They give you good starting points that you can customize further. When you add a new screen to your project you see a set of screen templates that you can choose from. These templates lay out all the related data you choose to put on a screen automatically for you. And don't underestimate them; they do a great job of laying out controls in a smart way. For instance, a tab control will be used when you select more than one related set of data to display on a screen. However, you're not limited to taking the layout as is. In fact, the screen designer is pretty flexible and allows you to create stacks of controls in a variety of configurations. You just need to visualize your screen as a series of containers that you can lay out in rows and columns. You then place controls or stacks of controls into these areas to align the screen exactly how you want. If you're new to Visual Studio LightSwitch, you can see this tutorial. OK, let's start with a simple example. I have already designed my data entities for a simple order tracking system similar to the Northwind database. I have also added a Search Data Screen to search my Products. Now I will add a new Details Screen for my Products and make it the default screen via the “Add New Screen” dialog: The screen designer picks a simple layout for me based on the single entity I chose, in this case Product. Hit F5 to run the application and select a Product on the search screen to open the Product Details Screen. Notice that it's pretty simple because my entity is simple. Click the “Customize” button in the top right of the screen so we can start tweaking it. The left side of the screen shows the containership of controls and data bindings (called the content tree) and the right side shows the live preview with data. Notice that we have a simple layout of two rows but only one row is populated (with a vertical stack of controls in this case). The bottom row is empty. You can envision the screen like this: Each container will display a group of data that you select. For instance, in the above screen the top row is set to a vertical stack control and the group of data to display is coming from Product. So when laying out screens you need to think in terms of containers of controls bound to groups of data. To change the data to which a container is bound, select the data item next to the container: You can select the “New Group” item in order to create more containers (or controls) within the current container. For instance, to totally control the layout, select the Product in the top row and hit the delete key. This will delete the vertical stack and therefore all the controls on the screen. The content tree will still have two rows, but the rows are now both empty. If you want a layout of four containers (two rows and two columns) then select “New Group” for the data item and then change the vertical stack control to “Two Columns” for both of the rows as shown here: You can keep going on and on by selecting new groups and choosing between rows or columns.
    Here’s a layout with 8 containers, 4 rows and 2 columns: And here is a layout with 7 content areas; one row across the top of the screen and three rows with two columns below that: When you select Choose Content and select a data item like Product, it will populate all the controls within the container (row or column in a vertical stack); however, you have complete control over what to display within each group. You can delete fields you don't want to display and/or change their controls. You can also change the size of controls and how they display by changing the settings in the properties window. If you are in the Screen Designer (and not the customization mode like we are here) you can also drag-drop data items from the left-hand side of the screen to the content tree. Note, however, that not all areas of the tree will allow you to drop a data item if there is a binding already set to a different set of data. For instance you can't drop a Customer ID into the same group as a Product if they originate from different entities. To get around this, all you need to do is create a new group and content area as shown above. Let's take a more complex example that deals with more than just Product. I want to design a complex screen that displays Products and their Category, as well as all the OrderDetails for which that product is selected. This time I will create a new screen and select List and Details, select the Products screen data, and include the related OrderDetails. However, I'm going to totally change the layout so that a Product grid is at the top left and below that is the selected Product detail. Below that will be the Category text fields and image in two columns. On the right side I want the OrderDetails grid to take up the whole right side of the screen. All this can be done in customization mode while you're debugging the application. To do this, I first deleted all the content items in the tree and then re-created the content tree as shown in the image below. I also set the image to be larger and the description textbox to be 5 rows using the property window below the live preview. I added the green lines to indicate the containers and show how it maps to the content tree (click to enlarge): I hope this demystifies the screen designer a little bit. Remember that screen templates are excellent starting points – you can take them as-is or customize them further. It takes a little fooling around with customizing screens to get them to do exactly what you want but there are a ton of possibilities once you get the hang of it. Stay tuned for more information on how to create your own screen templates that show up in the “Add New Screen” dialog. Enjoy! A tutorial that might be of interest: Adding Custom Control In LightSwitch

    Read the article

  • POST XML to Spring REST server returns Unsupported Media Type

    - by Mayra
    I'm trying to create a simple Spring-based web service that supports a "post" with XML content. In Spring, I define an AnnotationMethodHandler: <bean id="inboundMessageAdapter" class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"> <property name="messageConverters"> <util:list> <bean class="org.springframework.http.converter.xml.MarshallingHttpMessageConverter"> <property name="marshaller" ref="xmlMarshaller"/> <property name="unmarshaller" ref="xmlMarshaller"/> </bean> </util:list> </property> </bean> And a JAXB-based XML marshaller: <bean id="xmlMarshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller"> <property name="contextPaths"> <array> <value>com.company.schema</value> </array> </property> <property name="schemas"> <array> <value>classpath:core.xsd</value> </array> </property> </bean> My controller is annotated as follows, where "Resource" is a class autogenerated by JAXB: @RequestMapping(method = POST, value = "/resource") public Resource createResource(@RequestBody Resource resource) { // do work } The result of a web service call is always "HTTP/1.1 415 Unsupported Media Type". Here is an example service call: HttpPost post = new HttpPost(uri); post.addHeader("Accept", "application/xml"); post.addHeader("Content-Type", "application/xml"); StringEntity entity = new StringEntity(request, "UTF-8"); entity.setContentType("application/xml"); post.setEntity(entity); It seems to me that I am setting the correct media type everywhere possible. Anyone have any ideas?
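    (Editor's note, not part of the original question and not a fix for the 415 by itself: to rule out the client side, the same request can be reproduced with nothing but the JDK's HttpURLConnection, which makes it easy to see exactly what the server returns when Accept and Content-Type are set explicitly. The URL and payload below are placeholders.)

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Sketch: re-issues the POST with the JDK's HttpURLConnection so the headers
    // sent and the status returned can be inspected independently of HttpClient.
    public class PostXmlProbe {
        public static void main(String[] args) throws Exception {
            String request = "<resource>...</resource>";                 // placeholder payload
            URL url = new URL("http://localhost:8080/app/resource");     // placeholder URL

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Accept", "application/xml");
            conn.setRequestProperty("Content-Type", "application/xml");

            try (OutputStream out = conn.getOutputStream()) {
                out.write(request.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println(conn.getResponseCode() + " " + conn.getResponseMessage());
        }
    }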

    Read the article

  • Multiple Exception Handlers for one exception type

    - by danish
    I am using Enterprise Library 4.1. I have created a custom exception handler called CustomHandler. This is what the configuration section looks like: <exceptionHandling> <exceptionPolicies> <add name="Exception Policy"> <exceptionTypes> <add type="System.Exception, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" postHandlingAction="NotifyRethrow" name="Exception"> <exceptionHandlers> <add type="WindowsFormsApplication1.CustomHandler, WindowsFormsApplication1" name="Custom Handler" /> <add exceptionMessage="Some test mesage." exceptionMessageResourceName="" exceptionMessageResourceType="" replaceExceptionType="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionHandlingException, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling" type="Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ReplaceHandler, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling" name="Replace Handler" /> </exceptionHandlers> </add> </exceptionTypes> </add> </exceptionPolicies> </exceptionHandling> There are two handlers for the same exception type. What I want is that, based on a certain condition, one of the handlers should handle the exception. Any ideas how that can be done? Is there a way to call the other handler from inside the HandleException method of the custom handler based on some condition?

    Read the article

  • Passing parameters to telerik asp.net mvc grid

    - by GlobalCompe
    I have a Telerik ASP.NET MVC grid which needs to be populated based on the search criteria the user enters in separate text boxes. The grid uses the ajax method to load itself initially as well as to do paging. How can one pass the search parameters to the grid so that it sends those parameters "every time" it calls the ajax method in response to the user clicking on another page to go to the data on that page? I read Telerik's user guide but it does not mention this scenario. The only way I have been able to do the above is by passing the parameters to the rebind() method on the client side using jQuery. The issue is that I am not sure if it is the "official" way of passing parameters which will always work even after updates. I found this method in this post on Telerik's site: link text I have to pass in multiple parameters. The action method in the controller, when called by the Telerik grid, runs the query again based on the search parameters. Here is a snippet of my code: $("#searchButton").click(function() { var grid = $("#Invoices").data('tGrid'); var startSearchDate = $("#StartDatePicker-input").val(); var endSearchDate = $("#EndDatePicker-input").val(); grid.rebind({ startSearchDate: startSearchDate , endSearchDate: endSearchDate }); } );

    Read the article

  • $facebook->getSession() returns null in the example code. is that ok?

    - by Toto
    Running the example code for the Facebook API I get a null session object, while I should get a non-null object given the comment in the code. What am I doing wrong? In other words, in my index.php this fragment from the example code shows "no session" when I go to http://apps.facebook.com/my_app in my browser: <?php require './facebook.php'; // Create our Application instance. $facebook = new Facebook(array( 'appId' => '...', // actual value replaced by '...' for this post 'secret' => '...', // actual value replaced by '...' for this post 'cookie' => true, )); // We may or may not have this data based on a $_GET or $_COOKIE based session. // // If we get a session here, it means we found a correctly signed session using // the Application Secret only Facebook and the Application know. We dont know // if it is still valid until we make an API call using the session. A session // can become invalid if it has already expired (should not be getting the // session back in this case) or if the user logged out of Facebook. $session = $facebook->getSession(); if ($session) { echo "session ok"; } else { echo "no session"; } ?> Note: on my server, index.php and facebook.php are in the same folder.

    Read the article

  • Git subtree not properly using .gitignore when doing a partial clone

    - by D W
    I am a graduate student with many scripts, bibliography data in BibTeX, a thesis draft in LaTeX, presentations in OpenOffice, posters in Scribus, and figures and result data. I would like to put everything in one project under version control. Then when I need to work on a portion such as the bibliography data, I would like to check that subdirectory out, modify it as necessary and merge it back. I would like the ability to check out one version to my home computer and a different one to my work computer, make changes to each independently and eventually merge them back. I would also like to be able to check out a piece of code from this big project and import it with versioning into a separate project. If I make changes I'd like to be able to merge them back to the original project. Based on my understanding, git subtree can do this. http://github.com/apenwarr/git-subtree There is an example that is along the lines of what I'm trying to do at: http://psionides.jogger.pl/2010/02/04/sharing-code-between-projects-with-git-subtree/ Say the trunk of my project contained the directories: (bib bin cfg data fig src todo). When I use git subtree split -P bib -b export followed by git checkout export, I get the bib directory, plus all the files that should have been ignored or considered binary based on .gitignore, such as the src directory and everything in it that ends in a tilde, or the ./data directory. dwickrama@DWwork:~/research/trunk$ ls * -r biblography.bib JabRef src: script1.sh~ README~ script2.sh~ script3.sh~ script4.R~ script5.awk~ script5.py~ cfg: cfgFile1.ini~ cfgFile2.ini~ cfgFile3.ini~ bin: bigBinaryPackage1 bigBinaryPackage2 dwickrama@DWwork:~/research/trunk$ My .gitignore file is as follows: *.doc diff=word *.tex diff=tex *.bib diff=bibtex *.py diff=python *.eps binary *.jpg binary *.png binary ./bin/* binary *~ How do I prevent this?

    Read the article

  • Android - Using PreferenceScreen to display and save settings to/from ContentProvider

    - by Donal Rafferty
    I have my own custom Content Provider that loads a database containing the settings information for my application. I load the settings from the ContentProvider on the creation of my Settings activity. My Settings activity is made up of a PreferenceScreen and dialog-based EditTexts. The following code shows how I use the preference screen and edit texts. So as you can see from the first image this works and displays the menu with the information underneath. The problem is in image two: when I click on a choice in the menu, the dialog pops up but it is empty. I would like to be able to load the data from my content provider into the edit text in the dialog, so in image one it shows "Donal" as the user name, so in image two "Donal" should also appear in the edit text in the dialog. I would also like to be able to listen to the OK button in the dialog so that when a user changes a setting I can update the data in my content provider. Can anyone help me with what I'm trying to do? Code // Root PreferenceScreen root = getPreferenceManager().createPreferenceScreen(this); // Dialog based preferences PreferenceCategory dialogBasedPrefCat = new PreferenceCategory(this); dialogBasedPrefCat.setTitle(R.string.dialog_based_preferences); root.addPreference(dialogBasedPrefCat); // Edit text preference EditTextPreference editTextPref = new EditTextPreference(this); editTextPref.setDialogTitle(R.string.dialog_title_edittext_preference); editTextPref.setKey("edittext_preference"); editTextPref.setTitle(R.string.title_edittext_preference); editTextPref.setSummary(name); dialogBasedPrefCat.addPreference(editTextPref); Image One Image Two
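    (Editor's note, not from the original post: one possible direction, sketched under assumptions. EditTextPreference.setText() pre-fills the EditText shown in the dialog, and a Preference.OnPreferenceChangeListener fires when the user presses OK, which is a natural place to push the new value back through the ContentResolver. SETTINGS_URI and COLUMN_USERNAME below are hypothetical placeholders for whatever the custom provider actually exposes; the snippet continues from the code above, inside the same activity.)

    // Sketch only: pre-populate the dialog with the value loaded from the provider.
    editTextPref.setText(name);

    // Called when OK is pressed in the dialog, before the new value is persisted.
    editTextPref.setOnPreferenceChangeListener(new Preference.OnPreferenceChangeListener() {
        @Override
        public boolean onPreferenceChange(Preference preference, Object newValue) {
            String updated = (String) newValue;
            ContentValues values = new ContentValues();
            values.put(COLUMN_USERNAME, updated);                           // hypothetical column name
            getContentResolver().update(SETTINGS_URI, values, null, null);  // hypothetical settings URI
            preference.setSummary(updated);                                 // keep the menu row in sync
            return true;                                                    // accept the change
        }
    });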

    Read the article

  • How to populate a core data store programmatically?

    - by jdmuys
    I have run out of hair to pull over a crash in this routine that populates a Core Data store from a 9000+ line plist file. The crash happened at the very end of the routine inside the call to [managedObjectContext save:&error]. If I save after every object insertion, however, the crash doesn't happen. Of course, saving after every object insertion totally kills the performance (from less than a second to many minutes). I modified my code so that it saves every K insertions, and the crash happens as soon as K = 2. The crash is an out-of-bounds exception for an NSArray: Serious application error. Exception was caught during Core Data change processing: *** -[NSCFArray objectAtIndex:]: index (1) beyond bounds (1) with userInfo (null) Also maybe relevant: when the exception happens, my fetched results controller's controllerDidChangeContent: delegate routine is in the call stack. It simply calls my table view's endUpdates routine. I am now running out of ideas. How am I supposed to populate a Core Data store backing a table view? Here is the call stack:
    #0 0x901ca4e6 in objc_exception_throw
    #1 0x01d86c3b in +[NSException raise:format:arguments:]
    #2 0x01d86b9a in +[NSException raise:format:]
    #3 0x00072cb9 in _NSArrayRaiseBoundException
    #4 0x00010217 in -[NSCFArray objectAtIndex:]
    #5 0x002eaaa7 in -[UITableView(_UITableViewPrivate) _endCellAnimationsWithContext:]
    #6 0x002def02 in -[UITableView endUpdates]
    #7 0x00004863 in -[AirportViewController controllerDidChangeContent:] at AirportViewController.m:463
    #8 0x01c43be1 in -[NSFetchedResultsController(PrivateMethods) _managedObjectContextDidChange:]
    #9 0x0001462a in _nsnote_callback
    #10 0x01d31005 in _CFXNotificationPostNotification
    #11 0x00011ee0 in -[NSNotificationCenter postNotificationName:object:userInfo:]
    #12 0x01ba417d in -[NSManagedObjectContext(_NSInternalNotificationHandling) _postObjectsDidChangeNotificationWithUserInfo:]
    #13 0x01c03763 in -[NSManagedObjectContext(_NSInternalChangeProcessing) _createAndPostChangeNotification:withDeletions:withUpdates:withRefreshes:]
    #14 0x01b885ea in -[NSManagedObjectContext(_NSInternalChangeProcessing) _processRecentChanges:]
    #15 0x01bbe728 in -[NSManagedObjectContext save:]
    #16 0x000039ea in -[AirportViewController populateAirports] at AirportViewController.m:112
    Here is the code to the routine. I apologize because a number of lines are probably irrelevant, but I'd rather err on that side.
The crash happens the very first time it calls [managedObjectContext save:&error]: - (void) populateAirports { NSBundle *meBundle = [NSBundle mainBundle]; NSString *dbPath = [meBundle pathForResource:@"DuckAirportsBin" ofType:@"plist"]; NSArray *initialAirports = [[NSArray alloc] initWithContentsOfFile:dbPath]; //********************************************************************************* // get existing countries NSMutableDictionary *countries = [[NSMutableDictionary alloc] initWithCapacity:200]; NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Country" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; NSError *error = nil; NSArray *values = [managedObjectContext executeFetchRequest:fetchRequest error:&error]; if (!values) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } int numCountries = [values count]; NSLog(@"We have %d countries in store", numCountries); for (Country *aCountry in values) { [countries setObject:aCountry forKey:aCountry.code]; } [fetchRequest release]; //********************************************************************************* // read airports int numAirports = 0; int numUnsavedAirports = 0; #define MAX_UNSAVED_AIRPORTS_BEFORE_SAVE 2 numCountries = 0; for (NSDictionary *anAirport in initialAirports) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; NSString *countryCode = [anAirport objectForKey:@"country"]; Country *thatCountry = [countries objectForKey:countryCode]; if (!thatCountry) { thatCountry = [NSEntityDescription insertNewObjectForEntityForName:@"Country" inManagedObjectContext:managedObjectContext]; thatCountry.code = countryCode; thatCountry.name = [anAirport objectForKey:@"country_name"]; thatCountry.population = 0; [countries setObject:thatCountry forKey:countryCode]; numCountries++; NSLog(@"Found %dth country %@=%@", numCountries, countryCode, thatCountry.name); } // now that we have the country, we create the airport Airport *newAirport = [NSEntityDescription insertNewObjectForEntityForName:@"Airport" inManagedObjectContext:managedObjectContext]; newAirport.city = [anAirport objectForKey:@"city"]; newAirport.code = [anAirport objectForKey:@"code"]; newAirport.name = [anAirport objectForKey:@"name"]; newAirport.country_name = [anAirport objectForKey:@"country_name"]; newAirport.latitude = [NSNumber numberWithDouble:[[anAirport objectForKey:@"latitude"] doubleValue]]; newAirport.longitude = [NSNumber numberWithDouble:[[anAirport objectForKey:@"longitude"] doubleValue]]; newAirport.altitude = [NSNumber numberWithDouble:[[anAirport objectForKey:@"altitude"] doubleValue]]; newAirport.country = thatCountry; // [thatCountry addAirportsObject:newAirport]; numAirports++; numUnsavedAirports++; if (numUnsavedAirports >= MAX_UNSAVED_AIRPORTS_BEFORE_SAVE) { if (![managedObjectContext save:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } numUnsavedAirports = 0; } [pool release]; }

    Read the article

  • shouldAutorotateToInterfaceOrientation called several times in a row without any rotation

    - by Mike
    I am trying to implement some interface changes in my app based on the device rotation. My app is a view-based app, so its main view controller has a viewDidLoad method. The app starts in portrait. Almost every change in device orientation triggers the shouldAutorotateToInterfaceOrientation method, but this method is not called when the device is put in portrait after coming from any landscape orientation. While debugging the app, I have put an NSLog(@"orientation=%d", interfaceOrientation); in my shouldAutorotateToInterfaceOrientation method, and what I see is quite strange: when I run the app, shouldAutorotateToInterfaceOrientation is called 6 times before the app's interface even appears. Every time it runs, it reports a different number for the orientation; the order reported on the console is portrait, portrait, portrait, landscape right, landscape left, upside down (????). During this time the app is just starting up. All 6 times the debugger reports the method being run by the app's delegate. So, here come the questions: Why is shouldAutorotateToInterfaceOrientation not being called when the device enters portrait? Why is this method run 6 times, called by the delegate, even before the app starts and shows its interface, if no rotation is being done? Thanks.

    Read the article

  • WCF, Metadata and BIGIP - Can I force the correct url for the WSDL items?

    - by Yossi Dahan
    We have a WCF service hosted on ServerA, which is a server with no direct Internet access and a non-Internet-routable IP address. The service is fronted by BIGIP, which handles SSL encryption and decryption and forwards the unencrypted request to ServerA (at the moment it does NOT actually do any load balancing, but that is likely to be added in the future) on a specific port. What that means is that our clients would be calling the service through https://www.OurDomain.com/ServiceUrl and would get to our service on http://ServerA:85/ServiceUrl through the BIGIP device. When we browse to the WSDL published on https://www.OurDomain.com/ServiceUrl, all the addresses contained in the WSDL are based on the http://ServerA:85/ServiceUrl base address. We figured out that we could use the host headers setting to set the domain, but our problem is that while this would sort out the domain, we would still be using the wrong scheme – it would use http://www.OurDomain.com/ServiceUrl while we need it to be https. Also – as we have other services (asmx based) hosted on that server we had some issues setting the host headers, and so we thought we could get away with creating another site on the server (using, say, port 82) and setting the host header on that; now, on top of the http/https problem, we have an issue as the WSDL contains the port number in all the URLs, whereas BIGIP works on port 443 (for the SSL). Is there a more flexible solution than implementing host headers? Ideally we need to retain flexibility and ease of supportability. Thanks for any help…

    Read the article

  • SHA512 vs. Blowfish and Bcrypt

    - by Chris
    I'm looking at hashing algorithms, but couldn't find an answer. Bcrypt uses Blowfish Blowfish is better than MD5 Q: but is Blowfish better than SHA512? Thanks.. Update: I want to clarify that I understand the difference between hashing and encryption. What prompted me to ask the question this way is this article, where the author refers to bcrypt as "adaptive hashing" http://chargen.matasano.com/chargen/2007/9/7/enough-with-the-rainbow-tables-what-you-need-to-know-about-s.html Since bcrypt is based on Blowfish, I was led to think that Blowfish is a hashing algorithm. If it's encryption as answers have pointed out, then seems to me like it shouldn't have a place in this article. What's worse is that he's concluding that bcrypt is the best. What's also confusing me now is that the phpass class (used for password hashing I believe) uses bcrypt (i.e. blowfish, i.e. encryption). Based on this new info you guys are telling me (blowfish is encryption), this class sounds wrong. Am I missing something?

    Read the article

  • iPod/iPhone OpenGL ES UIView flashes when updating

    - by Dave Viner
    I have a simple iPhone application which uses OpenGL ES (v1) to draw a line based on the touches of the user. In the XCode Simulator, the code works perfectly. However, when I install the app onto an iPod or iPhone, the OpenGL ES view "flashes" when drawing the line. If I disable the line drawing, the flash disappears. By "flash", I mean that the background image (which is an OpenGL texture) disappears momentarily, and then reappears. It appears as if the entire scene is completely erased and redrawn. The code which handles the line drawing is the following: renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end { static GLfloat* vertexBuffer = NULL; static NSUInteger vertexMax = 64; NSUInteger vertexCount = 0, count, i; //Allocate vertex array buffer if(vertexBuffer == NULL) vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat)); //Add points to the buffer so there are drawing points every X pixels count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1); for(i = 0; i < count; ++i) { if(vertexCount == vertexMax) { vertexMax = 2 * vertexMax; vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat)); } vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count); vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count); vertexCount += 1; } //Render the vertex array glVertexPointer(2, GL_FLOAT, 0, vertexBuffer); glDrawArrays(GL_POINTS, 0, vertexCount); //Display the buffer [context presentRenderbuffer:GL_RENDERBUFFER_OES]; } (This function is based on the function of the same name from the GLPaint sample application.) For the life of me, I can not figure out why this causes the screen to flash. The line is drawn properly (both in the Simulator and in the iPod). But, the flash makes it unusable. Anyone have ideas on how to prevent the "flash"?

    Read the article

  • codingBat repeatEnd using regex

    - by polygenelubricants
    I'm trying to understand regex as much as I can, so I came up with this regex-based solution to codingbat.com repeatEnd: Given a string and an int N, return a string made of N repetitions of the last N characters of the string. You may assume that N is between 0 and the length of the string, inclusive. public String repeatEnd(String str, int N) { return str.replaceAll( ".(?!.{N})(?=.*(?<=(.{N})))|." .replace("N", Integer.toString(N)), "$1" ); } Explanation on its parts: .(?!.{N}): asserts that the matched character is one of the last N characters, by making sure that there aren't N characters following it. (?=.*(?<=(.{N}))): in which case, use lookforward to first go all the way to the end of the string, then a nested lookbehind to capture the last N characters into \1. Note that this assertion will always be true. |.: if the first assertion failed (i.e. there are at least N characters ahead) then match the character anyway; \1 would be empty. In either case, a character is always matched; replace it with \1. My questions are: Is this technique of nested assertions valid? (i.e. looking behind during a lookahead?) Is there a simpler regex-based solution?
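    (Editor's note, not part of the original question: a tiny harness that drops the method into a class and runs it against the stock codingBat examples, which is a quick way to check any alternative regex against the expected output. The class name is arbitrary.)

    // Sketch: exercises the repeatEnd method above against the published codingBat examples.
    public class RepeatEndDemo {
        public String repeatEnd(String str, int N) {
            return str.replaceAll(
                ".(?!.{N})(?=.*(?<=(.{N})))|."
                    .replace("N", Integer.toString(N)),
                "$1");
        }

        public static void main(String[] args) {
            RepeatEndDemo d = new RepeatEndDemo();
            System.out.println(d.repeatEnd("Hello", 3)); // expected "llollollo"
            System.out.println(d.repeatEnd("Hello", 2)); // expected "lolo"
            System.out.println(d.repeatEnd("Hello", 1)); // expected "o"
        }
    }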

    Read the article

  • WPF - How to bind a DataGridTemplateColumn

    - by Andy T
    Hi, I am trying to get the name of the property associated with a particular DataGridColumn, so that I can then do some stuff based on that. This function is called when the user clicks the context menu item on the column's header... This is fine for the out-of-the-box, ready-rolled column types like DataGridTextColumn, since they are bound, but the problem is that some of my columns are DataGridTemplateColumns, which are not bound. private void GroupByField_Click (object sender, RoutedEventArgs e){ MenuItem mi = (MenuItem)sender; ContextMenu cm = (ContextMenu) mi.Parent; DataGridColumnHeader dgch = (DataGridColumnHeader) cm.PlacementTarget; DataGridBoundColumn dgbc = (DataGridBoundColumn) dgch.Column; Binding binding = (Binding) dgbc.Binding; string BoundPropName = binding.Path.Path; //Do stuff based on bound property name here... } So, take for example my 'Name' column... it's a DataGridTemplateColumn (since it has an image and some other stuff in there). Therefore, it is not actually bound to the 'Name' property... but I would like it to be, so that the above code will work. My question is two-part, really: 1) Is it possible to make a DataGridTemplateColumn be BOUND, so that the above code would work? Can I bind it somehow to a property? 2) Or do I need to do something entirely different, and change the code above? Thanks in advance! AT

    Read the article

  • Coding a parser for a domain specific language in Java

    - by Bruno Rothgiesser
    We want to design a simple domain specific language for writing test scripts to automatically test an XML-based interface of one of our applications. A sample test would be: Get an input XML file from a network shared folder or Subversion repository. Import the XML file using the interface. Check whether the import result message was successful. Export the XML corresponding to the object that was just imported using the interface and check whether it is correct. If the domain specific language can be declarative and its statements look as close to my sentences in the sample above as possible, it will be awesome, because people won't necessarily have to be programmers to understand/write/maintain the tests. Something like: newObject = GET FILE "http://svn/repos/template1.xml" responseMessage = IMPORT newObject newObjectID = GET PROPERTY '/object/id/' FROM responseMessage (..) But then I'm not sure how to implement a simple parser for that language in Java. Back in school, 10 years ago, I coded a language parser using Lex and Yacc for the C language. Maybe an approach would be to use some equivalent for Java? Or, I could give up the idea of having a declarative language and choose an XML-based language instead, which would possibly be easier to create a parser for? What approach would you recommend?
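    (Editor's note, not from the original question: as a rough illustration only, even a hand-rolled, line-oriented parser can get a syntax like the sample off the ground. Everything below, including the class name and the assumption that each statement is "variable = VERB arguments", is an assumption for the sketch, not a proposed design; for anything more ambitious, a parser generator such as ANTLR or JavaCC is the usual Java counterpart to Lex and Yacc.)

    import java.util.HashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Illustrative sketch: parses lines shaped like
    //   newObject = GET FILE "http://svn/repos/template1.xml"
    // into (variable, verb, arguments) and records them in a symbol table.
    public class TinyScriptParser {
        // group 1 = variable, group 2 = leading run of upper-case keywords, group 3 = the rest
        private static final Pattern STATEMENT =
            Pattern.compile("^\\s*(\\w+)\\s*=\\s*([A-Z ]+)\\s+(\\S.*)$");

        private final Map<String, String> symbols = new HashMap<>();

        public void parseLine(String line) {
            if (line.trim().isEmpty() || line.startsWith("#")) {
                return; // skip blank lines and comments
            }
            Matcher m = STATEMENT.matcher(line);
            if (!m.matches()) {
                throw new IllegalArgumentException("Cannot parse: " + line);
            }
            String variable = m.group(1);
            String verb = m.group(2).trim();   // e.g. "GET FILE", "IMPORT"
            String arguments = m.group(3);     // e.g. "\"http://svn/repos/template1.xml\""
            // A real implementation would dispatch to per-verb handlers here;
            // the sketch just records what was parsed.
            symbols.put(variable, verb + " " + arguments);
        }

        public static void main(String[] args) {
            TinyScriptParser p = new TinyScriptParser();
            p.parseLine("newObject = GET FILE \"http://svn/repos/template1.xml\"");
            p.parseLine("responseMessage = IMPORT newObject");
            System.out.println(p.symbols);
        }
    }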

    Read the article

  • CPUID on Intel i7 processors

    - by StarPacker
    I'm having an issue with my CPUID-based code on newer i7-based machines. It is detecting the CPU as having a single core with 8 HT units instead of 4 cores each with 2 HT units. I must be misinterpreting the results of the CPUID information coming back from the CPU, but I can't see how. Basically, I iterate through each processor visible to Windows, set thread affinity to that thread and then make a sequence of CPUID calls. args = new CPUID_Args(); args.eax = 1; executeHandler(ref args); if (0 != (args.edx & (0x1 << 28))) { //If the 28th bit in EDX is flagged, this processor supports multiple logical processors per physical package // in this case bits 23:16 of EBX should give the count. //** EBX here is 0x2100800 logicalProcessorCount = (args.ebx & 0x00FF0000) >> 16; //** this tells me there are 16 logical processors (wrong) } else { logicalProcessorCount = 1; } apic = unchecked((byte)((0xFF000000 & args.ebx) >> 24)); if (maximumSupportedCPUID >= 4) { args = new CPUID_Args(); args.eax = 4; executeHandler(ref args); //EAX now contains 0x1C004121 coreCount = 1 + ((args.eax & 0xFC000000) >> 26); //This calculates coreCount as 8 } else { coreCount = 1; } This sequence repeats for the remainder of the CPUs in the system. Has anyone faced this before?
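    (Editor's note, not part of the original question: to make the interpretation concrete, the bit arithmetic on the register values quoted in the comments works out as follows. This is a standalone sketch in plain Java rather than the original C#, using only the values given above, and it reproduces the 16 and 8 the code is reporting.)

    // Reproduces the extraction arithmetic from the question with the quoted register values.
    public class CpuidBits {
        public static void main(String[] args) {
            int ebxLeaf1 = 0x02100800;   // EBX after CPUID leaf 1, as quoted above
            int eaxLeaf4 = 0x1C004121;   // EAX after CPUID leaf 4, as quoted above

            int logicalProcessorCount = (ebxLeaf1 & 0x00FF0000) >> 16;      // bits 23:16
            int coreCount = 1 + ((eaxLeaf4 & 0xFC000000) >>> 26);           // bits 31:26, plus one

            System.out.println(logicalProcessorCount); // prints 16
            System.out.println(coreCount);             // prints 8
        }
    }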

    Read the article

  • Binding update adds new series to WPF Toolkit chart (instead of replacing/updating series)

    - by Mal Ross
    I'm currently recoding a bar chart in my app to make use of the Chart class in the WPF Toolkit. Using MVVM, I'm binding the ItemsSource of a ColumnSeries in my chart to a property on my viewmodel. Here's the relevant XAML: <charting:Chart> <charting:ColumnSeries ItemsSource="{Binding ScoreDistribution.ClassScores}" IndependentValuePath="ClassName" DependentValuePath="Score"/> </charting:Chart> And the property on the viewmodel: // NB: viewmodel derived from Josh Smith's BindableObject public class ExamResultsViewModel : BindableObject { // ... private ScoreDistributionByClass _scoreDistribution; public ScoreDistributionByClass ScoreDistribution { get { return _scoreDistribution; } set { if (_scoreDistribution == value) { return; } _scoreDistribution = value; RaisePropertyChanged(() => ScoreDistribution); } } However, when I update the ScoreDistribution property (by setting it to a new ScoreDistribution object), the chart gets an additional series (based on the new ScoreDistribution) as well as keeping the original series (based on the previous ScoreDistribution). To illustrate this, here are a couple of screenshots showing the chart before an update (with a single data point in ScoreDistribution.ClassScores) and after it (now with 3 data points in ScoreDistribution.ClassScores): Now, I realise there are other ways I could be doing this (e.g. changing the contents of the original ScoreDistribution object rather than replacing it entirely), but I don't understand why it's going wrong in its current form. Can anyone help?

    Read the article

  • Ideas for OpenSource CMS in ASP.NET MVC

    - by rajesh pillai
    I am in the process of collecting ideas for building an open-source CMS based on the ASP.NET framework. I have chosen ASP.NET MVC with jQuery as the tools to develop this. I have made this a community wiki. Background: Most of the good CMSes available are built on PHP, though of late CMSes built on the ASP.NET framework seem to be cropping up. I would like to collect ideas/suggestions/expectations for an open-source CMS on the ASP.NET platform. I am looking for the technology and features that you wish you could find in a modern CMS, and any other thoughts/ideas that come to mind. Your input would be of great help in this direction. Meanwhile I am also reviewing many open-source CMS systems built on ASP.NET, as well as MS Office SharePoint, to get ideas, and I will update my findings here for your reference. The following are some of the open-source CMSes/blog engines that I am in the process of reviewing: Oxite (ASP.NET MVC, the new kid on the block), WordPress, BlogEngine.NET, Umbraco. Some of the features that I can think of are noted below: simplified content creation; support for multiple content authors; metadata; a workflow engine; simplified deployment; list-based content (SharePoint-like); customizable URLs; content caching; roles (content author, content publisher, etc.); support for different content types (HTML, text, documents, images, videos); skinnable (extensible themes); localization and globalization; unlimited nesting of categories; ready-made templates for blogs, forums, and surveys; good documentation. You can add your own points or add some depth to any of the above features.
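
    As a concrete illustration of one wish-list item (customizable URLs) in ASP.NET MVC, a minimal sketch of a catch-all route that hands any otherwise unmatched path to a CMS content controller; the controller and parameter names are made up for the example:

        using System.Web.Mvc;
        using System.Web.Routing;

        public static class CmsRoutes
        {
            public static void Register(RouteCollection routes)
            {
                // Catch-all segment: "/about-us", "/products/widgets", etc. all resolve to CmsController.Render
                routes.MapRoute(
                    "CmsContent",
                    "{*slug}",
                    new { controller = "Cms", action = "Render" });
            }
        }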

    Read the article

  • Suggestions for doing async I/O with Task Parallel Library

    - by anelson
    I have some high-performance file transfer code which I wrote in C# using the Async Programming Model (APM) idiom (e.g. BeginRead/EndRead). This code reads a file from a local disk and writes it to a socket. For best performance on modern hardware, it's important to keep more than one outstanding I/O operation in flight whenever possible. Thus, I post several BeginRead operations on the file; when one completes, I call BeginSend on the socket, and when that completes I do another BeginRead on the file. The details are a bit more complicated than that, but at a high level that's the idea. I've got the APM-based code working, but it's very hard to follow and probably has subtle concurrency bugs. I'd love to use the TPL for this instead. I figured Task.Factory.FromAsync would just about do it, but there's a catch. All of the I/O samples I've seen (most particularly the StreamExtensions class in the Parallel Extensions Extras) assume one read followed by one write. This won't perform the way I need. I can't use something simple like Parallel.ForEach or the Extras extension Task.Factory.Iterate, because the async I/O tasks don't spend much time on a worker thread, so Parallel just starts up another task, resulting in potentially dozens or hundreds of pending I/O operations; way too many! You can work around that by waiting on your tasks, but that causes the creation of an event handle (a kernel object) and a blocking wait on the task's wait handle, which ties up a worker thread. My APM-based implementation avoids both of those things. I've been playing around with different ways to keep multiple read/write operations in flight, and I've managed to do so using continuations that call a method that creates another task, but it feels awkward and definitely doesn't feel like idiomatic TPL. Has anyone else grappled with an issue like this with the TPL? Any suggestions?
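
    A minimal double-buffering sketch built on Task.Factory.FromAsync (.NET 4). It keeps one read and one send in flight at a time by alternating two buffers; unlike the APM code described above it parks one thread in the loop (so it would be started as a LongRunning task), which is a deliberate simplification rather than a drop-in replacement:

        // requires System.IO and System.Threading.Tasks
        static void Pump(Stream source, Stream destination, int bufferSize)
        {
            byte[][] buffers = { new byte[bufferSize], new byte[bufferSize] };
            int current = 0;
            Task pendingSend = null;

            while (true)
            {
                byte[] buffer = buffers[current];

                // Wrap BeginRead/EndRead; this read overlaps the send started in the previous iteration.
                Task<int> read = Task<int>.Factory.FromAsync(
                    source.BeginRead, source.EndRead, buffer, 0, buffer.Length, null);

                if (pendingSend != null)
                    pendingSend.Wait();           // the buffer that was being sent is now free for the next read

                int bytesRead = read.Result;      // blocks only this dedicated thread
                if (bytesRead == 0)
                    break;

                // Wrap BeginWrite/EndWrite and immediately loop round to start the next read.
                pendingSend = Task.Factory.FromAsync(
                    destination.BeginWrite, destination.EndWrite, buffer, 0, bytesRead, null);

                current = 1 - current;            // flip buffers so the in-flight send keeps its data
            }

            if (pendingSend != null)
                pendingSend.Wait();
        }

    Turning those two Wait calls into continuations is exactly where it starts to feel awkward without compiler support, which is the crux of the question.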

    Read the article

  • UINavigationController with UIView and UITableView

    - by Tobster
    I'm creating a navigation-based app which displays a graph, rendered with OpenGL, and a table view listing disclosure buttons for all of the elements displayed on the graph, plus a settings disclosure button. The navigation controller is also the table view's delegate and data source; the table view is added to the view programmatically and has its delegate and data source set to 'self'. The OpenGL-based graph view is added via IB. The problem I'm having is that I'm trying to push a view controller (either settings or graph element properties) within the didSelectRowAtIndexPath method. The method fires and the new view is pushed on, but the table view stays and obscures part of the view that was pushed, as if it has a different navigation controller. I can't seem to make the table view's navigation controller the same as the one used by the rest of the views. Does anyone know how I could fix this? My navigation controller's initWithCoder method, where the table view is added, appears as follows: elementList = [[UITableView alloc] initWithFrame:tableFrame style:UITableViewStyleGrouped]; elementList.dataSource = self; elementList.delegate = self; [self.view addSubview:elementList]; Further down the source file, the didSelectRowAtIndexPath method where the new view controller is pushed appears as follows: Settings* Controller = [[Settings alloc] init]; [self pushViewController:Controller animated:YES]; [Controller release];

    Read the article

  • GWT Best Practices - MVP

    - by GWTNewbie
    A question for all the GWT gurus out there. I'm a newbie to GWT and am trying to understand the best practices for coding a GWT application. I have gone through "Large scale application development and MVP", based on Ray Ryan's talk at Google I/O 2009, and it has given me a good starting point. I also downloaded the sample source code for the Contacts application based on the best practices listed. The application I'm trying to develop using GWT is a bit bigger (in terms of the modules involved) than the sample "Contacts" application, so I want to split it up into multiple functional modules. I have been reading that having a single entry point in a GWT application is a good idea, and I don't want to dump all the code into one single AppController class and one single RpcService. What would be the best approach in this situation? How would I go about dispatching control to multiple controllers? Is there a way to achieve this using some classes in the GWT framework?

    Read the article

  • Maven deploy:deploy-file not found due to version/timestamp appended to jar

    - by JamesC
    I'm having a problem using deploy:deploy-file with snapshots that I'd like some advice on, please. I have two projects: 1) one Ant-based, and 2) one Maven-based that consumes the jars of the first project via Archiva. I've added a target to the Ant project to deploy snapshots on every successful build during our iteration. The problem is that the Maven project cannot find them, because the name of the dependency has a timestamp appended, like so: someJar-1.0-20100407.171211-1.jar Here is the Ant target: <exec executable="${maven.bin}" dir="../lib"> <arg value="deploy:deploy-file" /> <arg value="-DgroupId=com.my.package" /><arg value="-DartifactId=${ant.project.name}" /> <arg value="-Dversion=${manifest.implementation.version}-SNAPSHOT" /> <arg value="-Dpackaging=jar" /> <arg value="-Dfile=../lib/${ant.project.name}-${manifest.implementation.version}-SNAPSHOT.jar" /> <arg value="-Durl=http://archiva.xxx.com/archiva/repository/snapshots" /> <arg value="-DrepositoryId=snapshots" /> </exec> I have a similar Ant target for releases, and this works fine. Other pure Maven projects that deploy snapshots via mvn deploy also work fine. Does anyone know where I am going wrong? Thank you. Update: figured out the answer, see below.

    Read the article
