Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.


  • Painless deployment of a Django app (port from Drupal). Do I have to switch to a VPS?

    - by Monden
    I'm about to complete porting my Drupal-based community site to Django. My Drupal site has been hosted on shared hosting (Dreamhost) for the last 4 years, and stability and performance have been satisfactory. The site gets around 5k unique visitors with 70-80k page views a day. This will be my first deployment of a Django application and I'm not comfortable with managing my own VPS. I use Ubuntu as a dev server, but I don't have experience with it in a production environment. I have an unrelated internal CRM app (Django) that I host with Webfaction. However, security and performance aren't an issue there, as it's only accessed by 5 people. Unfortunately, I don't have much time to learn and maintain a VPS at this moment. I would like to know if I can host a site with this much traffic in Webfaction's shared environment. How would performance differ in comparison to Linode or Slicehost? Google App Engine isn't an option at the moment as I'll be using my current PostgreSQL database.

    Read the article

  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in planning, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all degenerate to the same answers. The hardware is a laptop with a single SSD. I'm trying not to lose the performance of the SSD. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one or the other environment depending on the task in hand. Sometimes I have to develop with Windows tools, but generally, Linux is my preferred development environment. It would be ideal if I could VM the other OS and run either in either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM? The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of this, one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk sharing requirements. However, it would be useful for the sandbox to be able to "mount" the code base from the natively running OS. Is this Samba or NFS again, depending on the native OS? Am I missing a trick which allows this to all work smoothly with all four environments running at once without losing the SSD performance?

    Read the article

  • Curious enigma of a network cable / connection / quality

    - by Foo Bar
    So, the situation is like this: I'm renting an apartment in a large house and I'm sharing internet with the landlord who lives downstairs. The internet is (at my best guess) optical 20/20 Mbit. I don't know how it's all wired in his flat (haven't been there / seen it). Anyway, into my flat comes a cable which seems to be connected directly to the optical-to-Ethernet router (and the password is the default one, so I have access, he he). There was a switch connected to that and to wires that go around the flat, and the wiring is terrible. It's even mixing phone and Ethernet, and from what I see some cables are even interconnected!? Anyway, the cable that comes to my flat is very short. I can barely connect my computer to it, but if I do, I seem to get decent speed / performance. Not great, but decent. If, however, I connect a switch to it (I tried 2 different switches and a wifi switch), it's all blinking but I can't even connect to 192.168.1.1 (the router). DHCP fails, ping is losing 80-100% of replies. So I connected this cable directly to the other cable which goes to my work room, with a connector that has two female jacks and no electronics. Now when I connect my computer in my room, again, the performance is decent. When I connect a WRT54GL (with Tomato, DHCP disabled) to it and plug a cable into this WRT and into my computer... the performance is gone. Download seems okay on Speedtest, but upload is 0.2 Mbps and it takes forever to connect. So what kind of cable troll am I dealing with here? Any ideas?

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities on their weekend, like traveling, reading or just nothing. Every weekend I try to do something creative and different in the database world. The goal is to learn something new, and if I enjoy the learning experience I share it with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them. I should highlight that today's applications use multiple databases, from SQL for transactions and analytics and NoSQL for documents, to in-memory for caching and indexing for search. Provisioning and deploying these databases often requires extensive expertise and time. Often these databases are also not deployed on the same infrastructure and can create unnecessary latency between the application layer and the databases. Not to mention the different quality of service based on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with a traditional database setup when hosted in the cloud:
    - Database provisioning & orchestration
    - Slow speed due to hardware issues
    - Poor monitoring tools
    - High network latency
    Now if you have great software and an expert network engineer, you can continuously work on the above problems and overcome them. However, not every organization has the luxury of top-notch experts in the field. The above issues are related to infrastructure, but there are a few more problems which are related to the software/application as well. Here are the top three things which can be problems if you do not have an application expert:
    - Replication and clustering
    - Simple provisioning of hard drive space
    - Automatic sharding
    Well, Morpheus looks like a product built by experts who have faced similar situations in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases, from MySQL, MongoDB and ElasticSearch to Redis, as a service. Thus users can pick and choose any combination of these databases. All of them can be provisioned in a matter of minutes with a simple and intuitive point-and-click user interface. The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions. In addition it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases. Here are a few steps on how one can get started with Morpheus. Follow along with me. First go to http://www.gomorpheus.com and register for a new, free account.
    Step 1: Sign up. It is very simple to sign up for Morpheus.
    Step 2: Select your database. I use MySQL for my daily routine, so I selected MySQL. Upon clicking the big red button to add an instance, it prompted a dialog for creating a new instance.
    Step 3: Create a user. Now we just have to create a user in our portal which we will use to connect to the database hosted at Morpheus. Click on your database instance and it will bring you to the User screen. Here you will notice once again a big red button to create a new user. I created a user with my first name.
    Step 4: Configure your MySQL client. I used MySQL Workbench and connected to the MySQL instance I had created, using the IP address and user.
    That's it! You are connected to your MySQL instance. Now you can create your objects just like you would on your local box. You will have all the features of Morpheus when you are working with your database.
    Dashboard: While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature. Also, with Morpheus you use the same process for provisioning and connecting with the other databases: MongoDB, ElasticSearch and Redis.
    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL analytical mash-ups deliver real-time WOW! for big data

    - by KLaker
    One of the overlooked capabilities of SQL as an analysis engine, because we all just take it for granted, is that you can mix and match analytical features to create some amazing mash-ups. As we move into the exciting world of big data, these mash-ups can really deliver those "wow, I never knew that" moments. While Java is an incredibly flexible and powerful framework for managing big data, there are some significant challenges in using Java and MapReduce to drive your analysis to create these "wow" discoveries. One of these "wow" moments was demonstrated at this year's OpenWorld during Andy Mendelsohn's general keynote session. Here is the scenario - we are looking for fraudulent activities in our big data stream, and in this case we identify potentially fraudulent activities by looking for specific patterns. We use geospatial tagging of each transaction so we can create a real-time fraud map for our business users. Where we start to move towards a "wow" moment is to extend this basic use of spatial and pattern matching, as shown in the above dashboard screen, to incorporate spatial analytics within the SQL pattern matching clause. This allows us to compute the distance between transactions. Apologies for the quality of this screenshot… hopefully below you can see where we have extended our SQL pattern matching clause to use the location of each transaction and to calculate the distance between transactions. This allows us to compare the time of the last transaction with the time of the current transaction and see whether the distance between the two points is possible given the time frame. Obviously, if I buy something in Florida from my favourite bike store (maybe a new carbon saddle for my Trek) and then 5 minutes later the system sees my credit card details being used in Arizona, there is a high probability that the transaction in Arizona is actually fraudulent (I am fast on my Trek but not that fast!) and we can flag this up in real time on our dashboard. In this post I have used the term "real-time" a couple of times, and this is an important point and one of the key reasons why SQL really is the only language to use if you want to analyse big data. One of the most important questions that comes up in every big data project is: how do we do analysis? Many enlightened customers are now realising that using Java-MapReduce to deliver analysis does not result in "wow" moments. These "wow" moments only come with SQL because it offers a much richer environment, it is simpler to use and it is faster - which makes it possible to deliver real-time "Wow!". Below is a slide from Andy's session showing the results of a comparison of Java-MapReduce vs. SQL pattern matching to deliver our "wow" moment during our live demo. You can watch our analytical mash-up "Wow" demo that compares the power of 12c SQL pattern matching + spatial analytics vs. Java-MapReduce here: You can get more information about SQL Pattern Matching on our SQL Analytics home page on OTN, see here: http://www.oracle.com/technetwork/database/bi-datawarehousing/sql-analytics-index-1984365.html. You can get more information about our spatial analytics here: http://www.oracle.com/technetwork/database-options/spatialandgraph/overview/index.html If you would like to watch the full Database 12c OOW presentation, see here: http://medianetwork.oracle.com/video/player/2686974264001

    Read the article

  • Partner Blog: Hub City Media Introduces iPad Application for Oracle Identity Analytics

    - by Tanu Sood
    About the writer: Steve Giovannetti is CTO of Hub City Media, Inc., a company that specializes in implementation and product development on the Oracle Identity Management platform. Recently, Hub City Media announced the introduction of the iPad application IdentityCert for Oracle Identity Analytics. This post explores the business use cases and application of IdentityCert.
    Hub City Media (HCM) has been deploying certification solutions based on Oracle Identity Analytics since it first appeared on the market as Vaau RBACx. With each deployment we've seen the same pattern repeat time and time again:
    1. Customers suffering under the weight of manual access certification regimens deploy Oracle Identity Analytics (OIA) for automated certification.
    2. OIA improves the frequency, speed, accuracy, and participation of certifications across the organization.
    3. Then the certifiers, typically managers and supervisors, ask, "Is there any easier way to do these certifications offline?"
    The current version of OIA has a way to export certification data to a spreadsheet. For some customers, we've leveraged this feature and combined it with some of our own custom code to provide a solution based on spreadsheet exports and imports. Customers export the certification to Microsoft Excel, complete it, and then import the spreadsheet to OIA. It worked well for offline certification, but if the user accidentally altered the format of the spreadsheet, the import of the data could fail. We were close to a solution but it wasn't reliable.
    Over the past few years, we've seen the proliferation of Apple iOS devices, specifically the iPhone and iPad, in the enterprise. As our customers were asking for offline certification, we noticed that the same population of users traditionally responsible for access certification were early adopters of the iPad. The environment seemed ideal for us to create an iPad application to support offline certifications using Oracle Identity Analytics. That's why we created IdentityCert™.
    IdentityCert allows users to view their analytics dashboard, complete user certifications, and resolve policy violations with OIA, from their iPads. The current IdentityCert analytics dashboard displays the same charts that are available in the Oracle Identity Analytics product. However, we plan to expand the number of available analytics in future releases. The main function of IdentityCert is user certification, which can be performed quickly and efficiently using a simple touch interface. Managers tap into a certification and use simple gestures to claim users and certify their access. Certifications can be securely downloaded to IdentityCert and can be completed with or without a network connection. The user can upload the completed certifications once they are connected to a cellular or wi-fi network. Oracle Identity Analytics can generate policy violation notifications based on detective scans of the identity warehouse or via preventative analysis of identity access requests. IdentityCert allows users to view all policy violations and resolve or delegate them to appropriate users. IdentityCert also analyzes the policy violation expression and produces more human-friendly descriptions of the policy violation, which improves the ability of users to resolve the violation. IdentityCert can be deployed quickly into a customer's environment.
    It is deployed with Hub City Media's ID Services to connect Oracle Identity Analytics securely with the iPad application. Oracle Identity Management 11g R2 is an important evolutionary release. Oracle's Identity Management suite has more characteristics of a cohesive platform. This platform provides an integrated set of identity services that can be used to protect, manage, and audit security within the enterprise. At HCM we take the platform concept a step further and see it as an opportunity to create unique solutions for Oracle Identity Management customers. IdentityCert is our commitment to this platform. You can download IdentityCert from the Apple iOS App Store today. It includes a demo dataset that you can use to explore the functions of the product without any server infrastructure. Download it. Give it a try. We would appreciate your interest and welcome any feedback.
    Resources:
    Press Release: Hub City Media Introduces iPad Application IdentityCert™ for Oracle Identity Analytics
    App Store Download: http://bit.ly/IdentityCert
    Oracle Identity Governance Suite

    Read the article

  • What is the difference between Workcenters, Dashboards, and the Interaction Hub?

    - by Matthew Haavisto
    Oracle OpenWorld has just concluded. Over the course of the conference, we presented several sessions covering different aspects of the PeopleSoft user experience, including Workcenters, Dashboards, and the PeopleSoft Interaction Hub (formerly known as the PeopleSoft Applications Portal). Although we've produced collateral on these features and covered them in sessions, it became apparent at the conference that customers still have many questions about these products, including how they are licensed, how they are installed, what their various purposes are, and how they can be used together synergistically.
    Let's Start with Licensing and Installation
    As you may know, we've extended the restricted use license (RUL) for the Interaction Hub. This grants customers with PeopleTools 8.52 licenses the right to install the Interaction Hub for free, for use as specified in the Tools license notes. Note that this means customers receive a restricted use license for the Interaction Hub that doesn't cost them an additional license fee, but it is a separate product, not part of PeopleTools or PeopleSoft applications, and is a separate installation. This means customers must provide the infrastructure to install and run the Hub, just like any other application. The benefits of using the Hub to unify your PeopleSoft user experience can be great. PeopleSoft applications have not yet delivered instances of the Hub with their products, though they may in the future. Workcenters and Dashboards, on the other hand, are frameworks provided by PeopleTools. No other license is required, and no additional installation of a separate product is needed (apart from PeopleTools and PeopleSoft applications). PeopleSoft applications are delivering instances of the workcenters and dashboards with their products. Some are available now, and more are coming in future releases. These delivered workcenter and dashboard instances require no additional licenses, and no additional installations beyond Tools and the applications that provide them. In addition, the workcenter and dashboard frameworks provided by PeopleTools can be used by customers to build their own workcenters and dashboards, and it's quite easy and simple to do so.
    What are Their Differences? What Purposes do they Serve?
    Workcenters, Dashboards and the Interaction Hub appear somewhat similar. They all contain pagelets, and have some visual characteristics in common. However, their strengths and purposes are very different, and they were designed to provide different benefits to your PeopleSoft ecosystem. Workcenters and Dashboards have the following characteristics:
    - Designed for specific roles
    - Focus on the daily tasks of those roles
    - Help to streamline the work performed most often
    - Personal view of my work world
    - Make navigation and search easier and quicker, particularly for transactions and decision support
    - Reports and data needed for day-to-day work
    - Personalizable, but minimal
    - Delivered by PeopleSoft applications, but can be altered by customers for their requirements
    - Customers can create their own
    - Workcenters can be used for guided processes
    The Interaction Hub is designed to aggregate content from multiple applications, and is used to unify the user experience of those applications. It offers a rich, web site-based user experience, and is often used to provide access to infrequently performed activities like benefits enrollment, payroll inquiries, life event changes, onboarding, and so on. Its characteristics include:
    - Full-featured and robust
    - Centrally administered
    - Pushed to a large audience
    - Broad info like company news
    - Infrequent activities like benefits, not day-to-day tasks
    - Self-service, access to employer info
    - Central launch point for other activities; can navigate to workcenters and dashboards
    - Deployed by customers or consultants; instances not delivered by PeopleSoft (at this time)
    - Content management
    - Unified PeopleSoft application navigation
    Although these products are quite different and serve different purposes in your PeopleSoft environment, they can be used together to provide a richer, more efficient and engaging user experience for all your user communities.

    Read the article

  • Find Knowledge Quickly

    - by Get Proactive Customer Adoption Team
    Get to relevant knowledge on the Oracle products you use in a few quick steps! Customers tell us that the volume of search results returned can make it difficult to find the information they need, especially when similar Oracle products exist. These simple tips show you how to filter, browse, search, and refine your results to get relevant answers faster.
    Filter first: PowerView is your best friend
    PowerView is an often ignored feature of My Oracle Support that enables you to control the information displayed on the Dashboard, the Knowledge tab and regions, and the Service Request tab based on one or more parameters. You can define a PowerView to limit information based on product, product line, support ID, platform, hostname, system name and others. Using PowerView allows you to restrict:
    - Your search results to the filters you have set
    - The product list when selecting your products in Search & Browse and when creating service requests
    The PowerView menu is at the top of My Oracle Support, near the title. You turn PowerView on by clicking the PowerView is Off button. When PowerView is on and filters are active, clicking the button again toggles PowerView off. Click the arrow to the right to create new filters, edit filters, remove a filter, or choose from the list of previously created filters. You can create a PowerView in 3 simple steps:
    1. Turn PowerView on and select New from the PowerView menu.
    2. Select your filter from the Select Filter Type dropdown list and make selections from the other two menus. Hint: While there are many filter options, selecting your product line or your list of products will provide you with an effective filter. Click the plus sign (+) to add more filters. Click the minus sign (-) to remove a filter.
    3. Click Create to save and activate the filter(s).
    You'll notice that PowerView is On displays along with the active filters. For more information about the PowerView capabilities, click the Learn more about PowerView… menu item or view a short video.
    Browse & Refine: Access the Best Match Fast for Your Product and Task
    In the Knowledge Browse region of the Knowledge or Dashboard tabs, pick your product, pick your task, and select a version, if applicable. A best match document (a collection of knowledge articles and resources specific to your selections) may display, offering you a one-stop shop. The best match document, called an "information center," is an aggregate of dynamically updated links to information pertinent to the product, task, and version (if applicable) you chose. These documents are refreshed every 24 hours to ensure that you have the most current information at your fingertips. Note: Not all products have "information centers." If no information center appears as a best match, click Search to see a list of search results. From the information center, you can access topics from a product overview to security information, as shown in the left menu.
    Just want to search? That's easy too! Again, pick your product, pick your task, select a version, if applicable, enter a keyword term, and click Search. Hint: In this example, you'll notice that PowerView is on and set to PeopleSoft Enterprise. When PowerView is on and you select a product from the Knowledge Base product list, the listed products are limited to the active PowerView filter. (Products you've previously picked are also listed at the top of the dropdown list.) Your search results are displayed based on the parameters you entered. It's that simple!
    Related Information:
    - My Oracle Support - User Resource Center [ID 873313.1]
    - My Oracle Support Community
    For more tips on using My Oracle Support, check out these short video training modules: My Oracle Support Speed Video Training [ID 603505.1]

    Read the article

  • ASP.NET MVC Routing - Redirect to aspx?

    - by bmoeskau
    This seems like it should be easy, but for some reason I'm having no luck. I'm migrating an existing WebForms app to MVC, so I need to keep the root of the site pointing to my existing aspx pages for now and only apply routing to named routes. Here's what I have: public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.IgnoreRoute("{resource}.aspx/{*pathInfo}"); RouteTable.Routes.Add( "Root", new Route("", new DefaultRouteHandler()) ); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Calendar2", action = "Index", id = "" } // Parameter defaults ); } So aspx pages should be ignored, and the default root url should be handled by this handler: public class DefaultRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { return System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath( "~/Dashboard/default.aspx", typeof(Page)) as IHttpHandler; } } This seems to work OK, but the resulting YPOD gives me this: Multiple controls with the same ID '__Page' were found. Trace requires that controls have unique IDs. which seems to imply that the page is somehow getting rendered twice. If I simply type in the url to my dashboard page directly it works fine (no routing, no error). I have no idea why the handler code would be doing anything differently. Bottom line -- I'd like to simply redirect the root url path to an aspx of my choosing -- can anyone shed some light?

    Read the article

  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
    Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower end machines (switch from Path to StreamGeometry, use DrawingVisual instead of UserControl, etc.) the CPU usage still seems very high. Here are some numbers I ran from a few 5 minute sampling periods:
    - ~60FPS, 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB RAM, NVIDIA Quadro NVS 140M (128MB), Vista [my dev laptop]
    - ~40FPS, 50% average CPU on Pentium D @ 3.4GHz, 1.5GB RAM, Standard VGA Graphics Adapter (unknown), 2003 Server [a crappy desktop]
    I can understand the lower frame rate and higher CPU usage on the crappy desktop, but it still seems pretty high, and 35% on my dev laptop seems high as well. I'd really like to analyze the application to get more details, but I'm having issues there as well, so I'm wondering if I'm doing something wrong (never profiled WPF before). WPF Performance Suite: Process Launch Error Unable to attach to process: CCHearts.exe Do you want to kill it? This error message occurs when I click cancel after attempting launch. If I don't click cancel it sits there idle, I guess waiting to attach. Performance Explorer: Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application. Output Window from Performance: Profiling started. Profiling process ID 5360 (CCHearts). Process ID 5360 has exited. Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp. Profiling finished. PRF0025: No data was collected. Profiling complete. So I'm stuck wanting to improve performance but have no concrete way to determine where the bottleneck is. I have been relatively successful throwing darts at this point, but I'm beyond that now :) PS: The screensaver is hosted at CodePlex if you want to look at the source and missed the link above. Edit: My RenderOptions darts... // NOTE: Grasping at straws here ;-) RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality); RenderOptions.SetCachingHint(newHeart, CachingHint.Cache); RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased); I threw those in a little while back and didn't see much difference (not sure if the bitmap scaling even comes into play). I really wish I could get profiling working so I know where I should try to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them in a queue once they completed or something) I'd see an improvement. Shrug. Throwing darts while blindfolded is always fun.
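    One more dart that often cuts per-frame CPU in this kind of animation, alongside the RenderOptions tweaks above, is freezing geometries and brushes that never change so WPF can skip change tracking. A minimal, hypothetical sketch (the shape and names are made up, not taken from the CCHearts source):

    ```csharp
    using System.Windows;
    using System.Windows.Media;

    internal static class HeartResources
    {
        // Build the outline once, then Freeze it so WPF can share it across
        // frames (and the render thread) without tracking changes.
        public static Geometry CreateHeartGeometry()
        {
            var geometry = new StreamGeometry();
            using (StreamGeometryContext ctx = geometry.Open())
            {
                ctx.BeginFigure(new Point(0, 0), isFilled: true, isClosed: true);
                ctx.LineTo(new Point(10, 20), isStroked: true, isSmoothJoin: false);
                ctx.LineTo(new Point(20, 0), isStroked: true, isSmoothJoin: false);
            }
            geometry.Freeze();   // immutable from here on; cheaper to render repeatedly
            return geometry;
        }

        // Frozen brushes avoid per-frame change-notification overhead.
        public static Brush CreateHeartBrush()
        {
            var brush = new SolidColorBrush(Colors.Crimson);
            brush.Freeze();
            return brush;
        }
    }
    ```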

    Read the article

  • Which MacBook(Pro) for running Visual Studio 2010 on VMWare Fusion on a Mac?

    - by Greg
    Hi, does anyone have experience running Visual Studio 2010 on a MacBook or MacBook Pro (via VMware Fusion)? Any feedback / advice based on your experience regarding what level of MacBook Pro (i.e. CPU type, CPU speed) you would target to get reasonable/good performance from VS2010 on it? (I'm just concerned that if I get a base-level MacBook Pro 13" 2.4GHz Core 2 Duo, I'll be frustrated with the performance.)

    Read the article

  • DDD/NHibernate Use of Aggregate root and impact on web design - ex. Editing children of aggregate root

    - by pbrophy
    Hopefully, this fictitious example will illustrate my problem: Suppose you are writing a system which tracks complaints for a software product, as well as many other attributes about the product. In this case the SoftwareProduct is our aggregate root and Complaints are entities that can only exist as a child of the product. In other words, if the software product is removed from the system, so shall the complaints. In the system, there is a dashboard-like web page which displays many different aspects of a single SoftwareProduct. One section in the dashboard displays a list of Complaints in a grid-like fashion, showing only some very high level information for each complaint. When an admin type user chooses one of these complaints, they are directed to an edit screen which allows them to edit the detail of a single Complaint. The question is: what is the best way for the edit screen to retrieve the single Complaint, so that it can be displayed for editing purposes? Keep in mind we have already established the SoftwareProduct as an aggregate root, therefore direct access to a Complaint should not be allowed. Also, the system is using NHibernate, so eager loading is an option, but my understanding is that even if a single Complaint is eager loaded via the SoftwareProduct, as soon as the Complaints collection is accessed the rest of the collection is loaded. So, how do you get the single Complaint through the SoftwareProduct without incurring the overhead of loading the entire Complaints collection?
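    One common way to square this circle is to put the query behind a repository method that still speaks in terms of "complaints of this product", so callers never query Complaint on its own, while NHibernate hydrates only the one entity. A hedged sketch using the question's class names (the repository, method and Id properties are assumptions for illustration):

    ```csharp
    using NHibernate;

    public class SoftwareProductRepository
    {
        private readonly ISession _session;

        public SoftwareProductRepository(ISession session)
        {
            _session = session;
        }

        // Load a single complaint for editing without touching
        // product.Complaints, which would pull in the whole collection.
        public Complaint GetComplaint(int productId, int complaintId)
        {
            return _session
                .CreateQuery(
                    "select c from SoftwareProduct p join p.Complaints c " +
                    "where p.Id = :productId and c.Id = :complaintId")
                .SetInt32("productId", productId)
                .SetInt32("complaintId", complaintId)
                .UniqueResult<Complaint>();
        }
    }
    ```

    The aggregate boundary is preserved in the sense that the HQL still navigates through SoftwareProduct, but only the single Complaint row is loaded for the edit screen.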

    Read the article

  • C# memory management: unsafe keyword and pointers

    - by Alerty
    What are the consequences (positive/negative) of using the unsafe keyword in C# to use pointers? For example, what becomes of garbage collection, what are the performance gains/losses, what are the performance gains/losses compared to other languages' manual memory management, what are the dangers, and in which situations is it really justifiable to make use of this language feature?
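    For a concrete feel for what is being asked about, here is a minimal sketch of unsafe pointer access over a managed array (illustrative only; the project must be compiled with /unsafe, and whether this beats the bounds-checked loop depends on the JIT and the workload):

    ```csharp
    public static class PointerSum
    {
        public static unsafe long Sum(int[] values)
        {
            long total = 0;
            fixed (int* p = values)          // pin the array so the GC cannot move it
            {
                for (int i = 0; i < values.Length; i++)
                    total += p[i];           // raw pointer arithmetic, no bounds checks
            }
            return total;                    // array is unpinned when the fixed block ends
        }
    }
    ```

    The pinning in the fixed block is also where the garbage collection cost shows up: pinned objects constrain the compacting collector, which is one reason to keep unsafe regions small and short-lived.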

    Read the article

  • Typus not working in Heroku (error 500) what im doing wrong?

    - by Victor P
    Has anyone used Typus (an admin plugin for Rails) on Heroku? http://intraducibles.com/projects/typus/install I followed the instructions and on my local machine (Rails 2.3.5) it works fine, but when I deploy to Heroku it crashes. What am I doing wrong? The log: Logfile created on Mon Mar 29 18:14:06 -0700 2010 Processing TypusController#dashboard (for 190.196.113.93 at 2010-03-29 18:14:07) [GET] Parameters: {"action"=>"dashboard", "controller"=>"typus"} ActiveRecord::StatementInvalid (PGError: ERROR: relation "typus_users" does not exist : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum WHERE a.attrelid = '"typus_users"'::regclass AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum ): vendor/plugins/typus/app/controllers/typus_controller.rb:128:in `verify_typus_users_table_schema' /home/heroku_rack/lib/static_assets.rb:9:in `call' /home/heroku_rack/lib/last_access.rb:25:in `call' /home/heroku_rack/lib/date_header.rb:14:in `call' thin (1.0.1) lib/thin/connection.rb:80:in `pre_process' thin (1.0.1) lib/thin/connection.rb:78:in `catch' thin (1.0.1) lib/thin/connection.rb:78:in `pre_process' thin (1.0.1) lib/thin/connection.rb:57:in `process' thin (1.0.1) lib/thin/connection.rb:42:in `receive_data' eventmachine (0.12.6) lib/eventmachine.rb:240:in `run_machine' eventmachine (0.12.6) lib/eventmachine.rb:240:in `run' thin (1.0.1) lib/thin/backends/base.rb:57:in `start' thin (1.0.1) lib/thin/server.rb:150:in `start' thin (1.0.1) lib/thin/controllers/controller.rb:80:in `start' thin (1.0.1) lib/thin/runner.rb:173:in `send' thin (1.0.1) lib/thin/runner.rb:173:in `run_command' thin (1.0.1) lib/thin/runner.rb:139:in `run!' thin (1.0.1) bin/thin:6 /usr/local/bin/thin:20:in `load' /usr/local/bin/thin:20 Rendering /disk1/home/slugs/157361_3469154_60b1/mnt/public/500.html (500 Internal Server Error)

    Read the article

  • Using jQuery.UI CSS Framework for DIV Styling

    - by Gordon
    Hi folks, a project I am currently working on uses the jQuery UI framework for some of its widgets. To provide the user with a global look and feel, I would like to use the framework for its CSS as well. At the moment I am implementing a dashboard-like homepage, where the user can see an overall status of their data. This dashboard is built from some divs that should be aligned in a grid layout. I try to style the divs as follows <div class="ui-widget"> <div class="ui-widget-header">Box Header</div> <div class="ui-widget-content"> Content of the Box </div> </div> Later I would like to implement some draggable-and-sortable functionality. The problem I am facing right now is that the boxes aren't properly aligned. Does anyone have a hint on using jQuery.UI for that kind of CSS work? I was studying the CSS framework documentation on jqueryui.com, but there isn't that much information there. best regards, Gordon

    Read the article

  • Mongoid or MongoMapper?

    - by PanosJee
    I have tried MongoMapper and it is feature complete (offering almost all AR functionality), but I was not very happy with the performance when using large datasets. Has anyone compared it with Mongoid? Any performance gains?

    Read the article

  • How to modify code so that it adheres to the Law of Demeter

    - by guazz
    public class BigPerformance { public decimal Value { get; set; } } public class Performance { public BigPerformance BigPerf { get; set; } } public class Category { public Performance Perf { get; set; } } If I call: Category cat = new Category(); cat.Perf.BigPerf.Value = 1.0m; I assume this breaks the LoD? If so, how do I remedy this if I have a large number of inner class properties?
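    As a rough illustration of one remedy (a sketch, not the only answer): let Category expose just the values its callers actually need, so client code stops reaching through Perf.BigPerf. The delegating property name below is made up for the example; with many inner properties, the usual advice is to expose only the handful that outside code genuinely uses, or pass the inner object itself to the code that works on it.

    ```csharp
    public class Category
    {
        public Performance Perf { get; set; }

        // Delegating property: callers talk only to Category,
        // not to its internal object graph.
        public decimal BigPerformanceValue
        {
            get { return Perf.BigPerf.Value; }
            set { Perf.BigPerf.Value = value; }
        }
    }

    // Caller no longer chains through the graph:
    // cat.BigPerformanceValue = 1.0m;
    ```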

    Read the article

  • Windows RPC vs XML-RPC

    - by Y.Z
    Is there any benchmark comparing the encoding/decoding of common typed data in the Microsoft RPC NDR engine (DCE 1.1) with the de facto C/C++ implementation of XML-RPC (XML-RPC-C/C++)? I have to choose between Windows RPC and XML-RPC-C/C++ to implement my own common object infrastructure for High Performance Computing on Windows. Any recommendation on which to use with regard to performance? Thank you. Best Regards, Yang

    Read the article

  • StringBuilder vs XmlTextWriter

    - by Wololo
    I am trying to squeeze as much performance as I can from a custom HttpHandler that serves XML content. I'm wondering which is better for performance: using the XmlTextWriter class, or ad-hoc StringBuilder operations like: StringBuilder sb = new StringBuilder("<?xml version=\"1.0\" encoding=\"UTF-8\" ?>"); sb.AppendFormat("<element>{0}</element>", SOMEVALUE); Does anyone have first-hand experience?
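    For comparison, here is a minimal sketch of both approaches side by side (illustrative only; measure with your own payloads). The XmlWriter version handles the declaration and character escaping for you, which is the usual trade-off against raw string concatenation:

    ```csharp
    using System.Text;
    using System.Xml;

    public static class XmlSamples
    {
        public static string WithWriter(string value)
        {
            var sb = new StringBuilder();
            var settings = new XmlWriterSettings { OmitXmlDeclaration = false };
            using (var writer = XmlWriter.Create(sb, settings))
            {
                writer.WriteStartElement("element");
                writer.WriteString(value);   // "<", "&" etc. are escaped automatically
                writer.WriteEndElement();
            }
            return sb.ToString();
        }

        public static string WithStringBuilder(string value)
        {
            var sb = new StringBuilder("<?xml version=\"1.0\" encoding=\"UTF-8\" ?>");
            sb.AppendFormat("<element>{0}</element>", value); // caller must escape value itself
            return sb.ToString();
        }
    }
    ```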

    Read the article

  • Flex: View Stack Navigator

    - by Deena
    Hi, I have a component MXML file in which I have a ViewStack. On click of a button I navigate to the first child; now I need to navigate to the second child on click of a button present in the second child. All the children are component files included within the ViewStack. How can this be done? Sample code is below: --------------------Application.mxml--------------------- <mx:Canvas xmlns:mx="http://www.adobe.com/2006/mxml" > <mx:Script> <![CDATA[ private function loadScreen():void { navigationViewStack.selectedChild=id_offering; } ]]> </mx:Script> <mx:Button label="Save" click="loadScreen();"/> </mx:Canvas> <mx:ViewStack id="navigationViewStack" width="100%" height="100%"> <components:dashboard id="id_dashboard" label="Dashboard" /> <components:offering id="id_offering" label="Offering" /> <components:IssueSec id="id_issueSec" label = "Issues"/> </mx:ViewStack> -------------------------Ends-------------------------------------- Now in my offering.mxml file, if I try to access navigationViewStack I get an error stating 'Access of undefined property navigationViewStack'. Help me on how to access the view stack from my component MXML file. Thanks! Cheers, Deena

    Read the article

  • Running virtual machines: Linux vs Windows 7

    - by vikp
    Hi, I have tried running a Windows XP development virtual machine under Windows 7 and the performance was dreadful. I'm considering installing Linux and running the virtual machine from Linux, but I'm not sure whether I can expect any performance gains. It's a 2.4GHz Core 2 Duo machine with 4GB RAM and a 5400 RPM HDD. Can somebody please recommend a very cut-down version of Linux that can run VMware Player and isn't resource-hungry? Thank you

    Read the article

  • String literals vs constants for Session[...] dictionary keys

    - by FreshCode
    Session[Constant] vs Session["String Literal"] performance: I'm retrieving user-specific data like ViewData["CartItems"] = Session["CartItems"]; with string literals as keys on every request. Should I be using constants for this? If yes, how should I go about implementing frequently used string literals, and will it significantly affect performance on a high-traffic site? The related question does not address ASP.NET MVC or Session.
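    As a rough sketch of the constants approach (the key name comes from the question; the class name is made up): the performance difference between a literal and a const string is negligible, since both are compile-time interned strings, so the real win is avoiding typos and scattered magic strings.

    ```csharp
    public static class SessionKeys
    {
        public const string CartItems = "CartItems";
    }

    // Usage in a controller action:
    // ViewData[SessionKeys.CartItems] = Session[SessionKeys.CartItems];
    ```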

    Read the article

  • Linux memory fragmentation

    - by Raghu
    Hi all, is there a way to detect memory fragmentation on Linux? I ask because on some long-running servers I have noticed performance degradation, and only after I restart the process do I see better performance. I noticed it more when using Linux huge page support: are huge pages in Linux more prone to fragmentation? I have looked at /proc/buddyinfo in particular. I want to know whether there are any better ways to look at it.

    Read the article

  • Is using SharePoint as a intranet/extranet portal a good idea?

    - by Rob
    I work for a Fortune 500 company in IT and we have developed many systems/applications to do a variety of things. We need some commonality across these applications and a better portal/dashboard/landing page for them. Our customers and employees would log into this portal and see all the "things" they can do, which then link to their own applications. The portal could maybe just iframe each application to keep branding and navigation consistent. We are trying to decide whether to use SharePoint 2007 or 2010 for this, or to develop a portal/dashboard of sorts in-house. We would like this portal to look and feel very branded to our needs and really not even feel like it's using SharePoint (if needed). An example is to provide our own menu control that drives the navigation if needed. Does anyone have any pros/cons for using SharePoint in such a way? Any advice on implementation (e.g. use 2010, much easier to customize the design than 2007, etc.)?

    Read the article
