Search Results

Search found 32254 results on 1291 pages for 'model view presenter'.


  • Microsoft's new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year. Microsoft literally buys computers by the truckload. From what I understand, it's a typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don't hold me to that number). Microsoft has been plugging away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn't surprise me that I was recently sent an executive email concerning Microsoft's new technical computing initiative. I find it to be a great marketing idea with actual substance behind it. From the programmer academic perspective, in college we dreamed about this type of processing power. This has decades of computer science theory behind it.

    A copy of the email received (note that I almost deleted this email, thinking it was spam due to its length):

    We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly.

    Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland.

    To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And doing this requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges.

    We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves.
    The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980s, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group.

    Enabling more people to make better predictions

    We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers:

    Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day.

    Aerospace engineering firm a.i. solutions, Inc. needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights.
    To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars.

    Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections.

    Our Technical Computing direction

    Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas:

    Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed - reliably, consistently and quickly.

    Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud.

    Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology.

    Thinking bigger

    There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible:

    Better predictions to help improve the understanding of pandemics, contagion and global health trends.
    Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates.

    More accurate prediction of natural disasters and their impact to develop more effective emergency response plans.

    With an ambitious charter in hand, this new team is ready to build on our progress to date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better.

    Bob

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan

    The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:

    - allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database
    - native "NoSQL" access to the storage layer without going first through SQL transformations and parsing.

    Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation

    The architecture and user interface of this connector differ from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server is between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, query the data, and how to get started.

    Connecting to the database

    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

    var nosql = require("mysql-js");
    var dbProperties = {
        "implementation" : "ndb",
        "database" : "test"
    };
    nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data

    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

    create table employee (
      id int not null primary key,
      name varchar(32),
      salary float
    ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function.

    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        ... error handling
      }
      session.find('employee', 0, onData);
    };

    var onData = function(err, data) {
      if (err) {
        console.log(err);
        ...
    error handling
      }
      console.log('Found: ', JSON.stringify(data));
      ... use data in application
    };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses, by specifying an annotation, and pass your domain model to the find function.

    var annotations = new nosql.Annotations();
    function Employee(id, name, salary) {
      this.id = id;
      this.name = name;
      this.salary = salary;
      this.giveRaise = function(percent) {
        this.salary *= percent;
      };
    }
    annotations.mapClass(Employee, {'table' : 'employee'});

    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        ... error handling
      }
      session.find(Employee, 0, onData);
    };

    Updating data

    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database, using the update function.

    var onData = function(err, emp) {
      if (err) {
        console.log(err);
        ... error handling
      }
      console.log('Found: ', JSON.stringify(emp));
      emp.giveRaise(0.12); // gee, thanks!
      session.update(emp); // oops, session is out of scope here
    };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function. But the connector will keep track of variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        ... error handling
      }
      session.find(Employee, 0, onData, session);
    };

    var onData = function(err, emp, session) {
      if (err) {
        console.log(err);
        ... error handling
      }
      console.log('Found: ', JSON.stringify(emp));
      emp.giveRaise(0.12); // gee, thanks!
      session.update(emp, onUpdate); // session is now in scope
    };

    var onUpdate = function(err, emp) {
      if (err) {
        console.log(err);
        ... error handling
      }
    };

    Inserting data

    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

    var onSession = function(err, session) {
      var data = new Employee(999, 'Mat Keep', 20000000);
      session.persist(data, onInsert);
    };

    Deleting data

    To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove. Only the key field is relevant.

    var onSession = function(err, session) {
      var key = new Employee(999);
      session.remove(key, onDelete);
    };

    More extensive queries

    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate

    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here
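
    As an aside on the closure approach mentioned above -- a minimal sketch, not part of the original tutorial, assuming the same mysql-js API and the Employee constructor shown earlier:

    // Capture `session` in an enclosing scope instead of passing it through
    // the connector's extra-parameter mechanism.
    var onSession = function(err, session) {
      if (err) {
        console.log(err);
        return;
      }
      // The inner callback closes over `session`, so it stays in scope.
      var onData = function(err, emp) {
        if (err) {
          console.log(err);
          return;
        }
        emp.giveRaise(0.12);
        session.update(emp, onUpdate); // `session` captured by the closure
      };
      session.find(Employee, 0, onData);
    };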

    Read the article

  • Which Ubuntu version to use on a MAXDATA Eco 3100X laptop, given this system info?

    - by Erjet Malaj
    I am speaking as a new Ubuntu user. I have just installed Ubuntu 10.04 on my laptop, but it is running very slow... So I am here to ask you a question: which Ubuntu version can fit my laptop, a MAXDATA Eco 3100x? My laptop system information is:

    SYSTEM INFORMATION Running Ubuntu Linux, the Ubuntu 10.04 (lucid) release. GNOME: 2.30.2 (Ubuntu 2010-06-25) Kernel version: 2.6.32-40-generic (#87-Ubuntu SMP Mon Mar 5 20:26:31 UTC 2012) GCC: 4.4.3 (i486-linux-gnu) Xorg: unknown (25 February 2012 06:59:39AM) Hostname: lotus-laptop Uptime: 0 days 1 h 6 min

    CPU INFORMATION GenuineIntel, Intel(R) Pentium(R) 4 CPU 2.40GHz Number of CPUs: 1 CPU clock currently at 2390.561 MHz with 512 KB cache Numbering: family(15) model(2) stepping(7) Bogomips: 4781.12 Flags: fpu vme de pse tsc msr pae mce cx8 mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe up pebs bts cid

    MEMORY INFORMATION Total memory: 228 MB Total swap: 455 MB

    STORAGE INFORMATION SCSI device - scsi0 Vendor: ATA Model: IBM-DJSA-210 SCSI device - scsi1 Vendor: TOSHIBA Model: DVD-ROM SD-C2502

    HARDWARE INFORMATION MOTHERBOARD Host bridge Silicon Integrated Systems [SiS] 650/M650 Host (rev 11) PCI bridge(s) Silicon Integrated Systems [SiS] Virtual PCI-to-PCI bridge (AGP) Silicon Integrated Systems [SiS] Virtual PCI-to-PCI bridge (AGP) USB controller(s) Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10) Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10) Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10) Silicon Integrated Systems [SiS] USB 2.0 Controller (prog-if 20) Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10) Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10) Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10) Silicon Integrated Systems [SiS] USB 2.0 Controller (prog-if 20) ISA bridge Silicon Integrated Systems [SiS] SiS962 [MuTIOL Media IO] (rev 04) IDE interface Silicon Integrated Systems [SiS] 5513 [IDE] (prog-if 80 [Master]) Subsystem: Silicon Integrated Systems [SiS] 5513 [IDE]

    GRAPHIC CARD VGA controller Silicon Integrated Systems [SiS] 65x/M650/740 PCI/AGP VGA Display Adapter Subsystem: Uniwill Computer Corp Device 5103

    SOUND CARD Multimedia controller Silicon Integrated Systems [SiS] AC'97 Sound Controller (rev a0) Subsystem: Uniwill Computer Corp Device 5203

    NETWORK Ethernet controller Silicon Integrated Systems [SiS] SiS900 PCI Fast Ethernet (rev 91) Subsystem: Uniwill Computer Corp Device 5002 Modem Silicon Integrated Systems [SiS] AC'97 Modem Controller (rev a0) Subsystem: Uniwill Computer Corp Device 4003

    Thank you ASAP. :-) E

    Read the article

  • Skewed: a rotating camera in a simple CPU-based voxel raycaster/raytracer

    - by voxelizr
    TL;DR -- in my first simple software voxel raycaster, I cannot get camera rotations to work, seemingly correct matrices notwithstanding. The result is skewed: like a flat rendering, correctly rotated, however distorted and without depth. (While axis-aligned, i.e. unrotated, depth and parallax are as expected.)

    I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU based until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible. Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny. So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.

    Screenshot #1: correct depth when the camera is still strictly axis-aligned, i.e. un-rotated.

    Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Streetview: even though I have a volumetric world representation, it seems -- no matter what I try -- like I would first create a rendering from the "front view", then rotate that flat rendering according to camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary and is error-prone. Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks:

    Screenshot #2: camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screenshot #1 is not visible in this rotation, yet by now "it really should"!

    Now of course I'm aware of this: in a simple axis-aligned-no-rotation setup like I had in the beginning, the ray simply traverses in small steps the positive z-direction, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should be simply transformed by the proper rotation matrix, right? So for forward-traversal the Z-step gets a bit smaller the more the cam rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of the many matrices that I experimented with, nor my experiments with matrix-less hardcoded verbose sin/cos calculations, really get this part right.
    Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:

    fx and fy: pixel positions x and y
    rayPos: vec3 for the ray starting position in world-space (calculated as below)
    rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
    rayStep: a temporary vec3
    camPos: vec3 for the camera position in world space
    camRad: vec3 for camera rotation in radians
    pmat: typical perspective projection matrix

    The algorithm / pseudocode:

    // 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
    rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0

    // 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
    rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))

    // 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also
    // the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
    rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist

    // 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
    rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))

    // 5: set up direction vector from still-origin-based-ray-position-off-rotated-view-plane
    // plus rotated-zstep-vector
    rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z

    // 6: perspective projection
    rayDir.Normalize()
    rayDir.MultMat(pmat)

    // 7: before traversal, the ray starting position has to be transformed from origin-relative
    // to campos-relative
    rayPos.Add(camPos)

    I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
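
    For reference, this is the transformation the text describes -- a minimal sketch in Go, matching the pseudocode's syntax; the vec3 type and field names here are assumptions for illustration, not the poster's actual types:

    package main

    import "math"

    // vec3 is an assumed stand-in for the poster's vector type.
    type vec3 struct{ X, Y, Z float64 }

    // rotateY rotates v around the Y axis by rad radians. An axis-aligned
    // step (0, 0, z) becomes (z*sin(rad), 0, z*cos(rad)): the Z step shrinks
    // as the X step picks up the remainder, which is the behavior described above.
    func rotateY(v vec3, rad float64) vec3 {
        s, c := math.Sin(rad), math.Cos(rad)
        return vec3{
            X: v.X*c + v.Z*s,
            Y: v.Y,
            Z: -v.X*s + v.Z*c,
        }
    }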

    Read the article

  • Customize SharePoint list using InfoPath2010 form Part4

    - by ybbest
    Customize SharePoint list using InfoPath2010 form Part1
    Customize SharePoint list using InfoPath2010 form Part2
    Customize SharePoint list using InfoPath2010 form Part3

    In this post, I'd like to show you how to create print functionality in InfoPath for a SharePoint list. The print functionality is provided out of the box in an InfoPath form library; however, it is not available in a SharePoint list. Here are the steps to create the print functionality. You can download the new form here.

    1. Create a print page in the list by first copying and pasting displayifs.aspx and renaming the file to Printifs.aspx.

    2. Open the page in SharePoint Designer and copy the following JavaScript into the PlaceHolderTitleAreaClass ContentPlaceHolder.

    <script type="text/javascript">
    $(document).ready(function(){
    $("[id^='Ribbon']").hide();
    $(".s4-title").hide();
    $("[id='s4-leftpanel']").hide();
    $("[id='s4-ribbonrow']").hide();
    $("[id='s4-titlerow']").hide();
    $("[id='s4-titlerow']").css("height", "0px");
    $("body").css("background-color", "white");
    $("body").css("zoom", "135%");
    $("[id='MSO_ContentTable']").css("margin-left", "0px");
    $("[id='MT-BodyContent']").css("width", "900px");
    $(".MT-BodyArea").css("width", "900px");
    $("[id='MT-Layout']").css("width", "900px");
    $(".ms-bodyareacell").css("width", "900px");
    $(".s4-wpTopTable").css("border", "none");
    $("[id$='XmlFormView']").css("margin-left", "-80px");
    $("body").css("margin-top", "-30px");
    $(":contains('CAPEX')").css("border", "5px solid #FFCC00");
    window.print();
    });
    </script>

    3. Open the InfoPath form for the list and create a field called PrintLink.

    4. Set the default value of PrintLink so that it points to the print page just created, with the query string id. You can download the formula for the default value here.

    5. Add a new image that looks like a Print button on the display view, so that we can set the URL to the PrintLink field. (The reason I did not use a button is that you cannot set the navigate URL for a button.)

    6. Set the URL of the image to the PrintLink field.

    7. Next, create the print view.

    8. Copy the contents from the display view to the print view.

    9. Finally, go to Printifs.aspx and edit the InfoPath web part to set the view to PrintView.

    10. Republish your form and you will see the form as shown below.

    11. If you click the Print button, you will see the print page and print dialog; you can also add the company logo in the print page using CSS as well.

    12. To deploy the customization, you can use the backup and restore content database approach; you can get more details from my previous blog post here.
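
    For step 4, a hypothetical example of what such a default-value formula could look like -- the site and list URL and the ID field reference here are placeholders, not the ones from the original downloadable formula:

    concat("http://yoursite/Lists/YourList/Printifs.aspx?ID=", ID)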

    Read the article

  • DopeWarz 2010

    - by Theo Moore
    A few (6?) weeks ago, I started a project that I've always wanted to do. I am doing a re-write of the old VB6 game DopeWars with a partner in crime. I loved that game and spent many, many hours wasting time playing it years ago. I liked it so much, I even registered it (it was $5... even then, that wasn't much to spend). The VB6 version itself was a port of the old DOS game DrugWars (never got to play that). I needed a game project to work on, as it's been far too long since I've done any game work. There is a surprising amount of logic built into the game (there is, the way I'm writing it, anyway) with what is really a minimal interface. My design goal was to have an object model that could be easily adapted to any interface, and so far, I've managed to do that. I am even considering a web-based version that could run via Facebook (no clue how to do that... yet). I've enlisted the help of one of my DBA buddies to work on the interface while I am working on the object model. So far, this arrangement has gone well. The logical separation of concerns allows for us to collaborate easily. Once we get to the Facebook step, it will be great to have a DBA "on staff" to help get that part off the ground. The object model is probably about 60% complete with quite a bit of testing to go. More on this as we go....

    Read the article

  • Best Design Pattern for Coupling User Interface Components and Data Structures

    - by szahn
    I have a Windows desktop application with a tree view. Due to the lack of a sound data-binding solution for a tree view, I've implemented my own layer of abstraction on it to bind nodes to my own data structure. The requirements are as follows: populate a tree view with nodes that resemble fields in a data structure; when a node is clicked, display the appropriate control to modify the value of that property in the instance of the data structure. The tree view is populated with instances of custom TreeNode classes that inherit from TreeNode. The responsibility of each custom TreeNode class is to (1) format the node text to represent the name and value of the associated field in my data structure, (2) return the control used to modify the property value, (3) get the value of the field from the control, and (4) set the field's value from the control. My custom TreeNode implementation has a property called "Control" which retrieves the proper custom control, typed as the base control. The control instance is stored in the custom node and instantiated upon first retrieval. So each custom node has an associated custom control which extends a base abstract control class.

    Example TreeNode implementation:

    //The Tree Node Base Class
    public abstract class TreeViewNodeBase : TreeNode
    {
        public abstract CustomControlBase Control { get; }

        public TreeViewNodeBase(ExtractionField field)
        {
            UpdateControl(field);
        }

        public virtual void UpdateControl(ExtractionField field)
        {
            Control.UpdateControl(field);
            UpdateCaption(FormatValueForCaption());
        }

        public virtual void SaveChanges(ExtractionField field)
        {
            Control.SaveChanges(field);
            UpdateCaption(FormatValueForCaption());
        }

        public virtual string FormatValueForCaption()
        {
            return Control.FormatValueForCaption();
        }

        public virtual void UpdateCaption(string newValue)
        {
            this.Text = Caption;
            this.LongText = newValue;
        }
    }

    //The tree node implementation class
    public class ExtractionTypeNode : TreeViewNodeBase
    {
        private CustomDropDownControl control;

        public override CustomControlBase Control
        {
            get
            {
                if (control == null)
                {
                    control = new CustomDropDownControl();
                    control.label1.Text = Caption;
                    control.comboBox1.Items.Clear();
                    control.comboBox1.Items.AddRange(
                        Enum.GetNames(typeof(ExtractionField.ExtractionType)));
                }
                return control;
            }
        }

        public ExtractionTypeNode(ExtractionField field) : base(field) { }
    }

    //The custom control base class
    public abstract class CustomControlBase : UserControl
    {
        public abstract void UpdateControl(ExtractionField field);
        public abstract void SaveChanges(ExtractionField field);
        public abstract string FormatValueForCaption();
    }

    //The custom control generic implementation (view)
    public partial class CustomDropDownControl : CustomControlBase
    {
        public CustomDropDownControl()
        {
            InitializeComponent();
        }

        public override void UpdateControl(ExtractionField field)
        {
            //Nothing to do here
        }

        public override void SaveChanges(ExtractionField field)
        {
            //Nothing to do here
        }

        public override string FormatValueForCaption()
        {
            //Nothing to do here
            return string.Empty;
        }
    }

    //The custom control specific implementation
    public class FieldExtractionTypeControl : CustomDropDownControl
    {
        public override void UpdateControl(ExtractionField field)
        {
            comboBox1.SelectedIndex = comboBox1.FindStringExact(field.Extraction.ToString());
        }

        public override void SaveChanges(ExtractionField field)
        {
            field.Extraction = (ExtractionField.ExtractionType)Enum.Parse(
                typeof(ExtractionField.ExtractionType), comboBox1.SelectedItem.ToString());
        }

        public override string FormatValueForCaption()
        {
            return string.Empty;
        }
    }

    The problem is that I have "generic" controls which inherit from CustomControlBase. These are just "views" with no logic. Then I have specific controls that inherit from the generic controls. I don't have any functions or business logic in the generic controls because the specific controls should govern how data is associated with the data structure. What is the best design pattern for this?
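
    One direction worth weighing -- offered only as a hedged sketch, not a definitive answer -- is a Model-View-Presenter arrangement, where the node and control become passive views and a presenter owns the field-to-control mapping. The interface and class names below are invented for illustration:

    // Hypothetical MVP-style decoupling: the presenter mediates between a
    // view abstraction and ExtractionField, so neither the TreeNode nor the
    // UserControl subclasses carry binding logic.
    public interface IFieldView
    {
        void ShowValue(string value);     // push model state into the UI
        event EventHandler ValueChanged;  // raised when the user edits
        string CurrentValue { get; }
    }

    public class ExtractionFieldPresenter
    {
        private readonly IFieldView view;
        private readonly ExtractionField field;

        public ExtractionFieldPresenter(IFieldView view, ExtractionField field)
        {
            this.view = view;
            this.field = field;
            view.ValueChanged += (s, e) => SaveChanges();
            view.ShowValue(field.Extraction.ToString());
        }

        private void SaveChanges()
        {
            field.Extraction = (ExtractionField.ExtractionType)Enum.Parse(
                typeof(ExtractionField.ExtractionType), view.CurrentValue);
        }
    }

    Under this sketch, CustomDropDownControl could implement IFieldView by raising ValueChanged from comboBox1's SelectedIndexChanged event, and the tree node would only hold a presenter instead of format/save logic.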

    Read the article

  • Securing Flexfield Value Sets in EBS 12.2

    - by Sara Woodhull
    Release 12.2 includes a new feature: flexfield value set security. This new feature gives you additional options for ensuring that different administrators have non-overlapping responsibilities, which in turn provides checks and balances for sensitive activities. Separation of Duties (SoD) is one of the key concepts of internal controls and is a requirement for many regulations, including:

    Sarbanes-Oxley (SOX) Act
    Health Insurance Portability and Accountability Act (HIPAA)
    European Union Data Protection Directive

    Its primary intent is to put barriers in place to prevent fraud or theft by an individual acting alone. Implementing Separation of Duties requires minimizing the possibility that users could modify data across application functions where the users should not normally have access. For flexfields and report parameters in Oracle E-Business Suite, values in value sets can affect functionality such as the rollup of accounting data, job grades used at a company, and so on. Controlling access to the creation or modification of value set values can be an important piece of implementing Separation of Duties in an organization.

    New Flexfield Value Set Security feature

    Flexfield value set security allows system administrators to restrict users from viewing, adding or updating values in specific value sets. Value set security enables role-based separation of duties for key flexfields, descriptive flexfields, and report parameters. For example, you can set up value set security such that certain users can view or insert values for any value set used by the Accounting Flexfield but no other value sets, while other users can view and update values for value sets used for any flexfields in Oracle HRMS. You can also segregate access by Operating Unit as well as by role or responsibility. Value set security uses a combination of data security and role-based access control in Oracle User Management. Flexfield value set security provides a level of security that is different from the previously-existing and similarly-named features in Oracle E-Business Suite:

    Function security controls whether a user has access to a specific page or form, as well as what operations the user can do in that screen.

    Flexfield value security controls what values a user can enter into a flexfield segment or report parameter (by responsibility) during routine data entry in many transaction screens across Oracle E-Business Suite.

    Flexfield value set security (this feature, new in Release 12.2) controls who can view, insert, or update values for a particular value set (by flexfield, report, or value set) in the Segment Values form (FNDFFMSV).

    The effect of flexfield value set security is that a user of the Segment Values form will only be able to view those value sets for which the user has been granted access. Further, the user will be able to insert or update/disable values in that value set if the user has been granted privileges to do so. Flexfield value set security affects independent, dependent, and certain table-validated value sets for flexfields and report parameters.

    Initial State of the Feature upon Upgrade

    Because this is a new security feature, it is turned on by default. When you initially install or upgrade to Release 12.2.2, no users are allowed to view, insert or update any value set values (users may even think that their values are missing or invalid because they cannot see the values).
    You must explicitly set up access for specific users by enabling appropriate grants and roles for those users. We recommend using flexfield value set security as part of a comprehensive Separation of Duties strategy. However, if you choose not to implement flexfield value set security upon upgrading to or installing Release 12.2, you can enable backwards compatibility--users can access any value sets if they have access to the Values form--after you upgrade. The feature does not affect day-to-day transactions that use flexfields. However, you must either set up specific grants and roles or enable backwards compatibility before users can create new values or update or disable existing values. For more information, see:

    Release 12.2 Flexfield Value Set Security Documentation Update for Patch 17305947:R12.FND.C (Document 1589204.1)
    R12.2 TOI: Implement and Use Application Object Library (AOL) - Flexfields Security and Separation of Duties for Value Sets (recorded training)

    Read the article

  • Server side C# MVC with AngularJS

    - by Ryan Langton
    I like using .NET MVC and have used it quite a bit in the past. I have also built SPAs using AngularJS with no page loads other than the initial one. I think I want to use a blend of the two in many cases: set up the application initially using .NET MVC and set up routing server side, then set up each page as a mini-SPA. These pages can be quite complex with a lot of client-side functionality. My confusion comes in how to handle the model-view binding using both .NET MVC and the AngularJS scope. Do I avoid the use of Razor HTML helpers? The HTML helpers create the markup behind the scenes, so it could get messy trying to add AngularJS tags using the helpers. So how do I add ng-model, for example, to @Html.TextBoxFor(x => x.Name)? Also, ng-repeat wouldn't work because I'm not loading my model into the scope, right? I'd still want to use a C# foreach loop? Where do I draw the lines of separation? Has anyone gotten .NET MVC and AngularJS to play nicely together, and what are your architectural suggestions to do so?
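
    For the @Html.TextBoxFor case specifically, the helpers do accept extra HTML attributes, and MVC converts underscores in those attribute names to dashes, so something like the following should emit an ng-model attribute (a sketch; "name" is an assumed scope property):

    @Html.TextBoxFor(x => x.Name, new { ng_model = "name" })
    @* renders roughly as: <input id="Name" name="Name" type="text" ng-model="name" /> *@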

    Read the article

  • Should we be able to deploy a single package to the SSIS Catalog?

    - by jamiet
    My buddy Sutha Thiru sent me an email recently asking about my opinion on a particular nuance of the project deployment model in SQL Server Integration Services (SSIS) 2012, and I'd like to share my response as I think it warrants a wider discussion. Sutha asked:

    Jamie, what is your take on this? http://www.mattmasson.com/index.php/2012/07/can-i-deploy-a-single-ssis-package-from-my-project-to-the-ssis-catalog/ Overnight I was talking to Matt, who confirmed that they have no plans to change the deployment model. For example, if we have the following scenario, how do we deploy?

    Sprint 1: Pkg1, 2 & 3 have been developed and deployed to UAT. Once signed off, they have been deployed to Live.

    Sprint 2: Pkg 4 & 5 have been developed. During this time users raised a bug on Pkg2. We want to make the change to Pkg2 and deploy that to UAT and eventually to LIVE without releasing Pkg 4 & 5. How do we do it? Matt pointed me to his blog entry, which I have seen before: http://www.mattmasson.com/index.php/2012/02/thoughts-on-branching-strategies-for-ssis-projects/ Thanks, Sutha

    My response:

    Personally, even though I've experienced the exact problem you just outlined, I agree with the current approach. I steadfastly believe that there should not be a way for an unscrupulous developer to slide in a new version of a package under the covers. Deploying .ispac files brings a degree of rigour to your operational processes. Yes, that means that we as SSIS developers are going to have to get better at using source control and branching properly, but that is no bad thing in my opinion. Claiming to be proper "developers" is a bit of a cheap claim if we don't even do the fundamentals correctly. I would be interested in the thoughts of others who have used the project deployment model. Do you agree with my point of view? @Jamiet

    Read the article

  • ArchBeat Link-o-Rama for October 29, 2013

    - by OTN ArchBeat
    Exceptions Handling and Notifications in ODI | Christophe Dupupet
    Oracle Fusion Middleware A-Team director Christophe Dupupet reviews the techniques that are available in Oracle Data Integrator to guarantee that the appropriate individuals are notified in the event that ODI processes are impacted by network outages or other mishaps.

    Tech Article: SOA in Real Life: Mobile Solutions
    The latest article in the Industrial SOA series looks at mobile computing and how companies are developing SOA to go.

    Oracle Coherence, Split-Brain and Recovery Protocols In Detail | Ricardo Ferreira
    Ricardo Ferreira's article "provides a high level conceptual overview of Split-Brain scenarios in distributed systems," focusing on a "specific example of cluster communication failure and recovery in Oracle Coherence."

    WebLogic & FMW Provisioning update | Edwin Biemond
    "Provisioning was a hot topic on Oracle OpenWorld 2013," says Oracle ACE Edwin Biemond. His latest blog post discusses what is now possible with WebLogic and Fusion Middleware, and looks at what might be possible in the future.

    Reusing and Extending ADF BC Entities from Common Model | Andrejus Baranovskis
    Oracle ACE Director Andrejus Baranovskis' post is about "ADF architecture and better application structuring with EO reuse from a common model." Andrejus describes "how to implement additional requirements to common model in extended ADF BC Entities."

    Thought for the Day
    "I work hard, I work late, I have nothing on my conscience. When I go to bed, I sleep." — Ellen Johnson Sirleaf, 24th and current President of Liberia (Born 29 October 1938)
    Source: brainyquote.com

    Read the article

  • 3ds Max error dialog: "Instancing not supported for this action"

    - by monsto
    "Instancing not supported for this action” is the dialog I get. My favorite part is that, according to google and yahoo, apparently i am the only person in the history of mankind to experience these words together in this order, let along get this message from Max. Thanks, autodesk, for putting this dialog in special for me! So I’ve created my model (nws) and was setting up a Skin Wrap. Selected "Face Deformation", added the base-skin for weight, checked “weight all points”. . . clicked “convert to skin” and got that dialog. My model doesn’t have a whole lot of elements to it, I had a left and right appendage that came from a base model (skyrim). so, i did a clonecopy of all 3 of my elements, just to be sure nothing was instanced… and VOILA! Same error message. the only other elements are an imported NIF mesh and skeleton. Any idea where this is coming from or how I can make it go away so that I can export my mesh?

    Read the article

  • A good substitute for ASMX web service methods, but not a general handler

    - by Saeed Neamati
    The thing I like best about ASP.NET MVC is that you can directly call a server method (called an action) from the client. This is so convenient, and so straightforward, that I would really like to implement such a model in ASP.NET WebForms too. However, in ASP.NET WebForms, to call a server method from the client, you should either use Page Methods or Web Services, both of which use SOAP as their communication protocol (though JSON can also be used). There is also another substitute, which is using Generic Handlers. The problem with them, however, is that a separate Generic Handler should be written for each server method. In other words, each Generic Handler works like a single method. Is there any other way to imitate the MVC model in ASP.NET WebForms? Please note that I can't change to the MVC platform right now, because the project at hand is a big project and we don't have the required resources and time to change our platform. What we seek is a simple MVC-style model for our AJAX calls. A problem that we have with Web Services is the known problem of SoapException, and we're not interested in creating custom SoapExtensions.
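
    To make the one-handler-per-method limitation concrete, here is a minimal sketch of a single Generic Handler acting as one server method, using the standard IHttpHandler contract; the handler, repository and field names are invented for illustration:

    // GetUser.ashx -> one handler standing in for one "action",
    // returning a single user as JSON.
    public class GetUserHandler : System.Web.IHttpHandler
    {
        public void ProcessRequest(System.Web.HttpContext context)
        {
            int id = int.Parse(context.Request.QueryString["id"]);
            var user = UserRepository.Find(id); // hypothetical data access
            context.Response.ContentType = "application/json";
            context.Response.Write(
                new System.Web.Script.Serialization.JavaScriptSerializer().Serialize(user));
        }

        public bool IsReusable { get { return false; } }
    }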

    Read the article

  • SQLAuthority News – Download Whitepaper – Choosing a Tabular or Multidimensional Modeling Experience in SQL Server 2012 Analysis Services

    - by pinaldave
    Data modeling is the most important task for any BI professional. As a matter of fact, the biggest challenge is organizing disparate data into an analytic model that effectively and efficiently supports reporting and analysis. SQL Server 2012 introduces the BI Semantic Model (BISM), a single model that can support a broad range of reporting and analysis while blending two Analysis Services modeling experiences behind the scenes.

    Multidimensional modeling - enables BI professionals to create sophisticated multidimensional cubes using traditional online analytical processing (OLAP).

    Tabular modeling - provides self-service data modeling capabilities to business and data analysts.

    As data modeling is evolving and business needs are growing, new technologies and tools are emerging to help end users make the necessary adjustments to their reporting and analysis needs. This white paper will provide practical guidance to help you decide which SQL Server 2012 Analysis Services modeling experience to use - tabular or multidimensional. Do let me know your opinion in a comment. In simple words - I would like to know when you would use tabular modeling and when multidimensional modeling.

    Download Choosing a Tabular or Multidimensional Modeling Experience in SQL Server 2012 Analysis Services

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • Distinction between API and frontend-backend

    - by Jason
    I'm trying to write a "standard" business web site. By "standard", I mean this site runs the usual HTML5, CSS and JavaScript for the front-end, a back-end (to process stuff), and runs MySQL for the database. It's a basic CRUD site: the front-end just makes pretty whatever the database has in store; the back-end writes to the database whatever the user enters and does some processing. Just like most sites out there. In creating my GitHub repositories to begin coding, I've realized I don't understand the distinction between the front-end, the back-end, and the API. Another way of phrasing my question is: where does the API come into this picture? I'm going to list some more details and then the questions I have - hopefully this gives you guys a better idea of what my actual question is, because I'm so confused that I don't know the specific question to ask.

    Some more details:

    I'd like to try the Model-View-Controller pattern. I don't know if this changes the question/answer.
    The API will be RESTful.
    I'd like my back-end to use my own API instead of allowing the back-end to cheat and call special queries. I think this style is more consistent.

    My questions:

    Does the front-end call the back-end, which calls the API? Or does the front-end just call the API instead of calling the back-end?
    Does the back-end just execute an API call, with the API returning control to the back-end (where the back-end acts as the ultimate controller, delegating tasks)?

    Long and detailed answers explaining the role of the API alongside the front-end and back-end are encouraged. If the answer depends on the model of programming (models other than the Model-View-Controller pattern), please describe these other ways of thinking of the API. Thanks. I'm very confused.
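
    One common arrangement, sketched here as one possibility rather than the answer: the front-end JavaScript calls the RESTful API directly, and the "back-end" is simply the code behind those API endpoints. The URL, element id and field name below are placeholders, and jQuery is assumed:

    // Front-end: ask the API for a resource, then render it.
    $.getJSON('/api/customers/42', function (customer) {
        $('#customerName').text(customer.name);
    });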

    Read the article

  • Workaround: building FBX in XNA raises OutOfMemoryException

    - by Vitus
    If you try to add a large FBX 3D model to an XNA project and build it, you can get an OutOfMemoryException build error like the following:

    Error 1 Building content threw OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.    at System.Collections.Generic.List`1.set_Capacity(Int32 value)    at System.Collections.Generic.List`1.EnsureCapacity(Int32 min)    at System.Collections.Generic.List`1.InsertRange(Int32 index, IEnumerable`1 collection)    at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexChannel`1.InsertRange(Int32 index, Int32 count)    at Microsoft.Xna.Framework.Content.Pipeline.Graphics.VertexContent.InsertRange(Int32 index, IEnumerable`1 positionIndexCollection)    at Microsoft.Xna.Framework.Content.Pipeline.Graphics.MeshBuilder.AddTriangleVertex(Int32 indexIntoVertexCollection)    at Microsoft.Xna.Framework.Content.Pipeline.MeshConverter.FillNodeWithInfoFromMesh(KFbxNode* fbxNode, String name, KFbxGeometryConverter* geometryConverter)    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessInformationInNode(KFbxNode* fbxNode, String name, Boolean* partOfMainSkeleton, Boolean* warnIfBoneButNotChild)    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.ProcessNode(ValueType parentAbsoluteTransform, NodeContent potentialParent, KFbxNode* fbxNode, Boolean partOfMainSkeleton, Boolean warnIfBoneButNotChild)    at Microsoft.Xna.Framework.Content.Pipeline.FbxImporter.Import(String filename, ContentImporterContext context)    at Microsoft.Xna.Framework.Content.Pipeline.ContentImporter`1.Microsoft.Xna.Framework.Content.Pipeline.IContentImporter.Import(String filename, ContentImporterContext context)    //additional calls here ...

    My desktop PC has 8 GB of RAM, and Visual Studio's process devenv.exe uses under 2 GB of it during the build process (about 3.5-4 GB of RAM is always free). It's obvious that VS can't address more than 2 GB of RAM, and when that limit is hit, the build process fails. The OS on my PC is Win x64, so I "charge" devenv.exe by using the editbin.exe utility - in the VS Command Prompt I run the following:

    editbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /LARGEADDRESSAWARE

    This command edits the image to indicate that the application can handle addresses larger than 2 gigabytes. After that, the FBX file builds successfully! Of course, you must put the proper path to devenv.exe, depending on your installation path. If you are on Win x86, you need to do an additional action - more info here.

    P.S.: although now you can build bigger files than usual, keep in mind that XNA has some restrictions on vertex buffer size etc., depending on your current XNA project profile (Reach or HiDef). And if your model's vertex buffer size is more than 64 MB (with the Reach profile), that model can't be built and raises an error.

    Read the article

  • Segmentation fault while switching QCompleter for QLineEdit [on hold]

    - by san
    I have a QLineEdit that uses autocompletion; on a focusIn event it shows paths from an XML list (here I have used a hardcoded list). But if the user doesn't find the path in the list popped up by the QCompleter, I want the user to be able to browse to a path by typing '/' in the QLineEdit. I am not able to select paths such as /Users etc., and on trying to type, a segmentation fault occurs.

    from PyQt4.Qt import Qt, QObject, QLineEdit
    from PyQt4.QtCore import pyqtSlot, SIGNAL, SLOT
    from PyQt4 import QtGui, QtCore
    import sys

    class DirLineEdit(QLineEdit, QtCore.QObject):
        """docstring for DirLineEdit"""
        def __init__(self):
            super(DirLineEdit, self).__init__()
            self.defaultList = ['~/Development/python/searchMethod', '~/Development/Nuke_python',
                                '~/Development/python/openexr', '~/Development/python/cpp2python']
            self.textChanged.connect(self.__dirCompleter)

        def focusInEvent(self, event):
            if len(self.text()) == 0:
                self._pathsList()
            QtGui.QLineEdit.focusInEvent(self, event)
            self.completer().complete()

        def __dirCompleter(self):
            if len(self.text()) == 0:
                model = MyListModel(self.defaultList, self)
                completer = QtGui.QCompleter(model, self)
                completer.setModel(model)
            else:
                dirModel = QtGui.QFileSystemModel()
                dirModel.setRootPath(QtCore.QDir.currentPath())
                dirModel.setFilter(QtCore.QDir.AllDirs | QtCore.QDir.NoDotAndDotDot | QtCore.QDir.Files)
                dirModel.setNameFilterDisables(0)
                completer = QtGui.QCompleter(dirModel, self)
                completer.setCaseSensitivity(QtCore.Qt.CaseInsensitive)
                completer.setModel(dirModel)
            self.setCompleter(completer)

        def _pathsList(self):
            completerList = QtCore.QStringList()
            for i in self.defaultList:
                completerList.append(QtCore.QString(i))
            lineEditCompleter = QtGui.QCompleter(completerList)
            lineEditCompleter.setCompletionMode(QtGui.QCompleter.UnfilteredPopupCompletion)
            self.setCompleter(lineEditCompleter)

    class MyListModel(QtCore.QAbstractListModel):
        def __init__(self, datain, parent=None, *args):
            """ datain: a list where each item is a row """
            QtCore.QAbstractTableModel.__init__(self, parent, *args)
            self.listdata = datain

        def rowCount(self, parent=QtCore.QModelIndex()):
            return len(self.listdata)

        def data(self, index, role):
            if index.isValid() and role == QtCore.Qt.DisplayRole:
                return QtCore.QVariant(self.listdata[index.row()])
            else:
                return QtCore.QVariant()

    app = QtGui.QApplication(sys.argv)
    smObj = DirLineEdit()
    smObj.show()
    app.exec_()

    Please help fix this or suggest a better way of implementing it.
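
    One hedged observation on the crash (an assumption, not a confirmed diagnosis): the QFileSystemModel and QCompleter built inside __dirCompleter are local variables with no lasting Python reference, so they can be garbage-collected while Qt is still using them, which commonly shows up as a segfault in PyQt. Keeping references on self, for example, avoids that:

    # Keep the model and completer alive by storing them on the widget.
    def __dirCompleter(self):
        if len(self.text()) == 0:
            self._model = MyListModel(self.defaultList, self)
            self._completer = QtGui.QCompleter(self._model, self)
        else:
            self._model = QtGui.QFileSystemModel(self)  # parented to the widget
            self._model.setRootPath(QtCore.QDir.currentPath())
            self._model.setFilter(QtCore.QDir.AllDirs | QtCore.QDir.NoDotAndDotDot | QtCore.QDir.Files)
            self._completer = QtGui.QCompleter(self._model, self)
            self._completer.setCaseSensitivity(QtCore.Qt.CaseInsensitive)
        self.setCompleter(self._completer)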

    Read the article

  • What the Hekaton?

    - by Tony Davis
    Hekaton, the power behind SQL Server 2014's In-Memory OLTP technology, is intended to make data operations run orders of magnitude faster on SQL Server. This works its magic partly by serving database workloads entirely from main memory, using memory-optimized table structures. It replaces the relational engine's standard locking model with an optimistic concurrency model based on time-stamped row versions. Deeper down, the Hekaton engine uses new, 'latch-free' data structures.

    So far, so good, but performance improvements on this scale require a compromise, and the compromise is that these aren't tables as we understand them. For the database developer, these differences are painful because they involve sacrificing some very important bits of the relational model. Most importantly, Hekaton tables don't currently support FOREIGN KEY constraints or CHECK constraints, and you can't put the checks in triggers because there aren't any DML triggers either. Constraints allow a relational designer to enforce relational integrity and data integrity. Without them, of course, 'bad data' can get into our Hekaton tables. There is no easy way of preventing it. For several classes of database and data, this is a show-stopper.

    One may regard all these restrictions regretfully, seeing limited opportunity to try out Hekaton with current databases, but perhaps there is also a sudden glow of recognition. Isn't this how we all originally imagined table variables were going to be, back in SQL 2005? And they have much the same restrictions. Maybe, instead of pretending that a currently-designed database can be 'Hekatonized' with a few mouse clicks, we should redesign databases for SQL 2014 to replace table variables with Hekaton tables, exploiting this technology for fast intermediate processing, and for the most part forget, for now, the idea of trying to convert our base relational tables into Hekaton tables. Few database developers would be averse to having their working tables running an order of magnitude faster, as long as it didn't compromise the integrity of the data in the base tables.
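
    For readers who have not seen the DDL, here is a minimal sketch of a memory-optimized table as documented for SQL Server 2014; the table and column names are invented, and the database must already have a MEMORY_OPTIMIZED_DATA filegroup:

    -- Memory-optimized tables require a nonclustered (hash or range) index
    -- and declare their durability explicitly.
    CREATE TABLE dbo.SessionState
    (
        SessionId INT NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        Payload   VARBINARY(8000),
        LastTouch DATETIME2 NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);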

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide a more streamlined insight into an individual record, or to be able to sort and rank records. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.

    Example - Order History Constancy. Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales and marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries.

    Creating the Queries. Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

        SELECT CustomerID, CompanyName,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers

    Creating Views. To turn this or any query into a view, put CREATE VIEW <name> AS before it; to change it later, use ALTER VIEW <name> AS.

    Creating Computed Columns. If you'd prefer not to create a view, inner queries can also be applied by using computed columns: place your SQL in the (Formula) field of the Computed Column Specification, or check out this article.

    Advanced Scoring and Ranking. One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account: if a customer ordered every month since their first order, they would score 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks: we extract the count of distinct months with an order and then divide by the number of months since the initial order. The result can be used to help sales and marketing know where to focus; you could sort by this percentage to decide where to start calling, or to find patterns describing your best customers. For each customer it returns:

    - Number of orders
    - First order date
    - Last order date
    - Percentage of months in which an order was placed since the first order

        SELECT CustomerID,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            (SELECT MAX(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
            (SELECT MIN(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
            DATEDIFF(mm,
                (SELECT MIN(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceFirstOrder,
            100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate)
                                         + DATEPART(mm, OrderDate))
                   FROM Orders
                   WHERE Orders.CustomerID = Customers.CustomerID)
                / DATEDIFF(mm,
                    (SELECT MIN(OrderDate) FROM Orders
                     WHERE Orders.CustomerID = Customers.CustomerID),
                    GETDATE()) AS OrderPercent
        FROM Customers

    Read the article

  • How to parse JSON data from the web faster [closed]

    - by Kaidul Islam Sazal
    I have a JSON inventory, inventory.json, on the server, like this:

        [
            {
                "body": "SUV",
                "color": {"ext": "White diamond pearl", "int": "Taupe"},
                "id": "276181",
                "make": "Acura",
                "miles": 35949,
                "model": "RDX",
                "pic": [{"full": "http://images1.dealercp.com/90961/000JNBD/001_0292.jpg"}],
                "power": {"drive": "Front wheel drive", "eng": "2.3L DOHC PGM-FI 16-VALVE", "trans": "Automatic"},
                "price": {"net": 29488},
                "stock": "6942",
                "trim": "AWD 4dr Tech Pkg SUV",
                "vin": "5J8TB2H53BA000334",
                "year": 2011
            },
            {
                "body": "Sedan",
                "color": {"ext": "Premium white pearl", "int": "Taupe"},
                "id": "275622",
                "make": "Acura",
                "miles": 40923,
                "model": "TSX",
                "pic": [{"full": "http://images1.dealercp.com/90961/000JMC6/001_1765.jpg"}],
                "power": {"drive": "Front wheel drive", "eng": "2.4L L4 MPI DOHC 16V", "trans": "Automatic"},
                "price": {"net": 22288},
                "stock": "6945",
                "trim": "4dr Sdn I4 Auto Sedan",
                "vin": "JH4CU2F66AC011933",
                "year": 2010
            }
        ]

    These are two entries; there are almost 5000 like this. I parse the JSON like this:

        var url = "inventory/inventory.json";
        $.getJSON(url, function (data) {
            $.each(data, function (index, item) {  // straight-forward loop
                if (item.year == 2012) {
                    $('#desc').append(item.make + ' ' + item.model + '<br/>' +
                                      item.price.net + '<br/>' + item.pic[0].full);
                }
            });
        });

    This works fine, but the searching and fetching are getting slow: there are already 5000 entries and the number grows every day. The straight-forward loop is a brute-force scan of the whole array on every query. Is there a more time-efficient way to search the data than looping over everything?
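    One common remedy, sketched here in Python for illustration (the file name and field names are taken from the question; nothing else is): scan the data once to build an index keyed by the field you filter on, then answer each query with a dictionary lookup instead of a full pass over all 5000 records.

        import json
        from collections import defaultdict

        # Load the inventory once (same record shape as the inventory.json above).
        with open('inventory.json') as f:
            inventory = json.load(f)

        # One O(n) pass builds the index; every later query is a direct lookup.
        by_year = defaultdict(list)
        for item in inventory:
            by_year[item['year']].append(item)

        # Fetch a year's records without rescanning the whole list.
        for item in by_year.get(2012, []):
            print(item['make'], item['model'], item['price']['net'],
                  item['pic'][0]['full'])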

    Read the article

  • Modelling work-flow plus interaction with a database - quick and accessible options

    - by cjmUK
    I want to model a (proposed) manufacturing line, with specific emphasis on its interaction with a traceability database. Various process engineers have already mapped the manufacturing process; I'm only interested in the stations along the line that have to talk to the DB. The intended audience is a mixture of project managers, engineers and IT people, and the purpose is to identify:

    - the points at which the line interacts with the DB (perhaps going so far as indicating the stored procs called at each point, perhaps even which parameters are passed)
    - the communication source (PC / handheld device / PLC)
    - the communication medium (wireless / fibre / copper)
    - control flow (e.g. if a leak test fails, the unit is diverted to a repair station)

    Basically, the model will be used to focus different groups on outstanding tasks; for example, I'm interested in the DB and any front-end app needed, process engineers need to be thinking about the workflow and liaising with the PLC suppliers, and the other IT guys need to make sure we have the hardware and comms in place. Obviously I could just improvise in Visio, but I was wondering if there is a particular modelling technique that might particularly suit my needs or my audience. I'm thinking of a visual model with supporting documentation (as little as possible, as much as is necessary). Clearly, I don't want something that will take me ages to learn effectively, nor one that will alienate non-technical members of the project team. So far I've had brief looks at BPMN, EPC diagrams and standard flow diagrams, and I've forgotten most of what I used to know about UML. And I'm not against picking and mixing, as long as it is quick, clear and effective.

    Conclusion: In the end, I opted for a quasi-workflow/dataflow diagram. I mapped out the parts of the manufacturing process that interact with the traceability DB and indicated, in a significantly simplified form, the data flows and DB activity. Alongside that, I have a supporting document which outlines each process, the data being transacted for each process (a 'data dictionary' of sorts) and details of the hardware and connectivity required. I can't decide whether it is a product of genius or a crime against established software development practices, but I do think it will hit the mark for this particular audience.

    Read the article

  • Does this syntax for specifying Django conditional form display align with python/django convention?

    - by andy
    I asked a similar question on Stack Overflow and was told it was better asked here, so I'll ask it slightly rephrased. I am working on a Django project, part of which will become a distributable plugin that allows the Python/Django developer to specify conditional form-field display logic in the form class or model class. I am trying to decide how the developer must specify that logic. Here's an example:

        class MyModel(models.Model):
            # these are some django model fields which will be used in a form
            yes_or_no = models.SomeField(...)            # choices are yes or no
            why = models.SomeField(...)                  # text, but only relevant if yes_or_no == yes
            elaborate_even_more = models.SomeField(...)  # more text, just here so we can have multiple conditions

            # here I am inventing some syntax... I am looking for suggestions!!
            # this is one possibility
            why.show_if = ('yes_or_no', '==', 'yes')
            elaborate_even_more.show_if = (('yes_or_no', '==', 'yes'), ('why', 'is not', 'None'))

            # help me choose a syntax that is *easy*... and Pythonic and... Djangonic...
            # and that makes your fingers happy to type!

            # another alternative...
            conditions = {
                'why': ('yes_or_no', '==', 'yes'),
                'elaborate_even_more': (('yes_or_no', '==', 'yes'), ('why', 'is not', 'None')),
            }

            # or another alternative...
            """Showe the field whiche hath the name *why* only under that
            circumstance in whiche the field whiche hath the name *yes_or_no*
            hath the value *yes*, in strictest equality."""

            # etc...

    Those conditions will eventually be passed via Django templates to some JavaScript that will show or hide form fields accordingly. Which of those options (or please propose a better option) aligns best with convention, such that it will be easiest for the Python/Django developer to use? Also, are there other considerations that should influence the syntax I choose?
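    As a point of comparison, here is a minimal sketch of how the dict-based variant could be evaluated. The conditions name and the (field, operator, value) tuple shape come from the question; the evaluator itself is hypothetical:

        import operator

        # Map the question's operator strings onto real comparisons.
        OPS = {'==': operator.eq, '!=': operator.ne,
               'is': operator.is_, 'is not': operator.is_not}

        def field_visible(field_name, conditions, form_data):
            """True if every (field, op, value) clause for field_name holds."""
            clauses = conditions.get(field_name, ())
            if clauses and not isinstance(clauses[0], tuple):
                clauses = (clauses,)  # normalize a single bare clause
            return all(OPS[op](form_data.get(field),
                               None if value == 'None' else value)
                       for field, op, value in clauses)

        conditions = {
            'why': ('yes_or_no', '==', 'yes'),
            'elaborate_even_more': (('yes_or_no', '==', 'yes'),
                                    ('why', 'is not', 'None')),
        }
        print(field_visible('why', conditions, {'yes_or_no': 'yes'}))                 # True
        print(field_visible('elaborate_even_more', conditions, {'yes_or_no': 'no'}))  # False

    Serializing the same dict to JSON for the template layer would keep the client-side show/hide logic in step with the server-side definition.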

    Read the article

  • Go/Obj-C style interfaces with ability to extend compiled objects after initial release

    - by Skrylar
    I have a conceptual model for an object system which combines Go/Obj-C interfaces/protocols with the ability to add virtual methods from any unit, not just the one which defines a class. The idea is to allow Ruby-ish open classes, so you can take a minimalist approach to library development and attach small pieces of functionality as they are actually needed by the whole program.

    Implementing this involves a table of methods marked virtual in an RTTI table, which system functions are allowed to add to during module initialization. Upon typecasting an object to an interface, a Go-style lookup is done to create a vtable for that particular mapping, which is then passed off so you get performance comparable to C/C++. Methods may thus be added afterwards which were not previously known, and these new methods allow newer interfaces to be satisfied. I like this idea because it seems very flexible (disregarding the potential for spaghetti code, which can happen with just about any model regardless). By wrapping the system calls for binding methods in a set of clean C-compatible calls, one would also be able to integrate code with shared libraries and retain a decent amount of performance (Go does not do shared linking, and Objective-C does a dynamic lookup on each call).

    Is there a valid use-case for this model that would make it worth the extra background plumbing? As much as this Dylan-style extensibility would be nice to have access to, I can't quite bring myself to a use case that would justify the overhead, other than "it could make some kinds of code more extensible in future scenarios."
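    For readers who haven't met open classes, the dynamic half of the proposal is easy to demonstrate. A minimal Python sketch of the two ingredients (the class, function and interface names are illustrative, not from any real library): a method attached from another unit to a class that has already shipped, plus a Go-style structural check that an object now satisfies an interface it didn't satisfy before.

        class Shape:
            """Class as shipped by a hypothetical minimalist library."""
            def __init__(self, w, h):
                self.w, self.h = w, h

        # "Open class": a later unit attaches a new virtual method without
        # touching (or even seeing) the original definition.
        def area(self):
            return self.w * self.h
        Shape.area = area

        def satisfies(obj, *method_names):
            """Go-style structural test: does obj provide every named method?"""
            return all(callable(getattr(obj, m, None)) for m in method_names)

        s = Shape(3, 4)
        print(satisfies(s, 'area'))  # True: the interface is now satisfied
        print(s.area())              # 12

    The proposal amounts to paying a one-time vtable-construction cost at the typecast so that subsequent calls are direct, rather than incurring Python's (or Objective-C's) per-call lookup.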

    Read the article

  • Raspberry Pi Now Shipping with 512MB RAM; Still Only $35

    - by Jason Fitzpatrick
    Fans of the tiny Raspberry Pi will be pleased to hear the new version of the Model B board now ships with 512MB of RAM (up from the previous 256MB). The best part about the upgrade? The price point stays at $35 a board. From the official Raspberry Pi blog:

    “One of the most common suggestions we’ve heard since launch is that we should produce a more expensive ‘Model C’ version of Raspberry Pi with extra RAM. This would be useful for people who want to use the Pi as a general-purpose computer, with multiple large applications running concurrently, and would enable some interesting embedded use cases (particularly using Java) which are slightly too heavyweight to fit comfortably in 256MB. The downside of this suggestion for us is that we’re very attached to $35 as our highest price point. With this in mind, we’re pleased to announce that from today all Model B Raspberry Pis will ship with 512MB of RAM as standard. If you have an outstanding order with either distributor, you will receive the upgraded device in place of the 256MB version you ordered. Units should start arriving in customers’ hands today, and we will be making a firmware upgrade available in the next couple of days to enable access to the additional memory.”

    We’re excited to get our hands on a new board and try out Raspbmc with that extra RAM.

    Read the article

  • Sorting objects before rendering

    - by dreta
    I'm trying to implement a scene graph, and all the articles I've come across talk about object sorting, so that you sort your objects by "material", for example. Until I sat down and started implementing it, I took this for granted, because it made sense. But now I'm wondering: what does sorting actually change?

    In my engine, I have a manager for UBOs; I use those to store data that will be shared between programs, which at the moment only involves time, the camera and projection matrices, and the lights (I'm not worrying about managing which lights affect which objects at the moment). Now, for each model I have to change the model-to-world matrix uniform, and no sorting is going to change that. So is the jump from changing this matrix to also setting a material for each object that bad? I vaguely remember reading somewhere that each time you change something in the pipeline it has to get flushed, and that can cause performance issues. But since I'm setting up a model-to-world matrix for each draw call anyway, what sense does it make to ever be concerned about this? By the way, is there any information on whether changing a uniform or calling glBufferSubData is more (or less) expensive?
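    For context, the usual argument for sorting is that it groups identical expensive state together, so the pipeline is reconfigured as few times as possible while cheap per-object uniforms are still set on every draw. A minimal sketch of the idea in Python, with hypothetical bind/draw stubs standing in for the real GL calls:

        # Stubs standing in for real render-state calls (hypothetical).
        def bind_shader(name):   print('bind shader:', name)
        def bind_material(name): print('bind material:', name)
        def draw_mesh(name):     print('draw:', name)

        draw_calls = [
            {'shader': 'phong', 'material': 'brick', 'mesh': 'wall'},
            {'shader': 'flat',  'material': 'glass', 'mesh': 'window'},
            {'shader': 'phong', 'material': 'brick', 'mesh': 'tower'},
        ]

        # Sort so the most expensive state (shader) groups first, then
        # material; per-object data is untouched by the sort.
        draw_calls.sort(key=lambda d: (d['shader'], d['material']))

        current = (None, None)
        for d in draw_calls:
            state = (d['shader'], d['material'])
            if state != current:           # reconfigure only when state differs
                bind_shader(d['shader'])
                bind_material(d['material'])
                current = state
            draw_mesh(d['mesh'])           # model-to-world uniform still set per draw

    With the sort, 'wall' and 'tower' share a single shader/material bind; without it, the phong/brick state would be bound twice.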

    Read the article
