Search Results

Search found 2114 results on 85 pages for 'historical debugger'.

Page 40/85 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • How to find where the error is.

    - by gurugio
    How can I find where an error occurs? In C, the return value tells me what kind of error happened, such as a failure to open a file or to allocate memory, but it carries no information about where the error occurred. For example, if function 'foo' calls A, B, C and D and foo returns an error value, that value might have come from A, B, C or D. I cannot tell which function returned the error without running a debugger or adding extra code to find out.

    Read the article

  • While remote debugging, how are the PDBs located? (VS 2008)

    - by Saar
    When the debugger is attached to a process on a remote server, what locations are searched for the PDBs, and in what order? (For example, are they searched for on the remote server (the debuggee) or on the local client (the debugger)?) When I use the debugger to manually load a PDB file from a specific location, is the debugger looking for the file locally, or is the remote debugging monitor looking for the file on the remote machine? Is there any article that describes this process?

    Read the article

  • Web application / Domain model integration using JSON capable DTOs [on hold]

    - by g-makulik
    I'm a bit confused about the architectural choices in the web-application/Java/Python world. In the C/C++ world the available (open-source) choices for implementing web applications are pretty close to zero; once Java or Python is involved, the choices explode into a hard-to-sort-out mess of available 'frameworks' and application approaches. I want to sort out a clean MVC model, where the M stands for a fully blown (POCO/POJO-driven) domain model (in the sense of M. Fowler's EAA patterns) implemented in a mature OO language (Java, C++). The background is: I have a system with certain hardware components (which introduce system-immanent active behavior) and a configuration database for system metadata and HW-component configuration data (these are usually even self-contained, since the HW components are capable of persisting their configuration data anyway). For the configuration/status data exchange protocol with the HW components we have chosen the Google Protobuf format, which works well for the directly wired communication with these components. This protocol is already used successfully with a Java-based GUI application via a TCP/IP connection to the main system-controlling HW component. That application has some drawbacks and design flaws for historical reasons. Now we want to develop an abstract model (domain model) for configuring and monitoring those HW components that represents a more use-case-oriented view of the overall system behavior. I have the feeling that a plain Java class model would fit best for this (a C++ implementation seems to have too much implementation/integration overhead with viable language-bridge interfaces). Google Protobuf message definitions could still serve well to describe the DTO objects used to interact with a domain model API. But integrating Google Protobuf messages client-side, e.g. for data binding in the current view, doesn't seem to be a good choice. I'm thinking about some extra serialization features, e.g. JSON-based data exchange with the views/controllers. Most lightweight solutions seem to involve a Python-based presentation layer using JSON-based data transfer (I'm at least not sure I am fully informed about this). Is there some lightweight framework (applicable to a limited ARM Linux platform) that supports such an architecture for realizing a web application? UPDATE: According to my recent research and comments from colleagues, I've noticed that using Java (and some JVM) might not be the preferable choice for integration with Python on a limited Linux system like ours (running on ARM9, with memory and MCU costs that are hard to argue about), but C/C++ modules would do well for this (since they form the native interface for Python extensions, don't they?). I can imagine providing a domain model through an appropriate C/C++ API (though I still think it means more effort and higher skill requirements for the involved developers). Still, I'm searching for a good approach that supports such an architecture. I'll appreciate any pointers!
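
    A minimal sketch of the JSON angle on the Java side, assuming protobuf-java together with the protobuf-java-util add-on; ComponentConfig stands in for one of the message classes generated from the existing .proto definitions and is purely hypothetical:

    import com.google.protobuf.InvalidProtocolBufferException;
    import com.google.protobuf.util.JsonFormat;

    public class ConfigDtoJson {

        // ComponentConfig is a placeholder for a generated Protobuf message class.
        public static String toJson(ComponentConfig config) throws InvalidProtocolBufferException {
            // Serialize the Protobuf DTO to JSON for the view/controller layer.
            return JsonFormat.printer().print(config);
        }

        public static ComponentConfig fromJson(String json) throws InvalidProtocolBufferException {
            // Parse JSON coming back from the web tier into the same DTO type.
            ComponentConfig.Builder builder = ComponentConfig.newBuilder();
            JsonFormat.parser().merge(json, builder);
            return builder.build();
        }
    }

    With this approach the same .proto definitions would describe both the wire protocol to the HW components and the JSON payloads exchanged with the views, keeping the DTO layer in one place.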

    Read the article

  • What's new in VS.10 & TFS.10?

    - by johndoucette
    Getting my geek on… I have decided to call the products VS.10 (Visual Studio 2010), TP.10 (Test Professional 2010), and TFS.10 (Team Foundation Server 2010). Thanks, Neno Loje. What's new in Visual Studio & Team Foundation Server 2010? Focusing on the Visual Studio Team System (VSTS) ALM-related parts:

    Visual Studio Ultimate 2010
    - NEW: IntelliTrace® (aka the historical debugger)
    - NEW: Architecture Tools
      - New project type: Modeling Project
      - UML diagrams: Use Case, Class, Sequence (supports reverse engineering), Activity and Component diagrams
      - Layer Diagram (with Team Build integration for layer validation)
      - Architecture Explorer: dependency visualization, DGML
    - Web & Load Tests

    Visual Studio Premium 2010
    - NEW: Architecture Tools: read-only model viewer
    - Development tools
      - Code Analysis: new rules like SQL injection detection, rule sets
      - Code Profiler: multi-tier profiling, JScript profiling, profiling applications on virtual machines in sampling mode
      - Code Metrics
    - Test tools
      - Code Coverage
      - NEW: Test Impact Analysis
      - NEW: Coded UI Test
    - Database tools (DB schema versioning & deployment)

    Visual Studio Professional 2010
    - Debugger
      - Mixed-mode debugging for 64-bit applications
      - Export/import of breakpoints and data tips

    Visual Studio Test Professional 2010
    - Microsoft Test Manager (MTM, formerly known as "Camano")
    - Fast Forward Testing

    Visual Studio Team Foundation Server 2010
    - Work item tracking and project management
      - New MSF templates for Agile and CMMI (v5.0)
      - Hierarchical work items
      - Custom work item link types
      - Ready-to-use Excel agile project management workbooks for managing your backlogs (including capacity planning)
      - Convert a work item query to an Excel report
    - MS Excel integration: support for work item hierarchies; formatting is preserved after doing a 'Refresh'
    - MS Project integration: hierarchy and successor/predecessor info is now synchronized
    - NEW: Test Case Management
    - Version control: public workspaces, branch & merge visualization, tracking of changesets & work items, gated check-in
    - Team Build: build controllers and agents, Workflow 4-based build process
    - NEW: Lab Management (only a pre-release is available at the moment!)
    - Project portal & reporting: dashboards (on SharePoint Portal), burndown chart, TFS Web Parts (to show data from TFS)
    - Administration & operations
      - Topology enhancements: application-tier network load balancing (NLB), SQL Server scale-out, improved SharePoint flexibility, Report Server flexibility, zone support, Kerberos support, separation of TFS and SQL administration
      - Setup: separate install from configure, improved installation wizards, optional components, simplified account requirements, improved Reporting Services configuration, setup consolidation, upgrading from previous TFS versions, improved IIS flexibility
      - Administration: consolidation of command-line tools, user rename support
      - Project collections: archive/restore individual project collections, move team project collections, server consolidation, team project collection split, team project collection isolation, server request cancellation
    - Licensing: TFS server license included in MSDN subscriptions

    Removed features (former features not part of Visual Studio 2010):
    - Debug » Start With Application Verifier
    - Object Test Bench
    - IntelliSense for C++/CLI
    - Debugging support for SQL 2000

    Read the article

  • SQL Server 2008 Compression

    - by Peter Larsson
    Hi! Today I am going to talk about compression in SQL Server 2008. The data warehouse I currently design and develop holds historical data back to 1973. The data warehouse will get its own blog post later due to its complexity. However, the server has 60 GB of memory (of which 48 GB are dedicated to the SQL Server service), so not all data fit in memory, and the SAN is not the fastest one around. So I decided to give compression a go, since we use Enterprise Edition anyway. This is the code I use to compress all tables with PAGE compression.

    DECLARE @SQL VARCHAR(MAX)

    DECLARE curTables CURSOR FOR
        SELECT  'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                + '.' + QUOTENAME(OBJECT_NAME(object_id))
                + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
        FROM    sys.tables

    OPEN    curTables

    FETCH   NEXT
    FROM    curTables
    INTO    @SQL

    WHILE @@FETCH_STATUS = 0
        BEGIN
            IF @SQL IS NOT NULL
                RAISERROR(@SQL, 10, 1) WITH NOWAIT

            FETCH   NEXT
            FROM    curTables
            INTO    @SQL
        END

    CLOSE       curTables
    DEALLOCATE  curTables

    Copy and paste the result into a new code window and execute the statements. One thing I noticed when doing this is that the database grows by the same size as the table being rebuilt. If the database cannot grow by this much, the operation fails. In my case, I first ended up with an orphaned connection. Not good. And this is the code I use to create the index compression statements:

    DECLARE @SQL VARCHAR(MAX)

    DECLARE curIndexes CURSOR FOR
        SELECT      'ALTER INDEX ' + QUOTENAME(name)
                    + ' ON '
                    + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                    + '.'
                    + QUOTENAME(OBJECT_NAME(object_id))
                    + ' REBUILD PARTITION = ALL WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE)'
        FROM        sys.indexes
        WHERE       OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                    AND OBJECTPROPERTY(object_id, 'IsTable') = 1
        ORDER BY    CASE type_desc
                        WHEN 'CLUSTERED' THEN 1
                        ELSE 2
                    END

    OPEN    curIndexes

    FETCH   NEXT
    FROM    curIndexes
    INTO    @SQL

    WHILE @@FETCH_STATUS = 0
        BEGIN
            IF @SQL IS NOT NULL
                RAISERROR(@SQL, 10, 1) WITH NOWAIT

            FETCH   NEXT
            FROM    curIndexes
            INTO    @SQL
        END

    CLOSE       curIndexes
    DEALLOCATE  curIndexes

    When this was done, I noticed that the 90 GB database was now only 17 GB. And most importantly, the complete database could now reside in memory! After this I took care of the administrative tasks: backups. Here I copied the code from Management Studio because I didn't want to spend too much time on this. The code looks like this (notice the COMPRESSION option):

    BACKUP DATABASE [Yoda]
    TO      DISK = N'D:\Fileshare\Backup\Yoda.bak'
    WITH    NOFORMAT,
            INIT,
            NAME = N'Yoda - Full Database Backup',
            SKIP,
            NOREWIND,
            NOUNLOAD,
            COMPRESSION,
            STATS = 10,
            CHECKSUM
    GO

    DECLARE @BackupSetID INT

    SELECT  @BackupSetID = Position
    FROM    msdb..backupset
    WHERE   database_name = N'Yoda'
            AND backup_set_id = (SELECT MAX(backup_set_id) FROM msdb..backupset WHERE database_name = N'Yoda')

    IF @BackupSetID IS NULL
        RAISERROR(N'Verify failed. Backup information for database ''Yoda'' not found.', 16, 1)

    RESTORE VERIFYONLY
    FROM    DISK = N'D:\Fileshare\Backup\Yoda.bak'
    WITH    FILE = @BackupSetID,
            NOUNLOAD,
            NOREWIND
    GO

    After running the backup, the file size was reduced even further thanks to the zip-like compression algorithm used in SQL Server 2008. The file size? Only 9 GB. //Peso

    Read the article

  • Enterprise MDM: Rationalizing Reference Data in a Fast Changing Environment

    - by Mala Narasimharajan
    By Rahul Kamath

    Enterprises must move at a rapid pace to establish and retain global market leadership by continuously focusing on operational efficiency, customer intimacy and relentless execution.

    Reference Data Management
    As multi-national companies with a presence in multiple industry categories, market segments, and geographies, their ability to proactively manage changes and harness them to align their front office with back-office operations and performance management initiatives is critical to make the proverbial elephant dance. Managing reference data, including types and codes, business taxonomies, complex relationships as well as mappings, represents a key component of the broader agenda for enabling flexibility and agility without sacrificing enterprise-level consistency, regulatory compliance and control.

    Financial Transformation
    Periodically, companies find that processes implemented a decade or more ago no longer mirror the way they do business, and they seek to proactively transform how they operate their business and its underlying processes. Financial transformation often begins with the redesign of one's chart of accounts. The ability to model and redesign one's chart of accounts collaboratively, quickly validate it against historical transaction bases and secure business buy-in across multiple line-of-business stakeholders, while continuing to manage changes within the legacy general ledger systems and downstream analytical applications and piloting the in-flight transformation, can mean the difference between controlled success and project failure.

    Attend the session titled CON8275 - Oracle Hyperion Data Relationship Management: Enabling Enterprise Transformation at Oracle OpenWorld on Monday, October 1, 2012 at 4:45pm in Ballroom A of the InterContinental Hotel to learn how Oracle's Data Relationship Management solution can help you stay ahead of the competition and proactively harness master (and reference) data changes to transform your enterprise. Hear in-depth customer testimonials from GE Healthcare and Old Mutual South Africa to learn how others have harnessed this technology effectively to build enduring competitive advantage through business process innovation and investments in master data governance. Hear GE Healthcare discuss how DRM has enabled financial transformation, ERP consolidation, mergers and acquisitions, and the alignment of reference data across financial and management reporting applications. Also, learn how Old Mutual SA has upgraded to EBS R12 Financials and is transforming the management of its chart of accounts for corporate reporting. Separately, an esteemed panel of DRM customers including Cisco Systems, Nationwide Insurance, Ralcorp Holdings and Mentor Graphics will discuss their perspectives on how DRM has helped them address business challenges associated with enterprise MDM, including major change management initiatives such as financial transformations, corporate restructuring, mergers & acquisitions, and the rationalization of financial and analytical master reference data to support alternate business perspectives for the alignment of EPM/BI initiatives. Attend the session titled CON9377 - Customer Showcase: Success with Oracle Hyperion Data Relationship Management at OpenWorld on Thursday, October 4, 2012 at 12:45pm in the Ballroom of the InterContinental Hotel to interact with our esteemed speakers first hand.

    Read the article

  • SCHA API for resource group failover / switchover history

    - by krishna.k.murthy
    The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchovers and failovers of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, until now there was no external interface to access the event history. Oracle Solaris Cluster 4.2 provides a new API option for viewing the recent history of resource group switchovers in a program-parsable format: a new option tag argument, RG_FAILOVER_LOG, for the existing API command scha_cluster_get, which can be used to list recent failover/switchover events for resource groups. The command usage is shown below:

    # scha_cluster_get -O RG_FAILOVER_LOG number_of_days

    number_of_days: the number of days to be considered when scanning the historical logs.

    The command returns a list of events in the following format, with each field separated by a semicolon [;]:

    resource_group_name;source_nodes;target_nodes;time_stamp

    source_nodes: the node names from which the resource group failed over or was switched manually.
    target_nodes: the node names to which the resource group failed over or was switched manually.

    There is a corresponding enhancement in the C API function scha_cluster_get(), which uses the SCHA_RG_FAILOVER_LOG query tag. In the example below, geo-infrastructure (failover resource group), geo-clusterstate (scalable resource group), oracle-rg (failover resource group), asm-dg-rg (scalable resource group) and asm-inst-rg (scalable resource group) are part of a Geographic Edition setup.

    # /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3
    geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
    geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014
    oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014
    asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014
    asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014
    oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
    geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014
    geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014
    asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014
    asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014
    oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014
    oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014
    oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014

    Note that in the output some of the entries may have an empty string for source_nodes. Such entries correspond to events in which the resource group was brought online manually or during a cluster boot-up. Similarly, an empty target_nodes list indicates an event in which the resource group went offline.

    - Arpit Gupta, Harish Mallya
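
    As an illustration only (not part of the Oracle Solaris Cluster product), a small Java sketch that parses this semicolon-separated output, preserving the empty source/target fields described above:

    import java.util.Arrays;
    import java.util.List;

    public class FailoverLogParser {

        // One failover/switchover event as reported by RG_FAILOVER_LOG.
        record Event(String resourceGroup, List<String> sourceNodes, List<String> targetNodes, String timestamp) {}

        static Event parseLine(String line) {
            // Split on ';' with a negative limit so empty fields are preserved.
            String[] fields = line.split(";", -1);
            return new Event(
                    fields[0],
                    splitNodes(fields[1]),   // empty when the group was brought online manually or at cluster boot
                    splitNodes(fields[2]),   // empty when the group went offline
                    fields[3]);
        }

        // Node lists are comma-separated; an empty string means "no nodes".
        private static List<String> splitNodes(String field) {
            return field.isEmpty() ? List.of() : Arrays.asList(field.split(","));
        }

        public static void main(String[] args) {
            Event e = parseLine("geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014");
            System.out.println(e.resourceGroup() + " -> " + e.targetNodes());
        }
    }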

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase “A picture is worth a thousand words” is as true now as ever. Providing the right users with access to the right product data, at the right time, can provide significant benefits to a business. This is especially evident with increasing technical and product complexities, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy. At a bare minimum, knowledge workers use multiple individual documents of different formats and structure, and leverage visualization solutions to access information; but the real value of visualization can be fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context. The picture below illustrates this visualization maturity curve, as we presented during the last Oracle Open World and the transformational effect that visualization can have on PLM processes and performance (check out the post about AutoVue Key Highlights from Oracle Open World 2012 for more information). Organizations are likely to see greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM) and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making capacity as users can benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review digital annotations added by other users specific to the engineering change request he is reviewing rather than all historical annotations. The last stage on the curve is what we call augmented business visualization (ABV).  ABV is an innovative framework which lets structured data (from Oracle’s Agile PLM for instance) interact with unstructured data (documents, design, 3D models, etc). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color displays can be used in order to identify parts with specific characteristics (for example pending quality issues) and you can take actions directly from within the context of documents and designs, maximizing user productivity. Those who had the chance to attend our PLM session during Oracle Open World already got a sneak peek of our latest augmented business visualization for Oracle’s Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled “The PLM State: the Manhattan Project-Oracle’s Next Big Secret Weapon” that “this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions.” If you are interested in learning more about ABV for Oracle’s Agile PLM and hear about real examples of usage of visualization at all stages of the visualization maturity curve, don’t miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • Harnessing Business Events for Predictive Decision Making - part 1 / 3

    - by Sanjeev Sharma
    Businesses have long relied on data mining to elicit patterns and forecast future demand and supply trends. Improvements in computing hardware, specifically storage and compute capacity, have significantly enhanced the ability to store and analyze mountains of data in ever-shrinking time-frames. Nevertheless, the reality is that data growth is outpacing storage capacity by a factor of two, and computing power is still very much bounded by Moore's Law, doubling only every 18 months. Faced with this data explosion, businesses are exploring means to develop human-brain-like capabilities in their decision systems (including BI and analytics) to make sense of the data storm - in other words, business events - in real time, and respond proactively rather than reactively. It is more like having a little bit of the right information just a little bit beforehand than having all of the right information after the fact. To appreciate this thought better, let's first understand the workings of the human brain. Neuroscience research has revealed that the human brain is predictive in nature and that talent is nothing more than exceptional predictive ability. The cerebral cortex, the part of the human brain responsible for cognition, thought, language, etc., comprises five layers. The lowest layer in the hierarchy is responsible for sensory perception, i.e. discrete, detail-oriented tasks, whereas each of the layers above is increasingly focused on assembling higher-order conceptual models. Information flows both up and down the layered memory hierarchy. This allows the conceptual mental models to be refined over time through experience and repetition. Secondly, and more importantly, the top layers are able to prime the lower layers to anticipate certain events based on the existing mental models, thereby giving the brain a predictive ability. In a way the human brain develops a "memory of the future", a sort of anticipatory thinking which lets it predict based on the occurrence of events in real time. A higher order of predictive ability stems from being able to recognize the lack of certain events. For instance, it is one thing to recognize the beats in a music track and another to detect beats that were missed, which involves a higher-order predictive ability. Existing decision systems analyze historical data to identify patterns and use statistical forecasting techniques to drive planning. They are similar to the human brain in that they employ business rules very much like mental models to chunk and classify information. However, unlike the human brain, existing decision systems are unable to evolve these rules automatically (AI is still best suited for highly specific tasks) and predict the future based on real-time business events. Make no mistake: existing decision systems remain vital to driving long-term and broader business planning. For instance, a telco will still rely on BI and analytics software to plan promotions and optimize inventory, but will tap into business-event-enabled predictive insight to identify specifically which customers are likely to churn and engage with them proactively. In the next post, I will describe the technology components that enable businesses to harness real-time events and drive predictive decision making.

    Read the article

  • SOA Summit - Oracle Session Replay

    - by Bruce Tierney
    If you think you missed the most recent Integration Developer News (IDN) "SOA Summit" 2013... good news, you didn't. At least not the replay of the Oracle session titled: Three Solutions for Simplifying Cloud/On-Premises Integration. As you will see in the replay below, this session introduces three common reasons for integration complexity:

    - Disparate toolkits
    - Lack of API management
    - Rigid, brittle infrastructure

    and then the three solutions to these challenges:

    - Unify cloud and on-premises integration
    - Enable multi-channel development with API management
    - Plan for the unexpected - future readiness

    The last solution, on future readiness, describes how you can transition from being reactive to new trends, such as the Internet of Things (IoT), by modifying your integration strategy to enable business agility, and how to recognize trends through Fast Data event processing ahead of your competition. The implementation by Oracle SOA Suite customer SFpark (San Francisco Metropolitan Transit Authority) with API Management is covered as shown in the screenshot to the right. This case study covers the core areas of API Management for partners to build their own applications by leveraging parking availability and real-time pricing, as well as mobile enablement of the data integrated by SOA Suite underneath. Download the free SFpark app from the Apple and Android app stores to check it out. When looking into the future, the discussion starts with a historical look to better prepare for what comes next. As shown in the image below, one of the next frontiers after mobile and cloud integration is a deeper level of direct "enterprise to customer" interaction. Much of this relates to the Internet of Things. Examples of IoT from the perspective of SOA and integration are also covered in the session. For example, early adopter Turkcell and their tracking of mobile phone users as they move from point A to B to C is shown in the image to the right. As you look into more "smart services" such as location-based services, how "future ready" is your application infrastructure? Check out the replay by clicking the video image below to learn about these three challenges and solutions, including how to "future ready" your application infrastructure:

    Read the article

  • How do we provide valid time estimates during Sprint Planning without doing "too much" design?

    - by Michael Edenfield
    My team is getting up to speed with Scrum, but most of us are more familiar with non-agile or "pseudo-"agile methodologies. The part that is the biggest hurdle for us is running an efficient Sprint Planning meeting where we break our backlog items into tasks and estimate hours. (I'm using the terminology from the VS2010 Scrum Template; apologies if I use the wrong word somewhere.) When we try to figure out how long a task is going to take, we often fall into the trap of designing the feature at the code level -- table layout, interfaces, etc. -- in order to figure out how long that's going to take. I'm pretty sure this is not the appropriate place to be doing that kind of design. We should be scheduling tasks for these design meetings during the sprint. However, we are having trouble figuring out how else to come up with meaningful estimates for the tasks. Are there any practical habits/techniques/etc. for making a judgement call about how long a feature is going to take, without knowing how you plan to implement it? If our time estimates are going to change significantly once the design has been completed, how can we properly budget our Sprint backlog ahead of time? EDIT: Just to clarify, since some of the comments/answers are very valid but I think are addressing the wrong question. We know that what we're doing is not right, and that we should be building time into the sprint for this design. Conceptually all of the developers understand that. We are also bringing in a team member with Scrum experience to keep us on track if we start going off into the weeds. The problem is that, without going through this design process, we are finding it difficult to provide concrete time estimates for anything. We are constantly saying things like "well, if we design it this way it might take 8 hours, but if we end up having to do it this other way instead, that will take about 32, but it might not be as bad once we start trying to write it...". I also assume that this process will get better once we have some historical velocity to work from, but many of the technologies and architectural patterns we are using are new to us. But if potentially-wildly-wrong estimates are just a natural part of adapting this process then we will just need to recondition ourselves to accept that :)

    Read the article

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing. Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites. While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S. Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

    Read the article

  • Oredev 2012: Summary and source code

    - by Laurent Bugnion
    This week, I had the pleasure of being invited to talk at Oredev, a really cool conference taking place in Malmo, Sweden. The whole event is awesome, including a very special dinner on Monday with a sauna and swimming in a 6-degree-cold Baltic Sea, and a reception with dinner at the town hall, including the mayor himself. Considering Malmo is a town of 300'000 inhabitants, it is a pretty nice occasion and the historical building itself is really worth seeing. For those interested, I placed my pictures on my Flickr account. I had a workshop on Tuesday morning about Windows 8 development with XAML/C#, and then a session on Wednesday about MVVM in Windows Phone 8 and Windows 8, of course using MVVM Light. I was very nervous because I had reworked some of my demos as recently as that morning, in the wake of the Build conference last week and the release of both the Windows Phone SDK and MVVM Light V4.1. Everything went well however, and if I judge by the people I talked to after the talk, and by Twitter, everything went pretty well. Before my talk on Tuesday, I had the pleasure to see a talk by Iris Classon (@irisclasson) on the challenges of being a "n00b" and a woman in software development. I especially appreciated her research and conclusions on the lack of women in our industry, a topic that is dear to my heart (because I want the best possible future for my two daughters, and also because I really enjoy working with women on projects and getting a different insight on the art of software development). I really want to thank the excellent organization committee for their hard work and their fantastic welcome to Malmo. In particular, Emily Holweck did a wonderful job and was super helpful throughout the preparation and the conference itself. I took a few pictures during my stay, all with the new Nokia Lumia 920, and hope you will enjoy them too.

    The source code and the slides…

    The source code is available for download from SkyDrive. You will find the following:

    - Windows 8 workshop slides.
    - MVVM Applied slides.
    - Source code package with:
      - Win8Demo: the demo I built during the 4-hour workshop, with some light MVVM, web services (JSON), GridView, design-time data (Blend / Visual Studio designer), Bing Maps integration, location sensor, and Search pane integration.
      - SemanticZoomSample: a sample I put together to demonstrate the SemanticZoom control, with two GridViews and of course full design-time data for Blend work. Due to time constraints, I was not able to show this demo during the workshop, but I publish it anyway, hoping it will be useful to someone.
      - PictureUploader: the demo I built during my 50-minute session about MVVM Applied in Windows Phone 8 and Windows 8. Code sharing, design-time data and MVVM Light are used in the Windows Phone 8 and Windows 8 apps.

    And in video…

    You can also see the video of my MVVM talk thanks to the good services of the Oredev team! MVVM Applied in Windows Phone and Windows 8 from Øredev Conference on Vimeo.

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article

  • Why shouldn't I be using public variables in my Java class?

    - by Omega
    In school, I've been told many times to stop using public for my variables. I haven't asked why yet. This question: Are Java's public fields just a tragic historical design flaw at this point? seems kinda related to this. However, the answers there don't seem to discuss why it is "wrong", but instead focus on what to use instead. Look at this (unfinished) class:

    public class Reporte {
        public String rutaOriginal;
        public String rutaNueva;
        public int bytesOriginales;
        public int bytesFinales;
        public float ganancia;

        /**
         * Constructor para objetos de la clase Reporte
         */
        public Reporte() {
        }
    }

    No need to understand Spanish. All this class does is hold some statistics (those public fields) and then do some operations with them (later). I will also need to modify those variables often. But well, since I've been told not to use public, this is what I ended up doing:

    public class Reporte {
        private String rutaOriginal;
        private String rutaNueva;
        private int bytesOriginales;
        private int bytesFinales;
        private float ganancia;

        /**
         * Constructor para objetos de la clase Reporte
         */
        public Reporte() {
        }

        public String getRutaOriginal() {
            return rutaOriginal;
        }

        public String getRutaNueva() {
            return rutaNueva;
        }

        public int getBytesOriginales() {
            return bytesOriginales;
        }

        public int getBytesFinales() {
            return bytesFinales;
        }

        public float getGanancia() {
            return ganancia;
        }

        public void setRutaOriginal(String rutaOriginal) {
            this.rutaOriginal = rutaOriginal;
        }

        public void setRutaNueva(String rutaNueva) {
            this.rutaNueva = rutaNueva;
        }

        public void setBytesOriginales(int bytesOriginales) {
            this.bytesOriginales = bytesOriginales;
        }

        public void setBytesFinales(int bytesFinales) {
            this.bytesFinales = bytesFinales;
        }

        public void setGanancia(float ganancia) {
            this.ganancia = ganancia;
        }
    }

    Looks kinda pretty, but it seems like a waste of time. Google searches for "When to use public in Java" and "Why shouldn't I use public in Java" seem to discuss a concept of mutability, although I'm not really sure how to interpret such discussions. I do want my class to be mutable - all the time.
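
    As an aside on what the accessors buy you (this example is not from the original question; the validation rule and the derived value are invented for illustration): keeping the fields private means rules can be added later without touching any caller.

    public class Reporte {
        private int bytesOriginales;
        private int bytesFinales;

        // Callers keep calling setBytesOriginales(); the added check is invisible to them.
        public void setBytesOriginales(int bytesOriginales) {
            if (bytesOriginales < 0) {
                throw new IllegalArgumentException("bytesOriginales must not be negative");
            }
            this.bytesOriginales = bytesOriginales;
        }

        public void setBytesFinales(int bytesFinales) {
            if (bytesFinales < 0) {
                throw new IllegalArgumentException("bytesFinales must not be negative");
            }
            this.bytesFinales = bytesFinales;
        }

        // ganancia no longer needs to be stored at all; it can be derived on demand.
        public float getGanancia() {
            return bytesOriginales == 0 ? 0f : 1f - (float) bytesFinales / bytesOriginales;
        }
    }

    With public fields, a change like this would ripple through every piece of code that reads or writes them directly, which is the usual argument against them even for classes that are meant to stay mutable.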

    Read the article

  • CodePlex Daily Summary for Sunday, April 18, 2010

    CodePlex Daily Summary for Sunday, April 18, 2010New ProjectsBare Bones Email Trace Listener: Bare Bones Email Trace Listener is about the simplest email trace listener you can have. No bells, no whistles, and no good if you need authenticat...Cartellino: Scopo del progetto è la realizzazione di un software in grado di rilevare i dati dai rilevatori 3Tec (www.3tec.it) e stampare i cartellini presenza...Castle Windsor app.config Properties: The Castle Windsor app.config Properties library makes it possible for users of Castle Windsor to reference appSettings values in Windsor's XML pro...DeskD: This is a simple desktop dictionary application(something like WordWeb) created in Java using Netbeans IDE. Since i am new to codeplex all updates ...FunPokerMakerOnline: It is a play of poker online with a game editor. It is done with .net 4 and WPF and SOAP or WCF. KLOCS Team GIN Project: This is a Master's Degree program group project. It may have academic interest, but won't be maintained after June 2010KNN: This is KNN projectProject Santa: Program to organize teams using mysql databases and c# in a clean and robust task and group system. For more information see my blog post at http:/...ProjetoIntegradoJuridico: Sistema Integrado de Acompanhamento JurídicoRSSR for Windows Phone 7: This is a simple RSS reader application, the project aims to show people that it is easy to build application for windows phones. The applicatio...Simple Rcon: Simple Rcon is a simple lightweight rcon client for HL1/HL2 Servers. It is developed in C# and WPFTAB METHOD SQL Create a data dictionary from your Transact SQL code: TABMETHODSQL makes it easier for data/information workers to document their work. Create a data governance solution that maps sql data process, inc...TM BF Tournament: WPF software to manage Trackmania tournament with Battle France RulesviBlog: visinia plugin, this plugin is used to add blogging facility in visinia cmsviNews: visinia plugin, this plugin can be used to create a news portal like cnn.com nytimeVolumeMaster: VolumeMaster is an On Screen Display (OSD) that gets activated whenever the volume changes. It's written in WPF and uses Vista Core Audio API by Ra...WiiCIS.NET: This is a managed port of WiiCIS, which is a Nintendo Wiimote library originally created by TheOboeNerd and posted on Sourceforge.New ReleasesCastle Windsor app.config Properties: Version 1.0: Initial release.Code for Rapid C# Windows Development eBook: Enumerable Debugger Visualizer Version 1.1: Second release of the Enumerable Debugger Visualizer. There are more classes registered and it is more robust. The list of classes I have register...Convection Game Engine (Basic Edition): Convection Basic (40223): Compiled version of Convection Basic change set 40223.CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 0.08.59: See Source Code tab for recent change history.DbEntry.Net (Lephone Framework): DbEntry.Net 3.9: DbEntry.Net is a lightweight Object Relational Mapping (ORM) database access compnent for .Net 3.5. It has clearly and easily programing interface ...Hash Calculator: HashCalculator 2.0: Upgraded to .NET Framework 4.0 Added support to calculate CRC32 hash function Added "Cancel" button in the Windows 7 taskbar thumbnailHKGolden Express: HKGoldenExpress (Build 201004172120): New features: Added jump links at top of page of message. Bug fix: Fixed page count bug. 
Improvements: HKGolden Express now uses DocumentBuild...HTML Ruby: 6.21.4: Styles added to override those on some sites for better rendering of ruby Fix regression on complex ruby annotation rendering Better spacingHTML Ruby: 6.21.5: Removed debug code in preference handling Status bar indicator now resets for each action Replace ruby in place without using document fragment...IceChat: IceChat 2009 Alpha 12.4 EXE Update: This is simply an update to the main IceChat program files and DLL. Simpply overwrite the ones in the place where IceChat 2009 is installed.IceChat: IceChat 2009 Alpha 12.4 Full Install: Build Alpha 12.4 - April 17 2010 Added IceChatScript.dll , needs to be added in same folder with EXE and IPluginIceChat.dll Added Self Notice in ...PokeIn Comet Ajax Library: PokeIn Library v05 x64: With this version, PokeIn library has become a stable. Numerous tests have completed. This is the first release candidate of PokeIn. Cheers!PokeIn Comet Ajax Library: PokeIn Library v05 x86: PokeIn Library version 0.5 (x86) With this version, PokeIn library has become a stable. Numerous tests have completed. This is the first release c...Project Santa: Project Santa V1.0: The first initial release of my project manager program, for more information see http://coderplex.blogspot.com/2010/04/project-manager-using-mysq...Salient: TestingWithVSDevServer v1: Using code from Salient, I have assembled a few strategies for programmatic contol of the Visual Studio Development Server (WebDev.WebServer.exe). ...SharePoint Navigation Menu: spNavigationMenu 1.1: Changed the CAML query so it will order by Link Order, then Title. Added the ability to override the On Hover event on the parent menu to use On ...Simple Rcon: Simple Rcon Version 1: Version 1TAB METHOD SQL Create a data dictionary from your Transact SQL code: RELEASE 1: TESTING THE RELEASE SYSTEMTribe.Cache: Tribe.Cache Beta 0.1: Beta release of Tribe.Cache - Now with cache expiration serviceviBlog: viBlog_beta: visinia plugin to add blogging facility in visinia cmsviNews: viNews_beta: visinia plugin.visinia: visinia_beta2: visinia beta 2 released with many new feature.Visual Studio DSite: Visual C++ 2008 Login Form: A simple login form made in visual c 2008. Source code only.WiiCIS.NET: WiiCIS.NET v0.11: 0.11 Removed an unnecessary function from the Wiimote class, and improved the demo. You will need the latest version of SlimDX to compile the sourc...WinControls TreeListView: TreeListView 1.5.1: -fixes issue #5837 -Preliminary feature #5874WoW Character Viewer: Viewer Setup: Finally, I've brought out the next setup of WoW Viewer. Most loose ends have been tied up. Loading and Saving of character files has been fixed.Most Popular ProjectsRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseMicrosoft SQL Server Community & Samplespatterns & practices – Enterprise LibraryPHPExcelFacebook Developer ToolkitBlogEngine.NETMvcContrib: a Codeplex Foundation projectIronPythonMost Active ProjectsRawrpatterns & practices – Enterprise LibraryIndustrial DashboardFarseer Physics EnginejQuery Library for SharePoint Web ServicesIonics Isapi Rewrite FilterGMap.NET - Great Maps for Windows Forms & PresentationProxi [Proxy Interface]BlogEngine.NETCaliburn: An Application Framework for WPF and Silverlight

    Read the article

  • Why do AWS spot-instance prices spike above the "on demand" pricing?

    - by Laykes
    Amazon Pricing on Spot Instance Inconsistencies

    This is something which is best explained through screenshots of a historical chart of instance pricing. If you look at a lot of the instance prices for spot instances, you will notice regular patterns of spikes. As you can see, the price for this compute-medium instance regularly spikes above the on-demand price. A c1.medium instance (on demand) would only cost $0.186 per hour. But for a period of a few weeks, in zone B, the price would regularly spike to $1.20. This is some 6 times the actual on-demand price. It's also not isolated. If you look at zone B again for small instances, there is a similar spike that occurs frequently, going to 4x the on-demand pricing. Does anyone know why this happens? Here are a few suggestions:

    - Someone entered $1.2 instead of $0.12 (I would discount this since it happened 20 times over the space of 3 weeks).
    - Amazon regularly artificially inflates their prices by bidding on their own instances to get the most bang for their buck. (I would discount this since it would be ridiculous and bad business.)
    - Some company launched 1000 servers at once and wants to make sure that they all launch. (I would discount this since they would presumably launch them at a price below the minimum on-demand price. Why would you pay above on demand for a single server?)
    - It's a bug in their reporting?

    Read the article

  • Verification of files on backup media - after the backup

    - by Greg Sansom
    I have a system in place where I back up my work files daily to a portable hard drive. I actually have two portable hard drives - one is stored off-site and I swap them regularly. I also keep my family photos and other historical files backed up, but I only back the photos up occasionally (ie when I have new photos). The backup media is for backup only, and it is unlikely I will ever read the files from the backup media unless a disaster occurs and I lose the master. It worries me that my backed up files could become corrupt without me knowing it. It is also feasible that my master files could become corrupt, and eventually the corrupt files would be replicated to the backup media. I'm currently using Cobian Backup, but I'm open to alternatives. Is there a tool I can use to confirm that the backed up files are identical to the files that were first copied? I know it would be possible to generate a checksum and periodically validate the backup files against the original checksum, but I'm looking for a tool which will do this automatically.
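
    As a sketch of the checksum approach mentioned above (the paths are made up, and this is an illustration rather than a feature of Cobian Backup), hashing a file with SHA-256 and comparing the master against the backup copy could look like this in Java:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.DigestInputStream;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.HexFormat;

    public class BackupVerifier {

        // Computes the SHA-256 digest of a file as a hex string.
        static String sha256(Path file) throws IOException, NoSuchAlgorithmException {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            try (InputStream in = new DigestInputStream(Files.newInputStream(file), digest)) {
                byte[] buffer = new byte[8192];
                while (in.read(buffer) != -1) {
                    // Reading through the DigestInputStream updates the digest as a side effect.
                }
            }
            return HexFormat.of().formatHex(digest.digest());
        }

        public static void main(String[] args) throws Exception {
            Path master = Path.of("C:/Work/report.docx");           // illustrative paths
            Path backup = Path.of("E:/Backup/Work/report.docx");
            boolean identical = sha256(master).equals(sha256(backup));
            System.out.println(identical ? "OK" : "MISMATCH: " + backup);
        }
    }

    Recording the hashes at backup time, while the master is known to be good, also gives a baseline for detecting later silent corruption of the master itself, not just of the backup copy.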

    Read the article

  • Tracking Security Vulnerability remediation

    - by Zypher
    I've been looking into this for a little while, but havn't really found anything suitable. What I am looking for is a system to track security vulnerability remdiation status. Something like "bugzilla for IT" What I am looking for is something pretty simple that allows the following: batch entry of new vulnerabilities that need to be remediated Per user assignment AD/LDAP Authentiation Simple interface to track progress - research, change control status, remediated, etc. Historical search ability Ability to divide by division Ability to store proof of resolution for the Security Team to access Dependency tracking Linux based is best (that's my group :) ) Free is good, but cost doesn't matter so much if the system is worth it The systems doesn't have to have all of these features, but if it did that would be great. yes we could use our helpdesk software, but that has a bunch of pitfalls such as triggering SLA alerts and penalties as well as not easily searchable outside of a group. Most of what I have found are bug tracking systems that are geared towards developers, and are honstely way overkill for what I am looking for. Server Faults input is greatly appreciated as always!

    Read the article

  • Looking for a host based network monitor solution

    - by Ole Martin Handeland
    Hi all!

    Problem: My hosting company has a network usage graph for my dedicated server. It seems that one day earlier this month, my network usage suddenly spiked, with several hundred megabytes transferred (usually it's in the tens, not hundreds). It was probably me, but I just can't be sure who or what it was.

    Question: So my question is: does anyone know of any host-based solution for monitoring network usage that would tell me the client's IP address and the port/service he/she used?

    What I don't want: I'm just guessing that someone will suggest I use nagios, munin, zabbix, cacti or mrtg - I've also looked at those, but a graph of network usage will not give me the answers I'm looking for. :-)

    Almost there: I've already looked at a lot of monitoring solutions, and I've tried ntop (http://www.ntop.org/), darkstat (http://unix4lyfe.org/darkstat/) and others. Darkstat just didn't give me the answers. Although it listed a lot of statistics, and I could list the clients, it doesn't show me the network usage for a particular period. Ntop is by far the best I've seen so far, but I think it mostly shows current network usage, not the historical part. I could run apt-get upgrade and download a whole bunch of software, but not see it in the log afterwards.

    Read the article

  • OpenOffice Calc: How can I count the number of different items with data pilot?

    - by manu
    Hi all, I have a rather long spreadsheet with historical information about issues solved by some user in a collaborative environment. The spreadsheet has the following (relevant) columns: date, week no., project, author id, etc. The week no. is calculated from the date and is basically the year concatenated with the week number within that year; for instance, both 2009-02-18 and 2009-02-20 yield the week number 200908 - the 8th week of year 2009 - and 2009-02-23 yields 200909 - the 9th week of year 2009. I need to count how many different users (given by author id) contributed to each project, on a weekly basis. I have set up a data pilot with the week as the Row Field, the project as the Column Field, and count-author as the Data Field. However, this counts every author id occurrence as a separate instance. This is not what I need. I need to count how many different users contributed to each project on a weekly basis. I expect to get something like:

    week      Project1   Project2   Project3
    200901    10         2
    200902    2          7

    Each inner cell contains how many different users contributed. With the count-author configuration, what I get is how many contributions (in total) the project received in that week. Is there a way to tell OpenOffice Calc to do what I want?
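
    If the data pilot cannot be coaxed into a distinct count, the computation being asked for boils down to "distinct author ids per (week, project) cell". As a plain-code illustration outside Calc (the row type and sample values are invented), it looks like this in Java:

    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.stream.Collectors;

    public class DistinctContributors {

        // One spreadsheet row: week number, project, author id.
        record Row(String week, String project, String author) {}

        public static void main(String[] args) {
            List<Row> rows = List.of(                       // sample data for illustration
                    new Row("200901", "Project1", "alice"),
                    new Row("200901", "Project1", "bob"),
                    new Row("200901", "Project1", "alice"), // repeat contribution, counted once
                    new Row("200901", "Project3", "carol"),
                    new Row("200902", "Project2", "bob"));

            // week -> project -> set of distinct authors
            Map<String, Map<String, Set<String>>> grouped = rows.stream()
                    .collect(Collectors.groupingBy(Row::week,
                             Collectors.groupingBy(Row::project,
                             Collectors.mapping(Row::author, Collectors.toSet()))));

            grouped.forEach((week, byProject) ->
                    byProject.forEach((project, authors) ->
                            System.out.println(week + " " + project + " " + authors.size())));
        }
    }

    A common spreadsheet workaround for the same problem is a helper column that flags only the first occurrence of each (week, project, author) combination, which the data pilot can then sum instead of counting raw rows.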

    Read the article

  • Looking for issue tracker software for residential property management

    - by Rob
    This question is about a computer software (as per SU guidelines) application for centrally tracking issues concerning the management of a residental block of flats (apartments as they say in the US and France). Issues are incidents - and their resultant unplanned maintenance to address them, also planned one-off maintenance and also regular planned routine maintenance. I live in a block of flats (apartments), and along with other residents, are looking to more closely watch over issues with the communal, shared areas of the premises (corridors, courtyards, stairs, lifts, lights, trash/bin shed, bike stands, parking areas etc) and their maintenance, currently done by a property management company. Our own homes are our own affair internally, its the outside communal areas that I have the interest. The aim being to control costs and possibly reduce them, by proactively managing the property using historical data to predict issues and also to scrutinise maintenance charges against such data to ensure that the costs are as expected. Trending could also be established whereby recurrences of things can be detected and pre-empted to reduce costs. As a software professional, I'm aware of Bugzilla, eventum being free tools for software - which could be customised to fit this application, but wondered if there was something more appropriate. It might be useful for such software to be on a web server, with secure access, so that residents can log in and view the issues.

    Read the article

  • With puppet, can you have the client ask to be a certain set of roles?

    - by Aitch
    I've recently got my puppetmaster and client up and running, and have had the client correctly signed, then requested and applied simple changes - all good. I have a growing number of machines (100). They are not consistently named (for historical reasons). They fall into a handful of categories (think of it like: dataserver_type1, dataserver_type2, webserver_type1, webserver_type2...). New instances of these types of machines are added weekly. I don't understand (yet), or cannot see, how I can declare a "generic" node of (say) "dataserver_type1" that contains whatever modules it needs, and set something in the client puppet.conf that says "I am a dataserver_type1", without using the hostname/FQDN. If I set the node name in the catalog as (say) "my-data-server-type1" - the certified hostname - it picks it up and works. I know you can use patterns for hostnames, but as I said, my server names are not consistent, and I can't change them. It seems cumbersome to have to edit a file and manually add a node for each server when they continue to grow. Edit: Digging deeper, it seems roles may be what I want. But there still seems to be an element whereby the master has to contain a list of roles that a specific named server should perform. Perhaps what I am asking is: how can a client say "I want to be this role", without the server having to be updated?

    Read the article

  • Will Windows repair my multi-boot when I format the 1st physical partition with boot sector?

    - by user2353806
    Due to historical reasons, I got a laptop with Vista, Windows 7 and Windows Server 2008 R2 partitions (booting from an external drive wasn't that viable). Nothing (Windows Repair, bootrec /whateveroption) worked when I restored only the Windows 7 and WS2k8 partitions with Acronis TrueImage. Don't ask me what idiotic error messages I went through during the repair attempts (wrong Windows version, ...). So I grudgingly restored all three - with the little additional excursion that I thought changing the active partition to the Windows 7 partition would move the boot sector and let me format the Vista partition... Oh no. Seems too logical for MS. (Dunno what I changed, but today it will let me format!) So the real question is: Will formatting the Vista partition trash things again beyond comprehension, or will Windows Repair bring back the boot record and remove Vista from the boot options? Or should I just erase all the files to avoid trashing the boot? Where will the boot record be (after repair) when I format the Vista partition - on the 1st or 2nd partition? And if I get drunk and install Windows 8.1 on the 1st, will anything work? ;-) Thanks

    Read the article

  • php-fpm start error

    - by Sujay
    I am using php-fpm. I recently recompiled PHP to include the IMAP functions, but on php-fpm start it gives the following error:

    Starting php_fpm Error in argument 1, char 1: no argument for option -
    Usage: php-cgi [-q] [-h] [-s] [-v] [-i] [-f <file>]
           php-cgi <file> [args...]
      -a               Run interactively
      -C               Do not chdir to the script's directory
      -c <path>|<file> Look for php.ini file in this directory
      -n               No php.ini file will be used
      -d foo[=bar]     Define INI entry foo with value 'bar'
      -e               Generate extended information for debugger/profiler
      -f <file>        Parse <file>. Implies `-q'
      -h               This help
      -i               PHP information
      -l               Syntax check only (lint)
      -m               Show compiled in modules
      -q               Quiet-mode. Suppress HTTP Header output.
      -s               Display colour syntax highlighted source.
      -v               Version number
      -w               Display source with stripped comments and whitespace.
      -z <file>        Load Zend extension <file>
    ................................... failed

    What could be the problem? Is it in php-fpm.conf or php.ini?

    Read the article

  • Nexenta/OpenSolaris filer kernel panic/crash

    - by ewwhite
    I've an x4540 Sun storage server running NexentaStor Enterprise. It's serving NFS over 10GbE CX4 for several VMware vSphere hosts. There are 30 virtual machines running. For the past few weeks, I've had random crashes spaced 10-14 days apart. This system used to run OpenSolaris and was stable in that arrangement. The crashes trigger the automated system recovery feature on the hardware, forcing a hard system reset. Here's the output from the mdb debugger:

    panic[cpu5]/thread=ffffff003fefbc60: Deadlock: cycle in blocking chain

    ffffff003fefb570 genunix:turnstile_block+795 ()
    ffffff003fefb5d0 unix:mutex_vector_enter+261 ()
    ffffff003fefb630 zfs:dbuf_find+5d ()
    ffffff003fefb6c0 zfs:dbuf_hold_impl+59 ()
    ffffff003fefb700 zfs:dbuf_hold+2e ()
    ffffff003fefb780 zfs:dmu_buf_hold+8e ()
    ffffff003fefb820 zfs:zap_lockdir+6d ()
    ffffff003fefb8b0 zfs:zap_update+5b ()
    ffffff003fefb930 zfs:zap_increment+9b ()
    ffffff003fefb9b0 zfs:zap_increment_int+68 ()
    ffffff003fefba10 zfs:do_userquota_update+8a ()
    ffffff003fefba70 zfs:dmu_objset_do_userquota_updates+de ()
    ffffff003fefbaf0 zfs:dsl_pool_sync+112 ()
    ffffff003fefbba0 zfs:spa_sync+37b ()
    ffffff003fefbc40 zfs:txg_sync_thread+247 ()
    ffffff003fefbc50 unix:thread_start+8 ()

    Any ideas what this means?

    Read the article
