Search Results

Search found 1538 results on 62 pages for 'guidance'.

Page 43/62 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • SQL Saturday and Exploring Data Privacy

    - by Johnm
    I have been highly impressed with the growth of the SQL Saturday phenomenon. It seems that an announcement for a new wonderful event finds its way to my inbox on a daily basis. I have had the opportunity to attend the first of the SQL Saturdays for Tampa, Chicago, Louisville and recently my home town of Indianapolis. It is my hope that there will be many more in my future. This past weekend I had the honor of being selected to speak amid a great line-up of speakers at SQL Saturday #82 in Indianapolis. My session topic/title was "Exploring Data Privacy". Below is a brief synopsis of my session:

    Data Privacy in a Nutshell
      - Definition of data privacy
      - Examples of personally identifiable data
      - Examples of sensitive data

    Laws and Stuff
      - Various examples of laws, regulations and policies that influence the definition of data privacy
      - General rules of thumb that encompass most laws

    Your Data Footprint
      - Who has personal information about you?
      - What are you exchanging data privacy for?
      - The amazing resilience of data
      - The cost of data loss

    Weapons of Mass Protection
      - Data classification
      - Extended properties
      - Database object schemas
      - An extraordinarily brief introduction to encryption
      - The amazing data professional  <- the most important point of the entire session!

    The subject of data privacy is quickly making its way to the forefront of many data professionals' minds. Somewhere out there someone is storing personally identifiable and other sensitive data about you. In some cases it is kept reasonably secure. In other cases it is kept in total exposure, without consideration of its potential to damage you. Who has access to it and how is it being used? Are we being unnecessarily required to supply sensitive data in exchange for products and services? These are just a few of the questions on everyone's mind. As data loss events of grand scale hit the headlines in ever more frequent succession, the level of frustration and urgency for a solution increases. I assembled this session with the intent to raise awareness of sensitive data and to remind us all that we, data professionals, are the ones who have the greatest impact and influence on how sensitive data is regarded and protected. Mahatma Gandhi once said, "Be the change you want to see in the world." That is guidance I kept near to my heart as I approached this topic of data privacy.

    Read the article

  • WiX, MSDeploy and an appealing configuration/deployment paradigm

    - by alexhildyard
    I do a lot of application and server configuration; I've done this for many years and have tended to view the complexity of this strictly in terms of the complexity of the ultimate configuration to be deployed. For example, specific APIs aside, I would tend to regard installing a server certificate as a more complex activity than, say, copying a file or adding a Registry entry. My prejudice revolved around the idea of a sequential deployment script that not only had the explicit prescription to apply a specific server configuration, but also made the implicit presumption that the server in question was in a good known state. Scripts like this fail for hundreds of reasons -- the Default Website didn't exist; the application had already been deployed; the application had already been partially deployed and failed to roll back fully, and so on. And so the problem is that the more complex the configuration activity, the more scope for error in any individual part of that activity, and therefore the greater the chance the server in question will not end up at exactly the desired configuration level. Recently I was introduced to a completely different mindset, which, for want of a better turn of phrase, I will call the "make it so" mindset. It's extremely simple both to explain and to implement. In place of the head-down, imperative script you used to use, you substitute a set of checks -- much like exception handlers -- around each configuration activity, starting with a check of the current system state. Thus the configuration logic becomes: "IF these services aren't started then start them, and IF XYZ website doesn't exist then create it, and IF these shares don't exist then create them, and IF these shares aren't permissioned in some particular way, then permission them so." This works. Really well, in my experience.

    Scenario 1: You want to get a system into a good known state; it's already in a good known state; you quickly realise there is nothing to do.
    Scenario 2: You want to get the system into a good known state; your script is flawed or the system is bust; it cannot be put into that state. You know exactly where (at least part of) the problem is, and why.
    Scenario 3: You want to get the system into a good known state; people are fiddling around with the system just now. That's fine. You do what you can, and later you come back and try it again.
    Scenario 4: No one wants to deploy anything; they want you to prove that the previous deployment was successful. So you re-run the deployment script with the "-WhatIf" flag. It reports that there was nothing to change. There's your proof.

    I mentioned two technologies in the title -- WiX (which produces MSIs) and MSDeploy. I am thinking specifically of the conversation that took place here. Having worked with both technologies, I think Rob Mensching's response is appropriately nuanced, and in essence the difference is this: sometimes your target is either to achieve a specific new server state, or to roll back to a known good one. Then again, your target may be to configure what you can, and to understand what you can't. Implicitly, MSDeploy's "rollback" is simply to redeploy the previous version, whereas a well-crafted MSI will actively put your system into that state without further intervention. Either way, if all goes well an MSI will leave you with a system in one of two states, whereas MSDeploy could leave your system in one of many states.
    The key is that MSDeploy and MSI are complementary technologies; which suits you best depends as much on operational guidance as on your configuration remit. What I wanted to say was that I have always been for atomic, transactional configuration, but having worked with the "make it so" paradigm, I have been favourably impressed by the actual results. I'm tempted to put a more technical post up on this in due course.
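    To make the "make it so" checks concrete, here is a minimal C# sketch of one such check (the service name and the "-WhatIf" handling are illustrative assumptions, not production code):

        using System;
        using System.ServiceProcess;

        static void EnsureServiceRunning(string serviceName, bool whatIf)
        {
            // "Make it so": inspect the current state first, act only if needed.
            using (var sc = new ServiceController(serviceName))
            {
                if (sc.Status == ServiceControllerStatus.Running)
                    return; // already in the good known state -- nothing to do

                if (whatIf)
                    Console.WriteLine("Would start service: " + serviceName);
                else
                {
                    sc.Start();
                    sc.WaitForStatus(ServiceControllerStatus.Running);
                }
            }
        }

    Run with whatIf = true, the same check doubles as the "prove the previous deployment" report from Scenario 4.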

    Read the article

  • Making it GREAT! Oracle Partners Building Apps Workshop with UX and ADF in UK

    - by ultan o'broin
    Yes, making is what it's all about. This time, Oracle Partners in the UK were making great-looking usable apps with the Oracle Applications Development Framework (ADF) and user experience (UX) toolkit. And what an energy-packed and productive event at the Oracle UK, Thames Valley Park, location it was. Partners learned the fundamentals of enterprise applications UX, why it's important, all about visual design, how to wireframe designs, and then how to build their already-proven designs in ADF. There was a whole day on mobile apps, learning about mobile design principles, free mobile UX and ADF resources from Oracle, and then trying it out. The workshop wrapped up with the latest Release 7 simplified UIs, Mobilytics, and other innovations from Oracle, and a live demo of a very neat ADF Mobile Android app built by an Oracle contractor. And what a fun two days both Grant Ronald of ADF and myself had in running the workshop with such a great audience, too! I particularly enjoyed the interaction in the wireframing and visual design sessions, and seeing some outstanding work done by partners. Of note from the UK workshop were innovative design features not seen before; they made me all the happier that developers were bringing their own ideas from the consumer IT world of mobility, simplicity, and social to the world of work apps in a smart way, within an enterprise methodology too.

    Partner wireframe exercise. Applying mobile design principles and UX design patterns means you're already productively making great usable apps! Next, over to Oracle ADF Mobile with it!

    One simple example from the design of a mobile field service app: participants immediately saw how the UX and device functionality of the super UK-based Hailo app could influence their designs (the London cabbie influence, maybe?), as well as how the maps, cameras, barcode scanners and microphones we all use on our phones could be put to work. And, of course, ADF Mobile has the device integration solutions there too! I wonder will U.S. workshops in Silicon Valley see an Uber UX influence (LOL)! That we also had partners experienced with Oracle Forms who could now offer a roadmap from Forms to Simplified UI and Mobile using ADF, and do it through the cloud, really made this particular workshop go "ZING!" for me. Many thanks to the Oracle PartnerNetwork (OPN) team for organizing this event with us, and to the representatives of the Oracle Partners that showed and participated so well. That's what I love about this outreach: it's a two-way, solid value-add for all. Interested? Why would partners and developers with ADF skills sign up for this workshop? Here's why: learn to use the Oracle Applications User Experience design patterns as the usability building blocks for applications development in Oracle Application Development Framework. The workshop enables attendees to build modern and visually compelling desktop and mobile applications that look and behave like Oracle Cloud Applications, and that can co-exist with partner integrations and new or existing applications deployments. Partners learn to offer customers and clients more than just coded functionality; instead they can provide a complete user experience, with a roadmap for continued ROI from applications that also creates more business and attracts the kudos and respect of other makers of apps as they're wowed by the results.
    So, if you're a partner and interested in attending one of these workshops and benefitting from such learning, as well as having a platform to show off some of your own work, stay well tuned to your OPN channels, to this blog, to the VoX blog, and to the @usableapps Twitter account too. Can't wait? For developers and partners, here are some key mobile resources to explore now:
      - Oracle ADF Mobile UX Patterns and Components Wiki
      - Oracle ADF Academy (Mobile)
      - Oracle ADF Insider Essentials
      - Oracle Applications Mobile User Experience Design Patterns and Guidance

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs.

    All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked.

    Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array: since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place, and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked.

    In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard, and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer).

    All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advance quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine."

    What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost.
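    (To put the array-bounds example above in concrete C# terms, a trivial illustration:)

        int[] values = new int[2];   // the type is int[]; the length is not part of the type
        int x = values[4];           // compiles cleanly, throws IndexOutOfRangeException at runtime

        // Exception handling restores safety after the fact, which is exactly
        // what a dependent type system would rule out before the fact:
        try { x = values[4]; }
        catch (IndexOutOfRangeException) { /* recover */ }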
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • When will EBS 12.2 be released?

    - by Steven Chan (Oracle Development)
    The most frequently asked question at OpenWorld this year was, "When will EBS 12.2 be released?" Sadly, Oracle's communication policies prohibit us from speculating about release dates for unreleased software. We are not permitted to give estimates, rough timelines, guesses, or anything else that remotely resembles specific guidance on release dates. You can monitor My Oracle Support and this blog for updates on EBS 12.2.  I'll post them here as soon as they're available.  I'm embedding an old favourite from 2007 in its entirety here, since it applies equally to new releases as well as certifications.

    "Loose Lips Sink Ships" (March 20, 2007)

    If I were to sort emails in my inbox into groups, the biggest -- by far -- would be the one for emails that start with, "When will _____ be certified with the E-Business Suite?"  I answer these dutifully but know that my replies can sometimes be maddening, for two reasons: technical uncertainty, and Oracle's rules for such communications.

    On the Spiral Model of Certifications

    Technology stack certifications tend to be highly iterative in nature.  As a result, statements about certification dates tend to be accurate only when made in hindsight.  Laypeople are horrified to hear this, but it's the ugly truth.  Uncertainty is simply inherent to the process.  I've become inured to it over the years, but it might come as a surprise to you that it can take many cycles to get fully-released software to work together.  Take this scenario:

    1. We test a particular combination of Components A and B.
    2. If we encounter a problem, say, with Component A, we log a bug.
    3. We receive a new version of Component A.
    4. The process iterates again.

    The reality is this: until a certification is completed and released, there's no accurate way of telling how many iterations are yet to come.  This is true regardless of the number of iterations that have already been completed.

    Our Lips Are Sealed

    Generally, people understand that things are subject to change, so the second reason I can't say anything specific is actually much more important than the first.  "Loose lips might sink ships" was coined in World War II in an effort to remind people that careless talk can have serious consequences.  Curiously, this applies to Oracle's communications about upcoming features, configurations, and releases, too.  As a publicly traded company, we have very strict policies that prohibit us from linking specific releases to specific dates.  If you've ever listened to an earnings call with analysts, you'll often hear them asking, "Can you add a little more color to that statement?"  For certifications, color is usually the only thing that I have.  Sometimes I can provide a bit more information about the technical nature of the certification in question, such as expected footprints or version levels.  I can occasionally share technical issues that we've found, too, to convey the degree of risk or complexity involved in the certification.  Aside from that, there's little additional information about specific dates, date ranges, or even speculation about dates that I can provide... that is, without having one of those uncomfortable conversations with Oracle Legal.
    So, as much as it pains me to do so, when it comes to dates, I'm always forced to conclude with a generic reply that blandly states one of the following:

      - We're working on that certification right now
      - That certification is in the pipeline but hasn't been started yet
      - We don't have plans for that certification

    Don't Shoot the Messenger

    Thankfully, I've developed a thick skin over the years -- which is a good thing, considering the colorful and energetic responses I've received after answering these questions.  However, on behalf of my Oracle colleagues who are faced with these questions every day in the field, I urge you to remember that they're required to follow these same corporate rules about date disclosures.  It never hurts to ask, but don't be too disappointed if we can't provide you with a detailed answer.  The Go-Go's had it right, after all.

    Related Articles
      - Webcast Replay Available: Technical Preview of EBS 12.2 Online Patching

    Read the article

  • Getting Started with Cloud Computing

    - by juanlarios
    You’ve likely heard that Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you emailed me asking for more info on what Cloud Computing is, including the distinction between "Public Cloud" and "Private Cloud". I want to address these questions and help you get started. Let's begin with a brief set of definitions and some places to find more info; however, an excellent place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy.

    Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else and not you. In other words, you do not buy the hardware or software to run your email or other services being used in your organization – that is done by someone else. Users simply connect to these services from their computers and you pay a monthly subscription fee for each user that is taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others.

    Private Cloud computing generally means that the hardware and software to run services used by your organization is run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Generally, Private Cloud implementations today are found in larger organizations, but they are also viable for small and medium-sized businesses, since they generally allow an automation of services and a reduction in IT workloads when properly implemented. Having the right management tools, like System Center 2012, to implement and operate a Private Cloud is important in order to be successful.

    So – how do you get started? The first step is to determine what makes the most sense for your organization. The nice thing is that you do not need to pick Public or Private Cloud – you can use elements of both where it makes sense for your business – the choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial of each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure. For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy, then download and install the Microsoft Private Cloud technologies, including Windows Server 2008 R2 Hyper-V and System Center 2012, in your own environment and take them for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering, such as the IT Virtualization Boot Camps, to get you started with these technologies hands-on.

    Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you what you need. Twice a year, through something we call "The Global Relationship Study", they reach out and contact you to see how they're doing and what Microsoft could do better. If you get an email from "Microsoft Feedback" with the subject line "Help Microsoft Focus on Customers and Partners" between March 5th and April 13th, please take a little time to tell them what you think.

    Cloud Computing Resources:
      - Microsoft Server and Cloud Computing site – information on Microsoft's overall cloud strategy and products.
      - Microsoft Virtual Academy – free online training to help improve your IT skill set.
      - Office 365 Trial/Info page – get more information or try it out for yourself.
      - Office 365 Videos – see how businesses like yours have used Office 365 to transition to the cloud.
      - Windows Intune Trial/Info – get more information or try it out for yourself.
      - Microsoft Dynamics CRM Online page – information on trying and licensing Microsoft Dynamics CRM Online.

    Additional Resources You May Find Useful:
      - Springboard Series – your destination for technical resources, free tools and expert guidance to ease the deployment and management of your Windows-based client infrastructure.
      - TechNet Evaluation Center – try some of our latest Microsoft products for free, like System Center 2012 pre-release products, and evaluate them before you buy.
      - AlignIT Manager Tech Talk Series – a monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate real-time or watch the on-demand recording.
      - Tech·Days Online – discover what's next in technology and innovation with Tech·Days session recordings, hands-on labs and Tech·Days TV.

    Read the article

  • Auto-organized / smart inventory system?

    - by VeXe
    For the past week I've been working on an inventory system with Unity3D. At first I got help from the guys at Design3, but it wasn't too long till we split paths, because I really didn't like the way they did their code; it didn't have any smell of OOP whatsoever. I took it a few steps further - items take more than one slot, an advanced placement system (items try their best to find the best close fit), a local mouse system (the mouse gets trapped in the active bag area), etc. Here's a demo of my work. What we would like to have in our game is an auto-organizing feature - not auto-sort. We want this feature because our inventory is going to be in 'real-time' - not like in Resident Evil 1, 2, 3, etc., where you would pause the game and do things in your inventory. Now imagine yourself in a sticky situation, surrounded by zombies, and you don't have bullets. You look around and see that there are bullets nearby on the ground, so you go for them and try to pick them up, but they don't fit! You look at your inventory and find out that if you reorganize some of the items, they will fit! Now the player in that situation doesn't have time to reorganize, because he's surrounded by zombies and will die if he stops and organizes the inventory to make space (remember: inventory in real-time, no pausing) - wouldn't it be nice for that to happen automatically? Yes! (I believe this has been implemented in some games like Dungeon Siege or something, so it's surely doable.) Take a look at this picture for example: yes, if you auto-sort you will get your spaces back, but it's bad because: 1- Expensive: it doesn't need a whole sort operation to free those spaces; in the first picture, just slide the red item at the bottom to the very left, and you get the same spaces that you got from the auto-sort. 2- It's annoying to the player: "Who the F told you to re-order my stuff?" I'm not asking for "how to write the code" for this, I'm just asking for some guidance: where to look, what algorithms are involved? Is this something related to graphs and shortest-path stuff? I hope not, cuz I didn't manage to continue my college studies :/ But even if it is, just tell me and I will learn the stuff related. Notice there could be more than just one solution. So I guess the first thing I have to do is figure out if the situation is 'solvable' - if I know how to determine whether a situation is solvable or not, then I can 'solve' it. I just need to know the conditions that make it 'solvable'. And I believe there must be some algorithm/data structure for this. (See the rough sketch below for the kind of feasibility check I can already do.) Here's a pic for more than one solution of trying to fit a 1x3 item: the arrows show just one of the solutions, but if you look you will find more than one. This is what I ultimately want: not auto-sorting, but finding a solution and applying it. Note that if I spend time on it I will come up with a way to solve it, but it wouldn't be the best way; it's like holding a car wheel with your feet instead of your hands! XD Or just like trying to solve an issue that requires arrays, but you're not yet aware of their existence! So what is the right approach to this? Hope somebody helps, thanks a lot in advance :)
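    Here's that rough sketch in C# (not tested; the grid is a simple occupancy map where true = slot taken):

        static bool CanPlace(bool[,] grid, int itemW, int itemH)
        {
            int cols = grid.GetLength(0), rows = grid.GetLength(1);
            for (int x = 0; x <= cols - itemW; x++)
                for (int y = 0; y <= rows - itemH; y++)
                {
                    bool fits = true;
                    for (int i = 0; i < itemW && fits; i++)
                        for (int j = 0; j < itemH && fits; j++)
                            if (grid[x + i, y + j]) fits = false;
                    if (fits) return true; // found a free itemW x itemH rectangle
                }
            return false; // no free rectangle as-is
        }

    The hard part - the part I need guidance on - is what to do when this returns false: searching for a sequence of item moves that makes it return true.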

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
    Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are:

      - Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year in the financial close, filing, and reporting processes.
      - Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations.
      - Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results.
      - Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process.
      - Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis–related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility.
      - Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value.

    The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which—in extreme circumstances—can impact a company’s corporate image and share value. The good news is that businesses realize that these problems persist, and 86 percent of companies are likely to make a significant investment during the next five years to address these issues.
    While they should invest, it is critical that they direct investments correctly to address the key issues this research identified:

      - Improving data integrity
      - Optimizing processes
      - Integrating the extended financial close process

    By addressing these issues, and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations.

    To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92
    To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • Ado.net dataservices BeginExecuteBatch call works on development fails on production server with Obj

    - by Mike Morley
    We have an ADO.NET Data Services 1.0 call that is being passed to a [WebGet] service operation as a batch through BeginExecuteBatch. Everything works perfectly on our development server - we have the project configured to use IIS instead of the Cassini web server to make it as close to our production server as we can. When we publish to the production server, all the service operations work perfectly except the batch call, which fails with "Object does not match target type." I have not been able to find any cause for this. I can even run a single non-batch style GET operation against the [WebGet] service by copying the URL used in the batch and pasting it in a browser. I have not been able to find any information to help me solve this - any guidance would be most appreciated. Thanks, Mike M.

    Error message from Fiddler:

        HTTP/1.1 500 Internal Server Error
        Content-Type: application/xml
        DataServiceVersion: 1.0;

        An error occurred while processing this request.
        Object does not match target type.
        System.Reflection.TargetException
           at System.Reflection.RuntimeMethodInfo.CheckConsistency(Object target)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
           at System.Data.Services.RequestUriProcessor.CreateFirstSegment(IDataService service, String identifier, Boolean checkRights, String queryPortion, Boolean& crossReferencingUrl)
           at System.Data.Services.RequestUriProcessor.CreateSegments(String[] segments, IDataService service)
           at System.Data.Services.RequestUriProcessor.ProcessRequestUri(Uri absoluteRequestUri, IDataService service)
           at System.Data.Services.DataService`1.BatchDataService.HandleBatchContent(Stream responseStream)

    Read the article

  • Advice on database design / SQL for retrieving data with chronological order

    - by Remnant
    I am creating a database that will help keep track of which employees have been on a certain training course. I would like to get some guidance on the best way to design the database. Specifically, each employee must attend the training course each year, and my database needs to keep a history of all the dates on which they have attended the course in the past. The end user will use the software as a planning tool to help them book future course dates for employees. When they select a given employee they will see: (a) last attendance date, and (b) projected future attendance date (i.e. last attendance date + 1 calendar year). In terms of my database, any given employee may have multiple past course attendance dates:

        EmpName      AttandanceDate
        Joe Bloggs   1st Jan 2007
        Joe Bloggs   4th Jan 2008
        Joe Bloggs   3rd Jan 2009
        Joe Bloggs   8th Jan 2010

    My question is: what is the best way to set up the database to make it easy to retrieve the most recent course attendance date? In the example above, the most recent would be 8th Jan 2010. Is there a good way to use SQL to sort by date and pick the MAX date? (See the sketch below for the kind of query I mean.) My other idea was to add a column called 'MostRecent' and just set this to TRUE:

        EmpName      AttandanceDate   MostRecent
        Joe Bloggs   1st Jan 2007     False
        Joe Bloggs   4th Jan 2008     False
        Joe Bloggs   3rd Jan 2009     False
        Joe Bloggs   8th Jan 2010     True

    I wondered if this would simplify the SQL, i.e. SELECT ... WHERE EmpName = 'Joe Bloggs' AND MostRecent = TRUE. Also, when the user updates a given employee's attendance record (i.e. with the latest attendance date) I could use SQL to: search for the employee and set the MostRecent value to FALSE, then add a new record with MostRecent set to TRUE. Would anybody recommend one method over the other? Or do you have a completely different way of solving this problem?
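    Here's the kind of query I mean for the first idea (a sketch only; the table name is made up, and it assumes AttandanceDate is stored as a date type rather than as text, so that MAX compares chronologically):

        -- Most recent attendance per employee, without any MostRecent flag:
        SELECT EmpName, MAX(AttandanceDate) AS LastAttendance
        FROM CourseAttendance
        GROUP BY EmpName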

    Read the article

  • Home automation using Arduino / XMPP client for Arduino

    - by Ashish
    I am trying to set up a system for automating certain tasks in my home. I am thinking of a solution wherein a server-side application would be able to send/receive commands/data to an Arduino (attached to an Arduino Ethernet Shield) via the web. Here the Arduino may act both as a sensor interface to the server application and as a command-executor interface for the server app. E.g. (user story): The overhead water tank in my house has a water level sensor attached to an Arduino (with an Arduino Ethernet Shield). Another Arduino (also with an Ethernet Shield) is attached to a relay/latch. This relay/latch is then connected to a water pump. Now the server-side application on the web is able to receive water level information from the Arduino on the water tank. Depending on the water level information received, the web application should send suitable signals/commands to the Arduino on the water pump to switch the pump 'ON' or 'OFF'. Now for such a system to work across the web, I am thinking of using one of these types of solutions, in order of my priority:

      1. Using XMPP for communication between the server application and the Arduino.
      2. Using HTTP polling.
      3. Using HTTP hanging GET (long polling).

    For solution number 1, I need to implement an XMPP client that would reside on the Arduino. Is it possible to write an XMPP client small enough to reside on an Arduino? If yes, what is the minimum possible XMPP client functionality that I need to write for the Arduino, so that it would be able to contact XMPP servers/solutions like GTalk, etc.? For solutions number 2 and 3, I need guidance in implementation (a rough sketch of the server-side decision logic for option 2 follows below). Also, which solution would be cost-effective and easily extendable?
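    Here's that rough sketch in C# (the thresholds and the ON/OFF reply protocol are assumptions of mine, not a finished design):

        // The tank Arduino reports its level; the pump Arduino polls for "ON"/"OFF".
        public class PumpController
        {
            private const int LowMark = 20;   // percent full; illustrative thresholds
            private const int HighMark = 90;
            private bool _pumpOn;

            public string OnLevelReported(int levelPercent)
            {
                if (levelPercent <= LowMark)  _pumpOn = true;   // tank nearly empty: start pumping
                if (levelPercent >= HighMark) _pumpOn = false;  // tank nearly full: stop pumping
                return _pumpOn ? "ON" : "OFF";
            }
        }

    The two thresholds give the relay some hysteresis, so the pump doesn't chatter on and off around a single trigger level.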

    Read the article

  • Creating a Custom EventAggregator Class

    - by Phil
    One thing I noticed about Microsoft's Composite Application Guidance is that the EventAggregator class is a little inflexible. I say that because getting a particular event from the EventAggregator involves identifying the event by its type, like so:

        _eventAggregator.GetEvent<MyEventType>();

    But what if you want different events of the same type? For example, if a developer wants to add a new event to his application of type CompositePresentationEvent<int>, he would have to create a new class that derives from CompositePresentationEvent<int> in a shared library somewhere, just to keep it separate from any other events of the same type. In a large application, that's a lot of little two-line classes like the following:

        public class StuffHappenedEvent : CompositePresentationEvent<int> {}
        public class OtherStuffHappenedEvent : CompositePresentationEvent<int> {}

    I don't really like that approach. It almost feels dirty to me, partially because I don't want a million two-line event classes sitting around in my infrastructure dll. What if I designed my own simple event aggregator that identified events by an event ID rather than the event type? For example, I could have an enum such as the following:

        public enum EventId { StuffHappened, OtherStuffHappened, YetMoreStuffHappened }

    And my new event aggregator class could use the EventId enum (or a more general object) as a key to identify events in the following way:

        _eventAggregator.GetEvent<CompositePresentationEvent<int>>(EventId.StuffHappened);
        _eventAggregator.GetEvent<CompositePresentationEvent<int>>(EventId.OtherStuffHappened);

    Is this good design for the long run? One thing I noticed is that this reduces type safety. In a large application, is this really as important a concern as I think it is? Do you think there could be a better alternative design?
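    Concretely, the sort of thing I have in mind looks like this (a rough sketch using the EventId enum above; I've used a small event class of my own rather than Prism's CompositePresentationEvent, just to keep the sketch self-contained):

        using System;
        using System.Collections.Generic;

        public class PubSubEvent<TPayload>
        {
            private event Action<TPayload> _handlers;
            public void Subscribe(Action<TPayload> handler) { _handlers += handler; }
            public void Publish(TPayload payload)
            {
                var handlers = _handlers;
                if (handlers != null) handlers(payload);
            }
        }

        public class KeyedEventAggregator
        {
            private readonly Dictionary<EventId, object> _events = new Dictionary<EventId, object>();

            public PubSubEvent<TPayload> GetEvent<TPayload>(EventId id)
            {
                object evt;
                if (!_events.TryGetValue(id, out evt))
                {
                    evt = new PubSubEvent<TPayload>();
                    _events[id] = evt;
                }
                // The cast is exactly where the type safety leaks: reusing an id
                // with a different TPayload fails here at runtime, not at compile time.
                return (PubSubEvent<TPayload>)evt;
            }
        }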

    Read the article

  • Microsoft JScript runtime error: Sys.InvalidOperationException: Two components with the same id.

    - by Irwin
    I'm working in ASP.NET Dynamic Data. In one of my edit controls I wanted to allow the user to add records from a related table to the current page. (Literally: if you are on the orders page, you are allowed to add a new customer to the system on this page as well, and then associate it with that order.) So I have a DetailsView set to InsertMode, nested inside of an UpdatePanel, which is shown by a ModalPopupExtender that is invoked when 'add new' is clicked. This doohickey works the first time I execute this process; that is, a customer is added (and I update the dropdown list as well). However, I realized it didn't work (properly) again until I refreshed the entire page. When I attached my debugger, my worst fears were realized (ok, not really), but an exception was being thrown: "Microsoft JScript runtime error: Sys.InvalidOperationException: Two components with the same id." It seemed to be complaining about a CalendarExtender control that is part of the DetailsView. Any guidance on what's going on here would be great. Thanks.

    Read the article

  • Notification Email Best Practices--From Server Setup to Programming

    - by Andrew Wagner
    All, I'm in the process of building a SaaS tool that allows network admins to generate notification emails to the end users of our platform (among many, many other things). I'm running into a bit of an "out of my expertise" wall, as I know there are a lot of variables involved in configuring an application that can:

      - run in a distributed way via load balancing and still
      - leverage a single mail server for sending notification emails,
      - process unsubscribe requests, and
      - avoid any ISP blacklisting in the process.

    If anyone has the time and has done this before, I'd love if you could walk me through the A-Z of best practices, both from a configuration perspective and an execution perspective, for generating these emails (anything from necessary DNS settings to ideal SMTP setup and configuration). Currently, our application generates email via Google Apps using the PHPMailer class. While this works well, it doesn't queue messages (a potential for timeout problems if any of our clients amass a very large list of end users), and Google limits the number of generated email messages to 500/day. I know this is a lofty question, but any guidance you could provide would be smashing and a big help as we work through this hurdle in our beta development stage. Thanks!

    Read the article

  • Generate JSON object with transactionReceipt

    - by Carlos
    Hi, I've spent the past days trying to test my first in-app purchase iPhone application. Unfortunately I can't find the way to talk to the iTunes server to verify the transactionReceipt. Because it's my first try with this technology, I chose to verify the receipt directly from the iPhone instead of using server support. But after trying to send the POST request with a JSON object created using the JSON API from Google Code, iTunes always returns a strange response (instead of the "status = 0" string I wait for). Here's the code that I use to verify the receipt:

        - (void)recordTransaction:(SKPaymentTransaction *)transaction {
            NSString *receiptStr = [[NSString alloc] initWithData:transaction.transactionReceipt
                                                         encoding:NSUTF8StringEncoding];
            NSDictionary *jsonDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                               @"algo mas", @"receipt-data", nil];
            NSString *jsonString = [jsonDictionary JSONRepresentation];
            NSLog(@"string to send: %@", jsonString);
            NSLog(@"JSON Created");
            urlData = [[NSMutableData data] retain];
            //NSURL *sandboxStoreURL = [[NSURL alloc] initWithString:@"https://sandbox.itunes.apple.com/verifyReceipt"];
            NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
                [NSURL URLWithString:@"https://sandbox.itunes.apple.com/verifyReceipt"]];
            [request setHTTPMethod:@"POST"];
            [request setHTTPBody:[jsonString dataUsingEncoding:NSUTF8StringEncoding]];
            NSLog(@"will create connection");
            [[NSURLConnection alloc] initWithRequest:request delegate:self];
        }

    Maybe I'm forgetting something in the request's headers, but I think the problem is in the method I use to create the JSON object. Here's what the JSON object looks like before I add it to the HTTPBody:

        string to send: {"receipt-data":"{\n\t\"signature\" = \"AUYMbhY ........... D0gIjEuMCI7Cn0=\";\n\t\"pod\" = \"100\";\n\t\"signing-status\" = \"0\";\n}"}

    The complete response I've got:

        {
            exception = "java.lang.IllegalArgumentException: Property list parsing failed while attempting to read unquoted string. No allowable characters were found. At line number: 1, column: 0.";
            status = 21002;
        }

    Thanks a lot for your guidance.

    Read the article

  • How to best transfer large payloads of data using wsHttp with WCF with message security

    - by jpierson
    I have a case where I need to transfer large amounts of serialized object graphs (via NetDataContractSerializer) using WCF over wsHttp. I'm using message security and would like to continue to do so. Using this setup I would like to transfer a serialized object graph which can sometimes approach 300MB or so, but when I try to do so I've started seeing an exception of type System.InsufficientMemoryException appear. After a little research, it appears that by default in WCF the result of a service call is contained within a single message, which contains the serialized data, and this data is buffered by default on the server until the whole message is completely written. Thus the memory exception is being caused by the fact that the server is running out of the memory resources it is allowed to allocate, because that buffer is full. The two main recommendations that I've come across are to use streaming or chunking to solve this problem; however, it is not clear to me what that involves and whether either solution is possible with my current setup (wsHttp/NetDataContractSerializer/message security). So far I understand that with streaming, message security would not work, because message encryption and decryption need to operate on the whole set of data and not a partial message. Chunking, however, sounds like it might be possible, but it is not clear to me how it would be done with the other constraints I've listed. If anybody could offer some guidance on what solutions are available and how to go about implementing them, I would greatly appreciate it (a rough sketch of what I understand by chunking follows below).

    Related resources:
      - Chunking Channel
      - How to: Enable Streaming
      - Large attachments over WCF
      - Custom Message Encoder
      - Another spotting of InsufficientMemoryException

    I'm also interested in any type of compression that could be done on this data, but it looks like I would probably be best off doing this at the transport level once I can transition to .NET 4.0, so that the client will automatically support the gzip headers, if I understand this properly.
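    Here's that sketch (the contract shape, names, and reassembly protocol are my own guesses, not taken from the WCF chunking sample):

        using System.ServiceModel;

        // Each chunk travels as an ordinary message-secured wsHttp call,
        // so message security still applies per chunk; the server reassembles.
        [ServiceContract]
        public interface IChunkedTransfer
        {
            [OperationContract]
            string BeginTransfer();                                  // returns a transfer id

            [OperationContract]
            void PutChunk(string transferId, int index, byte[] chunk);

            [OperationContract]
            void EndTransfer(string transferId);                     // server deserializes the reassembled data
        }

    The cost, as I understand it, is the bookkeeping on both ends: slicing the NetDataContractSerializer output on the client, and buffering/reassembling it on the server.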

    Read the article

  • Rhino Mocks, Dependency Injection, and Separation of Concerns

    - by whatispunk
    I am new to mocking and dependency injection and need some guidance. My application is using a typical N-tier architecture where the BLL references the DAL, and the UI references the BLL but not the DAL. Pretty straightforward. Let's say, for example, I have the following classes:

        class MyDataAccess : IMyDataAccess {}
        class MyBusinessLogic {}

    Each exists in a separate assembly. I want to mock MyDataAccess in the tests for MyBusinessLogic, so I added a constructor to the MyBusinessLogic class that takes an IMyDataAccess parameter for the dependency injection. But now when I try to create an instance of MyBusinessLogic in the UI layer, it requires a reference to the DAL. I thought I could define a default constructor on MyBusinessLogic to set a default IMyDataAccess implementation, but not only does this seem like a code smell, it didn't actually solve the problem: I'd still have a public constructor with IMyDataAccess in the signature, so the UI layer would still require a reference to the DAL in order to compile. One possible solution I am toying with is to create an internal constructor for MyBusinessLogic with the IMyDataAccess parameter. Then I can use an accessor from the test project to call the constructor. But there's still that smell. (A sketch of the shape I'm aiming for follows below.) What is the common solution here? I must just be doing something wrong. How could I improve the architecture?
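    Here is that sketch (a hand-rolled fake rather than Rhino Mocks, just to keep it self-contained; the IMyDataAccess member is hypothetical):

        public interface IMyDataAccess
        {
            string LoadName(int id);
        }

        public class MyBusinessLogic
        {
            private readonly IMyDataAccess _dataAccess;

            public MyBusinessLogic(IMyDataAccess dataAccess)
            {
                _dataAccess = dataAccess;
            }

            public string Describe(int id)
            {
                return "Name: " + _dataAccess.LoadName(id);
            }
        }

        // In the test assembly -- no DAL reference needed:
        class FakeDataAccess : IMyDataAccess
        {
            public string LoadName(int id) { return "stub"; }
        }
        // var logic = new MyBusinessLogic(new FakeDataAccess());

    The open question is who gets to call that constructor with the real MyDataAccess in production without dragging the DAL reference into the UI.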

    Read the article

  • Reverse engineering and patching a DirectX game?

    - by yodaj007
    Background: I am playing Imperishable Night, one of the Touhou series of games. The shoot button is 'z', moving slower is 'shift', and the arrow keys move. Unfortunately for me, using shift-z ghosts my right arrow key, so I can't move to the right while shooting. This ghosting happens in all applications, and switching keyboards fixes it. Goal: I want to locate, in the disassembled code, the DirectX function that gets the keyboard input and compares it against the 'z' key, and change that key to 'a'. I'm considering this a fun project. Assuming the sizes of the scan codes are the same, this should be fairly simple. And because the executable is only 400k, maybe this will provide a unique opportunity for me to explore the dark side of the computing underworld (kidding). Relevant experience: I have some experience with coding in assembly, but not with disassembling it. I have no experience with the DirectX APIs. Question: I need some guidance. I've found a listing of DirectX keyboard scan codes, and a program called PE Explorer that looks like it will do what I need. Is there a means by which I can turn some of the assembly into something resembling C function calls, so it's more easily read? I will need to locate where the game retrieves the currently pressed keys and compares those against a list, and it's that list I need to modify. Any input would be greatly appreciated.

    Read the article

  • Adding new record to a VFP data table in VB.NET with ADO recordsets

    - by Gerry
    I am trying to add a new record to a Visual FoxPro data table using an ADO recordset, with no luck. The code runs fine with no exceptions, but when I check the DBF after the fact there is no new record. The mDataPath variable shown in the code snippet is the path to the .dbc file for the entire database. A note about the For loop at the bottom: I am adding the body of incoming emails to this MEMO field, so I thought I needed to break the addition of this string into 256-character chunks. Any guidance would be greatly appreciated.

        cnn1.Open("Driver={Microsoft Visual FoxPro Driver};" & _
                  "SourceType=DBC;" & _
                  "SourceDB=" & mDataPath & ";Exclusive=No")

        Dim RS As ADODB.Recordset = New ADODB.Recordset
        RS.Open("select * from gennote", cnn1, 1, 3, 1)
        RS.AddNew()

        'Assign values to the first three fields
        RS.Fields("ignnoteid").Value = NextIDI
        RS.Fields("cnotetitle").Value = "'" & mail.Subject & "'"
        RS.Fields("cfilename").Value = "''"

        'Loop through 254 characters at a time and add the data
        'to the ADO Field buffer
        For i As Integer = 1 To Len(memo) Step liChunkSize
            liStartAt = i
            liWorkString = Mid(mail.Body, liStartAt, liChunkSize)
            RS.Fields("mnote").AppendChunk(liWorkString)
        Next

        'Update the recordset
        RS.Update()
        RS.Requery()
        RS.Close()

    Read the article

  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal hdd with a few files on it that I would very much like to recover:

      - some Word documents
      - some LaTeX documents (text files) and pictures (png, jpg, eps files)
      - some other text documents and Visual Studio project files

    I had backed them up (not the LaTeX ones, though) using svn, but have not committed lately, and would lose a lot of work if I can't recover. The hdd seems to have lost its filesystem; I have no idea how it came about. I know it has/had 3 NTFS partitions, and I know the files I want are on the second or third partition. I read http://superuser.com/questions/81877/recover-hard-disk-data. Partition Find and Mount did not see all the partitions using intelligent scan. TestDisk does (I think); I followed the step-by-step instructions here, but when I try to list the files it says: "Can't open filesystem, filesystem seems damaged." I'm not sure how to proceed, as TestDisk's wiki does not contain this error message as far as I know. I don't know if the hdd is going to fail, or if some program has caused the filesystem to be corrupt; the hdd doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (E.g. is it OK to let TestDisk write the filesystem to disk? I'm pretty sure the partitions are listed OK, but not 100%.)

    Read the article

  • Embedding IronPython in a WinForms app and interrupting execution

    - by namenlos
    BACKGROUND: I've successfully embedded IronPython in my WinForms apps using techniques like the one described here: http://blog.peterlesliemorris.com/archive/2010/05/19/embedding-ironpython-into-a-c-application.aspx. In the context of the embedding, my users may write any loops, etc. I'm using IronPython 2.6 (both the IronPython for .NET 2.0 and the IronPython for .NET 4.0 builds). MY PROBLEM: Sometimes the users will need to interrupt the execution of their code. In other words, they need something like the ability to hit CTRL-C to halt execution, as when running Python or IronPython from the command line. I want to add a button to the WinForm that, when pressed, halts the execution, but I'm not sure how to do this. MY QUESTION: How can I make it so that pressing a "Stop" button actually halts the execution of the user-entered IronPython code? NOTES: I don't want to simply throw away that "session" - I still want the user to be able to interact with the session and access any results that were available before it was halted. I am assuming I will need to execute the code on a separate thread (a sketch of what I'm considering follows below); any guidance or sample code for doing this correctly will be appreciated.
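    Here is that sketch (class and member names are my own; I don't know yet whether Thread.Abort is actually safe here, which is part of the question):

        using System.Threading;
        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        public class ScriptRunner
        {
            private readonly ScriptEngine _engine = Python.CreateEngine();
            private readonly ScriptScope _scope;
            private Thread _worker;

            public ScriptRunner() { _scope = _engine.CreateScope(); }

            public void Run(string code)
            {
                // Run the user's code off the UI thread so the form stays responsive.
                _worker = new Thread(() => _engine.Execute(code, _scope));
                _worker.IsBackground = true;
                _worker.Start();
            }

            public void Stop() // wired to the Stop button
            {
                if (_worker != null && _worker.IsAlive)
                    _worker.Abort(); // crude; _scope should keep any results produced so far
            }
        }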

    Read the article

  • iPhone TableView alternative options for Check Mark Accessory

    - by jimsis
    I'm looking for the best approach to this problem. In my table view I have a list of options from which you can select one and only one. The problem is that the right selection is not obvious without displaying more details on each option. If I use the disclosure indicator or disclosure button for the extra detail, I lose the checkmark functionality. In searching around, I've seen that some have used the cell image as a workaround, and that others have created a custom disclosure button that looks like a checkmark instead of using the standard one. I haven't seen this done, but is it viable (per the HIG) to add a 'more info' button in the cell to launch the next table view? My own thought was to use a disclosure indicator, and then on the second view add a 'select me' button in the navigation bar (where the edit button usually is). I think I could probably manage to code any of the above; I'm just asking for information on what is the best (HIG) way. Example:

        Option 1
        Option 2
        Option 3 (x)
        Option 4

    where x is the checked one. But in order to know which is the best choice, you need to see:

        Option 3 (Header)
        Option 3-a
        Option 3-b
        Option 3-c
        Option 3-d

    where even at this level option 3-c might have additional information. Any guidance you can provide would be appreciated.

    Read the article

  • Boost in Visual Studio 2010, IntelliSense error

    - by Peretz
    Hello, I would like to see if you could orient me. I compiled and referenced the Boost libraries in order to use them with Visual Studio 2010. When building my test project I get these two IntelliSense errors:

      1 IntelliSense: #error directive: "Macro BOOST_LIB_NAME not set (internal error)" c:\boost_1_43_0\boost\config\auto_link.hpp
      2 IntelliSense: #error directive: "some required macros where not defined (internal logic error)." c:\boost_1_43_0\boost\config\auto_link.hpp

    Checking the auto_link.hpp header file, the first error is on this line:

        #ifndef BOOST_LIB_NAME
        #  error "Macro BOOST_LIB_NAME not set (internal error)"
        #endif

    Tracing the definition of BOOST_LIB_NAME, it seems that it is defined in config.hpp by Boost.Regex, whose code I am including below:

        #if !defined(BOOST_REGEX_NO_LIB) && !defined(BOOST_REGEX_SOURCE) && !defined(BOOST_ALL_NO_LIB) && defined(__cplusplus)
        #  define BOOST_LIB_NAME boost_regex
        #  if defined(BOOST_REGEX_DYN_LINK) || defined(BOOST_ALL_DYN_LINK)
        #    define BOOST_DYN_LINK
        ... more code

    Strangely, when I point to BOOST_LIB_NAME, it defines BOOST_LIB_NAME and the IntelliSense errors disappear. My program builds and executes fine using Boost.Regex - with or without the IntelliSense errors; however, I do not understand, first, why these IntelliSense errors appear in the first place, and second, why pointing at the macro in config.hpp defines BOOST_LIB_NAME. Any guidance will be greatly appreciated. Thanks, Jaime

    Read the article

  • How to weed out the bad programmers from the competent ones in the interview process

    - by thaBadDawg
    I am getting ready to add another developer to my team, and I want to try to fix the mistakes I made in my last hiring cycle. I like to think of myself as a competent programmer (I can be given a project, I can deliver on that project, and the deliverables work with very few, if any, bugs), and so I ask questions that I would ask myself in an interview. I've come to the conclusion that my interviewing skills are completely lacking, because the last two people I've hired interviewed incredibly well but have been less than ideal at the tasks they've been given. My CTO (who was completely useless in giving any guidance as to how) suggested I improve my interviewing skills. The question is this: how does one programmer interview another programmer and get an understanding of the other programmer's abilities? Edit: Though slightly different, the answers provided to this question could be of use to you. That question concerns specific interview questions, while yours seems to be more general, about interview approaches and not just the questions themselves. Update: Just for the hell of it, I asked two of the guys I work with if they could do FizzBuzz (see the sketch below for how little it asks). 45 minutes and 80 minutes to work it out. And these aren't bottom-level guys, either.
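    For reference, the entire exercise is just this (any language works; C# shown):

        for (int i = 1; i <= 100; i++)
        {
            if (i % 15 == 0)     Console.WriteLine("FizzBuzz");
            else if (i % 3 == 0) Console.WriteLine("Fizz");
            else if (i % 5 == 0) Console.WriteLine("Buzz");
            else                 Console.WriteLine(i);
        }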

    Read the article
