Search Results

Search found 1309 results on 53 pages for 'solving'.


  • Varnish VCL - Regular Expression Evaluation

    - by Hugues ALARY
    I have been struggling for the past few days with this problem. Basically, I want to send the client browser a cookie of the form foo[sha1oftheurl]=[randomvalue], if and only if that cookie has not already been set. E.g., if a client browser requests "/page.html", the HTTP response will include:

        resp.http.Set-Cookie = "foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=ABCD;"

    Then, if the same client requests "/index.html", the HTTP response will contain the header:

        resp.http.Set-Cookie = "foo14fe4559026d4c5b5eb530ee70300c52d99e70d7=QWERTY;"

    In the end, the client browser will have 2 cookies:

        foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=ABCD
        foo14fe4559026d4c5b5eb530ee70300c52d99e70d7=QWERTY

    Now, that is not complicated in itself. The following code does it:

        import digest;
        import random; ## This vmod does not exist, it's just for the example.

        sub vcl_recv {
            ## Compute the sha1 of the requested URL and store it in req.http.Url-Sha1
            set req.http.Url-Sha1 = digest.hash_sha1(req.url);
            set req.http.random-value = random.get_rand();
        }

        sub vcl_deliver {
            ## Create a cookie on the client browser by adding a "Set-Cookie" header.
            ## The cookie is of the form foo[sha1]=[randomvalue], e.g. for the URL
            ## "/page.html" it will be foo4c9ae249e9e061dd6e30893e03dc10a58cc40ee6=[randomvalue]
            set resp.http.Set-Cookie = {""} + resp.http.Set-Cookie
                + "foo" + req.http.Url-Sha1 + "=" + req.http.random-value;
        }

    However, this code does not take into account the case where the cookie already exists. I need to check that the cookie does not exist before generating a random value. So I thought about this code:

        import digest;
        import random;

        sub vcl_recv {
            ## Compute the sha1 of the requested URL and store it in req.http.Url-Sha1
            set req.http.Url-Sha1 = digest.hash_sha1(req.url);
            set req.http.regex = "foo" + req.http.Url-Sha1;
            if (req.http.Cookie !~ req.http.regex) {
                set req.http.random-value = random.get_rand();
            }
        }

    The problem is that Varnish does not evaluate regular expressions at run time, which leads to this error when I try to compile:

        Message from VCC-compiler:
        Expected CSTR got 'req.http.regex'
        (program line 940), at
        ('input' Line 42 Pos 31)
        if (req.http.Cookie !~ req.http.regex) {
        ------------------------------##############---
        Running VCC-compiler failed, exit 1
        VCL compilation failed

    One could propose solving my problem by matching on the "foo" part of the cookie, or even on "foo[a-fA-F0-9]{40}":

        if (req.http.Cookie !~ "foo[a-fA-F0-9]{40}") {
            set req.http.random-value = random.get_rand();
        }

    But this matches any cookie starting with 'foo' and containing a hexadecimal string of 40 characters. That means that if a client requests "/page.html" first, then "/index.html", the pattern will still match even though the cookie for "/index.html" has not been set, so no new value is generated.

    I found a bug report in which phk (or someone else) states that compiling regular expressions is extremely expensive, which is why they are evaluated at compile time. Considering this, I believe there is no way of achieving what I want the way I've been trying to. Is there any way of solving this problem, other than writing a vmod? Thanks for your help! -Hugues
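    For reference, newer Varnish releases (6.0 and later) grew a runtime alternative: the bundled std vmod's fnmatch, a wildcard matcher whose pattern is an ordinary string and can therefore be built per request. A rough sketch of that approach, assuming such a version and keeping digest/random as the hypothetical vmods from the question:

        import std;
        import digest; ## hypothetical, as in the question
        import random; ## hypothetical, as in the question

        sub vcl_recv {
            set req.http.Url-Sha1 = digest.hash_sha1(req.url);
        }

        sub vcl_deliver {
            ## fnmatch patterns are evaluated at run time, so the per-request
            ## sha1 can be spliced in as a literal substring; no regex involved.
            if (!req.http.Cookie ||
                !std.fnmatch("*foo" + req.http.Url-Sha1 + "=*", req.http.Cookie)) {
                set resp.http.Set-Cookie = "foo" + req.http.Url-Sha1
                    + "=" + random.get_rand();
            }
        }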


  • The Sensemaking Spectrum for Business Analytics: Translating from Data to Business Through Analysis

    - by Joe Lamantia
    One of the most compelling outcomes of our strategic research efforts over the past several years is a growing vocabulary that articulates our cumulative understanding of the deep structure of the domains of discovery and business analytics. Modes are one example of the deep structure we’ve found. After looking at discovery activities across a very wide range of industries, question types, business needs, and problem solving approaches, we've identified distinct and recurring kinds of sensemaking activity, independent of context. We label these activities Modes: explore, compare, and comprehend are three of the nine recognizable modes. Modes describe *how* people go about realizing insights. (Read more about the programmatic research and formal academic grounding and discussion of the modes here: https://www.researchgate.net/publication/235971352_A_Taxonomy_of_Enterprise_Search_and_Discovery)

    By analogy to languages, modes are the 'verbs' of discovery activity. When applied to the practical questions of product strategy and development, the modes of discovery allow one to identify what kinds of analytical activity a product, platform, or solution needs to support across a spread of usage scenarios, and then make concrete and well-informed decisions about every aspect of the solution, from high-level capabilities to which specific types of information visualizations best enable these scenarios for the types of data users will analyze.

    The modes are a powerful generative tool for product making, but if you've spent time with young children, or had a really bad hangover (or both at the same time...), you understand the difficulty of communicating using only verbs. So I'm happy to share that we've found traction on another facet of the deep structure of discovery and business analytics. Continuing the language analogy, we've identified some of the ‘nouns’ in the language of discovery: specifically, the consistently recurring aspects of a business that people are looking for insight into. We call these discovery Subjects, since they identify *what* people focus on during discovery efforts, rather than *how* they go about discovery, as with the Modes.

    Defining the collection of Subjects people repeatedly focus on allows us to understand and articulate sensemaking needs and activity in a more specific, consistent, and complete fashion. In combination with the Modes, we can use Subjects to concretely identify and define scenarios that describe people’s analytical needs and goals. For example, a scenario such as ‘Explore [a Mode] the attrition rates [a Measure, one type of Subject] of our largest customers [Entities, another type of Subject]’ clearly captures the nature of the activity — exploration of trends vs. deep analysis of underlying factors — and the central focus — attrition rates for customers above a certain set of size criteria — from which follow many of the specifics needed to address this scenario in terms of data, analytical tools, and methods.

    We can also use Subjects to translate effectively between the different perspectives that shape discovery efforts, reducing ambiguity and increasing impact on both sides of the perspective divide: for example, from the language of business, which often motivates analytical work by asking questions in business terms, to the perspective of analysis.
    The question posed to a Data Scientist or analyst may be something like “Why are sales of our new kinds of potato chips to our largest customers fluctuating unexpectedly this year?” or “Where can we innovate, by expanding our product portfolio to meet unmet needs?”. Analysts translate questions and beliefs like these into one or more empirical discovery efforts that more formally and granularly indicate the plan, methods, tools, and desired outcomes of analysis. From the perspective of analysis, this second question might become, “Which customer needs of type ‘A', identified and measured in terms of ‘B’, that are not directly or indirectly addressed by any of our current products, offer 'X' potential for ‘Y' positive return on the investment ‘Z' required to launch a new offering, in time frame ‘W’? And how do these compare to each other?”. Translation also happens from the perspective of analysis to the perspective of data: in terms of availability, quality, completeness, format, volume, etc.

    By implication, we are proposing that most working organizations — small and large, for profit and non-profit, domestic and international, and in the majority of industries — can be described for analytical purposes using this collection of Subjects. This is a bold claim, but simplified articulation of complexity is one of the primary goals of sensemaking frameworks such as this one. (And, yes, this is in fact a framework for making sense of sensemaking as a category of activity - but we’re not considering the recursive aspects of this exercise at the moment.)

    Compellingly, we can place the collection of Subjects on a single continuum — we call it the Sensemaking Spectrum — that simply and coherently illustrates some of the most important relationships between the different types of Subjects, and also illuminates several of the fundamental dynamics shaping business analytics as a domain. As a corollary, the Sensemaking Spectrum also suggests innovation opportunities for products and services related to business analytics.

    The first illustration below shows Subjects arrayed along the Sensemaking Spectrum; the second illustration presents examples of each kind of Subject. Subjects appear in colors ranging from blue to reddish-orange, reflecting their place along the Spectrum, which indicates whether a Subject addresses more the viewpoint of systems and data (data centric and blue) or people (user centric and orange). This axis is shown explicitly above the Spectrum. Annotations suggest how Subjects align with the three significant perspectives of Data, Analysis, and Business that shape business analytics activity. This rendering makes explicit the translation and bridging function of Analysts as a role, and analysis as an activity.

    Subjects are best understood as fuzzy categories [http://georgelakoff.files.wordpress.com/2011/01/hedges-a-study-in-meaning-criteria-and-the-logic-of-fuzzy-concepts-journal-of-philosophical-logic-2-lakoff-19731.pdf], rather than tightly defined buckets. For each Subject, we suggest some of the most common examples: Entities may be physical things such as named products, or locations (a building, or a city); they could be Concepts, such as satisfaction; or they could be Relationships between entities, such as the variety of possible connections that define linkage in social networks.
    Likewise, Events may indicate a time and place in the dictionary sense; or they may be Transactions involving named entities; or take the form of Signals, such as ‘some Measure had some value at some time’ - what many enterprises understand as alerts.

    The central story of the Spectrum is that though consumers of analytical insights (represented here by the Business perspective) need to work in terms of Subjects that are directly meaningful to their perspective — such as Themes, Plans, and Goals — the working realities of data (condition, structure, availability, completeness, cost) and the changing nature of most discovery efforts make direct engagement with source data in this fashion impossible. Accordingly, business analytics as a domain is structured around the fundamental assumption that sensemaking depends on analytical transformation of data. Analytical activity incrementally synthesizes more complex and larger scope Subjects from data in its starting condition, accumulating insight (and value) by moving through a progression of stages in which increasingly meaningful Subjects are iteratively synthesized from the data and recombined with other Subjects. The end goal of ‘laddering’ successive transformations is to enable sensemaking from the business perspective, rather than the analytical perspective. Synthesis through laddering is typically accomplished by specialized Analysts using dedicated tools and methods.

    Beginning with some motivating question, such as seeking opportunities to increase the efficiency (a Theme) of fulfillment processes to reach some level of profitability by the end of the year (a Plan), Analysts will iteratively wrangle and transform source data Records, Values and Attributes into recognizable Entities, such as Products, that can be combined with Measures or other data into the Events (shipment of orders) that indicate the workings of the business.

    More complex Subjects (to the right of the Spectrum) are composed of or make reference to less complex Subjects: a business Process such as Fulfillment will include Activities such as confirming, packing, and then shipping orders. These Activities occur within or are conducted by organizational units such as teams of staff or partner firms (Networks), composed of Entities which are structured via Relationships, such as supplier and buyer. The fulfillment process will involve other types of Entities, such as the products or services the business provides. The success of the fulfillment process overall may be judged according to a sophisticated operating efficiency Model, which includes tiered Measures of business activity and health for the transactions and activities included. All of this may be interpreted through an understanding of the operational domain of the business's supply chain (a Domain).

    We'll discuss the Spectrum in more depth in succeeding posts.


  • Refactoring FizzBuzz

    - by MarkPearl
    A few years ago I blogged about FizzBuzz; at the time the post was prompted by Scott Hanselman, who had podcasted about how surprised he was that some programmers could not even solve the FizzBuzz problem within a reasonable period of time during a job interview. At the time I thought I would give the problem a go in F# and sure enough the solution was fairly simple – I then also did a basic solution in C# but never posted it. Since then I have learned that being able to solve a problem and how you solve the problem are two totally different things. Today I decided to give the problem a retry and see if I had learnt anything new in the last year or so. Here is how my solution looked after refactoring…

    Solution 1 – Cheap and Nasty

        public class FizzBuzzCalculator
        {
            public string NumberFormat(int number)
            {
                var numDivisibleBy3 = (number % 3) == 0;
                var numDivisibleBy5 = (number % 5) == 0;
                if (numDivisibleBy3 && numDivisibleBy5)
                    return String.Format("{0} FizzBuzz", number);
                else if (numDivisibleBy3)
                    return String.Format("{0} Fizz", number);
                else if (numDivisibleBy5)
                    return String.Format("{0} Buzz", number);
                return number.ToString();
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var fizzBuzz = new FizzBuzzCalculator();
                for (int i = 0; i < 100; i++)
                {
                    Console.WriteLine(fizzBuzz.NumberFormat(i));
                }
            }
        }

    In my first attempt I just looked at solving the problem – it works, and could be an acceptable solution, but tonight I thought I would see how far I could refactor it. The section I decided to focus on was the mass of if..else code in the NumberFormat method.

    Solution 2 – Replacing If…Else with a Dictionary

        public class FizzBuzzCalculator
        {
            private readonly Dictionary<Tuple<bool, bool>, string> _mappings;

            public FizzBuzzCalculator(Dictionary<Tuple<bool, bool>, string> mappings)
            {
                _mappings = mappings;
            }

            public string NumberFormat(int number)
            {
                var numDivisibleBy3 = (number % 3) == 0;
                var numDivisibleBy5 = (number % 5) == 0;
                var mappedKey = new Tuple<bool, bool>(numDivisibleBy3, numDivisibleBy5);
                return String.Format("{0} {1}", number, _mappings[mappedKey]);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var mappings = new Dictionary<Tuple<bool, bool>, string>
                {
                    { new Tuple<bool, bool>(true, true), "- FizzBuzz" },
                    { new Tuple<bool, bool>(true, false), "- Fizz" },
                    { new Tuple<bool, bool>(false, true), "- Buzz" },
                    { new Tuple<bool, bool>(false, false), "" }
                };
                var fizzBuzz = new FizzBuzzCalculator(mappings);
                for (int i = 0; i < 100; i++)
                {
                    Console.WriteLine(fizzBuzz.NumberFormat(i));
                }
                Console.ReadLine();
            }
        }

    In my second attempt I looked at removing the if..else from the NumberFormat method. A dictionary proved to be useful for this – I added a constructor to the class and injected the dictionary mapping. One could argue that this is totally overkill, but if I was going to use this code in a large system, an approach like this makes it easy to put the data in a configuration file, which would improve its compliance with the OC principle (open for extension, closed for modification). I could of course take the OC principle even further – the check for divisibility by 3 and 5 is tightly coupled to this class; if I wanted to make it 4 instead of 3, I would need to adjust the class. This introduces my third refactoring.
    Solution 3 – Introducing Delegates and Injecting Them into the Class

        public delegate bool FizzBuzzComparison(int number);

        public class FizzBuzzCalculator
        {
            private readonly Dictionary<Tuple<bool, bool>, string> _mappings;
            private readonly FizzBuzzComparison _comparison1;
            private readonly FizzBuzzComparison _comparison2;

            public FizzBuzzCalculator(Dictionary<Tuple<bool, bool>, string> mappings,
                FizzBuzzComparison comparison1, FizzBuzzComparison comparison2)
            {
                _mappings = mappings;
                _comparison1 = comparison1;
                _comparison2 = comparison2;
            }

            public string NumberFormat(int number)
            {
                var mappedKey = new Tuple<bool, bool>(_comparison1(number), _comparison2(number));
                return String.Format("{0} {1}", number, _mappings[mappedKey]);
            }
        }

        class Program
        {
            private static bool DivisibleByNum(int number, int divisor)
            {
                return number % divisor == 0;
            }

            public static bool Divisibleby3(int number)
            {
                return number % 3 == 0;
            }

            public static bool Divisibleby5(int number)
            {
                return number % 5 == 0;
            }

            static void Main(string[] args)
            {
                var mappings = new Dictionary<Tuple<bool, bool>, string>
                {
                    { new Tuple<bool, bool>(true, true), "- FizzBuzz" },
                    { new Tuple<bool, bool>(true, false), "- Fizz" },
                    { new Tuple<bool, bool>(false, true), "- Buzz" },
                    { new Tuple<bool, bool>(false, false), "" }
                };
                var fizzBuzz = new FizzBuzzCalculator(mappings, Divisibleby3, Divisibleby5);
                for (int i = 0; i < 100; i++)
                {
                    Console.WriteLine(fizzBuzz.NumberFormat(i));
                }
                Console.ReadLine();
            }
        }

    Here I have taken it one step further and introduced delegates that are injected into the FizzBuzzCalculator class. From an OC-principle perspective this is probably more compliant than Solution 2, but there seems to be a lot of noise. Anonymous delegates can reduce that noise and increase readability, which is what I have done in Solution 4.

    Solution 4 – Anonymous Delegates

        public delegate bool FizzBuzzComparison(int number);

        public class FizzBuzzCalculator
        {
            private readonly Dictionary<Tuple<bool, bool>, string> _mappings;
            private readonly FizzBuzzComparison _comparison1;
            private readonly FizzBuzzComparison _comparison2;

            public FizzBuzzCalculator(Dictionary<Tuple<bool, bool>, string> mappings,
                FizzBuzzComparison comparison1, FizzBuzzComparison comparison2)
            {
                _mappings = mappings;
                _comparison1 = comparison1;
                _comparison2 = comparison2;
            }

            public string NumberFormat(int number)
            {
                var mappedKey = new Tuple<bool, bool>(_comparison1(number), _comparison2(number));
                return String.Format("{0} {1}", number, _mappings[mappedKey]);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var mappings = new Dictionary<Tuple<bool, bool>, string>
                {
                    { new Tuple<bool, bool>(true, true), "- FizzBuzz" },
                    { new Tuple<bool, bool>(true, false), "- Fizz" },
                    { new Tuple<bool, bool>(false, true), "- Buzz" },
                    { new Tuple<bool, bool>(false, false), "" }
                };
                var fizzBuzz = new FizzBuzzCalculator(mappings, (n) => n % 3 == 0, (n) => n % 5 == 0);
                for (int i = 0; i < 100; i++)
                {
                    Console.WriteLine(fizzBuzz.NumberFormat(i));
                }
                Console.ReadLine();
            }
        }

    With the anonymous delegates I think the noise level has now been reduced. This is where I am going to end this post: I have gone through 4 iterations of the code, from the initial solution using if..else to delegates and dictionaries. Each approach has its pros and cons, and where the code is intended to be used would be a large determining factor. If you can think of an alternative way to do FizzBuzz, add a comment!
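    One alternative along the same lines, offered as a minimal sketch rather than a drop-in replacement (it assumes a modern C# compiler with value tuples and LINQ, neither of which the post uses): generalize the two comparisons into an open-ended list of (predicate, word) rules, so adding a "Bazz on 7" rule becomes a one-line change at the call site.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class RuleBasedFizzBuzz
        {
            private readonly IEnumerable<(Func<int, bool> Applies, string Word)> _rules;

            public RuleBasedFizzBuzz(IEnumerable<(Func<int, bool> Applies, string Word)> rules)
            {
                _rules = rules;
            }

            public string NumberFormat(int number)
            {
                // Concatenate the words of every matching rule; fall back to the number.
                var words = string.Concat(_rules.Where(r => r.Applies(number)).Select(r => r.Word));
                return words.Length == 0 ? number.ToString() : string.Format("{0} - {1}", number, words);
            }
        }

        class Program
        {
            static void Main()
            {
                var fizzBuzz = new RuleBasedFizzBuzz(new (Func<int, bool>, string)[]
                {
                    (n => n % 3 == 0, "Fizz"),
                    (n => n % 5 == 0, "Buzz"),
                });
                for (int i = 1; i <= 100; i++)
                {
                    Console.WriteLine(fizzBuzz.NumberFormat(i));
                }
            }
        }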


  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model, because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise."

    Case in point: for Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise.

    What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.

    Supply Chain Management Systems in a Virtual Enterprise

    Historically, enterprise software was constructed to automate the ERP, and the supply chain systems then extended the ERP; they were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources.

    Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.

    Case in point: almost everyone has ordered from Amazon.com at one time or another.
    Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track the status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, require a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem, as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice: Oracle Distributed Order Orchestration (DOO).

    Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of its highly complex legacy systems.

    Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, it turned to Oracle Fusion DOO to bring these entities together.

    Disturbing the Peace with Acquisitions

    Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems - or even introduce an entirely new business, like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation.

    The Flip Side of Fulfillment

    Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how do you manage complex supply in a flexible way when there are multiple trading parties involved? How do you manage the supply to suppliers? How do you manage critical components that need to merge in a tier-two or tier-three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.

    The Clear Front Runner

    No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area.
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.


  • Partner Blog Series: PwC Perspectives Part 2 - Jumpstarting your IAM program with R2

    - by Tanu Sood
    Identity and access management (IAM) isn’t a new concept. Over the past decade, companies have begun to address identity management through a variety of solutions that have primarily focused on provisioning. The new-age workforce is converging at a rapid pace, with ever increasing demand to use a diverse portfolio of applications and systems to interact and interface with peers in the industry and customers alike. Oracle has taken a significant leap with its release of Identity and Access Management 11gR2 towards enabling this global workforce to conduct business in a secure, efficient and effective manner.

    As companies deal with IAM business drivers, it becomes immediately apparent that holistic, rather than piecemeal, approaches better address their needs. When planning an enterprise-wide IAM solution, the first step is to create a common framework that serves as the foundation on which to build cost, compliance and business process efficiencies. As a leading industry practice, IAM should be established on a foundation of accurate identity data, making this data available in a uniform manner to downstream applications and processes. Mature organizations are looking beyond IAM’s basic benefits to harness more advanced capabilities in user lifecycle management. For any organization looking to embark on an IAM initiative, consider the following use cases in managing and administering user access.

    Expanding the Enterprise Provisioning Footprint

    Almost all organizations have some helpdesk resources tied up in handling access requests from users, a distraction from their core job of handling problem tickets. This dependency has mushroomed from the traditional acceptance of provisioning solutions integrating and addressing only a portion of applications in the heterogeneous landscape. Oracle Identity Manager (OIM) 11gR2 solves this problem by offering integration with third-party ticketing systems as “disconnected applications”. It allows existing business processes to be seamlessly integrated into the system and tracked throughout their lifecycle. With minimal effort and analysis, an organization can begin integrating OIM with groups or applications that are involved with manually intensive access provisioning and de-provisioning activities. This aspect of OIM allows organizations to on-board applications and associated business processes quickly using out-of-the-box templates and frameworks. This is especially important for organizations looking to fold in users and resources from mergers and acquisitions.

    Simplifying Access Requests

    Organizations looking to implement access request solutions often find it challenging to get their users to accept and adopt the new processes. So how do we improve the user experience, make it intuitive and personalized, and yet simplify the user access process? With R2, OIM helps organizations alleviate the challenge by placing the most used functionality front and centre in the new user request interface. Roles, application accounts, and entitlements can all be found in the same interface as catalog items, giving business users a single location to go to whenever they need to initiate, approve or track a request. Furthermore, if a particular item is not relevant to a user’s job function or area inside the organization, it can be hidden so as to not overwhelm or confuse the user with superfluous options.
    The ability to customize the user interface to suit your needs helps in exercising business rules effectively and avoiding access proliferation within the organization.

    Saving Time with Templates

    A typical use case that is most beneficial to business users is the flexibility to place, edit, and withdraw requests based on changing circumstances and business needs. With OIM R2, multiple catalog items can now be added to and removed from the shopping cart, an ecommerce paradigm that many users are already familiar with. This feature can be especially useful when setting up a large number of new employees or granting an existing department or group access to a newly integrated application. Additionally, users can create their own shopping cart templates in order to complete subsequent requests more quickly. This saves the user from having to search for and select items all over again if a request is similar to a previous one.

    Advanced Delegated Administration

    A key feature of any provisioning solution should be to empower each business unit to manage its own access requests. By bringing administration closer to the user, you improve user productivity, enable efficiency and alleviate administration overhead. Doing so requires a federated services model, so that the business units capable of shouldering the onus of user lifecycle management for their business users can be enabled to do so. OIM 11gR2 offers advanced administrative options for creating, managing and controlling business logic and workflows through an easy-to-use administrative interface and tools that can be exposed to delegated business administrators. For example, these business administrators can establish or modify how certain requests and operations should be handled within their business unit, based on a number of attributes ranging from the type of request to the risk level of the individual items requested.

    Closed-Loop Remediation

    Security continues to be a major concern for most organizations. Identity management solutions bolster security by ensuring only the right users have the right access to the right resources. Preventing unauthorized access, and detecting and remediating it where it already exists, are key requirements of an enterprise-grade, proven solution. But the challenge with most solutions today is that some of this information still exists in silos, and when changes are made to systems directly, not all information is captured. With R2, Oracle is offering a comprehensive identity governance solution that our customer organizations are leveraging for closed-loop remediation: an automated way for administrators to revoke unauthorized access. The change is automatically captured and the action noted for continued management.

    Conclusion

    While implementing provisioning solutions, it is important to keep both the near-term and the long-term goals in mind. The provisioning solution should always be part of a larger security and identity management program, with the ability to integrate seamlessly not only with the company’s infrastructure but also with the information and business models compiled and used by the other identity management solutions. This allows organizations to reduce the cost of ownership, close security gaps and leverage existing infrastructure. And having done so at multiple client sites, this is the approach we recommend.
    In our next post, we will take a journey through our experiences of advising clients looking to upgrade to R2 from a previous version or migrating from a different solution.

    Meet the Writers:

    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, of which he has been implementing Identity Management solutions for the past 8 years.

    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, of which she has been implementing Identity Management solutions for the past one and a half years.


  • Big data: An evening in the life of an actual buyer

    - by Jean-Pierre Dijcks
    Here I am, and this is an actual story of one of my evenings, trying to spend money with a company and ultimately failing. I just gave up and bought a service from another vendor, not the incumbent. Here is that story, and how I think big data could actually fix this (and potentially prevent some of it from happening). In the end this story should illustrate how big data can benefit me (get me what I want without causing grief) and the company I am trying to buy something from. Note: lots of details left out; I have no intention of being the annoyed blogger moaning about a specific company.

    What did I want to get? We watch TV, we have internet and we do have a land line. The land line is from a different vendor than the TV and the internet. I have decided that this makes no sense, so I was going to get a bundle (no need to infer who this is, I just picked the generic bundle word, as this is what I want to get) of all three services, as this seems to save me money. I also want to not talk to people; I just want to click on a website when I feel like it and get it all sorted. I do think that is a realistic expectation. I want to do my shopping at 9.30pm while watching silly reruns on TV.

    Problem 1 - Bad links

    So, I'm an existing customer of the company I want to buy my bundle from. I go to the website, I click on offers. Turns out they are offers for new customers. After grumbling about how good they are, I click on offers for existing customers. Bummer, it goes to offers for new customers, so I click again on the link for offers for existing customers. No cigar... it just does not work.

    Big data solutions:

    1) Do not show an existing customer the offers for new customers unless they are the same => This is only partially doable without login, but if a customer logs in, the application should always know that this is an existing customer. And in general, imagine I do this from my home, going through the internet service of this vendor to their domain... an instant filter should move me into the "existing customer" route.

    2) Flag dead or incorrect links => I've clicked the link for "existing customer offers" at least 3 times in under 5 seconds... Identifying patterns like this is easy in Hadoop and can very quickly produce a list of potentially incorrect links. No need for real-time fixing; just the fact that this link can be pro-actively fixed across my entire web domain is a good thing. Preventative maintenance!

    Problem 2 - Purchase cannot be completed

    Apart from the fact that the browsing path to actually get to what I want is poorly designed, my purchase never gets past a specific point. In other words, I put something into my shopping cart, and when I want to move on, the application either crashes (sending me to an error page), or hangs, or goes into something like chat. So I try again, and again, and again. I think I tried this entire path (while being logged in!!) at least 10 times over the course of 20 minutes. I also clicked on the feedback button and, frustrated as I was, tried to explain that this did not work...

    Big data solutions:

    1) This web site does shopping cart analysis. I got an email the next day stating I have things in my shopping cart, just click here to complete my purchase. After the above experience, this just added insult to my pain...

    2) What should have happened is a Hadoop job going over all logged-in customers that are on the buy flow.
    It should flag anyone who is trying (multiple attempts from the same user to do the same thing), analyze the shopping cart and the clicks to identify what the customer wants, include the feedback provided (note: always own your own website feedback, never just farm this out!!), and in a short turnaround time (30 minutes to 2 hours or so) email me with a link to complete my purchase. Not a link to my shopping cart 12 hours later, but a link to actually achieve what I wanted...

    Why should this company go through the big data effort? I do believe this is relatively easy to do using our Oracle Event Processing and Big Data Appliance solutions combined. It is almost so simple (to my mind) that it makes no sense that this is not in place. But now I am ranting...

    Why is this interesting? It is because of $$$$. After trying really hard - I mean I did this all in the evening, and again in the morning before going to work - I kept on failing. But I really wanted this to work... So an email that said "sorry, we noticed you tried to get a bundle (the log knows what I wanted and where I failed, so this is easy to generate); here is the link to click and complete your purchase, and here are 2 movies on us as an apology" would have kept me as a customer, and got the additional $$$$ per month for the next couple of years. It would also have led to upsell on my phone package, etc. Instead, I went to a completely different company and bought service from them. Lost money for company A, negative sentiment for company A, and me telling this story at the water cooler, influencing more people to think negatively about company A. All in all, a loss of easy money and a ding in sentiment and image, where a relatively simple solution exists and can be put in place on the software I describe routinely in this blog...

    For those who are coming to OpenWorld and maybe see value in solving the above, or are thinking of how to solve this, come visit us in Moscone North - Oracle Red Lounge or in the Engineered Systems Showcase.
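    As a rough illustration of the detection logic described above - the same user hitting the same link or step several times within a few seconds - here is a minimal sketch in C# over an already-parsed click log. The log shape, field names and thresholds are assumptions for the example, and a real implementation would run as a Hadoop or event-processing job rather than in-memory LINQ.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // One parsed click-log line; the shape is an assumption for illustration.
        class Click
        {
            public string UserId;
            public string Target;   // link or buy-flow step the user clicked
            public DateTime At;
        }

        static class StuckFlowDetector
        {
            // Flag (user, target) pairs with 3 or more clicks inside a 5-second
            // window: a strong hint the link is dead or the flow is stuck.
            public static IEnumerable<Tuple<string, string>> Flag(IEnumerable<Click> log)
            {
                return from c in log
                       group c by new { c.UserId, c.Target } into g
                       let times = g.Select(x => x.At).OrderBy(t => t).ToList()
                       where Enumerable.Range(0, Math.Max(0, times.Count - 2))
                                       .Any(i => (times[i + 2] - times[i]).TotalSeconds <= 5)
                       select Tuple.Create(g.Key.UserId, g.Key.Target);
            }
        }

    Each flagged pair is a candidate for both outputs the post asks for: a list of potentially broken links for preventative maintenance, and a per-customer "complete your purchase" follow-up email.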


  • At times, you need to hire a professional.

    - by Phil Factor
    After months of increasingly demanding toil, the development team I belonged to was told that the project was to be canned and the whole team would be fired. I’d been brought into the team as an expert in the data implications of a business re-engineering of a major financial institution. Nowadays, you’d call me a data architect, I suppose. I’d spent a happy year being paid consultancy fees solving a succession of interesting problems until the point when the company lost its nerve and closed the entire initiative. The IT industry was in one of its characteristic mood-swings downwards.

    After the announcement, we met in the canteen. A few developers had scented the smell of death around the project already and had been applying unsuccessfully for jobs. There was a sense of doom in the mass of dishevelled and bleary-eyed developers. After giving vent to anger and despair, talk turned to getting new employment. It was then that I perked up.

    I’m not an obvious choice to give advice on getting, or passing, IT interviews. I reckon I’ve failed most of the job interviews I’ve ever attended. I once even failed an interview for a job I’d already been doing perfectly well for a year. The jobs I’ve got have mostly been from personal recommendation. Paradoxically though, from years as a manager trying to recruit good staff, I know a lot about what IT managers are looking for. I gave an impassioned speech outlining the important factors in getting to an interview.

    The most important thing, certainly in my time at work, is the quality of the résumé or CV. I can’t even guess the huge number of CVs (résumés) I’ve read through, scanning for candidates worth interviewing. Many IT developers find it impossible to describe their career succinctly on two sides of paper. They leave chunks of their life out (were they in prison?), get immersed in detail, put in irrelevancies, describe what was going on at work rather than what they themselves did, exaggerate their importance, criticize their previous employers, aren’t aware of the important aspects of a role to a potential employer, suffer from shyness and modesty, and lack any sort of organized perspective of their work. There are many ways of failing to write a decent CV. Many developers suffer from the delusion that their worth can be recognized purely from the code that they write, and shy away from anything that seems like self-aggrandizement.

    No. A résumé must make a good impression, which means presenting the facts about yourself in a clear and positive way. You can’t do it yourself. Why not have your résumé professionally written? A good professional CV writer will know the qualities being looked for in a CV and interrogate you to winkle them out. Their job is to make order and sense out of a confused career, to summarize in one page a mass of detail that presents to any recruiter the information that’s wanted; to stand back and describe an accurate summary of your skills and work experiences dispassionately, without rancor, pity or modesty. You are no more capable of producing an objective documentation of your career than you are of taking your own appendix out.

    My next recommendation was more controversial: have a professional image overhaul, or makeover, followed by a professionally taken photo portrait. I discovered this by accident. It is normal for IT professionals to face impossible deadlines and long working hours by looking more and more like something that had recently blocked a sink.
    Whilst working in IT, and in a state of personal dishevelment, I’d been offered a role in a high-powered amateur production of an old ex-Broadway show, purely for my singing voice. I was supposed to be the presentable star. When the production team saw me, the air was thick with tension and despair. I was dragged kicking and protesting through a succession of desperate grooming, scrubbing, dressing and dieting. I emerged feeling like “That jewelled mass of millinery, That oiled and curled Assyrian bull, Smelling of musk and of insolence.” (Tennyson, Maud; A Monodrama (1855), section VI, stanza 6.) I was then photographed by a professional stage photographer.

    When the photographs were delivered, I was amazed. It wasn’t me, but it looked somehow respectable, confident, trustworthy. A while later, when the show had ended, I took the photos and used them for work. They went with the CV to job applications. It did the trick better than I could ever have imagined.

    My views went down big with the developers. Old rivalries were put immediately to one side. We voted, with a show of hands, to devote our energies for the entire notice period to getting employable. We had a team sourcing the CV writer, a team organising the makeovers and photographer, and a third team arranging mock interviews. A fourth team determined the best websites and agencies for recruitment, with the help of friends in the trade. Because there were around thirty developers, we were in a good negotiating position. Of the three CV writers we found who lived locally, one proved exceptional. She was an ex-journalist with an eye for detail and years of experience in manipulating language. We tried her skills out on a developer who seemed a hopeless case, and he was called to interview within a week. I was surprised, too, how many companies were experts at image makeovers. Within the month, we all looked like those weird slick people in the ‘Office-tagged’ stock photographs who stare keenly and interestedly at PowerPoint slides in sleek chromium-plated high-rise offices. The portraits we used still adorn the entries of many of my ex-colleagues on LinkedIn. After a month’s worth of mock interviews and technical Q&A, our stutters, hesitations, evasions and periphrastic circumlocutions were all gone.

    There is little more to relate. With the résumés or CVs, mugshots, and schooling in how to pass interviews, we’d all got new and better-paid jobs well before our month’s notice had ended. Whilst normally an IT team under the axe is a sad and depressed place to belong to, this wonderful group of people had proved the power of organized group action in turning the experience to advantage. It left us feeling slightly guilty that we were somehow cheating, but I guess we were merely leveling the playing field.


  • Adventures in Windows 8: Working around the navigation animation issues in LayoutAwarePage

    - by Laurent Bugnion
    LayoutAwarePage is a pretty cool add-on for Windows 8 apps, which greatly facilitates the implementation of orientation-aware (portrait, landscape) as well as state-aware (snapped, filled, fullscreen) apps. It has, however, a few issues that become obvious when you use transformed elements on your page.

    Adding a LayoutAwarePage to your application

    If you start with a blank app, the MainPage is a vanilla Page, with no such feature. In order to add a LayoutAwarePage to your app, you need to add this class (and a few helpers) with the following operation: right-click on the Solution and select Add, New Item from the context menu. From the dialog, select a Basic Page (not a Blank Page, which is another vanilla page). If you prefer, you can also use Split Page, Items Page, Item Detail Page, Grouped Items Page or Group Detail Page, which are all LayoutAwarePages. Personally I like to start with a Basic Page, which gives me more creative freedom.

    Adding this new page will cause Visual Studio to show a prompt asking you for permission to add additional helper files to the Common folder. One of these helpers is the LayoutAwarePage class, which is where the magic happens. LayoutAwarePage offers some help for the detection of orientation and state (which makes it a pleasure to design for all these scenarios in Blend, by the way) as well as storage for the navigation state (more about that in a future article).

    Issue with LayoutAwarePage

    When you use UI elements such as a background picture, a watermark label, logos, etc., it is quite common to do a few things with them: making them partially transparent (this is especially true for background pictures; for instance I really like a black Page background with a half-transparent picture placed on top of it), and transforming them, for instance rotating them a bit, scaling them, etc. Here is an example with a picture of my two beautiful daughters in the Bird Park in Kuala Lumpur, as well as a transformed TextBlock. The image has an opacity of 40% and the TextBlock a simple RotateTransform.

    If I create an application with a MainPage that navigates to this LayoutAwarePage, however, I will see a very annoying effect: the background picture appears with an opacity of 100%, and the TextBlock is not rotated. This lasts for less than a second (during the navigation animation) before the elements “snap into place” and get their desired effect.
    Here is the XAML that causes the annoying effect:

        <common:LayoutAwarePage x:Name="pageRoot"
                                x:Class="App13.BasicPage1"
                                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                                xmlns:common="using:App13.Common"
                                xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                                xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                                mc:Ignorable="d">
            <Grid Style="{StaticResource LayoutRootStyle}">
                <Grid.RowDefinitions>
                    <RowDefinition Height="140" />
                    <RowDefinition Height="*" />
                </Grid.RowDefinitions>
                <Image Source="Assets/el20120812025.jpg"
                       Stretch="UniformToFill"
                       Opacity="0.4"
                       Grid.RowSpan="2" />
                <Grid>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="Auto" />
                        <ColumnDefinition Width="*" />
                    </Grid.ColumnDefinitions>
                    <Button x:Name="backButton"
                            Click="GoBack"
                            IsEnabled="{Binding Frame.CanGoBack, ElementName=pageRoot}"
                            Style="{StaticResource BackButtonStyle}" />
                    <TextBlock x:Name="pageTitle"
                               Grid.Column="1"
                               Text="Welcome"
                               Style="{StaticResource PageHeaderTextStyle}" />
                </Grid>
                <TextBlock HorizontalAlignment="Center"
                           TextWrapping="Wrap"
                           Text="Welcome to my Windows 8 Application"
                           Grid.Row="1"
                           VerticalAlignment="Bottom"
                           FontFamily="Segoe UI Light"
                           FontSize="70"
                           FontWeight="Light"
                           TextAlignment="Center"
                           Foreground="#FFFFA200"
                           RenderTransformOrigin="0.5,0.5"
                           UseLayoutRounding="False"
                           d:LayoutRounding="Auto"
                           Margin="0,0,0,153">
                    <TextBlock.RenderTransform>
                        <CompositeTransform Rotation="-6.545" />
                    </TextBlock.RenderTransform>
                </TextBlock>
                <VisualStateManager.VisualStateGroups>
                    [...]
                </VisualStateManager.VisualStateGroups>
            </Grid>
        </common:LayoutAwarePage>

    Solving the issue

    In order to solve this “snapping” issue, the solution is to wrap the transformed elements in an empty Grid. Honestly, to me it sounds like a bug in the LayoutAwarePage navigation animation, but thankfully the workaround is not that difficult: simply change the main Grid as follows:

        <Grid Style="{StaticResource LayoutRootStyle}">
            <Grid.RowDefinitions>
                <RowDefinition Height="140" />
                <RowDefinition Height="*" />
            </Grid.RowDefinitions>
            <Grid Grid.RowSpan="2">
                <Image Source="Assets/el20120812025.jpg"
                       Stretch="UniformToFill"
                       Opacity="0.4" />
            </Grid>
            <Grid>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="Auto" />
                    <ColumnDefinition Width="*" />
                </Grid.ColumnDefinitions>
                <Button x:Name="backButton"
                        Click="GoBack"
                        IsEnabled="{Binding Frame.CanGoBack, ElementName=pageRoot}"
                        Style="{StaticResource BackButtonStyle}" />
                <TextBlock x:Name="pageTitle"
                           Grid.Column="1"
                           Text="Welcome"
                           Style="{StaticResource PageHeaderTextStyle}" />
            </Grid>
            <Grid Grid.Row="1">
                <TextBlock HorizontalAlignment="Center"
                           TextWrapping="Wrap"
                           Text="Welcome to my Windows 8 Application"
                           VerticalAlignment="Bottom"
                           FontFamily="Segoe UI Light"
                           FontSize="70"
                           FontWeight="Light"
                           TextAlignment="Center"
                           Foreground="#FFFFA200"
                           RenderTransformOrigin="0.5,0.5"
                           UseLayoutRounding="False"
                           d:LayoutRounding="Auto"
                           Margin="0,0,0,153">
                    <TextBlock.RenderTransform>
                        <CompositeTransform Rotation="-6.545" />
                    </TextBlock.RenderTransform>
                </TextBlock>
            </Grid>
            <VisualStateManager.VisualStateGroups>
                [...]
            </VisualStateManager.VisualStateGroups>
        </Grid>

    Hopefully this will help a few people; I banged my head on the wall for a while before someone at Microsoft pointed me to the solution ;)

    Happy coding,
    Laurent


  • OpenIndiana (illumos): vmxnet3 interface lost on reboot

    - by protomouse
    I want my VMware vmxnet3 interface to be brought up with DHCP on boot. I can manually configure the NIC with:

        # ifconfig vmxnet3s0 plumb
        # ipadm create-addr -T dhcp vmxnet3s0/v4dhcp

    But after creating /etc/dhcp.vmxnet3s0 and rebooting, the interface is down and the logs show:

        Aug 13 09:34:15 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no
        Aug 13 09:34:15 neumann vmxnet3s: [ID 715698 kern.notice] vmxnet3s:0: stop()
        Aug 13 09:34:17 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x200000) -> no
        Aug 13 09:34:17 neumann vmxnet3s: [ID 920500 kern.notice] vmxnet3s:0: start()
        Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(TxRingSize) -> 256
        Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(RxRingSize) -> 256
        Aug 13 09:34:17 neumann vmxnet3s: [ID 778983 kern.notice] vmxnet3s:0: getprop(RxBufPoolLimit) -> 512
        Aug 13 09:34:17 neumann nwamd[491]: [ID 605049 daemon.error] 1: nwamd_set_unset_link_properties: dladm_set_linkprop failed: operation not supported
        Aug 13 09:34:17 neumann vmxnet3s: [ID 654879 kern.notice] vmxnet3s:0: getcapab(0x20000) -> no
        Aug 13 09:34:17 neumann nwamd[491]: [ID 751932 daemon.error] 1: nwamd_down_interface: ipadm_delete_addr failed on vmxnet3s0: Object not found
        Aug 13 09:34:17 neumann nwamd[491]: [ID 819019 daemon.error] 1: nwamd_plumb_unplumb_interface: plumb IPv4 failed for vmxnet3s0: Operation not supported on disabled object
        Aug 13 09:34:17 neumann nwamd[491]: [ID 160156 daemon.error] 1: nwamd_plumb_unplumb_interface: plumb IPv6 failed for vmxnet3s0: Operation not supported on disabled object
        Aug 13 09:34:17 neumann nwamd[491]: [ID 771489 daemon.error] 1: add_ip_address: ipadm_create_addr failed on vmxnet3s0: Operation not supported on disabled object
        Aug 13 09:34:17 neumann nwamd[491]: [ID 405346 daemon.error] 9: start_dhcp: ipadm_create_addr failed for vmxnet3s0: Operation not supported on disabled object

    I then tried disabling network/physical:nwam in favour of network/physical:default. This almost works: the interface is brought up, but physical:default fails and my network services (e.g. NFS) refuse to start.
        # ifconfig -a
        lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
                inet 127.0.0.1 netmask ff000000
        vmxnet3s0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:1: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:2: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:3: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:4: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:5: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:6: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:7: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        vmxnet3s0:8: flags=1004842<BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 9000 index 2
                inet 192.168.178.248 netmask ffffff00 broadcast 192.168.178.255
        lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
                inet6 ::1/128
        vmxnet3s0: flags=20002000840<RUNNING,MULTICAST,IPv6> mtu 9000 index 2
                inet6 ::/0

        # cat /var/svc/log/network-physical\:default.log
        [ Aug 16 09:46:39 Enabled. ]
        [ Aug 16 09:46:41 Executing start method ("/lib/svc/method/net-physical"). ]
        [ Aug 16 09:46:41 Timeout override by svc.startd. Using infinite timeout. ]
        starting DHCP on primary interface vmxnet3s0
        ifconfig: vmxnet3s0: DHCP is already running
        [ Aug 16 09:46:43 Method "start" exited with status 96. ]

    The NFS server is not running either:

        # svcs -xv network/nfs/server
        svc:/network/nfs/server:default (NFS server)
         State: offline since August 16, 2012 09:46:40 AM UTC
        Reason: Service svc:/network/physical:default is not running because a method failed.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/milestone/network:default
                svc:/network/physical:default
        Reason: Service svc:/network/physical:nwam is disabled.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/milestone/network:default
                svc:/network/physical:nwam
        Reason: Service svc:/network/nfs/nlockmgr:default is disabled.
           See: http://illumos.org/msg/SMF-8000-GE
          Path: svc:/network/nfs/server:default
                svc:/network/nfs/nlockmgr:default
           See: man -M /usr/share/man -s 1M nfsd
        Impact: This service is not running.

    I'm new to the world of Solaris, so any help solving this would be much appreciated. Thanks!
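    One avenue worth trying, sketched from the log output above rather than verified on this exact build: the "DHCP is already running" failure suggests that the legacy /etc/dhcp.vmxnet3s0 boot-time file and ipadm's own persistent configuration are both trying to start DHCP on the interface. Letting ipadm alone own the address, with nwam disabled, would look roughly like this (service and object names are taken from the output above):

        # svcadm disable svc:/network/physical:nwam
        # svcadm enable svc:/network/physical:default
        # rm /etc/dhcp.vmxnet3s0                   # stop net-physical starting a second DHCP client
        # ipadm delete-addr vmxnet3s0/v4dhcp       # clear the earlier manual attempt, if present
        # ipadm create-if vmxnet3s0                # 'create-ip' on newer illumos builds
        # ipadm create-addr -T dhcp vmxnet3s0/v4   # persistent by default; -t would make it temporary
        # svcadm clear svc:/network/physical:default
        # svcadm enable svc:/network/nfs/nlockmgr svc:/network/nfs/server

    Since ipadm records the address object persistently, it should be re-applied on every boot without any /etc/dhcp.* file, and with physical:default healthy the dependent NFS services should come online.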

    Read the article

  • Azure - Part 4 - Table Storage Service in Windows Azure

    - by Shaun
On the Windows Azure platform there are three storage services we can use to save our data in the cloud: Table, Blob and Queue. Before the Chinese New Year, Microsoft announced that Azure SDK 1.1 had been released, and it supports a new type of storage – Drive – which allows us to operate on NTFS files in the cloud. I will cover it in the coming few posts, but for now I would like to talk a bit about Table Storage.

Concept of the Table Storage Service

The most common development scenario is to retrieve, create, update and remove data from a data store; normally we communicate with a database. When we attempt to move our application to the cloud, the most common requirement is therefore a storage service. Windows Azure provides a built-in service that allows us to store structured data, called the Windows Azure Table Storage Service. The data stored in the table service is a collection of entities, and the entities are similar to rows or records in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key and the timestamp as the identifier for solving concurrency problems. Unlike a table in a database, the table service does not enforce a schema for tables, which means you can have two entities in the same table with different property sets.

The partition key is used by the Azure OS for load balancing and for entity group transactions. As you know, in the cloud you never know which machine is hosting your application and your data; they can be moved based on transaction weight and the number of requests. If the Azure OS finds that many requests target your Book entities with the partition key "Novel", it can move them to another idle machine to increase performance. So when choosing the partition key for your entities, make sure it indicates category or group information, so that the Azure OS can perform load balancing as you wish.

Consuming the Table

Although the table service looks like a database, you cannot access it the way you are used to – neither ADO.NET nor ODBC. The table service exposes itself through the ADO.NET Data Services protocol, which lets you consume it RESTfully via HTTP requests. The Azure SDK provides a set of classes for us to connect to it. There are two classes we need: TableServiceContext and TableServiceEntity. TableServiceContext inherits from DataServiceContext, which represents the runtime context of an ADO.NET data service. It provides four methods we will mainly use:

CreateQuery: creates an IQueryable instance for a given entity type.
AddObject: adds the specified entity to the table service.
UpdateObject: updates an existing entity in the table service.
DeleteObject: deletes an entity from the table service.

Before you operate on the table service you need to provide valid account information. It's something like a database connection string, but with the account name and account key you received when you created the storage service on the Windows Azure Developer Portal. After getting the CloudStorageAccount you can create a CloudTableClient instance, which provides a set of methods for using the table service. A very useful method is CreateTableIfNotExist: it creates the table container for you if it does not already exist.
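As a minimal sketch of that sequence (the setting name "DataConnectionString" is my assumption – use whatever name you defined under the Settings section, and note this assumes the configuration setting publisher was wired up in WebRole, as described below):

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class TableSetup
{
    // Read the storage account from the role configuration and
    // make sure the "Account" table container exists.
    public static CloudTableClient EnsureAccountTable()
    {
        var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var tableClient = storageAccount.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("Account");
        return tableClient;
    }
}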
And then you can operate on the entities in that table through the methods I mentioned above. Let me explain a bit more through an example – we always prefer code to sentences.

Straightforward Access to the Table

Here I would like to build a WCF service on the Windows Azure platform with, for now, just one requirement: it should allow the client to create an account entity in the table service. The WCF service has a method named Register that accepts an instance of the account the client wants to create. After performing some validation, it adds the entity to the table service. So the first thing I did was create a Cloud Application in Visual Studio 2010 RC. (The Azure SDK 1.1 only supports VS2008 and VS2010 RC.) The solution looks like the one below. Then I added a configuration item for the storage account through the Settings section under the cloud project. (Double-click the Services file under the Roles folder and navigate to the Settings section.) This setting will be used to retrieve my storage account information. Since I am just in the development phase for now, I selected "UseDevelopmentStorage=true".

Next I navigated to the WebRole.cs file under my WCF project. If you have read my previous posts you will know that this file defines what happens when the application starts and terminates in the cloud. What I need to do when the application starts is set the configuration publisher to load my config file with the config name I specified, so the code looks like the sample below. I removed the original service and contract created by the VS template and added my IAccountService contract and its implementation class, AccountService. I added the service method Register with the parameters email and password; it returns a boolean value to indicate the result, which is very simple. At this moment, if I press F5 the application is established on my local development fabric and I can see through the browser that my service runs well.

Let's implement the service method Register, which adds a new entity to the table service. As I said before, the entities you want to store in the table service must have three properties: partition key, row key and timestamp. You could create a class with these three properties yourself, but the Azure SDK provides a base class for this, named TableServiceEntity, in the Microsoft.WindowsAzure.StorageClient namespace. So what we need to do is simpler: create a class named Account and derive it from TableServiceEntity. Then I add my own properties: Email, Password, DateCreated and DateDeleted. DateDeleted is a nullable DateTime value that indicates whether this entity has been deleted, and when.

Did you notice that I missed something here? Yes – the partition key and row key have not been assigned. The TableServiceEntity base class defines two constructors: one is a parameterless constructor that is used to fill the properties with values from the table service when retrieving data; the other takes two parameters, the partition key and the row key. As I said above, the partition key affects load balancing and the row key must be unique, so here I use the email as the partition key and the email plus a Guid as the row key.

OK, now we have finished the entity class we need to store in the table service. The next step is to create a data access class to add it. The Azure SDK gives us a base class for this, named TableServiceContext, as I mentioned above. So let's create a class to operate on the Account entities.
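Putting the two classes together, a rough sketch might look like this (a sketch only – any member names beyond those mentioned in the post are my assumptions):

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// The entity: partition key = email, row key = email plus a Guid.
public class Account : TableServiceEntity
{
    public Account() { }  // used by the runtime when materializing query results

    public Account(string email, string password)
        : base(email, email + "_" + Guid.NewGuid().ToString())
    {
        Email = email;
        Password = password;
        DateCreated = DateTime.UtcNow;
        DateDeleted = null;
    }

    public string Email { get; set; }
    public string Password { get; set; }
    public DateTime DateCreated { get; set; }
    public DateTime? DateDeleted { get; set; }
}

// The data access class: ensures the table container exists, then adds entities.
public class AccountDataContext : TableServiceContext
{
    public AccountDataContext(CloudStorageAccount account)
        : base(account.TableEndpoint.AbsoluteUri, account.Credentials)
    {
        // Make sure the "Account" table exists before any operation.
        account.CreateCloudTableClient().CreateTableIfNotExist("Account");
    }

    public void Add(Account account)
    {
        AddObject("Account", account);  // from System.Data.Services.Client
        SaveChanges();
    }
}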
The TableServiceContext needs the storage account information for its constructor: the combination of the storage service URI that we create on the Windows Azure platform and the relevant account name and key. The TableServiceContext uses this information to find the related address and to verify the account when operating on storage entities. Hence, in my AccountDataContext class I override this constructor and pass the storage account into it. All entities are saved in table storage in one or more tables, which we call "table containers". Before we operate on an entity we need to make sure that the table container has been created in storage. There's a method we can use for that: CloudTableClient.CreateTableIfNotExist. So in the constructor I call it first, to make sure every method is invoked after the table has been created. Notice that I passed the storage account endpoint URI and the credentials to specify where my storage is located and who I am. Another piece of advice: give your entity class the same name as the table when you create the table. It improves performance when you operate on it in the cloud, especially when querying.

Since the Register WCF method adds a new account to the table service, I created a corresponding method to add the account entity. Before implementing it, I had to add a reference – System.Data.Services.Client – to the project. This reference provides some common methods from ADO.NET Data Services that can be used with the Windows Azure Table Service. I used its AddObject method to create my account entity. Since the table service does not fully implement ADO.NET Data Services, there are some methods in System.Data.Services.Client that TableServiceContext doesn't support, such as AddLinks, etc.

Then I implemented the service method to add the account entity through the AccountDataContext. You can see that in the service implementation I load the storage account information through my configuration file and create the account table entity from the parameters. Then I create the AccountDataContext. If it's the first time this method has been invoked, the constructor of the AccountDataContext creates a table container for me. Then I use the Add method to add the account entity to the table.

Next, let's create a fairly simple client application to test this service. I created a Windows console application and added a service reference to my WCF service. The metadata information of the WCF service cannot be retrieved if it's deployed on Windows Azure, even though <serviceMetadata httpGetEnabled="true"/> has been set. If we need its metadata, we can deploy it on the local development service and then change the endpoint to the address in the cloud. In the client-side app.config file I specified the endpoint as the local development fabric address, then implemented the client to let me input an email and a password and invoke the WCF service to add my account. Let's run my application and see the result. Of course, it returns TRUE to me. And in the local SQL Express I can see that the data has been saved in the table.

Summary

In this post I explained more about the Windows Azure Table Storage Service. I also created a small application to demonstrate how to connect to it and consume it through the ADO.NET Data Services managed library provided with the Azure SDK. I only showed how to create an entity in the storage service.
In the next post I would like to explain how to query the entities with conditions through LINQ. I would also like to refactor my AccountDataContext class to make it dynamic for any kind of entity.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • dynatree: how can i select child node programmatically

    - by Muhammad Adeel Zahid
Hello everyone. I'm using jQuery's dynatree in my application, and I want to select all of the child nodes programmatically when a node is selected. The structure of my tree is as follows:

<div id="tree">
  <ul>
    <li>package 1
      <ul>
        <li>module 1.1
          <ul>
            <li>document 1.1.1</li>
            <li>document 1.1.2</li>
          </ul>
        </li>
        <li>module 1.2
          <ul>
            <li>document 1.2.1</li>
            <li>document 1.2.2</li>
          </ul>
        </li>
      </ul>
    </li>
    <li>package 2
      <ul>
        <li>module 2.1
          <ul>
            <li>document 2.1.1</li>
            <li>document 2.1.2</li>
          </ul>
        </li>
      </ul>
    </li>
  </ul>
</div>

Now, what I want is that when I click on the tree node with title "package 1", all its child nodes (i.e. module 1.1, document 1.1.1, document 1.1.2, module 1.2, document 1.2.1, document 1.2.2) should also be selected. Below is the approach I tried:

$("#tree").dynatree({
    onSelect: function(flag, dtnode) {
        // This will happen each time a check box is selected/deselected
        var selectedNodes = dtnode.tree.getSelectedNodes();
        var selectedKeys = $.map(selectedNodes, function(node) {
            //alert(node.data.key);
            return node.data.key;
        });
        // Set the hidden input field's value to the selected items
        $('#SelectedItems').val(selectedKeys.join(","));
        if (flag) {
            child = dtnode.childList;
            alert(child.length);
            for (i = 0; i < child.length; i++) {
                var x = child[i].select(true);
                alert(i);
            }
        }
    },
    checkbox: true,
    onActivate: function(dtnode) {
        //alert("You activated " + dtnode.data.key);
    }
});

In the if (flag) condition I get all the child nodes of the element the user selected, and it gives me the correct value, as I can see from the alert(child.length) statement. Then I run the loop to select all the children, but the loop never goes beyond the statement var x = child[i].select(true); and I never see the alert(i) statement being executed. The result of the above is that if I select package 1, module 1.1 and document 1.1.1 are also selected, but alert(i) is never executed, nor are the other children of package 1 selected.

In my view, when child[i].select(true) is executed the first time, it also triggers the onSelect event of its children, creating a kind of recursion. Is my thinking correct? Recursion or not, why on earth does it not complete the loop and execute the very next instruction, alert(i)? Please help me solve this problem – I'm dying to see that alert. Any suggestions and help are highly appreciated.

Thanks
Adeel
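Update: while writing this up I noticed two things in the dynatree docs that I haven't tested yet, so treat this as a sketch of what I plan to try. First, selectMode: 3 ("multi-hier") is supposed to propagate selection to the subtree automatically. Second, if manual propagation is needed, DynaTreeNode.visit() with a re-entrancy guard should avoid the recursion my loop runs into:

var isPropagating = false;

$("#tree").dynatree({
    checkbox: true,
    onSelect: function(flag, dtnode) {
        if (isPropagating) return;   // ignore the onSelect calls we trigger below
        isPropagating = true;
        // Propagate the selection to every descendant of the clicked node.
        dtnode.visit(function(node) {
            node.select(flag);
        });
        isPropagating = false;
        // Collect the keys once, after propagation has finished.
        var selectedKeys = $.map(dtnode.tree.getSelectedNodes(), function(node) {
            return node.data.key;
        });
        $('#SelectedItems').val(selectedKeys.join(","));
    }
});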

    Read the article

  • Fitting it together, database, reporting, applications in C#

    - by alvonellos
Introduction

Preamble

I was hesitant to post this, since it's an application whose intricate details are defined elsewhere, and answers may not be helpful to others. Within the past few weeks (I was actually going to write a blog post about this after I finished) I've discovered that the barrier I'm encountering is one that's actually quite common for newer developers. This question is not so much about a specific thing as it is about piecing those things together. I've searched the internet far and wide, and found many tutorials on how to create applications that are kind of similar to what I'm looking for. I've also looked at hiring another, more experienced developer to help me along, but all I've gotten are unqualified candidates who don't have the experience necessary and won't take care of the client or project like I will. I'd rather the project never transpire than release a solution that is half-baked. I've asked professors at my school, but they haven't turned up answers to my question. I'm an experienced developer, and I've written many applications that are – very abstractly – close to what I'm doing, but my experience from those applications isn't giving me enough leverage to solve this particular problem. I just hope that posting this isn't a mistake.

Project Description

I have a project I'm working on for a client that is a rewrite of an application, originally written in FoxPro 2.6 by someone before me, that performs some analysis (which, sadly, I'm not allowed to disclose per my employment contract) on financial data. One day, after a long talk between the client and me – where he intimately described his frustrations with all the bugs I've been hacking out of this code for six months now – he told me to just rewrite it, and gave me a month to write a good 1/8 of this 65k-LOC FoxPro monstrosity. It'll take me a good 3-6 months to rewrite this software (I know things the original programmer did not, like inheritance), going as I am right now, but I'm quickly discovering that I'm going to need to use databases. Prior to this contract I didn't even know about FoxPro, so I've had to learn FoxPro on the fly, write procedures and make modifications to the database. I've actually come to like it, and this project would be rewritten in FoxPro if it were still a supported language, because over the past few months I've come to like the features of FoxPro that make it so easy to develop data-driven applications. I once performed an experiment comparing C# to FoxPro: what took me 45 minutes in C# took me two in FoxPro, and I knew C# before FoxPro. I was hoping to leverage the power of C#, but it intimidates me that in FoxPro you can have one line of code and be using a database.

Prior to this, I have never done any serious database development from scratch. All the applications that I've written are in a different league: they are either completely data-naive, or data-naive enough that I can get away with not using a database, through serialization or by designing algorithms that work with the data in a stateless manner, so there is no need to worry about databases. I've come to realize, very quickly, that serialization and my facility with data structures have been my crutch all these years, preventing me from venturing into databases, and have consequently hindered my success in real-world programming.
Sure, I've written some database stuff in Perl and Python, and I've done forms and worked with relational databases and tables. I'm a wizard in Access and Excel (seriously) and can do just about anything, but it just feels unnatural writing SQL code in another language... I don't mind writing SQL; it's that bridge between the database and the program code that drives me absolutely bonkers. I hope I'm not the only one to think this, but it bothers me that I have to create statements like the following:

string sSql = "SELECT * from tablename"

There's really no reason for that kind of unchecked language binding between two languages and two APIs. Don't get me wrong, SQL is great, but I don't like the idea that, when executing commands on a SQL database, one must intermix database and application software with no database independence, which means that different versions of different databases can break code. This isn't very nice.

The nicest thing about FoxPro is the cohesiveness between the programming language and the database. It's so easy, and FoxPro makes it easy, because the tool just fits the task. I can see why so many developers have made a career with this language, because it lowered the barrier to entry for the data-driven applications that so many businesses need. It was wonderful. For my purposes today, with the demands and need for community support, extensibility, and language features, FoxPro isn't a solution that I feel would be the right tool for the job. I'm also worried about working too heavily with the database, because I've seen data-driven .NET applications have issues with database caches, running out of memory, and objects in the database not being collected (memory leaks).

And OH, the queries. Which one, how, and why? There is a plethora of different ways a database can be set up; I think I counted 5 or 6 different kinds of database projects alone that I can choose from. That is a great mountain for me to climb when I don't even know where to begin with writing data-driven applications. The problem isn't that I don't know SQL or that I don't know C#. I know both and have worked with both extensively. It's making them work together that's the problem, and it's something I've never done in C# before.

Reports

The client likes paper. The data needs to be printed out in a format that is extensible, layered, and easy to use. I have never done reporting before, so this is a bit of a problem. From the data source come Crystal Reports, and so there's a dependency on the database, from what I understand.

Code reuse

A large part of the design decisions I've made so far has been to break the task of writing a piece of this software into routines and modular DLLs and so forth, such that much of the code can be reused. For example, when I set up this database, I want to be able to reuse the same database code over and over again. I also want to make sure that when the day comes that another developer is here, he or she will be able to pick up just where I left off. The quicker I develop these applications, the better off I am.

Tasks & Goals

In my project, I need to write routines that apply algorithms and look for predefined patterns in financial data. Additionally, I need to simulate trading based on predefined algorithms and data. Then I need to prepare reports on that data.
Additionally, I need a way to change the code base for this application quickly and effectively, without hacking together some band-aid solution for a problem that really needs a trauma ward.

Special Considerations

The solution must be fast, run quickly on existing hardware, and not be too much of a pain to maintain and write. I understand that anything I write I'm married to – I'm responsible for the things that I write, because my reputation and livelihood depend on it.

Do I really need a database? What about performance? Performance was such a big issue that I hand-wrote a data structure capable of performing 2 billion operations, using a total of 4 GB of memory, in under 1/4 of a second on a standard Core 2 Duo processor. I could not find a similar, pre-written data structure in C# to perform this task.

What setup do I use in terms of database? What about reporting? I'd prefer to have PDFs generated, but I'd like to be able to visually sketch those reports and then just have a ReportFactory of some sort that, when I pass some variables in, produces the report from that data.

About Me

I'm a lone developer for a small business in this area. This is the first time I've done this, and I've never had the breadth and depth of my knowledge tested. I'm incredibly frustrated with this project because I feel incredibly overwhelmed by the task at hand. I'm looking for that entry point where I can draw a line and say "this is what I need to do".

Conclusion

I may not have been clear enough in my post. I'm still new to this whole thing, and I've been doing my best to contribute back to the community that I've leeched so much knowledge from. I'd be glad to edit my post and add more information if possible. I'm looking for a big-picture solution or design process that helps me get off the ground in this world of data-driven applications, because I have a feeling that it's going to be central to my entire career as a programmer for some time. Specifically, if you didn't get it from the rest of the post (I may not have been clear enough), I really need some guidance as to where to go in terms of the design decisions for this project. One thing that would be useful is a pro/con list for the different kinds of database projects available in VS2010. I've tried, but generating that list has been as hard as solving the problem itself... If you could walk a developer through writing a data-driven application for the first time in C#, how would you do that? Where would you point them to?

    Read the article

  • facebook iframe stream.Publish cannot close dialog or skip

    - by fooyee
I'm pulling my hair out over this :( I've spent 10 hours on it but nothing has come of it. I read this thread http://forum.developers.facebook.com/viewtopic.php?pid=198128 but it didn't help much. I'm running a local dev App Engine server (localhost:8080) iframe app, so I have a couple of problems.

1) On Safari 4.0.4, the publish story dialog comes up nicely with all images/data/action_links. Upon posting a story (or skipping), the dialog goes blank and won't close.

2) I tested the same code on Firefox 3.5.8; the dialog comes up with all images/data/action_links, but then the whole thing freezes. Clicking anywhere on the dialog doesn't help at all. If I'm patient enough and click "publish", I have to wait about 10 seconds before the dialog says "story is published". Then it freezes. (Clicking on skip doesn't make a difference.) By the way, there is no "button clicking effect", i.e. the buttons don't look like they "sink down" when clicked. I checked the Firefox memory usage with the command "top" in the terminal; it all seems okay, with no spike in CPU usage (I could open other Firefox tabs and work in them).

My futile attempts at solving the problems...

1) So I thought, hmm, could this be a local dev (localhost) problem? I uploaded the code to the production server; the same thing happens.

2) I tried an older Firefox (3.1) and the same problem persisted (the freezing).

3) I noticed that I kind of used 2 different FB features (Connect and XFBML): the Connect feature I used in the PostStory function, and the XFBML feature just before the </body> tag. So I thought, hmm... I tried replacing the FB_RequireFeatures(["Connect"]) call with FB_RequireFeatures(["XFBML"]). Nothing changed; I still can't close the story dialog.

4) Is there a possibility that I didn't connect to xd_receiver.htm properly? My xd_receiver.htm is stored in the folder /media/fbconnect, and in my app.yaml the handler is:

- url: /fbconnect
  static_dir: media/fbconnect

So I thought, hmm... a connection has to be established with xd_receiver.htm. Is there any way I can test that?

Here is all the code:

<script type="text/javascript">
//post story function
function PostStory() {
    //init facebook
    FB_RequireFeatures(["Connect"], function() {
        FB.Facebook.init('my_app_key', "/fbconnect/xd_receiver.htm");
        FB.ensureInit(function() {
            var message = 'the message';
            var attachment = {
                'name': 'a simple app to send gifts',
                'href': 'http://apps.facebook.com/my_app_name',
                'caption': '{*actor*} sent u something',
                'description': 'some description',
                "media": [{
                    "type": "image",
                    "src": "http://bit.ly/105QYr",
                    "href": "http://bit.ly/105QYr"}]
            };
            //action links can only be seen AFTER the feed is published
            var action_links = [{
                'text': 'Send him/her a gift back!',
                'href': 'http://somelink.com'}];
            FB.Connect.streamPublish(message, attachment, action_links, null,
                "Share the gift with your friends", callback, false, null);
        });
    });
    function callback(post_id, exception) {
        //alert('Wall Post Complete');
    }
}
</script>

Just before the end of the </body> tag, I have this:

<script type="text/javascript">
function callFBInit() {
    FB_RequireFeatures(["XFBML"], function() {
        FB.Facebook.init("my_app_key", "/fbconnect/xd_receiver.htm");
    });
}
callFBInit();
</script>

By the way, my xd_receiver.htm contains:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>cross-domain receiver page</title>
</head>
<body>
<script src="http://static.ak.facebook.com/js/api_lib/v0.4/XdCommReceiver.debug.js" type="text/javascript"></script>
</body>
</html>

Hope you guys can help out. thx
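Update: one more thing I'm going to try, in case the problem is initializing Connect twice (once per FB_RequireFeatures call). This is only a sketch of my own code rearranged, not a verified fix: init once on page load with both features, and have PostStory only call streamPublish:

<script type="text/javascript">
// Initialize once, for both features, when the page loads.
FB_RequireFeatures(["XFBML", "Connect"], function() {
    FB.Facebook.init("my_app_key", "/fbconnect/xd_receiver.htm");
});

function PostStory() {
    FB.ensureInit(function() {
        var message = 'the message';
        var attachment = {
            'name': 'a simple app to send gifts',
            'href': 'http://apps.facebook.com/my_app_name'
        };
        var action_links = [{
            'text': 'Send him/her a gift back!',
            'href': 'http://somelink.com'}];
        FB.Connect.streamPublish(message, attachment, action_links, null,
            "Share the gift with your friends",
            function(post_id, exception) { /* dialog should close here */ },
            false, null);
    });
}
</script>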

    Read the article

  • Facebook require_login() in iFrame App

    - by LapKom
Hi, I have a serious problem with an iframe application. I need to use many external JS libraries and other dynamic stuff, so an FBML application can't be done. When I call require_login() I get the application install dialog when the app is not already installed, which is OK. But then, after authorization, the application enters an endless redirect loop with parameters like auth_token, installed and so on. Yesterday I managed to fix this, but today it's broken again... What the heck is happening with FB? It's driving me crazy trying to find a solution; none of the ones found on the net seems to work. So far I tried:

http://abhirama.wordpress.com/2010/03/07/facebook-iframe-xfbml-app/ (7th March 2010!)
http://forum.developers.facebook.com/viewtopic.php?pid=156092
http://www.keywordintellect.com/facebook-development/how-to-set-up-a-facebook-iframe-application-in-php-in-5-minutes/
http://www.markdeepwell.com/2010/02/validating-a-facebook-session-within-an-iframe/
http://forum.developers.facebook.com/viewtopic.php?pid=210449
http://www.ajaxlines.com/ajax/stuff/article/facebook_fbml_rendering_in_iframe_application.php
http://www.aratide.com/php/solving-the-break-out-issue-in-iframe-facebook-applications/

None of the above worked... According to those and some FB docs:

http://wiki.developers.facebook.com/index.php/FB_RequireFeatures
http://wiki.developers.facebook.com/index.php/Cross_Domain_Communication_Channel

my example test files look as follows:

<?php
//Link in library.
require_once '../application/vendor/Facebook/facebook.php';

//Authentication Keys
$appapikey = 'XXXX';
$appsecret = 'XXXX';

//Construct the class
$facebook = new Facebook($appapikey, $appsecret);

//Require login
$user_id = $facebook->require_login();
?>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml">
<head>
<title></title>
</head>
<body>
<script src="http://static.ak.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script>
This is you: <fb:name uid="<?php echo $user_id?>"></fb:name>
<?php var_dump($facebook->api_client->friends_get())?>
<script type="text/javascript">
FB_RequireFeatures(["XFBML"], function(){
    FB.Facebook.init("<?=$appapikey?>", "xd_receiver.html");
});
</script>
</body>
</html>

And the cross-domain file xd_receiver.html is:

<!doctype html public "-//w3c//dtd xhtml 1.0 strict//en" "http://www.w3.org/tr/xhtml1/dtd/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>cross-domain receiver page</title>
</head>
<body>
<script src="http://static.ak.facebook.com/js/api_lib/v0.4/XdCommReceiver.js" type="text/javascript"></script>
</body>
</html>

How do I get it working? I'm using the Kohana framework to do this, and I have already replaced header('Location') with url::redirect() in the Facebook PHP library.
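Update: the next thing on my list is to stop relying on require_login() entirely and do the check and redirect by hand, breaking out of the iframe with a top-level redirect so the auth_token round trip happens outside the canvas. This is only a sketch: get_loggedin_user() is in the old PHP client, but the exact login URL parameters are my assumption:

<?php
$user_id = $facebook->get_loggedin_user();
if (!$user_id) {
    // Redirect the *top* frame (not the iframe itself) to the login page.
    $login_url = 'http://www.facebook.com/login.php?api_key=' . $appapikey
               . '&v=1.0&next=' . urlencode('http://apps.facebook.com/my_app/');
    echo '<script type="text/javascript">top.location.href = '
       . json_encode($login_url) . ';</script>';
    exit;
}
?>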

    Read the article

  • NINE Questions with Michelle Juett

    - by NINEQuestions
Michelle Juett is one of the more interesting people I know, even though we've never met face to face. She's part artist, part techie and all cool. We "met" via my good buddy George Clingerman and have been plotting to take over the world, errr… I mean "collaborating" ever since. If you happen to live in the Seattle area, you can catch her and her work at Sakura Con on April 2-4, 2010 and various other gamer and art cons throughout the year. You can also find her on Twitter as @Shelldragon. Now that you know a little bit, I'll let her tell you the rest of the story in these NINE Questions:

1. Where are you from?

I was born in Clearwater, Florida. I like to tell people I'm from the Bermuda Triangle; it just makes explaining myself so much easier. My family moved to Washington when I was 5 and I've been in the Pacific Northwest ever since. We like to QQ about the rain but we really love the green trees and clean water.

2. What do you do?

I fight evil by moonlight and win love by daylight... or something like that. I've been in quality assurance for games during the day since January 2008 and an artist for life. I currently work in QA for a really awesome game company in Bellevue. At home, I work on personal digital art, making game assets as well as other random freelance projects as they pop up.

3. How did you get to where you are now?

I'm still not where I want to be, but I'm getting closer. The biggest piece of advice I can give is to work hard and never settle for the minimum required. I tend to overwork myself but I've never regretted it. You can want something really badly, but if you aren't willing to work for it, you can't expect it to just happen. I've always drawn and had an unhealthy love for video games that I was told I'd grow out of. I knew I would not 'grow out' of games, that real adults make them, and that I could too. After I graduated, in searching for jobs, I discovered game testing. I figured this would be a good way to get my foot in the door and start networking. I've worked with consoles, websites and now PC games. I stuck with my journey, although it has been a rocky one, daylighting as a tester and moonlighting as an artist. I'm still on that journey, but I wouldn't have it any other way. Test has given me a perspective that is difficult, if not impossible, to obtain any other way. It gives an unconditional respect for other hard-working testers and an insight into creative problem solving.

4. So video game testing probably sounds WAY cooler than the reality. What's it like? What's a given day like for you?

Game testers don't get a lot of respect because of their stigmas and the fact that most people don't actually know what we do. People hear about us opening and closing disc trays all day. Many places do treat their testers like numbers. It all depends on where you work and how awesome your company is. I've had to deal with a lot of bad work situations to get to a really good one. QA exists to ensure the game is as flawless and enjoyable as it can be by the time it has to leave the nest and go out into the world. This includes everything from the obvious ("can I beat the level and save the princess?") to the more obscure ("What happens when I lose internet connection while trying to save, right before falling into a pit to my death while holding the jump key, and then my cat pulls out my memory card and hides it in her litter box?"). On the dev side, testers can be very scary people, especially when the test team is not in house and you can't see each other's faces.
I've seen both sides. We don't mean to hurt your feelings. We really DO love you and want your game to be the best it can be! It can be some serious tough love.

5. You are also an accomplished artist. Got any major projects right now you'd like to talk about?

LOL, I don't know if I'd say I'm an accomplished artist just yet. I'm still a long way from where I want to be. I figure that's what makes you grow, though: the desire to never stop improving. I like QA, but I want to be a full-time artist. I was lucky enough to register for a table at Sakura Con in the 11-second window before the tables sold out. As such, I'll be selling my wares in the Artist Alley on April 2-4. Part of preparing for this is actually making the art to be sold there. Anime is a fun pastime, but I don't draw a whole lot of it, so I'm making up for lost time.

As I seem to enjoy burying myself in work, I'm an art lead for a secret project that's so secret I might be killed tonight for even mentioning it. I also take on various freelance projects and do what I can to help out indie games. I discovered the XNA community a year and a half ago and developed a love for Indies when I was writing a weekly newsletter on XBLA news. I'm a little late to the party, but I find myself in a unique position where I am an artist and also have technical skills in games. While not a programmer myself, I have a lot of game sense and experience. I hope to make some awesome happen.

Lastly, I have an ongoing web comic (Shell's Angels) that tends to get neglected when I get busy. I still love drawing comics and keep a little book with me to sketch down ideas as they pop into my head. I may pick it back up again as a larger project sometime in the future.

6. Can you talk about any of the other freelance projects you're doing, or are you sworn to secrecy on those too? We wouldn't want a team of game developer ninjas to take you out or anything.

All my projects are currently 2d. I have personal projects such as the ongoing comic, as well as a graphic novel I've been picking at here and there. My main focus until April is Sakura Con, Sakura Con, Sakura Con. I see it as a great way to get exposure and convention experience. I found out I love conventions a couple years ago and I want to get more involved in them.

7. As an artist, what is your weapon of choice? What do you use to get most of your stuff done?

I am a Photoshop Hero and I have the hoodie to prove it. (http://www.pennyarcademerch.com/pah090011.html) I've dabbled in other paint programs, but I always gravitate back to Photoshop. She is my one true love. I'd like to learn programs like Flash or Anime Studio when I get a bit more time, because of their animation abilities. I've worked on frame-by-frame animation forever, but I would love to learn 2d rigging. Still, nothing can compare to a simple sketchpad and a pencil. I always have one on me in case I come across or think of something interesting and can't get to a computer. If the Courier ever comes to exist, it will be an ideal weapon for me.

8. You did some videos too, depicting the art creation process. What was the motivation behind those?

The creative process is just as important as the final product, if not more so. I've always loved watching speed paint videos and wanted to try it out myself. Turns out it's a lot of work and time, but it's definitely fun to go back and rewatch them. Art isn't always the end result and is more often the process itself.

9. Got any interesting tattoos? Designed any for yourself or other people?
Not yet, but not for lack of desire. I've toiled over what and where for years. Last year, I finally decided the back of my shoulders would be the place. Like anything permanent, I want it to have meaning. I thought of somehow incorporating games, but I couldn't find something I felt would stand the test of time, even with all the classic sprite games. I'm very picky, so we'll see if I can get something solid decided.

Come see me at Sakura Con April 2-4!!!

    Read the article

  • Standards Corner: OAuth WG Client Registration Problem

    - by Tanu Sood
Phil Hunt is an active member of multiple industry standards groups and committees (see brief bio at the end of the post) and has spearheaded discussions, creation and ratification of industry standards including the Kantara Identity Governance Framework, among others. Being an active voice in the industry standards development world, we have invited him to share his discussions, thoughts, news & updates, and discuss use cases, implementation success stories (and even failures) around industry standards in this monthly column.

Author: Phil Hunt

This afternoon, the OAuth Working Group will meet at IETF88 in Vancouver to discuss some topics important to the maturation of OAuth. One of them is the OAuth client registration problem.

OAuth (RFC6749) was initially developed with a simple deployment model where there is only a monopoly or singleton cloud instance of a web API (e.g. there is one Facebook, one Google, one LinkedIn, and so on). When the API publisher and API deployer are the same monolithic entity, it is easy for developers to contact the provider and register their app to obtain a client_id and credential. But what happens when the API is for an open source project where there may be thousands of deployed copies of the API (e.g. WordPress)? In these cases, the authors of the API are not the people running the API. In these scenarios, how does the developer obtain a client_id?

An example of an openly deployed API is OpenID Connect. Connect defines an OAuth-protected resource API that can provide personal information about an authenticated user – in effect creating a potentially common API across identity providers like Facebook, Google, Microsoft, Salesforce, or Oracle. In Oracle's case, Fusion applications will soon have RESTful APIs that are deployed in many different ways in many different environments. How will developers write apps that can work against an openly deployed API with whom the developer can have no prior relationship?

At present, the OAuth Working Group has two proposals to consider:

Dynamic Registration

Dynamic Registration was originally developed for OpenID Connect and UMA. It defines a RESTful API in which a prospective client application with no client_id creates a new client registration record with a service provider and is issued a client_id and credential, along with a registration token that can be used to update the registration over time. As proof of success, the OIDC community has done substantial implementation of this spec and feels committed to its use.

Why not approve? Well, the answer is that some of us had concerns, namely:

Recognizing instances of software – dynamic registration treats all clients as unique.
It has no defined way to recognize that multiple copies of the same client are being registered, other than assuming that if the registration parameters are similar it might be the same client.

Versioning and policy approval of open APIs and clients – many service providers have to worry about change management. They expect to have approval cycles that approve versions of server and client software for use in their environment. In some cases approval might be wide open, but in many cases approval might come down to the specific class of software and version.

Registration updates – when does a client actually need to update its registration? Shouldn't it be never? Is there some characteristic of deployed code that would cause it to change?

Options lead to complexity – because each client is treated as unique, it becomes unclear how clients and servers will agree on which credential forms are acceptable and which OAuth features are allowed and disallowed. Yet the reality is, developers will write their application to work in a limited number of ways. They can't implement all the permutations and combinations that potential service providers might choose.

Stateful registration – if the primary motivation for registration is to obtain a client_id and credential, why can't this be done in a stateless fashion using assertions?

Denial of service – with so much stateful registration and the need for multiple tokens to be issued, will this not lead to a denial-of-service attack / risk of resource depletion? At the very least, because of the information gathered, it would be difficult for service providers to clean up "failed" registrations and distinguish active from inactive or false clients.

There has yet to be much wide-scale "production" use of dynamic registration other than in small closed communities.

Client Association

A second proposal, Client Association, has been put forward by Tony Nadalin of Microsoft and myself. We took a look at existing usage patterns to come up with a new proposal. At the Berlin meeting, we considered how WS-STS systems work. More recently, I reviewed how mobile messaging clients work. I looked at how Apple, Google, and Microsoft each handle registration with APNS, GCM, and WNS, and a similar pattern emerges. The pattern is to use an existing credential (mutual TLS auth) or a client bearer assertion and swap it for a device-specific bearer assertion.

In the client association proposal, the developer's registration is handled by having the developer register with the API publisher (as opposed to the party deploying the API) and obtain a software "statement". Or, if there is no "publisher" that can sign a statement, the developer may include their own self-asserted software statement. A software statement is a special type of assertion that serves to lock application registration profile information in a signed assertion. The statement is included with the client application and can then be used by the client to swap for an instance-specific client assertion, as defined by section 4.2 of the OAuth Assertion draft and profiled in the Client Association draft.

The software statement gives the service provider a way to recognize and configure policy to approve classes of software clients, and it simplifies the actual registration to a simple assertion swap. Because the registration is an assertion swap, registration is no longer "stateful" – meaning the service provider does not need to store any information to support the client (unless it wants to).
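To make the contrast concrete, here is a rough, non-normative sketch of the two styles; the field names follow the drafts as I read them today, and the details should be treated as illustrative rather than authoritative. Dynamic registration is a stateful JSON POST to a registration endpoint:

POST /register HTTP/1.1
Content-Type: application/json

{
  "client_name": "Example Mail Client",
  "redirect_uris": ["https://client.example.org/callback"],
  "token_endpoint_auth_method": "client_secret_basic"
}

...to which the server responds with a stored client_id, credential, and registration token. Client association instead reuses the existing RFC6749 extension point: the client presents its (publisher- or self-signed) software statement as an assertion, per the OAuth Assertion draft's client_assertion parameters, and swaps it for an instance-specific credential, along the lines of:

POST /token HTTP/1.1
Content-Type: application/x-www-form-urlencoded

client_assertion_type=urn%3Aietf%3Aparams%3Aoauth%3Aclient-assertion-type%3Ajwt-bearer
&client_assertion=<signed software statement JWT>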
Has this been implemented yet? Not directly. We've only delivered draft 00, as an alternate way of solving the problem using well-known patterns whose security and scale characteristics are well understood.

Dynamic Take II

At roughly the same time that Client Association and Software Statement were published, the authors of Dynamic Registration published a "split" version of Dynamic Registration (draft-richer-oauth-dyn-reg-core and draft-richer-oauth-dyn-reg-management). While some of the concerns above are addressed, some differences remain. Registration is now a simple POST request. However, it defines a new method for issuing client tokens, whereas Client Association uses RFC6749's existing extension point. The concern here is whether future client access token formats would be addressed properly. Finally, dyn-reg-core does not yet support software statements.

Conclusion

The WG has some interesting discussion ahead to bring this back to a single set of specifications. Dynamic Registration has significant implementation, but Client Association could be a much-improved way to simplify implementation of the overall OpenID Connect specification and improve adoption. In fairness, the existing editors have already come a long way. Yet there are those with significant investment in the current draft. There are many who have expressed that they don't care – they just want a standard. There is lots of pressure on the working group to reach consensus quickly. And that, folks, is how the sausage is made.

Note: John Bradley and Justin Richer recently published draft-bradley-stateless-oauth-client-00, which on first look gets closer. Some of the details seem less well defined, but the same could be said of client-assoc and software-statement. I hope we can merge these specs this week.

About the Writer:

Phil Hunt joined Oracle as part of the November 2005 acquisition of OctetString Inc., where he headed software development for what is now Oracle Virtual Directory. Since joining Oracle, Phil works as CMTS in the Identity Standards group at Oracle, where he developed the Kantara Identity Governance Framework and provided significant input to JSR 351. Phil participates in several standards development organizations, such as IETF and OASIS, working on federation, authorization (OAuth), and provisioning (SCIM) standards. Phil blogs at www.independentid.com and tweets as @independentid.

    Read the article

  • SQL Spatial: Getting “nearest” calculations working properly

    - by Rob Farley
If you've ever done spatial work with SQL Server, I hope you've come across the 'nearest' problem. You have five thousand stores around the world, and you want to identify the one that's closest to a particular place. Maybe you want the store closest to the LobsterPot office in Adelaide, at -34.925806, 138.605073. Or our new US office, at 42.524929, -87.858244. Or maybe both!

You know how to do this. You don't want to use an aggregate MIN or MAX, because you want the whole row, telling you which store it is. You want to use TOP, and if you want to find the closest store for multiple locations, you use APPLY. Let's do this (but I'm going to use addresses in AdventureWorks2012, as I don't have a list of stores). Oh, and before I do, let's make sure we have a spatial index in place. I'm going to use the default options.

CREATE SPATIAL INDEX spin_Address ON Person.Address(SpatialLocation);

And my actual query:

WITH MyLocations AS
(SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                       ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
               ) t (Name, Geo))
SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
FROM MyLocations AS l
CROSS APPLY
(
    SELECT TOP (1) *
    FROM Person.Address AS ad
    ORDER BY l.Geo.STDistance(ad.SpatialLocation)
) AS a
JOIN Person.StateProvince AS s
    ON s.StateProvinceID = a.StateProvinceID
JOIN Person.CountryRegion AS c
    ON c.CountryRegionCode = s.CountryRegionCode
;

Great! This is definitely working. I know both those City locations, even if the AddressLine1s don't quite ring a bell. I'm sure I'll be able to find them next time I'm in the area.

But of course what I'm concerned about from a querying perspective is what's happened behind the scenes – the execution plan. This isn't pretty. It's not using my index. It's sucking every row out of the Address table TWICE (which sucks), and then it's sorting them by the distance to find the smallest one. It's not pretty, and it takes a while. Mind you, I do like the fact that it saw an indexed view it could use for the State and Country details – that's pretty neat. But yeah – users of my nifty website aren't going to like how long that query takes.

The frustrating thing is that I know I can use the index to find locations that are within a particular distance of my locations quite easily, and Microsoft recommends this for solving the 'nearest' problem, as described at http://msdn.microsoft.com/en-au/library/ff929109.aspx. Now, in the first example on this page, it says that the query there will use the spatial index. But when I run it on my machine, it does nothing of the sort. I'm not particularly impressed. But what we see here is that parallelism has kicked in. In my scenario, it's split the data up into 4 threads, but it's still slow, and not using my index. It's disappointing.

But I can persuade it with hints! If I tell it to FORCESEEK, or use my index, or even turn off the parallelism with MAXDOP 1, then I get the index being used, and it's a thing of beauty! Part of the plan is here: it's massive, and it's ugly, and it uses a TVF… but it's quick. The way it works is to hook into the GeodeticTessellation function, which essentially finds where the point is and works through the spatial index cells that surround it. This then provides a framework for seeing into the spatial index for the items we want.
You can read more about it at http://msdn.microsoft.com/en-us/library/bb895265.aspx#tessellation – including a bunch of pretty diagrams. One of those times when we have a much more complex-looking plan, but just because of the good that's going on. This tessellation stuff was introduced in SQL Server 2012. But my query isn't using it. When I try to use the FORCESEEK hint on the Person.Address table, I get the friendly error:

Msg 8622, Level 16, State 1, Line 1
Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN.

And I'm almost tempted to just give up and move back to the old method of checking increasingly large circles around my location. After all, I can even leverage multiple OUTER APPLY clauses just like I did in my recent Lookup post.

WITH MyLocations AS
(SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                       ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
               ) t (Name, Geo))
SELECT
    l.Name,
    COALESCE(a1.AddressLine1,a2.AddressLine1,a3.AddressLine1),
    COALESCE(a1.City,a2.City,a3.City),
    s.Name AS [State],
    c.Name AS Country
FROM MyLocations AS l
OUTER APPLY
(
    SELECT TOP (1) *
    FROM Person.Address AS ad
    WHERE l.Geo.STDistance(ad.SpatialLocation) < 1000
    ORDER BY l.Geo.STDistance(ad.SpatialLocation)
) AS a1
OUTER APPLY
(
    SELECT TOP (1) *
    FROM Person.Address AS ad
    WHERE l.Geo.STDistance(ad.SpatialLocation) < 5000
    AND a1.AddressID IS NULL
    ORDER BY l.Geo.STDistance(ad.SpatialLocation)
) AS a2
OUTER APPLY
(
    SELECT TOP (1) *
    FROM Person.Address AS ad
    WHERE l.Geo.STDistance(ad.SpatialLocation) < 20000
    AND a2.AddressID IS NULL
    ORDER BY l.Geo.STDistance(ad.SpatialLocation)
) AS a3
JOIN Person.StateProvince AS s
    ON s.StateProvinceID = COALESCE(a1.StateProvinceID,a2.StateProvinceID,a3.StateProvinceID)
JOIN Person.CountryRegion AS c
    ON c.CountryRegionCode = s.CountryRegionCode
;

But this isn't friendly-looking at all, and I'd use the method recommended by Isaac Kunen, who uses a table of numbers for the expanding circles. It feels old-school though, when I'm dealing with SQL 2012 (and later) versions. So why isn't my query doing what it's supposed to? Remember the query...

WITH MyLocations AS
(SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                       ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
               ) t (Name, Geo))
SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
FROM MyLocations AS l
CROSS APPLY
(
    SELECT TOP (1) *
    FROM Person.Address AS ad
    ORDER BY l.Geo.STDistance(ad.SpatialLocation)
) AS a
JOIN Person.StateProvince AS s
    ON s.StateProvinceID = a.StateProvinceID
JOIN Person.CountryRegion AS c
    ON c.CountryRegionCode = s.CountryRegionCode
;

Well, I just wasn't reading http://msdn.microsoft.com/en-us/library/ff929109.aspx properly. The following requirements must be met for a Nearest Neighbor query to use a spatial index:

A spatial index must be present on one of the spatial columns and the STDistance() method must use that column in the WHERE and ORDER BY clauses.
The TOP clause cannot contain a PERCENT statement.
The WHERE clause must contain a STDistance() method.
If there are multiple predicates in the WHERE clause then the predicate containing the STDistance() method must be connected by an AND conjunction to the other predicates. The STDistance() method cannot be in an optional part of the WHERE clause.
The first expression in the ORDER BY clause must use the STDistance() method.
Sort order for the first STDistance() expression in the ORDER BY clause must be ASC.
All the rows for which STDistance returns NULL must be filtered out.

Let's start from the top.

1. Needs a spatial index on one of the columns that's in the STDistance call. Yup, got the index.
2. No 'PERCENT'. Yeah, I don't have that.
3. The WHERE clause needs to use STDistance(). Ok, but I'm not filtering, so that should be fine.
4. Yeah, I don't have multiple predicates.
5. The first expression in the ORDER BY is my distance, that's fine.
6. Sort order is ASC, because otherwise we'd be starting with the ones that are furthest away, and that's tricky.
7. All the rows for which STDistance returns NULL must be filtered out. But I don't have any NULL values, so that shouldn't affect me either.

...but something's wrong. I do actually need to satisfy #3. And I do need to make sure #7 is being handled properly, because there are some situations (e.g. differing SRIDs) where STDistance can return NULL. It says so at http://msdn.microsoft.com/en-us/library/bb933808.aspx – "STDistance() always returns null if the spatial reference IDs (SRIDs) of the geography instances do not match." So if I simply make sure that I'm filtering out the rows that return NULL (see the sketch below)… …then it's blindingly fast, I get the right results, and I've got the complex-but-brilliant plan that I wanted. It just wasn't overly intuitive, despite being documented.
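Based on the requirements above, the fix presumably just adds the STDistance() predicate to the WHERE clause – this is my reconstruction of the final query, not a verbatim copy from the original post:

WITH MyLocations AS
(SELECT * FROM (VALUES ('LobsterPot Adelaide', geography::Point(-34.925806, 138.605073, 4326)),
                       ('LobsterPot USA', geography::Point(42.524929, -87.858244, 4326))
               ) t (Name, Geo))
SELECT l.Name, a.AddressLine1, a.City, s.Name AS [State], c.Name AS Country
FROM MyLocations AS l
CROSS APPLY
(
    SELECT TOP (1) *
    FROM Person.Address AS ad
    WHERE l.Geo.STDistance(ad.SpatialLocation) IS NOT NULL  -- satisfies requirements #3 and #7
    ORDER BY l.Geo.STDistance(ad.SpatialLocation)
) AS a
JOIN Person.StateProvince AS s
    ON s.StateProvinceID = a.StateProvinceID
JOIN Person.CountryRegion AS c
    ON c.CountryRegionCode = s.CountryRegionCode
;

@rob_farley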

    Read the article

  • How to configure a GridView CommandField to trigger Full Page Update in UpdatePanel

    - by Frinavale
I have 2 user controls on my page. One is used for searching, and the other is used for editing (along with a few other things). The user control that provides the search functionality uses a GridView to display the search results. This GridView has a CommandField used for editing (ShowEditButton="true"). I would like to place the GridView into an UpdatePanel so that paging through the search results will be smooth. The thing is, when the user clicks the Edit link (the CommandField) I need to perform a full page postback so that the search user control can be hidden and the edit user control can be displayed.

Edit: the reason I need a full page postback is that the edit user control is outside the UpdatePanel that my GridView is in. It is not only outside the UpdatePanel, it's in a completely different user control.

I have no idea how to add the CommandField as a full-page-postback trigger for the UpdatePanel. The PostBackTrigger (which is used to indicate the controls that cause a full page postback) takes a ControlID as a parameter; however, the CommandField does not have an ID... and you can see why I'm having a problem with this.

Update on what else I've tried to solve the problem:

I took a new approach. Instead of a CommandField, I used a TemplateField, placed a LinkButton control in it, and gave it a name. During the GridView's RowDataBound event I retrieved the LinkButton control and added it to the UpdatePanel's triggers.

This is the ASP markup for the UpdatePanel and the GridView:

<asp:UpdatePanel ID="SearchResultsUpdateSection" runat="server">
  <ContentTemplate>
    <asp:GridView ID="SearchResultsGrid" runat="server" AllowPaging="true" AutoGenerateColumns="false">
      <Columns>
        <asp:TemplateField>
          <HeaderTemplate></HeaderTemplate>
          <ItemTemplate>
            <asp:LinkButton ID="Edit" runat="server" Text="Edit"></asp:LinkButton>
          </ItemTemplate>
        </asp:TemplateField>
        <asp:BoundField ......
      </Columns>
    </asp:GridView>
  </ContentTemplate>
</asp:UpdatePanel>

In my VB.NET code I implemented a function that handles the GridView's RowDataBound event. In this method I find the LinkButton for the row being bound, create a PostBackTrigger for it, and add it to the UpdatePanel's triggers. This should mean a PostBackTrigger is created for every "Edit" LinkButton in the GridView. Edit: this did not create a PostBackTrigger for every "Edit" LinkButton, because the ID is the same for all LinkButtons in the GridView.

Private Sub SearchResultsGrid_RowDataBound(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewRowEventArgs) Handles SearchResultsGrid.RowDataBound
    If e.Row.RowType = DataControlRowType.Header Then
        'I am doing stuff here that does not pertain to the problem
    Else
        Dim editLink As LinkButton = CType(e.Row.FindControl("Edit"), LinkButton)
        If editLink IsNot Nothing Then
            Dim fullPageTrigger As New PostBackTrigger
            fullPageTrigger.ControlID = editLink.ID
            SearchResultsUpdateSection.Triggers.Add(fullPageTrigger)
        End If
    End If
End Sub

And instead of handling the GridView's RowEditing event for editing purposes, I use RowCommand instead:

Private Sub SearchResultsGrid_RowCommand(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.GridViewCommandEventArgs) Handles SearchResultsGrid.RowCommand
    RaiseEvent EditRecord(Me, New EventArgs())
End Sub

This new approach didn't work at all, because all of the LinkButtons in the GridView have the same ID.
After reading the MSDN article on the UpdatePanel.Triggers Property I have the impression that Triggers can only be defined declaratively. This would mean that anything I did in the VB code wouldn't work. Any advise would be greatly appreciated. Thanks, -Frinny
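    Editor's note: ASP.NET AJAX has a documented API aimed at exactly this situation: ScriptManager.RegisterPostBackControl, which registers a specific control instance (not a ControlID string) for full postbacks, so duplicate IDs across rows stop mattering. A minimal sketch, assuming the page hosting the UpdatePanel contains a ScriptManager (ScriptManager.GetCurrent resolves it) and replacing the Triggers code above:

        Private Sub SearchResultsGrid_RowDataBound(ByVal sender As Object, ByVal e As GridViewRowEventArgs) Handles SearchResultsGrid.RowDataBound
            If e.Row.RowType = DataControlRowType.DataRow Then
                Dim editLink As LinkButton = CType(e.Row.FindControl("Edit"), LinkButton)
                If editLink IsNot Nothing Then
                    ' Ask the ScriptManager to treat this specific LinkButton as a
                    ' full-postback control, bypassing the UpdatePanel's async behavior.
                    ' GetCurrent returns Nothing if the page has no ScriptManager.
                    ScriptManager.GetCurrent(Me.Page).RegisterPostBackControl(editLink)
                End If
            End If
        End Sub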

    Read the article

  • Apache mod_rewrite driving me mad

    - by WishCow
    The scenario: I have a webhost that is shared among multiple sites; the directory layout looks like this:

        siteA/
          css/
          js/
          index.php
        siteB/
          css/
          js/
          index.php
        siteC/
          ...

    The DocumentRoot is at the top level, so to access siteA you type http://webhost/siteA in your browser, to access siteB you type http://webhost/siteB, and so on. Now I have to deploy my own site, which was designed with 3 VirtualHosts in mind, so my structure looks like this:

        siteD/
          sites/sitename.com/
            log/
            htdocs/
              index.php
          sites/static.sitename.com/
            log/
            htdocs/
              css/
              js/
          sites/admin.sitename.com/
            log/
            htdocs/
              index.php

    As you can see, the problem is that my index.php files are not in the top-level directory, unlike the already existing sites on the webhost. Each VirtualHost should point to the corresponding htdocs/ folder:

        http://siteD.com        -> siteD/sites/sitename.com/htdocs
        http://static.siteD.com -> siteD/sites/static.sitename.com/htdocs
        http://admin.siteD.com  -> siteD/sites/admin.sitename.com/htdocs

    The problem: I cannot have VirtualHosts on this host, so I have to emulate them somehow, possibly with mod_rewrite.

    The idea: have some predefined parts in all of the links on the site that I can identify and route to the correct file with mod_rewrite. Examples:

        http://webhost/siteD/static/js/something.js   -> siteD/sites/static.sitename.com/htdocs/js/something.js
        http://webhost/siteD/static/css/something.css -> siteD/sites/static.sitename.com/htdocs/css/something.css
        http://webhost/siteD/admin/something          -> siteD/sites/admin.sitename.com/htdocs/index.php
        http://webhost/siteD/admin/sub/something      -> siteD/sites/admin.sitename.com/htdocs/index.php
        http://webhost/siteD/something                -> siteD/sites/sitename.com/htdocs/index.php
        http://webhost/siteD/sub/something            -> siteD/sites/sitename.com/htdocs/index.php

    Anything that starts with http://url/sitename/admin/(.*) will get rewritten to point to siteD/sites/admin.sitename.com/htdocs/index.php. Anything that starts with http://url/sitename/static/(.*) will get rewritten to point to siteD/sites/static.sitename.com/htdocs/$1. Anything that starts with http://url/sitename/(.*) AND did not already match above will get rewritten to point to siteD/sites/sitename.com/htdocs/index.php.

    The solution: here is the .htaccess file that I've come up with:

        RewriteEngine On
        RewriteBase /

        RewriteCond %{REQUEST_URI} ^/siteD/static/(.*)$ [NC]
        RewriteRule ^siteD/static/(.*)$ siteD/sites/static/htdocs/$1 [L]

        RewriteCond %{REQUEST_URI} ^/siteD/admin/(.*)$ [NC]
        RewriteRule ^siteD/(.*)$ siteD/sites/admin/htdocs/index.php [L,QSA]

    So far, so good. It's all working. Now to add the last rule:

        RewriteCond %{REQUEST_URI} ^/siteD/(.*)$ [NC]
        RewriteRule ^siteD/(.*)$ siteD/sites/public/htdocs/index.php [L,QSA]

    And it's broken. The last rule catches everything, even requests that have static/ or admin/ in them. Why? Shouldn't the [L] flag stop the rewriting process in the first two cases? Why is the third case evaluated? Is there a better way of solving this? I'm not wedded to mod_rewrite; anything is fine as long as it does not need access to server-level config. I don't have access to RewriteLog or anything like that. Please help :(
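    Editor's note: the usual explanation is that in per-directory (.htaccess) context, [L] only ends the current pass through the rule set. mod_rewrite then re-injects the rewritten URL and runs the rules again from the top, and since siteD/sites/static/htdocs/... still begins with siteD/, the catch-all matches on that second pass. A sketch of one common fix, keeping the poster's paths: guard the catch-all so already-routed URLs are excluded (on Apache 2.4 the [END] flag is an alternative that stops all further passes).

        RewriteEngine On
        RewriteBase /

        RewriteCond %{REQUEST_URI} ^/siteD/static/(.*)$ [NC]
        RewriteRule ^siteD/static/(.*)$ siteD/sites/static/htdocs/$1 [L]

        RewriteCond %{REQUEST_URI} ^/siteD/admin/(.*)$ [NC]
        RewriteRule ^siteD/(.*)$ siteD/sites/admin/htdocs/index.php [L,QSA]

        # Skip anything already rewritten into the real tree; otherwise this
        # catch-all re-matches the output of the rules above on the next pass.
        RewriteCond %{REQUEST_URI} !^/siteD/sites/ [NC]
        RewriteRule ^siteD/(.*)$ siteD/sites/public/htdocs/index.php [L,QSA]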

    Read the article

  • FMOD.net streaming, callback and exinfo parameters

    - by Tesserex
    I posted a question on gamedev about how to play NSF files (NES console music) in FMOD. It didn't get any results, but since then I've made some progress. I decided that the easiest method was just to compile an existing player into a DLL and then call it from C# to populate my buffer. The problem now is getting it to sound right, and making sure all my parameters are correct. Here are the facts so far:

    1. The NSF DLL is dealing with shorts, so the data is PCM16.
    2. The sample NSF I'm using has a playback rate of 60 Hz.
    3. Just for playing around now, I'm using a frequency of 48000.
    4. Based on 2 and 3, the DLL calculates a necessary buffer size of 48000 / 60 Hz = 800. This means it will render 800 shorts' worth of buffer for every simulated NES frame.

    I've so far got my C# code to play the NSF at the correct pitch and tempo, but it's very grainy/fuzzy, which I'm attributing to the fact that the FMOD read callback is giving a data length of 1600, whereas I should be expecting 800. I've tried playing around with all the numbers and it either crashes, or the music changes pitch, tempo, or both. Here's some of my C# code:

        uint channels = 1, frequency = 48000;
        FMOD.MODE mode = (FMOD.MODE.DEFAULT | FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL);
        FMOD.Sound sound = new FMOD.Sound();

        FMOD.CREATESOUNDEXINFO ex = new FMOD.CREATESOUNDEXINFO();
        ex.cbsize = Marshal.SizeOf(ex);
        ex.fileoffset = 0;
        ex.format = FMOD.SOUND_FORMAT.PCM16;
        // does this even matter? It doesn't change my results as long as it's long enough for one update
        ex.length = frequency;
        ex.numchannels = (int)channels;
        ex.defaultfrequency = (int)frequency;
        ex.pcmreadcallback = pcmreadcallback;
        ex.dlsname = null;
        // eventually I will calculate this with frequency / nsf hz, but I'm just testing for now
        ex.decodebuffersize = 800;

        // from the dll; 8 is the track number to play
        load_nsf_file("file.nsf", 8, (int)frequency);

        var result = system.createSound((string)null, (mode | FMOD.MODE.CREATESTREAM), ref ex, ref sound);
        channel = new FMOD.Channel();
        result = system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);

        private FMOD.RESULT pcmreadcallback(IntPtr soundraw, IntPtr data, uint datalen)
        {
            // from the dll; if I use datalen, it usually crashes (I can't get datalen to = 800 safely)
            process_buffer(data, (int)800);
            return FMOD.RESULT.OK;
        }

    So here are some of my questions:

    1. What is the relationship between exinfo.decodebuffersize, frequency, and the datalen parameter of the read callback? With this code sample, it's coming in as 3200. I don't know where that factor of 4 between it and the decodebuffersize comes from.
    2. Is datalen in the callback referring to a number of bytes, or shorts? The process_buffer function takes a short array and its length. I would expect FMOD is talking about shorts as well, because I told it PCM16.
    3. Maybe my playback quality is bad for some totally different reason. If so, I have no idea where to begin solving that. Any ideas there?
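    Editor's note: a likely culprit, hedged because it depends on the FMOD Ex wrapper in use: the pcmreadcallback's datalen is documented in bytes, while the NSF renderer counts 16-bit samples. At PCM16 mono, 800 samples is exactly the 1600 bytes observed. A sketch of the callback under that assumption (process_buffer and its signature come from the poster's DLL):

        private FMOD.RESULT pcmreadcallback(IntPtr soundraw, IntPtr data, uint datalen)
        {
            // datalen is in BYTES (FMOD Ex convention); the NSF renderer wants a
            // count of 16-bit samples, so convert before handing the buffer over.
            int numShorts = (int)(datalen / sizeof(short)); // e.g. 1600 bytes -> 800 shorts
            process_buffer(data, numShorts);
            return FMOD.RESULT.OK;
        }

    If datalen sometimes arrives as 3200, that would be consistent with FMOD requesting more than one decode buffer per call; honoring datalen instead of hard-coding 800 keeps the stream continuous either way.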

    Read the article

  • What would cause ANY .NET application to crash immediately... except a project I create and Debug in

    - by blak3r
    My software recently got deployed to a customer who said that the application was crashing immediately after it started. After some initial debugging, the customer provided me remote access to one of the computers, which was unable to run the application. I found that the crash wasn't specific to my application: any application which depended on the .NET Framework crashed immediately. Conveniently, Visual Studio 2008 was installed, so I created a quick hello-world application on it and clicked Debug. The application worked fine. But when I tried to execute the generated binary (/bin/Debug/HelloWorld.exe) outside of Visual Studio, it crashed.

    List of things I've tried (UPDATED):

    1. Checked that "Everyone" has Read & Execute permissions for C:\Windows.
    2. To test that the problem was with the .NET Framework (and not my application), I attempted to download Paint.NET onto the computer. The setup frontend crashed in the same manner.
    3. Performed a repair of the .NET Framework as outlined in http://support.microsoft.com/kb/908077 (boy was this fun and time consuming). No luck.
    4. Installed .NET 3.5 SP1 (before, it just had .NET 3.5). Note: my application targets 2.0, so I did this more as a long shot... but I learned in the process that .NET 3.5 SP1 also updates the underlying frameworks.
    5. Ran Aaron Stebner's .NET Setup Verification Tool. This tool indicated that .NET was successfully installed. (I forget if I checked all the versions, but at least 2.0 worked.)
    6. Tested some mini hello-world applications targeted at .NET 2.0 and .NET 3.5; both crashed in the same way.
    7. Tried launching .NET apps via the WinDbg command line. Doing this did allow me to invoke my simple hello-world applications. So, a simple .NET hello world works when invoked by WinDbg or launched via Debug in Visual Studio, but doesn't if I try to execute it standalone.
    8. Created a dump file using WinDbg. It wasn't all that revealing to me:

        FAULTING_IP:
        mscorwks!PEImage::GetEntryPointToken+21
        79f4ff9d f6401010  test byte ptr [eax+10h],10h

        EXCEPTION_RECORD: 0012f710 -- (.exr 0x12f710)
        ExceptionAddress: 79f4ff9d (mscorwks!PEImage::GetEntryPointToken+0x00000021)
        ExceptionCode: c0000005 (Access violation)
        ExceptionFlags: 00000000
        NumberParameters: 2
        Parameter[0]: 00000000
        Parameter[1]: 00000010
        Attempt to read from address 00000010

        FAULTING_THREAD: 00000b44
        PROCESS_NAME: MyProcess.exe
        ERROR_CODE: (NTSTATUS) 0x80000003 - {EXCEPTION} Breakpoint. A breakpoint has been reached.
        EXCEPTION_CODE: (HRESULT) 0x80000003 (2147483651) - One or more arguments are invalid
        DETOURED_IMAGE: 1
        NTGLOBALFLAG: 0
        APPLICATION_VERIFIER_FLAGS: 0

        MANAGED_STACK: !dumpstack -EE
        OS Thread Id: 0xb44 (0)
        Current frame:
        ChildEBP RetAddr Caller,Callee

        EXCEPTION_OBJECT: !pe cb10b4
        Exception object: 00cb10b4
        Exception type: System.ExecutionEngineException
        Message: <none>
        InnerException: <none>
        StackTrace (generated): <none>
        StackTraceString: <none>
        HResult: 80131506

        MANAGED_OBJECT_NAME: System.ExecutionEngineException

        CONTEXT: 0012f72c -- (.cxr 0x12f72c)
        eax=00000000 ebx=00000000 ecx=00000000 edx=0000000e esi=001a1490 edi=00000001
        eip=79f4ff9d esp=0012f9f8 ebp=0012fa1c iopl=0  nv up ei pl zr na pe nc
        cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000  efl=00010246
        mscorwks!PEImage::GetEntryPointToken+0x21:
        79f4ff9d f6401010  test byte ptr [eax+10h],10h  ds:0023:00000010=??
        Resetting default scope

        READ_ADDRESS: 00000010

        FOLLOWUP_IP:
        mscorwks!PEImage::GetEntryPointToken+21
        79f4ff9d f6401010  test byte ptr [eax+10h],10h

        BUGCHECK_STR: APPLICATION_FAULT_NULL_CLASS_PTR_DEREFERENCE_SHUTDOWN
        PRIMARY_PROBLEM_CLASS: NULL_CLASS_PTR_DEREFERENCE_SHUTDOWN
        DEFAULT_BUCKET_ID: NULL_CLASS_PTR_DEREFERENCE_SHUTDOWN
        LAST_CONTROL_TRANSFER: from 79ef02b5 to 79f4ff9d

        STACK_TEXT:
        79f4ff9d mscorwks!PEImage::GetEntryPointToken+0x21
        79ef02b5 mscorwks!PEFile::GetEntryPointToken+0xa0
        79eefeaf mscorwks!SystemDomain::ExecuteMainMethod+0xd4
        79fb9793 mscorwks!ExecuteEXE+0x59
        79fb96df mscorwks!_CorExeMain+0x15c
        7900b1b3 mscoree!_CorExeMain+0x2c
        7c817077 kernel32!BaseProcessStart+0x23

        SYMBOL_STACK_INDEX: 0
        SYMBOL_NAME: mscorwks!PEImage::GetEntryPointToken+21
        FOLLOWUP_NAME: MachineOwner
        MODULE_NAME: mscorwks
        IMAGE_NAME: mscorwks.dll
        DEBUG_FLR_IMAGE_TIMESTAMP: 471ef729
        STACK_COMMAND: .cxr 0012F72C ; kb ; dds 12f9f8 ; kb
        FAILURE_BUCKET_ID: NULL_CLASS_PTR_DEREFERENCE_SHUTDOWN_80000003_mscorwks.dll!PEImage::GetEntryPointToken
        BUCKET_ID: APPLICATION_FAULT_NULL_CLASS_PTR_DEREFERENCE_SHUTDOWN_DETOURED_mscorwks!PEImage::GetEntryPointToken+21
        WATSON_STAGEONE_URL: http://watson.microsoft.com/StageOne/MyProcess_exe/2_4_4_39/4a8a192c/unknown/0_0_0_0/bbbbbbb4/80000003/00000000.htm?Retriage=1

        Followup: MachineOwner

    Edit 1: The event log details for this error say it's a .NET Runtime version 2.0.50727.3053 - Fatal Execution Engine Error (7A097706)(80131506).
    Edit 2 (10-7-09): This issue is still active.
    Edit 3 (3-29-10): This update is to let everyone know that I never did successfully solve the problem. The customer whose machine this was on lost interest in solving it and just reimaged the machine :(. Thanks for all the contributions though.

    Read the article

  • Is this question too hard for a seasoned C++ architect?

    - by Monomer
    Background information: we're looking to hire a seasoned C++ architect (10+ years dev, of which at least 6 years must be C++) for a high-frequency trading platform. The job advert says STL and Boost proficiency is a must, with a preference for modern uses of C++. The company I work for is a Fortune 500 IB (aka the finance industry); it requires passes in all the standard SHL tests (numeric, vocab, spatial, etc.) before interviews can commence. Everyone on the team was given the task of coming up with one question to ask the candidates during a written/typed test. Please note this is the second test provided to the candidates, the first being the Advanced IKM C++ test, done in the offices, supervised, and without internet access. People passing that do the second test.

    After roughly 70 candidates, my question has been determined to be statistically the worst performing: the fewest people attempted it, and even fewer were able to give meaningful answers. Please note, the second test is not timed; the candidate can literally take as long as they like (we've had one person take roughly 10.5 hrs).

    My question to SO is this: after the SHL and IKM advanced C++ tests, backed up with at least 6+ years of C++ development experience, is it still OK not to be able to even comment about, let alone come up with some loose strategy for solving, the following question?

    The question: there is a class C with methods foo, boo, boo_and_foo and foo_and_boo. Each method takes i, j, k and l clock cycles respectively, where i < j, k < i+j and l < i+j.

        class C
        {
        public:
            int foo() {...}
            int boo() {...}
            int boo_and_foo() {...}
            int foo_and_boo() {...}
        };

    In code one might write:

        C c;
        ...
        int i = c.foo() + c.boo();

    But it would be better to have:

        int i = c.foo_and_boo();

    What changes or techniques could one apply to the definition of C that would allow syntax similar to the original usage, but have the compiler generate the latter? Note that foo and boo are not commutative.

    Possible solution: we were basically looking for an expression-templates-based approach, and were willing to give marks to anyone who had even hinted at or used the phrase or related terminology. We got only two people who used the wording, and neither was able to properly describe how to accomplish the task in detail.

    We use such techniques all over the place, due to the use of various mathematical operators for matrix and vector based calculations, for example to decide at compile time whether to use IPP or hand-woven implementations for a particular architecture, among many other things. This particular area of software development requires microsecond response times. I believe I could/should be able to teach a junior such techniques, but given the assumed caliber of candidates I expected a little more. Is this really a difficult question? Should it be removed? Or are we just not seeing the right candidates?
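    Editor's note: for readers wondering what such an answer might look like, here is a minimal sketch of the proxy/expression-template idea. Names, return values, and the wrapper class are illustrative, not from the original question's codebase; it keeps the raw class untouched and adds a thin wrapper, though the proxies could equally be folded into C itself. foo() and boo() return lightweight proxies instead of ints; overload resolution on operator+ then selects the fused member, and a lone proxy decays back to int:

        #include <iostream>

        class C {
        public:
            int foo()         { return 1; }   // i cycles
            int boo()         { return 2; }   // j cycles
            int foo_and_boo() { return 12; }  // l < i+j cycles
            int boo_and_foo() { return 21; }  // k < i+j cycles
        };

        // Proxies defer evaluation just long enough for operator+ overload
        // resolution to see the shape of the whole expression.
        struct FooExpr {
            C& c;
            explicit FooExpr(C& c) : c(c) {}
            operator int() const { return c.foo(); }   // standalone use: plain foo()
        };

        struct BooExpr {
            C& c;
            explicit BooExpr(C& c) : c(c) {}
            operator int() const { return c.boo(); }
        };

        // Non-commutative: operand order selects the fused member function.
        // Assumes both operands refer to the same C; real code would assert that.
        inline int operator+(const FooExpr& f, const BooExpr&) { return f.c.foo_and_boo(); }
        inline int operator+(const BooExpr& b, const FooExpr&) { return b.c.boo_and_foo(); }

        // Thin wrapper preserving the original call syntax.
        struct FusedC {
            C raw;
            FooExpr foo() { return FooExpr(raw); }
            BooExpr boo() { return BooExpr(raw); }
        };

        int main() {
            FusedC c;
            int i = c.foo() + c.boo();   // compiler picks the fused foo_and_boo()
            int j = c.boo() + c.foo();   // fused boo_and_foo(); order preserved
            int k = c.foo();             // implicit conversion: plain foo()
            std::cout << i << ' ' << j << ' ' << k << '\n';   // 12 21 1
        }

    The exact-match operator+ overloads beat the built-in int addition (which would need user-defined conversions on both operands), so the fused path wins whenever both calls appear in one expression.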

    Read the article

  • Code Golf: Word Search Solver

    - by Maxim Z.
    Note: This is my first Code Golf challenge/question, so I might not be using the correct format below. I'm not really sure how to tag this particular question, and should this be community wiki? Thanks!

    This Code Golf challenge is about solving word searches! A word search, as defined by Wikipedia, is: "A word search, word find, word seek, word sleuth or mystery word puzzle is a word game that consists of the letters of words placed in a grid, which usually has a rectangular or square shape. The objective of this puzzle is to find and mark all the words hidden inside the box. The words may be placed horizontally, vertically, or diagonally. Often a list of the hidden words is provided, but more challenging puzzles may let the player figure them out. Many word search puzzles have a theme to which all the hidden words are related."

    The word searches for this challenge will all be rectangular grids with a list of words to find provided. The words can be written vertically, horizontally, or diagonally.

    Input/Output: the user inputs their word search and then inputs a word to be found in their grid. These two inputs are passed to the function that you will be writing. It is up to you how you want to declare and handle these objects. Using a strategy described below, or one of your own, the function finds the specific word in the search and outputs its starting coordinates (simply row number and column number) and ending coordinates. If you find two occurrences of the word, you must output both sets of coordinates.

    Example input:

        A I Y R J J Y T A S V Q T Z E
        X B X G R Z P W V T B K U F O
        E A F L V F J J I A G B A J K
        R E S U R E P U S C Y R S Y K
        F B B Q Y T K O I K H E W G N
        G L W Z F R F H L O R W A R E
        J A O S F U E H Q V L O A Z B
        J F B G I F Q X E E A L W A C
        F W K Z E U U R Z R T N P L D
        F L M P H D F W H F E C G W Z
        B J S V O A O Y D L M S T C R
        B E S J U V T C S O O X P F F
        R J T L C V W R N W L Q U F I
        B L T O O S Q V K R O W G N D
        B C D E J Y E L W X J D F X M

    Word to find: codegolf

    Output: row 12, column 8 --> row 5, column 1

    Strategies: here are a few strategies you might consider using. It is completely up to you to decide what strategy you want to use; it doesn't have to be from this list.

    - Looking for the first letter of the word; on each occurrence, looking at the eight surrounding letters to see whether the next letter of the word is there.
    - Same as above, except looking for a part of the word that has two of the same letter side by side.
    - Counting how often each letter of the alphabet is present in the whole grid, then selecting one of the least-occurring letters from the word you have to find and searching for that letter. On each occurrence of the letter, you look at its eight surrounding letters to see whether the next and previous letters of the word are there.
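    Editor's note: not part of the challenge statement, but for reference, a plain (ungolfed) C# sketch of the first strategy family: try every cell as a start and probe all eight directions for a full match. The names and the 1-based output format mirror the example above; on the example grid, Find(grid, "codegolf") prints "row 12, column 8 --> row 5, column 1".

        using System;

        static class WordSearch
        {
            static void Find(char[][] grid, string word)
            {
                int rows = grid.Length, cols = grid[0].Length;
                // The eight compass directions as (row, col) steps.
                int[] dr = { -1, -1, -1, 0, 0, 1, 1, 1 };
                int[] dc = { -1, 0, 1, -1, 1, -1, 0, 1 };

                for (int r = 0; r < rows; r++)
                    for (int c = 0; c < cols; c++)
                        for (int d = 0; d < 8; d++)
                        {
                            // Where the word would end; skip directions that run off the grid.
                            int er = r + dr[d] * (word.Length - 1);
                            int ec = c + dc[d] * (word.Length - 1);
                            if (er < 0 || er >= rows || ec < 0 || ec >= cols) continue;

                            bool match = true;
                            for (int k = 0; k < word.Length && match; k++)
                                match = char.ToUpper(grid[r + dr[d] * k][c + dc[d] * k])
                                        == char.ToUpper(word[k]);

                            if (match)
                                Console.WriteLine(
                                    "row " + (r + 1) + ", column " + (c + 1) +
                                    " --> row " + (er + 1) + ", column " + (ec + 1));
                        }
            }

            static void Main()
            {
                // Tiny demo grid hiding "golf" on the main diagonal.
                var rows = new[] { "G X X X", "X O X X", "X X L X", "X X X F" };
                char[][] grid = Array.ConvertAll(rows, s => s.Replace(" ", "").ToCharArray());
                Find(grid, "golf"); // prints: row 1, column 1 --> row 4, column 4
            }
        }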

    Read the article

  • Databinding to ObservableCollection in a different UserControl - how to preserve current selections?

    - by Dave
    Scope of question expanded on 2010-03-25: I ended up figuring out my problem, but here's a new problem that came up as a result of solving the original question, because I want to be able to award the bounty to someone! Once I figured out my problem, I soon found out that when the ObservableCollection updates, the databound ComboBox has its contents repopulated, but most of the selections have been blanked out. I assume that in this case, MVVM is going to make it difficult for me to remember the last selected item. I have an idea, but it seems a little nasty. I'll award the bounty to whoever comes up with a nice solution for this!

    Question rewritten on 2010-03-24: I have two UserControls, where one is a dialog that has a TabControl, and the other is one that appears within said TabControl. I'll just call them CandyDialog and CandyNameViewer for simplicity's sake. There's also a data management class called Tracker that manages information storage, which for all intents and purposes just exposes a public property that is an ObservableCollection. I display the CandyNameViewer in CandyDialog via code-behind, like this:

        private void CandyDialog_Loaded(object sender, RoutedEventArgs e)
        {
            _candyviewer = new CandyViewer();
            _candyviewer.DataContext = _tracker;
            candy_tab.Content = _candyviewer;
        }

    The CandyViewer's XAML looks like this (edited for Kaxaml):

        <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <Page.Resources>
            <DataTemplate x:Key="CandyItemTemplate">
              <Grid>
                <Grid.ColumnDefinitions>
                  <ColumnDefinition Width="120"></ColumnDefinition>
                  <ColumnDefinition Width="150"></ColumnDefinition>
                </Grid.ColumnDefinitions>
                <TextBox Grid.Column="0" Text="{Binding CandyName}" Margin="3"></TextBox>
                <!-- just binding to DataContext ends up using InventoryItem as parent,
                     so we need to get to the UserControl -->
                <ComboBox Grid.Column="1"
                          SelectedItem="{Binding SelectedCandy, Mode=TwoWay}"
                          ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type UserControl}}, Path=DataContext.CandyNames}"
                          Margin="3"></ComboBox>
              </Grid>
            </DataTemplate>
          </Page.Resources>
          <Grid>
            <ListBox DockPanel.Dock="Top"
                     ItemsSource="{Binding CandyBoxContents, Mode=TwoWay}"
                     ItemTemplate="{StaticResource CandyItemTemplate}" />
          </Grid>
        </Page>

    Now everything works fine when the controls are loaded. As long as CandyNames is populated first, and then the consumer UserControl is displayed, all of the names are there. I obviously don't get any errors in the Output window or anything like that. The issue I have is that when the ObservableCollection is modified from the model, those changes are not reflected in the consumer UserControl! I've never had this problem before; all of my previous uses of ObservableCollection updated fine, although in those cases I wasn't databinding across assemblies. Although I am currently only adding and removing candy names to/from the ObservableCollection, at a later date I will likely also allow renaming from the model side.

    Is there something I did wrong? Is there a good way to actually debug this? Reed Copsey indicates here that inter-UserControl databinding is possible. Unfortunately, my favorite Bea Stollnitz article on WPF databinding debugging doesn't suggest anything that I could use for this particular problem.
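    Editor's note: one low-ceremony answer to the expanded (selection) question, sketched against hypothetical stand-ins for the poster's types (a CandyRow with a SelectedCandy property; the real class would raise PropertyChanged in its setters): mutate CandyNames in place rather than replacing it, remember each row's selection before the reset, and restore it afterwards so the TwoWay SelectedItem binding picks it back up.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class CandyRow // hypothetical stand-in for the poster's row type
        {
            public string CandyName { get; set; }
            public string SelectedCandy { get; set; } // real setter raises PropertyChanged
        }

        public class Tracker
        {
            public ObservableCollection<string> CandyNames { get; private set; }
            public ObservableCollection<CandyRow> CandyBoxContents { get; private set; }

            public Tracker()
            {
                CandyNames = new ObservableCollection<string>();
                CandyBoxContents = new ObservableCollection<CandyRow>();
            }

            public void RefreshCandyNames(IEnumerable<string> latest)
            {
                // Remember each row's selection before the combo boxes reset.
                var remembered = new Dictionary<CandyRow, string>();
                foreach (var row in CandyBoxContents)
                    remembered[row] = row.SelectedCandy;

                // Mutate the existing collection rather than assigning a new one,
                // so bindings holding the old instance don't go stale.
                CandyNames.Clear();
                foreach (var name in latest)
                    CandyNames.Add(name);

                // Restore selections that survive the refresh; the TwoWay
                // SelectedItem binding picks the value back up via PropertyChanged.
                foreach (var row in CandyBoxContents)
                {
                    string previous = remembered[row];
                    if (previous != null && CandyNames.Contains(previous))
                        row.SelectedCandy = previous;
                }
            }
        }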

    Read the article

  • CodePlex Daily Summary for Monday, October 07, 2013

    CodePlex Daily Summary for Monday, October 07, 2013

    Popular Releases

    Ghostscript.NET: Ghostscript.NET v.1.1.0.: v.1.1.0. added GhostscriptViewer state handling (SaveState, RestoreState). GhostscriptRasterizer constructor is extended in order to support usage of the existing GhostscriptViewer instance. Fixed a problem while using a 32-bit assembly with a 32-bit version of Ghostscript on 64-bit Windows: it couldn't find a registry key of installed Ghostscript. Reported and fixed by "r0land". v.1.0.9. implemented EPS (Encapsulated PostScript) support for the GhostscriptViewer. added GhostscriptRasterize...

    Ghostscript Studio: Ghostscript.Studio v.1.0.2: Ghostscript Studio is an easy to use Ghostscript IDE, a tool that facilitates the use of the Ghostscript interpreter by providing you with a graphical interface for postscript editing and file conversions. Ghostscript Studio allows you to preview postscript files, edit the code and execute them in order to convert PDF documents and other formats. The program allows you to convert between PDF, Postscript, EPS, TIFF, JPG and PNG by using the Ghostscript.NET Processor. v.1.0.2. added custom -c s...

    cmdradio: v0.1.1 binary: Default download is win32. For other OSes see here. This is an alpha version. Please report all bugs.

    Media Companion: Media Companion MC3.579b: Fixed IMDB actor scraping for Movies and TV. Note: there are a couple of new functions that are not active, as this release needed to be done due to the IMDB change. New: TV - context menu for rescraping Poster image/Banner image. Fixed: Both - fixed actor scraping from IMDB. Movie - fixed Tableview if a Movie's movieset was not in MC's list of moviesets. Movie - rename with mediainfo now lowercase. Both - added ignore "A " in titles, separate option in General Preferences. Tv - Changing ...

    VidCoder: 1.5.7 Beta: Updated HandBrake core to SVN 4819. About dialog now pulls down the HandBrake version from the DLL. Added a confirmation dialog to Stop if the encode has been going on for more than 5 minutes. Fixed handling of unicode characters for input and output filenames; we now encode to UTF-8 before passing to HandBrake. Fixed a crash in the queue multiple titles dialog. Added code to rescue tool windows which get placed outside of the visible screen area.

    Wsus Package Publisher: Release v1.3.1310.05: Enhances the "Reboot Remote Computers" feature by adding a timer before the reboot occurs, so that remote users can save their documents and close applications; you can also add a message to be displayed (in 'Tools' -> 'Settings' -> Misc tab, you can set a default message). Enhances the "Compare Computers against AD" feature by letting you choose OUs to include in the comparison.

    Event-Based Components AppBuilder: AB3.Iteration.53: Iteration 53 (Feature): Allow drag & drop of an existing component (flow, step) from the component list to the chart. Duplicate names are automatically recognized and resolved. By the color of the dragged component you can see what kind of component (flow or step) is currently dragged. New: AddExistingComponentFlow, PartDragDropEventHandler, ExistingStepPreparer

    Pulse: Pulse 0.6.7.3: Pulse is now accepting donations. To donate by Bitcoin or PayPal see https://pulse.codeplex.com/wikipage?title=Donations Lots of updates in v0.6.7.3: (Feature) New option allows you to disable wallpaper changing when a full screen application is running. This way Pulse doesn't slow down/lag your videos and games :) (Fix) Some users were getting Wallbase errors when logging in. This has been fixed. (Feature) Right click a provider and you can now make a copy of it by selecting the "Dupl...

    MoreTerra (Terraria World Viewer): MoreTerra 1.11.1: Release 1.11.1. Bug Fixes: Added more tile blocks (Clouds, crimstone). Added items (binoculars, rope, Piranha Gun). Added ores (Lead, Tin). Chests now work, I broke them yesterday. Known Issues: I am having trouble with new background walls, so you will see a red outline for crimson then a pink inside. Same with where I think the queen bee lives.

    VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added option to login as Guest. NEW: Added menu option to delete a forum account. NEW: Added support for ImageTeam.org links. FIXED: Fixed ripping of http://forum.babeunion.com forums.

    Single Line Diagram (Electrical) Control Library: Single Line Diagram (Electrical) Control Lib - 1.2: In this release of the SLD (Electrical) Control Lib I have fixed code for the following symbols: 1. Switch 2. Isolator 3. Lightning Arrestor 4. Meter. A new symbol is added for "Pothead Terminal". You can give it a try. I will keep improving and posting the new features.

    State of Decay Save Manager: Version 1.0.4: Add version at bottom of form.

    Collection Commander for Configuration Manager 2012: CMCollCtr_1.0.0.2: Windows Installer (MSI) setup; new Ribbon toolbar implemented; more PowerShell commands...

    Classic WiX Burn Theme: Classic WiX Burn Theme 1.0: Project description: a WiX Burn theme inspired by the classic WiX wizard user interface.

    QuickConverter: QuickConverter 0.7.3 Beta: Allows access to external variables from inside lambda expressions.

    DNN Form and List: DNN Form and List 06.00.07: DotNetNuke Form and List 06.00.06. Changes to 6.0.7: fixed an error in datatypes.config that caused calculated fields to be missing in 6.0.6. Changes to 6.0.6: added SQL to remove the 'text on row' setting for UserDefinedTable to make it SQL Azure compatible; added a new azureCompatible element to the manifest; added a fix for importing templates. Changes to 6.0.2: Fix: MakeThumbnail was broken if the application pool was configured to .NET 4. Change: data is now stored in nvarchar(max) instead of ntext. C...

    Trace Reader for Microsoft Dynamics CRM: Trace Reader (1.2013.10.3): Fix a bug when the first character of a description line is '['. Add search feature.

    SimpleExcelReportMaker: Serm 0.03: Source code and sample. .NET Framework 3.5, AnyCPU compile.

    Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.

    BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? (3)Web???????CGI???????????????????????

    New Projects

    ASP.NET and ASP.NET MVC Blog Test: As a recent convert from Windows Forms & WPF to ASP.NET and ASP.NET MVC, I've created a simple project and implemented it twice using ASP.NET and ASP.NET MVC.

    cmdradio: Simple command-line interface internet radio player.

    Da Nang College Of Technology: Cao Đẳng Công Nghệ

    DNN Camera Slideshow: The Camera Slideshow module for DNN.

    Folder Iterator: Folder Iterator is a program I made one night to iterate through a folder and output an ordered list of files contained within the chosen directory.

    foobarpoc: foo

    Jamendo Search Rest: This project uses the Jamendo REST API to search songs by title.

    Lab Nhibernate, PRISM, WPF, FLUENT, C#, ServiceStack, JSON: This is a lab for new technology and solving real developer problems.

    Merch ASP.NET: This project is a port of Merch built on Zend Framework 2 / PHP 5.

    MyTest2: This is test project 2.

    Paintbear: Paintbear is a free open-source paint program and image manipulation program for Windows based on the .NET Framework (version 4). Works on Linux, too, with Wine!

    RasterConversion (.Net framework): Comparison of variants for working with a raster (Bitmap class) at a low level. .NET Framework, C#.

    SSIS Custom Connection Manager: Example code for creating a custom connection manager in SSIS 2008 and 2012.

    Web Scripting Project: This is a project related to web application development at the University of Hertfordshire.

    Webscripting1: New scripting website for a University of Herts course.

    Write.NET: Another .NET Blogging Engine: Write.NET is currently under development. Write.NET will be a lightweight and highly configurable blogging engine built in MVC4.

    Yoer: ????ASP.NET MVC??,????????????

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53  | Next Page >