Search Results

Search found 284 results on 12 pages for 'gaps'.

Page 11 of 12

  • SQLAuthority News – Memories at Anniversary of SQL Wait Stats Book

    - by pinaldave
    About a year ago, I had a very proud moment: I published my second book, SQL Server Wait Stats, with me as the primary author. It has been a long journey since then. The book got a great response and was widely accepted in the community. It was the first book of its kind, written specifically on wait stats and performance, and it was based on my earlier month-long series on the same subject, SQL Server Wait Stats. Today, on the anniversary of the book, lots of things come to mind; let me share a few here.

    Idea behind the Blog Series
    A very common question I receive is why I wrote a 30-day series on wait stats. There were two reasons. 1) I have been working with SQL Server for a long time and have troubleshot hundreds of SQL Servers for performance issues. It was a great experience and it taught me a lot of new things. I always documented my experience, and after a while I found that I could rely completely on my own notes when troubleshooting any server. That is when I decided to document my experience for the community. 2) While working with wait stats, there were a few things I thought I knew well because they seemed to work. However, there was always a fear in the back of my mind: what if what I believed was incorrect and I had been on the wrong path all along? There was only one way to get it validated: put my understanding in front of the community and ask for help in improving it. It worked, and it worked beautifully. I received plenty of conversations, emails and comments, and I refined my content based on them to make it more relevant and accurate. Those are the two major reasons I began the Wait Stats blog series.

    Idea behind the Book
    After writing the blog series, I kept receiving requests to convert it into an eBook or a proper book, because reading blog posts is great but it does not give a comprehensive understanding of the subject. The most common feedback from readers new to the subject was that they would prefer to read it in a structured form. After hearing this feedback for more than 4 months, I decided to write a book based on the blog posts. When I envisioned the book, I wanted to make sure it addressed wait stats concepts from the fundamentals and filled the gaps of the blogs I wrote earlier.

    Rick Morelan and the Joes 2 Pros Team
    I must acknowledge my co-author Rick Morelan for his unconditional support in writing this book. I had already authored one book before I published this one. The experience of writing a book is out of this world; writing blog posts is much, much easier. The effort it takes to write a book is 100 times more, even when the content is ready. I could not have done it myself without the tremendous support of my co-author and the editorial team. We spent days and days researching and discussing various concepts covered in the book. When we were in doubt we reached out to experts, and we also did practical reproductions of the scenarios to validate the concepts and claims. After three months of continuous hard work we were able to get this book out to the community.

    September 1st – the lucky day
    Well, we had to select a day to publish the book. When the book was completed in the last week of August we felt very glad. We had all worked hard, and having a sample draft in hand felt like holding a newborn baby. Every time one of my books is published I feel the same joy I felt when my daughter was born; the feeling of holding a new book is (almost) the same as holding a newborn baby. I am excited. For me, September 1st has been the luckiest day of my life. My daughter Shaivi was born on September 1st. Since then, every September 1st has been an excellent day and has taken me to the next step in life. I believe anything and everything I do on September 1st turns out to be successful and blessed. Rick and I finished the book in the last week of August. We sent it to the publisher (printer) and asked him to take the book live as soon as possible. We did not decide on any date, as we wanted the book out as fast as it could be. Interestingly enough, the publisher selected September 1st for publishing the book. He published it on September 1st, and I knew right then that this book would go to the next level.

    Book Model – The Most Beautiful Girl
    We were done with the book, and we had no budget left for marketing. Rick and I had a long conversation about how to spread the word so the book could reach many people. While we were talking about marketing, Rick came up with the idea that we should hire the most beautiful girl around, someone who would acknowledge our book and genuinely care for it. It sounded like a difficult task, and Rick asked me to find her. I am a father, and the most beautiful girl for me is my daughter, so this was not a difficult task at all: I just could not think of anyone other than my own daughter. I still do not know what Rick thought about this idea, but I had already made up my mind. You can see the detailed blog post here.

    The Fun Experiments Book Signing Event
    We had lots of fun moments with this book. We have actually given away more books for free than we have sold. We did book signing events, contests, and plain giveaways whenever we found people who could benefit from the book. There was never an intention to make money and get rich; we just wanted more and more people to know about this new concept and learn from it. Looking back at the earnings today, there is not much to talk about in dollars. However, the best reward we have received is the satisfaction and love of the community. The emails and conversations we have received for this book number in the thousands. We had fun writing this book; it was a very satisfying journey, and I have earned lots of friends while learning and exploring.

    Availability
    The book is one year old but still very relevant when it comes to performance tuning. It is available at various online book stores. If you have read the book, do let me know what you think of it. Amazon | Kindle | Flipkart | Indiaplaza

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: About Me, Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority, SQLAuthority Book Review, T SQL, Technology

    Read the article

  • Too Many Kittens To Juggle At Once

    - by Bil Simser
    Ahh, the Internet. That crazy, mixed-up place where one tweet turns into a conversation between dozens of people and spawns a blog post. This is the direct result of such an event this morning. It started innocently enough with a tweet (embedded in the original post), followed by a blog post from Joel here. In the post, Joel introduces us to the term Business Solutions Architect, with mad skillz like InfoPath, Access Services, Excel Services, building workflows, and SSRS report creation, all while meeting the business needs of users in a SharePoint environment. I somewhat disagreed with Joel that this is really a new role (at least IMHO) and argued that a good architect or BA should really be doing this job. As Joel pointed out, when you're building a SharePoint team this kind of role is often overlooked. Engineers might be able to build workflows, but is it the right workflow for the right problem?

    Michael Pisarek wrote about a SharePoint Business Architect a few months ago, and it's a pretty solid assessment. Again, I argue you really shouldn't be looking for roles that don't exist, and I don't suggest anyone create roles just to hire people to fill them. That's basically creating a solution looking for problems. Michael's article does have some great points if you're lost in the quagmire of SharePoint duties, though (and I especially like John Ross' quote "The coolest shit is worthless if it doesn't meet business needs"). SharePoinTony summed it up nicely with "SharePoint Solutions knowledge is both lacking and underrated in most environments. Roles help."

    Having someone on the team who can dance between a business user and a coder can be difficult. Remember the game of telling something to someone and having them pass it on to the next person: by the time the story comes around the circle it's a shadow of its former self, with little resemblance to the original tale. This is very much how business requirements travel: they're told by the user to a business analyst, written down on paper, read by an architect, turned into a solution plan, and implemented by a developer. What was said, what was heard, what was written down, and what was developed can end up being distant cousins. Not everyone has the skill of communication, and even fewer have negotiation skills suited to the SharePoint platform. Negotiation is important because not everything can be (or should be) done in SharePoint. Sometimes it's just not appropriate to build it on the SharePoint platform, but someone needs to know enough about the platform and its limitations to communicate that (and/or negotiate) with a customer or user, so the conversation shifts from "You can't have this" to "Let's try it this way". Visualize the possible instead of denying the impossible.

    So what is the right SharePoint team? My cromag brain came up with a fairly simpleton answer (and I'm sure people will just say this is a cop-out): the perfect SharePoint team is just enough people to do the job who know the technology and the business problem they're solving. Bridge the gap between business need and technology platform and you have an architect. Communicate the needs of the business effectively so the entire team understands them and you have a business analyst. Can you get this with full-time workers? Maybe, but don't expect miracles out of the gate. Also, don't take a consultant's word as gospel. Some consultants just don't have enough breadth across the SharePoint platform to be worth their value, so be careful. You really need someone who knows enough about SharePoint to be able to validate a consultant's knowledge level. This is basically true for any consultant, not just a SharePoint one. Specialization is good and needed.

    A good, well-balanced SharePoint team is one made of people who can solve problems by working with the technology, not against it. Having a top developer is great, but don't rely on them to solve world hunger if they can't communicate very well with users. An expert business analyst might be great at gathering requirements so the entire team can understand them, but if that means building 100% custom solutions because the requirements don't fit inside the SharePoint boundaries, it isn't of much value. Just repeat: there is no silver bullet. There is no silver bullet. There is no silver bullet.

    A few people pointed out Nick Inglis' article Excluding The Information Professional In SharePoint. It's a good read too, and it hits home that maybe some developers and IT pros need some extra help in the information space. If you're in an organization that needs labels on people, come up with something everyone understands and go with it. If that's Business Solutions Architect, SharePoint Advisor, or Guy Who Knows A Lot About Portals, make it work for you. We all wish that one person could master all that is SharePoint, but we also know that doesn't scale very well, and you quickly get into hit-by-a-bus syndrome (with the organization slowing to a crawl when the guy or girl goes on vacation, gets sick, or pops out a baby). There are too many gaps in SharePoint knowledge for any one person to know it all, and too many kittens to juggle all at once. We like to consider ourselves experts in our field, but if we try to tackle too many roles at once we end up being a mediocre jack of all trades, master of none. Don't fall into this pit. It's a deep, dark hole you don't want to have to claw your way out of. Trust me. Been there. Done that. Got the t-shirt.

    In the end I don't disagree with Joel. SharePoint is a beast and not something that should be taken on by newbies. If you just read "Teach Yourself SharePoint in 24 Hours" and want to go build your corporate intranet or the next killer business solution with all your new-found knowledge, plan to pony up consultant dollars a few months later when everything goes to Hell in a handbasket and falls over. I'm not saying don't build solutions in SharePoint. I'm just saying that building effective ones takes skill, like any craft, and is not something you can just cobble together with a little bit of cursory knowledge. Thanks to *everyone* who participated in this tweet rush. It was fun and educational.

    Read the article

  • Design for complex ATG applications

    - by Glen Borkowski
    Overview

    Needless to say, some ATG applications are more complex than others. Some ATG applications support a single site, a single language, a single catalog, a single currency, have a single development staff, a single business team, and a relatively simple business model. The really complex applications have to support multiple sites, multiple languages, multiple catalogs, multiple currencies, a couple of different development teams, multiple business teams, and a highly complex business model (and the processes to go along with it). While it's still important to implement a proper design for simple applications, it's absolutely critical to do this for complex applications. Why? It's all about time and money. If you are unable to manage your complex applications efficiently, the cost of managing them will increase dramatically, as will the time to get things done (time to market). On the positive side, your competition is most likely in the same situation, so you just need to be more efficient than they are.

    This article discusses a number of key areas to think about when designing complex applications on ATG. Some of this can get fairly technical, so it may help to get some background first. You can get enough of the required background information from this post. After reading that, come back here and follow along.

    Application Design

    Of all the various types of ATG applications out there, the most complex tend to be the ones in the telecommunications industry, especially those which operate in multiple countries. To get started, let's assume that we are talking about an application like that, one with these properties:

    - Operates in multiple countries - must support multiple sites, catalogs, languages, and currencies
    - The organization is fairly loosely coupled - a single brand, but different businesses across different countries
    - There is some common functionality across all sites in all countries
    - There is some common functionality across different sites within the same country
    - Sites within a single country may have some unique functionality relative to other sites in the same country
    - Complex product catalog (mostly in terms of bundles, eligibility, and compatibility)

    At this point, I'll assume you have read through the required reading and have a decent understanding of how ATG modules work.

    Code / configuration - assemble into modules

    When it comes to defining your modules for a complex application, there are a number of goals:

    - Divide functionality between the modules in a way that maps to your business
    - Group common functionality further down in the stack of modules
    - Provide a good balance between shared resources and autonomy for countries / sites

    Now I'll describe a high-level approach to how you could accomplish those goals. Let's start from the bottom and work our way up. At the very bottom, you have the modules that ship with ATG - the 'out of the box' stuff. You want to make sure that you are leveraging all the modules that make sense, in order to get as much value from ATG as possible - and less stuff you'll have to write yourself.

    On top of the ATG modules, you should create what we'll refer to as the Corporate Foundation Module, described as follows:

    - Sits directly on top of the ATG modules
    - Used by all applications across all countries and sites - this is the foundation for everyone
    - Contains everything that is common across all countries and all sites
    - Once established and settled, will change less frequently than other 'higher' modules
    - Encapsulates as many enterprise-wide integrations as possible
    - Provides a means of code sharing, therefore less development / testing and faster time to market
    - Contains a 'reference' web application (described below)

    The next layer up could be multiple modules, one for each country (you could replace this with region if that makes more sense). We'll define those modules as follows:

    - Sits on top of the corporate foundation module
    - Contains what is unique to all sites in a given country
    - Responsible for managing any resource bundles for this country (to handle multiple languages)
    - Overrides / replaces corporate integration points with any country-specific ones

    Finally, we will define what should be a fairly 'thin' (in terms of functionality) set of modules for each site, as follows:

    - Sits on top of the module for the country it resides in
    - Contains what is unique to a given site within a given country
    - Will mostly contain configuration, but could also define some unique functionality as well
    - Contains one or more web applications

    A graphic in the original article shows how these modules fit together.

    Web applications

    As described in the previous section, there are many opportunities for sharing (minimizing costs) as it relates to the code and configuration aspects of ATG modules. Web applications are also contained within ATG modules; however, sharing web applications can be a bit more difficult because this is what the end customer actually sees, and since each site may have some degree of unique look & feel, sharing becomes more challenging. One approach that can help is to define a 'reference' web application at the corporate foundation layer to act as a solid starting point for each site. Here's a description of the 'reference' web application:

    - Contains minimal / sample reference styling, as look & feel will mostly be addressed by the site-level web app
    - Focuses on functionality - ensures that core functionality is exposed via this web application
    - Each individual site can use this as a starting point
    - There may be multiple types of web apps (i.e. B2C, B2B, etc.)

    There are some techniques to share web application assets - i.e. multiple web applications defined in the web.xml - and it's worth investigating, but that is out of scope here.

    Reference infrastructure

    In this complex environment, it is assumed that there is not a single infrastructure for all countries and all sites. It's more likely that different countries (or regions) could have their own infrastructure. In this case, it is advantageous to define a reference infrastructure which contains all the hardware and software that make up the core environment. Specifications and diagrams should be created to outline what this reference infrastructure looks like, as well as its baseline cost and the incremental cost to scale up with volume. Having some consistency in terms of infrastructure will save time and money as new countries / sites come online. Here are some properties of the reference infrastructure:

    - Standardized approach to the setup of hardware: type and number of servers
    - Defines application server, operating system, database, etc., including vendor and specific versions
    - Consistent naming conventions
    - Provides a consistent base of terminology and understanding across environments
    - Defines which ATG services run on which servers (Production, Staging, BCC / Preview)
    - Each site can change as required to meet scale requirements

    Governance / organization

    It should be no surprise that the complex application we're talking about is backed by an equally complex organization. One of the more challenging aspects of efficiently managing a series of complex applications is ensuring the proper level of governance and organization. Here are some ideas and goals to work towards:

    - Establish a committee to make enterprise-wide decisions that affect all sites; representation should be evenly distributed, it should have a clear communication procedure, and it should focus on high-level business goals
    - Evaluate feature / function gaps and how they relate to the ATG release schedule / roadmap; determine when to upgrade and ensure the value will be realized
    - Determine how to manage the various levels of modules: who is responsible for maintaining the corporate / country / site layers, and what procedure controls what goes into the corporate foundation module
    - Standardize on source code control, database, hardware, OS versions, J2EE app servers, development procedures, etc.; only use tested / proven versions - this should be centralized so that every country / site does not have to worry about compatibility between versions
    - Create an innovation team to quickly develop new features and perform proofs of concept; all teams can benefit from their findings

    Summary

    At this point, it should be clear why the topics above (design, governance, organization, etc.) are critical to being able to efficiently manage a complex application. To summarize, it's all about competitive advantage. You will need to reduce costs and improve time to market with the goal of providing a better experience for your end customers. You can reduce cost by reducing development time, reducing the time allocated to testing (you don't have to test the corporate foundation module over and over again - do it once), and optimizing operations. With an efficient design, you can improve your time to market, and your business will be more flexible and agile. Over time, you'll find that you're becoming more focused on offering functionality that is new to the market (creativity), and this will be rewarded - you're now a leader.

    In addition to the above, you'll realize soft benefits as well. Your staff will be operating in a culture based on sharing. You'll want to reward efforts to improve and enhance the foundation, as this will benefit everyone. This culture will inspire innovation, which can only lend itself to your competitive advantage.
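    As a concrete (and purely hypothetical) illustration of the layering above, a site-level module's META-INF/MANIFEST.MF might declare its dependencies roughly like this; the site module requires its country module, which in turn would require the corporate foundation module. Module names are invented for this sketch and the exact set of manifest attributes depends on your ATG version:

        Manifest-Version: 1.0
        ATG-Required: Store.Germany
        ATG-Config-Path: config/

    The point is simply that the dependency chain in the manifests mirrors the corporate / country / site layering described above, so configuration lower in the stack is inherited and only overridden where a country or site truly differs.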

    Read the article

  • From J2EE to Java EE: what has changed?

    - by Bruno.Borges
    See the original @Java_EE tweet from 29 May 2014. Yeap, it has been 8 years since the term J2EE was replaced, and still some people refer to it (mostly recruiters, luckily!). But then comes the question: what has changed besides the name? Our community friend Abhishek Gupta worked on this question and provided an excellent response titled "What's in a name? Java EE? J2EE?". But let me give you a few highlights here so you don't lose yourself in YATO (yet another tab opened):

    - J2EE used to be an infrastructure and resources provider only, requiring developers to depend on external 3rd-party frameworks to implement application requirements or improve productivity
    - J2EE used to require hundreds of lines of XML to define just a dozen resources like EJBs, MDBs, Servlets, and so on
    - J2EE used to support only EARs (Enterprise Archives), with a bunch of other archives like JARs and WARs, just to run a simple web application

    And so on, and so on! It was a great technology, but it still required a lot of work to get something up and running. Remember XDoclet? Remember Struts? The old days of pure Hibernate code? Or when Ajax became a trending topic and we were all implementing it with the DWR Servlet? Still, we J2EE developers survived, learned, and helped evolve the platform to a whole new level of DX (Developer Experience). A new DX for J2EE suggested a new name, one that referred to the platform as the Enterprise Edition of Java, because "Java is why we're here", quoting Bill Shannon.

    The release of Java EE 5 included so many features that it clearly showed developers the platform was going after all those DX gaps:

    - Radical simplification of the persistence model with the introduction of JPA
    - Support for annotations, following the launch of Java SE 5.0
    - Updated XML APIs with the introduction of StAX
    - Drastic simplification of the EJB component model (with annotations!)
    - Convention over configuration and dependency injection

    A few bullets, you may say, but they represented a whole new DX and a vision for upcoming versions. Clearly, the release of Java EE 5 helped drive the future of the platform by reducing the number of XML files and Java interfaces, simplifying configuration, and providing convention over configuration. We then saw the release of Java EE 6 with even more great features like Managed Beans, CDI, Bean Validation, improved JSP and Servlet APIs, JASPIC, the possibility to deploy plain WARs, and so many other improvements it is difficult to list them in one sentence. And we've gotta give the Spring Framework some credit here: thanks to Rod Johnson and team, concepts like dependency injection fit perfectly into the Java EE platform. Clearly, Spring has been one of the most inspiring frameworks for the Java EE platform, and it is great to see things like Pivotal and Spring supporting the JSR 352 Batch API standard - cooperation to keep improving DX to the maximum in the server-side Java landscape.

    The masterpiece result of these previous releases is what is seen and called today Java EE 7, which, by providing a new and improved JavaServer Faces release, new features for web development like the WebSocket API, improved JAX-RS, and JSON-P, and also the Batch API and many other improvements, has increased developer productivity and brought innovation to server-side Java developers. Java EE is not just a new name (it was introduced back in May 2006!) but a new developer experience for server-side Java developers.
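    To make that annotation-driven simplification concrete, here is a minimal, illustrative Java EE 5+ style sketch (not from the original post; class and entity names are invented, and Order is assumed to be a JPA @Entity defined elsewhere). It shows a single annotated session bean with an injected persistence context, the kind of component that previously needed a home/remote interface pair and an ejb-jar.xml descriptor:

        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        // Hypothetical example: one annotated class replaces the XML descriptors
        // and extra interfaces that the J2EE 1.4 EJB model required.
        @Stateless
        public class OrderService {

            @PersistenceContext
            private EntityManager em;   // container-injected JPA persistence context

            public void placeOrder(Order order) {
                em.persist(order);      // JPA call; no hand-written JDBC or CMP descriptors
            }
        }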
    To show you why we are here and where we are going (see the Java EE 8 update), we wanted to share with you a draft of the new Java EE logos that the evangelist team created to help you spread the word about Java EE. You can get access to these images in the Java EE Platform Facebook album or the Google+ Java EE Platform album, whichever is better for you, but don't forget to like and/or +1 those social network profiles :-)

    A message to all job recruiters: stop using J2EE and start using Java EE if you want to find great Java EE 5, Java EE 6, or Java EE 7 developers. Not only to save you, recruiter, valuable characters when tweeting that job opportunity, but also to match the correct term, we invite you to replace long terms like "Java/J2EE", or even worse "#Java #J2EE #JEE" and all those awkward combinations, with the only acceptable hashtag: #JavaEE. And to prove that Java EE is catching on among developers and even recruiters, and that J2EE is the past, let me highlight the job trends. The chart from Indeed.com's trends page for the keywords J2EE, Java/J2EE, Java/JEE, and JEE shows that J2EE is indeed going away, while JEE saw some increase. Perhaps some people are just too lazy to type "Java" but are at the same time aware that J2EE (the '2') is the past. We shall forgive that for a while :-) Another proof that J2EE is going away is its trending statistics on Google: people have been showing less and less interest in the term J2EE.

    Recruiter, if you still need proof that J2EE is the past, that Java EE is trending, that other job recruiters are seeking Java EE developers, and that the developer community is aware of the new term, perhaps these other charts can show you which term you should be using. See for example the job trends for Java EE at Indeed.com and notice where they start: 2006, 8 years ago :-) Last but not least, the Google Trends chart for the Java EE term (including the still wrong but forgivable JavaEE term) shows us that the new term is catching up very well and that J2EE is the past. Oh, and don't worry about the curves going down. We developers like to be hipsters sometimes, and today only AngularJS, NodeJS, and Big Data are going up. Java EE and other traditional server-side technologies such as Spring, or even those from other platforms such as Ruby on Rails, PHP, and Grails, are pretty much consolidated, and the curves... well, they are consolidated too.

    So if you are a Java EE developer, drop that J2EE from your résumé, and let recruiters also know that this term is the past. Embrace Java EE, and enjoy a new developer experience for server-side Java developers. Java EE on Twitter | Java EE on Google+ | Java EE on Facebook

    Read the article

  • Partner Blog Series: PwC Perspectives Part 2 - Jumpstarting your IAM program with R2

    - by Tanu Sood
    Identity and access management (IAM) isn't a new concept. Over the past decade, companies have begun to address identity management through a variety of solutions that have primarily focused on provisioning. The new-age workforce is converging at a rapid pace, with an ever-increasing demand to use a diverse portfolio of applications and systems to interact and interface with peers in the industry and customers alike. Oracle has taken a significant leap towards enabling this global workforce to conduct business in a secure, efficient and effective manner with its release of Identity and Access Management 11gR2.

    As companies deal with IAM business drivers, it becomes immediately apparent that holistic, rather than piecemeal, approaches better address their needs. When planning an enterprise-wide IAM solution, the first step is to create a common framework that serves as the foundation on which to build cost, compliance and business process efficiencies. As a leading industry practice, IAM should be established on a foundation of accurate data for identity management, making this data available in a uniform manner to downstream applications and processes. Mature organizations are looking beyond IAM's basic benefits to harness more advanced capabilities in user lifecycle management. For any organization looking to embark on an IAM initiative, consider the following use cases in managing and administering user access.

    Expanding the Enterprise Provisioning Footprint

    Almost all organizations have some helpdesk resources tied up in handling access requests from users, a distraction from their core job of handling problem tickets. This dependency has mushroomed from the traditional acceptance of provisioning solutions integrating with, and addressing, only a portion of applications in the heterogeneous landscape. Oracle Identity Manager (OIM) 11gR2 solves this problem by offering integration with third-party ticketing systems as "disconnected applications". It allows existing business processes to be seamlessly integrated into the system and tracked throughout their lifecycle. With minimal effort and analysis, an organization can begin integrating OIM with groups or applications that are involved with manually intensive access provisioning and de-provisioning activities. This aspect of OIM allows organizations to on-board applications and associated business processes quickly using out-of-the-box templates and frameworks. This is especially important for organizations looking to fold in users and resources from mergers and acquisitions.

    Simplifying Access Requests

    Organizations looking to implement access request solutions often find it challenging to get their users to accept and adopt the new processes. So how do we improve the user experience, make it intuitive and personalized, and yet simplify the user access process? With R2, OIM helps organizations address this challenge by placing the most-used functionality front and centre in the new user request interface. Roles, application accounts, and entitlements can all be found in the same interface as catalog items, giving business users a single location to go to whenever they need to initiate, approve or track a request. Furthermore, if a particular item is not relevant to a user's job function or area inside the organization, it can be hidden so as to not overwhelm or confuse the user with superfluous options. The ability to customize the user interface to suit your needs helps in applying business rules effectively and avoiding access proliferation within the organization.

    Saving Time with Templates

    A typical use case that is most beneficial to business users is the flexibility to place, edit, and withdraw requests based on changing circumstances and business needs. With OIM R2, multiple catalog items can now be added to and removed from the shopping cart, an e-commerce paradigm that many users are already familiar with. This feature can be especially useful when setting up a large number of new employees or granting an existing department or group access to a newly integrated application. Additionally, users can create their own shopping cart templates in order to complete subsequent requests more quickly. This saves the user from having to search for and select items all over again if a request is similar to a previous one.

    Advanced Delegated Administration

    A key feature of any provisioning solution should be to empower each business unit to manage its own access requests. By bringing administration closer to the user, you improve user productivity, enable efficiency and reduce administration overhead. Doing so requires a federated services model, so that the business units capable of shouldering the onus of user lifecycle management for their business users can be enabled to do so. OIM 11gR2 offers advanced administrative options for creating, managing and controlling business logic and workflows through an easy-to-use administrative interface and tools that can be exposed to delegated business administrators. For example, these business administrators can establish or modify how certain requests and operations should be handled within their business unit based on a number of attributes, ranging from the type of request to the risk level of the individual items requested.

    Closed-Loop Remediation

    Security continues to be a major concern for most organizations. Identity management solutions bolster security by ensuring that only the right users have the right access to the right resources. Preventing unauthorized access and, where it already exists, detecting and remediating it, are key requirements of an enterprise-grade, proven solution. But the challenge with most solutions today is that some of this information still exists in silos, and when changes are made to systems directly, not all of them are captured. With R2, Oracle is offering a comprehensive identity governance solution that our customer organizations are leveraging for closed-loop remediation, which provides an automated way for administrators to revoke unauthorized access. The change is automatically captured and the action noted for continued management.

    Conclusion

    While implementing provisioning solutions, it is important to keep both the near-term and the long-term goals in mind. The provisioning solution should always be part of a larger security and identity management program, with the ability to integrate seamlessly not only with the company's infrastructure but also with the information and business models compiled and used by the other identity management solutions. This allows organizations to reduce the cost of ownership, close security gaps and leverage the existing infrastructure. Having done this at multiple clients' sites, this is the approach we recommend.

    In our next post, we will take a journey through our experiences advising clients looking to upgrade to R2 from a previous version or migrating from a different solution.

    Meet the Writers:

    Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

    Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium to large scale Identity Management solutions across multiple industries including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, out of which he has been implementing Identity Management solutions for the past 8 years.

    Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions.

    John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL).

    Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, out of which she has been implementing Identity Management solutions for the past one and a half years.

    Read the article

  • Need personal advice on how to get out of a company

    - by SOfan
    Hi, I am an SO user since past 6 months and this is the first time I am turning to SO for personal help. I have asked technical questions before with my real ID. I am stuck inside a service based IT company for the past one year and haven't been able to decide if to leave it, when to leave it and how to leave it. I had taken 2 weeks LWP on medical reason roughly at end of 1 year and then soon after reporting, I applied for 2 months more LWP (on medical/personal ground) with the intention of working on my health,take up a hobby class to ward off depression,pessimism, to have some fun in life, and to look for a job which I really would be excited about - that interests me and which matches with my strength. My leave starts from this Monday. So in any case, I had hard set in mind that I will leave the company after I join them back hopefully with some job offer already in hand (after figuring out what I want do). Neither I can stand the past project,past colleagues,company, HR, pathetically low salary. But if I really listen to my heart, I don't want to have to go back to that office after my sabbatical and again have to see those people. I will have to resign it after my sabbatical ends. Then HR people perhaps wont like it, may even accuse me on face or behind back that primary purpose of my leave must have been to hunt for a better job and I lied about medical and person reasons. Also, if they get nasty and force me to serve 2 months notice period. There is no way I see myself after sabbatical resuming in old project or starting new work. It will be a pain. Since they have already approved 2 months leave and stuff, ideally if they want, they should be just able to relieve me right on the next day after I join back. But, I don't know if they want to get nasty, will they mention about my 2 months sabbatical leave in my experience letter or more scary, the term medical/personal reason. I have hard earned my experience here, have worked against my will, mostly it has been painful and slogged like anything, because I realize the importance of work experience in IT industry. I don't have greed to have those 2 months included extra in my experience letter, but I don't want to mess up with my experience letter in a way which makes my next employer ask question, get suspicious, or be wary if I have any medical reason going on. Being an emotional,moody person or somebody who can't be in an environment, once my mind and heart starts hating it. I think it perhaps is best, if I resign on Monday itself telling them (in polite manner) something that look I took sabbatical for some reason but I don't want to resume working in the company after my sabbatical ends. So please accept my resignation. Now tell me what you want to do about my leave request, my notice period and when you are willing to relieve me. What should I write and how? Some background: I am working in an IT company in India.I am overqualified in the company. It is grossly underpaying me. My education qualifications far exceed anyone's in the whole company being a CS undergrad as well as a CS grad. I joined this company after finishing the grad. I had self-doubts about my skills and interest as a programmer. I like doing research oriented work, though didn't have any particular success during grad. My life here has been very hectic. The project containing many many sub-projects has kept me on my toes and I have never really liked the work. I have been playing against my strength. 
Also the company strict internet usage policy (you can't read gmails, can't browse any non-work related sites not even news). When working for a client, from the machine we can't even check company related emails.For this one has to go to kiosk like 5 machines in a small room etc. Most of the times those machines are not available, so it was not unusual to keep making rounds to these kiosk machines to check company emails, browse company related emails etc.So it was not so easy to keep in touch with company related basic affairs for a not particular careful person. Things like this which are new to me, make me feel restricted. I am an undecisive person with a sense of failure, self-doubt, not meeting up unrealistic expectation. Somewhere at back of mind, I envy my classmates who make a smooth transition from company to company without causing any gap in their resume. I on other hand have gaps in resume. I get tired after working in a place for sometime. problem with colleagues in general. I am not particular great with people, have few friends, not known for a fun nature, rather serious, scholar. I am not a typical conventional female. I think females are usually more disciplined. But I am not so. I reach office late (though after informing manager). I don't want to blame them entirely, because from my past, it is not unusual for me to get undecisive on things. Also I had doubts about my ability as researched and to succeed there. of building a relationship in a group, to have something to talk about, newspaper. I get cut-off from people. peer pressure. I make blunders in coding, lose patience. Consciously or unconsciously I feel contempt for people here, work here, environment here. I have doubts that either I go to a place which does innovation, does research oriented work, product biggies, have great motivated people, have competent people passionate about products they are building. But then I also doubt my ability to survive there. I have identified that an idea job for me would be 4 days a week, a high salary job. When among people in company/team, I can't think much. I need some time at home to read good authentic books written in good style on what work I am doing.So that I am comfortable with my understanding of work. I get into pressure easily under deadline and need 5th day to cool myself off. I took for 2 weeks leave, because each day was hell for me. May be the depression phase of bipolar is on and also partially it could be that being a work centered person, who derives happiness,self-esteem from work, haven't been enjoying work and have been working for the sole person of proving stability, and ability to stick, against all odds, and facing what challenges I see, bonding with people, identifying opportunities to learn in given task etc.have been averaging one day LWP in 1 week or 10 days. or may be because of my nature,ADD,not being able to switch context,out of touch with news, don't have a circle of friends with who I enjoy. less knowledge in general to talk about, just some technical stuff.anyway, so due to emotional reason, some practical reason etc, I wanted to be very sure before leaving. So my leave starts from Monday and I should feel happy about it. I have taken the leave to for a few purposes - to take care of my health by regular yoga/exercise (with project on, I just can't do anything regular), reassess myself to see what I want to try next which work I might like, look for next job, take up a hobby which I like say singing. 
I am not clear on my career,job aspiration. I have tried my hands on research. During this year appraisal yesterday, I even had some conflict with my last manager. In meeting with me one on one, he would say all nice things about me, but in feedback to new manager, he hasn't given any excellent feedback. It is all only good. I am angry at this old Manager. Also new manager also scolded me as I didn't agree to his appraisal and waited to hear myself from old Manager. He kind of scolded me for wasting his time. Am I being unethical somewhere? I am always very conscious of if I am cheating anywhere. What advice I am seeking? How to resign and what to write in resignation letter

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there are some problems that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg | grep sdb [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 
48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and he scans the disk like this completly and only once. Do you have some advice for me? I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So this number of the lseeks before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if the numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. (times are in CET)
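    A side note on the arithmetic behind the 134-day estimate above: it is a plain linear extrapolation, roughly 45 GB covered in about 2 days on a 3 TB disk, and can be reproduced with a one-liner, assuming the lseek offsets really are byte positions and progress stays linear (which the update suggests it may not be):

        echo "scale=1; 3000 / 45 * 2" | bc    # ~133 days for one full linear pass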

    Read the article

  • Odd tcp deadlock under windows

    - by John Robertson
    We are moving large amounts of data on a LAN, and it has to happen very rapidly and reliably. Currently we use Windows TCP as implemented in C++. Using large (synchronous) sends moves the data much faster than a bunch of smaller (synchronous) sends, but it will frequently deadlock for large gaps of time (.15 seconds), causing the overall transfer rate to plummet. This deadlock happens in very particular circumstances, which makes me believe it should be preventable altogether. More importantly, if we don't really know the cause, we don't really know it won't happen some time with smaller sends anyway. Can anyone explain this deadlock?

    Deadlock description (OK, zombie-locked - it isn't dead, but for .15 or so seconds it stops, then starts again):

    1. The receiving side sends an ACK.
    2. The sending side sends a packet containing the end of a message (push flag is set).
    3. The call to socket.recv takes about .15 seconds(!) to return.
    4. About the time the call returns, an ACK is sent by the receiving side.
    5. The next packet from the sender is finally sent (why is it waiting? the TCP window is plenty big).

    The odd thing about (3) is that typically that call doesn't take much time at all and receives exactly the same amount of data. On a 2 GHz machine that's 300 million instructions' worth of time. I am assuming the call doesn't (heaven forbid) wait for the received data to be ACKed before it returns, so the ACK must be waiting for the call to return, or both must be delayed by something else.

    The problem NEVER happens when there is a second packet of data (part of the same message) arriving between 1 and 2. That very clearly makes it sound like it has to do with the fact that Windows TCP will not send back a no-data ACK until either a second packet arrives or a 200 ms timer expires. However, the delay is less than 200 ms (it's more like 150 ms).

    The third unseemly character (and to my mind the real culprit) is (5). Send is definitely being called well before that .15 seconds is up, but the data NEVER hits the wire before that ACK returns. That is the most bizarre part of this deadlock to me. It's not a TCP blockage, because the TCP window is plenty big since we set SO_RCVBUF to something like 500*1460 (which is still under a meg). The data is coming in very fast (basically there is a loop spinning out data via send), so the buffer should fill almost immediately. According to MSDN, the buffer being full and at least one pending send should cause the data to be sent (though in another place it mentions that various "heuristics" are used in deciding when a send hits the wire). Anyway, why the sender doesn't actually send more data during that .15-second pause is the most bizarre part to me.

    The information above was captured on the receiving side via Wireshark (except of course the socket.recv return times, which were logged in a text file). We tried changing the send buffer to zero and turning off Nagle on the sender (yes, I know Nagle is about not sending small packets, but we tried turning it off in case it was part of the unstated "heuristics" affecting whether the message would be posted to the wire; technically Microsoft's Nagle means a small packet isn't sent if the buffer is not full and there is an outstanding ACK, so it seemed like a possibility).
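    For reference, here is an illustrative Winsock sketch of the knobs mentioned above (the enlarged receive buffer on the receiving side, and the zero send buffer plus disabled Nagle experiment on the sending side); this is not the poster's actual code, just the standard setsockopt calls involved:

        #include <winsock2.h>

        // Receiver: enlarge the socket receive buffer, as described above.
        void tune_receiver(SOCKET s)
        {
            int rcvbuf = 500 * 1460;
            setsockopt(s, SOL_SOCKET, SO_RCVBUF,
                       reinterpret_cast<const char*>(&rcvbuf), sizeof(rcvbuf));
        }

        // Sender: the two experiments mentioned above - zero send buffer, Nagle off.
        void tune_sender(SOCKET s)
        {
            int sndbuf = 0;
            setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                       reinterpret_cast<const char*>(&sndbuf), sizeof(sndbuf));

            int nodelay = 1;   // non-zero disables the Nagle algorithm
            setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                       reinterpret_cast<const char*>(&nodelay), sizeof(nodelay));
        }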

    Read the article

  • How can I handle parameterized queries in Drupal?

    - by Anthony Gatlin
    We have a client who is currently using Lotus Notes/Domino as their content management system and web server. For many reasons, we are recommending they sunset their Notes/Domino implementation and transition onto a more modern platform, such as Drupal. The client has several web applications which would be a natural fit for Drupal. However, I am unsure of the best way to implement one of the web applications in Drupal. I am running into a knowledge barrier and wondered if any of you could fill in the gaps.

    Situation

    The client has a Lotus Domino application which serves as a front end for querying a large DB2 data store and returning a result set (generally in table form) to a user via the web. The web application provides access to approximately 100 pre-defined queries, 50 of which are public and 50 of which are secured. Most of the queries accept some set of user-selected parameters as input. The output of the queries is typically returned to users in a list (table) format. A limited number of result sets allow drill-down through the HTML table into detail records.

    The query parameters often involve database queries themselves. For example, a single query may pull a list of company divisions into a drop-down. Once a division is selected, a second drop-down is populated with the departments from that division, but perhaps only departments which meet some special criteria, such as those having taken a loss within a specific time frame. Most queries have 2-4 parameters, with the average probably being 3. The application involves no data entry; none of the back-end data is ever modified by the web application. All access is purely based around querying data and viewing results. The queries change relatively infrequently, and the current system has been in place for approximately 10 years. There may be 10-20 query additions, modifications, or other changes in a given year. The client simply desires to change the presentation platform but absolutely does not want to redo the 100 database queries. Once the project is implemented, the client wants their staff to take over and manage future changes. The client's staff have no background in Drupal or PHP but are somewhat willing to learn as necessary.

    How would you transition this into Drupal? My major knowledge void relates to how we would manage the query parameters and access the queries themselves. Here are a few specific questions, but feel free to chime in on any issue related to this implementation:

    1. Would we have to build 100 forms by hand, with each form containing the parameters for a given query? If so, how would we do this? Approximately how long would it take to build/configure each of these forms?
    2. Is there a better way than manually building 100 forms? (I understand using CCK to enter data into custom content types, but since we aren't adding any nodes, I am a little stuck as to how this might work.)
    3. Would it be possible for the internal staff to learn to create these query parameter forms, even if they are unfamiliar with Drupal today? Would they be required to do any PHP programming?
    4. How would we take the query parameters from a form and execute a query against DB2? Would this require a custom module? If so, would it require one module total or one module per query? (Note: There is apparently a DB2 driver available for Drupal. See http://groups.drupal.org/node/5511.)
Note: I am not looking for CMS recommendations other than Drupal as Drupal nicely fits all of the client's other requirements, and I hope to help them standardize on a single platform. Any assistance you can provide would be helpful. Thank you in advance for your help!
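As a purely illustrative sketch (Python pseudo-module rather than Drupal/PHP, with invented names such as QUERY_DEFS and run_query, and a made-up sample query), one common way to avoid hand-building 100 forms is to describe each query and its parameters as metadata, generate the parameter form from that description, and bind the user's selections as query parameters rather than splicing them into the SQL text:

    # QUERY_DEFS and run_query are hypothetical names used only for this sketch.
    QUERY_DEFS = {
        "division_losses": {
            "sql": "SELECT dept_name, loss_total FROM dept_losses "
                   "WHERE division_id = ? AND fiscal_year = ?",
            "params": [
                {"name": "division_id", "label": "Division",
                 "options_sql": "SELECT id, name FROM divisions ORDER BY name"},
                {"name": "fiscal_year", "label": "Fiscal year", "type": "int"},
            ],
        },
        # ...one entry per predefined query, instead of one hand-built form each
    }

    def run_query(conn, query_id, values):
        # conn is any DB-API connection (for DB2 that could come from ibm_db_dbi);
        # placeholders keep the user's input out of the SQL text itself.
        qdef = QUERY_DEFS[query_id]
        cursor = conn.cursor()
        cursor.execute(qdef["sql"], [values[p["name"]] for p in qdef["params"]])
        return cursor.fetchall()

In Drupal terms the same idea would likely live in a single custom module that reads the query definitions and builds the parameter forms from them, rather than one hand-built form (or one module) per query.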

    Read the article

  • Flatten XML using XSLT but based on nesting level

    - by user2532750
    I'm new to XSLT and I'm trying to write some XSLT that will flatten any given XML such that a new line occurs whenever the nesting level changes. My input can be any XML document, with any number of nested levels, so the structure isn't known to the XSLT. Due to the tools available to me, my solution has to use XSLT version 1.0. For example:

    <?xml version="1.0"?>
    <ROWSET>
      <ROW>
        <CUSTOMER_ID>0</CUSTOMER_ID>
        <NAME>Default Company</NAME>
        <BONUSES>
          <BONUSES_ROW>
            <BONUS_ID>21</BONUS_ID>
            <DESCRIPTION>Performance Bonus</DESCRIPTION>
          </BONUSES_ROW>
          <BONUSES_ROW>
            <BONUS_ID>26</BONUS_ID>
            <DESCRIPTION>Special Bonus</DESCRIPTION>
          </BONUSES_ROW>
        </BONUSES>
      </ROW>
      <ROW>
        <CUSTOMER_ID>1</CUSTOMER_ID>
        <NAME>Dealer 1</NAME>
        <BONUSES>
          <BONUSES_ROW>
            <BONUS_ID>27</BONUS_ID>
            <DESCRIPTION>June Bonus</DESCRIPTION>
            <BONUS_VALUES>
              <BONUS_VALUES_ROW>
                <VALUE>10</VALUE>
                <PERCENT>N</PERCENT>
              </BONUS_VALUES_ROW>
              <BONUS_VALUES_ROW>
                <VALUE>11</VALUE>
                <PERCENT>Y</PERCENT>
              </BONUS_VALUES_ROW>
            </BONUS_VALUES>
          </BONUSES_ROW>
        </BONUSES>
      </ROW>
    </ROWSET>

needs to become...

    0, Default Company
    21, Performance Bonus
    26, Special Bonus
    1, Dealer 1
    27, June Bonus
    10, N
    11, Y

The XSLT I've written so far is...

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text" encoding="iso-8859-1"/>
      <xsl:strip-space elements="*" />
      <xsl:template match="/*/child::*">
        <xsl:apply-templates select="*"/>
      </xsl:template>
      <xsl:template match="*">
        <xsl:value-of select="text()" />
        <xsl:if test="position()!= last()"><xsl:text>,</xsl:text></xsl:if>
        <xsl:if test="position()= last()"><xsl:text>&#xD;</xsl:text></xsl:if>
        <xsl:apply-templates select="./child::*"/>
      </xsl:template>
    </xsl:stylesheet>

but my output just isn't correct, with gaps and unnecessary data:

    0,Default Company,
    ,21,Performance Bonus
    26,Special Bonus
    1,Dealer 1,
    27,June Bonus,
    ,10,N
    11,Y

It seems there needs to be a check as to whether or not a node can contain text, but I'm stuck and could do with an XSLT expert's help.
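The rule being described ("only emit text from elements that have no element children, and start a new line when the nesting level changes") is easier to see outside XSLT. As a small illustrative sketch, not the XSLT answer itself, the following Python applies that rule to the sample document above ("customers.xml" is a placeholder filename) and produces the desired output:

    import xml.etree.ElementTree as ET

    def flatten(elem, out):
        # Leaf children (no child elements) at this level form one output line
        leaves = [c for c in elem if len(c) == 0 and (c.text or "").strip()]
        if leaves:
            out.append(", ".join(c.text.strip() for c in leaves))
        # Recurse into children that themselves contain elements (a deeper nesting level)
        for c in elem:
            if len(c):
                flatten(c, out)

    lines = []
    flatten(ET.parse("customers.xml").getroot(), lines)
    print("\n".join(lines))

The analogous check in XSLT 1.0 would be to guard the value/comma output with something like test="not(*)", so that only elements without element children contribute text to a line.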

    Read the article

  • MySQL and INT auto_increment fields

    - by PHPguy
    Hello folks, I've been developing in LAMP (Linux+Apache+MySQL+PHP) for as long as I can remember. But one question has been bugging me for years now. I hope you can help me to find an answer and point me in the right direction. Here is my challenge: say we are creating a community website, where we allow our users to register. The MySQL table where we store all users would then look like this:

    CREATE TABLE `users` (
      `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID',
      `name` varchar(20) NOT NULL,
      `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text',
      `email` varchar(64) NOT NULL,
      `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration',
      `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email',
      PRIMARY KEY (`uid`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

So, from this snippet you can see that we have a 'uid' field that is unique and automatically incremented for every new user. As on every good and loyal community website, we need to provide users with the possibility to completely delete their profile if they want to cancel their participation in our community. Here comes my problem. Let's say we have 3 registered users: Alice (uid = 1), Bob (uid = 2) and Chris (uid = 3). Now Bob wants to delete his profile and stop using our community. If we delete Bob's profile from the 'users' table then his missing 'uid' will create a gap which will never be filled again. In my opinion it's a huge waste of uid's. I see 3 possible solutions here:

1) Increase the capacity of the 'uid' field in our table from SMALLINT (int(2)) to, for example, BIGINT (int(8)) and ignore the fact that some of the uid's will be wasted.

2) Introduce a new field 'is_deleted', which will be used to mark deleted profiles (but keep them in the table, instead of deleting them) to re-utilize their uid's for newly registered users. The table would then look like this:

    CREATE TABLE `users` (
      `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID',
      `name` varchar(20) NOT NULL,
      `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text',
      `email` varchar(64) NOT NULL,
      `is_deleted` int(1) unsigned NOT NULL default '0' COMMENT 'If equal to "1" then the profile has been deleted and will be re-used for new registrations',
      `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration',
      `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email',
      PRIMARY KEY (`uid`)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

3) Write a script to shift all following user records once a previous record has been deleted. E.g. in our case, when Bob (uid = 2) decides to remove his profile, we would replace his record with the record of Chris (uid = 3), so that Chris's uid becomes equal to 2, and mark (is_deleted = '1') the old record of Chris as vacant for new users. In this case we keep the chronological order of uid's according to the registration time, so that the older users have lower uid's.

Please advise me which is the right way to handle the gaps in auto_increment fields. This is just one example with users, but such cases occur very often in my programming experience. Thanks in advance!
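For what it's worth, gaps in an AUTO_INCREMENT column are normally harmless: in MySQL the number in int(2) is only a display width, so the column is a full 4-byte INT (over four billion values when unsigned), and most applications simply accept the gaps or use the soft-delete flag from option 2 without ever recycling uids. As a minimal illustrative sketch, assuming a DB-API connection from a driver such as MySQLdb and the table definition above, option 2 looks like this from application code:

    def delete_user(conn, uid):
        # Option 2: keep the row (and its uid) but hide it from the application
        cursor = conn.cursor()
        cursor.execute("UPDATE users SET is_deleted = 1 WHERE uid = %s", (uid,))
        conn.commit()

    def active_users(conn):
        cursor = conn.cursor()
        cursor.execute("SELECT uid, name, email FROM users WHERE is_deleted = 0")
        return cursor.fetchall()

Option 3 (shifting records to close the gaps) is generally best avoided, since anything else that references the old uid values, such as foreign keys or URLs, would have to be rewritten as well.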

    Read the article

  • urllib2 misbehaving with dynamically loaded content

    - by Sheena
    Some code:

    headers = {}
    headers['user-agent'] = 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0'
    headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
    headers['Accept-Language'] = 'en-gb,en;q=0.5'
    #headers['Accept-Encoding'] = 'gzip, deflate'
    request = urllib.request.Request(sURL, headers = headers)
    try:
        response = urllib.request.urlopen(request)
    except error.HTTPError as e:
        print('The server couldn\'t fulfill the request.')
        print('Error code: {0}'.format(e.code))
    except error.URLError as e:
        print('We failed to reach a server.')
        print('Reason: {0}'.format(e.reason))
    else:
        f = open('output/{0}.html'.format(sFileName),'w')
        f.write(response.read().decode('utf-8'))

A URL: http://groupon.cl/descuentos/santiago-centro

The situation. Here's what I did:
1. enable javascript in the browser
2. open the url above and keep an eye on the console
3. disable javascript
4. repeat step 2
5. use urllib2 to grab the webpage and save it to a file
6. enable javascript
7. open the file with the browser and observe the console
8. repeat step 7 with javascript off

Results. In step 2 I saw that a whole lot of the page content was loaded dynamically using ajax, so the HTML that arrived was a sort of skeleton and ajax was used to fill in the gaps. This is fine and not at all surprising; since the page should be SEO friendly, it should work fine without JS. In step 4 nothing happens in the console and the skeleton page loads pre-populated, rendering the ajax unnecessary. This is also completely not confusing. In step 7 the ajax calls are made but fail. This is also OK, since the urls they are using are not local; the calls are thus broken. The page looks like the skeleton. This is also great and expected. In step 8 no ajax calls are made and the skeleton is just a skeleton. I would have thought that this should behave very much like in step 4.

Question. What I want to do is use urllib2 to grab the html from step 4, but I can't figure out how. What am I missing and how could I pull this off? To paraphrase: if I were writing a spider, I would want to be able to grab plain ol' HTML (as in that which resulted in step 4). I don't want to execute ajax stuff or any javascript at all. I don't want to populate anything dynamically. I just want HTML. The SEO friendly site wants me to get what I want because that's what SEO is all about. How would one go about getting plain HTML content given the situation I outlined? To do it manually I would turn off JS, navigate to the page and copy the html. I want to automate this.

Stuff I've tried. I used wireshark to look at packet headers, and the GETs sent off from my pc in steps 2 and 4 have the same headers. Reading about SEO stuff makes me think that this is pretty normal, otherwise techniques such as hijax wouldn't be used.

Here are the headers my browser sends:
Host: groupon.cl
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-gb,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive

Here are the headers my script sends:
Accept-Encoding: identity
Host: groupon.cl
Accept-Language: en-gb,en;q=0.5
Connection: close
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0

The differences are: my script has Connection = close instead of keep-alive (I can't see how this would cause a problem), and my script has Accept-Encoding = identity (this might be the cause of the problem, though I can't really see why the host would use this field to determine the user-agent). If I change the encoding to match the browser request headers then I have trouble decoding it. I'm working on this now... watch this space, I'll update the question as new info comes up.
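On the Accept-Encoding point, a minimal sketch of how the gzip case is usually handled, reusing the asker's Python 3-style urllib.request code (sURL and headers as defined above), is to advertise gzip and then decompress only when the response says the body was actually gzip-encoded:

    import gzip
    import urllib.request

    headers['Accept-Encoding'] = 'gzip, deflate'
    request = urllib.request.Request(sURL, headers=headers)
    with urllib.request.urlopen(request) as response:
        body = response.read()
        # Only gunzip when the server actually compressed the body
        if response.headers.get('Content-Encoding') == 'gzip':
            body = gzip.decompress(body)
    html = body.decode('utf-8')

Whether or not the encoding turns out to be why the script receives different content, this at least removes the decoding trouble mentioned above.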

    Read the article

  • Ada and 'The Book'

    - by Phil Factor
    The long friendship between Charles Babbage and Ada Lovelace created one of the most exciting and mysterious of collaborations ever to have resulted in a technological breakthrough. The fireworks created by the collision of two prodigious mathematical and creative talents resulted in an invention, the Analytical Engine, which went on to change society fundamentally. However, beyond that, we just don't know what the bulk of their collaborative work was about; it was done in strictest secrecy. Even the known outcome of their friendship, the first programmable computer, was shrouded in mystery. At the time, nobody, except close friends and family, had any idea of Ada Byron's contribution to the invention of the 'Engine', and how to program it. Her great insight was published in August 1843, under the initials AAL, standing for Ada Augusta Lovelace, her title then being the Countess of Lovelace. It was contained in a lengthy 'note' to her translation of a publication that remains the best description of Babbage's amazing Analytical Engine. The secret identity of the person behind those enigmatic initials was finally revealed by Prince de Polignac who, seventy years later, wrote to Ada's daughter to seek confirmation that her mother had, indeed, been the author of the brilliant sentences that described so accurately how Babbage's mechanical computer could be programmed with punch-cards. L.F. Menabrea's paper on the Analytical Engine first appeared in the 'Bibliotheque Universelle de Geneve' in October 1842, and Ada translated it anonymously for Taylor's 'Scientific Memoirs'. Charles Babbage was surprised that she had not written an original paper, as she already knew a surprising amount about the way the machine worked. He persuaded her to at least write some explanatory notes. These notes ended up extending to four times the length of the original article and represented the first published account of how a machine could be programmed to perform any calculation. Her example of programming the Bernoulli sequence would have worked on the Analytical Engine had the device's construction been completed, and gave Ada an unassailable claim to have invented the art of programming. What was the reason for Ada's secrecy? She was the only legitimate child of Lord Byron, who was probably the best-known celebrity of the age, so she was already famous. She was a senior aristocrat, with titles, a fortune in money and vast estates in the Midlands. She had political influence, and was the cousin of Lord Melbourne, who was the Prime Minister at that time. She was friendly with the young Queen Victoria. Her mathematical activities were a pastime, and not one that would be considered by others to be in keeping with her roles and responsibilities. You wouldn't dare to dream up a fictional heroine like Ada. She was dazzlingly beautiful and talented. She could speak several languages fluently, and play some musical instruments with professional skill. Contemporary accounts refer to her being 'accomplished in science, art and literature'. On top of that, she was a brilliant mathematician, a talent inherited from her mother, Annabella Milbanke. In her mother's circle of literary and scientific friends was Charles Babbage, and Ada's friendship with him dates from her teenage zest for Mathematics. She was one of the first people he'd ever met who understood what he had attempted to achieve with the 'Difference Engine', and with whom he could converse as intellectual equals.
He arranged for her to have an education from the most talented academics in the country. Ada melted the heart of the cantankerous genius to the point that he became a faithful and loyal father-figure to her. She was one of the very few who could grasp the principles of the later, and very different, 'Analytical Engine', which was designed from the start to tackle a variety of tasks. Sadly, Ada Byron's life ended less than a decade after completing the work that assured her long-term fame, in November 1852. She was dying of cancer, her gambling habits had caused her to run up huge debts, she'd had more than one affair, and she was being blackmailed. Her brilliant but unempathic mother was nursing her in her final illness, destroying her personal letters and records, and repaying her debts. Her husband was distraught but helpless. Charles Babbage, however, maintained his steadfast paternalistic friendship to the end. She appointed her loyal friend to be her executor. For years, she and Babbage had been working together on a secret project, known only as 'The Book'. We have a clue to what it was in a letter written by her nine years earlier, on 11th August 1843. It was a joint project by herself and Lord Lovelace, her husband, and was intended to involve Babbage's 'undivided energies'. It involved 'consulting your Engine' (it required Babbage's computer). The letter gives no hint about the project except for the high-minded nature of its purpose, and its highly mathematical nature. From then on, the surviving correspondence between the two gives only veiled references to 'The Book'. There isn't much, since Babbage later destroyed any letters that could have damaged her reputation within the Establishment. 'I cannot spare the book today, which I am very sorry for. At the moment I want it for constant reference, but I think you can have it tomorrow' (Oct 1844). And 'I will send you the book directly, and you can say, when you receive it, how long you will want to keep it' (Nov 1844). The two of them were obviously intent on the work: she writes, four years later, 'I have an engagement for Wednesday which will prevent me from attending to your wishes about the book' (Dec 1848). This was something that they both needed to work on, but could not do in parallel: 'I will send the book on Tuesday, and it can be left with you till Friday' (11 Feb 1849). After six years' work, it had been so well-handled that it was beginning to fall apart: 'Don't forget the new cover you promised for the book. The poor book is very shabby and wants one' (20 Sept 1849). So what was going on? The word 'book' was not a code-word: it was a real book, probably a 'printer's blank', plain paper, but properly bound so printers and publishers could show off how the published work might look. The hints from the correspondence are of advanced mathematics. It is obvious that the book was travelling between them, back and forth, each one working on it for less than a week before passing it back. Ada and her husband were certainly involved in gambling large sums of money on the horses, and so most biographers have concluded that the three of them were trying to calculate the mathematical odds on the horses. This theory has three large problems. Firstly, Ada's original letter proposing the project refers to its high-minded nature. Babbage was temperamentally opposed to gambling and would scarcely have given so much time to the project, even though he was devoted to Ada.
Secondly, Babbage would very soon have realized the hopelessness of trying to beat the bookies. This sort of betting never attracts people of his intellectual background. The third problem is that any work on calculating the odds on horses would not need a well-thumbed book to pass back and forth between them; they would not have had to work in series. The original project was instigated by Ada, along with her husband, William King-Noel, 1st Earl of Lovelace. Charles Babbage was invited to join the project after the couple had come up with the idea. What could William have contributed? One might assume that William was a Bertie Wooster character, addicted only to the joys of the turf, but this was far from the truth. He was a scientist, a Cambridge graduate who was later elected to be a Fellow of the Royal Society. After Eton, he went to Trinity College, Cambridge. On graduation, he entered the diplomatic service and acted as secretary under Lord Nugent, who was Lord Commissioner of the Ionian Islands. William was very friendly with Babbage too, able to discuss scientific matters on equal terms. He was a capable engineer who invented a process for bending large timbers by the application of steam heat. He delivered a paper to the Institution of Civil Engineers in 1849, and received praise from the great engineer, Isambard Kingdom Brunel. As well as being Lord Lieutenant of the County of Surrey for most of Victoria's reign, he had time for a string of scientific and engineering achievements. Whatever the project was, it is unlikely that William was a junior partner. After Ada's death, the project disappeared. Then, two years later, Babbage, through one of his occasional outbursts of temper, demonstrated that he was able to decrypt one of the most powerful of secret codes, Vigenère's autokey cipher. All contemporary diplomatic and military messages used a variant of this cipher. Babbage had made three important discoveries, namely the mathematical law of this cipher, the principle of key periodicity, and the technique of the symmetry of position. The technique is now known as the Kasiski examination, also called the Kasiski test, but Babbage got there first. At one time, he listed amongst his future projects the writing of a book, 'The Philosophy of Decyphering', but it never came to anything. This discovery was going to change the course of history, since it was used to decipher the Russians' military dispatches in the Crimean War. Babbage himself played a role during the Crimean War as a cryptographical adviser to his friend, Rear-Admiral Sir Francis Beaufort of the Admiralty. This is as much as we can be certain about in trying to make sense of the bulk of the time that Charles Babbage and Ada Lovelace worked together. Nine years of intensive work, involving the 'Engine' and a great deal of mathematics and research, seems to have been lost: or has it? I've argued in the past (http://www.simple-talk.com/community/blogs/philfactor/archive/2008/06/13/59614.aspx) that the cracking of the Vigenère autokey cipher was a fundamental motive behind the British Government's support and funding of the 'Difference Engine'. The Duke of Wellington, who understood the military significance of being able to read enemy dispatches, was the most steadfast advocate of the project.
If the three friends were actually doing the work of cracking codes by mathematical techniques that used the techniques of key periodicity, and symmetry of position (the use of a book being passed quickly to and fro is very suggestive), intending to then use the 'Engine' to do the routine cracking of each dispatch, then this is a rather different story. The project was Ada and William's idea. (William had served in the diplomatic service and would be familiar with the use of codes). This makes Ada Lovelace the initiator of a project which, by giving both Britain, and probably the USA, a diplomatic and military advantage in the second part of the Nineteenth century, changed world history. Ada would never have wanted any credit for cracking the cipher, and developing the method that rendered all contemporary military and diplomatic ciphering techniques nugatory; quite the reverse. And it is clear from the gaps in the record of the letters between the collaborators that the evidence was destroyed, probably on her request by her irascible but intensely honorable executor, Charles Babbage. Charles Babbage toyed with the idea of going public, but the Crimean war put an end to that. The British Government had a valuable secret, and intended to keep it that way. Ada and Charles had quite often discussed possible moneymaking projects that would fund the development of the Analytic Engine, the first programmable computer, but their secret work was never in the running as a potential cash cow. I suspect that the British Government was, even then, working on the concealment of a discovery whose value to the nation depended on it remaining so. The success of code-breaking in the Crimean war, and the American Civil war, led to the British and Americans  subsequently giving much more weight and funding to the science of decryption. Paradoxically, this makes Ada's contribution even closer to the creation of Colossus, the first digital computer, at Bletchley Park, specifically to crack the Nazi’s secret codes.

    Read the article

  • CodePlex Daily Summary for Monday, April 19, 2010

    CodePlex Daily Summary for Monday, April 19, 2010New Projects8085 Microprocessor simulator: This program allows you to write 8085 programs in assembly and run those programs on your PC. It comes with lots of help, plus you can put breakpo...Additional.NET framework: The Additional.Net framework extends the functionality of the .NET framework for easier application development. It is developed in C#.Astoria Contrib: A contrib project for filling the gaps in WCF Data Services, providing missing functionality or augmenting with T4 templates, helpers, etc.ClipoWeb: ClipoWeb is a web clipboard that allows you to copy text and files between computers. Users access a web page on the source and destination compute...elearning Center: Đây là một ứng dụng web viết hoàn toàn bằng Sliverlight. Ứng dụng này là một dạng elearning với đầy đủ chức năng và có khả năng tương tác tối đa v...Excel VSTO SQL Server Browser: Get Data from SQL Server and put it in Excel directly. The objective is to get more control about what do you need to pull and create automatic pro...Generic Tree Structure: Generic Base Classes that helps you to create complex tree structures without writing it again and again. Simply to use Like "var Node<Folder> fold...LAN Lordz LAN Party System: The LAN Lordz LAN Party System makes it easier for medium and large size events to track their attendance, sponsors, door prizes, tournaments, and...LiteFx: O LiteFx é um framework que ajuda na implementação de DDD (anêmico ou rico) ele foi desenvolvido por Douglas Aguiar (http://twitter.com/DouglasAgui...Managed UI Flow for ASP.NET MVC Framework: If your web application getting more complex, understanding and managing of complex UI flows(pageflow of application) getting harder and harder, If...Meus Exemplos: Meus ExemplosOrchard Blueprint Theme: Orchard BluePrint is a project that provides a WYSIWYG reference implementation of a Orchard theme to help designers get started with theme design....Outlook Social Network Connector - Avatar: Avatar 是一个开源的MS Outlook的插件,豆瓣用户可以在Outlook 2010中使用豆瓣。查看一封邮件中相关的收件人、发件人的用户广播、同城活动以及豆邮。不用上豆瓣也能方便了解好友动态。这个插件使用C#, .NET 4.0 开发。API 请求认证使用OAuth 认证。 (Avat...Quadro Tree: This is Quadri tree library.Sharepoint 2010 Alert Controller: In MOSS 2007 or Sharepoint Server 2010 if you want to see your alerts by list name you should use this tool.SharePoint Web Parts: The goal of this project is to develop a set of web parts for SharePoint.Silverlight Image Cropper: This is a silverlight 4 util that makes it easy to crop out a number images of a specific resolution screen or screens. ie. an easy way to crop ...SilverlightFTP: Silverlight ftp clientsplibex: libraries for sharepoint lists manipulationStardustExtensions: Official Extensions for StardustSwim Team Manager: Swim Team Manager is designed for managing and tracking administrative and performance information for your club, school, or swim team. Swim Tea...ToDoListWpf: A To Do List, I used it to manage my work items. I am sorry for my poor English.Trance Layer: TranceLayer is a fast and flexible logging or diagnostics framework for .Net. It allows you to plug it into an existing or new application with m...Unoficcial NeoFM.hu NowPlaying: A little windows tray program. 
Shows what's on neofm.hu right now.WabbitStudio Z80 Software Tools: The software suite provides all of the tools you need to create high quality Z80 software in Z80 assembly language, with a focus on TI calculators....WinToolbar: Windows.Toolbar is Silverlight library that implements common widgets that allows us to build a rich toolbar control in our applications, it incor...XP-More: XP-More is a tool that helps manage Windows 7 Virtual Machines (XP Mode and any other). Specifically, it makes duplication of VMs a no brainer - no...Yodelay .NET Framework Extensions: The Yodelay .NET Framework Extensions project provides a library of components that make many kinds of programming tasks simpler. These include bas...New ReleasesClipoWeb: ClipoWeb 1.0: First Beta release of the ClipoWeb web applicationDDDSample.Net: 0.8: This release contains all four versions of DDDSample.Net available in previous, 0.7 and a brand new one: Layered Model version. Layered Model demon...DotNetNuke Blueprint: 00.00.02: Added to this version CSS Reset Skin version including Grids This version will soon be updated with corresponding HTML version and DNN templateEsferatec.Text.RegularExpressions: 3.5.1003.1001: first stable release of the class; the assembly file is ready to use, the documentation is complete;Excel VSTO SQL Server Browser: Sample Only: Sample without Ribbon UI, if you close the TaskPane you will no longer able to open it without restart ExcelFolder Bookmarks: Folder Bookmarks 1.5.5: This is the latest version of Folder Bookmarks (1.5.5), with the new Archive Manager and Archive Viewer. It has an installer - it will create a dir...Gardens Point LEX: Gardens Point LEX, Version 1.1.3: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. ChangesVersion 1.1.3 corre...Gardens Point Parser Generator: Gardens Point Parser Generator V1.4.0: The distribution is a zip archive which contains the binary executables, documentation, source code and examples. ChangesVersion 1.4.0 of GPPG has...HKGolden Express: HKGoldenExpress (Build 201004181455): New features: Added rating of each topic. (Note: This feature is availabe since Build 201004172120) Bug fix: Handle invalid XML character in XML...Home Access Plus+: v4.0.0.0 Beta: v4.0.0.0 Beta Change Log: Moved to using .net 4.0 New Silverlight Uploader Various .net 4 fixes and tweaks File Changes: All fixes have changedHTML Ruby: 6.21.6: Reduced performance hit on pages with heavy DOM manipulations Fixed issue where empty tags caused it to apply invalid spacing values Stop spaci...LINQ to VFP: LinqToVfp (v1.0.17.2): Modified to allow using RecNo as a primary key. This build requires IQToolkit v0.17b.Managed UI Flow for ASP.NET MVC Framework: Preview 1: The source available on this site, does not reflect the final state of the project, it is a preview of what will be shipping in the framework in th...MVVM Light Toolkit: MVVM Light Toolkit V3 SP1 (2): Super minor update to accommodate the new Blend 4 RC. Only changes: The path to the Blend 4 templates changed to be "My Documents\Expression\Blend...N2 CMS: 2.0 rc: N2 is a lightweight CMS framework for ASP.NET. It helps you build great web sites that anyone can update. Major Changes (1.5 -> 2.0 release candid...OpenGL ES 2.0 Compact Framework Wrapper: Sample application CAB with texturing: This took some time as it was pretty hard to get the texture loaded and setup so that it would bind to the sampler2D in the fragment shader. 
Featu...Orchard Blueprint Theme: 00.00.01: This is the first release of this project, still in a very alpha version. Very soon this release is to be updated with the HTML version of the them...RoughJs: RoughJsSL: This is Silverlight library's CompilerSharepoint 2010 Alert Controller: Sharepoint 2010 Alert Controller: After you download WSP file you can get help from Home PageSharePoint LogViewer: SharePoint LogViewer 2.5: Minimize log viewer to tray Get popup notification of SharePoint log events from tray Redirect log entries to event log Send email notifications on...Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.1: This is a minor update which includes the following changes: Code consolidation across the whole project Additional site data captured. See solut...Stardust: Stardust 1.0: First stable version of Stardust (Build 172)StardustExtensions: Facebook Extension: Extension for stardust to upload and post images on Facebook.StardustExtensions: Facebook Extension (Source): The source code of an extension for Stardust used to post images on facebook.StardustExtensions: WPF Example: This is an example extension. Uses WPF to create a Window and say "Hello World!" Is a perfect download if want to start writing Stardust ExtensionsStardustExtensions: WPF Example Source: This is the source code of an extension that creates a Window using WPF & displays a simple text. Is great as an example of creating Stardust Exten...TFTP Server: TFTP Server 1.1 Beta installer: New MSI based installer Installs a TFTP service Supports multiple servers on different endpoints, with every server pointing to its own root di...TiledLib: TiledLib 1.1: This download is for prebuilt DLLs and a demo project. For the full source code, use the Source Code tab. Changes: Bug fixes in a few methods Ad...Trance Layer: TranceLayer Digger: Digger version is a beta. It is intended to be used as a demonstration of muscles while lacking a set of features that are in the docs. The set of ...uManage - Active Directory Self-Service Portal: uManage v1.2 (.NET 4.0 RTM): New Releasev1.2 Adds the Administrative Portal as well as the requirement of a MSSQL database (2005+). The Setup Wizard has also been updated to i...Unoficcial NeoFM.hu NowPlaying: NeoNotifier: First release. Aplha, but usable.VidCoder: 0.2.1: Changes: Added 2-pass encoding Fixed x264 options getting mangled during p-invoke Fixed intermittent crash with logging window open due to thre...WCF RIA Services Contrib: WCF RIA Services Contrib RC2 Release: This version is for the WCF RIA Services RC2 (SL4 RTM) release. The ApplyState has been modifed in this version to disable validation during proces...WiiCIS.NET: WiiCIS.NET v0.2: Changes... 
- Removal of WiimoteManager, connection must be done manually - Accelerometer orientation was originally in degrees, is now in radians -...WinToolbar: WinToolbar Source code plus sample: This zip file contains the current version source code and libraries plus a testrunner (sample app).XP-More: 0.9 (Beta): Most of the functionality is in place, final polishing will be done soon.Most Popular ProjectsFacebook Developer ToolkitWSPBuilder (SharePoint WSP tool)QuickGraph, Graph Data Structures And Algorithms for .NetPerformance Analysis of Logs (PAL) Toolpatterns & practices: Team Development with Visual Studio Team Foundation ServerTFS Integration Platformpatterns & practices: Performance Testing Guidance for Web Applicationspatterns & practices: Enterprise Library ContribJSON ViewerManaged Wifi APIMost Active ProjectsRawrpatterns & practices – Enterprise LibraryIndustrial DashboardIonics Isapi Rewrite FilterFarseer Physics EngineMVVM Light ToolkitjQuery Library for SharePoint Web ServicesN2 CMSCaliburn: An Application Framework for WPF and SilverlightBlogEngine.NET

    Read the article

  • Finding the Right Solution to Source and Manage Your Contractors

    - by mark.rosenberg(at)oracle.com
    Many of our PeopleSoft Enterprise applications customers operate in service-based industries, and all of our customers have at least some internal service units, such as IT, marketing, and facilities. Employing the services of contractors, often referred to as "contingent labor," to deliver either or both internal and external services is common practice. As we've transitioned from an industrial age to a knowledge age, talent has become a primary competitive advantage for most organizations. Contingent labor offers talent on flexible terms; it offers the ability to scale up operations, close skill gaps, and manage risk in the process of delivering services. Talent comes from many sources and the rise in the contingent worker (contractor, consultant, temporary, part time) has increased significantly in the past decade and is expected to reach 40 percent in the next decade. Managing the total pool of talent in a seamless integrated fashion not only saves organizations money and increases efficiency, but creates a better place for workers of all kinds to work. Although the term "contingent labor" is frequently used to describe both contractors and employees who have flexible schedules and relationships with an organization, the remainder of this discussion focuses on contractors. The term "contingent labor" is used interchangeably with "contractor." Recognizing the importance of contingent labor, our PeopleSoft customers often ask our team, "What Oracle vendor management system (VMS) applications should I evaluate for managing contractors?" In response, I thought it would be useful to describe and compare the three most common Oracle-based options available to our customers. They are:   The enterprise licensed software model in which you implement and utilize the PeopleSoft Services Procurement (sPro) application and potentially other PeopleSoft applications;  The software-as-a-service model in which you gain access to a derivative of PeopleSoft sPro from an Oracle Business Process Outsourcing Partner; and  The managed service provider (MSP) model in which staffing industry professionals utilize either your enterprise licensed software or the software-as-a-service application to administer your contingent labor program. At this point, you may be asking yourself, "Why three options?" The answer is that since there is no "one size fits all" in terms of talent, there is also no "one size fits all" for effectively sourcing and managing contingent workers. Various factors influence how an organization thinks about and relates to its contractors, and each of the three Oracle-based options addresses an organization's needs and preferences differently. For the purposes of this discussion, I will describe the options with respect to (A) pricing and software provisioning models; (B) control and flexibility; (C) level of engagement with contractors; and (D) approach to sourcing, employment law, and financial settlement. Option 1:  Enterprise Licensed Software In this model, you purchase from Oracle the license and support for the applications you need. Typically, you license PeopleSoft sPro as your VMS tool for sourcing, monitoring, and paying your contract labor. In conjunction with sPro, you can also utilize PeopleSoft Human Capital Management (HCM) applications (if you do not already) to configure more advanced business processes for recruiting, training, and tracking your contractors. 
Many customers choose this enterprise license software model because of the functionality and natural integration of the PeopleSoft applications and because the cost for the PeopleSoft software is explicit. There is no fee per transaction to source each contractor under this model. Our customers that employ contractors to augment their permanent staff on billable client engagements often find this model appealing because there are no fees to affect their profit margins. With this model, you decide whether to have your own IT organization run the software or have the software hosted and managed by either Oracle or another application services provider. Your organization, perhaps with the assistance of consultants, configures, deploys, and operates the software for managing your contingent workforce. This model offers you the highest level of control and flexibility since your organization can configure the contractor process flow exactly to your business and security requirements and can extend the functionality with PeopleTools. This option has proven very valuable and applicable to our customers engaged in government contracting because their contingent labor management practices are subject to complex standards and regulations. Customers find a great deal of value in the application functionality and configurability the enterprise licensed software offers for managing contingent labor. Some examples of that functionality are... The ability to create a tiered network of preferred suppliers including competencies, pricing agreements, and elaborate candidate management capabilities. Configurable alerts and online collaboration for bid, resource requisition, timesheet, and deliverable entry, routing, and approval for both resource and deliverable-based services. The ability to manage contractors with the same PeopleSoft HCM and Projects applications that are used to manage the permanent workforce. Because it allows you to utilize much of the same PeopleSoft HCM and Projects application functionality for contractors that you use for permanent employees, the enterprise licensed software model supports the deepest level of engagement with the contingent workforce. For example, you can: fill job openings with contingent labor; guide contingent workers through essential safety and compliance training with PeopleSoft Enterprise Learning Management; and source contingent workers directly to project-based assignments in PeopleSoft Resource Management and PeopleSoft Program Management. This option enables contingent workers to collaborate closely with your permanent staff on complex, knowledge-based efforts - R&D projects, billable client contracts, architecture and engineering projects spanning multiple years, and so on. With the enterprise licensed software model, your organization maintains responsibility for the sourcing, onboarding (including adherence to employment laws), and financial settlement processes. This means your organization maintains on staff or hires the expertise in these domains to utilize the software and interact with suppliers and contractors. Option 2:  Software as a Service (SaaS) The effort involved in setting up and operating VMS software to handle a contingent workforce leads many organizations to seek a system that can be activated and configured within a few days and for which they can pay based on usage. Oracle's Business Process Outsourcing partner, Provade, Inc., provides exactly this option to our customers. 
Provade offers its vendor management software as a service over the Internet and usually charges your organization a fee that is a percentage of your total contingent labor spending processed through the Provade software. (Percentage of spend is the predominant fee model, although not the only one.) In addition to lower implementation costs, the effort of configuring and maintaining the software is largely upon Provade, not your organization. This can be very appealing to IT organizations that are thinly stretched supporting other important information technology initiatives. Built upon PeopleSoft sPro, the Provade solution is tailored for simple and quick deployment and administration. Provade has added capabilities to clone users rapidly and has simplified business documents, like work orders and change orders, to facilitate enterprise-wide, self-service adoption with little to no training. Provade also leverages Oracle Business Intelligence Enterprise Edition (OBIEE) to provide integrated spend analytics and dashboards. Although pure customization is more limited than with the enterprise licensed software model, Provade offers a very effective option for organizations that are regularly on-boarding and off-boarding high volumes of contingent staff hired to perform discrete support tasks (for example, order fulfillment during the holiday season, hourly clerical work, desktop technology repairs, and so on) or project tasks. The software is very configurable and at the same time very intuitive to even the most computer-phobic users. The level of contingent worker engagement your organization can achieve with the Provade option is generally the same as with the enterprise licensed software model since Provade can automatically establish contingent labor resources in your PeopleSoft applications. Provade has pre-built integrations to Oracle's PeopleSoft and the Oracle E-Business Suite procurement, projects, payables, and HCM applications, so that you can evaluate, train, assign, and track contingent workers like your permanent employees. Similar to the enterprise licensed software model, your organization is responsible for the contingent worker sourcing, administration, and financial settlement processes. This means your organization needs to maintain the staff expertise in these domains. Option 3:  Managed Services Provider (MSP) Whether you are using the enterprise licensed model or the SaaS model, you may want to engage the services of sourcing, employment, payroll, and financial settlement professionals to administer your contingent workforce program. Firms that offer this expertise are often referred to as "MSPs," and they are typically staffing companies that also offer permanent and temporary hiring services. (In fact, many of the major MSPs are Oracle applications customers themselves, and they utilize the PeopleSoft Solution for the Staffing Industry to run their own business operations.) Usually, MSPs place their staff on-site at your facilities, and they can utilize either your enterprise licensed PeopleSoft sPro application or the Provade VMS SaaS software to administer the network of suppliers providing contingent workers. When you utilize an MSP, there is a separate fee for the MSP's service that is typically funded by the participating suppliers of the contingent labor. Also in this model, the suppliers of the contingent labor (not the MSP) usually pay the contingent labor force. 
With an MSP, you are intentionally turning over business process control for the advantages associated with having someone else manage the processes. The software option you choose will to a certain extent affect your process flexibility; however, the MSPs are often able to adapt their processes to the unique demands of your business. When you engage an MSP, you will want to give some thought to the level of engagement and "partnering" you need with your contingent workforce. Because the MSP acts as an intermediary, it can be very valuable in handling high volume, routine contracting for which there is a relatively low need for "partnering" with the contingent workforce. However, if your organization (or part of your organization) engages contingent workers for high-profile client projects that require diplomacy, intensive amounts of interaction, and personal trust, introducing an MSP into the process may prove less effective than handling the process with your own staff. In fact, in many organizations, it is common to enlist an MSP to handle contractors working on internal projects and to have permanent employees handle the contractor relationships that affect the portion of the services portfolio focused on customer-facing, billable projects. One of the key advantages of enlisting an MSP is that you do not have to maintain the expertise required for orchestrating the sourcing, hiring, and paying of contingent workers.  These are the domain of the MSPs. If your own staff members are not prepared to manage the essential "overhead" processes associated with contingent labor, working with an MSP can make solid business sense. Proper administration of a contingent workforce can make the difference between project success and failure, operating profit and loss, and legal compliance and fines. Concluding Thoughts There is little doubt that thoughtfully and purposefully constructing a service delivery strategy that leverages the strengths of contingent workers can lead to better projects, deliverables, and business results. What requires a bit more thinking is determining the platform (or platforms) that will enable each part of your organization to best deliver on its mission.

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC). var wins = xs     .HoppingWindow(TimeSpan.FromSeconds(5),                    TimeSpan.FromSeconds(2),                    new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc)); Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up? var query = wins.Select(win => win.Count()); A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.) That was the good news. Now the bad news. Events also have duration. Consider the following simple input: var xs = this.Application                 .DefineEnumerable(() => new[]                     { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })                 .ToStreamable(AdvanceTimeSettings.IncreasingStartTime); Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming. The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2 so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher. Figure 1: Timeline The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function that rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks: static long SnapToBoundary(long t, long a, long p) {     return t - ((t - a) % p) - (t > a ? 
0L : p); } How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time assuming that there are no gaps in the windows (i.e. duration < interval), and limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends: long e = a + (d % p); Using the window end alignment, we are finally ready to describe the start time selector: static long AdjustStartTime(long t, long e, long p) {     return SnapToBoundary(t, e, p) + p; } To find the latest window including the end time for an event, we look for the last window start time (non-inclusive): public static long AdjustEndTime(long t, long a, long d, long p) {     return SnapToBoundary(t - 1, a, p) + p + d; } Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1: public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment) {     if (source == null) throw new ArgumentNullException("source");     // reason about DateTime and TimeSpan in ticks     long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);     long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));     // set alignment to earliest possible window     var a = alignment.ToUniversalTime().Ticks % p;     // verify constraints of this solution     if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }     if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }     // find the alignment of window ends     long e = a + (d % p);     return source.AlterEventLifetime(         evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),         evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -             ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p))); } public static DateTime ToDateTime(long ticks) {     // just snap to min or max value rather than under/overflowing     return ticks < DateTime.MinValue.Ticks         ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)         : ticks > DateTime.MaxValue.Ticks         ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)         : new DateTime(ticks, DateTimeKind.Utc); } Finally, we can describe our custom hopping window operator: public static IQWindowedStreamable<T> HoppingWindow2<T>(     IQStreamable<T> source,     TimeSpan duration,     TimeSpan interval,     DateTime alignment) {     if (source == null) { throw new ArgumentNullException("source"); }     return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow(); } By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing! 
public void Main() {     var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);     var duration = TimeSpan.FromSeconds(5);     var interval = TimeSpan.FromSeconds(2);     var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);     var events = this.Application.DefineEnumerable(() => new[]     {         EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),         EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),         EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),         EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),         EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),         EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),         EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),     }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);     var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);     var query = from win in HoppingWindow2(events, duration, interval, alignment)                 select win.Count();     DisplayResults(adjustedEvents, "Adjusted Events");     DisplayResults(query, "Query"); } As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time: Adjusted Events StartTime EndTime Payload 6/28/2012 12:00:01 AM 12/31/9999 11:59:59 PM e0 6/28/2012 12:00:03 AM 6/28/2012 12:00:07 AM e1 6/28/2012 12:00:05 AM 6/28/2012 12:00:15 AM e2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM e3 Query StartTime EndTime Payload 6/28/2012 12:00:01 AM 6/28/2012 12:00:03 AM 1 6/28/2012 12:00:03 AM 6/28/2012 12:00:05 AM 2 6/28/2012 12:00:05 AM 6/28/2012 12:00:07 AM 3 6/28/2012 12:00:07 AM 6/28/2012 12:00:11 AM 2 6/28/2012 12:00:11 AM 6/28/2012 12:00:15 AM 3 6/28/2012 12:00:15 AM 12/31/9999 11:59:59 PM 1 Regards, The StreamInsight Team
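To make the snap-to-boundary arithmetic easier to follow, here is a small illustrative re-statement in Python (not part of the StreamInsight sample). It works in whole seconds rather than ticks, uses "one second before the end time" in place of "one tick before", and assumes all timestamps fall after the alignment point, as they do in the example. With duration 5 and interval 2, event e1 from the example (starting 1 second and ending 2 seconds into the day) is stretched to cover seconds 3 through 7 of the day, matching the 12:00:03 AM – 12:00:07 AM row in the adjusted-events output:

    def snap_to_boundary(t, a, p):
        # Round t down to the nearest a + n*p (all values in seconds here;
        # assumes t > a, since C#'s % differs from Python's for earlier times)
        return t - ((t - a) % p) - (p if t <= a else 0)

    def adjust_start(t, e, p):
        # Start of the interval assigned to the earliest window containing t
        return snap_to_boundary(t, e, p) + p

    def adjust_end(t, a, d, p):
        # End of the interval assigned to the latest window containing t
        return snap_to_boundary(t - 1, a, p) + p + d

    a, d, p = 0, 5, 2      # alignment, duration, interval (seconds)
    e = a + (d % p)        # alignment of window *ends*
    day = 1000             # stand-in for midnight of June 28 (even, so boundary-aligned)
    # e1 starts 1 second and ends 2 seconds into the day
    print(adjust_start(day + 1, e, p) - day,
          adjust_end(day + 2, a, d, p) - day)   # -> 3 7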

    Read the article

  • The Internet of Things & Commerce: Part 3 -- Interview with Kristen J. Flanagan, Commerce Product Management

    - by Katrina Gosek, Director | Commerce Product Strategy-Oracle
    Internet of Things & Commerce Series: Part 3 (of 3)

And now for the final installment of my three-part series on the Internet of Things & Commerce. Post one, "The Next 7,000 Days", introduced the idea of the Internet of Things, followed by a second post interviewing one of our chief commerce innovation strategists, Brian Celenza. This final post in the series is an interview with Kristen J. Flanagan, lead product manager for Oracle Commerce omnichannel strategy. She takes us through the past, present, and future of how our Commerce Solution is re-imagining the way physical and digital shopping come together.

-------

QUESTION: It's your job to stay on top of what our customers need to not only run their online businesses effectively, but also to make sure they have product capabilities they can innovate and grow on. What key trend has been top-of-mind for you and our customers around this collision of physical and digital shopping?

Kristen: I'll agree with Brian Celenza that hands down mobile has forced a major disruption in shopping and selling behavior. A few years ago, mobile exploded at a pace I don't think anyone was expecting. Early on, we saw our customers scrambling to establish a mobile presence---mostly through "screen scraping" technologies. As smartphones continued to advance (at lightning speed!), our customers started to investigate ways to truly tap into their eCommerce capabilities to deliver the mobile experience. They started looking to us for a means of using the eCommerce services and capabilities to deliver a mobile experience that is tailored for mobile rather than the desktop experience on a smaller screen. In the future, I think we'll see customers starting to really understand what their shoppers need and expect from a mobile offering and how they can adapt their content and delivery of that content to meet those needs. And, mobile shopping doesn't stop at the consumer / buyer. Because the in-store experience is compelling and has advantages that digital just can't offer, we're also starting to see the eCommerce services being leveraged for mobile for in-store sales associates. Brick-and-mortar retailers are interested in putting the omnichannel product catalog, promotions, and cart into the hands of knowledgeable associates. Retailers are now looking to connect and harness the eCommerce data in-store so that shoppers have a reason to walk in. I think we'll be seeing a lot more customers thinking about melding the in-store and digital experiences to present a richer offering for shoppers.

QUESTION: What are some examples of what our customers are doing currently to bring these concepts to reality?

Kristen: Well, without question, connecting digital and brick-and-mortar worlds is becoming table stakes for selling experiences. If a brand has a foot in both worlds (i.e., isn't a pureplay online retailer), they have to connect the dots because shoppers – whether consumers or B2B buyers – don't think in clearly defined channels anymore. The expectation is connectedness – for on- and offline experiences, promotions, products, and customer data. What does this mean practically for businesses selling goods on- and offline? It touches a lot of systems: inventory info on the eCommerce site, fulfillment options across channels (buy online/pickup in store), order information (representing various channels for a cohesive view of shopper order history), promotions across digital and store, etc. A few years ago, the main link between store and digital was the smartphone.
We all remember when “apps” became a thing and many of our customers were scrambling to get a native app out there. Now we're seeing more strategic thinking around the benefits of mobile web vs. native and how that ties into the purpose and role of mobile within the digital channel – and, put more broadly, how these pieces fit together in the overall brand puzzle.

The same could be said for “showrooming.” Where showrooming was once the major concern (i.e., shoppers using stores to look at merchandise and then ordering online from Amazon), in recent months it has emerged that the inverse is becoming a reality as well. "Webrooming" (using digital sites to do research before making a purchase in the store) is a new behavior that pure-play online retailers are challenged with. There are many technologies, behaviors, and pieces of information that need to tie together to offer a holistic omnichannel shopping experience. As a result, brands are looking for ways to connect the digital and in-store experiences to bridge the gaps: shared assortments across channels, assisted selling apps that arm associates with information about shoppers, shared promotions, inventory, etc.

QUESTION: How has Oracle Commerce been built to help brands make the link between in-store and digital over the last few years?

Kristen: Over the last seven years, the product has been in step with the changes in industry needs. Here is a brief history of the evolution. Prior to Oracle’s acquisition of ATG and Endeca, key investments were made in cross-channel functionality that we are still building on today.

Commerce Service Center (v2007.1)
ATG introduced the Commerce Service Center in 2007.1, which marked the first entry into what was then called “cross-channel.” The Commerce Service Center is a call-center-agent-facing application that enables agents to see shopper orders, the online catalog, promotions, and pricing. It is tightly integrated with the eCommerce capabilities of the platform and commerce engine and provides a means of connecting data from the call center and online channels.

REST services framework (v9.1)
In v9.1 we introduced the REST services framework and interface in the Platform that enabled customers to use ATG web services in other applications. This framework has become the basis for our subsequent omnichannel features and functionality.

Multisite Architecture (v10)
With the v10 release, we introduced the Multisite Architecture, which enabled customers to manage multiple sites (and channels) within a single instance of the BCC. Customers could create site- and channel-specific catalogs, promotions, targeters, and scenarios.

Endeca Page Builder (2.x) / Experience Manager (3.x)
With the introduction of Endeca for Mobile (now part of the core platform, available through the reference store – see below) on top of Page Builder (and then eventually Experience Manager), Endeca gave business users the tools to create and manage native and mobile web applications. And since the acquisition of both ATG (2011) and Endeca (2012), Oracle Commerce has leveraged the best of each leading technology’s capabilities for omnichannel commerce to continue to drive innovation for our customers.

Service enablement of core Oracle Commerce capabilities (v10.1.1, 10.2, & 11)
After the establishment of the REST services framework and interface, we followed up in subsequent releases with service enablement of core Oracle Commerce capabilities, used throughout the iOS native app, and with the enablement of the core Commerce Service Center features.
The result is that customers can leverage these services for their integrations with other systems, as well as for their omnichannel initiatives.

Mobile web reference application (v10.1)
In 10.1 we introduced the shopper-facing mobile reference application that showed how to use Oracle Commerce to deliver a mobile web experience for shoppers. This included the use of Experience Manager and cartridges to drive those experiences on select pages.

Native (iOS) reference application (v10.1.1)
We came out with the 10.1.1 shopper-facing native iOS reference app that illustrated how to use the Commerce REST services to deliver an iOS app. It also included Experience Manager-driven pages.

Assisted Selling reference application (v10.2.1)
The Assisted Selling reference application is our first reference application designed for the in-store associate. This iOS app shows customers how they can use Oracle Commerce data and information to provide a high-touch, consultative sales environment, as well as to put the endless aisle into the hands of their associates. Shoppers can start a cart online, and in-store associates can access that cart via the application to provide more information or add products and then transact using the ATG engine.

Support for Retail promotions (v11)
As part of the v11 release, we worked with teams in the Oracle Retail Global Business Unit (RGBU) to assess which promotion types and capabilities are supported across our products. Those products included Oracle Commerce, Oracle Point of Service (ORPOS), and Oracle Retail Price Management (RPM). The result is that customers can now more easily support omnichannel use cases between the store and digital.

Making sure Oracle Commerce can help support the omnichannel needs of our customers is core to our product strategy. With 89% of consumers now using two or more channels to make a single purchase, ensuring that cross-channel interactions are linked is critical to a great customer experience – and to sales. As Oracle Commerce evolves, we want to make it simple for organizations to create, deliver, and scale experiences across touchpoints with our "create once, deploy commerce anywhere" framework. We have a flexible, services-oriented architecture that allows data, content, catalogs, cart, experiences, personalization, and merchandising to be shared across touchpoints and easily extended into new environments like mobile, social, in-store, call center, and new websites. [For the latest downloads and Oracle Commerce documentation, please visit the Oracle Technology Network.]

------

Thank you to both Brian and Kristen for their contributions to this blog series and for their continued thought leadership for Oracle Commerce. We are all looking forward to the coming months and years of new shopping behaviors and opportunities to innovate. Because – if the digital fabric of our everyday lives continues to change at the same pace – the next five years (that's just under 2,000 days) will be dramatic.

----------

THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT

    Read the article

  • CodePlex Daily Summary for Tuesday, December 28, 2010

    CodePlex Daily Summary for Tuesday, December 28, 2010Popular ReleasesMonitorWang: MonitorWang v1.0.5 (Growler): What's new?Added Growl Notification Finalisers - these are interceptor components that work exclusively with the Growl Publisher. These allow you to modify the Growl Notification just prior to it being sent by the publisher. You can inject custom logic to precisely control how the Growl Notification will appear; this includes changing the Growl Priority level and message text. I've created to two Growl Notification Finalisers - one allows you to change the Growl Notification Priorty based on ...Catel - WPF and Silverlight MVVM library: 1.0.0: And there it is, the final release of Catel, and it is no longer a beta version!Multicore Task Framework: MTF 1.0.2: Release 1.0.2 of Multicore Task Framework.EnhSim: EnhSim 2.2.7 ALPHA: 2.2.7 ALPHAThis release supports WoW patch 4.03a at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Mongoose has bee...LINQ to Twitter: LINQ to Twitter Beta v2.0.19: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation. Bug fixes.Hammock for REST: Hammock v1.1.4: v1.1.4 ChangesOAuth fixes for post content handling, encoding, and url parameter doubling v1.1.3 ChangesAdded an event handler for use with retries Made improvements to OAuth for performance, plus additional fixes Added OAuth token refresh support Fixed memory leak in content streaming, and regression issue with async GET call response handling v1.1.2 ChangesAdded OAuth Echo native support and static helper methods on OAuthCredentials Fixes for multi-part stream writing and recovery...Rocket Framework (.Net 4.0): Rocket Framework for Windows V 1.0.0: Architecture is reviewed and adjusted in a way so that I can introduce the Web version and WPF version of this framework next. - Rocket.Core is introduced - Controller button functions revisited and updated - DB is renewed to suite the implemented features - Create New button functionality is changed - Add Question Handling featuresFxCop Integrator for Visual Studio 2010: FxCop Integrtor 1.1.0: New FeatureSearch violation information FxCop Integrator provides the violation search feature. You can find out specific violation information by simple search expression.Analyze with FxCop project file FxCop Integrator supports code analysis with FxCop project file. You can customize code analysis behavior (e.g. analyze specifid types only, use specific rules only, and so on). ImprovementImproved the code analysis result list to show more information (added Proejct and File column). Change...NoSimplerAccounting: NoSimplerAccounting 6.0: -Fixed a bug in expense category report.NHibernate Mapping Generator: NHibernate Mapping Generator 2.0: Added support for Postgres (Thanks to Angelo)NewLife XCode: XCode v6.5.2010.1223 ????(????v3.5??): XCode v6.5.2010.1223 ????,??: NewLife.Core ??? NewLife.Net ??? XControl ??? XTemplate ????,??C#?????? XAgent ???? NewLife.CommonEnitty ??????(???,XCode??????) XCode?? 
?????????,??????????????????,?????95% XCode v3.5.2009.0714 ??,?v3.5?v6.0???????????????,?????????。v3.5???????????,??????????????。 XCoder ??XTemplate?????????,????????XCode??? XCoder_Src ???????(????XTemplate????),??????????????????MiniTwitter: 1.64: MiniTwitter 1.64 ???? ?? 1.63 ??? URL ??????????????VivoSocial: VivoSocial 7.4.0: Please see changes: http://support.vivoware.com/project/ChangeLog.aspx?PROJID=48Umbraco CMS: Umbraco 4.6 Beta - codename JUNO: The Umbraco 4.6 beta (codename JUNO) release contains many new features focusing on an improved installation experience, a number of robust developer features, and contains more than 89 bug fixes since the 4.5.2 release. Improved installer experience Updated Starter Kits (Simple, Blog, Personal, Business) Beautiful, free, customizable skins included Skinning engine and Skin customization (see Skinning Documentation Kit) Default dashboards on install with hide option Updated Login t...SSH.NET Library: 2010.12.23: This release includes some bug fixes and few new fetures. Fixes Allow to retrieve big directory structures ssh-dss algorithm is fixed Populate sftp file attributes New Features Support for passhrase when private key is used Support added for diffie-hellman-group14-sha1,diffie-hellman-group-exchange-sha256 and diffie-hellman-group-exchange-sha1 key exchange algorithms Allow to provide multiple key files for authentication Add support for "keyboard-interactive" authentication method...ASP.NET MVC SiteMap provider: MvcSiteMapProvider 2.3.0: Using NuGet?MvcSiteMapProvider is also listed in the NuGet feed. Learn more... Like the project? Consider a donation!Donate via PayPal via PayPal. Release notesThis will be the last release targeting ASP.NET MVC 2 and .NET 3.5. MvcSiteMapProvider 3.0.0 will be targeting ASP.NET MVC 3 and .NET 4 Web.config setting skipAssemblyScanOn has been deprecated in favor of excludeAssembliesForScan and includeAssembliesForScan ISiteMapNodeUrlResolver is now completely responsible for generating th...Media Companion: Media Companion 3.400: Extract the entire archive to a folder which has user access rights, eg desktop, documents etc. A manual is included to get you startedWindows Media Player GNTP Plugin: WMP-GNTP v1.0.5: This is the installer for WMP-GNTP. Install it to get the plugin on your system.Google Geo Kit: Static Google Map WinForm Control Nightly Build: 12/22/2010 MD5sum - b8118c9970d6dc9480fe7c41f042537f add event OnGMapNotDefined. When anything went wrong in StaticGmap internally and return null stream, this event will be firedSilverlight Sockets Sample: No binaries: Whole source code in a ZIP. Shame 'Source Code' tab isn't working, so I'll just upload a ZIP.New ProjectsAsfMojo: AsfMojo is an open source .NET ASF parsing library, providing support for parsing WMA audio and WMV video files. It offers classes to create streams from packet data within a media file, gather file statistics and extract audio segments or frame accurate still frames.asp.net ajax file manager: Features : Full Ajax Support security View Image Create folder Delete file & folder Copy file & folder Move file & folder multiple selection ( Ctrl + Select ) Easy installation and configuration Open source. compatible IE7,IE8,FF,Safari,Chrome http://www.filemanager.3ntar.netBoxNet: BoxNet is a opensource library which will provide possibility for Windows Phone 7 developers to integrate with DropBox.C# Sqlite For WP7: C# Sqlite Port for Windows phone 7 and possibly Silverlight 3, 4. 
The core engine was slightly modified to be used with IsolatedStorage and SqliteClient were ported by using missing codes from Mono project in order to maximize usability and portability from desktop.Comparison of Managed Compression Algorithms: Visual Studio 2010 Console Test Harness with samples of MiniLZO, QuickLz, SharpZipLib, iRolz and other managed compression classes. Tests compression of a 2MB Word file and shows timings. Useful in deciding what type of compression to use. Developers can easily add their own.Cypher Bot 2011: Cypher Bot 2011 Brings the word cipher to a whole new level. You can now easily open, save, print, and send ciphers. Make a short message or completly encrypt a document. The cipher is impossible to figure out unless you have the keyword and algorithm to solve it! Try it out!Directed Graph for .NET: This project presents a simple directed graph implementation for the .NET framework using C# language.DynamicAccess: DynamicAccess is a library to aid connecting DLR languages such as ironpython and ironruby to non-dynamic languages like managed C++. It also fills in some gaps in the current C# support of dynamic objects, such as member access by string and deletion of members or indexes.Euro for Windows XP: A simple tool and sample to change Estonian currency from Estonian Krone (kr) to Euro (€). Applies to all versions of Windows and from .NET 2.0 which is default build. The sample creates a custom locale and updates existing users through Registry.Excel PowerShell Console: An Excel AddIn to enable using PowerShell for Excel automation.fantastic: fantasticGomarket Toolbar: GoMarket FireFox toolbarjs: jsLiving Agile's Common Framework: This project is a collection of commonly used routines here at Living Agile. There are a lot of helpful extension methods, logging, configuration and threading utilities. All other Living Agile projects have a dependency on this project.MapWindow 4: MapWindow4 is a free windows GIS application and uses an ActiveX control at it's core that can be embedded into many applications that don't support .Net such as excel, access, visual basic 6, or other pre-.Net languages.Mdelete API: Delete all files and directores in windows shell. Support long path (less then 32000 chars) and network path (eg. \\server\share or \\127.0.0.1\share)OP-Code SyntaxEditor: OP-Code SyntaxEditor is a Windows Forms Control, similar to a multi-line TextBox, which syntax highlights text and provides some features for code editing.passion: passionSharpHighlighter: SharpHighlighter is an extension for Visual Studio; a fairly simple code highlighter for C# I made it for anyone who wants to download and learn how to create is own parser or Syntax Highlighter. This sample will highlight only C# classes and structs but it's quite extensible.Test Center Locator: Test center locator makes it easier to find closest toefl or Ielts test center

    Read the article

  • CodePlex Daily Summary for Sunday, November 25, 2012

    CodePlex Daily Summary for Sunday, November 25, 2012Popular ReleasesMath.NET Numerics: Math.NET Numerics v2.3.0: Common: Continued major linear algebra storage rework, in this release focusing on vectors (previous release was on matrices) Static CreateRandom for all dense matrix and vector types Thin QR decomposition (in additin to existing full QR) Consistent static Sample methods for continuous and discrete distributions (was previously missing on a few) Portable build adds support for WP8 (.Net 4.0 and higher, SL5, WP8 and .NET for Windows Store apps) Various bug, performance and usability ...ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.1: +2012-11-25 v3.2.1 +????????。 -MenuCheckBox?CheckedChanged??????,??????????。 -???????window.IDS??????????????。 -?????(??TabCollection,ControlBaseCollection)???,????????????????。 +Grid??。 -??SelectAllRows??。 -??PageItems??,?????????????,?????、??、?????。 -????grid/gridpageitems.aspx、grid/gridpageitemsrowexpander.aspx、grid/gridpageitems_pagesize.aspx。 -???????????????????。 -??ExpandAllRowExpanders??,?????????????????(grid/gridrowexpanderexpandall2.aspx)。 -??????ExpandRowExpande...VidCoder: 1.4.9 Beta: Updated HandBrake core to SVN 5079. Fixed crashes when encoding DVDs with title gaps.ZXing.Net: ZXing.Net 0.10.0.0: On the way to a release 1.0 the API should be stable now with this version. sync with rev. 2521 of the java version windows phone 8 assemblies improvements and fixesCharmBar: Windows 8 Charm Bar for Windows 7: Windows 8 Charm Bar for Windows 7BlackJumboDog: Ver5.7.3: 2012.11.24 Ver5.7.3 (1)SMTP???????、?????????、??????????????????????? (2)?????????、?????????????????????????? (3)DNS???????CNAME????CNAME????????????????? (4)DNS????????????TTL???????? (5)???????????????????????、?????????????????? (6)???????????????????????????????TEncoder: 3.1: -Added: Turkish translation (Translators, please see "To translators.txt") -Added: Profiles are now stored in different files under "Profiles" folder -Added: User created Profiles will be saved in a differen directory -Added: Custom video and audio options to profiles -Added: Container options to profiles -Added: Parent folder of input file will be created in the output folder -Added: Option to use 32bit FFmpeg eventhough the OS is 64bit -Added: New skin "Mint" -Fixed: FFMpeg could not open A...Liberty: v3.4.3.0 Release 23rd November 2012: Change Log -Added -H4 A dialog which gives further instructions when attempting to open a "Halo 4 Data" file -H4 Added a short note to the weapon editor stating that dropping your weapons will cap their ammo -Reach Edit the world's gravity -Reach Fine invincibility controls in the object editor -Reach Edit object velocity -Reach Change the teams of AI bipeds and vehicles -Reach Enable/disable fall damage on the biped editor screen -Reach Make AIs deaf and/or blind in the objec...Umbraco CMS: Umbraco 4.11.0: NugetNuGet BlogRead the release blog post for 4.11.0. Whats new50 bugfixes (see the issue tracker for a complete list) Read the documentation for the MVC bits. Breaking changesGetPropertyValue now returns an object, not a string (only affects upgrades from 4.10.x to 4.11.0) NoteIf you need Courier use the release candidate (as of build 26). The code editor has been greatly improved, but is sometimes problematic in Internet Explorer 9 and lower. 
Previously it was just disabled for IE and...Audio Pitch & Shift: Audio Pitch And Shift 5.1.0.3: Fixed supported files list on open dialog (added .pls and .m3u) Impulse Media Player splash message (can be disabled anyway)WiX Toolset: WiX v3.7 RC: WiX v3.7 RC (3.7.1119.0) provides feature complete Bundle update and reference tracking plus several bug fixes. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/11/20/WiX-v3.7-Release-Candidate-availablePicturethrill: Version 2.11.20.0: Fixed up Bing image provider on Windows 8Excel AddIn to reset the last worksheet cell: XSFormatCleaner.xla: Modified the commandbar code to use CommandBar IDs instead of English names.Json.NET: Json.NET 4.5 Release 11: New feature - Added ITraceWriter, MemoryTraceWriter, DiagnosticsTraceWriter New feature - Added StringEscapeHandling with options to escape HTML and non-ASCII characters New feature - Added non-generic JToken.ToObject methods New feature - Deserialize ISet<T> properties as HashSet<T> New feature - Added implicit conversions for Uri, TimeSpan, Guid New feature - Missing byte, char, Guid, TimeSpan and Uri explicit conversion operators added to JToken New feature - Special case...HigLabo: HigLabo_20121119: HigLabo_2012111 --HigLabo.Mail-- Modify bug fix of ExecuteAppend method. Add ExecuteXList method to ImapClient class. --HigLabo.Net.WindowsLive-- Add AsyncCall to WindowsLiveClient class.SharePoint CAML Extensions: Version 1.1: Beta version! <Membership>, <Today/>, <Now/> and <UserID /> tags are not supported!mojoPortal: 2.3.9.4: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2394-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you to use .NET 4, we will probably drop support for .NET 3.5 once .NET 4.5 is available The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...Keleyi: keleyi-1.1.0: ????: .NET Framework 4.0 ??:MD5??,?????IP??(??????),????? www.keleyi.com keleyi.codeplex.comHoliday Calendar: Calendar Control: Month navigation introduced.NDateTime: Version 1.0: This is the first releaseNew Projects.NET vFaceWall: .NET vFaceWall is autility script written in asp.net which allows developer to build next generation facebook wall style user profiles for asp.net websites.29th Infantry Division Engineer Corps: Code repository and project management hub for the 29th Infantry Division's Engineer Corps.A.M. Lost: A.M. Lost is game project developed during the GameDevParty Jam 3 event (23-24-25 november 2012).ABAP BLOG MIKE: ABAP blog AgileNETSlayer: Agile.Net Deobfuscator, supports all obfuscation methods.BERP Games: This project site contains XNA game development concepts and software produced by BERP Games.CFileUpload: This project will let you upload files with progress bar, without using Flash, HTML5 or similar technologies that the user's browser might not have. CharmBar: Windows 8 Charm Bar is a application that works just like the Windows 8 CharmBar. The word charmbar, the Windows 8 Logo are trademarks of Microsoft Corporation.Confidentialité des données: Développer une application permettant de faciliter la mise en œuvre, l’utilisation et la gestion de ces outils, en exploitant les API .NET.End of control: Project has not been started yet. 
Just absorbing attentions.Enhanced XML Search: Easy and fastest methods to manipulate XML Data.Exchange Automatic Replies Administrator: Exchange Automatic Replies Administrator is a PowerShell GUI tool for Helpdesk and Sysadmins to set a users' Out-Of-Office without using the command line.File Explorer for WPF: FileExplorer is a WPF control that's emulate the Windows Explorer, it supports both FIleSystem and non-FileSystem class.ForcePlot: Using brute force, plots any difficult equation even some popular software cannot plot. Currently supports 2D graphs, trigonometric and logarithmic functions.Game Dev: Currently in ideas phaseGroupEmulationEMK11: Discussion and sharing of member data Group Emulation Engineering Mechanical 2011Jenkins CI: Views, Jobs, Build status and color. mvcMusicOnLine: mvcMusicOnLinemy own site project no 1: still testing...Nauplius.KeyStore: Provides secure application key storage backed by SQL 2008 and Active Directory.Noctl Library: Noctl is a C# multiplatform Library.open Archive Mediator: High-performance middle-ware for process historiansOpen Source Game - Prince's Revenge: Open Source Game.PackToKindle: Project allow to send folder content to your kindle. Functionality is the same as the official SendToKindle application, but this project is designed to use froPunkPong: PunkPong is an open source "Pong" alike game totally written in DHTML (JavaScript, CSS and HTML) that uses keyboard or mouse. This cross-platform and cross-browser game was tested under BeOS, Linux, NetBSD, OpenBSD, FreeBSD, Windows and others.Quick Data Processor: A simple and quick data processor for csv files.SFProject: It'll be a game at some pointsilverlight ? flash? policy ??: silverlight? flash? policy socket server? ??Universe.WCF.Behaviors: Universe.WCF.Behaviors provides behaviors: - Easy migration from Remoting - Transparent delivery - Traffic Statistics - WCF Streaming Adaptor for BinaryWriter and TextWriter - etc WowDotNetAPI for Silverlight: Silverlight class library implementation of Briam Ramos World Of Warcraft WowDotNetAPI .NET library on GitHub ( https://github.com/briandek/WowDotNetAPI ) C# .Net library to access the new World of Warcraft Community Platform API.WP7NUMConvert: A really simple project to create a small library for number conversions in multiple formats. Written in VB for Windows PhoneWPF CodeEditor: The Dev Tools project is intended to contain a set of features, tools & controls useful for development purposes.

    Read the article

  • CodePlex Daily Summary for Friday, November 30, 2012

    CodePlex Daily Summary for Friday, November 30, 2012Popular ReleasesTFS Branch Permission Removal Event Subscriber: Release 1.0: first release of the Branch Security Inherit Only libraryMagelia WebStore Open-source Ecommerce software: Magelia WebStore 2.2: new UI for the Administration console Bugs fixes and improvement version 2.2.215.3JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: What's new in JayData 1.2.5For detailed release notes check the release notes. Handlebars template engine supportImplement data manager applications with JayData using Handlebars.js for templating. Include JayDataModules/handlebars.js and begin typing the mustaches :) Blogpost: Handlebars templates in JayData Handlebars helpers and model driven commanding in JayData Easy JayStorm cloud data managementManage cloud data using the same syntax and data management concept just like any other data ...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.70: Highlight features & improvements: • Performance optimization. • Search engine optimization. ID-less URLs for products, categories, and manufacturers. • Added ACL support (access control list) on products and categories. • Minify and bundle JavaScript files. • Allow a store owner to decide which billing/shipping address fields are enabled/disabled/required (like it's already done for the registration page). • Moved to MVC 4 (.NET 4.5 is required). • Now Visual Studio 2012 is required to work ...SQL Server Partition Management: Partition Management Release 3.0: Release 3.0 adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of: • A Readme file • The Executable • The source code (Visual Studio project) Enhancements include: -- Support for Columnstore indexes in SQL Server 2012 -- Ability to create TSQL scripts for staging table and index creation operations -- Full support for global date and time formats, locale independent -- Support for binary partitioning column types -- Fixes to is...NHook - A debugger API: NHook 1.0: x86 debugger Resolve symbol from MS Public server Resolve RVA from executable's image Add breakpoints Assemble / Disassemble target process assembly More information here, you can also check unit tests that are real sample code.PDF Library: PDFLib v2.0: Release notes This new version include many bug fixes and include support for stream objects and cross-reference object streams. New FeatureExtract images from the PDFCommand Line Parser Library: 1.9.3.23 beta: Fixes an issue notified by github user sbambrick about parsing negative numbers.MCEBuddy 2.x: MCEBuddy 2.3.10: Critical Update to 2.3.9: Changelog for 2.3.10 (32bit and 64bit) 1. AsfBin executable missing from build 2. Removed extra references from build to avoid conflict 3. Showanalyzer installation now checked on remote engine machine Changelog for 2.3.9 (32bit and 64bit) 1. Added support for WTV output profile 2. Added support for minimizing MCEBuddy to the system tray 3. Added support for custom archive folder 4. Added support to disable subdirectory monitoring 5. 
Added support for better TS fil...DotNetNuke® Community Edition CMS: 07.00.00: Major Highlights Fixed issue that caused profiles of deleted users to be available Removed the postback after checkboxes are selected in Page Settings > Taxonomy Implemented the functionality required to edit security role names and social group names Fixed JavaScript error when using a ";" semicolon as a profile property Fixed issue when using DateTime properties in profiles Fixed viewstate error when using Facebook authentication in conjunction with "require valid profile fo...CODE Framework: 4.0.21128.0: See change notes in the documentation section for details on what's new.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.76: Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.Kooboo CMS: Kooboo CMS 3.3.0: New features: Dropdown/Radio/Checkbox Lists no longer references the userkey. Instead they refer to the UUID field for input value. You can now delete, export, import content from database in the site settings. Labels can now be imported and exported. You can now set the required password strength and maximum number of incorrect login attempts. Child sites can inherit plugins from its parent sites. The view parameter can be changed through the page_context.current value. Addition of c...Team Foundation Server Administration Tool: 2.2: TFS Administration Tool 2.2 supports the Team Foundation Server 2012 Object Model. Visual Studio 2012 or Team Explorer 2012 must be installed before you can install this tool. You can download and install Team Explorer 2012 from http://aka.ms/TeamExplorer2012. There are no functional changes between the previous release (2.1) and this release.Coding Guidelines for C# 3.0, C# 4.0 and C# 5.0: Coding Guidelines for CSharp 3.0, 4.0 and 5.0: See Change History for a detailed list of modifications.Math.NET Numerics: Math.NET Numerics v2.3.0: Portable Library Build: Adds support for WP8 (.Net 4.0 and higher, SL5, WP8 and .NET for Windows Store apps) New: portable build also for F# extensions (.Net 4.5, SL5 and .NET for Windows Store apps) NuGet: portable builds are now included in the main packages, no more need for special portable packages Linear Algebra: Continued major storage rework, in this release focusing on vectors (previous release was on matrices) Thin QR decomposition (in addition to existing full QR) Static Cr...ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.1: +2012-11-25 v3.2.1 +????????。 -MenuCheckBox?CheckedChanged??????,??????????。 -???????window.IDS??????????????。 -?????(??TabCollection,ControlBaseCollection)???,????????????????。 +Grid??。 -??SelectAllRows??。 -??PageItems??,?????????????,?????、??、?????。 -????grid/gridpageitems.aspx、grid/gridpageitemsrowexpander.aspx、grid/gridpageitems_pagesize.aspx。 -???????????????????。 -??ExpandAllRowExpanders??,?????????????????(grid/gridrowexpanderexpandall2.aspx)。 -??????ExpandRowExpande...VidCoder: 1.4.9 Beta: Updated HandBrake core to SVN 5079. Fixed crashes when encoding DVDs with title gaps.ZXing.Net: ZXing.Net 0.10.0.0: On the way to a release 1.0 the API should be stable now with this version. sync with rev. 2521 of the java version windows phone 8 assemblies improvements and fixesBlackJumboDog: Ver5.7.3: 2012.11.24 Ver5.7.3 (1)SMTP???????、?????????、??????????????????????? (2)?????????、?????????????????????????? (3)DNS???????CNAME????CNAME????????????????? (4)DNS????????????TTL???????? 
(5)???????????????????????、?????????????????? (6)???????????????????????????????New ProjectsAlpha Solutions Software Engineering Group Project: A Software Engineering Group Project from the University of Northampton.Arduino_Color_Tracker: CMUCAM arduino code for tracking the amount a pixels of the color being tracked.CAudioEndpointVolume: CAudioEndpointVolume for 32 bit and 64 bit Microsoft Office VBACollaborationItem: ?????????????codeplex??,???????????.Commerce Server Contrib Code Generation: A dll and set of T4 templates that help you generate code for interacting with Commerce Server.Commerce Server Contrib Site Templates: A set of site templates and libraries to help you get started with developing sites for Commerce Server.Creation Kit - Script Editor: CKSE is a script editor for Skyrim Papyrus scripts at the moment. Extending to other games is plannned.CSR Fiddle: CSR Fiddle is an App for SharePoint that allows you to "fiddle" with your list and form templates right from the browser.DNN RTL: RTL (Hebrew, Farsi, Arabic etc.) CSS files for Dotnetnuke. CSS files for right to left DNN sites. Dynamic Query: Uses expression tree to dynamically generate Entity Framework query. Also contains tool set for easy integration with asp.net mvc websites.FinalFrontier Autopilot: Autopilot for FinalFrontier MudHorror Encode: ¿pIEnSaS QUe eScRIbiR así eS SoLo PARA ReTrASadOS? Piénsalo dos veces, puede ser que haya un mensaje oculto y tú sólo estés suponiendo demasiado.Interop 2: Microformats for Azure Cloud with OData-InterfaceIT Security Feed Reader: Este es el proyecto de ISec PeruIVO 12_13 A5 Programmeren1 Lessen: Lessen voor de module A5 Programmeren 1 IVO Brugge William SchokkeléLingo: Lingo is a word game developed for Windows Phone 8. It's some sort of word version of Mastermind where you have to guess words in the least amount of guesses.one day one demo: Demos while learning, developing .NET programming skills, e.g. C#, Winform, WPF, WCF, ASP.NET, etc.PDF odd even merger: Merge odd pages with even pages, Useful when you scan a lot of pages from both sides and want to use a feeder ....Print list view button on SharePoint 2010 Ribbon: SP feature with new functionality where you can add "Print Button" on each type of SharePoint lists, even if it is SharePoint Calendar list,Document librariessevengen : 7 segment code calculator and generator: this software helps electrical engineers to calculate codes used in microcontrollers firmwares. SMBC Feebback Module: Bespoke feedback moduleSQL Server Compact Merge Replication Library: This library simplifies the code to do Merge Replication from a SQL Server Compact 3.5 SP2 client, with useful helper methods.Subnetwork Toolkit: The Subnetwork Toolkit is a set of tools to analyze biological subnetworks.uTreeFormat: Umbraco Tree Formatting You can format every documenttype you want by using the alias in the config. Currently only the nodetype 'content' is supported.

    Read the article

  • Software development is (mostly) a trade, and what to do about it

    - by Jeff
    (This is another cross-post from my personal blog. I don’t even remember when I first started to write it, but I feel like my opinion is well enough baked to share.)

I've been sitting on this for a long time, particularly as my opinion has changed dramatically over the last few years. That I've encountered more crappy code than maintainable, quality code in my career as a software developer only reinforces what I'm about to say. Software development is just a trade for most, and not a huge academic endeavor. For those of you with computer science degrees readying your pitchforks and collecting your algorithm interview questions, let me explain. This is not an assault on your way of life, and if you've been around, you know I'm right about the quality problem. You also know the HR problem is very real, or we wouldn't be paying top dollar for mediocre developers and importing people from all over the world to fill the jobs we can't fill. I'm going to try to outline what I see as some of the problems, and hopefully offer my views on how to address them.

The recruiting problem

I think a lot of companies are doing it wrong. Over the years, I've had two kinds of interview experiences. The first, and right, kind of experience involves talking about real-life achievements, followed by some variation on whiteboarding in pseudo-code, drafting some basic system architecture, or even sitting down at a comprooder and pecking out some basic code to tackle a real problem. I can honestly say that I've had a job offer for every interview like this, save for one, because the task was to debug something and they didn't like me asking where to look ("everyone else in the company died in a plane crash").

The other interview experience, the wrong one, involves the classic torture test designed to make the candidate feel stupid and do things they have never done, and never will do, in their job. First they will question you about obscure academic material you've never seen, or don't care to remember. Then they'll ask you to whiteboard some ridiculous algorithm involving prime numbers or some kind of string manipulation no one would ever do. In fact, if you had to do something like this, you'd Google for a solution instead of wasting time on a solved problem. Some will tell you that the academic gauntlet interview is useful to see how people respond to pressure, how they engage in complex logic, etc. That might be true, unless of course you have someone who brushed up on the solutions to the silly puzzles, and they're playing you.

But here's the real reason why the second experience is wrong: you're evaluating for things that aren't the job. These might have been useful tactics when you had to hire people to write machine language or C++, but in a world dominated by managed code in C# or Java, people aren't managing memory or trying to be smarter than the compilers. They're using well-known design patterns and techniques to deliver software. More to the point, these puzzle gauntlets don't evaluate things that really matter. They don't get into code design, issues of loose coupling and testability, knowledge of the basics around HTTP, or anything else that relates to building supportable and maintainable software. The first situation, involving real-life problems, gives you an immediate idea of how the candidate will work out. One of my favorite experiences as an interviewee was with a guy who literally brought his work from that day and asked me how to deal with his problem.
I had to demonstrate how I would design a class, make sure the unit testing coverage was solid, etc. I worked at that company for two years. So stop looking for algorithm puzzle crunchers, because a guy who can crush a Fibonacci sequence might also be a guy who writes a class with 5,000 lines of untestable code. Fashion your interview process on ways to reveal a developer who can write supportable and maintainable code. I would even go so far as to let them use the Google. If they want to cut-and-paste code, pass on them, but if they're looking for context or straight class references, hire them, because they're going to be lifelong learners.

The contractor problem

I doubt anyone has ever worked in a place where contractors weren't used. The use of contractors seems like an obvious way to control costs. You can hire someone for just as long as you need them and then let them go. You can even give them the work that no one else wants to do. In practice, most places I've worked have retained and budgeted for the contractor year-round, meaning that the $90+ per hour they're paying (of which half goes to the person) would have been better spent on a full-time person with a $100k salary and benefits. But it's not even the cost that is an issue. It's the quality of work delivered. The accountability of a contractor is totally transient. They only need to deliver for as long as you keep them around, and chances are they'll never again touch the code. There's no incentive for them to get things right, and little incentive to understand your system or learn anything. At the risk of making an unfair generalization, craftsmanship doesn't matter to most contractors.

The education problem

I don't know what they teach in college CS courses. I've believed for most of my adult life that a college degree was an essential part of being successful. Of course I would hold that bias, since I did it, and have the paper to show for it in a box somewhere in the basement. My first clue that maybe this wasn't a fully qualified opinion comes from the fact that I double-majored in journalism and radio/TV, not computer science. Eventually I worked with people who skipped college entirely, many of them at Microsoft. Then I worked with people who had a master's degree who sucked at writing code, next to the high-school-diploma types that rock it every day. I still think there's a lot to be said for the social development of someone who has the on-campus experience, but for software developers, college might not matter. As I mentioned before, most of us are not writing compilers, and we never will. It's actually surprising to find how many people are self-taught in the art of software development, and that should reveal some interesting truths about how we learn.

The first truth is that we learn largely out of necessity. There's something that we want to achieve, so we do what I call just-in-time learning to meet those goals. We acquire knowledge when we need it. So what about the gaps in our knowledge? That's where the most valuable education occurs, via our mentors. They're the people we work next to and the people who write blogs. They are critical to our professional development. They don't need to be an encyclopedia of jargon, but they understand the craft. Even at this stage of my career, I probably can't tell you what SOLID stands for, but you can bet that I practice the principles behind that acronym every day. That comes from experience, augmented by my peers.
I'm hell-bent on passing that experience to others.

Process issues

If you're a manager type and don't do much in the way of writing code these days (shame on you for not messing around at least), then your job is to isolate your tradespeople from nonsense, while bringing your business into the realm of modern software development. That doesn't mean you slap up a whiteboard with sticky notes and start calling yourself agile; it means getting all of your stakeholders to understand that frequent delivery of quality software is the best way to deal with change and evolving expectations. It also means that you have to play technical overlord to make sure the education and quality issues are dealt with. That's why I make the crack about sticky notes, because without the right technique being practiced among your code monkeys, you're just a guy with sticky notes. You're asking your business to accept frequent and iterative delivery; now make sure that the folks writing the code can handle the same thing. This means unit testing, the right instrumentation, integration tests, automated builds and deployments... all of the stuff that makes it easy to see when change breaks stuff.

The prognosis

I strongly believe that education is the most important part of what we do. I'm encouraged by things like The Starter League, and it's the kind of thing I'd love to see more of. I would go as far as to say I'd love to start something like this internally at an existing company. Most of all, though, I can't emphasize enough how important it is that we mentor each other and share our knowledge. If you have people on your staff who don't want to learn, fire them. Seriously, get rid of them. A few months working with someone really good, who understands the craftsmanship required to build supportable and maintainable code, will change that person forever and increase their value immeasurably.

    Read the article

  • Passing Strings by Ref

    - by SGWellens
    Humbled yet again…DOH! No matter how much experience you acquire, no matter how smart you may be, no matter how hard you study, it is impossible to keep fully up to date on all the nuances of the technology we are exposed to. There will always be gaps in our knowledge: little 'dead zones' of uncertainty. For me, this time, it was about passing string parameters to functions. I thought I knew this stuff cold. First, a little review...

Value Types and Ref

Integers and structs are value types (as opposed to reference types). When declared locally, their memory storage is on the stack, not on the heap. When passed to a function, the function gets a copy of the data and works on the copy. If a function needs to change a value type, you need to use the ref keyword. Here's an example:

    // ---- Declaration -----------------

    public struct MyStruct
    {
        public string StrTag;
    }

    // ---- Functions -----------------------

    void SetMyStruct(MyStruct myStruct)      // pass by value
    {
        myStruct.StrTag = "BBB";
    }

    void SetMyStruct(ref MyStruct myStruct)  // pass by ref
    {
        myStruct.StrTag = "CCC";
    }

    // ---- Usage -----------------------

    protected void Button1_Click(object sender, EventArgs e)
    {
        MyStruct Data;
        Data.StrTag = "AAA";

        SetMyStruct(Data);
        // Data.StrTag is still "AAA"

        SetMyStruct(ref Data);
        // Data.StrTag is now "CCC"
    }

No surprises here. All value types like ints, floats, datetimes, enums, structs, etc. work the same way. And now on to...

Class Types and Ref

    // ---- Declaration -----------------------------

    public class MyClass
    {
        public string StrTag;
    }

    // ---- Functions ----------------------------

    void SetMyClass(MyClass myClass)      // pass by 'value'
    {
        myClass.StrTag = "BBB";
    }

    void SetMyClass(ref MyClass myClass)  // pass by ref
    {
        myClass.StrTag = "CCC";
    }

    // ---- Usage ---------------------------------------

    protected void Button2_Click(object sender, EventArgs e)
    {
        MyClass Data = new MyClass();
        Data.StrTag = "AAA";

        SetMyClass(Data);
        // Data.StrTag is now "BBB"

        SetMyClass(ref Data);
        // Data.StrTag is now "CCC"
    }

No surprises here either. Since classes are reference types, you do not need the ref keyword to modify an object. What may seem a little strange is that with or without the ref keyword, the results are the same: the compiler knows what to do. So, why would you need to use the ref keyword when passing an object to a function? Because then you can change the reference itself, i.e., you can make it refer to a completely different object. Inside the function you can do myClass = new MyClass(), and the old object will be garbage collected and the new object will be returned to the caller. That ends the review. Now let's look at passing strings as parameters.

The String Type and Ref

Strings are reference types. So when you pass a String to a function, you do not need the ref keyword to change the string. Right? Wrong. Wrong, wrong, wrong. When I saw this, I was so surprised that I fell out of my chair. Getting up, I bumped my head on my desk (which really hurt). My bumping the desk caused a large speaker to fall off of a bookshelf and land squarely on my big toe. I was screaming in pain and hopping on one foot when I lost my balance and fell. I struck my head on the side of the desk (once again) and knocked myself out cold.
When I woke up, I was in the hospital, where due to a database error (thanks, Oracle) the doctors had put casts on both my hands. I'm typing this ever so slowly with just my ton..tong ..tongu…tongue. But I digress. Okay, the only true part of that story is that I was a bit surprised. Here is what happens when passing a String to a function:

    // ---- Functions ----------------------------

    void SetMyString(String myString)      // pass by 'value'
    {
        myString = "BBB";
    }

    void SetMyString(ref String myString)  // pass by ref
    {
        myString = "CCC";
    }

    // ---- Usage ---------------------------------

    protected void Button3_Click(object sender, EventArgs e)
    {
        String MyString = "AAA";

        SetMyString(MyString);
        // MyString is still "AAA"  What!!!!

        SetMyString(ref MyString);
        // MyString is now "CCC"
    }

What the heck. We should not have to use the ref keyword when passing a String, because Strings are reference types. Why didn't the string change? What is going on? I spent hours unsuccessfully researching this anomaly until finally, I had a Eureka moment.

This code:

    String MyString = "AAA";

is semantically equivalent to this code (note this code doesn't actually compile):

    String MyString = new String();
    MyString = "AAA";

Key Point: In the function, the copy of the reference is pointed to a new object, and it is THAT object that gets changed. The original reference, and the object it points to, are unchanged. You can simulate this behavior by modifying the class example code to look like this:

    void SetMyClass(MyClass myClass)  // call by 'value'
    {
        //myClass.StrTag = "BBB";
        myClass = new MyClass();
        myClass.StrTag = "BBB";
    }

Now when you call the SetMyClass function without using ref, the parameter is unchanged...just like the string example. I hope someone finds this useful.

Steve Wellens
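
Building on that key point, here is a minimal, hedged sketch (the AppendTag helpers are hypothetical names, not from the post) of the usual ways to actually get a changed string back to a caller. It leans on the fact that .NET strings are immutable, so "changing" one always means producing a new string object and deciding how the caller receives it.

    using System;
    using System.Text;

    class StringParamSketch
    {
        // Option 1: return the new string (usually the cleanest choice).
        static string AppendTag(string input)
        {
            return input + "-tagged";
        }

        // Option 2: use ref when the caller's variable itself must be re-pointed
        //           at the new string.
        static void AppendTag(ref string input)
        {
            input = input + "-tagged";
        }

        // Option 3: pass a StringBuilder, a mutable object, when text is built
        //           up in place; no ref keyword is needed.
        static void AppendTagTo(StringBuilder buffer)
        {
            buffer.Append("-tagged");
        }

        static void Main()
        {
            string a = "AAA";
            string b = AppendTag(a);            // a is still "AAA", b is "AAA-tagged"

            string c = "AAA";
            AppendTag(ref c);                   // c is now "AAA-tagged"

            StringBuilder sb = new StringBuilder("AAA");
            AppendTagTo(sb);                    // sb.ToString() is now "AAA-tagged"

            Console.WriteLine(a + " " + b + " " + c + " " + sb);
        }
    }

In most code, returning the new value reads best; ref and StringBuilder earn their keep only when an API genuinely has to update the caller's variable or build text incrementally.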

    Read the article

  • CodePlex Daily Summary for Thursday, November 29, 2012

    CodePlex Daily Summary for Thursday, November 29, 2012

    Popular Releases

    - JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2.5: What's new in JayData 1.2.5: for detailed release notes check the release notes. Handlebars template engine support: implement data manager applications with JayData using Handlebars.js for templating; include JayDataModules/handlebars.js and begin typing the mustaches :) Blog posts: Handlebars templates in JayData; Handlebars helpers and model driven commanding in JayData. Easy JayStorm cloud data management: manage cloud data using the same syntax and data management concept just like any other data ...
    - nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 2.70: Highlight features & improvements: performance optimization; search engine optimization (ID-less URLs for products, categories, and manufacturers); added ACL support (access control list) on products and categories; minify and bundle JavaScript files; allow a store owner to decide which billing/shipping address fields are enabled/disabled/required (like it's already done for the registration page); moved to MVC 4 (.NET 4.5 is required); now Visual Studio 2012 is required to work ...
    - SQL Server Partition Management: Partition Management Release 3.0: Release 3.0 adds support for SQL Server 2012 and is backward compatible with SQL Server 2008 and 2005. The release consists of a Readme file, the executable, and the source code (Visual Studio project). Enhancements include: support for columnstore indexes in SQL Server 2012; ability to create TSQL scripts for staging table and index creation operations; full support for global date and time formats, locale independent; support for binary partitioning column types; fixes to is...
    - PDF Library: PDFLib v2.0: Release notes: this new version includes many bug fixes and adds support for stream objects and cross-reference object streams. New feature: extract images from the PDF.
    - MCEBuddy 2.x: MCEBuddy 2.3.10: Critical update to 2.3.9. Changelog for 2.3.10 (32bit and 64bit): 1. AsfBin executable missing from build; 2. removed extra references from build to avoid conflict; 3. Showanalyzer installation now checked on remote engine machine. Changelog for 2.3.9 (32bit and 64bit): 1. added support for WTV output profile; 2. added support for minimizing MCEBuddy to the system tray; 3. added support for custom archive folder; 4. added support to disable subdirectory monitoring; 5. added support for better TS fil...
    - DotNetNuke® Community Edition CMS: 07.00.00: Major highlights: fixed issue that caused profiles of deleted users to be available; removed the postback after checkboxes are selected in Page Settings > Taxonomy; implemented the functionality required to edit security role names and social group names; fixed JavaScript error when using a ";" semicolon as a profile property; fixed issue when using DateTime properties in profiles; fixed viewstate error when using Facebook authentication in conjunction with "require valid profile fo...
    - CODE Framework: 4.0.21128.0: See change notes in the documentation section for details on what's new.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.76: Fixed a typo in ObjectLiteralProperty.IsConstant that caused all object literals to be treated like they were constants, and possibly moved around in the code when they shouldn't be.
    - Kooboo CMS: Kooboo CMS 3.3.0: New features: Dropdown/Radio/Checkbox lists no longer reference the userkey; instead they refer to the UUID field for input value. You can now delete, export, and import content from the database in the site settings. Labels can now be imported and exported. You can now set the required password strength and maximum number of incorrect login attempts. Child sites can inherit plugins from their parent sites. The view parameter can be changed through the page_context.current value. Addition of c...
    - Distributed Publish/Subscribe (Pub/Sub) Event System: Distributed Pub Sub Event System Version 3.0: Important: Wsp 3.0 is NOT backward compatible with Wsp 2.1. Prerequisites: you need to install the Microsoft Visual C++ 2010 Redistributable Package (x64: http://www.microsoft.com/download/en/details.aspx?id=14632, x86: http://www.microsoft.com/download/en/details.aspx?id=5555). Wsp now uses Rx (Reactive Extensions) and .NET 4.0. 3.0 enhancements: I changed the topology from a hierarchy to peer-to-peer groups. This should provide much greater scalability and more fault-resi...
    - datajs - JavaScript Library for data-centric web applications: datajs version 1.1.0: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads; single store abstraction that provides a common API on top of HTML5 local storage technologies; data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...
    - Team Foundation Server Administration Tool: 2.2: TFS Administration Tool 2.2 supports the Team Foundation Server 2012 Object Model. Visual Studio 2012 or Team Explorer 2012 must be installed before you can install this tool. You can download and install Team Explorer 2012 from http://aka.ms/TeamExplorer2012. There are no functional changes between the previous release (2.1) and this release.
    - Coding Guidelines for C# 3.0, C# 4.0 and C# 5.0: Coding Guidelines for CSharp 3.0, 4.0 and 5.0: See Change History for a detailed list of modifications.
    - Math.NET Numerics: Math.NET Numerics v2.3.0: Portable Library Build: adds support for WP8 (.NET 4.0 and higher, SL5, WP8 and .NET for Windows Store apps). New: portable build also for F# extensions (.NET 4.5, SL5 and .NET for Windows Store apps). NuGet: portable builds are now included in the main packages, no more need for special portable packages. Linear Algebra: continued major storage rework, in this release focusing on vectors (previous release was on matrices); thin QR decomposition (in addition to existing full QR); static Cr...
    - ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.1: 2012-11-25 v3.2.1. The Chinese release notes are garbled in this excerpt; the legible fragments mention MenuCheckBox CheckedChanged, window.IDS, TabCollection/ControlBaseCollection, Grid changes (SelectAllRows, PageItems, ExpandAllRowExpanders) and the sample pages grid/gridpageitems.aspx, grid/gridpageitemsrowexpander.aspx, grid/gridpageitems_pagesize.aspx and grid/gridrowexpanderexpandall2.aspx.
    - VidCoder: 1.4.9 Beta: Updated HandBrake core to SVN 5079. Fixed crashes when encoding DVDs with title gaps.
    - ZXing.Net: ZXing.Net 0.10.0.0: On the way to a release 1.0 the API should be stable now with this version: sync with rev. 2521 of the java version; windows phone 8 assemblies; improvements and fixes.
    - BlackJumboDog: Ver5.7.3: 2012.11.24 Ver5.7.3. The Japanese release notes (items 1-6) are garbled in this excerpt; the legible fragments reference SMTP handling and DNS (CNAME resolution and TTL).
    - Liberty: v3.4.3.0 Release 23rd November 2012: Change log - added: (H4) a dialog which gives further instructions when attempting to open a "Halo 4 Data" file; (H4) a short note in the weapon editor stating that dropping your weapons will cap their ammo; (Reach) edit the world's gravity; (Reach) fine invincibility controls in the object editor; (Reach) edit object velocity; (Reach) change the teams of AI bipeds and vehicles; (Reach) enable/disable fall damage on the biped editor screen; (Reach) make AIs deaf and/or blind in the objec...
    - Umbraco CMS: Umbraco 4.11.1: NuGet: see the NuGet blog. Read the release blog posts for 4.11.0 and 4.11.1. What's new: 50 bugfixes (see the issue tracker for a complete list); read the documentation for the MVC bits. Breaking changes: GetPropertyValue now returns an object, not a string (only affects upgrades from 4.10.x to 4.11.0). Note: if you need Courier, use the release candidate (as of build 26). The code editor has been greatly improved, but is sometimes problematic in Internet Explorer 9 and lower. Pr...

    New Projects

    - Convection Game API: A basic game API written in C# using XNA 4.0.
    - CS^2 (Casaba's Simple Code Scanner): Casaba's simple code scanner is a tool for managing greps to be run over a source tree and correlating those greps to issues for bug generation.
    - CSparse.NET: A concise sparse matrix package for .NET.
    - Document Generation Utility: Extracts different XML entities and wraps them up with jumbled legal English alphabets, with output as per the file format defined in settings.
    - DuinoExplorer: File manager, http, remote, copy & paste.
    - GenomeOS: An experimental x86 object-oriented operating system programmed in C/C++ and x86 Intel assembly.
    - Hush.Project: Database.
    - Invi: NA.
    - LibreTimeTracker Client: Free alternative to the original TimeTracker client.
    - MO Virtual Router: Virtual router for Windows 8.
    - My Word Game Publisher: A full web application developed in .NET 2.0 using C#, XML, MS SQL, and ASPX.
    - MyWGP XmlDataValidator: Designed and developed (in Silverlight 2, C#) to validate the requirements of data in XML files: the type and size of data, and the requirement status. It allows a user to choose the type of data, enter min and max size, and check the requirement status for data elements.
    - OMX_AL_test_environment: OMX application layer simulation/testing environment.
    - Orchard Coverflow: An Orchard CMS module that provides a way to create iTunes-like coverflow displays out of your Media items.
    - SharePoint standart list form javascript utility (SPListFormUtility): SPListFormUtility is a small JavaScript library that helps control the appearance and behavior of standard SharePoint list forms. SharePoint 2010 and 2013 support.
    - Shuttle Core: A project that contains cross-cutting libraries for use in .NET software development.
    - The Prism architecture lack for the Interactions!: The Prism architecture lack for the Interactions! (Silverlight). The defect opens a popup more than once and creates memory leaks. Example of the lack here.
    - YouCast: YouCast (from YouTube and Podcast) allows you to subscribe to video feeds on YouTube* as podcasts in any standard podcatcher like Zune PC, iTunes and so forth.
    - ZEFIT: Time-tracking tool, created as a fourth-year IT apprenticeship project at GIBB in Bern.
    - ????: A Windows Phone project; the name and Chinese description are garbled in this excerpt.

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC).

        var wins = xs
            .HoppingWindow(TimeSpan.FromSeconds(5),
                           TimeSpan.FromSeconds(2),
                           new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc));

    Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up?

        var query = wins.Select(win => win.Count());

    A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.)

    That was the good news. Now the bad news. Events also have duration. Consider the following simple input:

        var xs = this.Application
            .DefineEnumerable(() => new[]
                { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })
            .ToStreamable(AdvanceTimeSettings.IncreasingStartTime);

    Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming.

    The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2, so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher.

    Figure 1: Timeline

    The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function, which rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n, will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks:

        static long SnapToBoundary(long t, long a, long p)
        {
            return t - ((t - a) % p) - (t > a ? 0L : p);
        }
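    To get a feel for the snap arithmetic, here is a minimal standalone sketch that is not part of the original post: the helper is copied verbatim from above, and the alignment a = 10 and period p = 7 are made-up values chosen so the boundaries ..., 3, 10, 17, 24, ... are easy to read.

        using System;

        class SnapToBoundaryDemo
        {
            // Copied from the post: rounds t down to a boundary of the form a + n*p.
            static long SnapToBoundary(long t, long a, long p)
            {
                return t - ((t - a) % p) - (t > a ? 0L : p);
            }

            static void Main()
            {
                // Hypothetical values for illustration: a = 10, p = 7, so the boundaries are ..., 3, 10, 17, 24, 31, ...
                Console.WriteLine(SnapToBoundary(23, 10, 7)); // 17: snaps down within a period
                Console.WriteLine(SnapToBoundary(24, 10, 7)); // 24: already on a boundary above the alignment
                Console.WriteLine(SnapToBoundary(5, 10, 7));  // 3: also works for timestamps below the alignment
            }
        }

    The same arithmetic works unchanged on real DateTime ticks, which is how it is used in the rest of the post.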
    How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time, assuming that there are no gaps between the windows (i.e. duration >= interval), a limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends:

        long e = a + (d % p);

    Using the window end alignment, we are finally ready to describe the start time selector:

        static long AdjustStartTime(long t, long e, long p)
        {
            return SnapToBoundary(t, e, p) + p;
        }

    To find the latest window including the end time for an event, we look for the last window start time (non-inclusive):

        public static long AdjustEndTime(long t, long a, long d, long p)
        {
            return SnapToBoundary(t - 1, a, p) + p + d;
        }

    Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1:

        public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment)
        {
            if (source == null) throw new ArgumentNullException("source");

            // reason about DateTime and TimeSpan in ticks
            long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);
            long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));

            // set alignment to earliest possible window
            var a = alignment.ToUniversalTime().Ticks % p;

            // verify constraints of this solution
            if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }
            if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }

            // find the alignment of window ends
            long e = a + (d % p);

            return source.AlterEventLifetime(
                evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),
                evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -
                    ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)));
        }

        public static DateTime ToDateTime(long ticks)
        {
            // just snap to min or max value rather than under/overflowing
            return ticks < DateTime.MinValue.Ticks
                ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)
                : ticks > DateTime.MaxValue.Ticks
                ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)
                : new DateTime(ticks, DateTimeKind.Utc);
        }

    Finally, we can describe our custom hopping window operator:

        public static IQWindowedStreamable<T> HoppingWindow2<T>(
            IQStreamable<T> source,
            TimeSpan duration,
            TimeSpan interval,
            DateTime alignment)
        {
            if (source == null) { throw new ArgumentNullException("source"); }
            return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow();
        }
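    Before wiring this into a StreamInsight query, the lifetime arithmetic can be checked in isolation with plain ticks. The following sketch is not from the original post; it copies the three helper functions above and applies them to event e1 from the example further down (5-second duration, 2-second interval, the March 15 alignment), so the printed values can be compared against the “Adjusted Events” table below.

        using System;

        class LifetimeArithmeticCheck
        {
            // Helpers copied from the post; pure tick arithmetic, no StreamInsight reference needed.
            static long SnapToBoundary(long t, long a, long p)
            {
                return t - ((t - a) % p) - (t > a ? 0L : p);
            }

            static long AdjustStartTime(long t, long e, long p)
            {
                return SnapToBoundary(t, e, p) + p;
            }

            static long AdjustEndTime(long t, long a, long d, long p)
            {
                return SnapToBoundary(t - 1, a, p) + p + d;
            }

            static void Main()
            {
                long d = TimeSpan.FromSeconds(5).Ticks;   // window duration
                long p = TimeSpan.FromSeconds(2).Ticks;   // hop interval
                var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);
                long a = alignment.Ticks % p;              // earliest possible window start
                long e = a + (d % p);                      // alignment of window ends

                // Event e1 from the example below: starts 6/28/2012 12:00:01 AM, ends 12:00:02 AM (UTC).
                long e1Start = new DateTime(2012, 6, 28, 0, 0, 1, DateTimeKind.Utc).Ticks;
                long e1End   = new DateTime(2012, 6, 28, 0, 0, 2, DateTimeKind.Utc).Ticks;

                Console.WriteLine(new DateTime(AdjustStartTime(e1Start, e, p), DateTimeKind.Utc)); // 6/28/2012 12:00:03 AM
                Console.WriteLine(new DateTime(AdjustEndTime(e1End, a, d, p), DateTimeKind.Utc));  // 6/28/2012 12:00:07 AM
            }
        }

    The printed interval matches the e1 row of the “Adjusted Events” output, which is exactly the invariant SnapToWindowIntervals relies on when it alters event lifetimes.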
    By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing!

        public void Main()
        {
            var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);
            var duration = TimeSpan.FromSeconds(5);
            var interval = TimeSpan.FromSeconds(2);
            var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);

            var events = this.Application.DefineEnumerable(() => new[]
            {
                EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),
                EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),
                EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),
                EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),
                EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),
                EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),
                EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),
            }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);

            var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);
            var query = from win in HoppingWindow2(events, duration, interval, alignment)
                        select win.Count();

            DisplayResults(adjustedEvents, "Adjusted Events");
            DisplayResults(query, "Query");
        }

    As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time:

    Adjusted Events
    StartTime                EndTime                   Payload
    6/28/2012 12:00:01 AM    12/31/9999 11:59:59 PM    e0
    6/28/2012 12:00:03 AM    6/28/2012 12:00:07 AM     e1
    6/28/2012 12:00:05 AM    6/28/2012 12:00:15 AM     e2
    6/28/2012 12:00:11 AM    6/28/2012 12:00:15 AM     e3

    Query
    StartTime                EndTime                   Payload
    6/28/2012 12:00:01 AM    6/28/2012 12:00:03 AM     1
    6/28/2012 12:00:03 AM    6/28/2012 12:00:05 AM     2
    6/28/2012 12:00:05 AM    6/28/2012 12:00:07 AM     3
    6/28/2012 12:00:07 AM    6/28/2012 12:00:11 AM     2
    6/28/2012 12:00:11 AM    6/28/2012 12:00:15 AM     3
    6/28/2012 12:00:15 AM    12/31/9999 11:59:59 PM    1

    Regards,
    The StreamInsight Team

    Read the article

  • CodePlex Daily Summary for Monday, November 26, 2012

    CodePlex Daily Summary for Monday, November 26, 2012

    Popular Releases

    - Redmine Reports: Redmine Reports V 1.0.7: Added a new sample report; added a new DateRange feature for report generation (see issue tracker ID 15372); updated to the latest MySql (6.6.4.0).
    - sb0t v.5: sb0t 5.00 alpha 4: First public alpha release. Don't be surprised if it crashes. :)
    - datajs - JavaScript Library for data-centric web applications: datajs version 1.1.0: datajs is a cross-browser and UI agnostic JavaScript library that enables data-centric web applications with the following features: OData client that enables CRUD operations including batching and metadata support using both ATOM and JSON payloads; single store abstraction that provides a common API on top of HTML5 local storage technologies; data cache component that allows reading data ranges from a collection and storing them locally to reduce the number of network requests. Changes...
    - VFPX: Code Analyst 1.0.3: Code Analyst 1.0.3 addresses a bug discovered with FoxCodePlus where the initialization code does not properly recognize the location of the folder, as well as some of the open issues in the Issue Tracker.
    - Team Foundation Server Administration Tool: 2.2: TFS Administration Tool 2.2 supports the Team Foundation Server 2012 Object Model. Visual Studio 2012 or Team Explorer 2012 must be installed before you can install this tool. You can download and install Team Explorer 2012 from http://aka.ms/TeamExplorer2012. There are no functional changes between the previous release (2.1) and this release.
    - XrmServiceToolkit - A CRM 2011 JavaScript Library: XrmServiceToolkit 1.3.1: Version: 1.3.1; date: November, 2012; dependency: JSON2, jQuery (latest or 1.7.2 above). New feature: a change of logic to increase performance when returning a large number of records. New function: XrmServiceToolkit.Soap.QueryAll returns all available records by query options (>5k+). New fix: XrmServiceToolkit.Rest.RetrieveMultiple not returning more than 50 records. New fix: XrmServiceToolkit.Soap.Business error when referring to number fields like (int, double, float). New ...
    - Coding Guidelines for C# 3.0, C# 4.0 and C# 5.0: Coding Guidelines for CSharp 3.0, 4.0 and 5.0: See Change History for a detailed list of modifications.
    - Math.NET Numerics: Math.NET Numerics v2.3.0: Portable Library Build: adds support for WP8 (.NET 4.0 and higher, SL5, WP8 and .NET for Windows Store apps). New: portable build also for F# extensions (.NET 4.5, SL5 and .NET for Windows Store apps). NuGet: portable builds are now included in the main packages, no more need for special portable packages. Linear Algebra: continued major storage rework, in this release focusing on vectors (previous release was on matrices); thin QR decomposition (in addition to existing full QR); static Cr...
    - ExtJS based ASP.NET 2.0 Controls: FineUI v3.2.1: 2012-11-25 v3.2.1. The Chinese release notes are garbled in this excerpt; the legible fragments mention MenuCheckBox CheckedChanged, window.IDS, TabCollection/ControlBaseCollection, Grid changes (SelectAllRows, PageItems, ExpandAllRowExpanders) and the sample pages grid/gridpageitems.aspx, grid/gridpageitemsrowexpander.aspx, grid/gridpageitems_pagesize.aspx and grid/gridrowexpanderexpandall2.aspx.
    - VidCoder: 1.4.9 Beta: Updated HandBrake core to SVN 5079. Fixed crashes when encoding DVDs with title gaps.
    - ZXing.Net: ZXing.Net 0.10.0.0: On the way to a release 1.0 the API should be stable now with this version: sync with rev. 2521 of the java version; windows phone 8 assemblies; improvements and fixes.
    - CharmBar: Windows 8 Charm Bar for Windows 7: Windows 8 Charm Bar for Windows 7.
    - BlackJumboDog: Ver5.7.3: 2012.11.24 Ver5.7.3. The Japanese release notes (items 1-6) are garbled in this excerpt; the legible fragments reference SMTP handling and DNS (CNAME resolution and TTL).
    - Liberty: v3.4.3.0 Release 23rd November 2012: Change log - added: (H4) a dialog which gives further instructions when attempting to open a "Halo 4 Data" file; (H4) a short note in the weapon editor stating that dropping your weapons will cap their ammo; (Reach) edit the world's gravity; (Reach) fine invincibility controls in the object editor; (Reach) edit object velocity; (Reach) change the teams of AI bipeds and vehicles; (Reach) enable/disable fall damage on the biped editor screen; (Reach) make AIs deaf and/or blind in the objec...
    - Umbraco CMS: Umbraco 4.11.0: NuGet: see the NuGet blog. Read the release blog post for 4.11.0. What's new: 50 bugfixes (see the issue tracker for a complete list); read the documentation for the MVC bits. Breaking changes: GetPropertyValue now returns an object, not a string (only affects upgrades from 4.10.x to 4.11.0). Note: if you need Courier, use the release candidate (as of build 26). The code editor has been greatly improved, but is sometimes problematic in Internet Explorer 9 and lower. Previously it was just disabled for IE and...
    - Audio Pitch & Shift: Audio Pitch And Shift 5.1.0.3: Fixed supported files list on the open dialog (added .pls and .m3u); Impulse Media Player splash message (can be disabled anyway).
    - WiX Toolset: WiX v3.7 RC: WiX v3.7 RC (3.7.1119.0) provides feature-complete Bundle update and reference tracking plus several bug fixes. For more information see Rob's blog post about the release: http://robmensching.com/blog/posts/2012/11/20/WiX-v3.7-Release-Candidate-available
    - Picturethrill: Version 2.11.20.0: Fixed up the Bing image provider on Windows 8.
    - Excel AddIn to reset the last worksheet cell: XSFormatCleaner.xla: Modified the commandbar code to use CommandBar IDs instead of English names.
    - Json.NET: Json.NET 4.5 Release 11: New feature: added ITraceWriter, MemoryTraceWriter, DiagnosticsTraceWriter. New feature: added StringEscapeHandling with options to escape HTML and non-ASCII characters. New feature: added non-generic JToken.ToObject methods. New feature: deserialize ISet<T> properties as HashSet<T>. New feature: added implicit conversions for Uri, TimeSpan, Guid. New feature: missing byte, char, Guid, TimeSpan and Uri explicit conversion operators added to JToken. New feature: special case...

    New Projects

    - Airlocker: Airlocker is a lightweight yet effective backup software. Right-click your folder, select "Send To -> Airlocker", and that's all! Next time you do it, only new & changed files will be copied.
    - Architecture Lab: Domain as XML: Domain as XML - Driven Development: Visual Studio code samples.
    - Base de datos 2 Farmacia distribuida: This is my summary.
    - Bullfrog project 2k11: This project was our entry for the XNA challenge 2011 at Games Fleadh. It is currently being used for a college project.
    - CedarLogic SharePoint Utilities and Extensions: Various utilities, tools, and features that provide cross-cutting capabilities in support of Windows SharePoint Services 3.0 and Microsoft Office SharePoint Server 2007.
    - Concurrent Extensions Library (CEL): Concurrent Extensions Library (CEL) provides functionality and implementations commonly useful in concurrent programming.
    - CookieChipper - Tasty treats!: Explore your IE cookie cache displaying forensic information, lock favorite cookies, delete all unlocked, search, manage, enjoy!
    - CqRS Event Sourcing Sample: CqRS sample.
    - Crivo de Erastóteles: A simple program that writes all prime numbers between 1 and 1 billion to an output file named "primos.txt".
    - DonNicky.Common: A set of helpers for everyday coding: extensions for common-purpose classes like Type, IEnumerable or Assembly, reflection routines, XML parsing and validation.
    - DotNetDevNet iPhone App: Now you can find out what meetings are being held at DotNetDevNet through your iPhone. Information on the meeting and the speakers.
    - Euro for Windows XP: A simple tool and sample to change the Estonian currency from Estonian Krone (kr) to Euro (€). Applies to all versions of Windows and from .NET 2.0, which is the default build. The sample creates a custom locale and updates existing users through the Registry.
    - Floridum: Project for an XML database.
    - FsSignals: FsSignals is a push reactive library.
    - Fuse8.CMF: Free content management system based on Microsoft ASP.NET MVC 3.
    - Fuse8.DF.Practicies: Domain Framework best practices.
    - Fuse8.GlobalisationFramework: Globalization.
    - Hatena Netfx Library: .NET library for Hatena services.
    - Ideopuzzle: A puzzle game.
    - inohigo: A programming language that was developed by inohiro.
    - jasBetaApplication: Beta place for the JAS application.
    - Jasmine - Community Management Infrastructure: "Jasmine" is an XML-based community management infrastructure.
    - Jupiter Toolkit: Jupiter Toolkit was a temporary name used for WinRT XAML Toolkit. If you clicked a link to get here, replace jupitertoolkit with winrtxamltoolkit in the URL.
    - Kiwi Repo: The simplest way to create and manage your Cydia repositories on Windows.
    - Knak: .NET APIs for CLR type mapping and micro-ORM. Fast and intuitive, convention based, delivered in small C# source files.
    - Micajah Mindtouch Deki Wiki Copier: Small C# application to move data between two Deki Wiki installs or, more importantly, from a wik.is account to a locally installed system.
    - misframework: misframework.
    - Mulyareksa Jayasakti Accounting: Service sales accounting information system for PT Mulyareksa Jayasakti Semarang.
    - MutualGuaranteeOnlineServices: MutualGuaranteeOnlineServices.
    - MySQL PowerShell Extensions: Provides a set of PowerShell cmdlets to manipulate a remote or local MySQL database server.
    - Old Book Transaction: A website for exchanging used books.
    - OneFineDay: (Description garbled in this excerpt.)
    - Paymill Wrapper .NET: Paymill Wrapper .NET is an API for easy integration of recurring billings and online payments through the product https://www.paymill.com.
    - PoliceHealth: Police health system.
    - Rbac_Eshoping: Rbac Std project.
    - Rhombus: Rhombus is a suite of applications to make it easier for small software teams to document what they and others are doing with a minimum of effort. It is developed on the .NET 4.0 framework, primarily in C# and ASP.NET.
    - Rollout Sharepoint Solutions - ROSS: ROSS performs the following actions: delete site collection and restart services; 'Get Latest Version' from SourceSafe; rebuild solution; install all wsp solutions; create site collections; check for build and provisioning errors; send email to developers if errors occurred.
    - RooBooks: RooBooks - books management tool for students and such (circa 2004).
    - SEI_Parkour: "Project ParkOUR" is the codename for the product being developed by members of the Software Engineering Incorporated team.
    - Simple Memory: A simple Memory game. You can define the dimension x*y. Two playing modes are possible: pairs with the same cards (classical memory) and pairs with different cards (find the same meaning).
    - Simples: A dot-net-a-ma-bob eco-system designed to be 'simples' to use.
    - Slatebox.js: Real-time mind-mapping and concept drawing.
    - Sychevs: (Description garbled in this excerpt.)
    - Tambourine.NUnit: Provides pivot report creation from NUnit combinatorial test result XML files. Also supports viewing of pivot data and exporting it to xlsx.
    - TheAssassin's OpenSource Software: Open source software for everyone's use. Mostly written in Python 3 (3.2, 3.3).
    - Titan: A lightweight object-relational mapping framework.
    - TT: Awesome TT app.
    - Ukázkové projekty: Contains sample projects by user TenCoKaciStromy.
    - Unified Cloud API: This API aims to provide a single interface for different cloud-based services and their corresponding providers.
    - VolgaTransTelecomClient: VolgaTransTelecomClient makes it easier for clients of the "Volga TransTelecom" company to get info about their account. It's developed in C#.
    - Windows Phone Shortcuts: This is a Windows Phone app project. Shortcuts for Bluetooth, Wi-Fi, Cellular, and Airplane Mode from the Start Screen.
    - WPFResumeVideo: Play video on any computer, and resume where you left off.
    - Wyvern's Depot: Personal code repository.
    - ?????????: (Name and description garbled in this excerpt.)

    Read the article

< Previous Page | 7 8 9 10 11 12  | Next Page >