Search Results

Search found 5589 results on 224 pages for 'rules and constraints'.

Page 90/224 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • Unity Bar auto-hide behaviour and application icons placement

    - by Andrei
    The first issue: it seems that sometimes when I hover over the left edge of the screen the Unity Bar will not stay on top of other windows even if I keep the cursor over it, while at other times it will stay on top. Is this normal behaviour, or am I affected by some bug or inconsistency? If it's normal, what's the logic behind it? The second issue: application icons for running applications do not maintain their position in the Unity Bar but instead move around according to some weird rules (if any?) that I can't work out. Is this to be expected, or is it a bug? Is there a way to force them to stop moving around? I like to see certain apps in certain positions and this bothers me.

    Read the article

  • Webcast: Leveraging Mobile And Social Commerce To Deliver A Complete Customer Experience

    - by Michael Hylton
    Mobile and social media are emerging as new channels for customers to interact and transact with brands. Mobile users demand experiences that are relevant and engaging and are designed with the capabilities and constraints of devices in mind. Just having a mobile app or mobile-specific website is not a long-term strategy. Brands must invest in an optimized experience, especially as mobile becomes critical to an overall digital commerce strategy.
    Debating the merits of using Facebook or not is missing the point when it comes to social media. True innovators are thinking beyond the social channel and are building programs that leverage Facebook data to drive conversions and engagement both on and off Facebook. Learn how to be more strategic about mobile and social commerce in this informative editorial webcast.
    Attend this webcast and you will learn:
    - How to leverage mobile and social touchpoints in digital commerce
    - Why having a Facebook page or a mobile app is not enough
    - The benefits of a consistent, personalized and relevant customer experience
    - Strategies for integrating mobile and social into an overall digital commerce strategy
    Featured Speakers:
    - Peter Sheldon, Senior Analyst, eBusiness & Channel Strategy Professionals, Forrester Research
    - Brenna Johnson, Product Manager, Oracle Commerce
    Click here to register.

    Read the article

  • REST API rule about tunneling

    - by miku
    I just read this in the REST API Rulebook: GET and POST must not be used to tunnel other request methods. Tunneling refers to any abuse of HTTP that masks or misrepresents a message’s intent and undermines the protocol’s transparency. A REST API must not compromise its design by misusing HTTP’s request methods in an effort to accommodate clients with limited HTTP vocabulary. Always make proper use of the HTTP methods as specified by the rules in this section. [emphasis mine] But then a lot of frameworks use tunneling to expose REST interfaces via HTML forms, since <form> knows only about GET and POST. My most recent example is a MethodRewriteMiddleware for Flask (submitted by the author of the framework): http://flask.pocoo.org/snippets/38/. Are there any ways to comply with the "Rule" without hacks or add-ons in web frameworks?
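
    For orientation, here is a rough illustration of the tunneling technique such snippets implement: a tiny WSGI wrapper that promotes a POST to the method named in a form/query field. The names here (the _method field, the class name) follow the convention of the linked Flask snippet but are otherwise assumptions, not its exact code.

        from urllib.parse import parse_qs

        class MethodRewriteMiddleware:
            """Tunnel PUT/DELETE/PATCH through POST for plain HTML forms."""
            allowed = frozenset({'GET', 'POST', 'PUT', 'DELETE', 'PATCH'})

            def __init__(self, app):
                self.app = app

            def __call__(self, environ, start_response):
                if environ.get('REQUEST_METHOD') == 'POST':
                    qs = parse_qs(environ.get('QUERY_STRING', ''))
                    method = (qs.get('_method') or [''])[0].upper()
                    if method in self.allowed:
                        # exactly the kind of rewriting the Rulebook frowns upon
                        environ['REQUEST_METHOD'] = method
                return self.app(environ, start_response)

        # usage with a Flask app object:
        # app.wsgi_app = MethodRewriteMiddleware(app.wsgi_app)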

    Read the article

  • Imaging: Paper Paper Everywhere, but None Should be in Sight

    - by Kellsey Ruppel
    Author: Vikrant Korde, Technical Architect, Aurionpro's Oracle Implementation Services team
    My wedding photos are stored in several empty shoeboxes. Yes...I got married before digital photography was mainstream...which means I'm old. But my parents are really old. They have shoeboxes filled with vacation photos on slides (I doubt many of you have even seen a home slide projector...and I hope you never do!). Neither my parents nor I should have shoeboxes filled with any form of photographs whatsoever. They should obviously live in the digital world...with no physical versions in sight (other than a few framed on our walls).
    Businesses grapple with similar challenges. But instead of shoeboxes, they have file cabinets and warehouses jam-packed with paper invoices, legal documents, human resource files, material safety data sheets, incident reports, and the list goes on and on. In fact, regulatory and compliance rules govern many industries, requiring that this paperwork be available for any number of years. It's a real challenge...especially trying to find archived documents quickly, and many times with no backup. Which brings us to a set of technologies called Image Process Management (or simply Imaging or Image Processing) that are transforming these antiquated, paper-based processes.
    Oracle's WebCenter Content Imaging solution is a combination of the WebCenter suite, which offers a robust set of content and document management features, and the Business Process Management (BPM) suite, which helps to automate business processes through the definition of workflows and business rules. Overall, the solution provides an enterprise-class platform for end-to-end management of document images within transactional business processes. It's a solution that provides all of the capabilities needed - from document capture and recognition, to imaging and workflow - to effectively transform your ‘shoeboxes’ of files into digitally managed assets that comply with strict industry regulations.
    The terminology can be quite overwhelming if you're new to the space, so we've provided a summary of the primary components of the solution below, along with a short description of the two paths that can be executed to load images of scanned documents into Oracle's WebCenter suite.
    - WebCenter Imaging (WCI): the electronic document repository that provides security, annotations, and search capabilities, and is the primary user interface for managing work items in the imaging solution
    - SOA & BPM Suites (workflow): provide business process management capabilities, including human tasks, workflow management, service integration, and all other standard SOA features. It's interesting to note that there are a number of 'jumpstart' processes available to help accelerate the integration of business applications, such as the accounts payable invoice processing solution for E-Business Suite that facilitates the processing of large volumes of invoices
    - WebCenter Enterprise Capture (WEC): expedites the capture of paper documents to digital images, offering high-volume scanning and importing from email, and allows for flexible indexing options
    - WebCenter Forms Recognition (WFR): automatically recognizes, categorizes, and extracts information from paper documents with greatly reduced human intervention
    - WebCenter Content: the backend content server that provides versioning, security, and content storage
    There are two paths that can be executed to send data from WebCenter Capture to WebCenter Imaging, both of which are described below:
    1. Direct Flow - the simplest and quickest way to push an image scanned from WebCenter Enterprise Capture (WEC) to WebCenter Imaging (WCI), using the bare minimum metadata. The WEC activities are: the paper document is scanned (or imported from email); the scanned image is indexed using a predefined indexing profile; the image is committed directly into the process flow.
    2. WFR (WebCenter Forms Recognition) Flow - the more complex process, during which data is extracted from the image using a series of operations including Optical Character Recognition (OCR), Classification, Extraction, and Export. This process creates three files (TIFF, XML, and TXT), which are fed to the WCI Input Agent (the high-speed import/filing module). The WCI Input Agent directory is a standard ingestion method for adding content to WebCenter Imaging; the process is as follows: WEC commits the batch using the respective commit profile, and a TIFF file is created, passing data through the file name by including values separated by "_" (underscores). WFR completes OCR, classification, extraction, and export, and pulls the data from the image. In addition to the TIFF file, which contains the document image, an XML file containing the extracted data and a TXT file containing the metadata that will be filled in WCI are also created. All three files are exported to WCI's Input Agent directory. Based on previously defined "input masks", the WCI Input Agent picks up the seeding file (often the TXT file). Finally, the TIFF file is pushed into UCM and a unique web-viewable URL is created. Based on the mapping data read from the TXT file, a new record is created in the WCI application.
    Although these processes may seem complex, the Oracle components work seamlessly together to achieve a high-performing and scalable platform. The solution has been field-tested at some of the largest enterprises in the world and has transformed millions and millions of paper-based documents into more easily manageable digital assets. For more information on how an Imaging solution can help your business, please contact [email protected] (for U.S. West inquiries) or [email protected] (for U.S. East inquiries).
    About the Author: Vikrant is a Technical Architect in Aurionpro's Oracle Implementation Services team, where he delivers WebCenter-based Content and Imaging solutions to Fortune 1000 clients. With more than twelve years of experience designing, developing, and implementing Java-based software solutions, Vikrant was one of the founding members of Aurionpro's WebCenter-based offshore delivery team. He can be reached at [email protected].

    Read the article

  • Apply non-hierarchical transforms to a hierarchical skeleton?

    - by user975135
    I use Blender3D, but the answer might not be API-exclusive. I have some matrices I need to assign to PoseBones. The resulting pose looks fine when there is no bone hierarchy (parenting) and messed up when there is. I've uploaded an archive with a sample blend of the rigged models, the text animation importer and a test animation file here: http://www.2shared.com/file/5qUjmnIs/sample_files.html Import the animation by selecting an Armature and running the importer on the "sba" file. Do this for both Armatures. This is how I assign the poses in the real (complex) importer:
        matrix_basis = ...  # matrix from file
        animation_matrix = matrix_basis * pose.bones['mybone'].matrix.copy()
        pose.bones[bonename].matrix = animation_matrix
    If I go to edit mode, select all bones and press Alt+P to undo parenting, the pose looks fine again. The API documentation says PoseBone.matrix is the "Final 4x4 matrix after constraints and drivers are applied (object space)", but it seems clear to me from these tests that the matrices are relative to parent bones. I tried doing something like this:
        matrix_basis = ...  # matrix from file
        animation_matrix = matrix_basis * (pose.bones['mybone'].matrix.copy() * pose.bones[bonename].bone.parent.matrix_local.copy().inverted())
        pose.bones[bonename].matrix = animation_matrix
    But it looks worse. I experimented with the order of operations, with no luck at all. For the record, in the old 2.4 API this worked like a charm:
        matrix_basis = ...  # matrix from file
        animation_matrix = armature.bones['mybone'].matrix['ARMATURESPACE'].copy() * matrix_basis
        pose.bones[bonename].poseMatrix = animation_matrix
        pose.update()
    Links to the Blender API reference: http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.BlendData.html#bpy.types.BlendData http://www.blender.org/documentation/blender_python_api_2_63_17/bpy.types.PoseBone.html#bpy.types.PoseBone
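
    For what it's worth, here is a minimal sketch (an untested assumption for the 2.6x API, constraints ignored) of folding the parent relationship in before assigning, based on the usual composition pose.matrix = parent.matrix * (parent_rest.inverted() * rest) * matrix_basis. Here desired is assumed to be a dict of armature-space matrices read from the file, keyed by bone name:

        def apply_pose(pose, desired):
            # order does not matter here: every matrix on the right-hand side comes
            # from the rest pose or from the file, never from a live pose bone
            for pb in pose.bones:
                target = desired[pb.name]
                rest = pb.bone.matrix_local                  # rest pose, armature space
                if pb.parent:
                    parent_rest = pb.parent.bone.matrix_local
                    parent_target = desired[pb.parent.name]  # parent's target from the file
                    local_rest = parent_rest.inverted() * rest
                    pb.matrix_basis = local_rest.inverted() * parent_target.inverted() * target
                else:
                    pb.matrix_basis = rest.inverted() * target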

    Read the article

  • I feel stuck in the middle of Python: how do I get past beginner?

    - by Isov5
    I really apologize if this doesn't follow the S.O. rules, but I need a little help. I personally still classify myself as a beginner in Python, yet I've written a very small and, I'm very sure, impractical program for my boss to use. I know I'm still a beginner because simple things still perplex me, but every book I read for beginners honestly just rehashes what I already know, and every 'more advanced' book doesn't really let me learn: they depend on example files and I never really understand why they built said function or said class. So on to my question... Are there any recommendations for a book or ANYTHING that pushes me out of this stage? I've used Head First and normally they are really good, but my issue there is they have me backtracking just to move forward again. It worked in HTML but it's confusing in Python. Basically I think I need to build a program while following along. Again, I like Head First's style, but I need something that isn't going to make me remember one thing just to forget it... For the record, I've checked into some O'Reilly books.

    Read the article

  • .htaccess rewrite www to non-www and remove .html

    - by lester8891
    I need rewrite rules to redirect http://www. to http:// and /file.html to /file. I've tried using a combination of these, but each time it results in a redirect loop in one of the situations:
        RewriteBase /
        RewriteRule ^(.*)\.html$ $1 [NC]
        RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC, L]
        RewriteRule ^(.*)$ http://domain.co.uk/$1 [R=301]
    I figure it's probably something to do with the flags but I don't know how to fix it. Just to be clear, it needs to handle all of these situations:
        http://www.domain.co.uk to http://domain.co.uk
        http://www.domain.co.uk/file.html to http://domain.co.uk/file
        http://domain.co.uk to http://domain.co.uk
        http://domain.co.uk/file.html to http://domain.co.uk/file
    Thanks!

    Read the article

  • EF4 CTP5 Code First Remove Cascading Deletes

    - by Dane Morgridge
    I have been using EF4 CTP5 with code first and I really like the new code. One issue I was having, however, was that cascading deletes are on by default. This may come as a surprise, as when using Entity Framework with anything but code first this is not the case. I ran into an exception with some one-to-many relationships I had:
        Introducing FOREIGN KEY constraint 'ProjectAuthorization_UserProfile' on table 'ProjectAuthorizations' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. See previous errors.
    To get around this, you can use the fluent API and put some code in the OnModelCreating:
        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<UserProfile>()
                .HasMany(u => u.ProjectAuthorizations)
                .WithRequired(a => a.UserProfile)
                .WillCascadeOnDelete(false);
        }
    This will work to remove the cascading delete, but I have to use the fluent API and it has to be done for every one-to-many relationship that causes the problem. I am personally not a fan of cascading deletes in general (for several reasons) and I'm not a huge fan of fluent APIs. However, there is a way to do this without using the fluent API: in OnModelCreating you can remove the convention that creates the cascading deletes altogether.
        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
        }
    Thanks to Jeff Derstadt from Microsoft for the info on removing the convention altogether. There is a way to build a custom attribute to remove it on a case-by-case basis, and I'll have a post on how to do this in the near future.

    Read the article

  • Conditional formatting of duplicate values in Excel

    - by jamiet
    One of the infrequent pleasures of being a data geek like me is that one does occasionally stumble across little-known yet incredibly useful features in a tool that you use day-in, day-out. Today this happened to me, and the feature is Excel's ability to highlight duplicate rows in a worksheet. Check this out: Notice that I have some data in my worksheet that contains duplicated values, and simply by selecting Conditional Formatting->Highlight Cells Rules->Duplicate Values… Excel will highlight (shown here in red) which rows are duplicated. It seems such a simple thing, but when you're working on a data integration project and the data that is being sent is of, well, let's say dubious quality, features like this are worth their weight in gold. I tweeted about this and it happened to catch a few people's attention, so I figured it might be worth blogging too. Note that I am using Excel 2013, but I happen to know that the feature exists in Excel 2010 and possibly in earlier versions too. Have a great weekend! @Jamiet

    Read the article

  • Cowboy Agile?

    - by Robert May
    In a previous post, I outlined the rules of Scrum. This post details one of those rules. I've often heard similar phrases around Scrum that clue me in to someone who doesn't understand Scrum. The phrases go something like this: "We don't do Agile because the idea of letting people just do whatever they want is wrong. We believe in a more structured approach." (i.e. Work is Prison, and I'm the Warden!) "I love Agile. Agile lets us do whatever we want!" (Cowboy Agile?) "We're Agile, but we use a process that I've created." (Cowboy Agile?) All of those phrases have one thing in common: the assumption that Agile, and I mean Scrum, lets you do whatever you want. This is simply not true. Executing Scrum properly requires more dedication, rigor, and diligence than happens in most traditional development methods.
    Scrum and Waterfall Compared
    Since Scrum and Waterfall are two of the most commonly used methodologies, a little bit of contrasting and comparing is in order.
    Waterfall: A project manager defines all tasks and then manages the tasks that team members are working on. Scrum: The team members define the tasks and estimates of the stories for the current iteration. Any team member may work on any task in the iteration.
    Waterfall: Usually there are only a few milestones that need to be met, the milestones are measured in months, and these milestones are expected to be missed. Little work is ever done to improve estimates, and poor estimators can hide behind high estimates. Scrum: Stories must be delivered every iteration, milestones are measured in hours, and the team is expected to figure out why their estimates were wrong, even when they were under. Repeated misses can get the entire team fired.
    Waterfall: Partially completed work is normal. Scrum: Partially completed work doesn't count.
    Waterfall: Nobody knows the task you're working on. Scrum: Everyone knows what you're working on, whether or not you're making progress, and how much longer you think it's going to take, in hours.
    Waterfall: Little requirement to show working code. Prototypes are ok. Scrum: Working code must be shown each iteration. No smoke and mirrors allowed.
    Waterfall: Testing is done in lengthy cycles at the end of development. Developers aren't held accountable. Scrum: Testing is part of the team. If the testers don't accept the story as complete, the team can't count it. Complete means that the story's functionality works as designed. The team can't have any open defects on the story.
    Waterfall: Velocity is rarely truly measured and difficult to evaluate. Scrum: Velocity is integral to the process, can be seen at a glance, and everyone in the company knows what it is.
    Waterfall: A business analyst writes requirements. Designers mock up screens. Developers hide behind "I did it just like the spec doc told me to and made the screen exactly like the picture." Scrum: Developers are expected to collaborate in real time. If a design is bad or lacks needed details, the developers are required to get it right in the iteration, because all software must be functional. Designers and business analysts are part of the team and must do their work in iterations slightly ahead of the developers.
    Waterfall: Upper management is often surprised. "You told me things were going well two months ago!" Scrum: Management receives updates at the end of every iteration showing them exactly what the team did and how that compares to what is remaining in the backlog. Managers know every iteration what their money is buying.
    Waterfall: Status meetings are rare or don't occur. Email is a primary form of communication. Scrum: Teams coordinate every single day with each other and use other high-bandwidth communication channels to make sure they're making progress. Email is used only as a last resort. Instead, team members stand up, walk to each other, and talk, face to face. If that's not possible, they pick up the phone.
    Waterfall: If someone asks what happened, it's at the end of a lengthy development cycle measured in months, and nobody really knows why it happened. Scrum: Someone asks what happened every iteration. The team talks about what happened, and then adapts to make sure that what happened either never happens again or happens every time.
    That's probably enough for now. As you can see, a lot is required of Scrum teams! One of the key differences in Scrum is that the burden for many activities is shifted to a group of people who share responsibility, instead of a single person having responsibility. This is a very good thing, since small groups usually come up with better and more insightful work than single individuals. This shift also results in better velocity. Team members can take vacations and the rest of the team simply picks up the slack. With Waterfall, if a key team member takes a vacation, delays can ensue. Scrum requires much more out of every team member and, as a result, Scrum teams outperform non-Scrum teams working 60-hour weeks.
    Recommended Reading
    Everyone considering Scrum should read Mike Cohn's excellent book, User Stories Applied.
    Technorati Tags: Agile, Scrum, Waterfall

    Read the article

  • Access Control and Accessibility in Oracle IRM 11g

    - by martin.abrahams
    A recurring theme you'll find throughout this blog is that IRM needs to balance security with usability and manageability. One of the innovations in Oracle IRM 11g typifies this, as we have introduced a new right that may be included in any role - Accessibility. When creating or modifying a role, you simply select Accessibility along with Open, Print, Edit or whatever rights you want to include in the role. You might, for example, have parallel roles of Reader and Reader with Accessibility and Contributor and Contributor with Accessibility. The effect of the Accessibility right is to relax some of the protection of content in use such that selected users can use accessibility tools. For example, a user with the Accessibility right would be able to use the screen magnification tool, which IRM would ordinarily prevent because it involves screen capture. This new right makes it easy for you to apply security to documents yet, subject to suitable approval processes, cater for the fact that a subset of users might be disproportionately inconvenienced by some of the normal usage constraints. Rather than make those users put up with the restrictions, or perhaps exempt them from using sealed documents altogether, this new right allows you to accommodate them in a controlled manner, and to balance security with corporate accessibility goals.

    Read the article

  • How to set up a (relatively) secure kiosk on Natty?

    - by Tim
    I'm trying to get my application to be the only thing a user sees when the machine is powered on - like a kiosk, but a little more secure. Ideally, what I'd like to happen is this: At machine power-on, the user sees the Ubuntu splash image, then my app. While the app is running, the user can't get back to the desktop or a text login prompt via any keyboard shortcut. This is the (relatively) secure bit. When the user exits the app, the user sees a shutdown image, then the machine powers off. In particular, I'd like to configure things so that the user never sees the Gnome desktop on startup or shutdown. At the moment, I've configured a default user to be logged on automatically, with an autostart item which starts my app, but after the Ubuntu startup screen the user sees the Gnome desktop briefly before my app is started. When the app is exited, the user is taken back to the Gnome desktop and has to shut the machine down manually. Also, because of time constraints, I can't really start over with a different window manager. Is there an easy way to configure all of this?

    Read the article

  • Do programmers at non-software companies need the same things as at software companies?

    - by Michael
    There is a lot of evidence that things like offices, multiple screens, administration rights on your own computer, and being allowed whatever software you want are great for productivity while developing. However, the studies I've seen tend toward companies that sell software. There, keeping the programmers productive is paramount to the company's profitability. However, at companies that produce software simply to support their primary function, programming is merely a support role. Do the same rules apply at a company that only uses the software it produces to support its business, where a lot of a programmer's work is maintenance?

    Read the article

  • What package do I need to build a Qt 5 & CMake application?

    - by Kevin Reid
    I'm trying to build sdrangelove, which wants Qt 5 and uses CMake for its build system, on Ubuntu 13.10. What package do I need to install to give it the file it's asking for here? There are a lot of *qt5* packages, and I've tried installing the promising-looking ones to no effect. All the discussions I've found either have things working fine or are talking about writing CMake build rules rather than executing them. I don't have a lot of experience with the organization of Debian/Ubuntu packaging.
        CMake Error at CMakeLists.txt:14 (find_package):
          By not providing "FindQt5Core.cmake" in CMAKE_MODULE_PATH this project has
          asked CMake to find a package configuration file provided by "Qt5Core",
          but CMake did not find one.
          Could not find a package configuration file provided by "Qt5Core"
          (requested version 5.0) with any of the following names:
            Qt5CoreConfig.cmake
            qt5core-config.cmake
          Add the installation prefix of "Qt5Core" to CMAKE_PREFIX_PATH or set
          "Qt5Core_DIR" to a directory containing one of the above files. If
          "Qt5Core" provides a separate development package or SDK, be sure it has
          been installed.

    Read the article

  • How does flocking algorithm work?

    - by Chan
    I have read and understand the basics of the flocking algorithm. Basically, we need three behaviors: 1. Cohesion, 2. Separation, 3. Alignment. From my understanding, it's like a state machine: every time we do an update (then draw), we check the constraints of all three behaviors, and each behavior returns a Vector3 which is the "correct" orientation that an object should transform to. So my initial idea was:
        /// <summary>
        /// Objects stick together
        /// </summary>
        /// <returns></returns>
        private Vector3 Cohesion()
        {
            Vector3 result = new Vector3(0.0f, 0.0f, 0.0f);
            return result;
        }

        /// <summary>
        /// Objects align
        /// </summary>
        /// <returns></returns>
        private Vector3 Align()
        {
            Vector3 result = new Vector3(0.0f, 0.0f, 0.0f);
            return result;
        }

        /// <summary>
        /// Objects separate from each other
        /// </summary>
        /// <returns></returns>
        private Vector3 Separate()
        {
            Vector3 result = new Vector3(0.0f, 0.0f, 0.0f);
            return result;
        }
    Then I searched online for pseudocode, but much of it involves velocity and acceleration plus other stuff, and this part confused me. In my game, all objects move at constant speed, and they have one leader. So can anyone share an idea of how to start implementing this flocking algorithm? Also, did I understand it correctly? (I'm using XNA 4.0)
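
    For orientation, here is a rough, framework-agnostic sketch of the three rules plus a leader-following term, written in Python with assumed names (boid objects carrying numpy position and heading vectors); with constant speed you simply renormalize the blended steering vector rather than integrating acceleration:

        import numpy as np

        def cohesion(me, neighbours):
            if not neighbours:
                return np.zeros(3)
            centre = np.mean([b.position for b in neighbours], axis=0)
            return centre - me.position                  # steer toward the local centre

        def separation(me, neighbours, radius=2.0):
            push = np.zeros(3)
            for b in neighbours:
                offset = me.position - b.position
                d = np.linalg.norm(offset)
                if 0 < d < radius:
                    push += offset / d**2                # stronger push when very close
            return push

        def alignment(me, neighbours):
            if not neighbours:
                return np.zeros(3)
            avg_heading = np.mean([b.heading for b in neighbours], axis=0)
            return avg_heading - me.heading              # match the flock's heading

        def steer(me, neighbours, leader, speed=1.0, weights=(1.0, 1.5, 1.0, 2.0)):
            wc, ws, wa, wl = weights
            desired = (wc * cohesion(me, neighbours) +
                       ws * separation(me, neighbours) +
                       wa * alignment(me, neighbours) +
                       wl * (leader.position - me.position))   # bias toward the leader
            n = np.linalg.norm(desired)
            return me.heading if n == 0 else desired / n * speed  # constant speed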

    Read the article

  • Is there a resource that explains the benefits of layered programming?

    - by P.Brian.Mackey
    Some developers I know favor what I would call a procedural programming style. I recognize that procedural programming has its uses, albeit not in the business application world of .NET programming. So let's say we have a winform application with a buttonclick event. The buttonclick handles everything from the UI configuration to the database call and data manipulation. So you end up with a method that is hundreds of lines of code long. Aside from the fact that this code can't be considered testable for various reasons, this style of programming is fragile to change. I can talk about OO, anti-patterns, etc. The problem is that any distinct topic I can dream up requires a great deal of explanation to understand the potential benefits. Outside of finding a new job (lots of businesses program this way), how can I teach these kinds of developers how to write better code? Obviously we can't sit around a round table and discuss pros and cons all day due to time constraints and real work that has to be done, although intense training is the only thing I can think of to fix these problems. Not to say I write perfect code; I most certainly do not. I do believe there are certain best practices that should be followed as a rule, e.g. OO in the context of .NET. The most common excuse I hear is "we can't write code fast enough if we do it like that".
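
    To make the contrast concrete, here is a minimal sketch (in Python rather than .NET, with purely illustrative names) of the separation a layered design gives you: the click handler only orchestrates, while the business rule and the data access live in small, individually testable pieces.

        import sqlite3

        # data access layer: the only place that knows about the database
        class OrderRepository:
            def __init__(self, path="orders.db"):
                self.conn = sqlite3.connect(path)

            def orders_for(self, customer_id):
                rows = self.conn.execute(
                    "SELECT id, total FROM orders WHERE customer_id = ?", (customer_id,))
                return rows.fetchall()

        # business layer: pure logic, testable without UI or database
        class DiscountService:
            def __init__(self, repository):
                self.repository = repository

            def discount_for(self, customer_id):
                spent = sum(total for _, total in self.repository.orders_for(customer_id))
                return 0.1 if spent > 1000 else 0.0

        # "button click" handler: orchestration only, a few lines instead of hundreds
        def on_apply_discount_clicked(customer_id, service):
            print(f"Discount: {service.discount_for(customer_id):.0%}")

        # wiring, e.g. at application start-up:
        # service = DiscountService(OrderRepository())
        # on_apply_discount_clicked(42, service)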

    Read the article

  • IL and case-sensitivity

    - by Ali .NET
    Quoted from A Brief Introduction To IL code, CLR, CTS, CLS and JIT In .NET: CLS stands for Common Language Specification. It is a subset of CTS. CLS is a set of rules or guidelines which, if followed, ensure that code written in one .NET language can be used by another .NET language. For example, one rule is that we cannot have member functions with the same name differing only in case, i.e. we should not have add() and Add(). This may work in C# because it is case-sensitive, but if we try to use that C# code from VB.NET it is not possible, because VB.NET is not case-sensitive. Based on the above text I want to confirm two points here: Is the case-sensitivity rule a condition for member functions only, and not for member properties? Is it true that C# wouldn't be interoperable with VB.NET if it didn't take care of the case sensitivity?

    Read the article

  • Creating a secure SQL server login - CHECK_EXPIRATION & CHECK_POLICY

    - by cabhilash
    In SQL Server you can create logins using T-SQL or using the options provided by SQL Server Management Studio.
        CREATE LOGIN sql_user WITH PASSWORD = 'sql_user_password' MUST_CHANGE,
            DEFAULT_DATABASE = defDB,
            CHECK_EXPIRATION = ON,
            CHECK_POLICY = ON
    As mentioned in the previous article (http://weblogs.asp.net/cabhilash/archive/2010/04/07/login-failed-for-user-sa-because-the-account-is-currently-locked-out-the-system-administrator-can-unlock-it.aspx), when CHECK_POLICY = ON the user account follows the password rules of the system on which SQL Server is installed. When the MUST_CHANGE keyword is used, the user is forced to change the password the first time he or she tries to log in. CHECK_EXPIRATION and CHECK_POLICY are only enforced on Windows Server 2003 and later. If you want to turn off password expiration enforcement or security policy enforcement, you can do so with the following statements (but these won't work if you created your login with MUST_CHANGE and the user didn't change the default password):
        ALTER LOGIN sql_login WITH CHECK_EXPIRATION = OFF
        go
        ALTER LOGIN sql_login WITH CHECK_POLICY = OFF

    Read the article

  • Should I post my PDF library for SEO? [closed]

    - by Iunknown
    Possible Duplicate: Do search engines crawl PDFs and if so are there any rules to follow when making them When a Sales call comes in, the caller often says something like: 'I searched for 3 days before finding your product and it's exactly what I need!' That's telling me that I need some SEO work. We redid our website and streamlined it which removed many of our 'How-To' documents. Since those PDF documents contain words that people might search for, I was wondering if I could add a 'Complete library' link to the bottom of a page that will load up the entire PDF library. Would that help my ranking?

    Read the article

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought leadership to the global banking industry. Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector. By effectively communicating best practices, the Basel Committee has influenced financial regulations worldwide. Basel regulations are intended to help banks:
    - More easily absorb shocks due to various forms of financial-economic stress
    - Improve risk management and governance
    - Enhance regulatory reporting and transparency
    In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems. This new set of regulations included many enhancements to previous rules and will have both short and long term impacts on the banking industry. Some of the key features of Basel III include:
    - A stronger capital base: more stringent capital standards and higher capital requirements; introduction of capital buffers
    - Additional risk coverage: enhanced quantification of counterparty credit risk; credit valuation adjustments; wrong-way risk; asset value correlation multiplier for large financial institutions
    - Liquidity management and monitoring: introduction of a leverage ratio
    - Even more rigorous data requirements
    To implement these features banks need to embark on a journey replete with challenges. These can be categorized into three key areas: data, models and compliance.
    Data challenges:
    - Data quality - all standard dimensions of Data Quality (DQ) have to be demonstrated. Manual approaches are now considered too cumbersome and automation has become the norm.
    - Data lineage - data lineage has to be documented and demonstrated. The PPT/Excel approach to documentation is being replaced by metadata tools. Data lineage has become dynamic due to a variety of factors, making static documentation outdated quickly.
    - Data dictionaries - a strong and clean business glossary is needed, with proper identification of business owners for the data.
    - Data integrity - a strong, scalable architecture with workflow tools helps demonstrate data integrity. Manual touch points have to be minimized.
    - Data relevance/coverage - data must be relevant to all portfolios and storage devices must allow for sufficient data retention. Coverage of both on and off balance sheet exposures is critical.
    Model challenges:
    - Model development - requires highly trained resources with both quantitative and subject matter expertise.
    - Model validation - all Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.
    - Model documentation - all models need to be adequately documented. Creation of document templates and model development processes/procedures is key.
    - Risk and finance integration - this integration is necessary for Basel, as the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management, and they need to somehow be equal. This is tricky at best from an implementation perspective.
    Compliance challenges:
    - Rules interpretation - some Basel III requirements leave room for interpretation. A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities.
    - Gap identification and remediation - internal identification and remediation of gaps ensures smoother Basel compliance and audit processes. However, business lines are challenged by the competing priorities which arise from regulatory compliance and business-as-usual work.
    - Qualification readiness - providing internal and external auditors with robust evidence of a thorough examination of the readiness to proceed to parallel run and Basel qualification.
    In light of new regulations like Basel III and local variations such as the Dodd-Frank Act (DFA) and the Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions. For example, executives must consider:
    - How will Basel III play into their risk appetite?
    - How will they create project plans for Basel III when they haven't yet finished implementing Basel II?
    - How will new regulations impact capital structure, including profitability and capital distributions to shareholders?
    After all, new regulations often lead to diminished profitability as well as an assortment of implementation problems, as we discussed earlier in this note. However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability. And a more stable banking system:
    - Increases consumer confidence, which in turn supports banking activity
    - Ensures that adequate funding is available for individuals and companies
    - Puts regulators at ease, allowing bankers to focus on banking
    Stability is intended to bring long-term profitability to banks. Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor and disclose its risks. This can be done with the assistance and oversight of an independent regulatory authority. A spectrum of banks exists today wherein some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace them for the benefits I highlighted above. Do share with me how your institution is coping with and embracing these new regulations within your bank.
    Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services. He has over 19 years of experience in areas that span enterprise, credit, market, and country risk management; financial modeling and valuation; and international financial markets research and analyses.

    Read the article

  • Redirect a url to another url in IIS 7.5

    - by Jason White
    I have no idea why this isn't working. I've tried creating map rules and then rewriting and redirecting the URL. I've tried just redirecting it with a simple rewrite rule, and no matter what, the only time I can get it to work is if I set the match url to the regex .*. I'm trying to redirect webmail.example.com to mail.example.com. It seemed like it would have taken but a couple of seconds; boy was I wrong. I'm thinking I must be doing something wrong with the regex, but I'm not sure what, as when I test it it seems to work fine.
        <rule name="webmail" patternSyntax="ECMAScript" stopProcessing="true">
          <match url=".*webmail.*" />
          <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
          </conditions>
          <action type="Redirect" url="https://mail.example.com:8000" appendQueryString="false" logRewrittenUrl="true" />
        </rule>
    Thanks

    Read the article

  • Describe business logic with diagrams

    - by Nikos M.
    I am currently developing a web application for my thesis. I was asked by my professor to make diagrams to describe the business logic. Since I don't have any prior experience, I am pretty confused by all the terminology. I managed to clarify, I think, what business rules and business logic are, but I can't find out how you describe the business logic. Is it something particular or is it something more general? Do I need to learn UML? Does the fact that I use MVC affect the way I'll describe it?

    Read the article

  • Pattern for Accessing MySQL connection

    - by Dipan Mehta
    We have a C++ application which accesses a MySQL database. There are several (about 5 or so) threads in the application (using the Boost library for threading), and each thread has a few objects, each of which is trying to access the database for its own purpose. The application has a simple ORM-like model, but that really is not an important factor here. There are three potential access patterns I can think of:
    1. There could be a single connection object per application or per thread, shared between all objects (or a group of them). The object needs to be thread safe and there will be contention, but MySQL will not be hit with too many connections.
    2. Every object could initiate a connection on its own. The database needs to take care of concurrency (which I think MySQL can), and the design could be much simpler. There are two possibilities here: (a) each object keeps a persistent connection for its life, or (b) each object initiates a connection as and when needed.
    3. To reduce the contention of case 1 without creating as many sockets as case 2, we can have group/set-based connections. So there could be more than one connection (say N), and each of these connections could be shared across M objects.
    Naturally, each pattern has a different resource cost and would work under different constraints and objectives. What criteria should I use to choose among these patterns for my own application? What are some of the advantages and disadvantages of each of these patterns over the others? Are there any other patterns which are better?
    PS: I have been through these questions: "mysql, one connection vs multiple" and "MySQL with multiple threads and processes". But they don't quite answer exactly what I am trying to ask.
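
    For illustration, here is a minimal sketch of pattern 3 (a bounded pool of N real connections shared by many objects), written in Python for brevity rather than the post's C++/Boost; make_connection stands in for whatever MySQL client call you use.

        import queue
        from contextlib import contextmanager

        class ConnectionPool:
            def __init__(self, make_connection, size=4):
                self._pool = queue.Queue(maxsize=size)
                for _ in range(size):
                    self._pool.put(make_connection())    # N sockets total, opened up front

            @contextmanager
            def connection(self, timeout=None):
                conn = self._pool.get(timeout=timeout)   # blocks when all N are in use
                try:
                    yield conn
                finally:
                    self._pool.put(conn)                 # always return it to the pool

        # any object in any thread borrows a connection only for the duration of a query:
        # with pool.connection() as conn:
        #     run_query(conn, "SELECT ...")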

    Read the article

  • Are there any resources on how to identify problems that could best be solved with templates?

    - by sap
    I decided to improve my knowledge of template metaprogramming. I know the syntax and rules and have been playing with countless examples from online resources. I understand how powerful templates can be and how much compile-time optimization they can provide, but I still can't "think in templates": I can't tell on my own whether a certain problem would be best solved with templates and, if it would, how to adapt that problem to templates. Is there some kind of online resource or book that teaches how to identify problems that could best be solved with templates and how to adapt such problems?

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >