Search Results

Search found 24792 results on 992 pages for 'chris may'.


  • SQL SERVER – Introduction to Big Data – Guest Post

    - by pinaldave
    BIG Data – such a big word – everybody talks about it nowadays. It is the word in the database world. In one conversation, I asked my friend Jasjeet Singh this very question – what is Big Data? He instantly came up with a very effective write-up. Jasjeet is working as a Technical Manager with Koenig Solutions. He leads the SQL domain and holds rich IT industry experience. Koenig is a 19-year-old IT training company that offers several certification choices; some of its courses include SharePoint Training, Project Management certifications, Microsoft Trainings, Business Intelligence programs, and Web Design and Development courses.

    Big Data, as the name suggests, is about data that is BIG in nature. The data is BIG in terms of size, and it is difficult to manage such enormous data with the relational database management systems that are popular today. Big Data is not just about being large in size; it is also about the variety of the data, which differs in form or type. Some examples of Big Data:

    • Scientific data related to weather and atmosphere, genetics, etc.
    • Data collected by various medical procedures, such as radiology, CT scans, MRI, etc.
    • Data related to the Global Positioning System
    • Pictures and videos
    • Radio frequency data
    • Data that may vary very rapidly, like stock exchange information

    Apart from the difficulties in managing and storing such data, it is difficult to query, analyze and visualize it. The characteristics of Big Data can be defined by four Vs:

    Volume: A large volume of data that may span petabytes, exabytes and so on. What volume of data counts as Big Data varies from organization to organization.

    Variety: As discussed above, Big Data is not limited to relational information or structured data. It can also include unstructured data like pictures, videos, text, audio, etc.

    Velocity: The speed at which data changes. The higher the velocity, the more efficient the system must be to capture and analyze the data. Missing any important point may lead to wrong analysis or may even result in loss.

    Veracity: Recently added as the fourth V, it generally means truthfulness or adherence to the truth. In terms of Big Data, it is more of a challenge than a characteristic. It is difficult to ascertain the truth in an enormous amount of high-velocity data; there are always chances of imprecise and uncertain data, and it is a challenging task to clean such data before it is analyzed.

    Big Data can be considered the next big thing in the IT sector in terms of innovation and development. If appropriate technologies are developed to analyze and use the information, it can be the driving force for almost all industry segments – Retail, Manufacturing, Service, Finance, Healthcare, etc. – helping them automate business decisions, increase productivity, and innovate and develop new products.

    Thanks, Jasjeet Singh, for an excellent write-up.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Database, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Big Data

    Read the article

  • Interrupted Upgrade from 11.10 to 12.04

    - by Tamil
    My upgrade from 11.10 to 12.04 using the alternate ISO got interrupted and I had to hard-restart my machine. Now I feel that everything was recovered except my already-installed packages, like vim. How do I back up my home folder for a fresh installation of Ubuntu? The errors I'm facing are below. I can't mark any package for re-installation in Synaptic, nor remove and install one.

    Output of sudo apt-get install vim:

        Building dependency tree
        Reading state information... Done
        Package vim is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package 'vim' has no installation candidate

    If I try installing it from Synaptic I get:

        apache2.2-common: Package apache2.2-common has no available version, but exists in the database.
        This typically means that the package was mentioned in a dependency and never uploaded,
        has been obsoleted or is not available with the contents of sources.list

    Contents of my sources.list file:

        # added by the release upgrader
        # deb cdrom:[Ubuntu 12.04.1 LTS _Precise Pangolin_ - Release amd64 (20120822.4)]/ precise main restricted
        # added by the release upgrader
        # # deb cdrom:[Ubuntu 12.04.1 LTS _Precise Pangolin_ - Release amd64 (20120822.4)]/ precise main restricted
        # deb cdrom:[Ubuntu 11.04 _Natty Narwhal_ - Release amd64 (20110427.1)]/ natty main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://archive.ubuntu.com/ubuntu precise main restricted
        deb-src http://archive.ubuntu.com/ubuntu precise main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://archive.ubuntu.com/ubuntu precise-updates main restricted
        deb-src http://archive.ubuntu.com/ubuntu precise-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://archive.ubuntu.com/ubuntu precise universe
        deb-src http://archive.ubuntu.com/ubuntu precise universe
        deb http://archive.ubuntu.com/ubuntu precise-updates universe
        deb-src http://archive.ubuntu.com/ubuntu precise-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://archive.ubuntu.com/ubuntu precise multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise multiverse
        deb http://archive.ubuntu.com/ubuntu precise-updates multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise-updates multiverse
        ## Uncomment the following two lines to add software from the 'backports'
        ## repository.
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        # deb http://us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse
        # deb-src http://us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu precise-security main restricted
        deb-src http://archive.ubuntu.com/ubuntu precise-security main restricted
        deb http://archive.ubuntu.com/ubuntu precise-security universe
        deb-src http://archive.ubuntu.com/ubuntu precise-security universe
        deb http://archive.ubuntu.com/ubuntu precise-security multiverse
        deb-src http://archive.ubuntu.com/ubuntu precise-security multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu precise partner
        # deb-src http://archive.canonical.com/ubuntu natty partner
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        deb http://extras.ubuntu.com/ubuntu precise main
        deb-src http://extras.ubuntu.com/ubuntu precise main
        # deb http://tamil.3758_gmail.com:[email protected]/free unstable main # disabled on upgrade to oneiric
        # deb http://debian.datastax.com/natty oneiric main # disabled on upgrade to oneiric

    Output of sudo apt-get update:

        Err http://archive.ubuntu.com precise InRelease
        Err http://archive.canonical.com precise InRelease
        Err http://archive.ubuntu.com precise-updates InRelease
        Err http://archive.ubuntu.com precise-security InRelease
        Err http://extras.ubuntu.com precise InRelease
        Err http://archive.canonical.com precise Release.gpg
          Unable to connect to 172.16.140.249:3142:
        Err http://archive.ubuntu.com precise Release.gpg
          Unable to connect to 172.16.140.249:3142:
        Err http://archive.ubuntu.com precise-updates Release.gpg
          Unable to connect to 172.16.140.249:3142:
        Err http://extras.ubuntu.com precise Release.gpg
          Unable to connect to 172.16.140.249:3142:
        Err http://archive.ubuntu.com precise-security Release.gpg
          Unable to connect to 172.16.140.249:3142:
        W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/precise/InRelease
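    A possible direction, sketched under stated assumptions (the backup mount point is hypothetical; the proxy address comes from the errors above): the repeated "Unable to connect to 172.16.140.249:3142" suggests apt is still pointed at a caching proxy (3142 is the usual apt-cacher-ng port) that is no longer reachable, and the home folder can be backed up before a fresh install.

        # Sketch, not a verified fix.
        # 1. Back up the home folder to an external drive mounted at /media/backup
        #    (the mount point is hypothetical -- adjust to your own).
        rsync -aH --progress /home/your_user/ /media/backup/your_user/

        # 2. Look for a leftover proxy setting pointing at the unreachable host;
        #    remove or comment it out if that machine is gone.
        grep -r "Proxy" /etc/apt/apt.conf.d/ /etc/apt/apt.conf 2>/dev/null

        # 3. Then refresh the package lists and retry the install.
        sudo apt-get update && sudo apt-get install vim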

    Read the article

  • Windows Phone 7 Silverlight / XNA development talk

    - by subodhnpushpak
    Hi, I presented on Windows Phone 7 app development using Silverlight. Here are a few pics from the event. I demonstrated Visual Studio and the emulator's capabilities and features, gave a demo of a WP7 app communicating with an OData service, and showed an XNA app. There were a lot of curious questions; I am listing them here because they keep popping up again and again:

    1. What tools does it take to develop a WP7 app? Are they free?
    A typical WP7 app can be developed using either Silverlight or XNA. For developers, Visual Studio 2010 is a good choice, as it provides an integrated development environment with lots of useful project templates, which makes the task really easy. For designers, Blend may be used to develop the UI in XAML. Both tools are FREE (Express versions) to download and very intuitive to use.

    2. What about the learning curve?
    If you know C# (or any other programming language), the learning curve is really flat. XAML (used for the UI) may be new to you, but trust me, it's very intuitive. You can also use Microsoft Blend to generate the UI (XAML) for you.

    3. How can I develop and test an app without an actual device? How can I be sure my app runs as expected on an actual device?
    The WP7 SDK comes with an excellent emulator, which you can use for development and testing on a computer. Later you can just change a setting and deploy the application to a WP7 device. You will need the Zune software to deploy the application to a phone, along with a developer key from the WP7 Marketplace. You can obtain a key from the Marketplace by filling in a form; the whole registration process is easy, just follow the steps on the site.

    4. Which one should I use: Silverlight or XNA?
    Use Silverlight for enterprise/business/utility apps. Use XNA for games. While each platform is capable and strong, and they may be used in conjunction, the development methodologies are very different: XNA works on a typical do..while loop, whereas Silverlight is event-based.

    5. Where are the learning resources? Are they free?
    There is lots of material on WP7, and most of it is free. There is an excellent free book by Charles Petzold to download, and http://www.microsoft.com/windowsphone is full of demos, to-dos and videos.

    All the exciting stuff was captured live and you can view it here, in case you were not able to catch it live: http://livestre.am/AUfx. My talk starts at the 3:19:00 mark in the video!

    Is there an app you miss on WP7? Do let me know about it and I may work on it for free!

    Keep discovering. Keep it simple. WP7.

    Subodh

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). Some developers try to include as much information as possible in the trace, making it risky to store those logs and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about it and never read the documentation and/or warning messages.

    For example, when dealing with files, some developers are tempted to trace the names of the files. If we trace everything on error, then before appending a file name to a directory it will be easy to notice, for example, that the appended name is too long and that the bug in the code was forgetting to check the length of the concatenated string. That is helpful, but it is sensitive data and must never appear in logs. In the same way, the following must never appear in a trace:

    • Passwords,
    • IP addresses and network information (MAC address, host name, etc.)¹,
    • Database accesses,
    • Direct input from the user and stored business data.

    So what other types of information must be banished from the logs? Are there any guidelines already written which I can use?

    ¹ Obviously, I'm not talking about things such as IIS or Apache logs. What I'm talking about is the sort of information which is collected with the sole intent of debugging the application itself, not of tracing the activity of untrusted entities.

    Edit: Thank you for your answers and your comments. Since my question was not very precise, I'll try to answer the questions asked in the comments:

    What am I doing with the logs? The logs of the application may be stored in memory, in plain text on the hard disk of localhost, in a database (again in plain text), or in Windows Events. In every case, the concern is that those stores may not be safe enough. For example, when a customer runs an application and this application stores logs in a plain-text file in a temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the internet; for example, if a customer has an issue with an application, we can ask her to run it in full-trace mode and to send us the log file. Also, some applications may send crash reports to us automatically (and even if there are warnings about sensitive data, in most cases customers don't read them).

    Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for bringing that up; I probably should take a look at those fields for some clues about what I can include in the guidelines.

    Isn't it easier to encrypt the data? No. It would make every application much more difficult, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about logs submitted to us by a customer, we must be able to read the logs without having access to sensitive data. So technically, it's easier to never include sensitive information in the logs at all, and to never have to care about how and where those logs are stored.
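    To make such a guideline concrete, here is a minimal sketch of the pattern it implies when using C# diagnostics and TraceSource: mask sensitive values at the call site so nothing downstream ever sees them. The SafeTrace class, the Redact helper, and the trace source name are hypothetical illustrations, not part of the original question.

        using System;
        using System.Diagnostics;

        static class SafeTrace
        {
            // Hypothetical trace source name; listeners are configured as usual.
            private static readonly TraceSource Source = new TraceSource("MyApp");

            // Mask a sensitive value before it can reach any listener. Keeping
            // the length lets you still diagnose "string too long" bugs without
            // leaking the file name itself.
            public static string Redact(string sensitive) =>
                sensitive == null ? "<null>" : $"<redacted:{sensitive.Length} chars>";

            public static void Error(string message) =>
                Source.TraceEvent(TraceEventType.Error, 0, message);
        }

        class Example
        {
            static void Main()
            {
                string fileName = "quarterly-report-for-acme-corp.xlsx"; // sensitive
                // Log the property you need (the length), never the value itself.
                SafeTrace.Error($"Appended name too long: {SafeTrace.Redact(fileName)}");
            }
        }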

    Read the article

  • Enterprise Manager in EPM 11.1.2.x...a game of hide and seek!

    - by THE
    guest article: Maurice Bauhahn:

    Users of Oracle Hyperion Enterprise Performance Management 11.1.2.0 and 11.1.2.1 may puzzle over why the URL http://<servername>:7001/em does not conjure up Enterprise Manager Fusion Middleware Control. This powerful tool is installed by default... but WebLogic may not have been 'Extended' to allow you to call it up (we are hopeful this 'Extend' step will not be needed with 11.1.2.2). The explanation is on pages 425 and following of the following document: http://www.oracle.com/technetwork/middleware/bi-foundation/epm-tips-issues-1-72-427329.pdf

    A close look at the screen dumps in that section reveals a somewhat scary prospect, however: the non-AdminServer servlets had all failed (see the red down-arrow icons to the right of their names) after the configuration! Of course you would want to avoid that scenario! A rephrasing of the instructions might help:

    1. Ensure the WebLogic AdminServer is not running (in a default scenario that would mean port 7001 is not active).
    2. Ensure you have logged into the computer as the installing owner of EPM.
    3. Since Enterprise Manager uses a LOT of resources, be sure that there is adequate free RAM to accommodate the added load.
    4. On the machine where the WebLogic AdminServer is set up (typically the Foundation Services machine), run \Oracle\Middleware\wlserver_10.3\common\bin\config (config.sh on Unix).
    5. Select the 'Extend an existing WebLogic domain' option, and click the 'Next' button.
    6. Select the domain being used by EPM System. Typically, the default domain is created under /Oracle/Middleware/user_projects/domains and is named EPMSystem. Click 'Next'.
    7. Under 'Extend my domain automatically to support the following added products', place a check mark before 'Oracle Enterprise Manager - 11.1.1.0 [oracle_common]' to select it.
    8. Continue accepting the defaults by clicking 'Next' on each page until, on the last page, you click 'Extend'. The system will grind for a few minutes while it configures (deploys?) EM.
    9. Start the AdminServer.

    Sometimes there is contention in the startup order of the various servlets (resulting in some not coming up). To avoid that problem on Microsoft Windows machines, you may start and stop services via the following command lines, analogous to those run on Linux/Unix (these space out the timings of these events more carefully):

        Ensure EPM is up:       \Oracle\Middleware\user_projects\epmsystem1\bin\start.bat
        Ensure WebLogic is up:  \Oracle\Middleware\user_projects\domains\EPMSystem\bin\startWebLogic.cmd
        Shut down WebLogic:     \Oracle\Middleware\user_projects\domains\EPMSystem\bin\stopWebLogic.cmd
        Shut down EPM:          \Oracle\Middleware\user_projects\epmsystem1\bin\stop.bat

    Now you should be able to troubleshoot more successfully with the EM tool.

    Read the article

  • EPM troubleshooting Utilities

    - by THE
    (in via Maurice) "Are you keeping up to date with the latest troubleshooting utilities introduced with EPM 11.1.2.2? These are typically not described in product documentation, so you might miss references to them. The following five utilities may be run from the command line.

    (1) Deployment Report was introduced with EPM 11.1.2.2 (11 April 2012). It details logical web addresses, web servers, application ports, database connections, user directories, database repositories configured for the EPM system, data directories used by EPM system products, instance directories, FMW homes, deployment history, et cetera. It also helps to keep you honest about whether you made changes to the system and at what times! Download Shared Services patch 13530721 to get the backported functionality in EPM 11.1.2.1. Run it from:

        /Oracle/Middleware/user_projects/epmsystem1/bin/epmsys_registry.sh report deployment    (on Unix/Linux)
        \Oracle\Middleware\user_projects\epmsystem1\bin\epmsys_registry.bat report deployment   (on Microsoft Windows)

    The output is saved under \Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\deployment_report.html.

    (2) Log Analysis has received more "press". It was released with EPM 11.1.2.3 and helps the user to slice and dice EPM logs. It has many parameters, which are documented when it is run without parameters, when it is run with the -h parameter, or in the 'Readme' file. It has also been released as a standalone utility for EPM 11.1.2.3 and earlier versions. (Sign in to My Oracle Support, click the 'Patches & Updates' tab, enter the patch number 17425397, and click the Search button. Download the appropriate platform-specific zip file, unzip, and read the 'Readme' file. Note that you must provide a proper value for the JAVA_HOME environment variable [a pointer to the parent directory of the Java /bin subdirectory] in the loganalysis.bat | .sh file, and use the -d parameter when running standalone.) Run it from:

        /Oracle/Middleware/user_projects/epmsystem1/bin/loganalysis.sh -h    (on Unix/Linux)
        \Oracle\Middleware\user_projects\epmsystem1\bin\loganalysis.bat -h   (on Microsoft Windows)

    The output is saved under the \Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\ subdirectory.

    (3) The Registry Cleanup command may be used (without fear!) to clean up various corruptions which can affect the Hyperion (database-based) Repository. Run it from:

        /Oracle/Middleware/user_projects/epmsystem1/bin/registry-cleanup.sh    (on Unix/Linux)
        \Oracle\Middleware\user_projects\epmsystem1\bin\registry-cleanup.bat   (on Microsoft Windows)

    The actions are described on the command line.

    (4) The Remove Instance command is only used if there are two or more instances configured on one computer and one of those should be deleted. Run it from:

        /Oracle/Middleware/user_projects/epmsystem1/bin/remove-instance.sh    (on Unix/Linux)
        \Oracle\Middleware\user_projects\epmsystem1\bin\remove-instance.bat   (on Microsoft Windows)

    (5) The Reset Configuration Tool was introduced with EPM 11.1.2.2. It nullifies Shared Services Hyperion Registry settings so that a service may be reconfigured. You may locate the values to substitute for <product> or <task> by scanning registry.html (generated by running epmsys_registry.bat | .sh). Find productNAME in the INSTANCE_TASKS_CONFIGURATION and SYSTEM_TASKS_CONFIGURATION nodes, and identify tasks by property pairs that have values of 'Configurated' or 'Pending'. Run it from:

        /Oracle/Middleware/user_projects/epmsystem1/bin/resetConfigTask.sh -product <product> -task <task>    (on Unix/Linux)
        \Oracle\Middleware\user_projects\epmsystem1\bin\resetConfigTask.bat -product <product> -task <task>   (on Microsoft Windows)"

    Thanks to Maurice for this collection of utilities.

    Read the article

  • What distinguishes a computer scientist/software engineer from regular people who learn programming languages and APIs?

    - by Amumu
    In university, we learn (and reinvent the wheel) a lot in order to truly learn the programming concepts. For example, we may learn assembly language to understand what happens inside the box and how the system operates when we execute our code. This helps us understand higher-level concepts more deeply. For example, memory management as in C is just an abstraction over manually managed memory contents and addresses. The problem is that when we go to work, productivity usually matters more. I could program my own containers, or a string class, or date/time handling (using POSIX with C system calls) to do the job, but it would take much less time to use the existing STL or Boost libraries, which abstract all of those things and are very easy to use.

    This leads to an issue: a regular person, who learns only one programming language and its related APIs, doesn't need to get through all the low-level, under-the-hood stuff. These people may eventually compete with the mainstream graduates in computer science or software engineering and call themselves programmers. At first, I didn't think it was valid to call them programmers. I used to think a real programmer needs to understand the computer deeply (though not at the electronics level). But then I changed my mind. After all, they get the job done and satisfy all the test criteria (logic, performance, security...), and in a business environment, who cares whether you're an expert who understands how the computer works or not? You may fall behind the "amateurs" if you spend too much time learning how things work inside. It is totally valid for those people to call themselves programmers.

    This confuses me. So, after all, should programming be considered a universal skill? Do the programming language and concepts matter, or do the problems we solve matter? For example, consider C/C++ versus Java and other high-level languages. One of the main reasons for C/C++ is performance, as well as access to low-level facilities. Another main reason (in my opinion) is that coding in C/C++ seems complex, so people feel good about it (not trolling anyone; just my observation and my experience. Try googling "C hacker syndrome"). Java, on the other hand, was made to simplify programming tasks and help developers concentrate on solving their problems. Following Java's rationale, if programming languages keep evolving, one day everyone may be able to map their logic directly to natural language. Everyone will be able to program. On that day, maybe the real programmers will be mathematicians, who can perform the most complex logic (including business logic and academic logic) without worrying about installing and configuring compilers and IDEs.

    What is our job as computer scientists/software engineers? To solve computer-specific problems, or to solve problems in general? For example, take a look at this exam: http://cm.baylor.edu/ICPCWiki/attach/Problem%20Resources/2010WorldFinalProblemSet.pdf. The problems require only basic knowledge of the programming language, but focus on problem solving with the language.

    In sum, what distinguishes a computer scientist/software engineer from a regular person who learns a programming language and APIs? A mathematician can be considered a programmer if he is good enough to use a programming language to implement his formulas. Could we programmers do the reverse? Probably not, for most of us, since we specialize in computers, not math. An electronics engineer who learns how to use C to program his devices can be considered a programmer. If programming languages keep being simplified, may the software engineers who implement business logic and create software one day be obsolete? (Not computer scientists, though, since many CS topics are scientific, and the science won't change, but the technology will.)

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide more streamlined insight into individual records, or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.

    Example - Order History Constancy

    Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries.

    Creating the Queries

    Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

        SELECT CustomerID, CompanyName,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers

    Creating Views

    To turn this or any query into a view, just put CREATE VIEW ... AS before it. If you want to change it, use ALTER VIEW ... AS.

    Creating Computed Columns

    If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification, or check out this article here.

    Advanced Scoring and Ranking

    One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account: if they have ordered every month since their first order, they would score 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen: we extract the count of unique months and then divide by the number of months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus. You could sort by this percentage to know where to start calling, or to find patterns describing your best customers.

    • Number of orders
    • First order date
    • Last order date
    • Percentage of months in which an order was placed since the first order

        SELECT CustomerID,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            (SELECT MAX(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
            (SELECT MIN(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
            DATEDIFF(mm,
                (SELECT MIN(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceFirstOrder,
            100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate))
                   FROM Orders
                   WHERE Orders.CustomerID = Customers.CustomerID)
                / DATEDIFF(mm,
                    (SELECT MIN(OrderDate) FROM Orders
                     WHERE Orders.CustomerID = Customers.CustomerID),
                    GETDATE()) AS OrderPercent
        FROM Customers
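    Applying the "Creating Views" step above to the first query, here is a minimal sketch (the view name CustomerOrderStats is a hypothetical choice):

        -- Sketch: turning the record-level analysis query into a reusable view.
        CREATE VIEW CustomerOrderStats AS
        SELECT CustomerID, CompanyName,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
            DATEDIFF(mm,
                (SELECT MAX(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS MonthsSinceLastOrder
        FROM Customers;
        GO

        -- The computed fields can now be sorted and filtered like ordinary columns:
        SELECT TOP 20 * FROM CustomerOrderStats
        ORDER BY MonthsSinceLastOrder DESC;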

    Read the article

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
    The following is a guest blog, contributed by Bryce Kaiser, Product Manager at Blue Medora.

    When I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack - Apps to Disk - that meets your organization's every IT requirement with absolute best-of-breed solutions in every category. To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together". In almost all cases, practical realities dictate something far less than the IT utopia mentioned above. You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs. In other words, your IT needs dictate a heterogeneous IT environment. Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges:

    1) No end-to-end visibility into the enterprise-wide application deployment. Each vendor solution added to an infrastructure may bring its own tooling, creating different consoles for different vendor applications and platforms.

    2) No visibility into performance bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities, including the ability to identify cross-tier issues with databases, hung requests, slowness, memory leaks, and hardware errors/failures causing DB/MW issues.

    As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads. Enterprise Manager provides a single-pane-of-glass view into their entire datacenter. By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners such as my company, Blue Medora, as well as customers, to easily fill gaps in the ecosystem and enhance existing solutions. As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle- and partner-provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility. Currently, there are over 80 Oracle- and partner-provided plug-ins across the EM 11g and EM 12c versions. Blue Medora is one of those contributing partners, and you will find three of our solutions there, including our flagship plug-in for VMware.

    Let's look at Blue Medora's VMware plug-in as an example of what I'm trying to convey. Here is a common situation solved by true visibility into your entire stack:

    Symptoms
    • My database is bogging down, yet the database appears okay internally. Maybe it's starved for resources?
    • My OS tooling is showing everything is "OK". Something doesn't add up.

    Root cause
    • Through the VMware plug-in we can see the problem is actually at the virtualization layer.

    Solution
    • From within Enterprise Manager -- the same tool you use for all of your database tuning -- we can overlay the data of the database target, host target, and virtual machine target for a true picture of the root cause.

    Here is the console view:

    Perhaps your monitoring conditions are more specific to your environment. No worries, Enterprise Manager still has you covered. With Metric Extensions you have the "next generation" of user-defined metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise Manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter, from the database and beyond. Bryce can be contacted at [email protected]

    Read the article

  • Fast Data - Big Data's Achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves three key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement.

    Oracle Event Processing enables continuous queries to filter the Big Data fire hose, enables intelligent chained events for real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time, so SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream.

    Oracle Coherence is a distributed grid caching solution which is used to provide very low-latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access. It also has some special capabilities for deploying remote behavioral execution for "near data" processing.

    The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low-latency reads. For example, large sensor networks may generate data that needs to be captured while analysts simultaneously extract the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, while also leveraging that data via holistic MapReduce operations on your Hadoop cluster to do segmented site analysis. NoSQL thus plays a critical role in Big Data capture and enrichment while simultaneously providing a low-latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture, and a NoSQL cluster can easily be deployed to provide essential services in industry-specific Fast Data solutions, such as a location-based personalization service.

    The question then becomes: how do these things work together to deliver an end-to-end Fast Data solution? The answer is that while different applications will exhibit unique requirements that may drive the need for one or another of these technologies, when it comes to Big Data you may need to use them together. You may have the need for the memory latencies of the Coherence cache, but simply have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference on how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream processing side, it is similar to the Coherence case. As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to keep the real-time loop performing while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.
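    As an illustration of the key-value ingest pattern described above, here is a minimal sketch against the Oracle NoSQL Database Java driver (the store name, helper host, and key layout are hypothetical; check the documentation of your release for the exact signatures):

        import oracle.kv.KVStore;
        import oracle.kv.KVStoreConfig;
        import oracle.kv.KVStoreFactory;
        import oracle.kv.Key;
        import oracle.kv.Value;
        import oracle.kv.ValueVersion;
        import java.util.Arrays;

        public class SensorIngest {
            public static void main(String[] args) {
                // Hypothetical store name and helper host -- adjust to your deployment.
                KVStore store = KVStoreFactory.getStore(
                        new KVStoreConfig("kvstore", "nosqlhost:5000"));

                // Major path identifies the sensor; minor path the reading time.
                Key key = Key.createKey(
                        Arrays.asList("sensor", "42"),
                        Arrays.asList("2013-10-01T12:00:00"));

                // Ingest a simple key-value pair (the reading, serialized as bytes).
                store.put(key, Value.createValue("23.7C".getBytes()));

                // Low-latency read-back of the same record.
                ValueVersion vv = store.get(key);
                System.out.println(new String(vv.getValue().getValue()));

                store.close();
            }
        }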

    Read the article

  • Incentivizing Work with Development Teams

    - by MarkPearl
    Recently I saw someone on Twitter asking about incentives and whether anyone had past experience with incentivizing work. I promised to respond with some of the experiences I have had in the past, so here goes...

    **Disclaimer** - these are my experiences with incentives, generally in software development - in some other industries this may not be applicable. This is also my thinking at this point in time; with more experience my opinion may change.

    Incentivize at the level that you want people to group at

    If you want to promote a team mentality, incentivize teams. If you want to promote an individual mentality, incentivize individuals. There is nothing worse than mixing this up. Some organizations put a lot of effort into establishing teams and team mentalities but reward individuals. This counteracts the resources they have put towards establishing a team mentality. In the software projects that I work with, we want to promote cross-functional teams that collaborate. Personally, if I were on a team and knew that there was an opportunity to work on a critical component of the system, and that by doing so I would get a bigger bonus, then I would be hesitant to include other people in solving that problem. Thus, I would hinder the team's efforts to be cross-functional and reduce collaboration.

    Does that mean everyone in the team should get an even share of an incentive? In most situations I would say yes, even though this may feel counter-intuitive. I have heard the argument that "if person X contributed more than person Y, then they should be rewarded more". This may sound controversial, but I would rather treat people according to how I would like them to perform, not where they currently are. To add to this approach, if someone is freeloading, you can bet your bottom dollar that the team is going to make this a lot more transparent if they feel that individual is going to be rewarded at the same level as everyone else.

    Bad incentives promote destructive work

    If you are going to incentivize people, pick your incentives very carefully. I once had an experience with a salesperson who was told they would get a bonus provided they met an ordering target with a particular supplier. What did this person do? They sold everything at cost for the next month or so. They reached the goal, but the company didn't gain anything from it. It was a bad incentive. Expect the same with development teams: if you incentivize zero bug levels, you will get zero code committed to the solution. If you incentivize lines of code, you will get many, many lines of bad code.

    Is there such a thing as a good incentive?

    Monetary-wise, I am not sure there is. I would much rather encourage organizations to pay their people what they are worth upfront. I would also advise against paying money to teams as an incentive, or even as a bonus or reward for reaching a milestone. Rather reward a team that reaches a milestone with a breakaway that promotes team building than with more money. I would also advise against making the incentive the reason to reach the milestone. If this becomes the norm, people begin to do their job only if there is an incentive at the end of the line. This is not a behaviour one wants to encourage. If the team or individual is in the right mind-set, they should not work any harder than they do right now with normal pay.

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:

    • Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.

    • Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.

    • Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization

    In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity

    The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher-value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skill set of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics

    In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks

    Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers: in a cluster with 500 members, that means each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    • Announcing the arrival or departure of a member
    • Updating partition assignment maps across the cluster
    • Creating or destroying a NamedCache
    • Invalidating a cache entry from a large number of client-side near caches
    • Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    • Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and they also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical.

    For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster. Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly, since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added.

    Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.
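    For reference, a minimal sketch of what a WKA configuration looks like in a Coherence operational override file (the host names and ports are hypothetical; consult the Coherence documentation for the authoritative schema):

        <!-- tangosol-coherence-override.xml: sketch of a well-known-addresses
             setup. Hosts and ports are hypothetical examples. -->
        <coherence>
          <cluster-config>
            <unicast-listener>
              <well-known-addresses>
                <socket-address id="1">
                  <address>wka-host1.example.com</address>
                  <port>8088</port>
                </socket-address>
                <socket-address id="2">
                  <address>wka-host2.example.com</address>
                  <port>8088</port>
                </socket-address>
              </well-known-addresses>
            </unicast-listener>
          </cluster-config>
        </coherence>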

    Read the article

  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics. The first is that average cardinality has the greatest impact on query optimization, and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these are statistics that are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible. Or is it?

    Stability

    No application administrator wants to get the call at 9am that their application users cannot get their work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not be just slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case. The database optimizer should make better decisions with more up-to-date data. Better statistics may give an incremental performance benefit. However, this benefit must be balanced against the potential cost of system downtime. It is stability that we ultimately desire, not absolute optimal performance. We do want the benefit of more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology for managing database statistics for the P6 database.

    1. No automatic re-gathering. A daily, weekly, or other interval of statistics gathering is unlikely to be beneficial. Quite the opposite: it is more likely to cause problems.

    2. Smart re-gathering. The time to collect statistics is when things have changed significantly. For a new installation of P6, that happens more often, because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week to week. There are times to collect statistics:
       • New releases of the application
       • Changes in the underlying hardware or software versions (e.g. a new Oracle RDBMS version)
       • When additional user groups are added; the new groups may use the software in significantly different ways
       • After significant changes in the data; this may be monthly, quarterly or yearly

    3. Always test. If you take away one thing from this post, it should be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire, provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions.

    4. Have a way out. Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code, alongside the source code of both the application and the RDBMS. It would be foolish to change to new code without a way to get back to the previous version. (See the sketch below for one way to do this.)

    In the final post, I will talk about the actual script I created for the P6 PMDB and a possible future direction for managing query performance.
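    As a sketch of the "way out" in point 4 (the schema and statistics-table names are hypothetical; DBMS_STATS is the standard Oracle package for this):

        -- Sketch: snapshot the current optimizer statistics before re-gathering,
        -- so you can roll back if query plans regress.
        BEGIN
          -- Create a holding table for the current statistics.
          DBMS_STATS.CREATE_STAT_TABLE(ownname => 'PMDB', stattab => 'P6_STATS_BACKUP');

          -- Export the current statistics into the holding table.
          DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'PMDB', stattab => 'P6_STATS_BACKUP');

          -- Re-gather statistics (the risky step).
          DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'PMDB');
        END;
        /

        -- If performance regresses, restore the saved statistics:
        BEGIN
          DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'PMDB', stattab => 'P6_STATS_BACKUP');
        END;
        /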

    Read the article

  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    Recently I was tasked with adding support for versioning of hardware packet specifications to one of our libraries. First, a bit of information about the project.

    We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons and a 2- or 4-digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code and DD is the device ID. These packet specs obviously differ from one command to the next, and the different firmware versions we support have different specifications. For example, on version 1, an action code of 14 may have a spec of {SOH}AADDTEXT{ETX}, which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds an extended byte (or bytes) onto the end of the packet, like this: {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed-width for this example. We have now added a new field onto the end which could be used, say, to specify the color or flash rate of the text/buttons.

    Currently this Java library only supports one version of the commands, the latest. In our hardware library we have a class for each command, say a DisplayTextArgs.java. That class has fields for the device ID, the text, and the extended byte. The command class exposes a method which generates the packet string ("{SOH}AADDTEXTE{ETX}") using the values from the class. In practice we create the args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN.

    Some of our other command specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears the text of all of the modules behind a specific controller device. We may overload this packet to have optional fields with multiple meanings, like {SOH}AAOC{ETX}, where OC is literal text which tells the controller to clear text only on a specific module type and to leave the others alone; or the spec could also have an optional format of {SOH}AADD{ETX} to clear the text off a specific device. Currently, in the method which generates the packet string, we evaluate fields on the args class to determine which spec to use when formatting the packet. For this example, it is along the lines of:

        if m_DeviceID != null then use {SOH}AADD{ETX}
        else if m_ClearOCs == true then use {SOH}AAOC{ETX}
        else use {SOH}AA{ETX}

    I had considered using XML or a database to store String.format format strings, linked to firmware version numbers in some table. We would load them up at startup and pass in the version number of the firmware the hardware is currently running (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic way in which we select which version of a command to use. I then considered using a rules engine to build expressions which could be interpreted at runtime to evaluate the args class's state and, from that, select the appropriate format string to use, but my brief look at rules engines for Java scared me away with their complexity. While it seems like it might be a viable solution, it seems overly complex.

    So this is why I am here. I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the args classes, but if the trade-off were good enough, I might be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices / design methodologies / APIs or other resources which I could use to accomplish the following (one possible shape is sketched after this list):

    • Logic to determine which set of commands to use for a given firmware version
    • Of those commands, which version of each command to use (based on the args class's state)
    • Keeping the rules logic decoupled from the application, so as to avoid needing releases for every firmware version
    • Being simple enough that I don't need weeks of study and trial and error to implement effectively.
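    One possible shape, sketched under stated assumptions (all class, field, and rule names below are hypothetical illustrations, not from the original question): keep a per-firmware-version list of specs, each pairing a format string with a predicate over the args object, and let the first matching predicate pick the packet layout. The per-version lists could then be loaded from an external file to keep the rules decoupled from the application.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Predicate;

        // A spec pairs a runtime-state rule with a String.format template.
        class PacketSpec<A> {
            final Predicate<A> applies;   // e.g. "device ID is set"
            final String format;          // packet template

            PacketSpec(Predicate<A> applies, String format) {
                this.applies = applies;
                this.format = format;
            }
        }

        class ClearTextArgs {
            String deviceId;      // null means "all devices"
            boolean clearOCs;     // clear only a specific module type
        }

        public class PacketFormatter {
            public static void main(String[] args) {
                // Specs for firmware version 1, in priority order; first match wins.
                // In a real system these would be loaded per firmware version from
                // an external file, so new versions don't require a release.
                List<PacketSpec<ClearTextArgs>> v1 = new ArrayList<>();
                v1.add(new PacketSpec<>(a -> a.deviceId != null, "\u0001%s%s\u0003")); // {SOH}AADD{ETX}
                v1.add(new PacketSpec<>(a -> a.clearOCs,         "\u0001%sOC\u0003")); // {SOH}AAOC{ETX}
                v1.add(new PacketSpec<>(a -> true,               "\u0001%s\u0003"));   // {SOH}AA{ETX}

                ClearTextArgs cmd = new ClearTextArgs();
                cmd.deviceId = "07";

                String actionCode = "14";
                for (PacketSpec<ClearTextArgs> spec : v1) {
                    if (spec.applies.test(cmd)) {
                        // Extra format arguments are ignored by shorter templates.
                        System.out.println(String.format(spec.format, actionCode, cmd.deviceId));
                        break; // first matching rule determines the packet layout
                    }
                }
            }
        }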

    Read the article

  • Moq: how to correctly mock read-only or set-only properties

    - by Chris Marisic
    What is the correct way of dealing with interfaces that expose only read-only or set-only properties with Moq? Previously I've added the other accessor, but this has bled into my domain too far, with random throw new NotImplementedException() statements throughout. I just want to do something simple like mock.VerifySet(view => view.SetOnlyValue, Times.Never()); but this is a compile error: The property 'SetOnlyValue' has no getter.
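
    For what it's worth, a sketch of the assignment form (assuming SetOnlyValue is a string; adjust the type to match): Moq's VerifySet accepts a lambda that performs an assignment, which compiles against a setter-only property, so no getter is required:

        // The lambda assigns rather than reads, so only the setter is needed;
        // It.IsAny<string>() matches any value that might have been assigned.
        mock.VerifySet(view => view.SetOnlyValue = It.IsAny<string>(), Times.Never());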

    Read the article

  • SVN Error 403 Forbidden

    - by Chris
    I can't figure this out. I try to import a new project into an SVN repository from NetBeans and get 403 Forbidden. I just set up svn on my server box today. I can get to it through a browser just fine, though it's empty as I haven't imported my project yet. Apache's path for html files is /var/www; I set up the svn repo in /var/svn. This is the structure of /var/svn:

        [root@localhost svn]# ls -lR /var/svn
        /var/svn:
        total 4
        drwxrwxrwx 7 apache apache 4096 2010-03-26 10:18 repo

        /var/svn/repo:
        total 36
        drwxrwxrwx 2 apache apache 4096 2010-03-26 09:47 conf
        drwxrwxrwx 3 apache apache 4096 2010-03-26 10:18 dav
        drwxrwsrwx 6 apache apache 4096 2010-03-26 11:19 db
        -rwxrwxrwx 1 apache apache    2 2010-03-26 09:47 format
        drwxrwxrwx 2 apache apache 4096 2010-03-26 09:47 hooks
        drwxrwxrwx 2 apache apache 4096 2010-03-26 09:47 locks
        -rwxrwxrwx 1 apache apache  229 2010-03-26 09:47 README.txt
        -rwxrwxrwx 1 apache apache   15 2010-03-26 09:47 svnauth
        -rwxrwxrwx 1 apache apache   43 2010-03-26 09:48 svnpass

        /var/svn/repo/conf:
        total 12
        -rwxrwxrwx 1 apache apache 1080 2010-03-26 09:47 authz
        -rwxrwxrwx 1 apache apache  309 2010-03-26 09:47 passwd
        -rwxrwxrwx 1 apache apache 2279 2010-03-26 09:47 svnserve.conf

        /var/svn/repo/dav:
        total 4
        drwxrwxrwx 2 apache apache 4096 2010-03-26 11:19 activities.d

        /var/svn/repo/dav/activities.d:
        total 0

        /var/svn/repo/db:
        total 48
        -rwxrwxrwx 1 apache apache    2 2010-03-26 09:47 current
        -rwxrwxrwx 1 apache apache   22 2010-03-26 09:47 format
        -rwxrwxrwx 1 apache apache 1920 2010-03-26 09:47 fsfs.conf
        -rwxrwxrwx 1 apache apache    5 2010-03-26 09:47 fs-type
        -rwxrwxrwx 1 apache apache    2 2010-03-26 09:47 min-unpacked-rev
        -rwxrwxrwx 1 apache apache 4096 2010-03-26 09:47 rep-cache.db
        drwxrwsrwx 3 apache apache 4096 2010-03-26 09:47 revprops
        drwxrwsrwx 3 apache apache 4096 2010-03-26 09:47 revs
        drwxrwsrwx 2 apache apache 4096 2010-03-26 11:19 transactions
        -rwxrwxrwx 1 apache apache    2 2010-03-26 11:19 txn-current
        -rwxrwxrwx 1 apache apache    0 2010-03-26 09:47 txn-current-lock
        drwxrwsrwx 2 apache apache 4096 2010-03-26 11:19 txn-protorevs
        -rwxrwxrwx 1 apache apache   37 2010-03-26 09:47 uuid
        -rwxrwxrwx 1 apache apache    0 2010-03-26 09:47 write-lock

        /var/svn/repo/db/revprops:
        total 4
        drwxrwsrwx 2 apache apache 4096 2010-03-26 09:47 0

        /var/svn/repo/db/revprops/0:
        total 4
        -rwxrwxrwx 1 apache apache 50 2010-03-26 09:47 0

        /var/svn/repo/db/revs:
        total 4
        drwxrwsrwx 2 apache apache 4096 2010-03-26 09:47 0

        /var/svn/repo/db/revs/0:
        total 4
        -rwxrwxrwx 1 apache apache 115 2010-03-26 09:47 0

        /var/svn/repo/db/transactions:
        total 0

        /var/svn/repo/db/txn-protorevs:
        total 0

        /var/svn/repo/hooks:
        total 36
        -rwxrwxrwx 1 apache apache 1955 2010-03-26 09:47 post-commit.tmpl
        -rwxrwxrwx 1 apache apache 1638 2010-03-26 09:47 post-lock.tmpl
        -rwxrwxrwx 1 apache apache 2267 2010-03-26 09:47 post-revprop-change.tmpl
        -rwxrwxrwx 1 apache apache 1567 2010-03-26 09:47 post-unlock.tmpl
        -rwxrwxrwx 1 apache apache 3404 2010-03-26 09:47 pre-commit.tmpl
        -rwxrwxrwx 1 apache apache 2410 2010-03-26 09:47 pre-lock.tmpl
        -rwxrwxrwx 1 apache apache 2764 2010-03-26 09:47 pre-revprop-change.tmpl
        -rwxrwxrwx 1 apache apache 2100 2010-03-26 09:47 pre-unlock.tmpl
        -rwxrwxrwx 1 apache apache 2758 2010-03-26 09:47 start-commit.tmpl

        /var/svn/repo/locks:
        total 8
        -rwxrwxrwx 1 apache apache 139 2010-03-26 09:47 db.lock
        -rwxrwxrwx 1 apache apache 139 2010-03-26 09:47 db-logs.lock

    I've got httpd.conf loading svn.conf, which contains:

        <Location /svn>
            DAV on
            DAV svn
            #SVNParentPath /var/svn
            SVNPath /var/svn/repo
            Authtype Basic
            AuthName "Subversion"
            AuthUserFile /var/svn/repo/svnpass
            Require valid-user
            AuthzSVNAccessFile /var/svn/repo/svnauth
        </Location>

    The full error message is:

        org.tigris.subversion.javahl.ClientException: RA layer request failed
        Server sent unexpected return value (403 Forbidden) in response to CHECKOUT request for '/svn/!svn/bln/0'

    Sorry for the incredibly long post, but I thought more info would be better than less. I've been fiddling with this problem for a long time now.

    Read the article

  • Should I worry about DDMS console log messages "Can't bind to local nnnn for debugger"?

    - by Chris
    I'm new to Android programming (and the Eclipse IDE and the Android emulator). I've got Hello World and some of Notepad working, but I'm still constantly getting quite a few DDMS console log messages (shown below) about not being able to bind local ports for the debugger. Is this a problem? Can I get rid of these messages somehow?

        [2010-05-29 21:03:16 - ddms]Can't bind to local 8601 for debugger
        [2010-05-29 21:05:26 - Device]Failed to delete temporary package: device (emulator-5556) request rejected: device not found
        [2010-05-29 21:06:47 - ddms]Can't bind to local 8600 for debugger
        [2010-05-29 21:07:05 - ddms]Can't bind to local 8601 for debugger
        [2010-05-29 21:07:05 - ddms]Can't bind to local 8602 for debugger
        [2010-05-29 21:07:06 - ddms]Can't bind to local 8604 for debugger
        [2010-05-29 21:07:07 - ddms]Can't bind to local 8609 for debugger
        [2010-05-29 21:07:17 - ddms]Can't bind to local 8610 for debugger
        [2010-05-29 21:07:20 - ddms]Can't bind to local 8613 for debugger
        [2010-05-29 21:08:20 - ddms]Can't bind to local 8616 for debugger
        [2010-05-29 21:08:20 - ddms]Can't bind to local 8618 for debugger
        [2010-05-29 21:08:20 - ddms]Can't bind to local 8620 for debugger
        [2010-05-29 21:08:20 - ddms]Can't bind to local 8627 for debugger
        [2010-05-29 21:08:21 - ddms]Can't bind to local 8632 for debugger
        [2010-05-29 21:08:23 - ddms]Can't bind to local 8636 for debugger
        [2010-05-29 21:08:23 - ddms]Can't bind to local 8640 for debugger
        [2010-05-29 21:08:23 - ddms]Can't bind to local 8643 for debugger

    Read the article

  • Visual Studio 2012 crashes every time I try to debug, with error CLR20r3

    - by Chris
    Every time I try to debug one of my apps I get the error message below. Anyone have any ideas? I tried running Visual Studio in safe mode, but I get the same thing. I also tried repairing the install and completely reinstalling it, with no luck :(. The full problem signature is:

        Problem Event Name: CLR20r3
        Problem Signature 01: devenv.exe
        Problem Signature 02: 11.0.50727.1
        Problem Signature 03: 5011ecaa
        Problem Signature 04: Microsoft.IntelliTrace.Package.11.0.0
        Problem Signature 05: 11.0.50727.1
        Problem Signature 06: 5011dad8
        Problem Signature 07: 311
        Problem Signature 08: 1f1
        Problem Signature 09: System.AccessViolationException
        OS Version: 6.1.7601.2.1.0.256.48
        Locale ID: 1033
        Additional Information 1: 0a9e
        Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
        Additional Information 3: 0a9e
        Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    Read the article

  • Reg Ex parsing error - too many )'s

    - by Chris Herring
    Using regular expressions in .NET, with the pattern ^%[^%]+%\Z and the string "few)few%", I get the error System.ArgumentException: parsing "few)few%" - Too many )'s.

        Dim match As System.Text.RegularExpressions.Match = System.Text.RegularExpressions.Regex.Match("^%[^%]+%\Z", "few)few%")

    What would the issue be? Do I need to escape brackets in any input expression to the regex? (I'm trying to determine whether the string has the wildcard % at the beginning and end of the string, but not elsewhere in it.)
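
    One thing to check, going only by the snippet as posted: Regex.Match takes the input string first and the pattern second, so this call passes "few)few%" as the pattern, and its unbalanced ) is exactly what produces the "Too many )'s" error. With the arguments in the documented order it would be:

        ' Match(input, pattern): the string under test comes first, the pattern second.
        Dim match As System.Text.RegularExpressions.Match = System.Text.RegularExpressions.Regex.Match("few)few%", "^%[^%]+%\Z")

        ' When splicing arbitrary text into a pattern, escape metacharacters first.
        Dim escaped As String = System.Text.RegularExpressions.Regex.Escape("few)few%")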

    Read the article

  • How to migrate and deploy a RESTful WCF web service from IIS 6.0 to IIS 7.0

    - by Chris Lee
    Hi all, I have a RESTful WCF web service. It works fine under VS and IIS 6.0. Now I want to move it to another workstation with IIS 7.0 on it. I copied all the deployment files from IIS 6 to IIS 7, but the service cannot be accessed by other clients; only requests from its own machine succeed. I don't know what's wrong; I've tried enabling anonymous access. Please help me. Thanks.

    Read the article

  • Communication between an Outlook add-in and a program automating Outlook

    - by Chris Kinsman
    I have an application that uses the automation interfaces to Microsoft Outlook to create a mail message and then, after it is sent, save an archive of that email message in my application. I am hitting issues with a number of third-party encryption add-ins, because by the time the Sent event fires, what is passed to me is the already-encrypted message. I would like them either to fire the event directly, without sending a message, to pass me the unencrypted version, or to fire an event to me that passes the unencrypted message in a loosely coupled fashion. I can't seem to find a way to define new events on the Outlook application object, so I am looking for other ideas. Thanks!

    Read the article

  • 'Microsoft.ACE.OLEDB.12.0': x64 SQL Server and x86 Office?

    - by Chris
    The error: OLE DB provider 'Microsoft.ACE.OLEDB.12.0' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode. The answers I'm seeing point to a conflict between 64-bit SQL Server and 32-bit Office. Is there a way to run an OPENROWSET against Excel from SQL Server?

        insert into dbo.FiscalCalendar
        select *
        from openrowset('Microsoft.ACE.OLEDB.12.0',
            'Excel 12.0 Xml;Database=C:\Users\v-chrha\Desktop\fy11.xlsx;',
            'Select * from [Sheet1]')
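
    For context beyond the question, the commonly cited remedies for this exact message (hedged: sp_MSset_oledb_prop is an undocumented procedure, so treat this as a sketch): a 64-bit SQL Server can only load a 64-bit ACE provider, so the 64-bit Access Database Engine redistributable must be installed regardless of Office's bitness, and the provider is then usually reconfigured like this:

        -- Let the ACE provider run inside the SQL Server process.
        EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'AllowInProcess', 1;
        EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0', N'DynamicParameters', 1;

        -- OPENROWSET against ad hoc sources also requires this server option.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'ad hoc distributed queries', 1;
        RECONFIGURE;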

    Read the article

  • Add a collection or array to a WPF resource dictionary

    - by Chris Cap
    I've searched high and low and can't find an answer to this. I have two questions. First: how do you create an array or collection in XAML? I've got an array I want to stick in there and bind to a combo box. My first idea was to put an ItemsControl in a resource dictionary, but the ItemsSource of a combo box expects IEnumerable, so that didn't work. Here's what I've tried in my resource dictionary, and neither works:

        <ItemsControl x:Key="stateList">
            <sys:String>AL</sys:String>
            <sys:String>CA</sys:String>
            <sys:String>CN</sys:String>
        </ItemsControl>

        <ItemsControl x:Key="stateList2">
            <ComboBoxItem>AL</ComboBoxItem>
            <ComboBoxItem>CA</ComboBoxItem>
            <ComboBoxItem>CN</ComboBoxItem>
        </ItemsControl>

    and here's how I bind to it:

        <ComboBox SelectedValue="{Binding Path=State}"
                  ItemsSource="{Binding Source={StaticResource stateList2}}" />

    EDIT - UPDATED: I got the first part to work this way:

        <col:ArrayList x:Key="stateList3">
            <sys:String>AL</sys:String>
            <sys:String>CA</sys:String>
            <sys:String>CN</sys:String>
        </col:ArrayList>

    However, I'd rather not use an ArrayList; I'd like to use a generic list, so if anyone knows how, please let me know. EDIT - UPDATE: I guess XAML has very limited support for generics, so maybe an ArrayList is the best I can do for now, but I would still like help on the second question if anyone has an answer. Second: I've tried referencing a merged resource dictionary in my XAML and had problems, because under Window.Resources I had more than just the dictionary, so it required me to add x:Key. Once I add the key, the system can no longer find the items in my resource dictionary. I had to move the merged dictionary to Grid.Resources instead. Ideally I'd like to reference the merged dictionary in App.xaml, but I have the same problem. Here's some sample code. This first part required an x:Key to compile, because I have a converter and it complained that every item must have a key if there is more than one:

        <UserControl.Resources>
            <win:BooleanToVisibilityConverter x:Key="VisibilityConverter" />
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="/ResourcesD.xaml" />
                </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
        </UserControl.Resources>

    I had to change it to this:

        <UI:BaseStep.Resources>
            <win:BooleanToVisibilityConverter x:Key="VisibilityConverter" />
        </UI:BaseStep.Resources>
        <Grid>
            <Grid.Resources>
                <ResourceDictionary>
                    <ResourceDictionary.MergedDictionaries>
                        <ResourceDictionary Source="/ResourcesD.xaml" />
                    </ResourceDictionary.MergedDictionaries>
                </ResourceDictionary>
            </Grid.Resources>
        </Grid>

    Thank you.
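
    On the first question, one more option worth sketching (assuming the usual sys mapping to clr-namespace:System;assembly=mscorlib): XAML's built-in x:Array markup extension yields a plain String[], which is IEnumerable and therefore binds to ComboBox.ItemsSource directly, with no ArrayList needed:

        <!-- In the resource dictionary -->
        <x:Array x:Key="stateList4" Type="sys:String">
            <sys:String>AL</sys:String>
            <sys:String>CA</sys:String>
            <sys:String>CN</sys:String>
        </x:Array>

        <!-- Consumed like the other resources -->
        <ComboBox SelectedValue="{Binding Path=State}"
                  ItemsSource="{StaticResource stateList4}" />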

    Read the article

  • Using Response.Redirect with jQuery Thickbox

    - by Chris Stewart
    I'm using jQuery Thickbox to display an iframe (upload.aspx) that allows a user to upload a file. In the code-behind for upload.aspx I finish by calling Response.Redirect("blah.aspx"); the page I redirect to is dynamic, based on the results of the upload process. When this redirect happens, it happens inside the Thickbox and not in the parent window, as I'd like it to. Here's the calling ASP.NET page (home.aspx):

        <a href="upload.aspx?placeValuesBeforeTB_=savedValues&TB_iframe=true&height=300&width=500&modal=true"
           class="thickbox">Add New</a>

    And here's the submit button inside the upload.aspx page:

        <asp:Button ID="btnUpload" runat="server" Text="Upload"
                    OnClick="btnUpload_Click" OnClientClick="self.parent.tb_remove();" />

    This is designed to close the modal window and send control to the code-behind to perform the file upload, processing, etc. Has anyone experienced this before? How would I go about sending a redirect to the parent window?
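
    A common workaround for the iframe case (a sketch; the target URL and script key are placeholders): since Response.Redirect only navigates the iframe itself, register a startup script that points the parent window at the destination instead:

        // In btnUpload_Click, once the upload has been processed:
        string targetUrl = ResolveUrl("~/blah.aspx"); // chosen dynamically at runtime
        string script = "window.parent.location.href = '" + targetUrl + "';";
        ClientScript.RegisterStartupScript(GetType(), "redirectParent", script, true);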

    Read the article
