Search Results

Search found 3171 results on 127 pages for 'manual manuel'.

Page 83 of 127

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It had been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full-blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or simply to push an application to its limits on the hardware it’s running on, I shouldn’t have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be set up quickly and run on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the whys and hows, and some background, continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I could do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what’s available in Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, all of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions ended up being very pricey as well – presumably because the bandwidth required to test over the open Web can be enormous.
    When I asked around on Twitter about what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher-end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First, it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high-end Visual Studio SKUs, and those are mucho dinero ($$$) – just for load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes, it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember the early days of Web testing when Microsoft had WAST (the Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as have so many Microsoft tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable, and now it simply doesn’t work anymore on higher-end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop-dead simple to create load tests. It’s super easy to capture sessions either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability, and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content, test that URL and its results, and easily save one or more of those URLs. A few weeks back I made a walk-through video that goes over most of the features of WebSurge in some detail. Note that the UI has changed slightly since then, with a few improvements.
    Most notably the test results screen has been updated recently to a different layout that provides more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications using the built-in Capture tool. With this tool you can capture anything HTTP – SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge Session capture tool. Alternatively you can use Fiddler itself, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that perhaps cause a problem. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those service call requests. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, Google Analytics or social links, you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file – these let you specify filter strings that, if contained in a URL, cause the request to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, CSS and script files. Often you’re not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note that in order to capture SSL requests you’ll have to install Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily, as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line; a hand-written sketch of what such a session file might look like is shown below.
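    To make that description concrete, here is a small, hypothetical sketch of a two-URL session file in this style. The exact separator text and header set shown here are illustrative assumptions rather than output copied from WebSurge, but the shape – full URL on the first line, plain HTTP headers, optional content, and a separator line between requests – follows the description in the post:

        GET http://west-wind.com/websurge/ HTTP/1.1
        Accept: text/html
        User-Agent: WebSurge

        ----------------------------------------------------------------

        POST http://west-wind.com/websurge/api/orders HTTP/1.1
        Content-Type: application/json

        {"orderId": 1}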
    The headers are standard HTTP headers, except that the full URL, instead of just the domain-relative path, is stored as part of the first HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand or under program control by writing out a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that the first line contains the full URL instead of the domain-relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format is the same one Fiddler produces for exports, so it’s easy to exchange or view data in either Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well: again – it’s just plain HTTP headers, so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally, I’ve also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture one or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually a handy tool for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also override cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie, which replaces the cookie for all requests with the cookie value specified there. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing cookie with a new one that is currently valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs when tests start, where all threads would otherwise run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: by default there’s a live view of requests displayed in a console-like window. At the bottom of the window there’s a running summary that displays where you’re at in the test, how many requests have been processed, and the current requests-per-second count for all requests.
    Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice for seeing that something is happening, and also gives you a slight idea of what’s happening with actual requests, once a lot of requests are processed this UI updating adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1,000 requests a second there’s not much to see anyway, as requests roll by far too fast to read individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second, so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: on the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed, or you can open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by successful and failed requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data. This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Here’s what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine, and this engine is plugged into the front-end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe, for example) or a full Session file. By default WebSurgeCli shows progress every second, showing the total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and show only the results. The command line interface can be useful for build integration – checking for failures, for example, or verifying that a specific requests-per-second count is reached. It’s also nice to use this as a quick-and-dirty URL test facility, similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI.
    Current Status Currently West Wind WebSurge is still in beta. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low-cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price of $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links: West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Dell Vostro 3560 bluetooth doesn't work

    - by Shein
    I installed the wireless driver using these instructions: How do I install BCM43142 wireless drivers for Dell Vostro 3460/3560, and I have WiFi working. No problems there. But unfortunately Bluetooth doesn't work. The Ubuntu bar shows the Bluetooth icon and I can turn Bluetooth on/off, but I can't discover any devices. And I can't find my laptop when I turn visibility on. So, obviously, Bluetooth doesn't work. I couldn't find any reports that Bluetooth can actually work with this adapter in Ubuntu. So, my question is: is there anyone with a BCM43142 adapter who has Bluetooth working? Thank you in advance. PS: Ubuntu 12.10 x64. Update: After some fiddling around with different drivers from different sources I managed to get Bluetooth working. Not flawlessly, but at least I can pair a device. Bluetooth started working after installing this package: bt-bcm43142-onereic_0.0+20111116somerville2_amd64.deb. I originally found this package on the Ubuntu disk which came with the laptop. What this package does is install a firmware loader and the firmware itself. This firmware is needed to get Bluetooth working. Even with this package Bluetooth sometimes doesn't work, but manually loading the firmware helps: brcm_patchram_plus_usb --patchram /lib/firmware/BCM43142A0_001.001.011.0028.0036.hcd hci0 I also found it strange that this package writes several different IDs into /sys/bus/usb/drivers/btusb/new_id, because only one from the list matches my device ID. bcm43142.conf: install btusb /sbin/modprobe --ignore-install btusb && echo '0a5c 21d3' > /sys/bus/usb/drivers/btusb/new_id && echo '0a5c 21d7' > /sys/bus/usb/drivers/btusb/new_id && echo '0a5c 21e1' > /sys/bus/usb/drivers/btusb/new_id && echo '0a5c 21e3' > /sys/bus/usb/drivers/btusb/new_id && hciconfig hci0 up && /usr/bin/brcm_patchram_plus_usb --patchram /lib/firmware/BCM43142A0_001.001.011.0028.0036.hcd hci0 & My lsusb: ... Bus 002 Device 003: ID 0a5c:21d7 Broadcom Corp. In conclusion: Bluetooth works nowhere near as well as it does in Windows :( Once I even got a complete system crash because of the btusb module. Luckily WiFi works perfectly :)
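    If you are trying the same firmware-loading approach, a quick sanity check after running the loader is to query the adapter with the standard BlueZ command-line tools. This is only a hedged sketch, not part of the original answer; the adapter name hci0 is taken from the commands above and may differ on your machine:

        # Bring the adapter up and confirm it reports "UP RUNNING"
        sudo hciconfig hci0 up
        hciconfig -a

        # Scan for nearby discoverable devices to verify Bluetooth is actually functional
        hcitool scan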

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
    Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query.
    Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use a collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is by far the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows.
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real estate afforded by modern development environments (high-resolution monitors, dual monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyperlinked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: when the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode". Potential advantages include:
    - More source code and more documentation on the screen(s) at once
    - Ability to edit documentation independently of source code (regardless of language?)
    - Write documentation and source code in parallel without merge conflicts
    - Real-time hyperlinked documentation with superior text formatting
    - Quasi-real-time machine translation into different natural languages
    - Every line of code can be clearly linked to a task, business requirement, etc.
    - Documentation could automatically timestamp when each line of code was written (metrics)
    - Dynamic inclusion of architecture diagrams, images to explain relations, etc.
    - Single-source documentation (e.g., tag code snippets for user manual inclusion)
    Note: the documentation window can be collapsed, and the workflow for viewing or comparing source files would not be affected. How the implementation happens is a detail; the documentation could be kept at the end of the source file, split into two files by convention (filename.c, filename.c.doc), or fully database-driven. By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia), internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation), and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?

    Read the article

  • A paperless office?

    - by [email protected]
    We recently organized a Digitization event to show some of Oracle's latest products in this area. We always tend to think that in Spain we are behind with these technologies and that the market is not ready to eliminate paper. In some cases that is true, but we have also been surprised by customers who are extremely advanced in the electronic management of paper. For customers who do not already have a corporate solution deployed, our Imaging offering strikes them as complete and integrated, because it lets them digitize paper at the point closest to where it is received and then carry out the entire internal process digitally. This is the process shown in the following image: Especially in the financial sector, customers already have large infrastructures deployed (some with very sophisticated functionality that they have custom-built over recent years). In these cases, their interest is focused on two key capabilities of our products: distributed capture and intelligent OCR. When a centralized scanning infrastructure is already in place, there are several areas of improvement that can deliver greater savings in paper handling. One of them is scanning at the source, which saves on the logistics of moving and storing paper (fewer courier pouches) and speeds up the start of each process (from the moment of receipt). Being able to do this with just a web browser is very novel for customers. Not having to install any client software seems to be a requirement many customers had been demanding for a long time. In fact, we are running live demos with the customer's own scanner (we only need the Windows driver for that scanner). The result is striking, because we show how we scan with nothing but a web browser; the scanned document, with its metadata, is added to the document management system; and its approval workflow is triggered. Doing this in seconds generates a lot of interest among customers looking to speed up the handling of many of their paper-based processes. Finally, the most novel part of the offering is intelligent OCR. Some customers already have their scanning infrastructures fully operational with all of these capabilities and are looking to go a step further with intelligent recognition of as much metadata as possible. The benefit is quick, clearly quantifiable and very high. The intelligent OCR software is based on fuzzy logic and lets us define validation thresholds perfectly suited to our confidence factors. In other words, we configure the threshold so that when the software accepts a match we can be completely sure that the metadata has been recognized correctly; otherwise, the software triggers a manual validation. What happens if, for certain document types, 40%, 50%, 60% or even 70% or 80% of documents are processed 100% automatically? The savings are immense, so is the reduction in processing time, and integration with existing scanning infrastructures is very simple (just divert a few documents of a given type to Oracle Forms Recognition and evaluate the result). I encourage you to take a look at these products so that together we can make paper reduction a reality.

    Read the article

  • SQL SERVER – 4 Tips for ETL Software IDE Developers

    - by pinaldave
    In a previous blog, I introduced the notion of Semantic Types. To an end user, a seamlessly integrated semantic typing engine significantly increases the ease of use of an ETL IDE (integrated development environment, or developer studio). This led me to think about other ease-of-use issues I have encountered while building ETL applications. When I get stumped while programming, I find myself asking variations on these questions: “How do I…?” “Now what?” “Why isn’t this working?” “Why do I have to redo the work I just did?” It seems to me that a good ETL IDE will anticipate these questions and seek to answer them before they are even asked. So here are my tips to help software vendors build developer IDEs that actually make development easier. How do I…? While developing an ETL application, have you ever asked yourself: “How do I set up the connection to my SQL Server database?”, “How do I import my table definitions from Access?”, etc.? An easy answer might be “read the manual”, but sometimes product manuals are not robust or easily accessible. So, integrating robust how-to instructions directly into your ETL studio would help users get the information they need at the time they need it. Now what? IDEs in general know where you last clicked or performed an action using an input device such as a keyboard, so they should be able to reasonably predict the design context you are in and suggest the next steps accordingly. Context-sensitive suggestions based on the state of the user’s work will help users move forward in ETL application development. Why isn’t this working? Or why do I have to wait until I compile to be told about a critical design issue? If an ETL IDE is smart enough to signal to users what in their design structures is left to be completed or has been completed incorrectly, then the developer can spend much less time in the design → compile → error-correct loop. Just-in-time validation helps users detect and correct programming errors earlier in the ETL development life cycle. Why do I have to redo the work I just did? In ETL development, schemas, transformation rules, connectivity objects, etc., can be reused in various situations. Using mouse-clicks to build and manage libraries of reusable design objects implies that the application development effort should decrease over time as the library acquires more objects. I met a great company at SQL PASS that is trying to address many of these usability issues. Check them out at www.expressor-software.com. What other ease-of-use suggestions do you have for ETL software vendors? Please post your valuable comments. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL

    Read the article

  • Silverlight Cream for May 02, 2010 -- #854

    - by Dave Campbell
    In this Issue: Michael Washington, Jason Young(-2-, -3-), Phil Middlemiss, Jeremy Likness, Victor Gaudioso, Kunal Chowdhury, Antoni Dol, and Jacek Ciereszko(-2-). Shoutout: Victor Gaudioso has aggregated All of My Silverlight Video Tutorials in One Place (revised again 05.02.10) From SilverlightCream.com: Unit Testing A Silverlight 'Simplified MVVM' Modal Popup Michael Washington's latest 'Simplified MVVM' post is published at The Code Project and is on Unit Testing with MVVM. Input Localization in Silverlight without IValueConverter Jason Young sent me some links to posts I've not seen... this first one is on localization by using the Language property of the Root Visual. MVVM – The Model - Part 1 – INotifyPropertyChanged Jason Young's next archive post is the first of a series on MVVM and Silverlight 4 ... implementing a simple ViewModel base class. Silverlight, WCF, and ASP.Net Configuration Gotchas Jason Young worked at tracking down the answers to some forum questions and in the process has produced a post of 'gotchas' with using WCF in Silverlight. A Chrome and Glass Theme - Part 5 Phil Middlemiss has part 5 of his Chrome and Glass Theme tutorial up ... in this one, he's looking at the Progress Bar and Slider. Download the files and play along. Silverlight Out of Browser (OOB) Versions, Images, and Isolated Storage Jeremy Likness has a post up responding to his 3 major questions about OOB apps, and he has to code up for the sample too. New Silverlight Video Tutorial: How to Make a Slide In/Out Navigation Bar – All in Blend Victor Gaudioso's latest video tutorial is on building a Behavior for a Slide in/out Navigation bar... kinda like the menu sliders on my GlyphMap Utility... only easier! Command Binding in Silverlight 4 (Step-by-Step) Kunal Chowdhury has another post up at DotNetFunda, and this time he's talking about Command Binding in Silverlight 4 with an eye toward MVVM usage. The Silverlight PageCurl implementation Antoni Dol has a post up about doing a Page Curl effect in Silverlight. He has a manual up on the effect and full application code. How to center and scale Silverlight applications using ViewBox control Jacek Ciereszko has a couple posts up about centering and scaling your app with the ViewBox control. This first one is a code solution. Source is available, as is a Polish version. Silverlight Center And Scale Behavior Jacek Ciereszko's 2nd post, he provides a Behavior that handles the scaling and centering of the previous post. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • To 'seal' or to 'wrap': that is the question ...

    - by Simon Thorpe
    If you follow this blog you will already have a good idea of what Oracle Information Rights Management (IRM) does. By encrypting documents Oracle IRM secures and tracks all copies of those documents, everywhere they are shared, stored and used, inside and outside your firewall. Unlike earlier encryption products authorized end users can transparently use IRM-encrypted documents within standard desktop applications such as Microsoft Office, Adobe Reader, Internet Explorer, etc. without first having to manually decrypt the documents. Oracle refers to this encryption process as 'sealing', and it is thanks to the freely available Oracle IRM Desktop that end users can transparently open 'sealed' documents within desktop applications without needing to know they are encrypted and without being able to save them out in unencrypted form. So Oracle IRM provides an amazing, unprecedented capability to secure and track every copy of your most sensitive information - even enabling end user access to be revoked long after the documents have been copied to home computers or burnt to CD/DVDs. But what doesn't it do? The main limitation of Oracle IRM (and IRM products in general) is format and platform support. Oracle IRM supports by far the broadest range of desktop applications and the deepest range of application versions, compared to other IRM vendors. This is important because you don't want to exclude sensitive business processes from being 'sealed' just because either the file format is not supported or users cannot upgrade to the latest version of Microsoft Office or Adobe Reader. But even the Oracle IRM Desktop can only open 'sealed' documents on Windows and does not for example currently support CAD (although this is coming in a future release). IRM products from other vendors are much more restrictive. To address this limitation Oracle has just made available the Oracle IRM Wrapper all-format, any-platform encryption/decryption utility. It uses the same core Oracle IRM web services and classification-based rights model to manually encrypt and decrypt files of any format on any Java-capable operating system. The encryption envelope is the same, and it uses the same role- and classification-based rights as 'sealing', but before you can use 'wrapped' files you must manually decrypt them. Essentially it is old-school manual encryption/decryption using the modern classification-based rights model of Oracle IRM. So if you want to share sensitive CAD documents, ZIP archives, media files, etc. with a partner, and you already have Oracle IRM, it's time to get 'wrapping'! Please note that the Oracle IRM Wrapper is made available as a free sample application (with full source code) and is not formally supported by Oracle. However it is informally supported by its author, Martin Lambert, who also created the widely-used Oracle IRM Hot Folder automated sealing application.

    Read the article

  • SQLAuthority News – Book Review – Beginning T-SQL 2008 by Kathi Kellenberger

    - by pinaldave
    Beginning T-SQL 2008 by Kathi Kellenberger Amazon Link Detail Review: Beginning T-SQL 2008 is one of the best books on the market if you are just beginning to work with Microsoft SQL, or have a little bit of experience and need to learn more quickly. Each chapter of the book introduces a new subject, and builds upon topics covered in previous chapters.  The author of the book, Kathi Kellenberger, understands that you need to form a solid foundation of knowledge before moving on to new topics, and sets up each subject nicely.  Because the chapters move in an orderly progression, you continue to use skills you learned earlier. One of the best features of Beginning T-SQL 2008 is that each chapter has multiple examples and exercises.  Many books introduce a topic and then never go back to it.  This book gives enough examples that you will be familiar with the subject when you come across it in real life.  The exercises at the end of the chapter mean that you will be using the skills you learned – and there is no better way to cement a subject in your brain. The book also includes discussions of the common errors that programmers will come across, how to avoid them, and how to fix them if they happen.  Ms. Kellenberger understands that not only do mistakes happen, but they are bound to happen if you aren’t trained properly.  Mistakes are part of the learning process! The book begins by discussing relational theory, so that programmers will understand the way T-SQL works from the ground up.  It also walks readers through writing accurate queries, combining set-based and procedural processing, embedding logic in stored functions, and so much more. Overall, the main goal of Beginning T-SQL 2008 is to introduce novices to SQL programming, and quickly familiarize them with the basics of running the program.  The book is written with the idea that readers will not know any of the technical terms or vocabulary.  However, if you are a little more familiar with SQL and looking to become better, you will still find this book very helpful. Rating: 4.5+ Stars Summary: I cannot recommend Beginning T-SQL 2008 highly enough.  If you are going to buy any beginner’s guide to Transact-SQL, this is the one you should spend your money on.  You can save yourself a lot of time and effort later by using this very affordable manual to learn the basics, which will allow you to become an expert much faster. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology

    Read the article

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes ancillary objects, using material originally written during their development, the PSARC arc case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow the non-allocable sections to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out — customers with very large 32-bit objects who are not ready or able to make the transition to 64 bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64 bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems (32-bit code uses less memory and bus bandwidth, and on SPARC otherwise runs just as fast as 64-bit code). Complex, finely tuned code, where the original authors may no longer be available. Critical production code that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk averse and/or high scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
This is important, because ancillary objects exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a single logical object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so they have no impact on memory use or other aspects of runtime performance no matter what their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections: to reduce the size of objects in order to improve the speed at which they can be copied across wide area networks; to support fine-grained debugging of highly optimized code, which requires considerable debug data (in modern systems, the debugging data can easily be larger than the code it describes); to stay within the 4 Gbyte size limit of a 32-bit object (in very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object); and to limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option:
$ ld ...
-z ancillary[=outfile] ...
By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be specified by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and the ancillary object.
.shstrtab        The section name string table.
.symtab          The full non-dynamic symbol table.
.symtab_shndx    The symbol table extended index section associated with .symtab.
.strtab          The non-dynamic string table associated with .symtab.
.SUNW_ancillary  Contains the information required to identify the primary and ancillary objects, and to identify the object being examined.
The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either the primary or any ancillary object. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.
$ cat hello.c
#include <stdio.h>
int
main(int argc, char **argv)
{
        (void) printf("hello, world\n");
        return (0);
}
$ cc -g -zancillary hello.c
$ file a.out a.out.anc
a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
$ ./a.out
hello, world
The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
Index  Section Name     Type            Primary Flags      Ancillary Flags               Primary Size  Ancillary Size
13     .text            PROGBITS        ALLOC EXECINSTR    ALLOC EXECINSTR SUNW_ABSENT   0x131         0
20     .data            PROGBITS        WRITE ALLOC        WRITE ALLOC SUNW_ABSENT       0x4c          0
21     .symtab          SYMTAB          0                  0                             0x450         0x450
22     .strtab          STRTAB          STRINGS            STRINGS                       0x1ad         0x1ad
24     .debug_info      PROGBITS        SUNW_ABSENT        0                             0             0x1a7
28     .shstrtab        STRTAB          STRINGS            STRINGS                       0x118         0x118
29     .SUNW_ancillary  SUNW_ancillary  0                  0                             0x30          0x30
The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.
$ elfdump -T SUNW_ancillary a.out a.out.anc
a.out:
Ancillary Section:  .SUNW_ancillary
     index  tag                value
       [0]  ANC_SUNW_CHECKSUM  0x8724
       [1]  ANC_SUNW_MEMBER    0x1       a.out
       [2]  ANC_SUNW_CHECKSUM  0x8724
       [3]  ANC_SUNW_MEMBER    0x1a3     a.out.anc
       [4]  ANC_SUNW_CHECKSUM  0xfbe2
       [5]  ANC_SUNW_NULL      0
a.out.anc:
Ancillary Section:  .SUNW_ancillary
     index  tag                value
       [0]  ANC_SUNW_CHECKSUM  0xfbe2
       [1]  ANC_SUNW_MEMBER    0x1       a.out
       [2]  ANC_SUNW_CHECKSUM  0x8724
       [3]  ANC_SUNW_MEMBER    0x1a3     a.out.anc
       [4]  ANC_SUNW_CHECKSUM  0xfbe2
       [5]  ANC_SUNW_NULL      0
The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows. Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and it contains the information needed to obtain a complete list of the files and to determine which of those files is the one currently being examined.
2. Create a section header array in memory, using the section header array from the object being examined as an initial template.
3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.
The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.
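To make these steps concrete, here is a minimal C sketch of the merging idea, written against the raw 32-bit ELF structures rather than any real debugger's internals. It assumes the caller already knows the member file names (in practice they come from the .SUNW_ancillary section), reads each member's section headers, and then picks, for every section index, the member that does not mark the section SHF_SUNW_ABSENT. The SHF_SUNW_ABSENT value is taken from Table 13-8 below; error handling and endianness concerns are deliberately glossed over.

/*
 * merge_view.c - sketch of building a merged section-header view from
 * a primary object and its ancillary object(s).
 * Usage:   ./merge_view primary.o primary.o.anc [...]
 * Compile: cc merge_view.c -o merge_view
 */
#include <sys/elf.h>    /* <elf.h> on non-Solaris systems */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#ifndef SHF_SUNW_ABSENT
#define SHF_SUNW_ABSENT 0x00200000      /* value per Table 13-8 below */
#endif

/* Read the 32-bit section header array of one ELF file. */
static Elf32_Shdr *
read_shdrs(const char *path, Elf32_Half *nscn)
{
        Elf32_Ehdr eh;
        Elf32_Shdr *sh = NULL;
        int fd = open(path, O_RDONLY);

        if (fd == -1)
                return (NULL);
        if (pread(fd, &eh, sizeof (eh), 0) == sizeof (eh) &&
            (sh = malloc((size_t)eh.e_shnum * sizeof (Elf32_Shdr))) != NULL &&
            pread(fd, sh, (size_t)eh.e_shnum * sizeof (Elf32_Shdr),
            (off_t)eh.e_shoff) > 0) {
                *nscn = eh.e_shnum;
        } else {
                free(sh);
                sh = NULL;
        }
        (void) close(fd);
        return (sh);
}

int
main(int argc, char **argv)
{
        Elf32_Shdr **all = calloc((size_t)argc, sizeof (Elf32_Shdr *));
        Elf32_Half nscn = 0;
        int f, i;

        if (all == NULL)
                return (1);
        for (f = 1; f < argc; f++)
                if ((all[f] = read_shdrs(argv[f], &nscn)) == NULL) {
                        (void) fprintf(stderr, "cannot read %s\n", argv[f]);
                        return (1);
                }

        /* All members share one header array; report who holds the data. */
        for (i = 0; i < (int)nscn; i++)
                for (f = 1; f < argc; f++)
                        if (!(all[f][i].sh_flags & SHF_SUNW_ABSENT)) {
                                (void) printf("section %2d: data in %s "
                                    "(size 0x%x)\n", i, argv[f],
                                    (unsigned int)all[f][i].sh_size);
                                break;
                        }
        return (0);
}

A real debugger would of course validate the ELF identification bytes and the .SUNW_ancillary checksums before trusting the files; the only point here is the per-index selection by SHF_SUNW_ABSENT.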
ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and two new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual. Part IV ELF Application Binary Interface Chapter 13: Object File Format Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type. e_type Identifies the object file type, as listed in the following table.
Name                 Value   Meaning
ET_NONE              0       No file type
ET_REL               1       Relocatable file
ET_EXEC              2       Executable file
ET_DYN               3       Shared object file
ET_CORE              4       Core file
ET_LOSUNW            0xfefe  Start operating system specific range
ET_SUNW_ANCILLARY    0xfefe  Ancillary object file
ET_HISUNW            0xfefd  End operating system specific range
ET_LOPROC            0xff00  Start processor-specific range
ET_HIPROC            0xffff  End processor-specific range
Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section. ... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ...
Table 13-5 ELF Section Types, sh_type
Name                  Value
. . .
SHT_LOSUNW            0x6fffffee
SHT_SUNW_ancillary    0x6fffffee
. . .
... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ...
Table 13-8 ELF Section Attribute Flags
Name                  Value
. . .
SHF_MASKOS            0x0ff00000
SHF_SUNW_NODISCARD    0x00100000
SHF_SUNW_ABSENT       0x00200000
SHF_SUNW_PRIMARY      0x00400000
SHF_MASKPROC          0xf0000000
. . .
... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ... Two members in the section header, sh_link and sh_info, hold special information, depending on section type.
Table 13-9 ELF sh_link and sh_info Interpretation
sh_type               sh_link                                                    sh_info
. . .
SHT_SUNW_ANCILLARY    The section header index of the associated string table.  0
. . .
Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section. Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.
Table 13-10 ELF Special Sections
Name               Type                  Attribute
. . .
.SUNW_ancillary    SHT_SUNW_ancillary    None
. . .
... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ... Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF. In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
typedef struct {
        Elf32_Word      a_tag;
        union {
                Elf32_Word      a_val;
                Elf32_Addr      a_ptr;
        } a_un;
} Elf32_Ancillary;

typedef struct {
        Elf64_Xword     a_tag;
        union {
                Elf64_Xword     a_val;
                Elf64_Addr      a_ptr;
        } a_un;
} Elf64_Ancillary;
For each object with this type, a_tag controls the interpretation of a_un.
a_val  These objects represent integer values with various interpretations.
a_ptr  These objects represent file offsets or addresses.
The following ancillary tags exist.
Table 13-NEW1 ELF Ancillary Array Tags
Name                 Value  a_un
ANC_SUNW_NULL        0      Ignored
ANC_SUNW_CHECKSUM    1      a_val
ANC_SUNW_MEMBER      2      a_ptr
ANC_SUNW_NULL Marks the end of the ancillary section. ANC_SUNW_CHECKSUM Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member. ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name. An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:
Tag                  Meaning
ANC_SUNW_CHECKSUM    Checksum of this object
ANC_SUNW_MEMBER      Name of object #1
ANC_SUNW_CHECKSUM    Checksum for object #1
. . .
ANC_SUNW_MEMBER      Name of object N
ANC_SUNW_CHECKSUM    Checksum for object N
ANC_SUNW_NULL
An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.
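As a rough illustration of that identification scheme, the following C sketch walks an in-memory copy of a 32-bit .SUNW_ancillary section. The structure layout and the ANC_SUNW_* tag values come from the definitions above; everything else (how the section data and its associated string table were loaded, the output format) is assumed purely for the sake of the example.

#include <stdio.h>
#include <stdint.h>

/* Tag values per Table 13-NEW1 above. */
#define ANC_SUNW_NULL           0
#define ANC_SUNW_CHECKSUM       1
#define ANC_SUNW_MEMBER         2

/* Mirrors the 32-bit Elf32_Ancillary layout shown above. */
typedef struct {
        uint32_t        a_tag;
        union {
                uint32_t        a_val;  /* checksums */
                uint32_t        a_ptr;  /* string table offsets */
        } a_un;
} anc_entry_t;

/*
 * anc:    contents of the .SUNW_ancillary section, already read into memory
 * strtab: contents of the string table named by the section's sh_link
 * Prints every member of the group and marks the one whose checksum matches
 * the leading ANC_SUNW_CHECKSUM, i.e. the file currently being examined.
 */
void
print_ancillary_group(const anc_entry_t *anc, const char *strtab)
{
        uint32_t self = 0;
        const char *name = NULL;

        if (anc->a_tag == ANC_SUNW_CHECKSUM) {
                self = anc->a_un.a_val;
                anc++;
        }
        for (; anc->a_tag != ANC_SUNW_NULL; anc++) {
                if (anc->a_tag == ANC_SUNW_MEMBER) {
                        name = strtab + anc->a_un.a_ptr;
                } else if (anc->a_tag == ANC_SUNW_CHECKSUM && name != NULL) {
                        (void) printf("%s (checksum 0x%x)%s\n", name,
                            (unsigned int)anc->a_un.a_val,
                            anc->a_un.a_val == self ? "  <-- this file" : "");
                        name = NULL;
                }
        }
}

From the command line, elfdump -T SUNW_ancillary, shown earlier, gives the same information without writing any code.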
Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them. The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.

    Read the article

  • Red Gate Coder interviews: Robin Hellen

    - by Michael Williamson
    Robin Hellen is a test engineer here at Red Gate, and is also the latest coder I’ve interviewed. We chatted about debugging code, the roles of software engineers and testers, and why Vala is currently his favourite programming language. How did you get started with programming?It started when I was about six. My dad’s a professional programmer, and he gave me and my sister one of his old computers and taught us a bit about programming. It was an old Amiga 500 with a variant of BASIC. I don’t think I ever successfully completed anything! It was just faffing around. I didn’t really get anywhere with it.But then presumably you did get somewhere with it at some point.At some point. The PC emerged as the dominant platform, and I learnt a bit of Visual Basic. I didn’t really do much, just a couple of quick hacky things. A bit of demo animation. Took me a long time to get anywhere with programming, really.When did you feel like you did start to get somewhere?I think it was when I started doing things for someone else, which was my sister’s final year of university project. She called up my dad two days before she was due to submit, saying “We need something to display a graph!”. Dad says, “I’m too busy, go talk to your brother”. So I hacked up this ugly piece of code, sent it off and they won a prize for that project. Apparently, the graph, the bit that I wrote, was the reason they won a prize! That was when I first felt that I’d actually done something that was worthwhile. That was my first real bit of code, and the ugliest code I’ve ever written. It’s basically an array of pre-drawn line elements that I shifted round the screen to draw a very spikey graph.When did you decide that programming might actually be something that you wanted to do as a career?It’s not really a decision I took, I always wanted to do something with computers. And I had to take a gap year for uni, so I was looking for twelve month internships. I applied to Red Gate, and they gave me a job as a tester. And that’s where I really started having to write code well. To a better standard that I had been up to that point.How did you find coming to Red Gate and working with other coders?I thought it was really nice. I learnt so much just from other people around. I think one of the things that’s really great is that people are just willing to help you learn. Instead of “Don’t you know that, you’re so stupid”, it’s “You can just do it this way”.If you could go back to the very start of that internship, is there something that you would tell yourself?Write shorter code. I have a tendency to write massive, many-thousand line files that I break out of right at the end. And then half-way through a project I’m doing something, I think “Where did I write that bit that does that thing?”, and it’s almost impossible to find. I wrote some horrendous code when I started. Just that principle, just keep things short. Even if looks a bit crazy to be jumping around all over the place all of the time, it’s actually a lot more understandable.And how do you hold yourself to that?Generally, if a function’s going off my screen, it’s probably too long. That’s what I tell myself, and within the team here we have code reviews, so the guys I’m with at the moment are pretty good at pulling me up on, “Doesn’t that look like it’s getting a bit long?”. It’s more just the subjective standard of readability than anything.So you’re an advocate of code review?Yes, definitely. Both to spot errors that you might have made, and to improve your knowledge. 
The person you’re reviewing will say “Oh, you could have done it that way”. That’s how we learn, by talking to others, and also just sharing knowledge of how your project works around the team, or even outside the team. Definitely a very firm advocate of code reviews.Do you think there’s more we could do with them?I don’t know. We’re struggling with how to add them as part of the process without it becoming too cumbersome. We’ve experimented with a few different ways, and we’ve not found anything that just works.To get more into the nitty gritty: how do you like to debug code?The first thing is to do it in my head. I’ll actually think what piece of code is likely to have caused that error, and take a quick look at it, just to see if there’s anything glaringly obvious there. The next thing I’ll probably do is throw in print statements, or throw some exceptions from various points, just to check: is it going through the code path I expect it to? A last resort is to actually debug code using a debugger.Why is the debugger the last resort?Probably because of the environments I learnt programming in. VB and early BASIC didn’t have much of a debugger, the only way to find out what your program was doing was to add print statements. Also, because a lot of the stuff I tend to work with is non-interactive, if it’s something that takes a long time to run, I can throw in the print statements, set a run off, go and do something else, and look at it again later, rather than trying to remember what happened at that point when I was debugging through it. So it also gives me the record of what happens. I hate just sitting there pressing F5, F5, continually. If you’re having to find out what your code is doing at each line, you’ve probably got a very wrong mental model of what your code’s doing, and you can find that out just as easily by inspecting a couple of values through the print statements.If I were on some codebase that you were also working on, what should I do to make it as easy as possible to understand?I’d say short and well-named methods. The one thing I like to do when I’m looking at code is to find out where a value comes from, and the more layers of indirection there are, particularly DI [dependency injection] frameworks, the harder it is to find out where something’s come from. I really hate that. I want to know if the value come from the user here or is a constant here, and if I can’t find that out, that makes code very hard to understand for me.As a tester, where do you think the split should lie between software engineers and testers?I think the split is less on areas of the code you write and more what you’re designing and creating. The developers put a structure on the code, while my major role is to say which tests we should have, whether we should test that, or it’s not worth testing that because it’s a tiny function in code that nobody’s ever actually going to see. So it’s not a split in the code, it’s a split in what you’re thinking about. Saying what code we should write, but alternatively what code we should take out.In your experience, do the software engineers tend to do much testing themselves?They tend to control the lowest layer of tests. And, depending on how the balance of people is in the team, they might write some of the higher levels of test. Or that might go to the testers. 
I’m the only tester on my team with three other developers, so they’ll be writing quite a lot of the actual test code, with input from me as to whether we should test that functionality, whereas on other teams, where it’s been more equal numbers, the testers have written pretty much all of the high level tests, just because that’s the best use of resource.If you could shuffle resources around however you liked, do you think that the developers should be writing those high-level tests?I think they should be writing them occasionally. It helps when they have an understanding of how testing code works and possibly what assumptions we’ve made in tests, and they can say “actually, it doesn’t work like that under the hood so you’ve missed this whole area”. It’s one of those agile things that everyone on the team should be at least comfortable doing the various jobs. So if the developers can write test code then I think that’s a very good thing.So you think testers should be able to write production code?Yes, although given most testers skills at coding, I wouldn’t advise it too much! I have written a few things, and I did make a few changes that have actually gone into our production code base. They’re not necessarily running every time but they are there. I think having that mix of skill sets is really useful. In some ways we’re using our own product to test itself, so being able to make those changes where it’s not working saves me a round-trip through the developers. It can be really annoying if the developers have no time to make a change, and I can’t touch the code.If the software engineers are consistently writing tests at all levels, what role do you think the role of a tester is?I think on a team like that, those distinctions aren’t quite so useful. There’ll be two cases. There’s either the case where the developers think they’ve written good tests, but you still need someone with a test engineer mind-set to go through the tests and validate that it’s a useful set, or the correct set for that code. Or they won’t actually be pure developers, they’ll have that mix of test ability in there.I think having slightly more distinct roles is useful. When it starts to blur, then you lose that view of the tests as a whole. The tester job is not to create tests, it’s to validate the quality of the product, and you don’t do that just by writing tests. There’s more things you’ve got to keep in your mind. And I think when you blur the roles, you start to lose that end of the tester.So because you’re working on those features, you lose that holistic view of the whole system?Yeah, and anyone who’s worked on the feature shouldn’t be testing it. You always need to have it tested it by someone who didn’t write it. Otherwise you’re a bit too close and you assume “yes, people will only use it that way”, but the tester will come along and go “how do people use this? How would our most idiotic user use this?”. I might not test that because it might be completely irrelevant. But it’s coming in and trying to have a different set of assumptions.Are you a believer that it should all be automated if possible?Not entirely. So an automated test is always better than a manual test for the long-term, but there’s still nothing that beats a human sitting in front of the application and thinking “What could I do at this point?”. The automated test is very good but they follow that strict path, and they never check anything off the path. 
The human tester will look at things that they weren’t expecting, whereas the automated test can only ever go “Is that value correct?” in many respects, and it won’t notice that on the other side of the screen you’re showing something completely wrong. And that value might have been checked independently, but you always find a few odd interactions when you’re going through something manually, and you always need to go through something manually to start with anyway, otherwise you won’t know where the important bits to write your automation are.When you’re doing that manual testing, do you think it’s important to do that across the entire product, or just the bits that you’ve touched recently?I think it’s important to do it mostly on the bits you’ve touched, but you can’t ignore the rest of the product. Unless you’re dealing with a very, very self-contained bit, you’re almost always encounter other bits of the product along the way. Most testers I know, even if they are looking at just one path, they’ll keep open and move around a bit anyway, just because they want to find something that’s broken. If we find that your path is right, we’ll go out and hunt something else.How do you think this fits into the idea of continuously deploying, so long as the tests pass?With deploying a website it’s a bit different because you can always pull it back. If you’re deploying an application to customers, when you’ve released it, it’s out there, you can’t pull it back. Someone’s going to keep it, no matter how hard you try there will be a few installations that stay around. So I’d always have at least a human element on that path. With websites, you could probably automate straight out, or at least straight out to an internal environment or a single server in a cloud of fifty that will serve some people. But I don’t think you should release to everyone just on automated tests passing.You’ve already mentioned using BASIC and C# — are there any other languages that you’ve used?I’ve used a few. That’s something that has changed more recently, I’ve become familiar with more languages. Before I started at Red Gate I learnt a bit of C. Then last year, I taught myself Python which I actually really enjoyed using. I’ve also come across another language called Vala, which is sort of a C#-like language. It’s basically a pre-processor for C, but it has very nice syntax. I think that’s currently my favourite language.Any particular reason for trying Vala?I have a completely Linux environment at home, and I’ve been looking for a nice language, and C# just doesn’t cut it because I won’t touch Mono. So, I was looking for something like C# but that was useable in an open source environment, and Vala’s what I found. C#’s got a few features that Vala doesn’t, and Vala’s got a few features where I think “It would be awesome if C# had that”.What are some of the features that it’s missing?Extension methods. And I think that’s the only one that really bugs me. I like to use them when I’m writing C# because it makes some things really easy, especially with libraries that you can’t touch the internals of. It doesn’t have method overloading, which is sometimes annoying.Where it does win over C#?Everything is non-nullable by default, you never have to check that something’s unexpectedly null.Also, Vala has code contracts. 
This is starting to come in C# 4, but the way it works in Vala is that you specify requirements in short phrases as part of your function signature and they stick to the signature, so that when you inherit it, it has exactly the same code contract as the base one, or when you inherit from an interface, you have to match the signature exactly. Just using those makes you think a bit more about how you’re writing your method, it’s not an afterthought when you’ve got contracts from base classes given to you, you can’t change it. Which I think is a lot nicer than the way C# handles it. When are those actually checked?They’re checked both at compile and run-time. The compile-time checking isn’t very strong yet, it’s quite a new feature in the compiler, and because it compiles down to C, you can write C code and interface with your methods, so you can bypass that compile-time check anyway. So there’s an extra runtime check, and if you violate one of the contracts at runtime, it’s game over for your program, there’s no exception to catch, it’s just goodbye!One thing I dislike about C# is the exceptions. You write a bit of code and fifty exceptions could come from any point in your ten lines, and you can’t mentally model how those exceptions are going to come out, and you can’t even predict them based on the functions you’re calling, because if you’ve accidentally got a derived class there instead of a base class, that can throw a completely different set of exceptions. So I’ve got no way of mentally modelling those, whereas in Vala they’re checked like Java, so you know only these exceptions can come out. You know in advance the error conditions.I think Raymond Chen on Old New Thing says “the only thing you know when you throw an exception is that you’re in an invalid state somewhere in your program, so just kill it and be done with it!”You said you’ve also learnt bits of Python. How did you find that compared to Vala and C#?Very different because of the dynamic typing. I’ve been writing a website for my own use. I’m quite into photography, so I take photos off my camera, post-process them, dump them in a file, and I get a webpage with all my thumbnails. So sort of like Picassa, but written by myself because I wanted something to learn Python with. There are some things that are really nice, I just found it really difficult to cope with the fact that I’m not quite sure what this object type that I’m passed is, I might not ever be sure, so it can randomly blow up on me. But once I train myself to ignore that and just say “well, I’m fairly sure it’s going to be something that looks like this, so I’ll use it like this”, then it’s quite nice.Any particular features that you’ve appreciated?I don’t like any particular feature, it’s just very straightforward to work with. It’s very quick to write something in, particularly as you don’t have to worry that you’ve changed something that affects a different part of the program. If you have, then that part blows up, but I can get this part working right now.If you were doing a big project, would you be willing to do it in Python rather than C# or Vala?I think I might be willing to try something bigger or long term with Python. We’re currently doing an ASP.NET MVC project on C#, and I don’t like the amount of reflection. There’s a lot of magic that pulls values out, and it’s all done under the scenes. 
It’s almost managed to put a dynamic type system on top of C#, which in many ways destroys the language to me, whereas if you’re already in a dynamic language, having things done dynamically is much more natural. In many ways, you get the worst of both worlds. I think for web projects, I would go with Python again, whereas for anything desktop, command-line or GUI-based, I’d probably go for C# or Vala, depending on what environment I’m in.It’s the fact that you can gain from the strong typing in ways that you can’t so much on the web app. Or, in a web app, you have to use dynamic typing at some point, or you have to write a hell of a lot of boilerplate, and I’d rather use the dynamic typing than write the boilerplate.What do you think separates great programmers from everyone else?Probably design choices. Choosing to write it a piece of code one way or another. For any given program you ask me to write, I could probably do it five thousand ways. A programmer who is capable will see four or five of them, and choose one of the better ones. The excellent programmer will see the largest proportion and manage to pick the best one very quickly without having to think too much about it. I think that’s probably what separates, is the speed at which they can see what’s the best path to write the program in. More Red Gater Coder interviews

    Read the article

  • Querying Networking Statistics: dlstat(1M)

    - by user12612042
    Oracle Solaris 11 took another big leap forward in networking technologies, providing a reliable, secure and scalable infrastructure to meet the growing needs of today's datacenter implementations. Oracle Solaris 11 introduced a new and powerful network stack architecture, also known as Project Crossbow. From Solaris 11 onwards, we introduced a command line tool, viz. dlstat(1M), to query network statistics. dlstat (for datalink statistics) is a statistics querying counterpart for dladm(1M) - the datalink administration tool. The tool is very easy to get started with. Just type dlstat at a shell prompt on Solaris 11 (or later). For example:
# dlstat
           LINK    IPKTS   RBYTES    OPKTS   OBYTES
           net0  834.11K  145.91M  575.19K  104.24M
           net1    7.87K    2.04M        0        0
In this example, the system has two datalinks, net0 and net1. The output columns denote input packets/bytes as well as output packets/bytes. The numbers are abbreviated in xxx.xxUnit format. However, one could get the actual counts by simply running dlstat -u R (R for raw):
# dlstat -u R
           LINK   IPKTS     RBYTES   OPKTS     OBYTES
           net0  834271  145931244  575246  104242934
           net1    7869    2036958       0          0
In addition, dlstat also supports various subcommands:
dlstat help
The following subcommands are supported:
Stats : show-aggr show-ether show-link show-phys show-bridge
For more info, run: dlstat help {default|}
I will only describe a couple of interesting subcommands/options here. For a comprehensive description of all the dlstat subcommands, refer to dlstat's official manual. For NICs that support multiple rings (e.g. ixgbe), dlstat show-phys -r allows us to query per-Rx-ring statistics. For example:
dlstat show-phys -r net4
           LINK  TYPE  INDEX  IPKTS  RBYTES
           net4    rx      0      0       0
           net4    rx      1      0       0
           net4    rx      2      0       0
           net4    rx      3      0       0
           net4    rx      4      0       0
           net4    rx      5      0       0
           net4    rx      6      0       0
           net4    rx      7      0       0
In this case, net4 is just a vanity name for an ixgbe datalink. This view is especially useful if one wants to look at the network traffic spread across all the available rings. Furthermore, any of the dlstat commands could be run with the -i option to periodically query and display stats. For example, running dlstat show-phys -r net4 -i 5 will emit per-Rx-ring stats every 5 seconds. This is especially useful while analyzing a live system. Similarly, dlstat show-phys -t could be used to query per-Tx-ring stats. -r and -t could also be combined as dlstat show-phys -rt to query both Rx as well as Tx stats at the same time. Finally, there is also a quick way to dump ALL the stats. Just run dlstat -A. You probably want to redirect this output to a file because you are going to get a whole load of stats :-).
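If you want the same counters programmatically rather than from the shell, the kernel statistics framework that tools like dlstat draw on is also reachable from C through libkstat(3LIB). The sketch below is an assumption-heavy illustration, not anything documented by the dlstat manual: it guesses that the per-link counters are published under the "link" kstat module, instance 0, with the datalink name (net0 here) as the kstat name, and that the statistics are named ipackets64, rbytes64, opackets64 and obytes64. That matches what I have seen on Solaris 11 systems, but verify the naming on your own machine with kstat(1M) before relying on it.

/* link_stats.c - print a few datalink counters via libkstat.
 * Compile: cc link_stats.c -o link_stats -lkstat
 */
#include <kstat.h>
#include <stdio.h>

int
main(void)
{
        kstat_ctl_t *kc;
        kstat_t *ksp;
        kstat_named_t *kn;
        const char *stats[] = { "ipackets64", "rbytes64",
            "opackets64", "obytes64" };
        int i;

        if ((kc = kstat_open()) == NULL) {
                perror("kstat_open");
                return (1);
        }
        /* Assumed naming: module "link", instance 0, name = datalink name. */
        if ((ksp = kstat_lookup(kc, (char *)"link", 0, (char *)"net0")) == NULL ||
            kstat_read(kc, ksp, NULL) == -1) {
                (void) fprintf(stderr, "no link kstat found for net0\n");
                (void) kstat_close(kc);
                return (1);
        }
        for (i = 0; i < 4; i++) {
                kn = kstat_data_lookup(ksp, (char *)stats[i]);
                if (kn != NULL)
                        (void) printf("%-12s %llu\n", stats[i],
                            (unsigned long long)kn->value.ui64);
        }
        (void) kstat_close(kc);
        return (0);
}

For day-to-day work, dlstat and dladm remain the supported interfaces; this is just one way to peek at the underlying counters.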

    Read the article

  • My Ubuntu Touch seems to be broken no matter how many different files I try

    - by zeokila
    So I'm planning on testing out Ubuntu Touch, and developing some applications for it, so I thought I would flash it to my Nexus 4, which was already unlocked and running Paranoid Android and the associated kernel. I headed to Ubuntu's website, browsed around and came across this page: Touch/Install - Ubuntu Wiki I followed Step 1 perfectly, word for word. It seems to me that everything on that part is fine. I skipped Step 2, having already done that for Paranoid Android, and then I followed Steps 3 and 4 word for word as well. Using the command phablet-flash -b, everything seemed to be fine. So it booted up and all seemed normal, but it wasn't. Here are some major bugs that only seem to happen to me. So I was greeted with a normal lock screen: But one of the first noticeable things on the home screen is that I only have 4 tabs, not 5: Some of the applications that are supposed to work do not (I know some are dummies), like here with the calculator, this is what I get: On the homescreen it is a black box, and it will crash a couple of seconds later: Another annoying problem is that when I want to close apps, going from right to left, if I close an app on the left first it will close it, but it will open the app to its right, which is weird: Yet another bug, this one is in the pull-down drawer: when you just click on it, you can see that not all the icons are there: Pretty much everything else works as it should, but my main problem is that there is no telephony. I'm not sure how it works exactly, but I'm never asked for a SIM code (I'm guessing you need that?), I can't compose SMS's and can't dial numbers, and it won't let me select the 'Send' or 'Call' button. I've since tried manual installs all over the place with these files: http://cdimage.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/current/saucy-preinstalled-armel+mako.zip (Says 44MB on site - 46.6MB on my laptop) http://cdimage.ubuntu.com/ubuntu-touch-preview/daily-preinstalled/current/saucy-preinstalled-phablet-armhf.zip (Says 366MB on site - 383.2MB on my laptop) There are some weird size differences between what the site told me and what I downloaded, but I've tried re-downloading just to end up with the same file. And it just always ends up with the same problems. No telephony and those weird bugs. So my question is, how the hell can I get the same version as everyone else, with the ability to send texts and call and open the calculator and more? Also, definitely running saucy: And maybe useful? This is what's in the device info:

    Read the article

  • How to create Adhoc workflow in UCM

    - by vijaykumar.yenne
    UCM has an inbuilt workflow engine that can handle document-centric approval/rejection processes to ensure the right set of assets go into the repository. Anybody who has gone through the documentation is aware that there are two types of workflows that can be defined using the Workflow Admin applet in UCM, namely Criteria and Basic. While a criteria workflow is an automatic process based on certain metadata attributes (Security Group and one of the metadata fields), a basic workflow is a manual workflow that needs to be initiated by the admin. Any workflow that can be put on the whiteboard can be translated into the UCM workflow process, and there are concepts like sub workflows, tokens, events, and idoc scripting that can be introduced to handle any kind of complex workflow. There is a specific Workflow Implementation guide that explains the concepts in detail. One of the standard queries I come across is how to handle ad hoc workflows where, at the time of contributing the content, the contributors would like to decide on the workflow to be initiated and the users to be picked for approval in each step, hence this post. This is what I want to achieve: I would like to display on my Checkin screen the kind of workflows that a contributor could choose from. Based on the workflow the contributor chooses, the other metadata fields (Step One, Step Two and Step Three) need to be filled in, and these fields decide who the approvers are going to be.
1. Create a criteria workflow called One_Step_Review.
2. Create two tokens: StepOne <$wfAddUser(xWorkflowStepOne, "user")$> and OriginalAuthor <$wfAddUser(wfGet("OriginalAuthor"), "user")$>.
3. Create two steps in the workflow created (One_Step_Review).
4. Edit Step1 of the workflow, add the Step One token and select the review permission.
5. In the exit conditions tab, require at least one reviewer.
6. In the events tab, add an entry event <$wfSet("OriginalAuthor",dDocAuthor)$> to capture the contributor, who shall be notified in the second step of the workflow.
7. Add the second step, Notify_Author, to the workflow.
8. Add the original author token to the above step.
9. Enable the workflow.
10. Open the Configuration Manager applet and create a metadata field Workflow with the option list enabled, and add the list of values as shown here.
11. Create another metadata field WorkflowStepOne with its option list configured to the Users view. This shall display all the users registered with UCM, which when selected shall be associated with the tokens of the workflow. Refer to the above token.
As indicated in the above steps, you could create multiple workflows and associate the custom metadata field values to the tokens so that the contributors can decide who can approve their content.

    Read the article

  • Silverlight Cream for April 04, 2010 -- #830

    - by Dave Campbell
    In this Issue: Michael Washington, Hassan, David Anson, Jeff Wilcox, UK Application Development Consulting, Davide Zordan, Victor Gaudioso, Anoop Madhusudanan, Phil Middlemiss, and Laurent Bugnion. Shoutouts: Josh Smith has a good-read post up: Design-time data is still data Shawn Hargreaves reported his MIX demo released From SilverlightCream.com: Silverlight MVVM: Enabling Design-Time Data in Expression Blend When Using Web Services Michael Washington has a tutorial up on MVVM and using a web service to get design-time data that works in Blend also... lots of information and screenshots. WP7 Transition Animation Hassan has a new WP7 tutorial up that demonstrates playing media and adding transition animation between pages. Tip: For a truly read-only custom DependencyProperty in Silverlight, use a read-only CLR property instead David Anson's latest tip is in response to comments on his previous post and details one by Dr. WPF who points out that a read-only DependencyProperty doesn't actually need to be a DependencyProperty as long as the class implements INotifyPropertyChanged. Template parts and custom controls (quick tip) Jeff Wilcox has posted a set of tips and recommendations to use when developing control development in Silverlight ... this is a post to bookmark. Flexible Data Template Support in Silverlight The UK Application Development Consulting details a 'problem' in Silverlight that doesn't exist in WPF and that is data templates that vary by type... and discusses a way around it. Multi-Touch enabling Silverlight Simon using Blend behaviors and the Surface sample for Silverlight Davide Zordan brought Multi-Touch to the Silverlight Simon game on CodePlex using Blend Behaviors. New Video Tutorial: How to Use a Behavior to Fire Methods from Objects in Styles Victor Gaudioso has a video tutorial up responding to a question from a developer. He demonstrates development of a Behavior that can be attached to objects in or out of Styles that allows you to specify what Method they need to fire. Creating a Silverlight Client for @shanselman ’s Nerd Dinner, using oData and Bing Maps Anoop Madhusudanan took Scott Hanselman's post on an OData API for StackOverflow, and has created a Silverlight client for Nerd Dinner, including BingMaps. A Chrome and Glass Theme - Part 2 Phil Middlemiss has the next part of his Chrome and Glass Theme up. In this one he creates a very nice chrome-look button with visual state changes. MVVM Light Toolkit V3 SP1 for Windows Phone 7 Laurent Bugnion has released a new version of MVVM Light for WP7. Included is an installation manual and information about what was changed. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • The Missing Post

    - by Joe Mayo
    It’s somewhat of a mystery how the writing process can conjure up results that weren’t initially intended. Case in point is the fact that another post was planned to be in place of this one, but it never saw the light of day. This particular post started off as an introduction to a technology I had just learned, used, and wanted to share the experience with others. The beginning was fun and demonstrated how easy it was to get started. One of the things I’ve been pondering over time is that the Web is filled with introductions to new technologies and quick first looks, so I set out to add more depth, share lessons learned, and generally help you avoid the problems I encountered along the way; problems being a key theme of why you aren’t reading that post at this very minute. Problems that curiously came from nowhere to thwart my good intentions. Success was sweet when using the tool for the prototypical demo scenario. The thing is, I intended the tool to accomplish a real task. Having embarked on the path toward getting the job done, glitches began creeping into the process. Realizing that this was all a bit new, I had patience and found a suitable work-around, but this was to be short lived. Like ants marching to a freshly laid out picnic, the problems kept coming until I had to get up and walk away. Not to be outdone, sheer will and brute force manual intervention led to mission accomplishment. Though I kept a positive outlook and was pleased at the final result, the process of using the tool had somewhat soured. Regardless of a less than stellar experience with the tool, I have a great deal of respect for the company that produced it and the people who built it. Perhaps I empathize with what they might feel after reading a post that details such deficiencies in their product. Sure, if you’re in this business, you’ve got to have a thick skin; brush it off, fix the problem, and move on to greatness. But, today I feel like they’re people and are probably already aware of any issues I would seemingly reveal. Anyone who builds a product or provides a service takes a lot of pride in what they do. Sometimes they screw up and, if they’re worth a dime, they make up for it. I think that will happen in this case and there’s no reason why I should post information that has the potential to sound more negative than helpful. While no one would ever notice or care either way, I’m posting something that won’t harm. Joe

    Read the article

  • How to Run Apache Commands From Oracle HTTP Server 11g Home

    - by Daniel Mortimer
    Every now and then you come across a problem when there is nothing in the "troubleshooting manual" that can help you. Instead you need to think outside the box. This happened to me two or three years back. Oracle HTTP Server (OHS) 11g did not start. The error reported back by OPMN was generic and gave no clue, and worse, the HTTP Server error log was empty, and remained so even after I had increased the OPMN and HTTP Server log levels. After checking configuration files, operating system resources, etc., I was still no nearer the solution. And then the light bulb moment! OHS is based on Apache - what happens if I attempt to start HTTP Server using the native apache command? Trouble was, the OHS 11g solution has its binaries and configuration files in separate "home" directories: ORACLE_HOME contains the binaries, and ORACLE_INSTANCE contains the configuration files. How to set the environment so that native apache commands run without error? Eventually, with help from a colleague, the knowledge article How to Start Oracle HTTP Server 11g Without Using opmnctl [ID 946532.1] was born! To be honest, I cannot remember the exact cause and solution to that OHS problem two or three years ago. But I do remember that an attempt to start HTTP Server using the native apache command threw back an error to the console which led me to discover the culprit was some unusual filesystem fault. The other day, I was asked to review and publish a new knowledge article which described how to use the apache command to dump a list of static and shared loaded modules. This got me thinking that it was time [ID 946532.1] was given an update. The result: How To Run Native Apache Commands in an Oracle HTTP Server 11g Environment [ID 946532.1]. Highlights: a title change; improved environment setting scripts; interactive operation, so there should be no need to manually edit the scripts (although readers are welcome to do so); an automatic dump of some diagnostic information; and the inclusion of some links to other troubleshooting collateral. To view the knowledge article you need a My Oracle Support login. For convenience, you can obtain the scripts via the links below. MS Windows: Wrapper cmd script - calls the main cmd script [after download, remove the ".txt" file extension]; Main cmd script - sets the OHS 11g environment to run Apache commands [after download, remove the ".txt" file extension]. Unix: Shell script - sets the OHS 11g environment to run Apache commands on Unix. Please note: I cannot guarantee that the scripts held in the blog repository will be maintained. Any enhancements or fault fixes will be applied to the scripts attached to the knowledge article. Lastly, to find out more about native apache commands, refer to the Apache documentation: apachectl - Apache HTTP Server Control Interface [http://httpd.apache.org/docs/2.2/programs/apachectl.html] and httpd - Apache Hypertext Transfer Protocol Server [http://httpd.apache.org/docs/2.2/programs/httpd.html]
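    To give a flavour of what the scripts automate, the bare minimum on Unix looks something like the sketch below. The ORACLE_HOME, ORACLE_INSTANCE and component paths are assumptions for a typical install, not values from the article, so adjust them to your environment:

        # Sketch only -- the knowledge article's scripts set this up interactively and more robustly.
        export ORACLE_HOME=/u01/app/oracle/middleware/ohs_home
        export ORACLE_INSTANCE=/u01/app/oracle/middleware/ohs_instance1
        export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
        # List the statically compiled modules, then dump static and shared (DSO) modules:
        $ORACLE_HOME/ohs/bin/httpd -l
        $ORACLE_HOME/ohs/bin/httpd -f $ORACLE_INSTANCE/config/OHS/ohs1/httpd.conf -M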

    Read the article

  • How to Assign a Static IP to an Ubuntu 10.04 Desktop Computer

    - by Mysticgeek
    If you have a home network with several computers, assigning them static IP addresses can make troubleshooting easier. Today we take a look at switching from DHCP to a static IP in Ubuntu. Assign a Static IP. Using Static IPs prevents address conflicts between machines and can allow easier access to them. If you have a small home network and are satisfied with the machines getting their IP address automatically via DHCP, there won't be anything gained by using static addresses. Using Static IPs isn't necessarily for the average user, but if you're a geek who wants to know the address assigned to each machine, it can allow for faster troubleshooting. To change your Ubuntu machine to a Static IP, go to System \ Preferences \ Network Connections. In our example, we're on a wired system, so click on the Wired tab, then select Auto eth0 and click on Edit. Select the IPv4 Settings tab, change Method to Manual, and click the Add button. Then type in the Static IP Address, Subnet Mask, DNS Servers, and Default Gateway, and click Apply when you're finished. Make sure to hit Enter after typing in the Default Gateway, otherwise it will revert back to 0.0.0.0. You'll need to enter your admin password before the changes take effect. To verify the changes have been made successfully, launch a Terminal session and type in ifconfig at the command prompt, or follow these directions. You also might want to ping the address from another machine to make sure everything is communicating. If you want to assign a Static IP to your Windows machines, check out our article on how to assign a Static IP on Windows systems (make sure to browse the comments, as our readers have some good suggestions). Whether you have a small office or home network set up with a server and several machines, using a Static IP on each device can help you manage them easily. Again, it isn't for everyone, as it really depends on how your network is set up and the way you use it.
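    As a quick sanity check after clicking Apply, something like the following in a Terminal confirms the settings stuck; the interface name and gateway address below are examples, not values from the article:

        ifconfig eth0              # the inet addr line should now show your static IP
        route -n                   # the default gateway should match what you typed in
        ping -c 3 192.168.1.1      # example gateway address -- substitute your own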

    Read the article

  • Watching Green Day and discovering Sitecore, priceless.

    - by jonel
    I'm feeling inspired and I'd like to share a technique we've implemented in Sitecore to address a URL mapping from our legacy site that we wanted to carry over to the new, beautiful Littelfuse.com. The challenge is to carry over all of our series URLs that have been published in our datasheets; we currently have a lot of series, and having to create a manual mapping for each of them could be really tedious. The URLs have the format http://www.littelfuse.com/series/series-name.html, for instance http://www.littelfuse.com/series/flnr.html. It would have been easier if we had our information architecture defined like this, but that would have been too easy. The solution I took is two-fold. First, I need to create a URL rewrite rule using the IIS URL Rewrite Module 2.0. Second, we need to implement a handler that will take care of the actual lookup of the series. It will be amazing once we've gone over the details. Let's start with the URL rewrite. Create a new blank rule; you can name it anything you wish. The key parts to talk about here are the Pattern and the Action groups. The Pattern is nothing but regex; basically, I'm telling it to match the regex I have defined. In the Action group, I am telling it what to do, in this case rewrite to the redirect.aspx webform. In this implementation I will be using Rewrite instead of Redirect so the URL sticks in the browser. If you opt to use Redirect, then the URL bar will display the new URL our webform redirects to. Let me explain one small thing: the (\w+) in my Pattern group's regex will actually translate to {R:1} in my Action group. This is where the magic begins. Now let's see what our Redirect.aspx contains. Remember our {R:1} above, which becomes the query string variable s? This is basic .NET code. The only thing you will probably ask about is the LFSearch class. It's our own implementation for finding items using a field search: we supply the field name, the value of the field, the template name of the item we are after, and true or false depending on whether we want an exact search or not. If eureka, then redirect to that item's Path (URL). If not, tell the user tough luck, here's the 404 page as a consolation. Amazing, ain't it?
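    For readers who prefer to edit web.config directly, a rough equivalent of the rule described above would look like the following; the rule name and the exact regex are a reconstruction from the description, not copied from the actual configuration:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="LegacySeriesUrls" stopProcessing="true">
                <!-- (\w+) captures the series name and surfaces as {R:1} -->
                <match url="^series/(\w+)\.html$" />
                <action type="Rewrite" url="/redirect.aspx?s={R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>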

    Read the article

  • Virtual host is not working in Ubuntu 14 VPS using XAMPP 1.8.3

    - by viral4ever
    I am using XAMPP as server in ubuntu 14.04 VPS of digitalocean. I tried to setup virtual hosts. But it is not working and I am getting 403 error of access denied. I changed files too. My files with changes are /opt/lampp/etc/httpd.conf # # This is the main Apache HTTP server configuration file. It contains the # configuration directives that give the server its instructions. # See <URL:http://httpd.apache.org/docs/trunk/> for detailed information. # In particular, see # <URL:http://httpd.apache.org/docs/trunk/mod/directives.html> # for a discussion of each configuration directive. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so 'log/access_log' # with ServerRoot set to '/www' will be interpreted by the # server as '/www/log/access_log', where as '/log/access_log' will be # interpreted as '/log/access_log'. # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # Do not add a slash at the end of the directory path. If you point # ServerRoot at a non-local disk, be sure to specify a local disk on the # Mutex directive, if file-based mutexes are used. If you wish to share the # same ServerRoot for multiple httpd daemons, you will need to change at # least PidFile. # ServerRoot "/opt/lampp" # # Mutex: Allows you to set the mutex mechanism and mutex file directory # for individual mutexes, or change the global defaults # # Uncomment and change the directory if mutexes are file-based and the default # mutex file directory is not on a local disk or is not appropriate for some # other reason. # # Mutex default:logs # # Listen: Allows you to bind Apache to specific IP addresses and/or # ports, instead of the default. See also the <VirtualHost> # directive. # # Change this to Listen on specific IP addresses as shown below to # prevent Apache from glomming onto all bound IP addresses. # #Listen 12.34.56.78:80 Listen 80 # # Dynamic Shared Object (DSO) Support # # To be able to use the functionality of a module which was built as a DSO you # have to place corresponding `LoadModule' lines at this location so the # directives contained in it are actually available _before_ they are used. # Statically compiled modules (those listed by `httpd -l') do not need # to be loaded here. 
# # Example: # LoadModule foo_module modules/mod_foo.so # LoadModule authn_file_module modules/mod_authn_file.so LoadModule authn_dbm_module modules/mod_authn_dbm.so LoadModule authn_anon_module modules/mod_authn_anon.so LoadModule authn_dbd_module modules/mod_authn_dbd.so LoadModule authn_socache_module modules/mod_authn_socache.so LoadModule authn_core_module modules/mod_authn_core.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule authz_dbm_module modules/mod_authz_dbm.so LoadModule authz_owner_module modules/mod_authz_owner.so LoadModule authz_dbd_module modules/mod_authz_dbd.so LoadModule authz_core_module modules/mod_authz_core.so LoadModule authnz_ldap_module modules/mod_authnz_ldap.so LoadModule access_compat_module modules/mod_access_compat.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_form_module modules/mod_auth_form.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule allowmethods_module modules/mod_allowmethods.so LoadModule file_cache_module modules/mod_file_cache.so LoadModule cache_module modules/mod_cache.so LoadModule cache_disk_module modules/mod_cache_disk.so LoadModule socache_shmcb_module modules/mod_socache_shmcb.so LoadModule socache_dbm_module modules/mod_socache_dbm.so LoadModule socache_memcache_module modules/mod_socache_memcache.so LoadModule dbd_module modules/mod_dbd.so LoadModule bucketeer_module modules/mod_bucketeer.so LoadModule dumpio_module modules/mod_dumpio.so LoadModule echo_module modules/mod_echo.so LoadModule case_filter_module modules/mod_case_filter.so LoadModule case_filter_in_module modules/mod_case_filter_in.so LoadModule buffer_module modules/mod_buffer.so LoadModule ratelimit_module modules/mod_ratelimit.so LoadModule reqtimeout_module modules/mod_reqtimeout.so LoadModule ext_filter_module modules/mod_ext_filter.so LoadModule request_module modules/mod_request.so LoadModule include_module modules/mod_include.so LoadModule filter_module modules/mod_filter.so LoadModule substitute_module modules/mod_substitute.so LoadModule sed_module modules/mod_sed.so LoadModule charset_lite_module modules/mod_charset_lite.so LoadModule deflate_module modules/mod_deflate.so LoadModule mime_module modules/mod_mime.so LoadModule ldap_module modules/mod_ldap.so LoadModule log_config_module modules/mod_log_config.so LoadModule log_debug_module modules/mod_log_debug.so LoadModule logio_module modules/mod_logio.so LoadModule env_module modules/mod_env.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule cern_meta_module modules/mod_cern_meta.so LoadModule expires_module modules/mod_expires.so LoadModule headers_module modules/mod_headers.so LoadModule usertrack_module modules/mod_usertrack.so LoadModule unique_id_module modules/mod_unique_id.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule version_module modules/mod_version.so LoadModule remoteip_module modules/mod_remoteip.so LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_connect_module modules/mod_proxy_connect.so LoadModule proxy_ftp_module modules/mod_proxy_ftp.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so LoadModule proxy_scgi_module modules/mod_proxy_scgi.so LoadModule proxy_ajp_module modules/mod_proxy_ajp.so LoadModule proxy_balancer_module modules/mod_proxy_balancer.so LoadModule proxy_express_module 
modules/mod_proxy_express.so LoadModule session_module modules/mod_session.so LoadModule session_cookie_module modules/mod_session_cookie.so LoadModule session_dbd_module modules/mod_session_dbd.so LoadModule slotmem_shm_module modules/mod_slotmem_shm.so LoadModule ssl_module modules/mod_ssl.so LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so LoadModule lbmethod_heartbeat_module modules/mod_lbmethod_heartbeat.so LoadModule unixd_module modules/mod_unixd.so LoadModule dav_module modules/mod_dav.so LoadModule status_module modules/mod_status.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule info_module modules/mod_info.so LoadModule suexec_module modules/mod_suexec.so LoadModule cgi_module modules/mod_cgi.so LoadModule cgid_module modules/mod_cgid.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule vhost_alias_module modules/mod_vhost_alias.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule dir_module modules/mod_dir.so LoadModule actions_module modules/mod_actions.so LoadModule speling_module modules/mod_speling.so LoadModule userdir_module modules/mod_userdir.so LoadModule alias_module modules/mod_alias.so LoadModule rewrite_module modules/mod_rewrite.so <IfDefine JUSTTOMAKEAPXSHAPPY> LoadModule php4_module modules/libphp4.so LoadModule php5_module modules/libphp5.so </IfDefine> <IfModule unixd_module> # # If you wish httpd to run as a different user or group, you must run # httpd as root initially and it will switch. # # User/Group: The name (or #number) of the user/group to run httpd as. # It is usually good practice to create a dedicated user and group for # running httpd, as with most system services. # User root Group www </IfModule> # 'Main' server configuration # # The directives in this section set up the values used by the 'main' # server, which responds to any requests that aren't handled by a # <VirtualHost> definition. These values also provide defaults for # any <VirtualHost> containers you may define later in the file. # # All of these directives may appear inside <VirtualHost> containers, # in which case these default settings will be overridden for the # virtual host being defined. # # # ServerAdmin: Your address, where problems with the server should be # e-mailed. This address appears on some server-generated pages, such # as error documents. e.g. [email protected] # ServerAdmin [email protected] # # ServerName gives the name and port that the server uses to identify itself. # This can often be determined automatically, but we recommend you specify # it explicitly to prevent problems during startup. # # If your host doesn't have a registered DNS name, enter its IP address here. # #ServerName www.example.com:@@Port@@ # XAMPP ServerName localhost # # Deny access to the entirety of your server's filesystem. You must # explicitly permit access to web content directories in other # <Directory> blocks below. # <Directory /> AllowOverride none Require all denied </Directory> # # Note that from this point forward you must specifically allow # particular features to be enabled - so if something's not working as # you might expect, make sure that you have specifically enabled it # below. # # # DocumentRoot: The directory out of which you will serve your # documents. 
By default, all requests are taken from this directory, but # symbolic links and aliases may be used to point to other locations. # DocumentRoot "/opt/lampp/htdocs" <Directory "/opt/lampp/htdocs"> # # Possible values for the Options directive are "None", "All", # or any combination of: # Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews # # Note that "MultiViews" must be named *explicitly* --- "Options All" # doesn't give it to you. # # The Options directive is both complicated and important. Please see # http://httpd.apache.org/docs/trunk/mod/core.html#options # for more information. # #Options Indexes FollowSymLinks # XAMPP Options Indexes FollowSymLinks ExecCGI Includes # # AllowOverride controls what directives may be placed in .htaccess files. # It can be "All", "None", or any combination of the keywords: # Options FileInfo AuthConfig Limit # #AllowOverride None # since XAMPP 1.4: AllowOverride All # # Controls who can get stuff from this server. # Require all granted </Directory> # # DirectoryIndex: sets the file that Apache will serve if a directory # is requested. # <IfModule dir_module> #DirectoryIndex index.html # XAMPP DirectoryIndex index.html index.html.var index.php index.php3 index.php4 </IfModule> # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ".ht*"> Require all denied </Files> # # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog "logs/error_log" # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn <IfModule log_config_module> # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> # You need to enable mod_logio.c to use %I and %O LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> # # The location and format of the access logfile (Common Logfile Format). # If you do not define any access logfiles within a <VirtualHost> # container, they will be logged here. Contrariwise, if you *do* # define per-<VirtualHost> access logfiles, transactions will be # logged therein and *not* in this file. # CustomLog "logs/access_log" common # # If you prefer a logfile with access, agent, and referer information # (Combined Logfile Format) you can use the following directive. # #CustomLog "logs/access_log" combined </IfModule> <IfModule alias_module> # # Redirect: Allows you to tell clients about documents that used to # exist in your server's namespace, but do not anymore. The client # will make a new request for the document at its new location. # Example: # Redirect permanent /foo http://www.example.com/bar # # Alias: Maps web paths into filesystem paths and is used to # access content that does not live under the DocumentRoot. # Example: # Alias /webpath /full/filesystem/path # # If you include a trailing / on /webpath then the server will # require it to be present in the URL. 
You will also likely # need to provide a <Directory> section to allow access to # the filesystem path. # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the target directory are treated as applications and # run by the server when requested rather than as documents sent to the # client. The same rules about trailing "/" apply to ScriptAlias # directives as to Alias. # ScriptAlias /cgi-bin/ "/opt/lampp/cgi-bin/" </IfModule> <IfModule cgid_module> # # ScriptSock: On threaded servers, designate the path to the UNIX # socket used to communicate with the CGI daemon of mod_cgid. # #Scriptsock logs/cgisock </IfModule> # # "/opt/lampp/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # <Directory "/opt/lampp/cgi-bin"> AllowOverride None Options None Require all granted </Directory> <IfModule mime_module> # # TypesConfig points to the file containing the list of mappings from # filename extension to MIME-type. # TypesConfig etc/mime.types # # AddType allows you to add to or override the MIME configuration # file specified in TypesConfig for specific file types. # #AddType application/x-gzip .tgz # # AddEncoding allows you to have certain browsers uncompress # information on the fly. Note: Not all browsers support this. # #AddEncoding x-compress .Z #AddEncoding x-gzip .gz .tgz # # If the AddEncoding directives above are commented-out, then you # probably should define those extensions to indicate media types: # AddType application/x-compress .Z AddType application/x-gzip .gz .tgz # # AddHandler allows you to map certain file extensions to "handlers": # actions unrelated to filetype. These can be either built into the server # or added with the Action directive (see below) # # To use CGI scripts outside of ScriptAliased directories: # (You will also need to add "ExecCGI" to the "Options" directive.) # #AddHandler cgi-script .cgi # XAMPP, since LAMPP 0.9.8: AddHandler cgi-script .cgi .pl # For type maps (negotiated resources): #AddHandler type-map var # # Filters allow you to process content before it is sent to the client. # # To parse .shtml files for server-side includes (SSI): # (You will also need to add "Includes" to the "Options" directive.) # # XAMPP AddType text/html .shtml AddOutputFilter INCLUDES .shtml </IfModule> # # The mod_mime_magic module allows the server to use various hints from the # contents of the file itself to determine its type. The MIMEMagicFile # directive tells the module where the hint definitions are located. # #MIMEMagicFile etc/magic # # Customizable error responses come in three flavors: # 1) plain text 2) local redirects 3) external redirects # # Some examples: #ErrorDocument 500 "The server made a boo boo." #ErrorDocument 404 /missing.html #ErrorDocument 404 "/cgi-bin/missing_handler.pl" #ErrorDocument 402 http://www.example.com/subscription_info.html # # # MaxRanges: Maximum number of Ranges in a request before # returning the entire resource, or one of the special # values 'default', 'none' or 'unlimited'. # Default setting is to accept 200 Ranges. #MaxRanges unlimited # # EnableMMAP and EnableSendfile: On systems that support it, # memory-mapping or the sendfile syscall may be used to deliver # files. This usually improves server performance, but must # be turned off when serving from networked-mounted # filesystems or if support for these functions is otherwise # broken on your system. 
# Defaults: EnableMMAP On, EnableSendfile Off # EnableMMAP off EnableSendfile off # Supplemental configuration # # The configuration files in the etc/extra/ directory can be # included to add extra features or to modify the default configuration of # the server, or you may simply copy their contents here and change as # necessary. # Server-pool management (MPM specific) #Include etc/extra/httpd-mpm.conf # Multi-language error messages Include etc/extra/httpd-multilang-errordoc.conf # Fancy directory listings Include etc/extra/httpd-autoindex.conf # Language settings #Include etc/extra/httpd-languages.conf # User home directories #Include etc/extra/httpd-userdir.conf # Real-time info on requests and configuration #Include etc/extra/httpd-info.conf # Virtual hosts Include etc/extra/httpd-vhosts.conf # Local access to the Apache HTTP Server Manual #Include etc/extra/httpd-manual.conf # Distributed authoring and versioning (WebDAV) #Include etc/extra/httpd-dav.conf # Various default settings Include etc/extra/httpd-default.conf # Configure mod_proxy_html to understand HTML4/XHTML1 <IfModule proxy_html_module> Include etc/extra/proxy-html.conf </IfModule> # Secure (SSL/TLS) connections <IfModule ssl_module> # XAMPP <IfDefine SSL> Include etc/extra/httpd-ssl.conf </IfDefine> </IfModule> # # Note: The following must must be present to support # starting without SSL on platforms with no /dev/random equivalent # but a statically compiled-in mod_ssl. # <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> # XAMPP Include etc/extra/httpd-xampp.conf Include "/opt/lampp/apache2/conf/httpd.conf" I used command shown in this example. I used below lines to change and add group Add group "groupadd www" Add user to group "usermod -aG www root" Change htdocs group "chgrp -R www /opt/lampp/htdocs" Change sitedir group "chgrp -R www /opt/lampp/htdocs/mysite" Change htdocs chmod "chmod 2775 /opt/lampp/htdocs" Change sitedir chmod "chmod 2775 /opt/lampp/htdocs/mysite" And then I changed my vhosts.conf file # Virtual Hosts # # Required modules: mod_log_config # If you want to maintain multiple domains/hostnames on your # machine you can setup VirtualHost containers for them. Most configurations # use only name-based virtual hosts so the server doesn't need to worry about # IP addresses. This is indicated by the asterisks in the directives below. # # Please see the documentation at # <URL:http://httpd.apache.org/docs/2.4/vhosts/> # for further details before you try to setup virtual hosts. # # You may use the command line option '-S' to verify your virtual host # configuration. # # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for all requests that do not # match a ServerName or ServerAlias in any <VirtualHost> block. 
# <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/opt/lampp/docs/dummy-host.example.com" ServerName dummy-host.example.com ServerAlias www.dummy-host.example.com ErrorLog "logs/dummy-host.example.com-error_log" CustomLog "logs/dummy-host.example.com-access_log" common </VirtualHost> <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "/opt/lampp/docs/dummy-host2.example.com" ServerName dummy-host2.example.com ErrorLog "logs/dummy-host2.example.com-error_log" CustomLog "logs/dummy-host2.example.com-access_log" common </VirtualHost> NameVirtualHost * <VirtualHost *> ServerAdmin [email protected] DocumentRoot "/opt/lampp/htdocs/mysite" ServerName mysite.com ServerAlias mysite.com ErrorLog "/opt/lampp/htdocs/mysite/errorlogs" CustomLog "/opt/lampp/htdocs/mysite/customlog" common <Directory "/opt/lampp/htdocs/mysite"> Options Indexes FollowSymLinks Includes ExecCGI AllowOverride All Order Allow,Deny Allow from all Require all granted </Directory> </VirtualHost> but still its not working and I am getting 403 error on my ip and domain however I can access phpmyadmin. If anyone can help me, please help me.
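    A short sketch of checks that usually narrows down a 403 like this (the paths are the ones from the question above; the commands are standard Apache and util-linux tools, shown as a starting point rather than a verified fix):

        /opt/lampp/bin/httpd -S               # confirm which VirtualHost and DocumentRoot Apache actually resolves
        namei -m /opt/lampp/htdocs/mysite     # every directory on the path needs execute permission for the Apache user/group
        tail -n 20 /opt/lampp/logs/error_log  # the error_log says whether the 403 is a filesystem permission or a Require denial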

    Read the article

  • Maintaining Revision Levels

    - by kyle.hatlestad
    A question that came up on an earlier blog post was how to limit the number of revisions on a piece of content. UCM does not inherently enforce any sort of limit on how many revisions you can have. It's unlimited. In some cases, there may be content that goes through lots of changes, but there just simply isn't a need to keep all of its revisions around. Deleting those revisions through the content information screen can be very cumbersome. And going through the Repository Manager applet can take time as well to filter and find the revisions to get rid of. But there is an easier way through the Archiver. The Export Query criteria in Archiver includes a very handy field called 'Revision Rank'. With revision labels, they typically go up as new revisions come in (e.g. 1, 2, 3, 4, etc...). But you can't really use this field to tell it to keep the top 5 revisions. Those top 5 revision numbers are always going up. But revision rank goes the opposite direction. The very latest revision is always 0. The previous revision to that is 1. Previous revision to that is 2. And so on and so forth. With revision rank, you can set your query to look for any Revision Rank greater or equal to 5. Now as older revisions move down the line, their revision rank gets higher and higher until they reach that threshold. Then when you run that archive export, you can choose to delete and remove those revisions. Running that export in Archiver is normally a manual process. But with Idc Command, you can script the process and have it run automatically from the server. Idc Command is a utility that allows you to run any of the content server services via the command line. You basically feed it a text file with the services and parameters defined along with the user to run it as. The Idc Command executable is located within the \bin\ directory: $ ./IdcCommand -f DeleteOlderRevisions.txt -u sysadmin -l delete_revisions.log In this example, our IdcCommand file to run the export and do the deletions would look like: IdcService=EXPORT_ARCHIVE aArchiveName=DeleteOlderRevisions aDoDelete=1 IDC_Name=idc dataSource=RevisionIDs <<EOD>> You can then use automated scheduling routines in the OS to run the command and command file at the frequency needed. Remember that you are deleting the revisions from within UCM, but they are still getting placed within the archive. So you will need to delete those batches to have them fully removed (or re-import if you need to recover them). For more information about Idc Command, you can find that in the Idc Command Reference Guide.
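    For example, a cron entry along these lines would run the export weekly; the paths and schedule are placeholders rather than values from the post:

        # m h dom mon dow  command  -- run Sundays at 2am; adjust the paths to your content server install
        0 2 * * 0  /u01/ucm/server/bin/IdcCommand -f /u01/ucm/custom/DeleteOlderRevisions.txt -u sysadmin -l /u01/ucm/logs/delete_revisions.log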

    Read the article

  • How to share two keyboards on the same laptop, a French ISO layout and a USA ANSI layout keyboard over USB?

    - by reyman64
    I recently bought a "Noppoo Choc Mini" with a specific ANSI US-International PC84 layout. This keyboard has only 84 keys, a 60% (compact tenkeyless) reduced layout. My problem is simple: there is no keyboard layout in Ubuntu 12.04 that corresponds to this normal US ANSI layout, and the same applies to the reduced 84-key version. I am looking for a template of the normal ANSI US-International layout for xmodmap/xkb, after which I can try to manually map the remaining keys. I searched on Google and did not find any other user with the same problem, so it seems I do not have the right keywords to search for this information. EDIT 1: Here you can see there is probably a bug in Ubuntu, because the layout for USA with dead keys is not correct. I have this: http://minus.com/lEdKMrsNAwkVA and other users have this for the same layout: http://i.stack.imgur.com/p52XG.png EDIT 2: After a "sudo dpkg-reconfigure keyboard-configuration" (French standard keyboard pc105 plus the Precision M65 keyboard of the Dell laptop), I can now see the correct US layout in the settings, but I cannot keep the ISO layout for French usage. EDIT 3: OK, after a reboot I understand the problem. I have a laptop with an integrated French keyboard, and I want to use my USB keyboard, which uses a US ANSI layout. It seems it is impossible in Ubuntu and "dpkg-reconfigure keyboard-configuration" to share two different physical layouts (ANSI and EU ISO) on the same computer. EDIT 4: It seems I can switch the physical layout (ISO and ANSI) with these commands in a terminal: setxkbmap -layout us, setxkbmap -layout us -variant alt-intl, and setxkbmap -layout fr. It is very complicated, and it seems Ubuntu 12.04 has a real problem with its keyboard manager, because everything works fine with these commands, without ANY change in the system keyboard settings. A second bug? The image shown for the fr layout is wrong; the layout displayed is not ISO, yet I can press the "<" key to the left of the right Shift without any problem. You can see the image here (a French alternative with an ANSI layout? it's crazy): http://minus.com/lXsDJwoeyWAfF Can you help me on this point? I'm lost with xkb, and manual mapping is very complicated. Thanks a lot, SR
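    One avenue worth trying is that setxkbmap can also target a single input device, so each physical keyboard could keep its own layout; this is an untested sketch and the device ids are examples:

        xinput list                        # note the ids of the USB keyboard and the internal one
        setxkbmap -device 12 -layout us    # example id for the Noppoo (US ANSI)
        setxkbmap -device 11 -layout fr    # example id for the laptop's internal French keyboard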

    Read the article

  • Important Note for Enablement Service Pack 1 for UPK 3.6.1

    - by marc.santosusso
    The following was originally posted to one of the UPK communities on LinkedIn. Since this post generated some feedback that this information was not well known, I thought it would be good to repost, which I've done with permission from Earl Sullivan. This is an FYI for those who have UPK 3.6.1 and have applied Enablement Pack 1. There is a manual database update that needs to be run. Here is the information: To correct an issue with permissioning in the Library, this Service Pack, issued in March 2010, also contains scripts to update the database on the Oracle Database or Microsoft SQL Server. Once you have run the Setup.exe file for the Service Pack, the necessary script files can be found at the root of the folder where the Developer is installed. These scripts must be run manually according to the instructions below.
    To update a database located on an Oracle Database server manually:
    1. Run the Setup.exe to install the files for the Service Pack.
    2. Start SQL*Plus and log in with the system account.
    3. At the command prompt, enter the path to the AlterSchemaObjects.sql script located at the root of the folder where the Developer is installed, and append the following parameters: schema_owner (there is a limit of 20 characters on the schema owner name; you can find this information in the web.config file located in Repository.WS in the folder where the server is installed) and password (the existing schema owner password). Statement with generic parameters: @C:\AlterSchemaObjects.sql schema_owner password
    4. Run the AlterSchemaObjects.sql script.
    To update a database located on a Microsoft SQL Server manually:
    1. Run the Setup.exe to install the files for the Service Pack.
    2. Log in to the database using the database administrator account.
    3. Open and edit the AlterDBObjects.sql file located at the root of the folder where the Developer is installed.
    4. Replace the ODServer text with the username used when the database was installed. You can find this information in the web.config file located in the Repository.WS folder in the folder where the server is installed.
    5. Change the database from master to the name of the existing Developer database and run the AlterDBObjects.sql script. Note: the database name is the initial catalog in the connection string in the web.config file.
    Editor's note: The database update fixes a problem with permissions where the permissions for a user are incorrectly updated when a group that the user was removed from has its permissions changed.
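    For the Oracle path, a run might look like this; the schema owner and password shown are placeholders, so substitute the real values taken from web.config:

        C:\> sqlplus system@ORCL
        SQL> @C:\AlterSchemaObjects.sql UPK_OWNER upk_owner_password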

    Read the article

  • Mini Book Review of IronRuby Unleashed by Shay Friedman

    - by Eric Nelson
    When I get some time (and hell starts to look a little chilly) I would love to do a more detailed review. But I wanted to get something "out there" as I really like this book and reviews of it seem a little thin on the ground. In brief: Is it a good book? Yes. Would I recommend this book to a .NET developer who was new to Ruby? Yes (this is me, by the way). Would I recommend this book to a Ruby developer who was new to .NET? Yes. Would I recommend this book to a developer who sometimes does Ruby and sometimes does .NET? Yes. Would I recommend this book to a developer new to .NET and new to Ruby? Yes. The above demonstrates how well balanced this book is (IMHO). What I like about it: It assumes pretty much no knowledge of IronRuby or .NET. All it asks is that you are a developer interested in IronRuby. Yet it manages to cover off the topics in a good degree of detail. If you are a Ruby developer you skip Part 2; if you are a .NET developer you skip some of Part 1 and whizz through the short intros to the individual technologies such as WPF. It is definitely not a "let's make the manual look pretty" book - this is original content, thoughtfully written and presented. It is pretty comprehensive - in 500 pages it packs in an intro to IronRuby, an intro to .NET, an intro to Ruby, using IronRuby with Windows Forms, ASP.NET, WPF, Silverlight etc., getting Rails working with IronRuby, unit testing with IronRuby (which I think is an excellent way for a .NET developer to start using IronRuby), and embedding IronRuby in a .NET app (another interesting "first step" for a .NET developer). What I didn't like: Err... nothing yet. OK, if I am being picky then the start of chapter 2 irked me a little as it went through the history of .NET. "The first version [of the .NET Framework] wasn't that great". Felt pretty good to me compared to Java and C++ development at the time :-) Buy on Amazon UK | Buy on Amazon USA Related Links: Posts from the author Shay Friedman on IronRuby | Guest Post: What's IronRuby, and how do I put it on Rails? | Guest Post: Using IronRuby and .NET to produce the 'Hello World of WPF' | Getting PHP and Ruby working on Windows Azure and SQL Azure

    Read the article

  • Is there a way to update all Java related alternatives?

    - by James McMahon
    Is there a way to quickly switch over all the Java related alternatives using update-alternatives? For instance, if want to switch Java over to 7, I run sudo update-alternatives --config java and select the Java 7 OpenJdk. But if I run update-alternatives --get-selections | grep java I get the following, appletviewer auto /usr/lib/jvm/java-6-openjdk-amd64/bin/appletviewer extcheck auto /usr/lib/jvm/java-6-openjdk-amd64/bin/extcheck idlj auto /usr/lib/jvm/java-6-openjdk-amd64/bin/idlj itweb-settings auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/itweb-settings jar auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jar jarsigner auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jarsigner java manual /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java javac auto /usr/lib/jvm/java-6-openjdk-amd64/bin/javac javadoc auto /usr/lib/jvm/java-6-openjdk-amd64/bin/javadoc javah auto /usr/lib/jvm/java-6-openjdk-amd64/bin/javah javap auto /usr/lib/jvm/java-6-openjdk-amd64/bin/javap javaws auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws jconsole auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jconsole jdb auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jdb jexec auto /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/jexec jhat auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jhat jinfo auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jinfo jmap auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jmap jps auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jps jrunscript auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jrunscript jsadebugd auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jsadebugd jstack auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jstack jstat auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jstat jstatd auto /usr/lib/jvm/java-6-openjdk-amd64/bin/jstatd keytool auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/keytool native2ascii auto /usr/lib/jvm/java-6-openjdk-amd64/bin/native2ascii orbd auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/orbd pack200 auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/pack200 policytool auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/policytool rmic auto /usr/lib/jvm/java-6-openjdk-amd64/bin/rmic rmid auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmid rmiregistry auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmiregistry schemagen auto /usr/lib/jvm/java-6-openjdk-amd64/bin/schemagen serialver auto /usr/lib/jvm/java-6-openjdk-amd64/bin/serialver servertool auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/servertool tnameserv auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/tnameserv unpack200 auto /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/unpack200 wsgen auto /usr/lib/jvm/java-6-openjdk-amd64/bin/wsgen wsimport auto /usr/lib/jvm/java-6-openjdk-amd64/bin/wsimport xjc auto /usr/lib/jvm/java-6-openjdk-amd64/bin/xjc As you can see, my Java alternative was switched over to 7, but every other alternative based on OpenJDK 6 was not switched over. Sure I could switch each one manually or write a script to do so, but I assume there is a better way to accomplish this.
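    The "write a script to do so" route mentioned in the question could look roughly like this untested sketch, which re-points every alternative currently resolving into the OpenJDK 6 tree at the matching OpenJDK 7 binary where one exists:

        #!/bin/sh
        # Untested sketch: switch each java-6-openjdk alternative to its java-7 counterpart, if present.
        for name in $(update-alternatives --get-selections | awk '/java-6-openjdk-amd64/ {print $1}'); do
          target=$(update-alternatives --list "$name" | grep java-7-openjdk-amd64 | head -n 1)
          [ -n "$target" ] && sudo update-alternatives --set "$name" "$target"
        done

    If the java-common package is installed, the update-java-alternatives helper (for example, sudo update-java-alternatives -s java-1.7.0-openjdk-amd64) may switch the whole set in one step, depending on which packages provide that alternative set.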

    Read the article

< Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >