Search Results

Search found 45063 results on 1803 pages for 'name mangling'.


  • Rename Applications and Virtual Directories in IIS7

    - by AngelEyes
    from http://lanitdev.wordpress.com/2010/09/02/rename-applications-and-virtual-directories-in-iis7/

    Rename Applications and Virtual Directories in IIS7 (September 2, 2010, by Brian Grinstead)

    Have you ever wondered why the box to change the name or “Alias” on an application or virtual directory is greyed out (see screenshot below)? I found a way to change the name without recreating all your settings. It uses the built-in administration tool in IIS7, called appcmd.

    Renaming Applications in IIS7

    Open a command prompt to see all of your applications:

        C:\> %systemroot%\system32\inetsrv\appcmd list app

            APP "Default Web Site/OldApplicationName"
            APP "Default Web Site/AnotherApplication"

    Run a command like this to change the “OldApplicationName” path to “NewApplicationName”. Afterwards you can use http://localhost/newapplicationname

        C:\> %systemroot%\system32\inetsrv\appcmd set app "Default Web Site/OldApplicationName" -path:/NewApplicationName

            APP object "Default Web Site/OldApplicationName" changed

    Renaming Virtual Directories in IIS7

    Open a command prompt to see all of your virtual directories:

        C:\> %systemroot%\system32\inetsrv\appcmd list vdir

            VDIR "Default Web Site/OldApplicationName/Images" (physicalPath:\\server\images)
            VDIR "Default Web Site/OldApplicationName/Data/Config" (physicalPath:\\server\config)

    We want to rename /Images to /Images2 and /Data/Config to /Data/Config2. Here are the example commands:

        C:\> %systemroot%\system32\inetsrv\appcmd set vdir "Default Web Site/OldApplicationName/Images" -path:/Images2

            VDIR object "Default Web Site/OldApplicationName/Images" changed

        C:\> %systemroot%\system32\inetsrv\appcmd set vdir "Default Web Site/OldApplicationName/Data/Config" -path:/Data/Config2

            VDIR object "Default Web Site/OldApplicationName/Data/Config" changed
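    Before renaming anything, it may be prudent to back up the IIS configuration first; appcmd supports this as well. This precaution is an addition here, not part of the original post:

        C:\> %systemroot%\system32\inetsrv\appcmd add backup "BeforeRename"

    If something goes wrong, the backup can be restored with: appcmd restore backup "BeforeRename"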

    Read the article

  • RAID md device is not removed from memory; how do I overcome this problem?

    - by santhosha
    I created a RAID 10 array. I removed two drives from md11 one by one; after that, while I was editing content on the mounted filesystem (it went into a not-responding state), I tried to remove the remaining drives, but it shows "device or resource busy" (the array is not removed from memory). I tried to terminate the processes holding it, but that did not work either. For four days I have observed the resync stuck at 8.0%; it does not change.

        cat /proc/mdstat

        Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10]
        md11 : active raid10 sde1[3] sdj1[4]
              286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0]
              [=...................]  resync =  8.0% (23210368/286743936) finish=289392.6min speed=15K/sec

        mdadm -D /dev/md11

        /dev/md11:
                Version : 00.90.03
          Creation Time : Sun Jan 16 16:20:01 2011
             Raid Level : raid10
             Array Size : 286743936 (273.46 GiB 293.63 GB)
            Device Size : 143371968 (136.73 GiB 146.81 GB)
           Raid Devices : 4
          Total Devices : 2
        Preferred Minor : 11
            Persistence : Superblock is persistent
            Update Time : Sun Jan 16 16:56:07 2011
                  State : active, degraded, resyncing
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
                 Layout : near=2, far=1
             Chunk Size : 64K
         Rebuild Status : 8% complete
                   UUID : 5e124ea4:79a01181:dc4110d3:a48576ea
                 Events : 0.23

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       0        0        1      removed
               4       8      145        2      faulty spare rebuilding   /dev/sdj1
               3       8       65        3      active sync   /dev/sde1

        umount /dev/md11
        umount: /dev/md11: not mounted

        mdadm -S /dev/md11
        mdadm: fail to stop array /dev/md11: Device or resource busy

        lsof /dev/md11
        COMMAND     PID USER   FD   TYPE DEVICE SIZE NODE NAME
        mount      2128 root    3r   BLK   9,11 4058 /dev/md11
        mount      5018 root    3r   BLK   9,11 4058 /dev/md11
        mdadm     27605 root    3r   BLK   9,11 4058 /dev/md11
        mount     30562 root    3r   BLK   9,11 4058 /dev/md11
        badblocks 30591 root    3r   BLK   9,11 4058 /dev/md11

        kill -9 2128
        kill -9 5018
        kill -9 27605
        kill -9 30562
        kill -3 30591

        mdadm -S /dev/md11
        mdadm: fail to stop array /dev/md11: Device or resource busy

        lsof /dev/md11
        COMMAND     PID USER   FD   TYPE DEVICE SIZE NODE NAME
        mount      2128 root    3r   BLK   9,11 4058 /dev/md11
        mount      5018 root    3r   BLK   9,11 4058 /dev/md11
        mdadm     27605 root    3r   BLK   9,11 4058 /dev/md11
        mount     30562 root    3r   BLK   9,11 4058 /dev/md11
        badblocks 30591 root    3r   BLK   9,11 4058 /dev/md11

        cat /proc/mdstat
        Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [linear] [raid10]
        md11 : active raid10 sde1[3] sdj1[4]
              286743936 blocks 64K chunks 2 near-copies [4/1] [___U] [1:2:3:0]
              [=...................]  resync =  8.0% (23210368/286743936) finish=289392.6min speed=15K/sec
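    One detail worth noting (an editorial observation, not from the asker): in the kill commands above, the badblocks process (PID 30591) was sent kill -3, which is SIGQUIT rather than SIGKILL, so it likely survived and kept the device open. A hedged sketch of a next step to try:

        # badblocks was sent SIGQUIT (kill -3); send SIGKILL instead
        kill -9 30591
        # confirm nothing still holds the device
        lsof /dev/md11
        # retry stopping the array once no opener remains
        mdadm --stop /dev/md11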

    Read the article

  • SQL SERVER – Reduce the Virtual Log Files (VLFs) from LDF file

    - by pinaldave
    Earlier, I wrote a quick note on SQL SERVER – Detect Virtual Log Files (VLF) in LDF. Because of this, I got responses suggesting that too many VLFs are bad for the log file. This prompts a simple question: “How many is ‘too many’ VLFs?” I suggest that you go and read an article written by Kimberly over here; I am sure that from it you will get a clear understanding of what a good number for your VLFs is. If you have lots of VLFs, you can reduce them right away using the following method (I am just attempting to write a working script over here):

        USE AdventureWorks
        GO
        BACKUP LOG AdventureWorks TO DISK='d:\adtlog.bak'
        GO
        -- Get logical file name of the log file
        sp_helpfile
        GO
        DBCC SHRINKFILE(AdventureWorks_Log, TRUNCATEONLY)
        GO
        ALTER DATABASE AdventureWorks
        MODIFY FILE (NAME = AdventureWorks_Log, SIZE = 1GB)
        GO
        DBCC LOGINFO
        GO

    Again, here I have assumed that your initial log size is 1 GB, but in reality you should select the number based on your own ideal size of the log file. If your log file grows to 10 GB every day, you may want to set the value to 10 GB. For accuracy, read what Kimberly’s original article says over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
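    To see how many VLFs you currently have, a quick sketch along these lines works (an addition to the original post; it assumes the pre-SQL Server 2012 DBCC LOGINFO column layout):

        -- capture DBCC LOGINFO into a temp table and count the rows (one row per VLF)
        CREATE TABLE #vlf (FileId INT, FileSize BIGINT, StartOffset BIGINT,
                           FSeqNo BIGINT, [Status] INT, Parity INT, CreateLSN NUMERIC(38));
        INSERT INTO #vlf EXEC ('DBCC LOGINFO');
        SELECT COUNT(*) AS VLFCount FROM #vlf;
        DROP TABLE #vlf;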

    Read the article

  • You Can't Win on Price

    - by David Dorf
    This year I did the majority of my Christmas shopping from the comfort of my home office. There aren't many things in stores you can't find online these days. I find it easier to search, research, and compare products online rather than walking the mall anyway. But there's a segment of the population that likes to be in the store, touching the products. For those people, smartphones bring some of the e-commerce features I mentioned right into the aisles. First it was RedLaser, then TheFind, ShopSavvy and many others. But the one that should be scaring retailers is Amazon's PriceCheck application. It lets you scan the product barcode, take a picture of the product, or speak the product's name. Once the product is identified, it shows the online prices, with Amazon at the top of the list. Within 10 seconds you can order the item, and Amazon Prime members get free 2-day shipping too. I don't think fashion and grocery retailers need to worry much, but I have to believe smartphones are helping Amazon win a little more of the brand-name hardgoods market. So what's a retailer to do? Best Buy has begun to put QR codes on their shelf labels; these are easily scanned by smartphones and take the consumer to a Best Buy web page with extended information about the product. The consumer gets the additional information they want, and Best Buy avoids the price comparison. Of course, if a consumer chooses to use the Amazon PriceCheck app, then all bets are off. That's when Best Buy has to hope the in-store experience and customer service will save the sale. My point is that the internet makes information available to everyone, and smartphones make it available anywhere. Unless you want your store to be Amazon's local showroom, you need to be price-competitive but differentiate on other aspects of the shopping experience. With the cost of running a physical store, you can't win on price.

    Read the article

  • Display a Text Message During Bootup of Windows 7

    - by Mysticgeek
    Sometimes you might want to leave a text message for a user before they log into a Windows 7 computer. Today we show you a neat trick that allows you to leave a message they can read before logging in.

    Add a Text Message

    To add a message, click on Start, enter regedit into the Search box, and hit Enter. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System and double-click on legalnoticecaption. In the Value data field, enter the header you want (for instance, your company name or the name of your computer), then click OK. Then double-click on legalnoticetext, and in the Value data field enter the message you want to display and click OK. Close out of Registry Editor and reboot the computer. After the machine reboots you’ll see the text message you just created at the Welcome screen. You can include whatever text message you want the user to read before they log in. This is a neat trick if you have a company or school and want to show a particular message to the user before they log into the machine.
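    The same change can be scripted with a .reg file, which is handy for rolling out to several machines. This is a sketch with placeholder values, not part of the original article:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
        "legalnoticecaption"="My Company Name"
        "legalnoticetext"="This computer is for authorized users only."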

    Read the article

  • MVC Automatic Menu

    - by Nuri Halperin
    An ex-colleague of mine used to call his SQL script generator "Super-Scriptmatic 2000". It impressed our then boss little, but was fun to say and use. We called every batch job and script "something 2000" from that day on. I'm tempted to call this one Menu-Matic 2000, except it's waaaay past 2000. Oh well.

    The problem: I'm developing a bunch of stuff in MVC. There's no PM to generate mounds of requirements and there's no UX architect to create wireframes. During development, things change. Specifically, actions get renamed, moved from controller X to Y, etc. As the site grows, it becomes a major pain to keep a static menu up to date, because the links change. The HtmlHelper doesn't live up to its name and provides little help. How do I keep this growing list of pesky little forgotten actions reined in? The general plan is:

    - Decorate every action you want as a menu item with a custom attribute
    - Reflect out all menu items into a structure at load time
    - Render the menu as CSS-friendly <ul><li> HTML

    The MvcMenuItemAttribute decorates an action, designating it to be included as a menu item:

        [AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
        public class MvcMenuItemAttribute : Attribute
        {
            public string MenuText { get; set; }
            public int Order { get; set; }
            public string ParentLink { get; set; }
            internal string Controller { get; set; }
            internal string Action { get; set; }

            #region ctor
            public MvcMenuItemAttribute(string menuText) : this(menuText, 0) { }

            public MvcMenuItemAttribute(string menuText, int order)
            {
                MenuText = menuText;
                Order = order;
            }

            internal string Link
            {
                get { return string.Format("/{0}/{1}", Controller, this.Action); }
            }

            internal MvcMenuItemAttribute ParentItem { get; set; }
            #endregion
        }

    The MenuText allows overriding the text displayed on the menu. The Order allows the items to be ordered. The ParentLink allows you to make this item a child of another menu item. An example action could then be decorated thusly: [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]. All pretty straightforward, methinks.

    The challenge with menu hierarchy becomes fairly apparent when you try to render a menu and highlight the "current" item or render a breadcrumb control. Both encounter an ambiguity if you allow a data source to have more than one menu item with the same URL link. The issue is that there is no great way to tell which link a person clicked. Using the referring URL will fail if a user bookmarked the page. Using some extra query string to disambiguate duplicate URLs essentially changes the links, and also adds a chance of collision with other query parameters. Besides, that smells. The stock ASP.NET sitemap provider simply disallows duplicate URLs. I decided not to, and simply pick the first one encountered as the "current" one. Although it doesn't solve the issue completely (one might say they wanted the second of the 2 links to be "current"), it allows one to include a link twice (home->deals and products->deals etc.), and the logic of deciding "current" is easy enough to explain to the customer.
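    For illustration, a hypothetical controller decorated this way might look like the following; the controller and action names are invented, only the attribute itself comes from the post:

        public class SessionController : Controller
        {
            // shows up as a top-level menu item
            [MvcMenuItem("Sessions", Order = 10)]
            public ActionResult Index()
            {
                return View();
            }

            // shows up as a child of /Session/Index in the menu hierarchy
            [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]
            public ActionResult Tracks()
            {
                return View();
            }
        }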
    Now that we got that out of the way, let's build the menu data structure:

        public static List<MvcMenuItemAttribute> ListMenuItems(Assembly assembly)
        {
            var result = new List<MvcMenuItemAttribute>();
            foreach (var type in assembly.GetTypes())
            {
                if (!type.IsSubclassOf(typeof(Controller)))
                {
                    continue;
                }
                foreach (var method in type.GetMethods())
                {
                    var items = method.GetCustomAttributes(typeof(MvcMenuItemAttribute), false) as MvcMenuItemAttribute[];
                    if (items == null)
                    {
                        continue;
                    }
                    foreach (var item in items)
                    {
                        if (String.IsNullOrEmpty(item.Controller))
                        {
                            item.Controller = type.Name.Substring(0, type.Name.Length - "Controller".Length);
                        }
                        if (String.IsNullOrEmpty(item.Action))
                        {
                            item.Action = method.Name;
                        }
                        result.Add(item);
                    }
                }
            }
            return result.OrderBy(i => i.Order).ToList();
        }

    Using reflection, the ListMenuItems method takes an assembly (you will hand it your MVC web assembly) and generates a list of menu items. It digs up all the types, and for each one that is an MVC Controller, digs up the methods. Methods decorated with the MvcMenuItemAttribute get plucked and added to the output list. Again, pretty simple. To make the structure hierarchical, a LINQ expression matches up all the items to their parent:

        public static void RegisterMenuItems(List<MvcMenuItemAttribute> items)
        {
            _MenuItems = items;
            _MenuItems.ForEach(i => i.ParentItem =
                items.FirstOrDefault(p => String.Equals(p.Link, i.ParentLink, StringComparison.InvariantCultureIgnoreCase)));
        }

    The _MenuItems is simply an internal list to keep things around for later rendering. Finally, to package the menu building for easy consumption:

        public static void RegisterMenuItems(Type mvcApplicationType)
        {
            RegisterMenuItems(ListMenuItems(Assembly.GetAssembly(mvcApplicationType)));
        }

    To bring this puppy home, a call in Global.asax.cs Application_Start() registers the menu. Notice the ugliness of reflection is tucked away from the innocent developer. All they have to do is call RegisterMenuItems() and pass in the type of the application. When you use the new project template, global.asax declares a class public class MvcApplication : HttpApplication, and that is why the Register call passes in that type.

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);

            MvcMenu.RegisterMenuItems(typeof(MvcApplication));
        }

    What else is left to do? Oh, right, render!
        public static void ShowMenu(this TextWriter output)
        {
            var writer = new HtmlTextWriter(output);
            renderHierarchy(writer, _MenuItems, null);
        }

        public static void ShowBreadCrumb(this TextWriter output, Uri currentUri)
        {
            var writer = new HtmlTextWriter(output);
            string currentLink = "/" + currentUri.GetComponents(UriComponents.Path, UriFormat.Unescaped);

            var menuItem = _MenuItems.FirstOrDefault(m => m.Link.Equals(currentLink, StringComparison.CurrentCultureIgnoreCase));
            if (menuItem != null)
            {
                renderBreadCrumb(writer, _MenuItems, menuItem);
            }
        }

        private static void renderBreadCrumb(HtmlTextWriter writer, List<MvcMenuItemAttribute> menuItems, MvcMenuItemAttribute current)
        {
            if (current == null) { return; }
            var parent = current.ParentItem;
            renderBreadCrumb(writer, menuItems, parent);
            writer.Write(current.MenuText);
            writer.Write(" / ");
        }

        static void renderHierarchy(HtmlTextWriter writer, List<MvcMenuItemAttribute> hierarchy, MvcMenuItemAttribute root)
        {
            if (!hierarchy.Any(i => i.ParentItem == root)) return;

            writer.RenderBeginTag(HtmlTextWriterTag.Ul);
            foreach (var current in hierarchy.Where(element => element.ParentItem == root).OrderBy(i => i.Order))
            {
                if (ItemFilter == null || ItemFilter(current))
                {
                    writer.RenderBeginTag(HtmlTextWriterTag.Li);
                    writer.AddAttribute(HtmlTextWriterAttribute.Href, current.Link);
                    writer.AddAttribute(HtmlTextWriterAttribute.Alt, current.MenuText);
                    writer.RenderBeginTag(HtmlTextWriterTag.A);
                    writer.WriteEncodedText(current.MenuText);
                    writer.RenderEndTag(); // link
                    renderHierarchy(writer, hierarchy, current);
                    writer.RenderEndTag(); // li
                }
            }
            writer.RenderEndTag(); // ul
        }

    The ShowMenu method renders the menu out to the provided TextWriter. In previous posts I've discussed my partiality to using the well-debugged, time-tested HtmlTextWriter to render HTML rather than writing out angle brackets by hand. In addition, writing out using the actual writer on the actual stream rather than generating string and byte intermediaries (yes, StringBuilder being no exception) disturbs me. To carry out the rendering of a hierarchical menu, the recursive renderHierarchy() is used. You may notice that an ItemFilter is called before rendering each item. I figured that at some point one might want to exclude certain items from the menu based on security role or context or something; that delegate is the hook for such a future feature. To render a breadcrumb, recursion is used again, this time simply to unwind the parent hierarchy from the leaf node, then rendering on the return from the recursion rather than as we go along deeper. I guess I was stuck in LISP that day... recursion is fun, though.

    Now all that is left is some usage! Open your Site.Master or wherever you'd like to place a menu or breadcrumb, and plant one of these calls:

        <% MvcMenu.ShowBreadCrumb(this.Writer, Request.Url); %>   to show a breadcrumb trail (notice the lack of "=" after <% and the semicolon).

        <% MvcMenu.ShowMenu(Writer); %>   to show the menu.

    As mentioned before, the HTML output is nested <ul><li> tags, which should make it easy to style using abundant CSS to produce anything from static horizontal or vertical menus to dynamic drop-downs.

    This has been quite a fun little implementation and I was pleased that the code size remained low. The main crux was figuring out how to pass parent information from the attribute to the hierarchy builder, because attributes have restricted parameter types. Once I settled on that implementation, the rest fell into place quite easily.
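    As a starting point for styling, a minimal CSS sketch for a horizontal menu with drop-downs might look like this (the selectors are illustrative and assume the menu is rendered inside an element with id "menu"; none of this comes from the post):

        /* top-level items horizontal; nested <ul> hidden until hover */
        #menu ul { list-style: none; margin: 0; padding: 0; }
        #menu > ul > li { display: inline-block; position: relative; padding: 4px 10px; }
        #menu ul ul { display: none; position: absolute; left: 0; top: 100%; }
        #menu li:hover > ul { display: block; }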

    Read the article

  • OWB 11gR2 - Early Arriving Facts

    - by Dawei Sun
    A common challenge when building ETL components for a data warehouse is how to handle early arriving facts. OWB 11gR2 introduced a new feature for dimensional objects to address this, called Orphan Management. An orphan record is one that does not have a corresponding existing parent record. Orphan management automates the process of handling source rows that do not meet the requirements necessary to form a valid dimension or cube record. In this article, a simple example will show you how to use Orphan Management in OWB. We first import a sample MDL file that contains all the objects we need and take some time to examine them. After that, we prepare the source data and deploy the target tables and the dimension/cube loading maps. Finally, we run the loading maps and check the data in the target dimension/cube tables. OK, let’s start…

    1. Import the MDL file and examine the sample project

    First, download the zip file from here; it includes an MDL file and three source data files. Then open OWB Design Center and import orphan_management.mdl using the menu File->Import->Warehouse Builder Metadata. Now we have several objects in the BI_DEMO project:

    - Mapping LOAD_CHANNELS_OM: the mapping for dimension loading.
    - Mapping LOAD_SALES_OM: the mapping for cube loading.
    - Dimension CHANNELS_OM: the dimension that contains channels data.
    - Cube SALES_OM: the cube that contains sales data.
    - Table CHANNELS_OM: the star implementation table of dimension CHANNELS_OM.
    - Table SALES_OM: the star implementation table of cube SALES_OM.
    - Table SRC_CHANNELS: the source table of channels data that will be loaded into dimension CHANNELS_OM.
    - Tables SRC_ORDERS and SRC_ORDER_ITEMS: the source tables of sales data that will be loaded into cube SALES_OM.
    - Sequence CLASS_OM_DIM_SEQ: the sequence used for loading dimension CHANNELS_OM.

    Dimension CHANNELS_OM

    This dimension has a hierarchy with three levels: TOTAL, CLASS and CHANNEL. Each level has three attributes: ID (surrogate key), NAME and SOURCE_ID (business key). It has a standard star implementation. The orphan management policy and the default parent setting are shown in the following screenshots. The orphan management policy options that you can set for loading are:

    - Reject Orphan: the record is not inserted.
    - Default Parent: you can specify a default parent record. This default record is used as the parent record for any record that does not have an existing parent record. If the default parent record does not exist, Warehouse Builder creates it. You specify the attribute values of the default parent record at the time of defining the dimensional object. If any ancestor of the default parent does not exist, Warehouse Builder also creates this record.
    - No Maintenance: this is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.

    While removing data from a dimension, you can select one of the following orphan management policies:

    - Reject Removal: Warehouse Builder does not allow you to delete the record if it has existing child records.
    - No Maintenance: this is the default behavior. Warehouse Builder does not actively detect, reject, or fix orphan records.

    (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#insertedID1)

    Cube SALES_OM

    This cube references dimension CHANNELS_OM. It has three measures: AMOUNT, QUANTITY and COST. The orphan management policy settings are shown in the following screenshot. The orphan management policy options that you can set for loading are:

    - No Maintenance: Warehouse Builder does not actively detect, reject, or fix orphan rows.
    - Default Dimension Record: Warehouse Builder assigns a default dimension record for any row that has an invalid or null dimension key value. Use the Settings button to define the default parent row.
    - Reject Orphan: Warehouse Builder does not insert the row if it does not have an existing dimension record.

    (More details are at http://download.oracle.com/docs/cd/E11882_01/owb.112/e10935/dim_objects.htm#BABEACDG)

    Mapping LOAD_CHANNELS_OM

    This mapping loads source data from table SRC_CHANNELS into dimension CHANNELS_OM. The operator CHANNELS_IN is bound to table SRC_CHANNELS; CHANNELS_OUT is bound to dimension CHANNELS_OM. The TOTALS operator is used to generate a constant value for the top level of the dimension. The CLASS_FILTER operator is used to filter out the “invalid” class name, so we can see what happens when channel records with an “invalid” parent are loaded into the dimension. Some properties of the dimension operator in this mapping are important to orphan management (see the screenshot below):

    - Create Default Level Records: if YES, default level records will be created. This property must be set to YES for dimensions and cubes if one of their orphan management policies is “Default Parent” or “Default Dimension Record”. It is set to NO by default, so the user may need to set it to YES manually.
    - LOAD policy for INVALID keys / LOAD policy for NULL keys: these two properties have the same meaning as in the dimension editor. The values are set to match the dimension when the user drops the dimension into the mapping; the user does not need to modify them.
    - Record Error Rows: if YES, error rows will be inserted into the error table when loading the dimension.
    - REMOVE Orphan Policy: this property is used when removing data from a dimension. Since the dimension loading type is set to LOAD in this example, this property is disabled.

    Mapping LOAD_SALES_OM

    This mapping loads source data from tables SRC_ORDERS and SRC_ORDER_ITEMS into cube SALES_OM. The mapping looks a little complicated, but the operators in the red rectangle are used to filter out and generate the records with “invalid” or “null” dimension keys. Some properties of the cube operator in a mapping are important to orphan management (see the screenshot below):

    - Enable Source Aggregation: should be checked in this example. If the default dimension record orphan policy is set for the cube operator, it is recommended that source aggregation also be enabled. Otherwise, the orphan management processing may produce multiple fact rows with the same default dimension references, which will cause an “unstable rowset” execution error in the database, since the dimension refs are used as update match attributes for updating the fact table.
    - LOAD policy for INVALID keys / LOAD policy for NULL keys: these two properties have the same meaning as in the cube editor. The values are set to match the cube editor when the user drops the cube into the mapping; the user does not need to modify them.
    - Record Error Rows: if YES, error rows will be inserted into the error table when loading the cube.

    2. Deploy the objects and mappings

    We can now deploy the objects. First, make sure the location SALES_WH_LOCAL has been correctly configured. Then open Control Center Manager via the menu Tools->Control Center Manager. Expand BI_DEMO->SALES_WH_LOCAL and click the SALES_WH node in the project tree. Deploy all the objects in the following order:

    - Sequence CLASS_OM_DIM_SEQ
    - Tables CHANNELS_OM, SALES_OM, SRC_CHANNELS, SRC_ORDERS, SRC_ORDER_ITEMS
    - Dimension CHANNELS_OM
    - Cube SALES_OM
    - Mappings LOAD_CHANNELS_OM, LOAD_SALES_OM

    Note that we deployed the source tables as well. Normally, we import source tables from a database instead of deploying them to the target schema. In this example, however, we designed the source tables in OWB and deployed them to the database for the purpose of this demonstration.

    3. Prepare and examine the source data

    Before running the mappings, we need to populate and examine the source data. Run SRC_CHANNELS.sql, SRC_ORDERS.sql and SRC_ORDER_ITEMS.sql as the target user, then check the data in these three tables.

    Table SRC_CHANNELS:

        SQL> select rownum, id, class, name from src_channels;

    Records 1~5 are correct; they should be loaded into the dimension without error. Records 6, 7 and 8 have null parents; they should be loaded into the dimension with a default parent value and inserted into the error table at the same time. Records 9, 10 and 11 have “invalid” parents; they should be rejected by the dimension and inserted into the error table.

    Tables SRC_ORDERS and SRC_ORDER_ITEMS:

        SQL> select rownum, a.id, a.channel, b.amount, b.quantity, b.cost from src_orders a, src_order_items b where a.id = b.order_id;

    Record 178 has a null dimension reference; it should be loaded into the cube with a default dimension reference and inserted into the error table at the same time. Record 179 has an “invalid” dimension reference; it should be rejected by the cube and inserted into the error table. The other records should be aggregated and loaded into the cube correctly.

    4. Run the mappings and examine the target data

    In Control Center Manager, expand BI_DEMO->SALES_WH_LOCAL->SALES_WH->Mappings, right-click the LOAD_CHANNELS_OM node and click Start. Run mapping LOAD_SALES_OM the same way. When they finish successfully, we can check the data in the target tables.

    Table CHANNELS_OM:

        SQL> select rownum, total_id, total_name, total_source_id, class_id, class_name, class_source_id, channel_id, channel_name, channel_source_id from channels_om order by abs(dimension_key);

    Records 1, 2 and 3 are the default dimension records for the three levels. Records 8, 10 and 15 are the loaded records that originally had null parents; we see their parent name (class_name) is set to DEF_CLASS_NAME. The records whose CHANNEL_NAME is Special_4, Special_5 or Special_6 are not loaded into this table because of the invalid parent.

    Error table CHANNELS_OM_ERR:

        SQL> select rownum, class_source_id, channel_id, channel_name, channel_source_id, err$$$_error_reason from channels_om_err order by channel_name;

    We can see all the records with a null or invalid parent are inserted into this error table. The error reason is “Default parent used for record” for the first three records, and “No parent found for record” for the last three.

    Table SALES_OM:

        SQL> select a.*, b.channel_name from sales_om a, channels_om b where a.channels=b.channel_id;

    We can see the order record with a null channel_name has been loaded into the target table with a default channel_name. The one with an “invalid” channel_name is not loaded.

    Error table SALES_OM_ERR:

        SQL> select a.amount, a.cost, a.quantity, a.channels, b.channel_name, a.err$$$_error_reason from sales_om_err a, channels_om b where a.channels=b.channel_id(+);

    We can see the order records with a null or invalid channel_name are inserted into the error table. If the dimension reference column is null, the error reason is “Default dimension record used for fact”; if it is invalid, the error reason is “Dimension record not found for fact”.

    Summary

    This article illustrated the Orphan Management feature in OWB 11gR2. Automated orphan management policies improve ETL developer and administrator productivity by addressing an important cause of cube and dimension load failures, without requiring developers to explicitly build logic to handle these orphan rows.

    Read the article

  • ASP.NET Meta Keywords and Description

    - by Ben Griswold
    Some of the ASP.NET 4 improvements around SEO are neat.  The ASP.NET 4 Page.MetaKeywords and Page.MetaDescription properties, for example, are a welcome change.  There’s nothing earth-shattering going on here – you can now set these meta tags via your Master page’s code-behind rather than relying on updates to your markup alone.  It isn’t difficult to manage meta keywords and descriptions without these ASP.NET 4 properties, but I still appreciate the attention SEO is getting.  It’s nice to get a gentle reminder via new coding features that some of the more subtle aspects of one’s application deserve thought and attention too.  For the record, this is how I currently manage my meta:

        <meta name="keywords"
            content="<%= Html.Encode(ConfigurationManager.AppSettings["Meta.Keywords"]) %>" />
        <meta name="description"
            content="<%= Html.Encode(ConfigurationManager.AppSettings["Meta.Description"]) %>" />

    All Master pages assume the same keywords and description values as defined by the application settings.  Nothing fancy. Nothing dynamic. But it’s manageable.  It works, but I’m looking forward to the new way in ASP.NET 4.
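    For comparison, a minimal sketch of the ASP.NET 4 approach using the properties mentioned above, set from a page or master page code-behind (the app-settings keys are carried over from the markup example):

        protected void Page_Load(object sender, EventArgs e)
        {
            // ASP.NET 4: set the meta tags from code-behind instead of markup
            Page.MetaKeywords = ConfigurationManager.AppSettings["Meta.Keywords"];
            Page.MetaDescription = ConfigurationManager.AppSettings["Meta.Description"];
        }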

    Read the article

  • Why are marketing employees, product managers, etc. deserving of their own office, while programmers are jammed into a room with as many others as possible?

    - by TheImirOfGroofunkistan
    I don't understand why many (many) companies treat software developers like they are assembly line workers making widgets. Joel Spolsky has a great example of the problems this creates:

        With programmers, it's especially hard. Productivity depends on being able to juggle a lot of little details in short term memory all at once. Any kind of interruption can cause these details to come crashing down. When you resume work, you can't remember any of the details (like local variable names you were using, or where you were up to in implementing that search algorithm) and you have to keep looking these things up, which slows you down a lot until you get back up to speed. Here's the simple algebra. Let's say (as the evidence seems to suggest) that if we interrupt a programmer, even for a minute, we're really blowing away 15 minutes of productivity. For this example, let's put two programmers, Jeff and Mutt, in open cubicles next to each other in a standard Dilbert veal-fattening farm. Mutt can't remember the name of the Unicode version of the strcpy function. He could look it up, which takes 30 seconds, or he could ask Jeff, which takes 15 seconds. Since he's sitting right next to Jeff, he asks Jeff. Jeff gets distracted and loses 15 minutes of productivity (to save Mutt 15 seconds). Now let's move them into separate offices with walls and doors. Now when Mutt can't remember the name of that function, he could look it up, which still takes 30 seconds, or he could ask Jeff, which now takes 45 seconds and involves standing up (not an easy task given the average physical fitness of programmers!). So he looks it up. So now Mutt loses 30 seconds of productivity, but we save 15 minutes for Jeff. Ahhh!

    Quote Link | More Spolsky on Offices

    Why don't managers and owners see this?

    Read the article

  • Help on TileMapRenderer

    - by Crypted
    In my project, I'm trying to render a map using TileMapRenderer, but it doesn't show anything when I render it. When I use files from a tutorial instead, they are rendered correctly. When debugging, my TileAtlas instance shows a size of 0. I have used the Texture Packer UI for packing the images. Comparing with the tutorial's files, I can see that the index starts from 1 in my file and from 0 in the tutorial's, but changing it to 0 doesn't work either.

        map.png
        format: RGBA8888
        filter: Nearest,Nearest
        repeat: none
        Map
          rotate: false
          xy: 0, 0
          size: 32, 32
          orig: 32, 32
          offset: 0, 0
          index: 1
        Map
          rotate: false
          xy: 32, 0
          size: 32, 32
          orig: 32, 32
          offset: 0, 0
          index: 2
        Map
          rotate: false
          xy: 64, 0
          size: 32, 32
          orig: 32, 32
          offset: 0, 0
          index: 3
        Map
          rotate: false
          xy: 96, 0
          size: 32, 32
          orig: 32, 32
          offset: 0, 0
          index: 4
        Map
          rotate: false
          xy: 128, 0
          size: 32, 32
          orig: 32, 32
          offset: 0, 0
          index: 5

    Here is the beginning of the tmx file:

        <?xml version="1.0" encoding="UTF-8"?>
        <map version="1.0" orientation="orthogonal" width="20" height="20" tilewidth="32" tileheight="32">
          <tileset firstgid="1" name="a" tilewidth="32" tileheight="32">
            <image source="map.png" width="256" height="32"/>
          </tileset>
          <layer name="Tile Layer 1" width="20" height="20">
            <data>
              <tile gid="2"/>
              <tile gid="2"/>

    Apart from that, the tutorial files and my files seem to be similar. Can anyone help me here?

    Read the article

  • Allen for Umbraco - Upload photos from your iPhone - iPad and iPod Touch

    - by Vizioz Limited
    At last year's UK Umbraco Festival we gave a demo of our alpha version of Allen for Umbraco. At that stage the application only worked on an iPhone and was a very quick prototype to see what people thought. When we returned to our office the next day, we decided that if we were going to release Allen for Umbraco into the wild, we really should start again from scratch. There were two main reasons for this. The first was to ensure it was a truly universal application (i.e. one that can be installed on an iPhone, iPad or iPod) which looks and behaves differently depending on the device. The second was that we really wanted the application to be the foundation of more than just image uploading for Umbraco; to that end, we ensured the new version was built following proven design patterns and with lots of unit tests so that we can easily extend it. We have lots of plans for future versions of Allen for Umbraco, including adding iCloud support to keep all your settings in sync across your Apple devices. We are also working on support for Umbraco 5, which should be released soon. When you download the App and set up your site, make sure you have a look at the Image Resizing settings. By default these are set to resize your images to 512 pixels wide, but you can choose from a variety of resizing methods (by height, by width, fit within a frame, or the full-size image). Also, by default, when you select a photo you will see that the image is named with the date and time stamp of when the photograph was taken (or the current date and time if the original date is not stored in your image). If you click on this name, you can edit the name of your photo before it is uploaded. Finally, we are really keen to get your feedback: within the App's help section you will find a way to submit suggestions, and if needed, you can send support emails from within the App :) We hope you enjoy the first version of Allen for Umbraco and we look forward to bringing you lots of exciting additional functionality in the future!

    Read the article

  • /users/tags should contain scores

    - by Sean Patrick Floyd
    I am implementing some simple JavaScript/bookmarklet-based apps that show some reputation info, including the score in the user's top tags (roughly based on this previous bookmarklet of mine). Now I can get a user's top tags (using the API), and I can also get the per-tag score if the user is logged in, by dynamically parsing the tag's top users page. But it costs me one AJAX request per tag, and I have to download 10+ KB to extract a single numeric value. It would save a lot of traffic if the tags in <api>/users/<userid>/tags had a score field. The data seems to be there (after all, the top users pages use it), so it would just be a question of exposing it. Suggested structure:

        "tags": [
          {
            "name": {
              "description": "name of the tag",
              "values": "string",
              "optional": false,
              "suggested_buffer_size": 25
            },
            "score": {
              "description": "tag score, sum of up votes for answers on non-wiki questions",
              "values": "32-bit signed integer",
              "optional": false
            },
            "count": {
              "description": "tag count, exact meaning depends on context",
              "values": "32-bit signed integer",
              "optional": false
            },
            "restricted_to": {
              "description": "user types that can make use of this tag, lack of this field indicates it is useable by all",
              "values": "one of anonymous, unregistered, registered, or moderator",
              "optional": true
            },
            "fulfills_required": {
              "description": "indicates whether this tag is one of those that is required to be on a post",
              "values": "boolean",
              "optional": false
            },
            "user_id": {
              "description": "user associated with this tag, depends on context",
              "values": "32-bit signed integer",
              "optional": true
            }
          }
        ]
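    To make the proposal concrete, a single entry under this schema might look like the following (the values are hypothetical, for illustration only):

        "tags": [
          { "name": "java", "score": 1234, "count": 567, "fulfills_required": false, "user_id": 12345 }
        ]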

    Read the article

  • Custom Profile Provider with Web Deployment Project

    - by Ben Griswold
    I wrote about implementing a custom profile provider inside of your ASP.NET MVC application yesterday. If you haven’t read the article, don’t sweat it.  Most of the stuff I write is rubbish anyway. Since you have joined me today, though, I might as well offer up a little tip: you can run into trouble, like I did, if you enable your custom profile provider inside of an application which is deployed using a Web Deployment Project.  Everything will run great on your local machine and you’ll probably take an early lunch because you got the code running in no time flat and the build server is happy and all tests pass and, gosh, maybe you’ll just cut out early because it is Friday after all.  But then the first user hits the integration machine and, that’s right, yellow screen of death.  Lucky you, just as you’re walking out the door, the user kindly sends the exception message and stack trace:

        Value cannot be null.
        Parameter name: type
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Stack Trace:
        [ArgumentNullException: Value cannot be null. Parameter name: type]
           System.Activator.CreateInstance(Type type, Boolean nonPublic) +2796915
           System.Web.Profile.ProfileBase.CreateMyInstance(String username, Boolean isAuthenticated) +76
           System.Web.Profile.ProfileBase.Create(String username, Boolean isAuthenticated) +312

    User error?  Not this time. Damn! One hour later… you notice the harmless “Treat as library component (remove the App_Code.compiled file)” setting on the Output Assemblies tab of your Web Deployment Project.  You have no idea why, but you uncheck it.  You test and everything works great both locally and on the integration machine.  Application users think you’re the best and you’re still going to catch the last half hour of happy hour.  Happy Friday.

    Read the article

  • Udev webcam rule read, but not respected?

    - by user89305
    I have two USB webcams on the machine, but at boot they sometimes swap /dev/video numbers. The solution to this problem seems to be a new udev rule. I have added these rules in /etc/udev/rules.d/jj-video.rules:

        # Fix webcam 1
        KERNEL=="video1", SUBSYSTEM=="video4linux", SUBSYSTEMS=="usb", ATTRS{idVendor}=="1d6b", ATTRS{idProduct}=="0001", SYMLINK+="webcam1"
        # Fix webcam 2
        KERNEL=="video2", SUBSYSTEM=="video4linux", ATTR{name}=="Logitech QuickCam Pro 3000", KERNELS=="0000:00:1d.0", SUBSYSTEMS=="pci", DRIVERS=="uhci_hcd", ATTRS{vendor}=="0x8086", ATTRS{device}=="0x2658", SYMLINK+="webcam2"

    but the symlinks are not created. I have tried many different combinations in this file; the present ones are just my latest attempts. I found the parameters with:

        jjk@eee-old:~$ udevadm info -a -p $(udevadm info -q path -p /class/video4linux/video1)

        Udevadm info starts with the device specified by the devpath and then walks up the chain of parent devices. It prints for every device found, all possible attributes in the udev rules key format. A rule to match, can be composed by the attributes of the device and the attributes from one single parent device.

        looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1':
            KERNEL=="video1" SUBSYSTEM=="video4linux" DRIVER=="" ATTR{name}=="Logitech QuickCam Pro 3000" ATTR{index}=="0" ATTR{button}=="0"

        looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0':
            KERNELS=="2-2:1.0" SUBSYSTEMS=="usb" DRIVERS=="Philips webcam" ATTRS{bInterfaceNumber}=="00" ATTRS{bAlternateSetting}==" 9" ATTRS{bNumEndpoints}=="02" ATTRS{bInterfaceClass}=="0a" ATTRS{bInterfaceSubClass}=="ff" ATTRS{bInterfaceProtocol}=="00" ATTRS{supports_autosuspend}=="0"

        looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-2':
            KERNELS=="2-2" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{configuration}=="" ATTRS{bNumInterfaces}==" 3" ATTRS{bConfigurationValue}=="1" ATTRS{bmAttributes}=="a0" ATTRS{bMaxPower}=="500mA" ATTRS{urbnum}=="371076" ATTRS{idVendor}=="046d" ATTRS{idProduct}=="08b0" ATTRS{bcdDevice}=="0002" ATTRS{bDeviceClass}=="00" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="00" ATTRS{bNumConfigurations}=="1" ATTRS{bMaxPacketSize0}=="8" ATTRS{speed}=="12" ATTRS{busnum}=="2" ATTRS{devnum}=="2" ATTRS{devpath}=="2" ATTRS{version}==" 1.10" ATTRS{maxchild}=="0" ATTRS{quirks}=="0x0" ATTRS{avoid_reset_quirk}=="0" ATTRS{authorized}=="1" ATTRS{serial}=="01402100A5000000"

        looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2':
            KERNELS=="usb2" SUBSYSTEMS=="usb" DRIVERS=="usb" ATTRS{configuration}=="" ATTRS{bNumInterfaces}==" 1" ATTRS{bConfigurationValue}=="1" ATTRS{bmAttributes}=="e0" ATTRS{bMaxPower}==" 0mA" ATTRS{urbnum}=="34" ATTRS{idVendor}=="1d6b" ATTRS{idProduct}=="0001" ATTRS{bcdDevice}=="0302" ATTRS{bDeviceClass}=="09" ATTRS{bDeviceSubClass}=="00" ATTRS{bDeviceProtocol}=="00" ATTRS{bNumConfigurations}=="1" ATTRS{bMaxPacketSize0}=="64" ATTRS{speed}=="12" ATTRS{busnum}=="2" ATTRS{devnum}=="1" ATTRS{devpath}=="0" ATTRS{version}==" 1.10" ATTRS{maxchild}=="2" ATTRS{quirks}=="0x0" ATTRS{avoid_reset_quirk}=="0" ATTRS{authorized}=="1" ATTRS{manufacturer}=="Linux 3.2.0-29-generic uhci_hcd" ATTRS{product}=="UHCI Host Controller" ATTRS{serial}=="0000:00:1d.0" ATTRS{authorized_default}=="1"

        looking at parent device '/devices/pci0000:00/0000:00:1d.0':
            KERNELS=="0000:00:1d.0" SUBSYSTEMS=="pci" DRIVERS=="uhci_hcd" ATTRS{vendor}=="0x8086" ATTRS{device}=="0x2658" ATTRS{subsystem_vendor}=="0x1043" ATTRS{subsystem_device}=="0x82d8" ATTRS{class}=="0x0c0300"
ATTRS{irq}=="23" ATTRS{local_cpus}=="ff" ATTRS{local_cpulist}=="0-7" ATTRS{dma_mask_bits}=="32" ATTRS{consistent_dma_mask_bits}=="32" ATTRS{broken_parity_status}=="0" ATTRS{msi_bus}=="" looking at parent device '/devices/pci0000:00': KERNELS=="pci0000:00" SUBSYSTEMS=="" DRIVERS=="" jjk@eee-old:~$ And tested the setup: sudo udevadm --debug test /sys/class/video4linux/video1 main: runtime dir '/run/udev' run_command: calling: test adm_test: version 175 This program is for debugging only, it does not run any program, specified by a RUN key. It may show incorrect results, because some values may be different, or not available at a simulation run. parse_file: reading '/lib/udev/rules.d/40-crda.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-fuse.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-gnupg.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-hplip.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-ia64.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-inputattach.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-libgphoto2-2.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-libsane.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-ppc.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-usb_modeswitch.rules' as rules file parse_file: reading '/lib/udev/rules.d/40-xserver-xorg-video-intel.rules' as rules file parse_file: reading '/lib/udev/rules.d/42-qemu-usb.rules' as rules file parse_file: reading '/lib/udev/rules.d/50-firmware.rules' as rules file parse_file: reading '/lib/udev/rules.d/50-udev-default.rules' as rules file parse_file: reading '/lib/udev/rules.d/55-dm.rules' as rules file parse_file: reading '/lib/udev/rules.d/56-hpmud_support.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-cdrom_id.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-pcmcia.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-alsa.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-input.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-serial.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage-dm.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage-tape.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-storage.rules' as rules file parse_file: reading '/lib/udev/rules.d/60-persistent-v4l.rules' as rules file parse_file: reading '/lib/udev/rules.d/61-accelerometer.rules' as rules file parse_file: reading '/lib/udev/rules.d/64-xorg-xkb.rules' as rules file parse_file: reading '/lib/udev/rules.d/66-xorg-synaptics-quirks.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-cd-sensors.rules' as rules file add_rule: IMPORT found builtin 'usb_id', replacing /lib/udev/rules.d/69-cd-sensors.rules:76 parse_file: reading '/lib/udev/rules.d/69-libmtp.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-xorg-vmmouse.rules' as rules file parse_file: reading '/lib/udev/rules.d/69-xserver-xorg-input-wacom.rules' as rules file parse_file: reading '/etc/udev/rules.d/70-persistent-cd.rules' as rules file parse_file: reading '/etc/udev/rules.d/70-persistent-net.rules' as rules file parse_file: reading '/lib/udev/rules.d/70-printers.rules' as rules file parse_file: reading '/lib/udev/rules.d/70-udev-acl.rules' as rules file parse_file: reading '/lib/udev/rules.d/75-cd-aliases-generator.rules' as rules file parse_file: 
reading '/lib/udev/rules.d/75-net-description.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/75-persistent-net-generator.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/75-probe_mtd.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/75-tty-description.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-mm-ericsson-mbm.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-mm-longcheer-port-types.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-mm-nokia-port-types.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-mm-pcmcia-device-blacklist.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-mm-platform-serial-whitelist.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-mm-qdl-device-blacklist.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-mm-simtech-port-types.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-mm-usb-device-blacklist.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-mm-x22x-port-types.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-mm-zte-port-types.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/77-nm-olpc-mesh.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/78-graphics-card.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/78-sound-card.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/80-drivers.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/80-mm-candidate.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/80-udisks.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/85-brltty.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/85-hdparm.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/85-hplj10xx.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/85-keyboard-configuration.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/85-regulatory.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/85-usbmuxd.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/90-alsa-restore.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/90-alsa-ucm.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/90-libgpod.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/90-pulseaudio.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-cd-devices.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-keyboard-force-release.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-keymap.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-udev-late.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-dell.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-fujitsu.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-gateway.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-ibm.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-lenovo.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-upower-battery-recall-toshiba.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-upower-csr.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-upower-hid.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/95-upower-wup.rules' as rules file
    parse_file: reading '/lib/udev/rules.d/97-bluetooth-hid2hci.rules' as rules file
    parse_file: reading '/etc/udev/rules.d/jj-video.rules' as rules file
    udev_rules_new: rules use 259284 bytes tokens (21607 * 12 bytes), 37913 bytes buffer
    udev_rules_new: temporary index used 67520 bytes (3376 * 20 bytes)
    udev_device_new_from_syspath: device 0x215103e0 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1'
    udev_device_new_from_syspath: device 0x21510758 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1'
    udev_device_read_db: device 0x21510758 filled with db file data
    udev_device_new_from_syspath: device 0x21510e10 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0'
    udev_device_new_from_syspath: device 0x21511b10 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2/2-2'
    udev_device_new_from_syspath: device 0x215132f8 has devpath '/devices/pci0000:00/0000:00:1d.0/usb2'
    udev_device_new_from_syspath: device 0x21513650 has devpath '/devices/pci0000:00/0000:00:1d.0'
    udev_device_new_from_syspath: device 0x21513980 has devpath '/devices/pci0000:00'
    udev_rules_apply_to_event: GROUP 44 /lib/udev/rules.d/50-udev-default.rules:29
    udev_rules_apply_to_event: IMPORT 'v4l_id /dev/video1' /lib/udev/rules.d/60-persistent-v4l.rules:7
    udev_event_spawn: starting 'v4l_id /dev/video1'
    spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_VERSION=2'
    spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_PRODUCT=Logitech QuickCam Pro 3000'
    spawn_read: 'v4l_id /dev/video1'(out) 'ID_V4L_CAPABILITIES=:capture:'
    spawn_wait: 'v4l_id /dev/video1' [2609] exit with return code 0
    udev_rules_apply_to_event: IMPORT builtin 'usb_id' /lib/udev/rules.d/60-persistent-v4l.rules:9
    builtin_usb_id: /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0: if_class 10 protocol 0
    udev_builtin_add_property: ID_VENDOR=046d
    udev_builtin_add_property: ID_VENDOR_ENC=046d
    udev_builtin_add_property: ID_VENDOR_ID=046d
    udev_builtin_add_property: ID_MODEL=08b0
    udev_builtin_add_property: ID_MODEL_ENC=08b0
    udev_builtin_add_property: ID_MODEL_ID=08b0
    udev_builtin_add_property: ID_REVISION=0002
    udev_builtin_add_property: ID_SERIAL=046d_08b0_01402100A5000000
    udev_builtin_add_property: ID_SERIAL_SHORT=01402100A5000000
    udev_builtin_add_property: ID_TYPE=generic
    udev_builtin_add_property: ID_BUS=usb
    udev_builtin_add_property: ID_USB_INTERFACES=:0aff00:010100:010200:
    udev_builtin_add_property: ID_USB_INTERFACE_NUM=00
    udev_builtin_add_property: ID_USB_DRIVER=Philips webcam
    udev_rules_apply_to_event: LINK 'v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' /lib/udev/rules.d/60-persistent-v4l.rules:10
    udev_rules_apply_to_event: IMPORT builtin 'path_id' /lib/udev/rules.d/60-persistent-v4l.rules:16
    udev_builtin_add_property: ID_PATH=pci-0000:00:1d.0-usb-0:2:1.0
    udev_builtin_add_property: ID_PATH_TAG=pci-0000_00_1d_0-usb-0_2_1_0
    udev_rules_apply_to_event: LINK 'v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' /lib/udev/rules.d/60-persistent-v4l.rules:17
    udev_rules_apply_to_event: RUN 'udev-acl --action=$env{ACTION} --device=$env{DEVNAME}' /lib/udev/rules.d/70-udev-acl.rules:74
    udev_rules_apply_to_event: LINK 'webcam1' /etc/udev/rules.d/jj-video.rules:2
    udev_event_execute_rules: no node name set, will use kernel supplied name 'video1'
    udev_node_add: creating device node '/dev/video1', devnum=81:1, mode=0660, uid=0, gid=44
    udev_node_mknod: preserve file '/dev/video1', because it has correct dev_t
    udev_node_mknod: preserve permissions /dev/video1, 020660, uid=0, gid=44
    node_symlink: preserve already existing symlink '/dev/char/81:1' to '../video1'
    link_find_prioritized: found 'c81:2' claiming '/run/udev/links/v4l\x2fby-id\x2fusb-046d_08b0_01402100A5000000-video-index0'
    udev_device_new_from_syspath: device 0x21516748 has devpath '/devices/pci0000:00/0000:00:1d.1/usb3/3-2/3-2:1.0/video4linux/video2'
    udev_device_read_db: device 0x21516748 filled with db file data
    link_find_prioritized: found 'c81:1' claiming '/run/udev/links/v4l\x2fby-id\x2fusb-046d_08b0_01402100A5000000-video-index0'
    link_update: creating link '/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0' to '/dev/video1'
    node_symlink: atomically replace '/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0'
    link_find_prioritized: found 'c81:1' claiming '/run/udev/links/v4l\x2fby-path\x2fpci-0000:00:1d.0-usb-0:2:1.0-video-index0'
    link_update: creating link '/dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' to '/dev/video1'
    node_symlink: preserve already existing symlink '/dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0' to '../../video1'
    link_find_prioritized: found 'c81:1' claiming '/run/udev/links/webcam1'
    link_update: creating link '/dev/webcam1' to '/dev/video1'
    node_symlink: preserve already existing symlink '/dev/webcam1' to 'video1'
    udev_device_update_db: created db file '/run/udev/data/c81:1' for '/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1'
    ACTION=add
    COLORD_DEVICE=1
    COLORD_KIND=camera
    DEVLINKS=/dev/v4l/by-id/usb-046d_08b0_01402100A5000000-video-index0 /dev/v4l/by-path/pci-0000:00:1d.0-usb-0:2:1.0-video-index0 /dev/webcam1
    DEVNAME=/dev/video1
    DEVPATH=/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/video4linux/video1
    ID_BUS=usb
    ID_MODEL=08b0
    ID_MODEL_ENC=08b0
    ID_MODEL_ID=08b0
    ID_PATH=pci-0000:00:1d.0-usb-0:2:1.0
    ID_PATH_TAG=pci-0000_00_1d_0-usb-0_2_1_0
    ID_REVISION=0002
    ID_SERIAL=046d_08b0_01402100A5000000
    ID_SERIAL_SHORT=01402100A5000000
    ID_TYPE=generic
    ID_USB_DRIVER=Philips webcam
    ID_USB_INTERFACES=:0aff00:010100:010200:
    ID_USB_INTERFACE_NUM=00
    ID_V4L_CAPABILITIES=:capture:
    ID_V4L_PRODUCT=Logitech QuickCam Pro 3000
    ID_V4L_VERSION=2
    ID_VENDOR=046d
    ID_VENDOR_ENC=046d
    ID_VENDOR_ID=046d
    MAJOR=81
    MINOR=1
    SUBSYSTEM=video4linux
    TAGS=:udev-acl:
    UDEV_LOG=6
    USEC_INITIALIZED=18213768
    run: 'udev-acl --action=add --device=/dev/video1'
    jjk@eee-old:~$
    (and correspondingly for video2) It looks to me like my rules are read, but not respected. What am I doing wrong?

    Read the article

  • Could not load type System.Configuration.NameValueSectionHandler

    If you upgrade older .NET sites from 1.x to 2.x or greater, you may encounter this error when you have configuration settings that look like this: <section name="CacheSettings" type="System.Configuration.NameValueFileSectionHandler, System"/> Once you try to run this on an upgraded appdomain, you may encounter this error: An error occurred creating the configuration section handler for CacheSettings: Could not load type 'System.Configuration.NameValueSectionHandler' from assembly 'System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. Microsoft moved a number of the configuration-related classes into a separate assembly, System.Configuration, and created a new class, ConfigurationManager.  This presents its own challenges, which I've blogged about in the past if you are wondering where ConfigurationManager is located.  However, the above error is separate. The issue in this case is that NameValueSectionHandler is still in the System assembly, but lives in the System.Configuration namespace.  This causes confusion, which can be alleviated by using the following section definition: <section name="CacheSettings" type="System.Configuration.NameValueSectionHandler, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> (you can remove the extra line breaks within the type=) With this in place, your web application should once more be able to load up the NameValueSectionHandler.  I do recommend using your own custom configuration section handlers instead of appSettings, and I would further suggest that you not use NameValueSectionHandler if you can avoid it, but instead prefer a strongly typed configuration section handler, as sketched below.
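    As an aside, a strongly typed section handler can be quite small. Below is a minimal C# sketch; the class, property, and assembly names are illustrative, not from the original post:

    using System.Configuration;  // requires a reference to the System.Configuration assembly

    // Hypothetical strongly typed replacement for a NameValueSectionHandler-based
    // CacheSettings section; the property names here are made up for illustration.
    public class CacheSettingsSection : ConfigurationSection
    {
        [ConfigurationProperty("expirationMinutes", DefaultValue = 20)]
        public int ExpirationMinutes
        {
            get { return (int)this["expirationMinutes"]; }
            set { this["expirationMinutes"] = value; }
        }

        [ConfigurationProperty("enabled", DefaultValue = true)]
        public bool Enabled
        {
            get { return (bool)this["enabled"]; }
            set { this["enabled"] = value; }
        }
    }

    It would be registered with <section name="CacheSettings" type="MyApp.CacheSettingsSection, MyApp" /> and read back with (CacheSettingsSection)ConfigurationManager.GetSection("CacheSettings"), giving you typed, validated access instead of a loose NameValueCollection.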

    Read the article

  • Serve web application error messages from Http server [closed]

    - by licorna
    I have nginx as an HTTP server with Tomcat as a backend (using proxy_pass). It works great, but I want to define my own error pages (404, 500, etc.) and have them served by nginx, not Tomcat. For example, the resource https://domain.com/resource doesn't exist. If I [GET] that URL, I get a Not Found message from Tomcat, not from nginx. What I want is that every time Tomcat responds with a 404 (or any other error status), nginx itself sends a page to the user: some HTML file accessible to nginx. The way I have my nginx server configured is very simple, just:

    location / {
        proxy_pass http://localhost:8080/<webapp-name>/;
    }

    And I've configured port 8080, which is Tomcat, as not accessible from outside this machine. I don't think that using different location directives in the nginx configuration will work, because some resources depend on the URL: https://domain.com/customer/<non-existent-customer-name>/ [GET] will always return 404 (or some other error status), while https://domain.com/customer/<existent-customer>/ [GET] will return something other than 404 (the customer exists). Is there any way of serving Tomcat (application server) error messages with nginx (HTTP server)? That is, to check the status returned through the proxy_pass directive and act upon it?
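    One common way to do this with nginx alone, shown here as a hedged sketch in the question's own config format (the paths and error-page names are assumptions), is the proxy_intercept_errors directive, which tells nginx to replace any upstream response with a status of 300 or above using its own error_page rules:

    location / {
        proxy_pass http://localhost:8080/<webapp-name>/;
        # Intercept upstream (Tomcat) responses with status >= 300
        # and serve nginx's own error_page instead of Tomcat's body.
        proxy_intercept_errors on;
    }

    error_page 404 /errors/404.html;
    error_page 500 502 503 504 /errors/50x.html;

    location ^~ /errors/ {
        root /usr/share/nginx/html;  # assumed location of the static error pages
        internal;                    # not directly requestable from outside
    }

    Because the interception keys off the response status rather than the request URL, the dynamic /customer/<name>/ URLs are handled correctly: a 404 from Tomcat triggers the nginx page, while anything else passes through untouched.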

    Read the article

  • Including Overestimates in MSF Agile Burndown Report

    After using the MSF Agile Burndown report for a few weeks in our new TFS 2010 environment, I have to say I am a huge fan.  I especially find the assignment of the Work (hours) portion to be very useful in motivating the team to keep their tasks up to date every day.  Here is a view of the report that you get out of the box. However, I have one problem.  I'd like the top line to have some more meaning.  Specifically, when it changes, is that an indication of scope creep, mis-estimation, or a combination of the two?  So, today I decided to try to build in a view that would show overestimated time.  This would give me a more consistent top line.  My idea was to add another visual area on top of the graph whenever my originally estimated time was greater than the sum of completed and remaining.  This will effectively show me, at least when the top line goes down, whether it was scope change or over-estimation. Here is the final result. How did I do it?

    Step 1: Add a Cumulative_Original_Estimate field to the dsBurndown

    My approach was to follow the pattern where the completed time is included in the burndown chart and add my overestimated hours.  First I added a field to the dsBurndown to hold the estimated time.

    <Field Name="Cumulative_Original_Estimate">
      <DataField><?xml version="1.0" encoding="utf-8"?><Field xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="Measure" UniqueName="[Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate]" /></DataField>
      <rd:TypeName>System.Int32</rd:TypeName>
    </Field>

    Step 2: Add a column to the query

    SELECT {
        [Measures].[DateValue],
        [Measures].[Work Item Count],
        [Measures].[Microsoft_VSTS_Scheduling_RemainingWork],
        [Measures].[Microsoft_VSTS_Scheduling_CompletedWork],
        [Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate],
        [Measures].[RemainingWorkLine],
        [Measures].[CountLine]

    Step 3: Add a new Item to the QueryDefinition

    <Item>
      <ID xsi:type="Measure">
        <MeasureName>Microsoft_VSTS_Scheduling_OriginalEstimate</MeasureName>
        <UniqueName>[Measures].[Microsoft_VSTS_Scheduling_OriginalEstimate]</UniqueName>
      </ID>
      <ItemCaption>Cumulative Original Estimate</ItemCaption>
      <FormattedValue>true</FormattedValue>
    </Item>

    Step 4: Add a new ChartMember to DundasChartControl1

    The burndown chart is called DundasChartControl1.  I need to add a ChartMember for the estimated time.
<ChartMember>
  <Label>Cumulative Original Estimate</Label>
</ChartMember>

Step 5: Add a ChartSeries to show the Overestimated Time

<ChartSeries Name="OriginalEstimate">
  <Hidden>=IIF(Parameters!YAxis.Value="count",True,False)</Hidden>
  <ChartDataPoints>
    <ChartDataPoint>
      <ChartDataPointValues>
        <Y>=IIF(Parameters!YAxis.Value = "hours", IIF(SUM(Fields!Cumulative_Original_Estimate.Value)>SUM(Fields!Cumulative_Completed_Work.Value+Fields!Cumulative_Remaining_Work.Value), SUM(Fields!Cumulative_Original_Estimate.Value-(Fields!Cumulative_Completed_Work.Value+Fields!Cumulative_Remaining_Work.Value)),Nothing),Nothing)</Y>
      </ChartDataPointValues>
      <ChartDataLabel>
        <Style>
          <FontFamily>Microsoft Sans Serif</FontFamily>
          <FontSize>8pt</FontSize>
        </Style>
      </ChartDataLabel>
      <Style>
        <Border>
          <Color>#9bdb00</Color>
          <Width>0.75pt</Width>
        </Border>
        <Color>#666666</Color>
        <BackgroundGradientEndColor>#666666</BackgroundGradientEndColor>
      </Style>
      <ChartMarker>
        <Style />
      </ChartMarker>
      <CustomProperties>
        <CustomProperty>
          <Name>LabelStyle</Name>
          <Value>Top</Value>
        </CustomProperty>
      </CustomProperties>
    </ChartDataPoint>
  </ChartDataPoints>
  <Type>Area</Type>
  <Subtype>Stacked</Subtype>
  <Style />
  <ChartEmptyPoints>
    <Style>
      <Color>#00ffffff</Color>
    </Style>
    <ChartMarker>
      <Style />
    </ChartMarker>
    <ChartDataLabel>
      <Style />
    </ChartDataLabel>
  </ChartEmptyPoints>
  <LegendName>Default</LegendName>
  <ChartItemInLegend>
    <LegendText>Overestimated Hours</LegendText>
  </ChartItemInLegend>
  <ChartAreaName>Default</ChartAreaName>
  <ValueAxisName>Primary</ValueAxisName>
  <CategoryAxisName>Primary</CategoryAxisName>
  <ChartSmartLabel>
    <Disabled>true</Disabled>
    <MaxMovingDistance>22.5pt</MaxMovingDistance>
  </ChartSmartLabel>
</ChartSeries>

That's it.  I find the improved report adds some value over the out-of-the-box version.  You can download the updated rdl for the report here.

    Read the article

  • How to use semantic markup and Google Places to assist in local search SEO?

    - by ElHaix
    In this article, adding additional localized markup is supposed to help your site's SEO, i.e.:

    <div itemscope itemtype="http://data-vocabulary.org/Organization">
      <span itemprop="name">Search Engine People</span>
      <span itemprop="address" itemscope itemtype="http://data-vocabulary.org/Address">
        <span itemprop="street-address">100 Westney Road South Unit 200, Building E</span>
        <span itemprop="locality">Ajax</span>, <span itemprop="region">ON</span>
        <span itemprop="country-name">Canada</span>
        <span itemprop="postal-code">L1S 7H3</span>
      </span>
    </div>

    What about a site that contains valid localized results, where the actual business location is not relevant? For example, a site with valid local results from San Francisco, CA and Phoenix, AZ. Should these tags be added to the localized results, and does anyone have experience with how much adding these tags has improved results? In terms of Google Places, however, they seem to ask for the business' actual physical location. Is there a way to use Google Places in the aforementioned example to assist in SEO?

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I'll extend the "Customer Transaction Summary" report example to see how we might leverage this same stored procedure from within an SQL Server Reporting Services (SSRS) report. I've worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I've been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within an SSRS report would be simple, but unfortunately I was wrong.

    Customer Transaction Summary Example

    Let's take the "Customer Transaction Summary" report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters: Start Date – beginning of the date range for which the report will summarize customer transactions; End Date – end of the date range for which the report will summarize customer transactions; Customer Ids – one or more customer Ids representing the customers that will be included in the report. The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I'm using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008): When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops up: Obviously I can't use this dialog to specify a value for the '@customerIds' parameter since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there's little clue given as to how to remedy the situation. I don't know for sure, but I think that behind the scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter, which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I'd be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report to create a SqlParameter instance with the needed properties and value to make this work, but the "Operand type clash" error message persisted.

    The Text Query Approach

    Just because we're using a stored procedure to create the dataset for this report doesn't mean that we can't use the 'Text' Query Type option and construct an EXEC statement that will invoke the stored procedure.
In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let's take a look at what we'll have to do to make this work.

Manually Define Parameters

First, we'll need to manually define the parameters for the report by right-clicking on the 'Parameters' folder in the 'Report Data' window. We'll need to define '@startDate' and '@endDate' as simple date parameters. We'll also create a parameter called '@customerIds' that will be a multi-valued Integer parameter: In the 'Available Values' tab we'll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs. Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the 'rpt_CustomerTransactionSummary' stored procedure. This time we'll choose the 'Text' query type option and put the following into the 'Query' text area:

1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts +
2: ' EXEC rpt_CustomerTransactionSummary
3: @startDate=''' + @startDate + ''',
4: @endDate=''' + @endDate + ''',
5: @customerIds=@customerIdList')

By using the 'Text' query type we can enter any arbitrary SQL that we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer's point of view this query defines three parameters: @customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won't actually ever get passed into the stored procedure. I'll go into how this will work in a bit. @startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3. @endDate – This is another simple date parameter that will get passed through into the @endDate parameter of the stored procedure on line 4. At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return without needing to specify any values for the query when prompted to. Once the dataset has been correctly defined we'll have a @customerIdInserts parameter listed in the 'Parameters' tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the '@customerIds' parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we'll need to add some custom code to our report using the 'Report Properties' dialog: Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET.
Note that you can also add references to custom .NET assemblies (which could be written in any language), but that's outside the scope of this post so we'll stick with the "quick and dirty" VB .NET approach for now. Here's the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want):

Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String
    Dim insertStatements As New System.Text.StringBuilder()
    For Each paramValue As Object In paramValues
        insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue))
    Next
    Return insertStatements.ToString()
End Function

This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer's Parameters tab and update the expression for the '@customerIdInserts' parameter by clicking on the button with the "function" symbol that appears to the right of the parameter value. We'll set the expression to:

=Code.BuildIntegerListInserts("@customerIdList", Parameters!customerIds.Value)

In order to invoke our custom code method we simply need to invoke "Code.<method name>" and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the '@customerIds' parameter (this evaluates to an object array at run time). Finally, we'll need to edit the properties of the '@customerIdInserts' parameter on the report to mark it as a nullable internal parameter so that users aren't prompted to provide a value for it when running the report.

Limitations And Final Thoughts

When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven't found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there's a really compelling reason or requirement to use table-valued parameters from SSRS reports I would probably stick with the tried and true "join-multi-valued-parameter-to-CSV-and-split-in-the-query" approach for using multi-valued parameters in a stored procedure.
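For comparison, the "special steps" on the ADO .NET side that I alluded to above look roughly like the following C# sketch; the connection handling and the single-column shape of IntegerListTableType are assumptions carried over from the last post, not code from this one:

using System;
using System.Data;
using System.Data.SqlClient;

static class ReportQueries
{
    // Hypothetical direct invocation of rpt_CustomerTransactionSummary from ADO .NET.
    static void RunReportQuery(string connectionString, DateTime start, DateTime end, int[] customerIds)
    {
        // A table-valued parameter is passed as a DataTable whose shape matches
        // the user-defined table type (assumed here to be a single int column).
        var idTable = new DataTable();
        idTable.Columns.Add("Value", typeof(int));
        foreach (int id in customerIds)
            idTable.Rows.Add(id);

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("rpt_CustomerTransactionSummary", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@startDate", start);
            command.Parameters.AddWithValue("@endDate", end);

            // These are the "special steps": the parameter must be marked as
            // Structured and told which user-defined table type it maps to.
            var tvp = command.Parameters.AddWithValue("@customerIds", idTable);
            tvp.SqlDbType = SqlDbType.Structured;
            tvp.TypeName = "dbo.IntegerListTableType";

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // consume the summary rows...
                }
            }
        }
    }
}

The key detail is marking the parameter as SqlDbType.Structured and setting its TypeName to the user-defined table type; it is exactly this knob that the SSRS dataset designer never exposes, which is what forces the text query hack described above.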

    Read the article

  • SQL SERVER – Find Details for Statistics of Whole Database – DMV – T-SQL Script

    - by pinaldave
    I was recently asked whether there is a single script that can provide all the necessary details about statistics for any database. This question made me write the following script. I was initially planning to use the sp_helpstats command, but I remembered that it is marked for deprecation in a future release. Again, using DMVs is the right thing to do moving forward. I quickly wrote the following script, which gives a lot more information than sp_helpstats.

    USE AdventureWorks
    GO
    SELECT DISTINCT
        OBJECT_NAME(s.[object_id]) AS TableName,
        c.name AS ColumnName,
        s.name AS StatName,
        s.auto_created,
        s.user_created,
        s.no_recompute,
        s.[object_id],
        s.stats_id,
        sc.stats_column_id,
        sc.column_id,
        STATS_DATE(s.[object_id], s.stats_id) AS LastUpdated
    FROM sys.stats s
    JOIN sys.stats_columns sc ON sc.[object_id] = s.[object_id] AND sc.stats_id = s.stats_id
    JOIN sys.columns c ON c.[object_id] = sc.[object_id] AND c.column_id = sc.column_id
    JOIN sys.partitions par ON par.[object_id] = s.[object_id]
    JOIN sys.objects obj ON par.[object_id] = obj.[object_id]
    WHERE OBJECTPROPERTY(s.OBJECT_ID,'IsUserTable') = 1
    AND (s.auto_created = 1 OR s.user_created = 1);

    If you have a better script to retrieve information about statistics, please share it here and I will publish it with due credit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics, Statistics

    Read the article

  • Unable to start Apache on Ubuntu 12.10: no listening sockets available

    - by michalstanko
    I'm unable to start apache2, installed using apt-get. I'm getting the very same error on 2 separate Ubuntu 12.10 installations, one on my desktop PC, the other one running in VirtualBox:

    michal@michaltest:~$ sudo service apache2 start
     * Starting web server apache2
    apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
    no listening sockets available, shutting down
    Unable to open logs
    Action 'start' failed.
    The Apache error log may have more information.
    [fail]

    lsof says:

    michal@michaltest:/var/log/apache2$ sudo lsof -i :80
    COMMAND     PID   USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
    ubuntu-ge  2074 michal  11u IPv4  23978      0t0  TCP michaltest.local:47578->mulberry.canonical.com:http (CLOSE_WAIT)
    firefox   25194 michal  71u IPv4  42477      0t0  TCP michaltest.local:59793->69.59.197.29:http (ESTABLISHED)
    firefox   25194 michal  76u IPv4  41834      0t0  TCP michaltest.local:59698->69.59.197.29:http (ESTABLISHED)
    gvfsd-htt 25320 michal  12u IPv4  42568      0t0  TCP michaltest.local:56203->lb260.amst.cotendo.net:http (CLOSE_WAIT)

    netstat says:

    michal@michaltest:/var/log/apache2$ sudo netstat -lnp | grep '80'
    unix  2      [ ACC ]     STREAM     LISTENING     8030     876/acpid     /var/run/acpid.socket

    /var/log/apache2/error.log:

    [Thu Nov 08 11:13:30 2012] [notice] Apache/2.2.22 (Ubuntu) configured -- resuming normal operations
    [Thu Nov 08 11:17:32 2012] [notice] caught SIGTERM, shutting down

    /etc/apache2/ports.conf:

    NameVirtualHost *:80
    Listen 80
    <IfModule mod_ssl.c>
        Listen 443
    </IfModule>
    <IfModule mod_gnutls.c>
        Listen 443
    </IfModule>

    Thanks for your help.

    EDIT #1:

    michal@michaltest:~$ sudo netstat -ano | grep '443'
    tcp       54      0 10.0.2.15:58504      91.189.92.70:443      CLOSE_WAIT  off (0.00/0/0)

    Read the article

  • Conversion of tiff image in Python script - OCR using tesseract

    - by PYTHON TEAM
    I want to convert a tiff image file to a text document. My code works perfectly, as I expected, for tiff images in the usual font, but it is not working for French script font. My tiff image file contains text whose font is in French script format. Here is my code:

    import Image
    import subprocess
    import util
    import errors

    tesseract_exe_name = 'tesseract'   # Name of executable to be called at command line
    scratch_image_name = "temp.bmp"    # This file must be .bmp or other Tesseract-compatible format
    scratch_text_name_root = "temp"    # Leave out the .txt extension
    cleanup_scratch_flag = True        # Temporary files cleaned up after OCR operation

    def call_tesseract(input_filename, output_filename):
        """Calls external tesseract.exe on input file (restrictions on types),
        outputting output_filename+'txt'"""
        args = [tesseract_exe_name, input_filename, output_filename]
        proc = subprocess.Popen(args)
        retcode = proc.wait()
        if retcode != 0:
            errors.check_for_errors()

    def image_to_string(im, cleanup=cleanup_scratch_flag):
        """Converts im to file, applies tesseract, and fetches resulting text.
        If cleanup=True, delete scratch files after operation."""
        try:
            util.image_to_scratch(im, scratch_image_name)
            call_tesseract(scratch_image_name, scratch_text_name_root)
            text = util.retrieve_text(scratch_text_name_root)
        finally:
            if cleanup:
                util.perform_cleanup(scratch_image_name, scratch_text_name_root)
        return text

    def image_file_to_string(filename, cleanup=cleanup_scratch_flag, graceful_errors=True):
        """Applies tesseract to filename directly; if the image is incompatible and
        graceful_errors=True, converts it to a compatible format and retries.
        If cleanup=True, delete scratch files after operation."""
        try:
            try:
                call_tesseract(filename, scratch_text_name_root)
                text = util.retrieve_text(scratch_text_name_root)
            except errors.Tesser_General_Exception:
                if graceful_errors:
                    im = Image.open(filename)
                    text = image_to_string(im, cleanup)
                else:
                    raise
        finally:
            if cleanup:
                util.perform_cleanup(scratch_image_name, scratch_text_name_root)
        return text

    if __name__ == '__main__':
        im = Image.open("/home/oomsys/phototest.tif")
        text = image_to_string(im)
        print text
        try:
            text = image_file_to_string('fnord.tif', graceful_errors=False)
        except errors.Tesser_General_Exception, value:
            print "fnord.tif is incompatible filetype. Try graceful_errors=True"
            print value
            text = image_file_to_string('fnord.tif', graceful_errors=True)
            print "fnord.tif contents:", text
        text = image_file_to_string('fonts_test.png', graceful_errors=True)
        print text

    Read the article

  • Why do we need URIs for XML namespaces?

    - by Patryk
    I am trying to figure out why we need URIs for XML namespaces, and I cannot find a purpose for them. Can anyone enlighten me a little by showing their use in a concrete example? EDIT: OK, so for instance: I have this from w3schools: <root xmlns:h="http://www.w3.org/TR/html4/" xmlns:f="http://www.w3schools.com/furniture"> <h:table> <h:tr> <h:td>Apples</h:td> <h:td>Bananas</h:td> </h:tr> </h:table> <f:table> <f:name>African Coffee Table</f:name> <f:width>80</f:width> <f:length>120</f:length> </f:table> </root> So what should http://www.w3schools.com/furniture hold?
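    To make this concrete, here is a short C# sketch (one possible way to see what a parser actually does with the URI). The point it demonstrates: the URI is treated as an opaque identifier and is never fetched, so http://www.w3schools.com/furniture does not need to hold anything at all; it only has to be distinct from other namespace names:

    using System;
    using System.Xml.Linq;

    class NamespaceDemo
    {
        static void Main()
        {
            // The w3schools snippet from the question, trimmed to both <table> elements.
            string xml = @"
    <root xmlns:h='http://www.w3.org/TR/html4/'
          xmlns:f='http://www.w3schools.com/furniture'>
      <h:table><h:tr><h:td>Apples</h:td></h:tr></h:table>
      <f:table><f:name>African Coffee Table</f:name></f:table>
    </root>";

            XDocument doc = XDocument.Parse(xml);

            // The namespace is just a string used for matching; no HTTP request is made.
            XNamespace f = "http://www.w3schools.com/furniture";

            // Elements are identified by (namespace URI, local name) pairs, so the two
            // 'table' elements never collide even though they share a local name.
            foreach (XElement table in doc.Root.Elements(f + "table"))
            {
                Console.WriteLine((string)table.Element(f + "name"));  // African Coffee Table
            }
        }
    }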

    Read the article

  • .com domain transfer failing

    - by digital
    Hi, I'm trying to transfer one of my .com addresses between registrars. I'm down as the owner contact (confirmed working) and the losing registrar is down as the tech and admin contact. Last week I received an email stating that the domain transfer had been rejected by the losing registrar. I contacted the losing registrar and they denied that. My money from the winning registrar was refunded and I was told to try again. I've initiated the transfer again and received confirmation of the pending transfer; I gave the correct EPP code and confirmed the transfer. Currently the status on the domain is set as OK; should it not be "pending transfer"? According to my name.com transfer page, if the transfer is not auth'd in 5 days it will auto-transfer anyway. I don't believe this will happen. Name.com have been really helpful, but they can't really do much more now. The losing registrar is not being helpful, hence my turning here. What can I do to make sure the domain transfers? The domain transfer is set to expire on the 17th. Any help would be greatly appreciated.

    Read the article
