Search Results

Search found 14702 results on 589 pages for 'dolby pro logic'.

  • Autodetect/mount SDCards and run script for them on Linux

    - by Brendan
    Hey everyone, I'm currently running SME Server and need to have a script run upon the attachment of SD cards to my server. The script itself works fine (it copies the contents of the cards), but the automounting and execution of the script is where I'm having issues.

    I have a USB hub consisting of 10 USB ports; it shows up as:

        [root@server ~]# lsusb
        Bus 004 Device 002: ID 0000:0000
        Bus 004 Device 001: ID 0000:0000
        Bus 003 Device 001: ID 0000:0000
        Bus 002 Device 001: ID 0000:0000
        Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 001: ID 0000:0000

    (The hub is the TERMINUS TECHNOLOGY INC. entries.) As I cannot plug SD cards directly into the server, I use USB-to-SD-card attachments (10 of them) plugged into the hub to read the cards. Upon plugging the 10 attachments (without cards) into the hub, lsusb yields the following:

        [root@server ~]# lsusb
        Bus 004 Device 002: ID 0000:0000
        Bus 004 Device 001: ID 0000:0000
        Bus 003 Device 001: ID 0000:0000
        Bus 002 Device 001: ID 0000:0000
        Bus 001 Device 073: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 072: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 071: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 070: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 069: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 068: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 067: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 066: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 065: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 064: ID 05e3:0723 Genesys Logic, Inc.
        Bus 001 Device 055: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 051: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 050: ID 1a40:0101 TERMINUS TECHNOLOGY INC.
        Bus 001 Device 001: ID 0000:0000

    As you can see, the readers are the "Genesys Logic, Inc." entries. Plugging an SD card into a reader doesn't affect lsusb (it reads exactly as above); however, my system recognises the cards fine, as indicated by dmesg:

        Attached scsi generic sg11 at scsi54, channel 0, id 0, lun 0, type 0
        USB Mass Storage device found at 73
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        sdd: sdd1
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        sdd: sdd1
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        SCSI device sdd: 31388672 512-byte hdwr sectors (16071 MB)
        sdd: Write Protect is on
        sdd: Mode Sense: 03 00 80 00
        sdd: assuming drive cache: write through
        sdd: sdd1

    If I manually mount sdd1 (mount /dev/sdd1 /somedirectory/), this works fine. What I'm really after is a solution that automounts each of the cards as they are inserted into the readers and executes a script for them (this will involve copying their contents to another directory).
    My problem is that I don't know how to do this. I don't think udev will work as-is, because the USB devices don't change; however, if I could somehow get udev working with /dev/disk/by-path/ I think this is doable (it seems to keep constant entries). ls /dev/disk/by-path returns:

        pci-0000:00:1d.7-usb-0:4.1.1.1:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.1.2:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.1.3:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.1.4:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.2:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.3:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.1.4:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.2:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.3:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0
        pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1
        pci-0000:0b:01.0-scsi-0:0:1:0
        pci-0000:0b:01.0-scsi-0:0:1:0-part1
        pci-0000:0b:01.0-scsi-0:0:1:0-part2

    From the above, we can see I have only one card plugged into a reader (pci-0000:00:1d.7-usb-0:4.4:1.0-scsi-0:0:0:0-part1). Running

        mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1

    works and places the card under /media/usbdisk/. However,

        mount /dev/disk/by-path/pci-0000\:00\:1d.7-usb-0\:4.4\:1.0-scsi-0\:0\:0\:0-part1 slot1/

    doesn't work and returns "mount: can't get address for /dev/disk/by-path/pci-0000". Any ideas and solutions would be great; I've seen the knowledge of a lot of the guys on here before, so I'm hopeful someone can help me out. Thanks!
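    A minimal sketch of the udev approach described above, keying off the stable by-path/ID_PATH names rather than the device node. The rule file name, the ID_PATH glob, the mount-point scheme and the destination directory are all assumptions for illustration, and exact key names can differ on the older udev shipped with SME Server; older udev may also kill long-running RUN scripts, so a real setup might queue the copy instead of doing it inline:

        # /etc/udev/rules.d/99-sdcopy.rules  (hypothetical rule file)
        # Fire when a partition appears on one of the card readers behind the hub.
        ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", \
            ENV{ID_PATH}=="pci-0000:00:1d.7-usb-0:4.*", \
            RUN+="/usr/local/bin/sdcopy.sh %k"

        # /usr/local/bin/sdcopy.sh  (hypothetical copy script)
        #!/bin/sh
        DEV="/dev/$1"                 # %k passes the kernel name, e.g. sdd1
        MNT="/media/sdcopy-$1"        # one mount point per partition
        DEST="/data/incoming/$1"      # destination directory is an assumption
        mkdir -p "$MNT" "$DEST"
        mount -o ro "$DEV" "$MNT"     # read-only is enough for copying
        cp -a "$MNT"/. "$DEST"/
        umount "$MNT" && rmdir "$MNT"

    After editing the rule, newer udev picks it up with udevadm control --reload-rules; very old versions used udevcontrol reload_rules.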

    Read the article

  • What is there in Win 7 Pro (or Ultimate) that is not there in Home Premium? - Especially considering this situation..

    - by Senthil
    I want to know the REAL difference between Windows 7 Home Premium and Professional/Ultimate. In India, the cost of the different versions is:

        Ultimate - 11,200 INR
        Professional - 10,700 INR
        Home Premium - 6,600 INR

    The absolute cost of the first two is so high to me that the difference (500 INR) doesn't matter. So to me there is really no choice between the first two - if I decide to buy the Professional version, I'd rather go for Ultimate itself. What I want to know is whether Home Premium is enough for my needs. I tried searching for comparisons, but many look like just marketing junk from MS. They are short and vague. According to this page, the major differences between Pro and Home Premium are:

        - Run many Windows XP productivity programs in Windows XP Mode.
        - Connect to company networks easily and more securely with Domain Join.

    You can do both in Pro but not in Home Premium. I intend to use my Windows 7 for a small business - just starting up. So I'll be dealing with the following:

        - All kinds of development tools and servers
        - Very important - I will run virtual machine software (MS VPC, VMware, Sun VirtualBox, etc.)
        - My system will be acting as the server for most purposes till I can afford dedicated servers.
        - Connecting the system to a variety of network devices (PCs, printers, etc.)
        - Running productivity, business and financial apps
        - Any other small software startup business requirement that I haven't thought of yet.

    Professional (and Ultimate) is twice as expensive as Home Premium. So it'd be great if someone can point out the things you cannot do with Home Premium when you use it like I explained above, so that I can make a decision about which one to buy. I need some real-life experiences so that I can make an informed decision - not a decision based on marketing junk.

    Read the article

  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module, to add some extra debug output. The kernel module in question is mvsas and is not necessary to boot. For that reason I don't see any need to update any initrd images. I have read a lot of information (as shown below) and find the setup and build process confusing. I need two recipes:

        1. How to set up/configure the build environment once.
        2. The steps to perform after editing any source file of this kernel module (.c and .h) to turn that edit into a new kernel module (.ko).

    The sources that have been used are:

        - build one kernel module - Google search
        - http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/
        - http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module
        - http://www.pixelbeat.org/docs/rebuild_kernel_module.html
        - How do I build a single in-tree kernel module?
        - http://ubuntuforums.org/showthread.php?t=1153067
        - http://ubuntuforums.org/showthread.php?t=2112166
        - http://ubuntuforums.org/showthread.php?t=1115593
        - build one kernel module ubuntu - Google search
        - 'make +single +kernel +module' - Ask Ubuntu
        - 'make +kernel +module' - Ask Ubuntu
        - My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed
        - '"Invalid module format"' - Ask Ubuntu
        - Driver installation: compiling source code for newer kernel
        - Modprobe: 'Invalid module format', yet works after insmod
        - "Symbol version dump" "is missing" - Google search
        - http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one
        - Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server?
        - "no symbol version for module_layout" when trying to load usbhid.ko
        - Broken links inside Linux header file folder
        - 'make modules_install' - Ask Ubuntu
        - 'modules_install' - Ask Ubuntu
        - Empty build directory in custom compiled kernel
        - Not able to see pr_info output
        - In which directory are the kernel source files and how can I recompile it?
        - How can I compile and install that patched libata-eh.c file?
        - 'modules_install +depmod' - Ask Ubuntu
        - modules_install depmod - Google search
        - "make modules_install" - Google search
        - http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html
        - http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process
        - https://wiki.ubuntu.com/KernelCustomBuild
        - http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html
        - http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html
        - "make prepare" - Google search
        - "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search
        - http://ubuntuforums.org/showthread.php?t=1963515
        - ubuntu "make prepare" version - Google search
        - http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source
        - https://help.ubuntu.com/community/Kernel/Compile
        - How do I compile a kernel module?
        - How to add a custom driver to my kernel?
        - Compile and loading kernel module without compiling the kernel
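    A minimal sketch of how these two recipes can look on 14.04, rebuilding only drivers/scsi/mvsas against the running kernel. Package names, paths and the need for enabled deb-src lines are assumptions from memory, and the rebuilt module must still match the running kernel's version magic, so treat this as a starting point rather than a definitive recipe:

        # --- one-time setup --------------------------------------------------
        sudo apt-get build-dep linux-image-$(uname -r)   # build dependencies (needs deb-src)
        apt-get source linux-image-$(uname -r)           # unpacks the Ubuntu kernel source tree
        cd linux-*/
        cp /boot/config-$(uname -r) .config              # reuse the running configuration
        cp /usr/src/linux-headers-$(uname -r)/Module.symvers .
        make oldconfig && make prepare && make scripts

        # --- after each edit to drivers/scsi/mvsas/*.c or *.h -----------------
        make M=drivers/scsi/mvsas modules                # builds only mvsas.ko
        sudo modprobe -r mvsas                           # unload the old module if loaded
        sudo insmod drivers/scsi/mvsas/mvsas.ko          # load the freshly built one
        # or install it permanently:
        # sudo cp drivers/scsi/mvsas/mvsas.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/mvsas/
        # sudo depmod -a && sudo modprobe mvsas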

    Read the article

  • How to boot load the kernel using EFI stub (efistub) loader?

    - by Pro Backup
    I have Ubuntu 14.04 running in UEFI mode as the only operating system; no dual-boot here. The kernel version is 3.13.0-24-generic. There is an EFI partition. In this case the EFI partition is not at the default /dev/sda1 but at /dev/sda3, because I actually converted from BIOS mode to EFI mode. I have used the grub-efi-amd64 package, though that actually loads the GRUB boot menu from the UEFI firmware boot menu (UEFI boot loads \EFI\ubuntu\grubx64.efi). I want to skip that double boot-menu step and boot faster, directly from UEFI into the kernel. Ubuntu kernels since 12.10 have the "Kernel EFI stub loader" feature. I know I need to copy the Ubuntu kernel to the EFI partition (possibly renaming it) and create an entry in the UEFI boot menu (for instance using efibootmgr). Which exact terminal commands are necessary to do this?
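    A minimal sketch of how this is commonly done, assuming the EFI system partition is mounted at /boot/efi and is partition 3 of /dev/sda as described above; the root= value and file names are placeholders that must match the actual system, and the files need to be re-copied after every kernel update:

        # copy the kernel and initrd onto the EFI system partition
        sudo cp /boot/vmlinuz-3.13.0-24-generic    /boot/efi/EFI/ubuntu/vmlinuz.efi
        sudo cp /boot/initrd.img-3.13.0-24-generic /boot/efi/EFI/ubuntu/initrd.img

        # create a UEFI boot entry that starts the kernel's EFI stub directly
        sudo efibootmgr --create --disk /dev/sda --part 3 \
            --label "Ubuntu (EFI stub)" \
            --loader '\EFI\ubuntu\vmlinuz.efi' \
            --unicode 'root=/dev/sda1 ro initrd=\EFI\ubuntu\initrd.img'

        sudo efibootmgr -v    # verify the new entry and adjust BootOrder if needed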

    Read the article

  • Will BIOS boot mode Ubuntu install be able to boot when firmware "Fast Boot" is "Ultra Fast"?

    - by Pro Backup
    I have an ASRock mainboard with UEFI BIOS P1.50 (02/14/2014). The firmware "Fast Boot" option is set to "Fast", and Boot Option #1 is set to "AHCI P4: OCZ-VERT...": this is a BIOS boot entry, not UEFI. The boot disk has an MBR partitioning scheme (# parted -l | grep Partition\ Table:). Therefore Ubuntu 14.04 is installed in BIOS/CSM (grub-pc) mode. The Ubuntu boot process ends in a text console (no GUI). There is no external graphics card in use. The stock Ubuntu kernel has been replaced with the Ubuntu-supplied mainline 3.16.0-031600rc6-generic. dmesg outputs lines containing BIOS, like:

        SMBIOS 2.7 present
        Calgary: detecting Calgary via BIOS EBDA area
        Calgary: Unable to locate Rio Grande table in EBDA - bailing!
        [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
        BIOS EDD facility v0.16 2004-Jun-25, 0 devices found

    The ASRock BIOS itself displays this help text for "Ultra Fast - Fast Boot":

        Ultra Fast mode is only supported by Windows 8 and the VBIOS must support UEFI GOP if you are using an external graphics card. Please notice that Ultra Fast mode will boot so fast that the only way to enter this UEFI Setup Utility is to Clear CMOS or run the Restart to UEFI utility in Windows.

    Assumptions: I suspect that after changing the UEFI setting "Fast Boot" to "Ultra Fast", the machine will no longer boot into Ubuntu's console. I expect that after first exchanging grub-pc for grub-efi, the machine will still be able to boot to a GRUB menu (thus allowing me to change the "Fast Boot" setting back to "Fast" without clearing CMOS). Are these two "Fast Boot" assumptions correct, and/or may I expect Ubuntu 14.04 running mainline kernel 3.16rc6 and grub-efi to still boot to the console after enabling UEFI Ultra Fast Boot?
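    A quick way to check which mode a given boot actually used, and the usual package swap from grub-pc to grub-efi, sketched under the assumption that an EFI system partition exists and is mounted at /boot/efi. Whether the firmware will UEFI-boot an ESP that lives on an MBR-partitioned disk varies by board, so this is a sketch, not a guarantee:

        # if this directory exists, the running session was booted via UEFI, not BIOS/CSM
        [ -d /sys/firmware/efi ] && echo "UEFI boot" || echo "BIOS/CSM boot"

        # swap the boot loader packages and register GRUB in the firmware boot menu
        sudo apt-get install grub-efi-amd64
        sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
        sudo update-grub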

    Read the article

  • Best virtualization solution for running Windows 7 as a guest?

    - by gkt.pro
    I depend upon Ubuntu for most of my work, but I still need Windows 7 for some things, such as Office 2010, casual gaming, Adobe CS4, and other Windows software that is not available on Ubuntu yet. I checked Wine, but as of now it provides no support for Office 2010 and most of my games and software. So I decided to go for virtualizing Windows 7 inside Ubuntu, but I am confused about which virtualization software I should use on Ubuntu.

    Read the article

  • What is the easiest way to reference libraries in Qt projects?

    - by Jake Petroules
    I have two Qt4 GUI application projects and one shared library project, all referenced under a .pro file with the "subdirs" template. So it's like:

        exampleapp.pro
            app1.pro
            app2.pro
            sharedlib.pro

    Now, what I want to do is reference sharedlib from app1 and app2 so that every time I run app1.exe, I don't have to manually copy sharedlib.dll from its own folder to app1.exe's folder. I could set the PATH environment variable in the projects window, but this isn't very portable. I've looked at putting the LIBS variable in the app1.pro file, but I'm not sure if that refers to statically linked libraries only - I've tried it with various syntaxes and it doesn't seem to work with shared libs.

    Read the article

  • Currency Conversion in Oracle BI applications

    - by Saurabh Verma
    Authored by Vijay Aggarwal and Hichem Sellami

    A typical data warehouse contains a Star and/or Snowflake schema, made up of Dimensions and Facts. The facts store various numerical information, including amounts; for example, Order Amount, Invoice Amount etc. With the truly global nature of business nowadays, end-users want to view reports in their own currency or in a global/common currency as defined by their business. This presents a unique opportunity in BI to provide the amounts in converted rates, either by pre-storing them or by doing on-the-fly conversions while displaying the reports to the users.

    Source Systems
    OBIA caters to various source systems like EBS, PSFT, Siebel, JDE, Fusion etc. Each source has its own unique and intricate ways of defining and storing currency data, doing currency conversions and presenting them to the OLTP users. For example, EBS stores conversion rates between currencies which can be classified by rate type, like Corporate rate, Spot rate, Period rate etc. Siebel stores exchange rates by conversion rates like Daily. EBS/Fusion store the conversion rates for each day, whereas PSFT/Siebel store them for a range of days. PSFT has a Rate Multiplication Factor and a Rate Division Factor and we need to calculate the rate based on them, whereas other source systems store the currency exchange rate directly.

    OBIA Design
    The data consolidation from various disparate source systems poses the challenge of conforming various currencies, rate types, exchange rates etc., and designing the best way to present the amounts to the users without affecting performance. When consolidating the data for reporting in OBIA, we have designed the mechanisms in the Common Dimension to allow users to report based on their required currencies. OBIA Facts store amounts in various currencies:

        - Document Currency: This is the currency of the actual transaction. For a multinational company, this can be in various currencies.
        - Local Currency: This is the base currency in which the accounting entries are recorded by the business. This is generally defined in the Ledger of the company.
        - Global Currencies: OBIA provides five Global Currencies. Three are used across all modules; the last two are for CRM only. A Global currency is very useful when creating reports where the data is viewed enterprise-wide. For example, a US-based multinational would want to see the reports in USD, so the company will choose USD as one of the global currencies.

    OBIA allows users to define up to five global currencies during the initial implementation. The term Currency Preference is used to designate the set of values: Document Currency, Local Currency, Global Currency 1, Global Currency 2, Global Currency 3; which are shared among all modules. There are four more currency preferences, specific to certain modules: Global Currency 4 (aka CRM Currency) and Global Currency 5, which are used in CRM; and Project Currency and Contract Currency, used in Project Analytics. When choosing Local Currency as the Currency Preference, the data will show in the currency of the Ledger (or Business Unit) in the prompt, so it is important to select one Ledger or Business Unit when viewing data in Local Currency. More on this can be found in the section: Toggling Currency Preferences in the Dashboard.

    Design Logic
    When extracting the fact data, the OOTB mappings extract and load the document amount and the local amount in the target tables. They also load the exchange rates required to convert the document amount into the corresponding global amounts.
    If the source system only provides the document amount in the transaction, the extract mapping does a lookup to get the local currency code and the local exchange rate. The load mapping then uses the local currency code and rate to derive the local amount. The load mapping also fetches the Global Currencies and looks up the corresponding exchange rates. The lookup of exchange rates is done via the Exchange Rate Dimension, provided as a Common/Conforming Dimension in OBIA. The Exchange Rate Dimension stores the exchange rates between various currencies for a date range and Rate Type. Two physical tables, W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G, are used to provide the lookups and conversions between currencies. The data is loaded from the source system's Ledger tables. W_EXCH_RATE_G stores the exchange rates between currencies with a date range. On the other hand, W_GLOBAL_EXCH_RATE_G stores the currency conversions between the document currency and the pre-defined five Global Currencies for each day. Based on the requirements, the fact mappings can decide and use one or both tables to do the conversion. Currency design in OBIA also taps into the MLS and Domain architecture, allowing users to map the currencies to a universal Domain during implementation. This is especially important for companies deploying and using OBIA with multiple source adapters.

    Some Gotchas to Look for
    It is necessary to think through the currencies during the initial implementation.

        1) Identify the various types of currencies that are used by your business. Understand what will be your Local (or Base) and Document currency. Identify the various global currencies in which your users will want to look at the reports; this will be based on the global nature of your business. Changes to these currencies later in the project, while permitted, may cause full data loads and hence lost time.
        2) If you have a multi-source system, make sure that the Global Currencies and Global Rate Types chosen in Configuration Manager have the corresponding source-specific counterparts. In other words, make sure that for every DW-specific value chosen for Currency Code or Rate Type, there is a source Domain mapping already done.

    Technical Section
    This section briefly mentions the technical scenarios employed in the OBIA adaptors to extract data from each source system. In OBIA, we have two main tables which store the Currency Rate information as explained in the previous sections: W_EXCH_RATE_G and W_GLOBAL_EXCH_RATE_G. W_EXCH_RATE_G stores all the currency conversions present in the source system; it captures data for a date range. W_GLOBAL_EXCH_RATE_G has Global Currency conversions stored at a daily level. However, the challenge here is to store all 5 Global Currency exchange rates in a single record for each From Currency. Let's go further into the source system extraction logic for each of these tables and understand the flow briefly.

    EBS: In EBS, currency data is stored in the GL_DAILY_RATES table. As the name indicates, the GL_DAILY_RATES EBS table has data at a daily level. However, in our warehouse we store the data with a date range and insert a new range record only when the exchange rate changes for a particular From Currency, To Currency and Rate Type. Below are the main logical steps that we employ in this process:

        - Step 0 (incremental flow only): clean up the data in W_EXCH_RATE_G - delete the records which have Start Date > minimum conversion date, and update the End Date of the existing records.
        - Compress the daily data from the GL_DAILY_RATES table into range records. The incremental map uses $$XRATE_UPD_NUM_DAY as an extra parameter.
        - Generate Previous Rate, Previous Date and Next Date for each of the daily records from the OLTP.
        - Filter out the records which have a Conversion Rate equal to the previous rate, or whose Conversion Date lies within a single-day range. Mark the records as 'Keep' and 'Filter' and also get the final End Date for the single range record (unique combination of From Date, To Date, Rate and Conversion Date).
        - Filter the records marked as 'Filter' in the INFA map.

    The above steps load W_EXCH_RATE_GS; Step 0 updates/deletes W_EXCH_RATE_G directly. The SIL map will then insert/update the GS data into W_EXCH_RATE_G. These steps convert the daily records in GL_DAILY_RATES to range records in W_EXCH_RATE_G. We do not need such special logic for loading W_GLOBAL_EXCH_RATE_G; this is a table where we store data at a daily granular level. However, we need to pivot the data, because data present in multiple rows in the source tables needs to be stored in different columns of the same row in the DW. We use GROUP BY and CASE logic to achieve this.

    Fusion: Fusion has extraction logic very similar to EBS. The only difference is that the cleanup logic mentioned in Step 0 above does not use the $$XRATE_UPD_NUM_DAY parameter. In Fusion we bring in all the exchange rates in incremental as well and do the cleanup; the SIL then takes care of inserts/updates accordingly.

    PeopleSoft: PeopleSoft does not have From Date and To Date explicitly in the source tables. Let's look at an example (note that this is achieved from PS1 onwards only):

        1 Jan 2010 - USD to INR - 45
        31 Jan 2010 - USD to INR - 46

    PSFT stores records in the above fashion. This means that the exchange rate of 45 for USD to INR is applicable from 1 Jan 2010 to 30 Jan 2010. We need to store data in this fashion in the DW. Also, PSFT has the exchange rate stored as RATE_MULT and RATE_DIV, and we need to compute RATE_MULT/RATE_DIV to get the correct exchange rate. We generate From Date and To Date while extracting data from the source, and this has certain assumptions: if a record gets updated/inserted in the source, it will be extracted in incremental. Also, if this updated/inserted record falls between other dates, then we also extract the preceding and succeeding records (based on dates) of this record. This is required because we need to generate range records and we have 3 records whose ranges have changed. Taking the same example as above, if a new record gets inserted on 15 Jan 2010, the new ranges are 1 Jan to 14 Jan, 15 Jan to 30 Jan, and 31 Jan to the next available date. Even though the 1 Jan and 31 Jan records have not changed, we will still extract them because their ranges are affected. Similar logic is used for Global Exchange Rate extraction: we create the range records and get them into a temporary table, then join to the Day Dimension, create individual records, and pivot the data to get the 5 Global Exchange Rates for each From Currency, Date and Rate Type.

    Siebel: Siebel facts depend heavily on Global Exchange Rates and almost none of them really use individual exchange rates. In other words, W_GLOBAL_EXCH_RATE_G is the main table used in Siebel from the PS1 release onwards. As of January 2002, the Euro Triangulation method for converting between currencies belonging to EMU members is not needed for present and future currency exchanges.
    However, the method is still available in Siebel applications, as are the old currencies, so that historical data can be maintained accurately. The following description applies only to historical data needing conversion prior to the 2002 switch to the Euro for the EMU member countries. If a country is a member of the European Monetary Union (EMU), you should convert its currency to other currencies through the Euro. This is called triangulation, and it is used whenever either currency being converted has EMU Triangulation checked. Due to this, there are multiple extraction flows in SEBL, i.e. EUR to EMU, EUR to NonEMU, EUR to DMC and so on. We load W_EXCH_RATE_G through multiple flows with these data. This has been kept the same as in previous versions of OBIA. W_GLOBAL_EXCH_RATE_G, being a new table, does not have such needs. However, SEBL does not have From Date and To Date columns in the source tables, similar to PSFT, so we use extraction logic similar to that explained in the PSFT section for SEBL as well.

    What if all 5 Global Currencies configured are the same?
    As mentioned in previous sections, from PS1 onwards we store Global Exchange Rates in the W_GLOBAL_EXCH_RATE_G table. The extraction logic for this table involves pivoting data from multiple rows into a single row with 5 Global Exchange Rates in 5 columns. As mentioned in previous sections, we use CASE and GROUP BY functions to achieve this. This approach poses a unique problem when all 5 Global Currencies chosen are the same. For example, if the user configures all 5 Global Currencies as 'USD', then the extract logic will not be able to generate a record for From Currency = USD. This is because not all source systems will have a USD->USD conversion record. We have _Generated mappings to take care of this case: we generate a record with Conversion Rate = 1 for such cases.

    Reusable Lookups
    Before PS1, we had a Mapplet for currency conversions. In PS1, we only have reusable Lookups - LKP_W_EXCH_RATE_G and LKP_W_GLOBAL_EXCH_RATE_G. These lookups have another layer of logic so that all the lookup conditions are met when they are used in various Fact mappings. Any user who wants to do a lookup on W_EXCH_RATE_G or W_GLOBAL_EXCH_RATE_G should and must use these Lookups; a direct join or lookup on the tables might lead to wrong data being returned.

    Changing Currency Preferences in the Dashboard
    In the 796x series, all amount metrics in OBIA were showing the Global1 amount. The customer needed to change the metric definitions to show them in another currency preference. Project Analytics has supported currency preferences since the 7.9.6 release, though, and it published a Tech Note for customers of other modules to add toggling between currency preferences to the solution.

    List of Currency Preferences
    Starting from the 11.1.1.x release, the BI Platform added a new feature to support multiple currencies. The new session variable (PREFERRED_CURRENCY) is populated through a newly introduced currency prompt. This prompt can take its values from the XML file userpref_currencies_OBIA.xml, which is hosted in the BI Server installation folder, under <home>\instances\instance1\config\OracleBIPresentationServicesComponent\coreapplication_obips1\userpref_currencies.xml. This file contains the list of currency preferences, like "Local Currency", "Global Currency 1", ... which customers can also rename to give them more meaningful business names. There are two options for showing the list of currency preferences to the user in the dashboard: Static and Dynamic.
    In Static mode, all users will see the full list as in the user preference currencies file. In Dynamic mode, the list shown in the currency prompt drop-down is the result of a dynamic query specified in the same file. Customers can build some security into the RPD, so the list of currency preferences will be based on the user roles. BI Applications built a subject area, "Dynamic Currency Preference", to run this query and give every user only the list of currency preferences required by his application roles.

    Adding Currency to an Amount Field
    When the user selects one of the items from the currency prompt, all the amounts on that page will show in the currency corresponding to that preference. For example, if the user selects "Global Currency 1" from the prompt, all data will be shown in Global Currency 1 as specified in the Configuration Manager. If the user selects "Local Currency", all amount fields will show in the currency of the Business Unit selected in the BU filter of the same page. If there is no particular Business Unit selected in that filter, and the data selected by the query contains amounts in more than one currency (for example one BU has USD as its functional currency and the other has EUR), then subtotals will not be available (you cannot add USD and EUR amounts in one field), and depending on the setup (see next paragraph), the user may receive an error. There are two ways to add the currency field to an amount metric:

        - In the form of a currency code, like USD, EUR, ... For this, the user needs to add the field "Apps Common Currency Code" to the report. This field is in every subject area, usually under the table "Currency Tag" or "Currency Code".
        - In the form of a currency symbol ($ for USD, € for EUR, ...). For this, the user needs to format the amount metrics in the report as a currency column, by specifying the currency tag column in the Column Properties option in the Column Actions drop-down list. Typically this column should be the "BI Common Currency Code" available in every subject area. Select the Column Properties option in the Edit list of a metric. In the Data Format tab, select Custom as "Treat Number As". Enter the following syntax under Custom Number Format:

        [$:currencyTagColumn=Subjectarea.table.column]

    where Column is the "BI Common Currency Code" defined to take the currency code value based on the currency preference chosen by the user in the currency preference prompt.

    Read the article

  • How do I create midi data to trigger an Ultrabeat pattern in Logic?

    - by user289581
    I've made some Ultrabeat patterns but I can't figure out how to trigger them. I make the pattern on C-1, then I go back to the arrange window and hit record and then hit C1 on my keyboard, and it doesn't play the pattern, it just plays a single note. I have Pattern Mode enabled on Ultrabeat and I'm using a one-shot trigger but I can't seem to trigger my pattern to play at all. What am I doing wrong?

    Read the article

  • Open Source vs. Closed Source? Which one to choose? [closed]

    - by Rafal Chmiel
    So far, I have always created open-source applications (or didn't publish them at all), because it was free for me to create a new CodePlex project and upload everything. A couple of days ago I started wondering what kind of apps I should make, closed or open source. I can see pros and cons in both, such as the ones below:

    Open source:
        - Pro: free project hosting (CodePlex is excellent for .NET app updates, ClickOnce etc.)
        - Pro: free help, such as developers and designers
        - Con: people can get your source code and (sometimes) use some of your code in their apps and make money
        - Con: companies such as Microsoft, Twitter or Tumblr won't be looking forward to buying your project (like, for example, Twitter bought TweetDeck - TweetDeck being a closed-source AIR application, of course)

    Closed source:
        - Pro: it's harder for people to copy your idea without the source code
        - Pro: you're more likely to get acquired/bought by companies
        - Con: no free hosting - you have to have a website to do so (not good for updates)
        - Con: no free help

    What do you think? What do you think I should choose?

    Read the article

  • Unity The parameter host could not be resolved when attempting to call constructor

    - by Terrance
    When I attempt to instantiate my instance of the base class, I get a ResolutionFailedException with roughly the following error: "The parameter host could not be resolved when attempting to call constructor". I'm currently not using an interface for the base type, and my instance of the class inherits from the base type class. I'm new to Unity and DI, so I'm thinking it's possibly something I forgot.

        ExeConfigurationFileMap map = new ExeConfigurationFileMap();
        map.ExeConfigFilename = "Unity.Config";
        Configuration config = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
        UnityConfigurationSection section = (UnityConfigurationSection)config.GetSection("unity");
        IUnityContainer container = new UnityContainer();
        section.Containers.Default.Configure(container);
        //Throws exception here on this
        BaseCalculatorServer server = container.Resolve<BaseCalculatorServer>();

    and the Unity.Config file:

        <container>
          <types>
            <type name ="CalculatorServer" type="Calculator.Logic.BaseCalculatorServer, Calculator.Logic" mapTo="Calculator.Logic.CalculateApi, Calculator.Logic"/>
          </types>
        </container>
        </containers>

    The base class:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Transactions;
        using Microsoft.Practices.Unity;
        using Calculator.Logic;

        namespace Calculator.Logic
        {
            public class BaseCalculatorServer : IDisposable
            {
                public BaseCalculatorServer() { }

                public CalculateDelegate Calculate { get; set; }
                public CalculationHistoryDelegate CalculationHistory { get; set; }

                /// <summary>
                /// Performs application-defined tasks associated with freeing, releasing, or resetting unmanaged resources.
                /// </summary>
                public void Dispose()
                {
                    this.Dispose();
                }
            }
        }

    The implementation:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Calculator.Logic;
        using System.ServiceModel;
        using System.ServiceModel.Configuration;
        using Microsoft.Practices.Unity;

        namespace Calculator.Logic
        {
            public class CalculateApi : BaseCalculatorServer
            {
                public CalculateApi(ServiceHost host)
                {
                    host.Open();
                    Console.WriteLine("Press Enter To Exit");
                    Console.ReadLine();
                    host.Close();
                }

                public CalculateDelegate Calculate { get; set; }
                public CalculationHistoryDelegate CalculationHistory { get; set; }
            }
        }

    Yes, both the base class and the implementation are in the same namespace, and that's something design-wise that will change once I get this working. Oh, and a more detailed error:

        Resolution of the dependency failed, type = "Calculator.Logic.BaseCalculatorServer", name = "".
        Exception message is: The current build operation (build key Build Key[Calculator.Logic.BaseCalculatorServer, null]) failed:
        The value for the property "Calculate" could not be resolved. (Strategy type BuildPlanStrategy, index 3)

    Read the article

  • SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072

    - by Pinal Dave
    Yesterday I wrote a blog post based on my latest Pluralsight course on learning SQL Server 2014. I discussed the newly introduced cardinality estimation in SQL Server 2014 and how it improves the performance of queries. The cardinality estimation logic is responsible for the quality of query plans and is a major factor in the performance of any query. This logic was not updated for quite a while, but in the latest version, SQL Server 2014, it has been redesigned. The new logic now incorporates various assumptions and algorithms of OLTP and warehousing workloads. I hope my earlier blog post clearly explained how the new cardinality estimation logic improves performance. If not, I suggest you watch the following quick video where I explain this concept in extremely simple words. You can download the code used in this course from Simple Demo of New Cardinality Estimation Features of SQL Server 2014.

    Action Item
    Here are the blog posts I have previously written; you can read them over here: Simple Demo of New Cardinality Estimation Features of SQL Server 2014, Pluralsight Course. You can subscribe to my YouTube Channel for frequent updates.

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Video

    Read the article

  • When using Data Annotations with MVC, Pros and Cons of using an interface vs. a MetadataType

    - by SkippyFire
    If you read this article on Validation with the Data Annotation Validators, it shows that you can use the MetadataType attribute to add validation attributes to properties on partial classes. You use this when working with ORMs like LINQ to SQL, Entity Framework, or SubSonic. Then you can use the "automagic" client- and server-side validation. It plays very nicely with MVC. However, a colleague of mine used an interface to accomplish exactly the same result. It looks almost exactly the same and functionally accomplishes the same thing. So instead of doing this:

        [MetadataType(typeof(MovieMetaData))]
        public partial class Movie { }

        public class MovieMetaData
        {
            [Required]
            public object Title { get; set; }

            [Required]
            [StringLength(5)]
            public object Director { get; set; }

            [DisplayName("Date Released")]
            [Required]
            public object DateReleased { get; set; }
        }

    He did this:

        public partial class Movie : IMovie { }

        public interface IMovie
        {
            [Required]
            object Title { get; set; }

            [Required]
            [StringLength(5)]
            object Director { get; set; }

            [DisplayName("Date Released")]
            [Required]
            object DateReleased { get; set; }
        }

    So my question is, when does this difference actually matter? My thoughts are that interfaces tend to be more "reusable", and that making one for just a single class doesn't make that much sense. You could also argue that you could design your classes and interfaces in a way that allows you to use interfaces on multiple objects, but I feel like that is trying to fit your models into something else, when they should really stand on their own. What do you think?

    Read the article

  • SQL SERVER – How to Force New Cardinality Estimation or Old Cardinality Estimation

    - by Pinal Dave
    After reading my initial two blog posts on the new cardinality estimation, I received quite a few questions. Once I received these questions, I felt I should have clarified a few things when I started to write about cardinality. Before continuing this blog, if you have not read them before, I suggest you read the following two blog posts: SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014 and SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072.

    Q: Will the new cardinality estimation improve the performance of all of my queries?
    A: Remember, there is no 0-or-1 logic when it is about estimation. The general assumption is that most queries will benefit from the new cardinality estimation introduced in SQL Server 2014. That is why the generic advice is to set the compatibility level of the database to 120, which is for SQL Server 2014.

    Q: Is it possible that after changing cardinality estimation to the new logic, by setting the compatibility level to 120, I get degraded performance for a few queries?
    A: Yes, it is possible. However, the number of queries where this happens should be very small.

    Q: Can I still run my database in the older compatibility level and force a few queries to use the newer cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to force your query with trace flag 2312 to use the newer cardinality estimation logic.

        USE AdventureWorks2014
        GO
        -- Old Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
        GO
        -- Using New Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address]
        OPTION(QUERYTRACEON 2312);
        -- Using Old Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address];
        GO

    Q: Can I run my database in the newer compatibility level and force a few queries to use the older cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to force your query with trace flag 9481 to use the older cardinality estimation logic.

        USE AdventureWorks2014
        GO
        -- NEW Cardinality Estimation
        ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
        GO
        -- Using New Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address];
        -- Using Old Cardinality Estimation
        SELECT [AddressID],[AddressLine1],[City]
        FROM [Person].[Address]
        OPTION(QUERYTRACEON 9481);
        GO

    I guess I have covered most of the questions I have received so far. If I have missed any questions, please send them to me again and I will include them.

    Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • FTP GoDaddy Issues

    - by Brian McCarthy
    Is there a special port for GoDaddy servers? Do I have to call them to enable FTP support? I can log in with the username and password on the GoDaddy.com control panel, but not over FTP. I'm not sure what I'm doing wrong. I tried using FileZilla and CuteFTP Pro on port 21, but with no luck. GoDaddy's instructions are:

        1. FTP Address or Hostname: your domain name
        2. FTP Username & Password: you selected both of these during account creation
        3. Start Directory: you should leave this blank or include a single forward slash (i.e. /)
        4. FTP Port: you should enter Standard, or 21.
        FTP Client: FileZilla, WS-FTP, CuteFTP Pro, AceFTP

    Thanks!
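    A quick way to take the GUI clients out of the equation is to test the same credentials from a command line with curl (the hostname and credentials below are placeholders; curl uses passive FTP by default, which is usually what shared hosts expect):

        # list the root directory of the FTP account
        curl -v --user 'username:password' ftp://yourdomain.com/

        # the port can be forced explicitly if needed
        curl -v --user 'username:password' ftp://yourdomain.com:21/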

    Read the article

  • Good online auction software

    - by Brett
    We are looking for some PHP-based auction software to start off with. I have been scouring the net many times and am almost ready to purchase PHP Pro Bid, as this seems to be the best and most feature-rich of the lot; the only bad thing I have read about it is the lack of after-sales customer service. Others I have also looked at include:

        - AJ Auction Software
        - WeBid
        - GuruScript Auction
        - PHP Auction (enuuk)

    Many of them turn me off by having unprofessional sites, which makes me think their software will be the same and be rubbish. Many also don't go into detail with the feature set like PHP Pro Bid does. So before we purchase PHP Pro Bid, I was wondering if I missed something good? Thanks!

    Read the article

  • Rails - Logic for finding info from a :has_many :through needed!

    - by Jty.tan
    I have 3 relevant tables: User, Orders, and Viewables. The idea is that each User has many Orders, but at the same time, each User can view specific other Orders that belong to other Users. So Viewables has the attributes user_id and order_id, and Orders has a :has_many :Users, :through => :viewables. Is it possible to do a find through an Order's view? So something like:

        @viewable_orders = Orders.find(:all, :conditions = ["Viewable.user_id=?",1])

    to get a list of Orders which are viewable by user_id=1. (This doesn't work, else I won't be asking. :( ) The idea being that I can do something like a sidebar where the current user (the logged-in one) can view a list of other people's orders that he can view. For example, three other Users who have some Orders that he can view should eventually be displayed like this:

        Jack (2)
            Basic Order (registry_id: 1)
            New Order (registry_id: 29)
        Amy (4)
            Short Order (registry_id: 12)
        Jill (5)
            Hardware Order (14)
            Pink Order (17)
            Software Order (76)

    (The numbers in brackets are the respective user_id or registry_id.) So finding the list of all of the orders that the current user can see (assuming the user_id of the current user is 1) would be done with:

        @viewable_orders = Viewable.find(:all, :conditions => ["user_id=?", 1])

    And that would give me the collection of the above 6 registries. Now, the easiest way to do this is for me to just have a list of:

        Jill's Hardware Order
        Jill's Pink Order
        Amy's Short Order
        etc.

    But that gets ugly for long lists. Thanks!

    Read the article

  • Why learn Flash Builder 4 (Flex) when I can just use Flash Professional?

    - by Jason McKenna
    I want to learn Flash Builder 4 (Flex) because I see so many jobs requesting experience with it. I also just like knowing stuff. I am also very interested in focusing on RIA development now. BUT... can anyone tell me CLEARLY why the heck I would ever use Flex over Flash Pro? It is a time investment, so is it worth it? All I read are misguided posts about how Flash Pro is for games and banner ads and Flex is for programmers and RIAs, blah blah... this simply isn't so from my 9 years of contracting experience. I'm 99.9% certain that I can build anything a Flex developer can build, but using Flash Pro. I can build powerful AS3-driven apps for the desktop, mobile device, or browser; I can link to databases with XML; and I can import text files and communicate with ColdFusion and everything. The advantage with Flash Pro is that I can also easily and clearly animate transitions and build custom elements that look the way I want/need them to look for my specific client. Why would I want to use a bunch of pre-built components that drive my file sizes to the moon? Who is happy with a drag-n-drop button? Is Flex just a thing made for programmer people with no artistic inclination? What is the advantage of using it? It takes me back to Visual Basic class. It seems like a pain to have to use multiple tools to import crap from Flash Pro into Flex and yada yada... why, when I can do it all nicely in Flash Pro to begin with? Am I clueless, or am I missing some major piece of the puzzle? Thanks for any clarity. PS: I couldn't care less about the code editors. They aren't that bad, people. They do everything I need anyway. Please help out here. If I just don't need to learn it, I don't want to waste the time. Jase

    Read the article

  • What are the pros and cons of statically linking a library?

    - by Mathieu Pagé
    Hi, I want to release an application I developed as a hobby for both Linux and Windows. This application depends on Boost (and possibly other libraries). The norm for this kind of application (a chess engine) is to provide only an executable file and possibly some helper files. I thought it would be a good idea to statically link the libraries so the executable would not have any dependencies; the end user could just put the executable in a directory and start using it. However, while doing some research online I found some negative comments about statically linking libraries, some even arguing that an application with statically linked libraries would be hardly portable, meaning that it would only run on my system or highly similar systems. So what are the pros and cons of statically linking libraries? I already know that the executable will be bigger. But I can't see why it would make my application less portable.

    Read the article

  • Problem with importing an mdf created with SQL EXPRESS 2008 into SQL SERVER 2005 [an ASP.NET MVC pro

    - by user252160
    The question is probably extremely easy to resolve, but I need to resolve it so I can carry on with my project. I am using SQL Server Express 2008 at home, and I've been working on an ASP.NET MVC app that stores my DB in an .mdf file in the project's folder. The problem is that the SQL Server in the uni labs is SQL Server 2005, and when I try to open the .mdf file with the VS Server Explorer, it says that the version of the .mdf file is higher than the server can accept. The only option that comes to my mind is exporting the DB as a .sql script file, just like I've done a thousand times with phpMyAdmin. The thing is that SQL Server Management Studio Express is not the most usable tool in the world, and for some strange reason all the articles I could find on Google were irrelevant. Please, help.

    Read the article

  • Can someone help me refactor this C# linq business logic for efficiency?

    - by Russell
    I feel like this is not a very efficient way of using LINQ. I was hoping somebody on here would have a suggestion for a refactor. I realize this code is not very pretty, as I was in a complete rush.

        public class Workflow
        {
            public void AssignForms()
            {
                using (var cntx = new ProjectBusiness.Providers.ProjectDataContext())
                {
                    var emplist = (from e in cntx.vw_EmployeeTaskLists
                                   where e.OwnerEmployeeID == null
                                   select e).ToList();

                    foreach (var emp in emplist)
                    {
                        // if employee has a form assigned: break;
                        if (emp.GRADE > 15 || (emp.Pay_Plan.ToLower().Contains("al") || emp.Pay_Plan.ToLower().Contains("ex")))
                        {
                            //Assign278();
                        }
                        else if ((emp.Series.Contains("0905") || emp.Series.Contains("0511") || emp.Series.Contains("0110") || emp.Series.Contains("1801"))
                                 || (emp.GRADE >= 12 && emp.GRADE <= 15))
                        {
                            var emptask = new ProjectBusiness.Providers.EmployeeTask();
                            emptask.TimespanID = cntx.Timespans.SingleOrDefault(t => t.BeginDate.Year == DateTime.Today.Year & t.EndDate.Year == DateTime.Today.Year).TimespanID;
                            var FormID = (from f in cntx.Forms
                                          where f.FormName.Contains("450")
                                          select f.FormID).FirstOrDefault();
                            var TaskStatusID = (from s in cntx.TaskStatus
                                                where s.StatusDescription.ToLower() == "not started"
                                                select s.TaskStatusID).FirstOrDefault();
                            Assign450((int)emp.EmployeeID, FormID, TaskStatusID, emptask);
                            cntx.EmployeeTasks.InsertOnSubmit(emptask);
                        }
                        else
                        {
                            //Assign185();
                        }
                    }
                    cntx.SubmitChanges();
                }
            }

            private void Assign450(int EmployeeID, int FormID, int TaskStatusID, ProjectBusiness.Providers.EmployeeTask emptask)
            {
                emptask.FormID = FormID;
                emptask.OwnerEmployeeID = EmployeeID;
                emptask.AssignedToEmployeeID = EmployeeID;
                emptask.TaskStatusID = TaskStatusID;
                emptask.DueDate = DateTime.Today;
            }
        }

    Read the article

  • Why is my logic not working correctly for SPOJ TOPOSORT?

    - by Kavish Dwivedi
    The given problem is http://www.spoj.com/problems/TOPOSORT/. The output format is particularly important:

        Print "Sandro fails." if Sandro cannot complete all his duties on the list. If there is a solution, print the correct ordering, the jobs to be done separated by a whitespace. If there are multiple solutions, print the one whose first number is smallest; if there are still multiple solutions, print the one whose second number is smallest, and so on.

    What I am doing is simply a DFS with the edges reversed, i.e. if job A finishes before job B, there is a directed edge from B to A. I maintain the order by sorting the adjacency list I created, and I store the nodes which don't have any constraints separately, so as to print them later in the correct order. There are two flag arrays: one for marking a discovered node and one for marking a node whose neighbours have all been explored. Now my solution is http://www.ideone.com/QCUmKY (the important function is the visit function), and it's giving WA after running correctly for 10 cases, so it's really hard to figure out where I am going wrong, since it passes all of the test cases I have worked through by hand.

    Read the article

  • Should adapters or wrappers be unit tested?

    - by m3th0dman
    Suppose that I have a class that implements some logic:

        public class MyLogicImpl implements MyLogic {
            public void myLogicMethod() {
                // my logic here
            }
        }

    and somewhere else a test class:

        public class MyLogicImplTest {
            @Test
            public void testMyLogicMethod() {
                // test my logic
            }
        }

    I also have:

        @WebService
        public class MyWebServices {
            @Inject
            private MyLogic myLogic;

            @WebMethod
            public void myLogicWebMethod() {
                myLogic.myLogicMethod();
            }
        }

    Should there be a unit test for myLogicWebMethod, or should the testing for it be handled in integration testing?

    Read the article

  • nmake: can a batch file run as a part of a command block, affect the environment of the nmake.exe pro

    - by Cheeso
    I think in nmake, if I do this:

        example :
            set value=77
            echo %%value%%

    the result will display 77 on the console. Is there a way for me to invoke a .cmd or .bat file that will affect the environment of the nmake.exe process? Suppose I put the statement set value=77 in a file called "setvalue.cmd", and then change the makefile to this:

        example : setvalue
            echo %%value%%

    I get:

        %value%

    Alternatively, if there's a way to set a macro within a command block, that would also work. Or, a way to set the value of a macro from a batch file, even outside a command block.

    Read the article
