Search Results

Search found 27143 results on 1086 pages for 'include path'.


  • Using CTAS & Exchange Partition Replace IAS for Copying Partition on Exadata

    - by Bandari Huang
    Usage Scenario: Copy data and indexes from one partition to another partition in a partitioned table.
    Solution:
      - Create the partition definition.
      - Copy data from one partition to another partition by 'Insert as select (IAS)'.
      - Create a nonpartitioned table by 'Create table as select (CTAS)'.
      - Convert the nonpartitioned table into a partition of the partitioned table by exchanging their data segments.
      - Rebuild any unusable indexes.
    Exchange Partition Conversion
      - Mutual conversion between a partition (or subpartition) and a nonpartitioned table.
      - Mutual conversion between a hash-partitioned table and a partition of a composite *-hash partitioned table.
      - Mutual conversion between a [range | list]-partitioned table and a partition of a composite *-[range | list] partitioned table.
    Exchange Partition Usage Scenarios
      - High-speed loading of new, incremental data into an existing partitioned table in a DW environment.
      - Exchanging old data partitions out of a partitioned table: the data is purged from the partitioned table without actually being deleted and can be archived separately.
    Exchange Partition Syntax
      ALTER TABLE schema.table EXCHANGE [PARTITION|SUBPARTITION] [partition|subpartition] WITH TABLE schema.table [INCLUDING|EXCLUDING] INDEXES [WITH|WITHOUT] VALIDATION UPDATE [INDEXES|GLOBAL INDEXES]
      INCLUDING | EXCLUDING INDEXES: Specify INCLUDING INDEXES if you want local index partitions or subpartitions to be exchanged with the corresponding table index (for a nonpartitioned table) or local indexes (for a hash-partitioned table). Specify EXCLUDING INDEXES if you want all index partitions or subpartitions corresponding to the partition, and all the regular indexes and index partitions on the exchanged table, to be marked UNUSABLE. If you omit this clause, then the default is EXCLUDING INDEXES.
      WITH | WITHOUT VALIDATION: Specify WITH VALIDATION if you want Oracle Database to return an error if any rows in the exchanged table do not map into the partitions or subpartitions being exchanged. Specify WITHOUT VALIDATION if you do not want Oracle Database to check the proper mapping of rows in the exchanged table. If you omit this clause, then the default is WITH VALIDATION.
      UPDATE INDEXES | GLOBAL INDEXES: Unless you specify UPDATE INDEXES, the database marks UNUSABLE the global indexes or all global index partitions on the table whose partition is being exchanged. Global indexes or global index partitions on the table being exchanged remain invalidated. (You cannot use UPDATE INDEXES for index-organized tables. Use UPDATE GLOBAL INDEXES instead.)
    Exchanging Partitions & Subpartitions - Notes
      - Both tables involved in the exchange must have the same primary key, and no validated foreign keys can be referencing either of the tables unless the referenced table is empty.
      - When exchanging partitioned index-organized tables:
        - The source and target table or partition must have their primary key set on the same columns, in the same order.
        - If key compression is enabled, then it must be enabled for both the source and the target, and with the same prefix length.
        - Both the source and target must be index organized.
        - Both the source and target must have overflow segments, or neither can have overflow segments. Also, both the source and target must have mapping tables, or neither can have a mapping table.
        - Both the source and target must have identical storage attributes for any LOB columns.

    Read the article

  • How to Assign a Default Signature in Outlook 2013

    - by Lori Kaufman
    If you sign most of your emails the same way, you can easily specify a default signature to automatically insert into new email messages and replies and forwards. This can be done directly in the Signature editor in Outlook 2013. We recently showed you how to create a new signature. You can also create multiple signatures for each email account and define a different default signature for each account. When you change your sending account when composing a new email message, the signature would change automatically as well. NOTE: To have a signature added automatically to new email messages and replies and forwards, you must have a default signature assigned in each email account. If you don’t want a signature in every account, you can create a signature with just a space, a full stop, dashes, or other generic characters. To assign a default signature, open Outlook and click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. To change the default signature for an email account, select the account from the E-mail account drop-down list on the top, right side of the dialog box under Choose default signature. Then, select the signature you want to use by default for New messages and for Replies/forwards from the other two drop-down lists. Click OK to accept your changes and close the dialog box. Click OK on the Outlook Options dialog box to close it. You can also access the Signatures and Stationery dialog box from the Message window for new emails and drafts. Click New Email on the Home tab or double-click an email in the Drafts folder to access the Message window. Click Signature in the Include section of the New Mail Message window and select Signatures from the drop-down menu. In the next few days, we will be covering how to use the features of the signature editor next, and then how to insert and change signatures manually, backup and restore your signatures, and modify a signature for use in plain text emails.     

    Read the article

  • Cool Enhancements Everyone Can Enjoy

    - by Ruth
    With Release 17, we have a few visual and functional enhancements that make using CRM On Demand that much better for us all. I'll mention a few here, but to get the full outline of these upgrades, I recommend taking 10 minutes to view the Release 17 Usability Transfer of Information course. First and foremost, I find the ability to customize your theme (or skin) pretty cool, but I've said that before. Take a look at the Selecting Your Theme and the Themes - Create Your CRM Style blog articles for more information. My next favorite is the resizeable user interface (UI). CRM On Demand will dynamically fit the device and screen resolution you're using, which includes the resizing of fields, field editors and pop-ups. If you have a wide screen like me, you should appreciate that one very much. To make it easier to see that resized UI, the detail pages got a little face lift. New horizontal lines and other subtle changes make those pages easier to read. Also, those things you need to know, like error messages and inline help are highlighted with a little icon to show the message type. You may not think every change to the detail pages are particularly exciting, but I'm sure you'll enjoy the new Head Up Display, which saves you scrolling time by adding links to related information sections. I like that the head up display travels with me as I move up and down the page...it's like a little friend that takes me where I want to go as fast as possible. You may also really like the fact that the copy record feature is now available for all record types from both detail pages and lists. Your company administrator can choose which fields get copied, so you can maximize your efficiency when creating new records. Lists also got a face lift. Alternating colors in rows make it easier to see your data. Also, the Favorite Lists icon is now on the list itself, so you can save your most useful lists with one click. If you've ever tried to create a new list with 10 columns or more, you'll be happy to hear that the maximum number of columns in a list has increased from 9 to 20. This is great news, but doesn't mean you should include the kitchen sink in your list...excess columns can slow list performance. So choose your columns wisely. Again, these are just a few of my favorite things. Let us know what you think about the new usability features. What are your favorite things?

    Read the article

  • Custom session: Window does not capture full screen area by default. 12.04

    - by juzerali
    I am trying to create a custom session by creating a custom.desktop file in the /usr/share/xsessions folder. Remember, this is not a GNOME or some other standard session. I have created my own applications for this session, and they are simple.
    Case 1: Chrome browser
    Contents of the custom.desktop file:
      [Desktop Entry]
      Name=Internet Kiosk
      Comment=This is an internet kiosk
      Exec=google-chrome --kiosk
      TryExec=
      Icon=
      Type=Application
    Issue: The Chrome browser starts in kiosk mode but does not capture the complete screen area. Some area is left at the bottom and right side of the screen.
    Case 2: Custom pyGTK app (Quickly)
    Contents of the custom.desktop file:
      [Desktop Entry]
      Name=Custom Kiosk
      Comment=This is a custom kiosk
      Exec=~/MyCustomPyGTKApp
      TryExec=
      Icon=
      Type=Application
    Issue: My custom pyGTK app has window.fullScreen() in the code. That means it should open in full screen without the window chrome (and it does under a normal session). But that too leaves lots of space around it.
    Need Help: Can anyone tell me what's going on here? I think it's some issue with borders, as pointed out in Step 8 of http://www.instructables.com/id/Setting-Up-Ubuntu-as-a-Kiosk-Web-Appliance/?ALLSTEPS : "If by chance, Google Chromium is not stretched to the edges with the --kiosk switch enabled there is a simple fix. To stretch Chromium simply log in as your regular user and edit chromeKiosk.sh to not have the --kiosk switch. Then log in as the restricted user, click the wrench and choose options. Then on the Personal Stuff tab select Hide system title bar and use compact borders. Close the options screen and stretch Chromium to fit the monitor. Then go back into the options window and set it to Use system title bar and borders. After this is done, log out of your restricted user (might need to just reboot) and log into your regular user. Edit chromeKiosk.sh back to include the --kiosk switch again and Chromium should be full screen next time you log into the restricted user."
    If I were to use a custom pyGTK or a gtkmm app, how should I get around this issue? window.fullScreen() should occupy the complete screen area. This has to be done programmatically or in some other way that can scale: I have to deploy this on a large number of machines located in different geographical areas, and doing it manually on every machine is not possible.
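    One thing worth checking is whether a window manager is actually running in the custom session: without one, fullscreen requests from Chrome or GTK may simply go unanswered, which could leave exactly this kind of gap. Below is a rough, untested PyGTK sketch (the class name KioskWindow is illustrative only) that both requests fullscreen and, as a fallback, sizes itself to the screen; note that the PyGTK method is window.fullscreen(), lower case.
      # Minimal PyGTK kiosk window sketch (assumes the legacy "gtk" / PyGTK 2 bindings).
      import gtk

      class KioskWindow(gtk.Window):
          def __init__(self):
              super(KioskWindow, self).__init__(gtk.WINDOW_TOPLEVEL)
              self.set_decorated(False)       # drop the window chrome explicitly
              self.fullscreen()               # honored only if a window manager is running
              # Fallback when no WM is present in the custom session:
              self.move(0, 0)
              self.set_default_size(gtk.gdk.screen_width(), gtk.gdk.screen_height())
              self.connect("destroy", gtk.main_quit)
              self.show_all()

      if __name__ == "__main__":
          KioskWindow()
          gtk.main()
    Starting a lightweight window manager as part of the session before launching the app is another option worth testing, since it would also fix the Chrome kiosk case.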

    Read the article

  • Manageability at Oracle Openworld: a Guide to sessions for Partners

    - by Javier Puerta
    A large number of sessions focusing on Manageability will be taking place during the week of Oracle Openworld in San Francisco. To help you organize your schedule I am including below a list of sessions and events around Manageability that you will find of interest. PARTNER SPECIFIC SESSIONS Date/Time/Location  Session   Monday, October 1st, 2011 at 15:30 - 18:00 PST Grand Hyatt San Francisco 345 Stockton Street, San Francisco (Conference Theater) (It is a 15 minute walk from OOW Moscone Center. See directions here) Exadata & Manageability EMEA Partner Community Forum.- Listen to other partners share their experiences in selling and implementing Exadata and Manageability projects, and have a direct dialogue with some of the Oracle executives that are driving the strategy of the company in these areas. Agenda Welcome - Hans-Peter Kipfer, VP, Engineered Systems Oracle EMEA Next challenges in building and managing clouds - Javier Cabrerizo, VP, Business Development for Exadata, Oracle Corp. Partner Experiences: IT modernization, simplification and cost reduction: The case of a customer in Transportation & Logistics with custom applications and SAP. - Francisco Bermudez, Country Leader Infrastructure Services, Capgemini, Spain Nvision cloud project - Dmitry Krasilov, Head of Oracle Competence Center, Nvision Group, Russia From Exadata Ready to Exadata Optimized: An ISV Experience - Miguel Alves, Product Business Solutions Manager, WeDo Technologies, Portugal To confirm your participation send an email to [email protected] Tuesday, Oct 2, 11:45 AM - 12:45 PM - Marriott Marquis - Golden Gate A Developing Services for Private and Public Clouds.- The Oracle Cloud provides new business opportunities, secures business applications and data, and provides operational efficiencies and cost savings. For customers lacking the skill or time to architect, develop, or build a cloud, there is a growing demand for services practice partners that can deliver and manage Oracle Cloud solutions. In this session,• Become familiar with services examples and use cases that demonstrate how an Oracle Cloud can provide a solution to a customer’s needs today• Learn about Oracle architecture and best practices available for an Oracle Cloud instances• Identify the right Oracle technology and the optimal model for meeting customer needs while providing excellent revenues and an optimal margin for services delivered Wednesday, Oct 3, 1:15 PM - 2:15 PM - Marriott Marquis - Golden Gate B Using Management Already Built into Oracle Products: Oracle Enterprise Manager .- Engineered into Oracle products are management capabilities ready to be used. In this session, applicable to all partners, understand the growing market opportunities and how to use or include Oracle Enterprise Manager as part of your solution or services. Other Cloud sessions for Partners at the Oracle PartnerNetwork Exchange  Click here.-     OOW CUSTOMER SESSIONS   Download the Focus On Oracle Enterprise Manager Cloud Control 12c (and Private Cloud) guide for a full list of Exadata OOW sessions.  

    Read the article

  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module to add some extra debug output. The kernel module in question is mvsas and it is not necessary for booting; for that reason I don't see any need to update any initrd images. I have read a lot of information (as shown below) and find the setup and build process confusing. I need two recipes:
      1. How to set up/configure the build environment once.
      2. The steps to take after editing any source file of this kernel module (.c and .h) to turn that edit into a new kernel module (.ko).
    The sources that have been used are:
      - build one kernel module - Google search
      - http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/
      - http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module
      - http://www.pixelbeat.org/docs/rebuild_kernel_module.html
      - How do I build a single in-tree kernel module?
      - http://ubuntuforums.org/showthread.php?t=1153067
      - http://ubuntuforums.org/showthread.php?t=2112166
      - http://ubuntuforums.org/showthread.php?t=1115593
      - build one kernel module ubuntu - Google search
      - 'make +single +kernel +module' - Ask Ubuntu
      - 'make +kernel +module' - Ask Ubuntu
      - My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed
      - '"Invalid module format"' - Ask Ubuntu
      - Driver installation: compiling source code for newer kernel
      - Modprobe: 'Invalid nodule format', yet works after insmod
      - "Symbol version dump" "is missing" - Google search
      - http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one
      - Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server?
      - "no symbol version for module_layout" when trying to load usbhid.ko
      - Broken links inside Linux header file folder
      - 'make modules_install' - Ask Ubuntu
      - 'modules_install' - Ask Ubuntu
      - Empty build directory in custom compiled kernel
      - Not able to see pr_info output
      - In which directory are the kernel source files and how can I recompile it?
      - How can I compile and install that patched libata-eh.c file?
      - 'modules_install +depmod' - Ask Ubuntu
      - modules_install depmod - Google search
      - "make modules_install" - Google search
      - http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html
      - http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process
      - https://wiki.ubuntu.com/KernelCustomBuild
      - http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html
      - http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html
      - "make prepare" - Google search
      - "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search
      - http://ubuntuforums.org/showthread.php?t=1963515
      - ubuntu "make prepare" version - Google search
      - http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source
      - https://help.ubuntu.com/community/Kernel/Compile
      - How do I compile a kernel module?
      - How to add a custom driver to my kernel?
      - Compile and loading kernel module without compiling the kernel

    Read the article

  • Correcting Grammar for Microsoft Products and Technology

    I see book authors, editors, bloggers, press, team members, and occasionally even a VP misspell our products, technologies, and features so often that I thought I would build and maintain a list of the correct capitalization and spelling of the most commonly misspelled Microsoft products and technologies. Sources: internal site (brandtools) and the Microsoft Trademarks Web site. Last updated: April 27, 2010
    Incorrect -> Correct
      - .net or .Net -> .NET
      - .Net framework 4.0, .NET framework 4.0 -> .NET Framework
      - AdCenter, Ad Center, Adcenter -> adCenter
      - Ado.net, ADO.Net -> ADO.NET
      - Asp.net, ASP.Net -> ASP.NET
      - Asp.Net ajax, Asp.NET Ajax -> ASP.NET AJAX
      - Asp.Net Mvc -> ASP.NET MVC
      - Biz Spark, Bizspark -> BizSpark
      - Clear Type, Clear type, Cleartype -> ClearType
      - Directaccess, Direct Access -> DirectAccess
      - Direct Show, Directshow -> DirectShow
      - Direct X -> DirectX
      - Dream Spark, Dreamspark -> DreamSpark
      - Home Group, Home group -> HomeGroup
      - HotMail, Hot Mail -> Hotmail
      - Info Path, Infopath -> InfoPath
      - Intellisense -> IntelliSense
      - Iron Ruby -> IronRuby
      - Kin -> KIN
      - Linq -> LINQ
      - MSN Messenger -> Windows Live Messenger
      - One Note, Onenote -> OneNote
      - Open type, Opentype -> OpenType
      - PlayTo, Play to -> Play To
      - Power Point, Powerpoint -> PowerPoint
      - Powershell, Power Shell -> PowerShell
      - Sea Dragon, Seadragon -> SeaDragon
      - Sharepoint, Share Point -> SharePoint
      - Silver Light, SilverLight -> Silverlight
      - Skydrive, Sky Drive -> SkyDrive
      - Sql Server -> SQL Server
      - Visual Basic .net (the .net was removed in the 2005 version) -> Visual Basic
      - Visual C# Express 2010 or Visual Basic Express 2010 or Visual C++ Express 2010 -> Visual [language] 2010 Express, as in Visual C# 2010 Express, Visual Basic 2010 Express
      - Visual Studio 2010 Team Foundation Server -> Visual Studio Team Foundation Server 2010
      - Visual Studio Ultimate 2010 or Visual Studio Professional 2010 -> Visual Studio 2010 [edition], as in Visual Studio 2010 Ultimate, Visual Studio 2010 Professional
      - WebSite Spark, Website spark -> Website Spark
      - Win 32 -> Win32
      - Windows Mobile (except when referring to previous versions like 5.0 or 6), Windows phone 7 Series -> Windows Phone
      - Xaml -> XAML
      - XBOX, xbox -> Xbox
      - Xbox Live, XBOX Live -> Xbox LIVE
    Caveats: These guidelines don't apply to URLs (ex: www.asp.net); code namespaces, variables, and classes should follow the .NET Framework naming guidelines. This list only covers capitalization and spacing rules; it doesn't cover the correct usage of trademark symbols such as (tm), or the correct word usage rules. For those, refer to the trademark Web site. Also note that I have no idea why we are so inconsistent, say, on keeping features/brands as two words versus one word, or on the order of product/version/year.

    Read the article

  • New Wine in New Bottles

    - by Tony Davis
    How many people, when their car shows signs of wear and tear, would consider upgrading the engine and keeping the shell? Even if you're cash-strapped, you'll soon work out the subtlety of the economics, the cost of sudden breakdowns, the precious time lost coping with the hassle, and the low 'book value'. You'll generally buy a new car. The same philosophy should apply to database systems. Mainstream support for SQL Server 2005 ends on April 12; many DBAS, if they haven't done so already, will be considering the migration to SQL Server 2008 R2. Hopefully, that upgrade plan will include a fresh install of the operating system on brand new hardware. SQL Server 2008 R2 and Windows Server 2008 R2 are designed to work together. The improved architecture, processing power, and hyper-threading capabilities of modern processors will dramatically improve the performance of many SQL Server workloads, and allow consolidation opportunities. Of course, there will be many DBAs smiling ruefully at the suggestion of such indulgence. This is nothing like the real world, this halcyon place where hardware and software budgets are limitless, development and testing resources are plentiful, and third party vendors immediately certify their applications for the latest-and-greatest platform! As with cars, or any other technology, the justification for a complete upgrade is complex. With Servers, the extra cost at time of upgrade will generally pay you back in terms of the increased performance of your business applications, reduced maintenance costs, training costs and downtime. Also, if you plan and design carefully, it's possible to offset hardware costs with reduced SQL Server licence costs. In his forthcoming SQL Server Hardware book, Glenn Berry describes a recent case where he was able to replace 4 single-socket database servers with one two-socket server, saving about $90K in hardware costs and $350K in SQL Server license costs. Of course, there are exceptions. If you do have a stable, reliable, secure SQL Server 6.5 system that still admirably meets the needs of a specific business requirement, and has no security vulnerabilities, then by all means leave it alone. Why upgrade just for the sake of it? However, as soon as a system shows sign of being unfit for purpose, or is moving out of mainstream support, the ruthless DBA will make the strongest possible case for a belts-and-braces upgrade. We'd love to hear what you think. What does your typical upgrade path look like? What are the major obstacles? Cheers, Tony.

    Read the article

  • Six Unusual Blogs I Like

    - by Bill Graziano
    I subscribe to and read over 100 SQL Server blogs every day.  I link to posts that I think are interesting.  I also read a fair number of non-SQL Server blogs.  Here are a few that I think are interesting. danah boyd. She is a researcher with Microsoft and writes about privacy, social media and teenagers.  I discovered her blog while looking for strategies to keep my personal and professional life separate.  (I haven’t found a good solution to that yet.)  Her stories of how teenagers use Facebook and other social media tools are fascinating. Clayton’s Web Snacks.  Steve Clayton works at Microsoft and has a variety of blogs out there.  This one focuses on … hmmm.  His latest posts are on graffiti, infographics, paper tweets, cartoons and slow motion videos.  It’s mostly visual and you never really know what you’ll get.  It’s always interesting though and I like what he posts.  It’s good creative stuff. Seth Godin.  Seth writes about Marketing.  I read him for motivation to get off my butt and get things done.  He’s a great motivator who encourages you to think big.  And do something! Ask the Pilot.  Patrick Smith is a commercial airline pilot writing about the airline industry.  He’s a great debunker of myths (no they don’t reduce oxygen in the cabin to keep you docile).  My favorite topics include the TSA, flying myths, airport reviews and flight delays. My old favorite flight blog used to be enplaned.  No one knew who wrote it.  It focused on the economics of the airline industry.  It was fascinating stuff.  One day it was gone.  The entire blog was deleted.  Someone tracked down some partial archives and put them online. The Agent’s Journal.  Jack Bechta is an NFL agent.  He writes about the business side of the NFL, the draft and free agency.  Lately he’s been writing about the potential lockout.  He has a distinct lack of hype which I find very refreshing.  xkcd.  I call this the comic for smart people.  A little math, some IT and internet privacy thrown in all make an unusual comic. Funny and intelligent.

    Read the article

  • SOA & Application Grid Specialization step 2 of 6 – References & Marketing Kits

    - by Jürgen Kress
    In our first step to become SOA Specialized & Application Grid Specialized we highlighted our OMM to register your opportunities. We continue our path to specialization with our marketing offerings to create your reference cases and run joint marketing campaigns.
    References: Be Recognized Through Partner Success Stories
    Oracle delivers a wide variety of services and solutions through our partners, and we believe that those successes should be recognized and promoted. References are also required to become specialized. We showcase our partners' capabilities in Oracle products and industries through partner success stories that are published on Oracle.com. For significant implementations, we may invite partners to participate in a press release or be interviewed in a podcast. To participate and take a further step to become specialized, please take a minute to complete the form and tell us about the successful project you have implemented. If your story is selected, we will contact you for an interview.
    Create your references. The partner reference program:
      - Enables partners to be recognized by both Oracle and our customers
      - Provides an opportunity for partners to showcase successes with their customers on Oracle solutions
      - Helps raise awareness of our partners' capabilities, elevating them above their competition
    It is time to submit a SOA and Application Grid reference request today. To learn more about partner references, check out the following resources:
      - Judson Althoff's YouTube Video: Be Recognized with OPN Specialized Reference Program
      - OPN PartnerCast: Be Recognized…Your Reference Matters!!! (MP3)
      - Partner/Customer Reference Brochure (PDF)
    Marketing Kits
    We have created an OFM 11g marketing kit: http://tinyurl.com/soamarketing (OPN account required). The marketing kit includes all the presentations and demos from our launch event. The Oracle package includes:
      - Event templates like invitation, agenda, and confirmation/follow-up templates
      - OFM 11g presentations
      - Free usage of the Oracle Customer Visit Center
      - Condition: mandatory lead registration in the Oracle Open Market Model (OMM)
    To download the material, please make sure that you select the campaign "Enterprise: Fusion Middleware 11g": OFM 11g Oracle Marketing 4 Partners Package http://tinyurl.com/soamarketing (OPN account required)
    For more information on Specialization please visit our OPN Specialized Webcast Series. And become a member of our SOA Partner Community; for registration please visit www.oracle.com/goto/ema/soa
    Jürgen Kress, SOA Partner Adoption EMEA
      SOA Specialized | Application Grid Specialized
      Proof 2 transactions with OMM | Proof 2 transactions with OMM
      Create your 2 references | Create your 2 references
      SOA Sales assessment 3 | Oracle Application Grid Sales Specialist
      SOA Pre-Sales assessment 3 | Oracle Application Grid PreSales Specialist
      Support assessment 1 | Support assessment 2
      SOA Implementation assessment 4 | Application Grid Implementation assessment 4

    Read the article

  • WNA Configuration in OAM 11g

    - by P Patra
    Pre-Requisite: A Kerberos authentication scheme has to exist. This is usually a pre-configured OAM authentication scheme. It should have Authentication Level "2", Challenge Method "WNA", Challenge Direct URL "/oam/server" and Authentication Module "Kerberos". The default authentication scheme name is "KerberosScheme"; this name can be changed. The DNS name has to be resolvable on the OAM Server. The DNS names with referrals to AD have to be resolvable on the OAM Server. Ensure nslookup works for the referrals.
    Pre-Install: The AD team produces the keytab file on the AD server by running the ktpass command. Provide the OAM hostname to the AD team. Receive the following from the AD team:
      - the keytab file produced when running the ktpass command
      - the ktpass username
      - the ktpass password
    Copy the keytab file to a convenient location in the OAM install tree and rename the file if desired, for instance where the oam-policy.xml file resides, i.e. /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/keytab.kt
    Configure WNA Authentication on the OAM Server:
    Create the config file krb.conf and set the environment variable to the path of this file: KRB_CONFIG=/fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf
    The variable KRB_CONFIG has to be set in the profile of the user that the OAM Java container (i.e. WebLogic Server) runs as, for example the "applmgr" user, so that this setting is available to the OAM server. In the krb.conf file specify:
      [libdefaults]
      default_realm = NOA.ABC.COM
      dns_lookup_realm = true
      dns_lookup_kdc = true
      ticket_lifetime = 24h
      forwardable = yes
      [realms]
      NOA.ABC.COM = {
        kdc = hub21.noa.abc.com:88
        admin_server = hub21.noa.abc.com:749
        default_domain = NOA.ABC.COM
      }
      [domain_realm]
      .abc.com = ABC.COM
      abc.com = ABC.COM
      .noa.abc.com = NOA.ABC.COM
      noa.abc.com = NOA.ABC.COM
    where hub21.noa.abc.com is the load-balanced DNS VIP name for the AD server and NOA.ABC.COM is the name of the domain.
    Create an authentication policy to WNA-protect the resource (i.e. EBSR12) and choose "KerberosScheme" as the authentication scheme. Login to OAM Console => Policy Configuration Tab => Browse Tab => Shared Components => Application Domains => IAM Suite => Authentication Policies => Create
      Name: ABC WNA Auth Policy
      Authentication Scheme: KerberosScheme
      Failure URL: http://hcm.noa.abc.com/cgi-bin/welcome
    Edit the System Configuration for Kerberos: System Configuration Tab => Access Manager Settings => expand Authentication Modules => expand Kerberos Authentication Module => double click on Kerberos
      - Edit the "Key Tab File" textbox - put in /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/keytab.kt
      - Edit the "Principal" textbox - put in HTTP/[email protected]
      - Edit the "KRB Config File" textbox - put in /fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf
      - Click "Apply"
    In the script setting the environment for the WLS server where OAM is deployed, set the variable: KRB_CONFIG=/fa_gai2_d/idm/admin/domains/idm-admin/IDMDomain/config/fmwconfig/krb.conf
    Re-start the OAM server and the OAM server container (WebLogic Server).

    Read the article

  • 503.1 Service Unavailable Error Resolution

    - by Lee Brandt
    I was having a hell of a time tonight with IIS on my development laptop. I don't remember doing anything to change the IIS settings, and I don't use IIS that much on my dev machine. Usually Cassini is enough for testing my development efforts, but tonight I needed to replicate a problem that seems to stem from an x86 v x64 mismatch, so I went to create an IIS site pointed to my dev folder. When I did, I got a "503.1 Service Unavailable Error". The first thing I did was go over all my settings to make sure I didn't screw something up when I set up the site. It was pointing to the right place, and the app pool settings seemed to be alright. However, when I got the 503.1 error and went back to my app pool list, I saw that the app pool I was using was stopped again. I must've started and run it a dozen times to verify that I wasn't seeing things. After having a colleague look at it and not finding an answer, I started poking around Google. I came across a post from Phil Haack about the same error. His fix was not mine, however: when I ran his command on the CLI, I didn't see the reserved routes for HTTP.SYS there. Finally, I looked in the event viewer (where I should have looked as soon as I saw that my app pool was stopping) and saw an error in there. For the IIS-W3SVC-WP source I saw:
      The worker process for application pool 'DefaultAppPool' encountered an error 'Cannot read configuration file due to insufficient permissions' trying to read configuration data from file '\\?\C:\Windows\Microsoft.NET\Framework64\v4.0.30319\CONFIG\machine.config', line number '0'. The data field contains the error code.
    So I went to that path and saw a little lock on the file icon. I opened up the security tab of the file properties and saw that it was missing the IIS_IUSRS group. On a machine that was working correctly, I verified that it indeed had the IIS_IUSRS group set to Read and Read & Execute allowed. So I set mine up the same and voila! Hopefully this helps somebody else, too.

    Read the article

  • How valuable are you to your organization?

    - by Lance Shaw
    I don't know about you but I find it easy to get bogged down with the daily list of tasks and deliverables.  We all have lots to do and it all seems to be due tomorrow.  If you are reading this blog, than your to-do list is almost certainly filled with tasks related to the management, processing and publishing of information.  As we get mired in the daily routine of making sure that the content management needs of the organizations are met, we can easily lose sight of the value that we bring.  After all, if information and content is the lifeblood of our organizations, then surely maintaining the healthy flow of that information has real value.  But how can you measure that value and bring it forward on your résumé or your list of achievements in time for your next performance review? The AIIM organization has spent a lot of time recently researching the value of certification for "information professionals".  When it comes to enterprise content management (ECM) there are many areas of specialization including records management, content archivist, digital asset manager, content librarian and more.  Specialization can clearly drive up your value but it can also lock you into a narrow niche area of focus.  AIIM has found that what companies also need is someone that can apply their knowledge of how information is managed within the operational scope of the business in order to drive real, measurable strategic value.  When you can showcase the value of a broader, business-wide mindset to your management, you have more opportunity to make professional progress and drive real growth where it counts, your paycheck.   We here on the Oracle WebCenter team partnered with AIIM on the research they performed around the value of an information professional certification program. In a webinar this week, Doug Miles of AIIM and I will be talking about the results of that recent survey and what it is going to mean in the future to be recognized as a "Certified Information Professional" (CIP).  Oracle sponsored this research to help individuals and companies understand the value of enterprise content management and what it means across the entire organization. I hope you will join us. If any of us were stopped in the street and were asked about it, I bet most of us would think of ourselves as an "Information Professional".  Now we have a way to actually prove it!  There's only one downside that I can see...  you will have to get your business cards updated to include the "CIP" acronym after your name.  I think you will agree that is a price worth paying!

    Read the article

  • Blender DirectX exporter to Panda3D

    - by jakebird451
    I have been experimenting with Panda3D lately. I have a character made in Blender with various bones and currently one animation, which I wish to export to the *.x format for Panda3D. My first attempt was to export the model with bones [Armatures] by checking the "Export Armatures" button in the export menu (file name: char.x). Thanks to the *.x file format, I read the file and it seems to have the same bone structure format as the model (with parenting and matrix positional data). The second export was selecting Animations - Full Animation to provide just the animation (file name: char_idle.x). The model exported just fine. I am not sure about the animation yet, but the file seems to be fine. This is my code for loading the model into Python & Panda3D:
      self.model = Actor("char.x",{"char_idle.x"})
    When I run the program the command line reports a couple of errors, the main errors of interest being:
      :Actor(warning): char.x is not a character!
    and
      ...
      File "C:\Panda3D-1.8.0\direct\actor\Actor.py", line 284, in __init__
        if (type(anims[anims.keys()[0]])==type({})):
      AttributeError: 'set' object has no attribute 'keys'
    The first error is the most interesting to me. The model works if I leave the animation dictionary blank. With no animations loaded the character appears in its un-animated T position; however, the actor warning still shows up. The character should include the various bones when I exported the model, right? I am not that experienced with Blender, I'm just a programmer, so if the problem lies in Blender please try to keep that in mind when posting a reply; I'll try my best to keep up. I also tried to print out the bone structure without any animations loaded, and the line print self.model.listJoints() produces a similar error:
      File "C:\Panda3D-1.8.0\direct\actor\Actor.py", line 410, in listJoints
        Actor.notify.error("no part named: %s" % (partName))
      File "C:\Panda3D-1.8.0\direct\directnotify\Notifier.py", line 132, in error
        raise exception(errorString)
      StandardError: no part named: modelRoot
    I really hope it is a simple exporting fix.
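    As an aside, the AttributeError above is a plain Python issue rather than an export issue: {"char_idle.x"} is a set literal, while Actor expects its second argument to be a dict mapping animation names to animation files, so the .keys() call fails. A minimal sketch of the corrected call is below; the animation name "idle" is illustrative, and the "is not a character" warning is a separate export problem that this does not address.
      # Minimal Panda3D sketch: the anims argument must be a dict, not a set.
      from direct.showbase.ShowBase import ShowBase
      from direct.actor.Actor import Actor

      class Demo(ShowBase):
          def __init__(self):
              ShowBase.__init__(self)
              # {animation_name: animation_file} - a dict literal, not {"char_idle.x"}
              self.model = Actor("char.x", {"idle": "char_idle.x"})
              self.model.reparentTo(self.render)
              self.model.loop("idle")   # play the named animation

      Demo().run()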

    Read the article

  • It's intellisense for SQL Server

    - by Nick Harrison
    Anyone who has ever worked with me, heard me speak, or read any of my writings knows that I am a HUGE fan of Reflector. By extension, I am a big fan of Red Gate. I have recently begun exploring some of their other offerings and came across this jewel. SQL Prompt is a plug-in for Visual Studio and SQL Server Management Studio. It provides several tools to make dealing with SQL a little easier for your friendly neighborhood developer. When you open a query window in a database, the plug-in kicks in and gathers the metadata for the database that you are in. As you type a query, you get handy feedback like a list of tables after you type "select". You can select one of the tables, specify *, and then tab to expand the select clause to include all of the columns from the selected table. As you are building up the where clause, you are prompted with the names of columns in the selected tables. If you spend any time writing ad hoc queries or building stored procedures by hand, this can save you substantial time. If you are learning a new data model, this can greatly cut down on your frustration level. The other really cool thing here is Format SQL. I have searched all over the place for a really good SQL formatter. Badly formatted SQL is so much harder to read than well formatted SQL. Unfortunately, Management Studio offers no support for keeping your SQL well formatted. There are many tools available to format your SQL. Some work better than others, and some don't work that well at all. Most will give you some measure of control over how the formatted SQL looks. SQL Prompt produces good results and is easy to configure. Sadly no tool is perfect, and what would we be without a wish list? There are some features that I would like to see:
      - Make it easier to paste SQL in and out of code: strip off string builders, etc.
      - Automate replacing hard-coded values with bind variables or parameters.
      - In addition to reformatting SQL, which is a huge refactor, support for other SQL refactors would be nice: convert join to subquery and vice versa come to mind.
    Wish list aside, this is a wonderful tool that easily saves me an hour or more most weeks.

    Read the article

  • Go for the Deep Dive on Oracle Products and Technology

    - by Oracle OpenWorld Blog Team
    by Karen Shamban Oracle University gives you more learning for your conference investment. It’s easier than ever before to get in-depth Oracle product and technology training if you’re attending any of the Oracle conferences this fall, including Oracle OpenWorld, the Oracle Customer Experience Summit @ OpenWorld, the Oracle PartnerNetwork Exchange @ OpenWorld, and MySQL Connect. Why is it easier? Because Oracle University preconference training takes place on Sunday, September 30 from 8:00 a.m. to 3:30 p.m. And you’re going to be in town for the conference anyway, right? The training ends early enough in the afternoon that you’ll still be able to get good seats for conference opening keynotes and get psyched for the welcome reception that follows. Each session will be taught by an expert Oracle University instructor and will be fact-packed with demos and tips to help you do more than ever before with your Oracle product and technology investment. The training sessions being offered include: Applications:·             PeopleSoft Test Framework Script Creation and Optimization·             New Integration Technologies for PeopleTools 8.52·             Oracle Fusion Applications: Security Fundamentals Database and Systems:·             Certification Exam Cram: Oracle Database 11g: New Features for Administrators·             Exadata Database Machine Administration Workshop·             Introduction to Big Data·             Using Oracle Enterprise Manager Cloud Control 12c·             Using Java - for PL/SQL and Database Developers Fusion Middleware:·             Developing Portable Java EE Applications with the Enterprise JavaBeans 3.1 API and Java Persistence API 2.0·             Developing Secure Java Web Services·             How The Latest Java EE and SOA Help in Architecting and Designing Robust Enterprise Applications·             Oracle Business Intelligence 11g: Overview to Analyses and Dashboards·             Oracle Fusion Middleware 11g: Build Applications with ADF I·             Oracle Fusion Middleware 11g Administer Forms Services·             Oracle SOA Suite 11g Administration·             WebLogic Server Administration Essentials Don’t miss this great opportunity to maximize your Oracle OpenWorld experience and investment. Learn more about Oracle University training sessions.

    Read the article

  • Thank you Geeks With Blogs for letting me join your community!

    - by GreeNTUG
    First, a link to the blog I can no longer edit because Office Live blew away my digital identity and so I can no longer log into it (the source of a loooong blog about protecting your digital identity sometime when I have more time and after it has played out to the end) http://greentug.spaces.live.com/ The following are the communities I participate in: Green & Sustainability.  I run a virtual user group on Green and Sustainability as it relates to developers and software architects.  It was located at greentug.groups.live.com, and we will need to find a new digital location for it, because I am locked out of that site as well. BizSpark Tampa Bay:  I run a BizSpark group for Microsoft technologists (meetup.com, search for BizSpark Tampa Bay) and speak at Code Camps about "No Better Time to Start Your Own Tech Business".  The meetup group facilitates a balanced presentation that is respectful to anyone wanting to start their own business, whether part-time or full-time, whether micro (just you), sustainable (grow to 2-25-ish, self-funded), high growth (get venture capital or other funding, grow it, sell it within 5 years, do it again), or hybrid (the new model going forward).  It is an "action" group, with assignments and homework if you want to get the most out of it.   At the end of a year you will either have your business on the path to where you want it to be, or you will know the steps you need to do to get it there. Women in Technology Have been participating in the Women in Technology community since 2008, my main interests in this area are mentoring women in the workplace to have them believe they can become geeks and double their income, and to mentor them with respect to starting and running their own business. Access 2010/SharePoint 2010.  This is a game-changer with respect to the Access community (the ap both devs and IT Pros love to hate, the other a-word that's not a fruit).  I conducted Lunch n Learns and Brunch n Learns around this topic before the Office 2010/SharePoint 2010 launch, and spoke on the topic at SharePoint Saturday Tampa in Nov 2009. Interested in learning more about: Using Silverlight HD Streaming out in the non-technical world (horses and equestrian sport).  Migrating to Access Web Services and VB .Net from VBA (see the Access 2010/SharePoint 2010 interest above) Windows Phone 7!  Exciting opportunities both for Green and Sustainability and for my "day job" of Environmental, Health & Safety (EHS). My day job is Environmental, Health & Safetey (EHS) consulting and software solutions, where that interfaces with the developer world is with respect to opportunities around Green and Sustainability, The SmartGrid and Juval Lowy's EnergyNet, both of which will require a lot of technology and software to make them work, The new Microsoft Partner competency for "Digital Home", and The Y2K kind of deadline around how managing chemicals in ERP systems is changing because of Global Harmonization, which hits the EU with a hard deadline on 11/30/10 (yes, this year), and hits the USA about 15 months later. Hope you enjoy my contributions to the digital geek community, and feel free to email me, [email protected] (the email leftover after my digital identity was blown away), and [email protected] (this one could go away at some future point) Best, Kathy Malone

    Read the article

  • Tic Tac Toe Winner in Javascript and html [closed]

    - by Yehuda G
    I am writing a tic tac toe game using HTML, CSS, and JavaScript. I have my JavaScript in an external .js file referenced from the .html file. Within the .js file, I have a function called playerMove, which allows the player to make his/her move and switches between player 'X' and 'O'. What I am trying to do is determine the winner. Here is what I have: each square, when clicked, calls playerMove(this). After each move is made, I want to run an if statement to check for the winner, but am unsure if the parameters would include a reference to 'piece' or to a, b, and c. Any suggestions would be greatly appreciated.
    Javascript:
      var turn = 0;
      a = document.getElementById("topLeftSquare").innerHTML;
      b = document.getElementById("topMiddleSquare").innerHTML;
      c = document.getElementById("topRightSquare").innerHTML;

      function playerMove(piece) {
          var win;
          if(piece.innerHTML != 'X' && piece.innerHTML != 'O'){
              if(turn % 2 == 0){
                  document.getElementById('playerDisplay').innerHTML= "X Plays " + printEquation(1);
                  piece.innerHTML = 'X';
                  window.setInterval("X", 10000)
                  piece.style.color = "red";
                  if(piece.innerHTML == 'X')
                      window.alert("X WINS!");
              } else {
                  document.getElementById('playerDisplay').innerHTML= "O Plays " + printEquation(1);
                  piece.innerHTML = 'O';
                  piece.style.color = "brown";
              }
              turn+=1;
          }
    html:
      <div id="board">
          <div class="topLeftSquare" onclick="playerMove(this)"></div>
          <div class="topMiddleSquare" onclick="playerMove(this)"></div>
          <div class="topRightSquare" onclick="playerMove(this)"></div>
          <div class="middleLeftSquare" onclick="playerMove(this)"></div>
          <div class="middleSquare" onclick="playerMove(this)"></div>
          <div class="middleRightSquare" onclick="playerMove(this)"></div>
          <div class="bottomLeftSquare" onclick="playerMove(this)"></div>
          <div class="bottomMiddleSquare" onclick="playerMove(this)"></div>
          <div class="bottomRightSquare" onclick="playerMove(this)"></div>
      </div>
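    The winner check itself is language-independent: after each move, test the eight possible lines (three rows, three columns, two diagonals) against the current board. A rough sketch of that idea, shown here in Python for brevity (the same loop translates directly into the playerMove() function by reading the nine squares' contents into a list first); all names are illustrative.
      # Sketch of a tic-tac-toe winner check over a flat 9-cell board.
      WIN_LINES = [
          (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
          (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
          (0, 4, 8), (2, 4, 6),              # diagonals
      ]

      def winner(board):
          """Return 'X' or 'O' if a line is complete, otherwise None."""
          for a, b, c in WIN_LINES:
              if board[a] != '' and board[a] == board[b] == board[c]:
                  return board[a]
          return None

      # Example: X has completed the top row.
      print(winner(['X', 'X', 'X', 'O', 'O', '', '', '', '']))  # -> 'X'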

    Read the article

  • ODI 11g - Dynamic and Flexible Code Generation

    - by David Allan
    ODI supports conditional branching at execution time in its code generation framework. This is a little used, little known, but very powerful capability - it lets one piece of template code behave dynamically based on a runtime variable's value, for example. Generally, knowledge modules are free of any variable dependency. Using variables within a knowledge module for this kind of dynamic capability is a valid use case - definitely in the highly specialized area. The example I will illustrate is much simpler - how to define a filter (based on a mapping here) that may or may not be included depending on whether a certain value is defined for a variable at runtime. I define a variable V_COND; if I set this variable's value to 1, then I will include the filter condition 'EMP.SAL > 1', otherwise I will just use '1=1' as the filter condition. I use ODI's substitution tags with a special tag, '<$', which is processed just prior to execution in the runtime code - so this code is included in the ODI scenario code and it is processed after variables are substituted (unlike the '<?' tag). So the lines below are not equal:
      <$ if ( "#V_COND".equals("1")  ) { $> EMP.SAL > 1 <$ } else { $> 1 = 1 <$ } $>
      <? if ( "#V_COND".equals("1")  ) { ?> EMP.SAL > 1 <? } else { ?> 1 = 1 <? } ?>
    When the '<?' code is evaluated, the code is executed without variable substitution - so we do not get the desired semantics; you must use the '<$' code. The Jython (Java) code - the conditional if statement - drives whether 'EMP.SAL > 1' or '1=1' is included in the generated code. For this illustration you need at least the ODI 11.1.1.6 release - with the vanilla 11.1.1.5 release it didn't work for me (maybe patches?). As I mentioned, normally KMs don't have dependencies on variables - since any users must then have these variables defined, etc. - but it does afford a lot of runtime flexibility if such capabilities are required - something to keep in mind, definitely.

    Read the article

  • Compute directional light frustum from view frustum points and light direction

    - by Fabian
    I'm working on a friend's engine project and my task is to construct a new frustum from the light direction that overlaps the view frustum and possible shadow casters. The project already has a function that creates a frustum for this, but it's way too big and includes way too many casters (shadows) which can't be seen in the view frustum. The only parameters of this function are the normalized light direction vector and a view class which lets me extract the 8 view frustum points in world space. I don't have any additional info about the scene. I have read some of the related questions here but none seem to fit my problem very well, as they often just point to cascaded shadow maps. Sadly I can't use DX or OpenGL functions directly because this engine has a dedicated math library. From what I've read so far the steps are: transform the view frustum points into light space, find the min/max x and y values (or sometimes the minima and maxima of all three axes), and create an AABB using the min/max vectors. But what comes after this step? How do I transform this new AABB back to world space? What I've done so far:
      CVector3 Points[8], MinLight = CVector3(FLT_MAX), MaxLight = CVector3(FLT_MAX);
      for(int i = 0; i<8;++i){
          Points[i] = Points[i] * WorldToShadowMapMatrix;
          MinLight = Math::Min(Points[i],MinLight);
          MaxLight = Math::Max(Points[i],MaxLight);
      }
      AABox box(MinLight,MaxLight);
    I don't think this is the right way to do it. The near plane probably has to extend in the direction of the light source to include potential shadow casters. I've read the Microsoft article about cascaded shadow maps, http://msdn.microsoft.com/en-us/library/windows/desktop/ee416307%28v=vs.85%29.aspx, which also includes some sample code. But they seem to use the scene's AABB to determine the near and far planes, which I can't, since I can't access this information from the function I'm working in. Could you guys please link some example code which shows the calculation of such a frustum? Thanks in advance! Additional question: is there a way to construct a WorldToFrustum matrix that represents the above transformation?
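    For reference, a rough numpy sketch of the usual approach is below, not the engine's math library: it assumes a directional light, so a pure rotation into light space is enough, and all function and variable names are illustrative. Because that basis is a pure rotation, its inverse is simply its transpose, which is also the answer to the "how do I get back to world space" part.
      # Rough sketch: fit a light-space box around the view frustum (directional light).
      import numpy as np

      def light_space_bounds(frustum_corners_ws, light_dir):
          """frustum_corners_ws: (8, 3) world-space corners; light_dir: normalized direction."""
          fwd = light_dir / np.linalg.norm(light_dir)
          up = np.array([0.0, 1.0, 0.0])
          if abs(np.dot(up, fwd)) > 0.99:               # avoid a degenerate basis
              up = np.array([1.0, 0.0, 0.0])
          right = np.cross(up, fwd); right /= np.linalg.norm(right)
          up = np.cross(fwd, right)
          world_to_light = np.stack([right, up, fwd])   # rows = light-space axes (world -> light)
          pts = frustum_corners_ws @ world_to_light.T   # frustum corners in light space
          mins, maxs = pts.min(axis=0), pts.max(axis=0)
          # Pull the near side back towards the light so off-screen casters are kept;
          # without scene bounds this offset is only a guess.
          mins[2] -= 100.0
          return world_to_light, mins, maxs

      # A light-space point p maps back to world space as world_to_light.T @ p,
      # since the matrix is a pure rotation (its inverse is its transpose).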

    Read the article

  • Oracle Utilities Application Framework V4.2.0.0.0 Released

    - by ACShorten
    The Oracle Utilities Application Framework V4.2.0.0.0 has been released with Oracle Utilities Customer Care And Billing V2.4. This release includes new functionality and updates to existing functionality and will be progressively released across the Oracle Utilities applications. The release is quite substantial, with lots of new and exciting changes. The release notes shipped with the product include a summary of the changes implemented in V4.2.0.0.0, including the following:
      - Configuration Migration Assistant (CMA) - A new data management capability to allow you to export and import Configuration Data from one environment to another, with support for Approval/Rejection of individual changes.
      - Database Connection Tagging - Additional tags have been added to the database connection to allow database administrators, Oracle Enterprise Manager and other Oracle technology the ability to monitor and use individual database connection information.
      - Native Support for Oracle WebLogic - In the past the Oracle Utilities Application Framework used Oracle WebLogic in embedded mode; now, to support advanced configuration and the ExaLogic platform, we are adding native support for Oracle WebLogic as a configuration option.
      - Native Web Services Support - In the past the Oracle Utilities Application Framework supplied a servlet to handle Web Services calls; now we offer an alternative using the native Web Services capability of Oracle WebLogic. This allows for enhanced clustering, a greater level of Web Service standards support, enhanced security options, and the ability to use the Web Services management capabilities in Oracle WebLogic to implement higher levels of management, including defining additional security rules to control access to individual Web Services.
      - XML Data Type Support - Oracle Utilities Application Framework now allows implementors to define XML data types used in Oracle in the definition of custom objects to take advantage of XQuery and other XML features.
      - Fuzzy Operator Support - Oracle Utilities Application Framework supports the use of the fuzzy operator in conjunction with Oracle Text to take advantage of the fuzzy searching capabilities within the database.
      - Global Batch View - A new JMX based API has been implemented to allow JSR120 compliant consoles the ability to view batch execution across all threadpools in the Coherence based Named Cache Cluster.
      - Portal Personalization - It is now possible to store runtime customizations of query zones, such as preferred sorting, field order and filters, and reuse them as personal preferences each time that zone is used.
    These are just the major changes, and there are quite a few more that have been delivered (and more to come in the service packs!). Over the next few weeks we will be publishing new whitepapers and new entries in this blog outlining new facilities that you will want to take advantage of.

    Read the article

  • BizTalk 2009 - Creating a Custom Functoid Library

    - by StuartBrierley
    If you find that you need to create multiple custom functoids, you may also choose to create a Custom Functoid Library - a single project containing many custom functoids. As previously discussed, the Custom Functoid Wizard can be used to create a project with a new custom functoid inside. But what if you want to extend this project to include more custom functoids and create your Custom Functoid Library? First create a Custom Functoid Library project and your first custom functoid using the Custom Functoid Wizard. When you open your Custom Functoid Library project in Visual Studio you will see that it contains your custom functoid class file along with its resource file. One of the items this resource file contains is the ID of the custom functoid. Each custom functoid needs a unique ID that is over 6000. When creating a Custom Functoid Library, I would first suggest that you delete the ID from this resource file and instead create a _FunctoidIDs class containing constants for each of your custom functoids. In this way you can easily see which custom functoid IDs are assigned to which custom functoid and which ID is next in the sequence of availability:
      namespace MyCompany.BizTalk.Functoids.TestFunctoids
      {
          class _FunctoidIDs
          {
              public const int TestFunctoid = 6001;
          }
      }
    You will then need to update the base() function in your existing functoid class to reference these constant values rather than the current resource file.
    From:
      int functoidID;
      // This has to be a number greater than 6000
      functoidID = System.Convert.ToInt32(resmgr.GetString("FunctoidId"));
      this.ID = functoidID;
    To:
      this.ID = _FunctoidIDs.TestFunctoid;
    To create a new custom functoid you can copy the existing custom functoid, renaming the resultant class file as appropriate. Once it is renamed you will need to change the class name, ResourceName reference and base function name in the class code to those of your new custom functoid. You will also need to create a new constant value in the _FunctoidIDs class and update the ID reference in your code to match it. Assuming that you need some different functionality from your new custom functoid, you will need to check or amend the following in your functoid class file:
      - Min and Max connections
      - Functoid Category
      - Input and Output connection types
      - The parameters and functionality of the Execute function
    To change the appearance of your new custom functoid you will need to check or amend the following in the functoid resource file:
      - Name
      - Description
      - Tooltip
      - Exception
      - Icon
    You can change the string values by double clicking the resource file and amending the value fields in the string table. To amend the functoid icon you will need to create a 16x16 bitmap image. Once you have saved this you are then ready to import it into the functoid resource file: in Visual Studio change the resource view to images, right click the icon and choose import from file. You have now completed your new custom functoid and created a Custom Functoid Library. You can test your new library of functoids by building the project, copying the resultant DLL to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, and then resetting the toolbox in Visual Studio.

    Read the article

  • ASP.NET MVC 3 Hosting :: Deploying ASP.NET MVC 3 web application to server where ASP.NET MVC 3 is not installed

    - by mbridge
    You can build a sample ASP.NET MVC 3 application and try deploying it to your hosting provider. To try it out, first deploy it to a web server where ASP.NET MVC 3 is installed. In this posting I will tell you which files you need and where you can find them.
    Here are the files you need to upload to get the application running on a server where ASP.NET MVC 3 is not installed. You can also change the reference to System.Web.Helpers.dll to be a local one so that it is copied to the bin folder of your application. The first file in this list is my web application dll, and you don't need it to get ASP.NET MVC 3 running. All other files are located in the following folder: C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\ If more files are needed in some other scenarios, please leave me a comment here. And don't forget to convert the folder in IIS to an application.
    While developing an application locally, this isn't a problem. But when you are ready to deploy your application to a hosting provider, this might well be a problem if the hoster does not have the ASP.NET MVC assemblies installed in the GAC. Fortunately, ASP.NET MVC is still bin-deployable. If your hosting provider has ASP.NET 3.5 SP1 installed, then you'll only need to include the MVC DLL. If your hosting provider is still on ASP.NET 3.5, then you'll need to deploy all three. It turns out that it's really easy to do so. Also, ASP.NET MVC runs in Medium Trust, so it should work with most hosting providers' Medium Trust policies. It's always possible that a hosting provider customizes their Medium Trust policy to be draconian.
    Deployment is easy when you know what to copy into the archive for publishing your web site on ASP.NET MVC 3 or later versions. What I like to do is use the Publish feature of Visual Studio to publish to a local directory and then upload the files to my hosting provider. If your hosting provider supports FTP, you can often skip this intermediate step and publish directly to the FTP site.
    The first thing I do in preparation is to go to my MVC web application project and expand the References node in the project tree. Select the aforementioned three assemblies and, in the Properties dialog, set Copy Local to True. Now just right click on your application and select Publish. This brings up the Publish wizard. Notice that in this example I selected a local directory. When I hit Publish, all the files needed to deploy my app are available in the directory I chose, including the assemblies that were in the GAC.
    Other ASP.NET MVC 3 articles:
    - New Features in ASP.NET MVC 3
    - ASP.NET MVC 3 First Look

    Read the article

  • Migrating a blog from Orchard 0.5 to 0.9

    - by Bertrand Le Roy
    My personal blog still runs on Orchard 0.5, because the theme that I used to build it is not yet available for more recent versions, but it is still very important for me to know that I can migrate all my content and comments to a new version at any time. Fortunately, Nick Mayne has been consistently shipping a BlogML module a few days after each of the Orchard versions shipped. Because the module gallery for each version is behind a different URL and is kept alive even after a new one has shipped, it is very easy to install the module for both versions.
    Step 0: Setting up the migration environment. In order to do the migration, I made a local copy of the production site on my laptop (data included: I'm using SQL CE) and I also created a new local site with a fresh install of Orchard 0.9.
    Step 1: Enable the gallery feature on both versions. From the admin UI, go to Features and locate the Gallery feature under "Packaging". Enable it. You may now click on "Browse Gallery" on the 0.5 instance and "Modules" under "Gallery" for 0.9.
    Step 2: Install the BlogML module on both versions. From the gallery page, locate the BlogML module and install it. Do it on both versions. Then go to Features and enable BlogML under "Content Publishing". Do it on both versions.
    Step 3: Export from the 0.5 version. Click on "Manage Blog", then on "Export using BlogML" from the 0.5 version. The module then informs you of the path of the saved file.
    Step 4: Import into the 0.9 version. From the 0.9 version, click "Import" under "Blogs". Click the button to browse to the file that you just saved from 0.5, then click "Upload file and Import".
    Step 5: Copy the 0.5 media folder into 0.9. Copy the contents of the 0.5 version's media folder into the media folder of the 0.9 version. Once that is done, you can delete the "Default/Blog Exports" subfolder.
    Step 6: Configure the target blog. Click "Manage Blog", then "Blog Properties" and restore any properties you had on the source blog. For me, that was the title and URL, as well as setting the blog as the home page and showing it on the main menu.
    Step 7: Republish the new site to the production server. Once this is done and everything works locally, you are ready to publish to the production site. I use FTP.
    Note: this should work just as well for any pair of versions for which the BlogML module exists, not just 0.5 and 0.9.

    Read the article

  • Given the presentation model pattern, is the view, presentation model, or model responsible for adding child views to an existing view at runtime?

    - by Ryan Taylor
    I am building a Flex 4 based application using the presentation model design pattern. This application will have several different components to it as shown in the image below. The MainView and DashboardView will always be visible, and they each have corresponding presentation models and models as necessary. These views are easily created by declaring their MXML in the application root:

        <s:HGroup width="100%" height="100%">
            <MainView width="75%" height="100%"/>
            <DashboardView width="25%" height="100%"/>
        </s:HGroup>

    There will also be many WidgetViewN views that can be added to the DashboardView by the user at runtime through a simple drop down list. This will need to be accomplished via ActionScript. The drop down list should always show which WidgetViewN views have already been added to the DashboardView, so some state about which WidgetViewN views have been created needs to be stored. Since the list of available WidgetViewN views, and which ones are added to the DashboardView, also needs to be accessible from other components in the system, I think this needs to be stored in a model object.
    My understanding of the presentation model design pattern is that the view is very lean. It contains as close to zero logic as is practical. The view communicates/binds to the presentation model, which contains all the necessary view logic. The presentation model is effectively an abstract representation of the view which supports low coupling and eases testability. The presentation model may have one or more models injected into it in order to display the necessary information. The models themselves contain no view logic whatsoever.
    So I have several questions around this design:
    - Who should be responsible for creating the WidgetViewN components and adding them to the DashboardView? Is this the responsibility of the DashboardView, DashboardPresentationModel, DashboardModel or something else entirely?
    - It seems like the DashboardPresentationModel would be responsible for creating/adding/removing any child views from its display, but how do you do this without passing the DashboardView in to the DashboardPresentationModel?
    - The list of available and visible WidgetViewN components needs to be accessible to a few other components as well. Is it okay for a reference to a WidgetViewN to be stored/referenced in a model?
    - Are there any good examples of the presentation model pattern online in Flex that also include creating child views at runtime?

    Read the article
