Search Results

Search found 29811 results on 1193 pages for 'table of contents'.

  • Numbered paragraphs in Word 2007

    - by Kit
    I have the following styles defined in Word 2007. They all have outline levels 1-6, and they show up correctly in the Table of Contents (not all of them - I only set the TOC up to Level 3):

        1 Heading 1
        1.1 Heading 2
        1.1.1 Heading 3
        1.1.1.1 Heading 4
        1.1.1.1.1 Heading 5
        1.1.1.1.1.1 Heading 6

    This is what I want:

        1 Heading 1
        1.1 Body text under Heading Level 1
        1.2 Body text under Heading Level 1
        2 Heading 1
        2.1 Heading 2
        2.1.1 Body text under Heading Level 2
        2.1.2 Body text under Heading Level 2
        2.1.3 Body text under Heading Level 2
        2.2 Heading 2
        2.2.1 Body text under Heading Level 2
        2.2.2 Body text under Heading Level 2

    How do I make the two list sequences link to each other? Here's a fill-in-the-blanks illustration:

        {section number} Heading 1
        {section number}.{clause number} Body text under Heading Level 1
        {section number}.{clause number} Body text under Heading Level 1

    The example above should expand to:

        1 Heading 1
        1.1 Body text under Heading Level 1
        1.2 Body text under Heading Level 1

    Another example:

        {section number} Heading 1
        {section number}.{subsection number} Heading 2
        {section number}.{subsection number}.{clause number} Body text under Heading Level 2
        {section number}.{subsection number}.{clause number} Body text under Heading Level 2

    should expand to:

        2 Heading 1
        2.1 Heading 2
        2.1.1 Body text under Heading Level 2
        2.1.2 Body text under Heading Level 2
        2.1.3 Body text under Heading Level 2

    The numbered body-text paragraphs shouldn't show up in the Table of Contents. I couldn't find the right way to do this, whether with multilevel lists, fields, styles, etc. How do I do it right?
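    One possible direction (a sketch using Word's caption-numbering field codes rather than a second multilevel list; the sequence names "clause" and "clause2" are arbitrary labels, and the switches should be verified against Word 2007's field reference) is to number the body paragraphs with STYLEREF/SEQ field pairs, since SEQ-numbered paragraphs carry no outline level and therefore stay out of the Table of Contents:

        { STYLEREF 1 \s }.{ SEQ clause \* ARABIC \s 1 }       for body text under Heading 1
        { STYLEREF 2 \s }.{ SEQ clause2 \* ARABIC \s 2 }      for body text under Heading 2

    Insert each pair of field braces with Ctrl+F9 (they cannot be typed by hand) and press F9 to update. The \s switch on SEQ restarts the counter whenever a heading of the given level occurs, which is what links the body-text sequence to the heading sequence.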

    Read the article

  • Apache2 doesn't serve PHP-scripts correctly

    - by cmbrnt
    I've run into a problem with my Apache 2.2.16 configuration, running on Debian Squeeze: it has stopped serving PHP5 scripts completely. When I try to access the sites with Google Chrome, it instead downloads a file called "download", which contains the source of the script. This is of course not a good thing. It does serve plain HTML files perfectly. I've been at this for quite a while now, and after all the googling and troubleshooting I thought it would be a good time to ask you guys. Here's what I've got:

        The php5 and libapache2-mod-php5 packages are installed.
        /etc/apache2/mods-available contains both php5.load and php5.conf, and these are symlinked from the mods-enabled directory.
        The /etc/php5/ directory is untouched since the installation.

    Here are the contents of /etc/apache2/mods-available/php5.load:

        LoadModule php5_module /usr/lib/apache2/modules/libphp5.so

    And /etc/apache2/mods-available/php5.conf:

        <IfModule mod_php5.c>
          <FilesMatch "\.ph(p3?|tml)$">
            SetHandler application/x-httpd-php
          </FilesMatch>
          <FilesMatch "\.phps$">
            SetHandler application/x-httpd-php-source
          </FilesMatch>
          <IfModule mod_userdir.c>
            <Directory /home/*/public_html>
              php_admin_value engine Off
            </Directory>
          </IfModule>
        </IfModule>

    What am I missing? This is a server with modified virtual hosts and the like, so I might have changed some setting that causes this problem, but simply purging and reinstalling is not an option so far, since the configuration is quite extensive. Any help would be great. Thanks.
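    A quick way to confirm whether mod_php is actually loaded (a diagnostic sketch for Debian Squeeze; adjust the module name if your files differ):

        # list the modules Apache has loaded; php5_module should appear
        apache2ctl -M | grep php
        # recreate the mods-enabled symlinks and restart, in case they are stale
        a2enmod php5
        /etc/init.d/apache2 restart

    If php5_module is loaded but scripts still download, grep the virtual host files for a stray "php_admin_flag engine off" or a SetHandler/AddHandler override.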

    Read the article

  • What does this example bash startup script do?

    - by Dimitri
    I am trying to set up GNU Octave on my computer (Mac OS X 10.7.4). I am a newbie at using the Terminal and I need help understanding what the following script actually does:

        if [ -f ~/.bashrc ]; then
            . ~/.bashrc
        fi
        PATH=$PATH:/usr/local/bin
        BASH_ENV=~/.bashrc
        export BASH_ENV PATH
        export GNUTERM=aqua
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"
        alias gnuplot="/Applications/Gnuplot.app/Contents/Resources/bin/gnuplot"

    (taken from here: http://wikibox.stanford.edu/me112/index.php/Main/OctaveMatlabNotes) So this script begins with a simple conditional if statement. I don't understand the conditional expression - what are -f and .bashrc? What does the statement . ~/.bashrc actually do? Then two variables, PATH and BASH_ENV, are defined. Why are they exported? Why is GNUTERM=aqua exported even though it's not defined anywhere? All I need is a script that would allow me to run Octave by simply typing octave in the terminal. I don't need an alias for gnuplot. Thanks
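    If all you want is the octave command, a minimal ~/.bash_profile would do (a sketch; the path assumes the standard Octave.app install location mentioned above):

        # make the Octave binary reachable by name
        alias octave="/Applications/Octave.app/Contents/Resources/bin/octave"
        # tell Octave to draw plots with the native Aqua terminal
        export GNUTERM=aqua

    Save it as ~/.bash_profile and open a new Terminal window for it to take effect.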

    Read the article

  • Restrict access to IIS virtual directory from root website

    - by senthilkumar-c
    Hi, I have two domains (domain1.com and domain2.com). Both of them use the same Windows hosting service with IIS7. One of the domains is called the "primary domain" by my hosting provider, and it always points to the root folder that I was given. For the other domain, I have created a virtual directory in IIS and pointed it there. The folder structure is like this:

        root
        --Default.aspx
        --domain2folder
        ----Default.aspx

    So if I type domain1.com, I see the regular Default.aspx. But if I type domain2.com, I am shown the contents of domain2folder as if it were a separate web application - I think that is what an IIS virtual directory is meant for. Well and good. But the problem is, when I type http://domain1.com/domain2folder/, I see domain2's website! I don't want that to be shown when the path is used like that from domain1; users should only be able to see those contents via domain2.com. How can I do that? Hope I am making sense. Thanks.
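    One way to handle this (a sketch, assuming you can install Microsoft's URL Rewrite module and drop a web.config in the root folder; the rule name is arbitrary) is to return 403 whenever the folder is requested through domain1:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <!-- block /domain2folder when the request arrives via domain1 -->
                <rule name="BlockDomain2FolderOnDomain1" stopProcessing="true">
                  <match url="^domain2folder(/.*)?$" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^(www\.)?domain1\.com$" />
                  </conditions>
                  <action type="CustomResponse" statusCode="403"
                          statusReason="Forbidden"
                          statusDescription="Use domain2.com for this content" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>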

    Read the article

  • Is it possible to use Google Docs Viewer to view files already in Google Docs?

    - by john2x
    The title is a little confusing, so I'll elaborate. As far as I can tell, the Google Docs Viewer tool accepts a link to a raw document file (e.g. .doc, .pdf, and so on) and renders its contents in the browser. For example, this URL to a PDF

        http://research.google.com/archive/bigtable-osdi06.pdf

    when passed to the Viewer, returns this link:

        http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf

    What I'm trying to achieve is to use the Viewer to view a document already hosted in Google Docs (i.e. no longer a raw document file). When passing a link to a Google Docs document to the Viewer, the result is not as expected: it renders the link's HTML source instead of the document's contents. The reason I want to do this is that I want to be able to use the "embed" feature of the Viewer to view Google Docs documents. Does Google Docs have a "link to embeddable view" feature? P.S. Here is a sample snippet for an embedded document. This is what I want, but pointing to an existing Google Docs document.

        <iframe src="http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf&embedded=true" width="600" height="780" style="border: none;"></iframe>

    Read the article

  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to your HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get:

        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem...
        Status: 0xc000000e
        Info: The boot selection failed because a required device is inaccessible.

    The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my Recovery Environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted; however, I got the same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there - only my Windows installation. Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like, but it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html.)
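    For reference, WinRE registration can be inspected and repaired from an elevated Command Prompt with reagentc, which ships with Windows 7 (a sketch; the image path shown is the usual location and may differ on your system):

        REM show whether a recovery environment is registered and where it lives
        reagentc /info
        REM point Windows at the WinRE image and register it on the boot menu
        reagentc /setreimage /path C:\Recovery\WindowsRE
        reagentc /enable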

    Read the article

  • Loading the preview function of AUCTeX 11.86 on the MacPorts Emacs-app 23.2.1 port

    - by Sarah
    I've installed Emacs-app 23.2.1 via MacPorts and I'm trying to install AUCTeX 11.86 so that it will work with this installation. I've run the following configure line for AUCTeX, and that seems to work:

        ./configure --with-emacs=/Applications/MacPorts/Emacs.app/Contents/MacOS/Emacs \
            --with-lispdir=/Applications/MacPorts/Emacs.app/Contents/Resources/site-lisp/ \
            --with-texmf-dir=/usr/local/texlive/2010basic/texmf-local/

    make and make install seem to work, and I've added the following line to my init.el, as per the installation instructions:

        (require 'tex-site)

    However, when I open a TeX file, the Preview menu does not show up (although the LaTeX menu does). The following are some of my tests:

        M-x load-library RET preview-latex RET doesn't seem to do anything.
        M-x load-library RET preview RET brings up the Preview menu.

    Is it safe to somehow add the preview load-library call to my init.el, or do I risk mucking something up? I'm new to Emacs and primarily trying to learn it because of the AUCTeX preview features, but I don't feel very safe in this environment yet.
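    For what it's worth, the AUCTeX installation instructions suggest loading both packages explicitly from init.el (a sketch taken from the manual's recommended setup; it assumes the lispdir used in the configure line above is on your load-path):

        ;; load AUCTeX and its preview support at startup
        (load "auctex.el" nil t t)
        (load "preview-latex.el" nil t t)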

    Read the article

  • Why does my simple RAID 1 backup storage sometimes perform really slowly?

    - by randomguy
    I bought 2x Samsung F3 EcoGreen 2TB hard disks to make backup storage. I put them in RAID 1 (mirror) mode, made a single partition, and formatted it to NTFS, running Windows 7. For some reason, accessing the drive's contents (simply by navigating folders) is sometimes really slow. Opening D:/photos/ can sometimes take several seconds before it starts showing any of the folder's contents; the same applies to other folders. What could be causing this, and what could I do to improve the performance? I remember that there was an option somewhere inside Windows to choose fast access but less reliable persistence operations (read/write) - it was a tick box inside some dialog. At the time, it felt like a good idea to untick it and get more reliable persistence but slower access, but now I'm regretting it. I'm unable to find this dialog; I've looked hard. I don't know if it would make any difference anyway. Oh, and I've run scan disk and defrag on the drive - no errors, and the speed isn't improved.
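    The tick box described sounds like the drive's write-caching policy (a guess): Device Manager -> the disk -> Properties -> Policies -> "Enable write caching on the device". If folder listings stay slow after re-enabling it, disabling last-access timestamp updates sometimes speeds up NTFS directory traversal (a hedged suggestion; run from an elevated prompt and reboot afterwards):

        fsutil behavior set disablelastaccess 1
        REM check the current setting
        fsutil behavior query disablelastaccess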

    Read the article

  • Apache directory access with virtual host

    - by alexeygaidamaka
    I have a virtual host with a configuration like the one below. When I try to get into foobar.com/dir and provide a valid username/password pair, I get a 403 Forbidden page instead of that directory's contents. www.foobar.com/dir has 777 rights, and .htpasswd is chmoded 644, but I can't figure out why I am still not seeing the contents. Please give me a hint.

        ServerAdmin webmaster@localhost
        ServerName www.foobar.com
        ServerAlias www.foobar.com
        DocumentRoot /var/www/foobar
        <Directory />
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        <Directory /var/www/foobar>
            Options -Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            Order allow,deny
            Allow from all
        </Directory>
        <Directory /var/www/foobar/dir>
            AllowOverride AuthConfig
            AuthName "Authorize yourself, please!"
            AuthType Basic
            AuthUserFile /etc/apache2/.htpasswd
            AuthGroupFile /dev/null
            Allow from All
            Order Allow,Deny
            Require valid-user
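    One likely culprit (a guess based on the config above): /var/www/foobar sets Options -Indexes, which the /dir block inherits, so if /dir contains no index file Apache returns 403 right after a successful login. Also, Order Allow,Deny with Allow from all denies nothing and can be dropped. A sketch of a cleaner block:

        <Directory /var/www/foobar/dir>
            Options +Indexes          # only if you actually want a directory listing
            AuthType Basic
            AuthName "Authorize yourself, please!"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
        </Directory>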

    Read the article

  • Network driver for Hyper-V restore from Windows Home Server

    - by Philipp Schmid
    I have backed up a Windows Server 2008 machine running virtualized on Hyper-V to a Windows Home Server 2008 SP1 box (I know I should have backed up the VHD instead). Now I need to restore the contents of the VM from WHS. I have created a restore CD ISO and used it to create a new VM. It all works as advertised up to the point where the restore process wants to load the network drivers (it only finds 4 disk drivers on the restore CD, but no network drivers). So I created a virtual floppy and copied the contents of "Home Server Drivers for Restore" onto it. But no luck! I have tried moving the 4 subdirectories into the root of the floppy, but that didn't work either. Finally, I started another instance of WS 2008 to identify the network driver that the virtualized instance is using (%WINDOWS%\system32\drivers\netvsc60.sys) and copied that file onto the virtual floppy, without success. Does anyone have any suggestions on how to get networking working on a Hyper-V instance running off the Windows Home Server Restore CD? UPDATE: As suggested by delenda, I have added a legacy network adapter to my VM, and indeed I now get a network driver listed! However, the WHS is still not found, even after entering the home server name manually. PHS

    Read the article

  • Windows 7 - ignore security when reading external drive

    - by w-
    Hi, the system hard drive on an XP computer of mine kind of failed (random corrupt sectors). So I got a new hard drive and am trying to recover the files. The filesystem is NTFS. The system I'm using for the recovery is Windows 7, and I'm obviously an admin on this box. The last data I'm trying to recover is the stuff in the Documents and Settings folder. I'm using a SATA-to-USB cable thingy so that I can just plug the old disk in as an external hard drive. The problem: in Windows Explorer, when I try to copy the data, I keep getting prompted with security warnings and error messages. It keeps telling me I have to change the owner permissions of the folder and all its contents. If I tell it to change all the file and folder permissions, it takes a really long time because it has to recurse through all the folder contents to change them. Is there a way for me to ignore the file permissions when doing this? Thanks
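    A common way through this (a sketch from an elevated Command Prompt; E: stands in for whatever letter the external drive received) is to take ownership once, recursively, and grant yourself full control rather than clicking through Explorer's dialogs:

        takeown /f "E:\Documents and Settings" /r /d y
        icacls "E:\Documents and Settings" /grant Administrators:F /t

    It still walks every file, but it runs unattended and is much faster than the Explorer equivalent.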

    Read the article

  • How do I configure IIS to allow access to network resources for PHP scripts?

    - by Dereleased
    I am currently working on a PHP front-end that joins together a series of applications running on separate servers; many of these applications generate files that I need access to, but those files (for various reasons) reside on their parent servers. If I, from the command line, issue a bit of script such as:

        <?php var_dump(glob("\\\\machine-name\\some\\share\\*"));

    I will get the full contents of that directory, proving that there's no problem programmatically with PHP reading the contents of a UNC share. However, if I try to execute the same script from the web server, I get an empty array; more specifically, if I use the functions explicitly designed to "open" a directory like it was a file, I get access errors. I believe this to be a permissions issue, but I am not a server/network administrator type, so I'm not sure what I need to do to correct this and get my script running. The links I've checked out have not been a terrible amount of help, perhaps due to my background, or lack thereof as far as IIS is concerned, coupled with the fact that we are not actually using .NET for this. Relevant stats:

        Windows Server 2008 Standard SP2
        IIS 7.0
        PHP 5.2.9

    I will be connecting to two types of servers: a few other nearly identical Server 2008 machines, and a machine running embedded XP. Links that have not been particularly helpful, but maybe I am just misreading:

        http://support.microsoft.com/?id=306158
        http://support.microsoft.com/kb/207671/EN-US/
        http://support.microsoft.com/kb/280383/
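    The usual fix (hedged: this assumes the PHP site runs under its own application pool, and DOMAIN\svc_php is a placeholder for a domain account with read rights on the shares) is to run the application pool as a domain identity instead of the default machine identity, e.g. with appcmd:

        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" ^
            /processModel.identityType:SpecificUser ^
            /processModel.userName:"DOMAIN\svc_php" ^
            /processModel.password:"secret"

    Anonymous requests impersonate IUSR by default, so switching the site's anonymous authentication to use the application pool identity may be needed as well.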

    Read the article

  • Kindle (client) for Mac--text search or highlighting/notes?

    - by doug
    Just so we're clear, I'm talking about the client/software version here - i.e., the one you install on your Mac or PC - not the device. The Kindle client was recently released for the Mac. I downloaded it and bought a couple of Kindle-edition books to view in this client. Astonishingly, two features I consider more or less essential to any ebook reader are missing from the Kindle client - either that, or I can't find them: (i) text searching; and (ii) highlighting text. First, does anyone know how to access the search feature? I'm aware of the "Go To" button at the top middle of the reader window; the options in that menu when you click the button are "Cover", "Table of Contents", "Beginning" and "Location". "Location" requires that you type in an integer (which doesn't correspond to a page number - e.g., typing "167" brought me to the table of contents), not a search term. Second, there's a button in the upper right-hand corner of the window, "Show Notes and Marks", yet I can't find any way to highlight text. The only kind of "note" or "mark" I have been able to record is to "bookmark" a page by clicking the "bookmark" button, also at the top of the window.

    Read the article

  • Can I install Windows on an SSD and access data from my old Windows HDD?

    - by nzifnab
    I purchased new computer components, switching my hardware from AMD and Radeon to Intel and Nvidia. I kept components from my old computer, like the power supply and two HDDs. Everything appeared to install correctly and the system booted into the BIOS just fine (after a brief snafu with the CPU fan). My goal was to use the two hard drives and just be able to turn on the computer and load up my old Windows install with all the files, programs, and documents. I expected to have to call Microsoft to re-register the Windows install for the new hardware (since I had to do that the last time I upgraded with the same Windows version). When the computer attempts to boot into Windows, it briefly flashes a bluescreen and then restarts. System recovery gives a message something like "Bad Driver Failover" something something. I assume this is because it's trying to use AMD drivers for an Intel chipset (or something...?), and I've been as yet unsuccessful in getting it to boot into my old Windows partition. SO! I decided eff it, maybe I'll go visit my nearest Micro Center and buy a 200 GB SSD, install Windows onto that, and then... be able to access just the contents of both of my other hard drives? I don't intend on running any of the programs, but there were some saved files I would like to salvage from the 500 GB hard drive. The 1.5 TB hard drive only had files on it - no OS or applications - so I'd also expect to still be able to access it. Is this possible? Can I install the SSD, format/install Windows onto only it, and still access the contents of my two transferred-over drives?

    Read the article

  • Calculating memory footprints using /proc/sysvipc/shm

    - by MarkTeehan
    This is for a SLES 10 database server. One of my servers runs three databases and three app servers; I am analyzing how their shared memory segments grow and shrink to avoid intermittent out-of-memory scenarios. "Top" is not helpful for this since its calculations for RES and VIRT are inconsistent. I am doing this by matching up the contents of /proc/sysvipc/shm with the memory usage reported by the database admin console: I save the contents of /proc/sysvipc/shm and then total up "bytes" across all of the segments for the offending userid. This is a large server with hundreds of segments and tens (or hundreds) of GB of allocated memory per userid. However, it doesn't match up - the database management software claims to be using around 25% more memory than the total I calculate. Negligible swap space is in use, so I am ignoring that. I am running this as root, so I am sure I see all shared memory segments. My question is: is all (significant) allocated memory recorded in /proc/sysvipc/shm, or does this cover only shared memory (and not "un-shared" memory)? If this is incorrect, what is the correct way to calculate the total allocated memory for each userid? Also: I believe doing a 'cat' on this file locks server IPC. I check it every 5 seconds - is it likely that this frequency could be problematic? Thanks! Mark Teehan, Singapore
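    A quick way to do the per-userid totalling in one pass (a sketch; on typical kernels the size column is the 4th field and uid the 8th - check the header line of /proc/sysvipc/shm on your box before trusting the field numbers):

        # sum SysV shared memory bytes per owning uid
        awk 'NR > 1 { bytes[$8] += $4 }
             END   { for (u in bytes) printf "uid %-6s %.1f GB\n", u, bytes[u]/2^30 }' \
            /proc/sysvipc/shm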

    Read the article

  • How to copy a floppy boot disk?

    - by Sammy
    I have a floppy boot disk and I would like to copy it, to preserve it as a backup. If I have two floppy drives, A and B, how can I copy the disk?

    Assuming one has two floppy drives: can I simply insert the floppy disk in one of the drives and an empty floppy disk in the other, and issue a simple command like this one?

        A:\>copy . b:

    Will this only copy the contents of the current directory and none of the files in subdirectories? Do I have to explicitly specify the option to copy everything? Also, what about the boot information? That won't get copied, right?

    If one has only one floppy drive: how do you copy a floppy disk if you only have one floppy drive? Do you in fact have to copy its contents to the local hard drive C and then copy that to an empty floppy disk using the same floppy drive?

        A:\>copy . c:\floppydisk
        A:\>
        A:\>c:
        C:\>
        C:\>copy floppydisk a:
        C:\>

    I'm guessing I will need some type of disk-imaging tool to really copy everything on a bootable floppy disk - something like the dd command on Linux, perhaps? Am I right?
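    For what it's worth, DOS/Windows ships a tool for exactly this: DISKCOPY makes a sector-by-sector copy, boot sector included, and with a single drive it prompts you to swap disks (a sketch; the Linux lines assume the drive shows up as /dev/fd0):

        REM two drives
        diskcopy a: b:
        REM one drive - you are prompted to swap source and target disks
        diskcopy a: a:

        # the dd equivalent on Linux: image the disk, then write the image back
        dd if=/dev/fd0 of=bootdisk.img bs=512
        dd if=bootdisk.img of=/dev/fd0 bs=512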

    Read the article

  • Dual Monitor + Virtual Desktop software (plus for cube)

    - by xenithorb
    I've recently purchased another monitor; my first one is a TV, and much larger. I now sit at a desk and use my shiny new 24" LED more often, but I like to extend the desktop onto the TV. The problem this presents is that, to save power and extend the longevity of my 47" VIZIO, I try to keep the TV off when possible. What I'm seeking sounds very simple. If any of you have ever used Compiz or DeskSpace (Yod'm), you'll know what I'm referring to when I talk about a "cube". The most important functionality I'm looking for is the ability to scroll desktop contents between both displays and virtual desktops. DeskSpace does an excellent job of presenting an attractive cube, but it creates a separate cube of virtual desktops for the second extended monitor (now the TV). Again, what I'm looking to do is scroll between virtual desktops, passing through both monitors. The net effect would be to let me scroll the contents of the extended monitor over to the first monitor should a window get caught there, without having to turn on the TV. So, imagining the horizontal faces of a cube as actual real monitors - is there anything that allows one to rotate desktops between displays?

    Read the article

  • Apache subdomain configuration

    - by terrid25
    I seem to be having a small problem with setting up a subdomain in Apache under CentOS. I have the following:

        <VirtualHost *:80>
            ServerName www.domain.co.uk
            ServerAlias domain.co.uk dev.domain.co.uk
            DocumentRoot "/var/www/html/domain/web"
            DirectoryIndex index.php
            Alias /sf /var/www/html/symfony14/web/sf
            <Directory "/var/www/html/domain/web">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

        <Directory "/var/www/html/symfony14/web/sf">
            AllowOverride All
            Allow from All
        </Directory>

        <VirtualHost *:80>
            ServerName test.domain.co.uk
            DocumentRoot "/var/www/html/domain_test/web"
            DirectoryIndex index.php
            Alias /sf /var/www/html/symfony14/web/sf
            <Directory "/var/www/html/domain_test/web">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    Going to www.domain.co.uk and domain.co.uk displays the contents of /var/www/html/domain/web, but going to test.domain.co.uk also displays the same folder contents. Is this because of the ServerAlias? Thanks
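    A quick sanity check (hedged - this applies to Apache 2.2-era configs such as stock CentOS ships) is to ask Apache how it mapped the name-based hosts; if test.domain.co.uk isn't listed under its own vhost, the first vhost silently catches it:

        # show the parsed virtual host table
        apachectl -S

    Also make sure httpd.conf declares name-based hosting once for the address, e.g. NameVirtualHost *:80 - without it, Apache 2.2 treats the vhosts as IP-based and the first one wins for every request.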

    Read the article

  • How to ensure the local file is up-to-date or ahead (Dropbox sync) before TrueCrypt auto-mounts it?

    - by user620965
    There are a lot of tutorials out there stating that Dropbox's built-in encryption is not secure enough. Those tutorials recommend syncing a TrueCrypt container file instead, so that all files in it are securely encrypted. This setup is known to be limited: you can NOT have the TrueCrypt container file mounted in more than one location at the same time. If you change the contents of the container in more than one location at a time, this setup produces a conflict on the container file in the Dropbox system, resulting in one container file per location. In my case that issue is not relevant - I do not use my data in more than one location at a time. I want to use TrueCrypt's auto-mount feature on startup of Windows 7 to have a zero-configuration environment and start working right away. But I want to ensure that the local TrueCrypt container file is up to date before TrueCrypt mounts it automatically - imagine you updated the contents of the container at your primary location and your secondary location was off for a long time. In that case it can take "a long time" until the Dropbox sync is complete (depending on your internet connection and the size of the container file). There is an option in TrueCrypt that stops it from updating the container file's timestamp, which speeds up the sync because the Dropbox client then does a differential sync instead of a time-consuming full sync. That is an improvement to this setup, but it does not fix my issue. The question is: how do I make the auto-mount function wait for the container file to be up to date (updated by Dropbox)? In contrast: if the file was changed locally but the remote file (in the Dropbox cloud) is still old (not yet updated, or the sync is in progress), TrueCrypt should not be made to wait for the sync. Suggestions?
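    One workable compromise (a sketch, not a full solution: it only waits until the container has stopped changing locally, which approximates "Dropbox is done"; the paths, drive letter, and 30-second window are all assumptions) is to replace TrueCrypt's auto-mount with a small PowerShell startup script:

        # wait until the container's timestamp has been stable for one full interval
        $container = "$env:USERPROFILE\Dropbox\vault.tc"   # hypothetical container path
        do {
            $before = (Get-Item $container).LastWriteTime
            Start-Sleep -Seconds 30
            $after = (Get-Item $container).LastWriteTime
        } while ($before -ne $after)
        # NB: if TrueCrypt's "preserve modification timestamp" option is on,
        # LastWriteTime may never change; compare Length or a hash instead.
        # mount via TrueCrypt's command line (verify the flags against your version)
        & "C:\Program Files\TrueCrypt\TrueCrypt.exe" /q /v $container /l x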

    Read the article

  • Help with Design for Vacation Tracking System (C#/.NET/Access/WebServices/SOA/Excel) [closed]

    - by Aaronaught
    I have been tasked with developing a system for tracking our company's paid time off (vacation, sick days, etc.). At the moment we are using an Excel spreadsheet on a shared network drive, and it works pretty well, but we are concerned that we won't be able to "trust" employees forever, and sometimes we run into locking issues when two people try to open the spreadsheet at once. So we are trying to build something a little more robust. I would like some input on this design in terms of maintainability, scalability, extensibility, etc. It's a pretty simple workflow we need to represent right now. [workflow diagram] I started with a basic MS Access schema like this:

        Employees (EmpID int, EmpName varchar(50), AllowedDays int)
        Vacations (VacationID int, EmpID int, BeginDate datetime, EndDate datetime)

    But we don't want to spend a lot of time building a schema and database like this and have to change it later, so I think I am going to go with something that will be easier to expand through configuration. Right now the vacation table has this schema:

        Vacations (VacationID int, PropName varchar(50), PropValue varchar(50))

    And the table will be populated with data like this:

        VacationID | PropName    | PropValue
        -----------+-------------+------------------
        1          | EmpID       | 4
        1          | EmpName     | James Jones
        1          | Reason      | Vacation
        1          | BeginDate   | 2/24/2010
        1          | EndDate     | 2/30/2010
        1          | Destination | Spectate Swamp
        2          | ...         | ...

    I think this is a pretty good, extensible design; we can easily add new properties to the vacation, like the destination or maybe approval status, etc. I wasn't too sure how to go about managing the database of valid properties. I thought of putting them in a separate PropNames table, but it gets complicated to manage all the different data types, and people say that you shouldn't put CLR type names into a SQL database, so I decided to use XML instead. Here is the schema:

        <VacationProperties>
          <PropertyNames>EmpID,EmpName,Reason,BeginDate,EndDate,Destination</PropertyNames>
          <PropertyTypes>System.Int32,System.String,System.String,System.DateTime,System.DateTime,System.String</PropertyTypes>
          <PropertiesRequired>true,true,false,true,true,false</PropertiesRequired>
        </VacationProperties>

    I might need more fields than that; I'm not completely sure. I'm parsing the XML like this (would like some feedback on the parsing code):

        string xml = File.ReadAllText("properties.xml");
        Match m = Regex.Match(xml, "<(PropertyNames)>(.*?)</PropertyNames>");
        string[] pn = m.Value.Split(',');
        // do the same for PropertyTypes, PropertiesRequired

    Then I use the following code to persist configuration changes to the database:

        string sql = "DROP TABLE VacationProperties";
        sql = sql + " CREATE TABLE VacationProperties ";
        sql = sql + "(PropertyName varchar(100), PropertyType varchar(100) ";
        sql = sql + "IsRequired varchar(100))";
        for (int i = 0; i < pn.Length; i++)
        {
            sql = sql + " INSERT VacationProperties VALUES (" + pn[i] + "," + pt[i] + "," + pv[i] + ")";
        }
        // GlobalConnection is a singleton
        new SqlCommand(sql, GlobalConnection.Instance).ExecuteReader();

    So far so good, but after a few days of this I realized that a lot of this was just a more specific kind of generic workflow which could be further abstracted, and instead of writing all of this boilerplate plumbing code I could just come up with a workflow and plug it into a workflow engine like Windows Workflow Foundation and have the users configure it. In order to support routing these configurations through the workflow system, it seemed natural to implement generic XML Web Services for this instead of just using an XML file as above. I've used this code to implement the Web Services:

        public class VacationConfigurationService : WebService
        {
            [WebMethod]
            public void UpdateConfiguration(string xml)
            {
                // Above code goes here
            }
        }

    Which was pretty easy, although I'm still working on a way to validate that XML against some kind of schema, as there's no error-checking yet. I also created a few different services for other operations like VacationSubmissionService, VacationReportService, VacationDataService, VacationAuthenticationService, etc. The whole Service Oriented Architecture looks like this: [SOA diagram] And because the workflow itself might change, I have been working on a way to integrate the WF workflow system with MS Visio, which everybody at the office already knows how to use, so they could make changes pretty easily. We have a diagram that looks like the following (it's kind of hard to read, but the main items are Activities, Authenticators, Validators, Transformers, Processors, and Data Connections; they're all analogous to the services in the SOA diagram above). [Visio workflow diagram] The requirements for this system are (note - I don't control these, they were given to me by management):

        Main workflow must interface with the Excel spreadsheet, probably through VBA macros (to ease the transition to the new system).
        Alerts should integrate with MS Outlook, Lotus Notes, and SMS (text messages). We also want to interface it with the company voice mail system, but that is not a "hard" requirement.
        Performance requirements: must handle 250,000 transactions per second; should be able to handle up to 20,000 employees (right now we have 3); 99.99% uptime ("four nines") expected.
        Must be secure against outside hacking, but users cannot be required to enter a username/password.
        Platforms: must support Windows XP/Vista/7, Linux, iPhone, Blackberry, DOS 2.0, VAX, IRIX, PDP-11, Apple IIc.
        Time to complete: 6 to 8 weeks.

    My questions are:

        Is this a good design for the system so far?
        Am I using all of the recommended best practices for these technologies?
        How do I integrate the Visio diagram above with Windows Workflow Foundation to call the ConfigurationService and persist workflow changes?
        Am I missing any important components?
        Will this be extensible enough to support any scenario via end-user configuration?
        Will the system scale to the above performance requirements? Will we need any expensive hardware to run it?
        Are there any "gotchas" I should know about with respect to cross-platform compatibility? For example, would it be difficult to convert this to an iPhone app?
        How long would you expect this to take? (We've dedicated 1 week for testing, so I'm thinking maybe 5 weeks?)

    Many thanks for your advice, Aaron

    Read the article

  • Model binding nested collections in ASP.NET MVC

    - by MartinHN
    Hi, I'm using Steve Sanderson's BeginCollectionItem helper with ASP.NET MVC 2 to model bind a collection of items. That works fine, as long as the Model of the collection items does not contain another collection. I have a model like this:

        -Product
        --Variants
        ---IncludedAttributes

    Whenever I render and model bind the Variants collection, it works just fine. But with the IncludedAttributes collection, I cannot use the BeginCollectionItem helper, because its id and name values won't honor the id and name values that were produced for its parent Variant:

        <div class="variant">
            <input type="hidden" value="bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126" autocomplete="off" name="Variants.index">
            <input type="hidden" value="0" name="Variants[bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126].SlotAmount" id="Variants_bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126__SlotAmount">
            <table class="included-attributes">
                <input type="hidden" value="0" name="Variants.IncludedAttributes[c5989db5-b1e1-485b-b09d-a9e50dd1d2cb].Id" id="Variants_IncludedAttributes_c5989db5-b1e1-485b-b09d-a9e50dd1d2cb__Id" class="attribute-id">
                <tr>
                    <td>
                        <input type="hidden" value="0" name="Variants.IncludedAttributes[c5989db5-b1e1-485b-b09d-a9e50dd1d2cb].Id" id="Variants_IncludedAttributes_c5989db5-b1e1-485b-b09d-a9e50dd1d2cb__Id" class="attribute-id">
                    </td>
                </tr>
            </table>
        </div>

    If you look at the name of the first hidden field inside the table, it is Variants.IncludedAttributes[...] - where it should have been Variants[bbd4fdd4-fa22-49f9-8a5e-3ff7e2942126].IncludedAttributes[...]. That is because when I call BeginCollectionItem the second time (on the IncludedAttributes collection), it is given no information about the item index value of its parent Variant. My code for rendering a Variant looks like this:

        <div class="product-variant round-content-box grid_6" data-id="<%: Model.AttributeType.Id %>">
            <h2><%: Model.AttributeType.AttributeTypeName %></h2>
            <div class="box-content">
            <% using (Html.BeginCollectionItem("Variants")) { %>
                <div class="slot-amount">
                    <label class="inline" for="slotAmountSelectList"><%: Text.amountOfThisVariant %>:</label>
                    <select id="slotAmountSelectList"><option value="1">1</option><option value="2">2</option></select>
                </div>
                <div class="add-values">
                    <label class="inline" for="txtProductAttributeSearch"><%: Text.addVariantItems %>:</label>
                    <input type="text" id="txtProductAttributeSearch" class="product-attribute-search" /><span><%: Text.or %> <a class="select-from-list-link" href="#select-from-list" data-id="<%: Model.AttributeType.Id %>"><%: Text.selectFromList.ToLowerInvariant() %></a></span>
                    <div class="clear"></div>
                </div>
                <%: Html.HiddenFor(m => m.SlotAmount) %>
                <div class="included-attributes">
                    <table>
                        <thead>
                            <tr>
                                <th><%: Text.name %></th>
                                <th style="width: 80px;"><%: Text.price %></th>
                                <th><%: Text.shipping %></th>
                                <th style="width: 90px;"><%: Text.image %></th>
                            </tr>
                        </thead>
                        <tbody>
                            <% for (int i = 0; i < Model.IncludedAttributes.Count; i++) { %>
                                <tr><%: Html.EditorFor(m => m.IncludedAttributes[i]) %></tr>
                            <% } %>
                        </tbody>
                    </table>
                </div>
            <% } %>
            </div>
        </div>

    And the code for rendering an IncludedAttribute:

        <% using (Html.BeginCollectionItem("Variants.IncludedAttributes")) { %>
            <td>
                <%: Model.AttributeName %>
                <%: Html.HiddenFor(m => m.Id, new { @class = "attribute-id" }) %>
                <%: Html.HiddenFor(m => m.ProductAttributeTypeId) %>
            </td>
            <td><%: Model.Price.ToCurrencyString() %></td>
            <td><%: Html.DropDownListFor(m => m.RequiredShippingTypeId, AppData.GetShippingTypesSelectListItems(Model.RequiredShippingTypeId)) %></td>
            <td><%: Model.ImageId %></td>
        <% } %>
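    One direction that may help (a sketch, not a tested fix: it relies on Sanderson's helper setting TemplateInfo.HtmlFieldPrefix for the scope it opens, and it assumes the row is rendered with RenderPartial rather than EditorFor so that prefix isn't rewritten; the "IncludedAttribute" partial name and "parentPrefix" key are hypothetical):

        <%-- in the Variant view, pass the parent's generated prefix,
             e.g. "Variants[bbd4...]", down to each row partial: --%>
        <% foreach (var attr in Model.IncludedAttributes) {
               Html.RenderPartial("IncludedAttribute", attr,
                   new ViewDataDictionary { { "parentPrefix", Html.ViewData.TemplateInfo.HtmlFieldPrefix } });
           } %>

        <%-- in the IncludedAttribute partial, qualify the collection name: --%>
        <% var parentPrefix = (string)ViewData["parentPrefix"]; %>
        <% using (Html.BeginCollectionItem(parentPrefix + ".IncludedAttributes")) { %>
            <%-- fields as before --%>
        <% } %>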

    Read the article

  • SQL Azure Reporting Limited CTP Arrived

    - by Shaun
    It's been about 3 months since I registered for the SQL Azure Reporting CTP on Microsoft Connect after TechEd 2010 China. Today when I checked my mailbox I found that the SQL Azure team had just accepted my request and sent the activation code over to me. So let's have a look at the new SQL Azure Reporting.

    Concept

    SQL Azure Reporting provides cloud-based reporting as a service, built on SQL Server Reporting Services and SQL Azure technologies. Cloud-based reporting solutions such as SQL Azure Reporting provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead for report servers, as well as secure access, viewing, and management of reports. By using the SQL Azure Reporting service, we can:

        Embed the Visual Studio Report Viewer ASP.NET AJAX control or Windows Forms control to view reports deployed on the SQL Azure Reporting Service in our web or desktop applications.
        Leverage the SQL Azure Reporting SOAP API to manage and retrieve report content from any kind of application.
        Use the SQL Azure Reporting Service Portal to navigate and view the reports deployed on the cloud.

    Since SQL Azure Reporting was built on SQL Server 2008 R2 Reporting Services, we can use any tools we are familiar with, such as SQL Server Integration Studio and the Visual Studio Report Viewer. The SQL Azure Reporting Service runs like a remote SQL Server Reporting Service, just on the cloud rather than on a server beside us.

    Establish a New SQL Azure Reporting

    Let's move to the Windows Azure developer portal and click the Reporting item in the left-side navigation bar. If you don't have the activation code, you can click the Sign Up button to send a request via Microsoft Connect. Since I had already received the activation code by mail, I clicked the Provision button. Then, after agreeing to the terms of the service, I selected the subscription in which my SQL Azure Reporting CTP should be provisioned - in this case, my free Windows Azure Pass subscription. Then the final step: paste the activation code and enter the password for our SQL Azure Reporting Service. The user name will be generated by SQL Azure automatically. After a while the new SQL Azure Reporting Server will be shown on our developer portal, along with the Reporting Service URL and the user name. We can reset the password from the toolbar button.

    Deploy a Report to SQL Azure Reporting

    If you are familiar with SQL Server Reporting Services, you will find this part very similar to what you know and have done before. First we open SQL Server Business Intelligence Development Studio and create a new Report Server Project. Then we create a shared data source from which the report data will be retrieved. This data source can be SQL Azure, but we could use a local SQL Server or another database if it opens its port. In this case we use a SQL Azure database located in the same data center as our reporting service. In the Credentials tab page we enter the user name and password for this SQL Azure database. The SQL Azure Reporting CTP is only available in the North US data center for now, so the related SQL Server and hosted service should preferably be in the same data center to avoid external data transfer fees. Then we create a very simple report that just retrieves all records from a table named Members and lists them in a table.

    In the data source selection step we choose the shared data source we created before, enter the T-SQL to select all records from the Members table, and put all fields into the table columns. In order to deploy the report onto the SQL Azure Reporting Service we need to update the project properties. Right-click the project node in Solution Explorer and select Properties. In the Target Server URL item, specify the reporting server URL of our SQL Azure Reporting: go back to the developer portal, select the Reporting node on the left side, copy the Web Service URL, and paste it here - but notice that we need to append "/reportserver" after pasting. Then just click the Deploy item in the project's context menu; Visual Studio will compile the report and then upload it to the reporting service accordingly. In this step we will be prompted for the user name and password of our SQL Azure Reporting Service. We can get the user name from the developer portal, next to the Web Service URL on the SQL Azure Reporting page, and the password is the one we specified when we created the reporting service. After about one minute the report is deployed successfully.

    View the Report in a Browser

    SQL Azure Reporting allows us to view reports deployed on the cloud from a standard browser. We copy the Web Service URL from the reporting service main page and append "/reportserver", using the HTTPS protocol, and we get the SQL Azure Reporting Service login page. After entering the user name and password of the SQL Azure Reporting Service, we can see the directories and reports listed. Clicking a report launches the Report Viewer to render it.

    View the Report in a Web Role with the Report Viewer

    The ASP.NET and Windows Forms Report Viewer controls work well with the SQL Azure Reporting Service too. We can create an ASP.NET Web Role and add the Report Viewer control to the default page. What we need to change on the report viewer:

        Change the Processing Mode to Remote.
        Specify the Report Server URL under the Server Remote category as the SQL Azure Reporting Web Service URL with "/reportserver" appended.
        Specify the Report Path to the report we want to display. The report name should NOT include the extension. For example, my report was in the SqlAzureReportingTest project and named MemberList.rdl, so the report path should be /SqlAzureReportingTest/MemberList.

    The next step is to specify the SQL Azure Reporting credentials. We can use the following class to wrap the report server credential:

        private class ReportServerCredentials : IReportServerCredentials
        {
            private string _userName;
            private string _password;
            private string _domain;

            public ReportServerCredentials(string userName, string password, string domain)
            {
                _userName = userName;
                _password = password;
                _domain = domain;
            }

            public WindowsIdentity ImpersonationUser
            {
                get { return null; }
            }

            public ICredentials NetworkCredentials
            {
                get { return null; }
            }

            public bool GetFormsCredentials(out Cookie authCookie, out string user, out string password, out string authority)
            {
                authCookie = null;
                user = _userName;
                password = _password;
                authority = _domain;
                return true;
            }
        }

    And then, in the Page_Load method, pass it to the report viewer:

        protected void Page_Load(object sender, EventArgs e)
        {
            ReportViewer1.ServerReport.ReportServerCredentials = new ReportServerCredentials(
                "<user name>",
                "<password>",
                "<sql azure reporting web service url>");
        }

    Finally, deploy it to Windows Azure and enjoy the report.

    Summary

    In this post I introduced the SQL Azure Reporting CTP, which has just become available. Like other features in Windows Azure, SQL Azure Reporting is very similar to SQL Server Reporting. As you can see in this post, we can use existing and familiar tools to build and deploy reports and display them on a website. But SQL Azure Reporting is just in the CTP stage, which means:

        It is free.
        There's no support for it.
        It is only available in the North US data center.

    You can get more information about the SQL Azure Reporting CTP from the following links:

        SQL Azure Reporting Limited CTP at MSDN
        SQL Azure Reporting Samples at TechNet Wiki

    You can download the solutions and projects used in this post here.

    Hope this helps, Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Building services with the .NET framework Cont’d

    - by Allan Rwakatungu
    In my previous blog I wrote an introductory post on services and how you can build services using the .NET framework's Windows Communication Foundation (WCF). In this post I will show how to develop a real-world application using WCF.

    The problem

    During the last meeting we realized developers in Uganda are not so cool - they don't use Twitter, so they may not get the latest news and updates from the technology world. We also noticed they mostly use kabiriti phones (jokes). With their kabiriti phones they are unable to access the Twitter web client or alternative Twitter mobile clients like TweetDeck, Twhirl or Tweetie. However, the kabiriti phones support SMS (Yeeeeeeei). So what we are going to do to make these developers cool and keep them updated is enable them to receive tweets via SMS. We shall also enable them to develop their own applications that can extend this functionality.

    Analysis

    Thanks to services and open APIs, solving our problem is going to be easy.

        1. To get tweets we can use the Twitter service for FREE.
        2. To send SMS we shall use www.clickatell.com/ as they can send SMS to any country in the world. Besides, we could not find any local service that offers APIs for sending SMS :(.
        3. To enable developers to integrate with our application so that they can extend it and build even cooler applications, we use WCF. In addition, because connectivity might be an issue, we chose WCF for its built-in queuing features. We also chose WCF because this is a post about .NET and WCF :).

    The code

    Accessing the tweets: to consume Twitter's REST API we shall use the WCF REST Starter Kit. As its name indicates, the REST Starter Kit is a set of .NET framework classes that enable developers to create and access REST-style services (like the Twitter service).

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.Http;
        using System.Net;
        using System.Xml.Linq;

        namespace UG.Demo
        {
            public class TwitterService
            {
                public IList<TwitterStatus> SomeMethodName()
                {
                    // Connect to the twitter service (HttpClient is part of the REST Starter Kit classes)
                    HttpClient cl = new HttpClient("http://api.twitter.com/1/statuses/friends_timeline.xml");
                    // Supply your basic authentication credentials
                    cl.TransportSettings.Credentials = new NetworkCredential("ourusername", "ourpassword");
                    // Issue an HTTP GET
                    HttpResponseMessage resp = cl.Get();
                    // Ensure we got response 200
                    resp.EnsureStatusIsSuccessful();
                    // Use XLinq to parse the REST XML
                    var statuses = from r in resp.Content.ReadAsXElement().Descendants("status")
                                   select new TwitterStatus
                                   {
                                       User = r.Element("user").Element("screen_name").Value,
                                       Status = r.Element("text").Value
                                   };
                    return statuses.ToList();
                }
            }

            public class TwitterStatus
            {
                public string User { get; set; }
                public string Status { get; set; }
            }
        }

    Sending SMS:

        public class SMSService
        {
            public void Send(string phone, string message)
            {
                HttpClient cl1 = new HttpClient();
                // The clickatell XML format for sending SMS
                string xml = String.Format("<clickAPI><sendMsg><api_id>3239621</api_id><user>ourusername</user><password>ourpassword</password><to>{0}</to><text>{1}</text></sendMsg></clickAPI>", phone, message);
                // Post form data
                HttpUrlEncodedForm form = new HttpUrlEncodedForm();
                form.Add("data", xml);
                System.Net.ServicePointManager.Expect100Continue = false;
                string uri = @"http://api.clickatell.com/xml/xml";
                HttpResponseMessage resp = cl1.Post(uri, form.CreateHttpContent());
                resp.EnsureStatusIsSuccessful();
            }
        }

    Read the article

  • Start a Mapping or Process Flow from OWB Browser

    - by Dong Ruirong
    Basically, we start a Mapping or Process Flow from the Oracle Warehouse Builder (OWB) Design Client, but we can also start one from the OWB Browser. This paper will introduce the Start Report first and then describe how to start or re-run a Mapping or Process Flow from the OWB Browser.

    Start Report

    A Start Report is used to start an execution of a Mapping or Process Flow, so there are two kinds of Start Report: the Mapping Start Report (see Figure 1) and the Process Flow Start Report (see Figure 2). The Start Report shows the Mapping or Process Flow identification properties, including the latest deployment and latest execution; lists all execution parameters for the Mapping or Process Flow, as specified by the latest deployment; and assigns parameter default values from the latest deployment specification. You can do several things from a Start Report:

        Sort execution parameters by name or category. Table 1 lists all parameters of a Mapping; Table 2 lists all parameters of a Process Flow.
        Change the values of any input parameter where permitted. For some parameters, selection lists are provided; for example, the Mapping parameter Audit Level has a selection list.
        Reset all parameter settings to their default values.
        Apply basic validation to parameter values before starting an execution.
        Start the Mapping or Process Flow, which means it is executed immediately.
        Navigate to the Deployment Report for the latest deployment details of the Mapping or Process Flow.
        Navigate to the Execution Job Report for the latest execution of the current Mapping or Process Flow.
        Link to the on-line help, Warehouse Report Page, Deployment Report, Execution Report, Execution Schedule Report and Execution Summary Report.

    Figure 1: Mapping Start Report

    Table 1: Execution parameters and default values for a Mapping

        Category | Name                     | Mode | Input Value
        ---------+--------------------------+------+----------------------------------
        System   | Audit Level              | In   | Error Details
        System   | Bulk Size                | In   | 1000
        System   | Commit Frequency         | In   | 1000
        System   | EXECUTE_RESUME_TASK      | In   | FALSE
        System   | FORCE_RESUME_OPTION      | In   | FALSE
        System   | Max No of Errors         | In   | 50
        System   | NUMBER_OF_TIMES_TO_RETRY | In   | 2
        System   | Operating Mode           | In   | Set Based Fail Over to Row Based
        System   | PARALLEL_LEVEL           | In   | 0
        System   | Procedure Name           | In   | main
        System   | Purge Group              | In   | WB

    Figure 2: Process Flow Start Report

    Table 2: Execution parameters and default values for a Process Flow

        Category | Name          | Mode   | Input Value
        ---------+---------------+--------+-------------
        System   | EVAL_LOCATION | In     |
        System   | Item Key      | In-Out |
        System   | Item Type     | In     | PFPKG_1

    Start a Mapping or Process Flow

    To navigate to a Start Report, it's best to log in to the OWB Browser with the Control Center option; if you did not, go to the Control Center first after logging in. Then follow the ways introduced in this section to navigate to the Start Report. One more thing to pay attention to: you are not allowed to deploy any Mappings or Process Flows from the OWB Browser, as that is not supported, so it's necessary to deploy the Mappings and Process Flows before starting them from the OWB Browser. If you have deployed a Mapping or Process Flow but have not started it, navigate from the Object Summary Report or Deployment Schedule Report to the Start Report.

    1. Navigating from the Object Summary Report to the Start Report

        Open the Object Summary Report to see all deployed Mappings and Process Flows.
        Click the Mapping Name or Process Flow Name link to see its Deployment Report.
        Select the Start link in the Available Reports tab for the given Mapping or Process Flow to display its Start Report. The execution parameters have the default deployment-time settings.
        Change any of the input parameter values as required.
        Click the Start Execution button to execute the Mapping or Process Flow.

    2. Navigating from the Deployment Schedule Report to the Start Report

        Open the Deployment Schedule Report to see the deployment details of Mappings and Process Flows.
        Expand the project trees to find the deployed Mappings and Process Flows.
        Click the Mapping Name or Process Flow Name link to see its Deployment Report.
        Select the Start link in the Available Reports tab for the given Mapping or Process Flow to display its Start Report. The execution parameters have the default deployment-time settings.
        Change any of the input parameter values as required.
        Click the Start Execution button to execute the Mapping or Process Flow.

    Re-run a Mapping or Process Flow

    If you have already executed a Mapping or Process Flow, you can navigate from the Object Summary Report, Deployment Schedule Report, Execution Summary Report or Execution Schedule Report to the Start Report.

    1. Navigating from the Execution Summary Report to the Start Report

        Open the Execution Summary Report to see all execution jobs, including Mapping jobs and Process Flow jobs.
        Click the Mapping Name or Process Flow Name to see its Execution Report.
        Select the Start link in the Available Reports tab for the given Mapping or Process Flow to display its Start Report. The execution parameters have the default deployment-time settings.
        Change any of the input parameter values as required.
        Click the Start Execution button to execute the Mapping or Process Flow.

    2. Navigating from the Execution Schedule Report to the Start Report

        Open the Execution Schedule Report to see the list of all executions of Mappings and Process Flows.
        Click the Mapping Name or Process Flow Name to see its Execution Report.
        Select the Start link in the Available Reports tab for the given Mapping or Process Flow to display its Start Report. The execution parameters have the default deployment-time settings.
        Change any of the input parameter values as required.
        Click the Start Execution button to execute the Mapping or Process Flow.

    If the execution of a Mapping or Process Flow is successful, you will see this message in the Start Report: "Start Execution request successful." (See Figure 3.)

    Figure 3: Execution Result

    You can also confirm the execution of the Mapping or Process Flow by referring to its Execution Report, reached by clicking the link in the Available Reports tab for the given Mapping or Process Flow. A new record of execution job details is added to the Execution Report, showing details of the execution such as Start Time, Elapsed Time, Status, and the number of records selected, inserted, updated, deleted, etc.

    Read the article
