Search Results

Search found 1289 results on 52 pages for 'haras pl'.

Page 16/52 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • SQL query problem

    - by Pankaj
    Hello all. I have two tables, Project and ProjectList. Project has the columns ProjectID, Name and ProjectListID (nullable); ProjectList has ProjectListID and ProjName. I want only those records from ProjectList whose ProjectListID does not appear in the Project table. I wrote a query, but it takes a long time to execute: select * FROM projectslist pl where pl.ProjectsListID not in (SELECT p.ProjectsListID FROM project p where (p.ProjectsListID is not null and p.ProjectsListID <>0)) Please help me optimize this query. I am using MySQL.
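
    A common rewrite that MySQL usually optimizes much better than NOT IN with a subquery is an anti-join. A minimal sketch, assuming the table and column names from the question and an index on project.ProjectsListID:

        SELECT pl.*
        FROM projectslist pl
        LEFT JOIN project p
               ON p.ProjectsListID = pl.ProjectsListID
              AND p.ProjectsListID <> 0
        WHERE p.ProjectsListID IS NULL;

    An equivalent form is WHERE NOT EXISTS (SELECT 1 FROM project p WHERE p.ProjectsListID = pl.ProjectsListID AND p.ProjectsListID <> 0); either way, an index on the joined column is what makes it fast.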

    Read the article

  • git status shows a file that I have listed explicitly in my .gitignore file

    - by metaperl
    I have the following line in my .gitignore file: var/www/docs/.backroom/billing_info/inv.pl but when I type 'git status' I am told the following: # modified: var/www/docs/.backroom/billing_info/inv.pl I don't understand how a file which is explicitly listed as an ignore pattern can be listed as modified when I want git to ignore it. There are no lines starting with a ! in my .gitignore file. Here is my entire .gitignore file for reference: http://pastebin.com/Jw445Qd7
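
    .gitignore only prevents untracked files from being added; it has no effect on a file that is already committed, which is the usual cause of this symptom. A minimal sketch of how to stop tracking the file while keeping the working copy on disk:

        git rm --cached var/www/docs/.backroom/billing_info/inv.pl
        git commit -m "stop tracking inv.pl"

    After that commit, the existing .gitignore entry takes effect and the file no longer shows up in git status.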

    Read the article

  • URL Redirection

    - by Paolo
    Hi, I want to make it so that when someone follows the link www.mysite.pl/file1.mp3, he gets served www.mysite.pl/file2.mp3, but the address in his browser bar stays the same. How can I do that?
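
    Keeping the address unchanged means an internal rewrite rather than an external redirect (a redirect would update the browser bar). A minimal sketch, assuming the site runs on Apache with mod_rewrite available, placed in the site root's .htaccess:

        RewriteEngine On
        RewriteRule ^file1\.mp3$ /file2.mp3 [L]

    Without a redirect flag ([R]), Apache serves file2.mp3 internally while the visitor still sees the file1.mp3 URL.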

    Read the article

  • Freeradius problem

    - by IceAgeBosna
    Hello, my dear friends! First, sorry for my English; I am not an expert. :) I am using FreeRADIUS 2.1.7 with MySQL, installed on Ubuntu Server 9.04. A Perl script called "auth.pl" verifies usernames and passwords and updates information. The problem is that after a certain number of connections, users simply cannot connect through the NAS (a Mikrotik) until the next reboot. If you need it, I can show you the auth.pl script.

    Read the article

  • perl, module variable

    - by Mike
    I don't really understand how scoping works in Perl modules. This doesn't print anything; I would like running a.pl to print 1. In b.pm: $f=1; In a.pl: use b; print $f
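
    One way to make this work is to declare the variable as a package variable in b.pm and refer to it by its fully qualified name from a.pl (or export it). A minimal sketch, assuming b.pm declares a package named b:

        # b.pm
        package b;
        use strict;
        our $f = 1;     # package variable, reachable from outside as $b::f
        1;              # a module must return a true value

        # a.pl
        use strict;
        use b;
        print $b::f, "\n";   # prints 1

    Alternatively, b.pm can push $f into the caller's namespace with Exporter, after which a plain print $f works in a.pl.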

    Read the article

  • Lighttpd + fastcgi + python (for django) slow on first request

    - by EagleOne
    I'm having a problem with a django website I host with lighttpd + fastcgi. It works great, but it seems that the first request always takes up to 3 seconds. Subsequent requests are much faster (<1s). I activated access logs in lighttpd in order to track the issue, but I'm kind of stuck. Here are logs where I 'lose' 4 seconds (from 10:04:17 to 10:04:21):

        2012-12-01 10:04:17: (mod_fastcgi.c.3636) handling it in mod_fastcgi
        2012-12-01 10:04:17: (response.c.470) -- before doc_root
        2012-12-01 10:04:17: (response.c.471) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.472) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.473) Path :
        2012-12-01 10:04:17: (response.c.521) -- after doc_root
        2012-12-01 10:04:17: (response.c.522) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.523) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.524) Path : /var/www/finderauto.fcgi
        2012-12-01 10:04:17: (response.c.541) -- logical -> physical
        2012-12-01 10:04:17: (response.c.542) Doc-Root : /var/www
        2012-12-01 10:04:17: (response.c.543) Rel-Path : /finderauto.fcgi
        2012-12-01 10:04:17: (response.c.544) Path : /var/www/finderauto.fcgi
        2012-12-01 10:04:21: (response.c.128) Response-Header:
        HTTP/1.1 200 OK
        Last-Modified: Sat, 01 Dec 2012 09:04:21 GMT
        Expires: Sat, 01 Dec 2012 09:14:21 GMT
        Content-Type: text/html; charset=utf-8
        Cache-Control: max-age=600
        Transfer-Encoding: chunked
        Date: Sat, 01 Dec 2012 09:04:21 GMT
        Server: lighttpd/1.4.28

    I guess that if there is a problem, it's with my configuration. So here is the way I launch my django app:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=3033

    And here is my lighttpd conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_compress",
            "mod_redirect",
            "mod_rewrite",
            "mod_fastcgi",
            "mod_accesslog",
        )
        server.document-root = "/var/www"
        server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
        server.errorlog = "/var/log/lighttpd/error.log"
        server.pid-file = "/var/run/lighttpd.pid"
        server.username = "www-data"
        server.groupname = "www-data"
        accesslog.filename = "/var/log/lighttpd/access.log"
        debug.log-request-header = "enable"
        debug.log-response-header = "enable"
        debug.log-file-not-found = "enable"
        debug.log-request-handling = "enable"
        debug.log-timeouts = "enable"
        debug.log-ssl-noise = "enable"
        debug.log-condition-cache-handling = "enable"
        debug.log-condition-handling = "enable"
        fastcgi.server = ( "/finderauto.fcgi" =>
            ( "main" =>
                ( # Use host / port instead of socket for TCP fastcgi
                    "host" => "127.0.0.1",
                    "port" => 3033,
                    #"socket" => "/home/finderadmin/finderauto.sock",
                    "check-local" => "disable",
                    "fix-root-scriptname" => "enable",
                )
            ),
        )
        alias.url = ( "/media" => "/home/user/django/contrib/admin/media/", )
        url.rewrite-once = (
            "^(/media.*)$" => "$1",
            "^/favicon\.ico$" => "/media/favicon.ico",
            "^(/.*)$" => "/finderauto.fcgi$1",
        )
        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" )
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        ## Use ipv6 if available
        #include_shell "/usr/share/lighttpd/use-ipv6.pl"
        dir-listing.encoding = "utf-8"
        server.dir-listing = "enable"
        compress.cache-dir = "/var/cache/lighttpd/compress/"
        compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" )
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    If any of you could help me find out where I lose these 3 or 4 seconds, I would much appreciate it. Thanks in advance!

    Read the article

  • GPS format in PHP

    - by Adam
    Hello, how can I convert, in PHP, coordinates from the format 52.593800, 21.448850 to the format +52° 35' 37.68", +21° 26' 55.86", the way Google does it at http://maps.google.pl/maps?hl=pl&t=m&q=52.593800,21.448850 ?
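
    This is just degrees/minutes/seconds arithmetic on each coordinate. A minimal sketch in PHP (the helper name is made up for illustration):

        <?php
        // Convert a decimal-degrees value to a degrees/minutes/seconds string.
        function decimalToDms($decimal) {
            $sign    = $decimal < 0 ? '-' : '+';
            $decimal = abs($decimal);
            $degrees = floor($decimal);
            $minutes = floor(($decimal - $degrees) * 60);
            $seconds = ($decimal - $degrees - $minutes / 60) * 3600;
            return sprintf("%s%d° %d' %.2f\"", $sign, $degrees, $minutes, $seconds);
        }

        echo decimalToDms(52.593800) . ', ' . decimalToDms(21.448850);
        // prints: +52° 35' 37.68", +21° 26' 55.86"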

    Read the article

  • how to install ffmpeg in cpanel

    - by Ajay Chthri
    i'm using dedicated server(linux) so i need to install ffmpeg in cpanel so here ffmpeg i found in Main Software Install a Perl Module but i writing script in php so how can i install ffmpeg phpperl when i'am trying to install ffmpeg in perl module i get this response Checking C compiler....C compiler (/usr/bin/cc) OK (cached Tue Jan 17 19:16:31 2012)....Done CPAN fallback is disabled since /var/cpanel/conserve_memory exists, and cpanm is available. Method: Using Perl Expect, Installer: cpanm You have make /usr/bin/make Falling back to HTTP::Tiny 0.009 You have /bin/tar: tar (GNU tar) 1.15.1 You have /usr/bin/unzip You have Cpanel::HttpRequest 2.1 Testing connection speed...(using fast method)...Done Ping:2 (ticks) Testing connection speed to cpan.knowledgematters.net using pureperl...(28800.00 bytes/s)...Done Ping:2 (ticks) Testing connection speed to cpan.develooper.com using pureperl...(22233.33 bytes/s)...Done Ping:2 (ticks) Testing connection speed to cpan.schatt.com using pureperl...(32750.00 bytes/s)...Done Ping:3 (ticks) Testing connection speed to cpan.mirror.facebook.net using pureperl...(14050.00 bytes/s)...Done Ping:2 (ticks) Testing connection speed to cpan.mirrors.hoobly.com using pureperl...(5150.00 bytes/s)...Done Five usable mirrors located Ping:0 (ticks) Testing connection speed to 208.109.109.239 using pureperl...(28950.00 bytes/s)...Done Ping:2 (ticks) Testing connection speed to 208.82.118.100 using pureperl...(19300.00 bytes/s)...Done Ping:1 (ticks) Testing connection speed to 69.50.192.73 using pureperl...(19300.00 bytes/s)...Done Three usable fallback mirrors located Mirror Check passed for cpan.schatt.com (/index.html) Searching on cpanmetadb ... Fetching http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release (connected:0).......(request attempt 1/12)...Using dns cache file /root/.HttpRequest/cpanmetadb.cpanel.net......searching for mirrors (mirror search attempt 1/3)......5 usable mirrors located. (less then expected)......mirror search success......connecting to 208.74.123.82...@208.74.123.82......connected......receiving...100%......request success......Done Searching Video::FFmpeg on cpanmetadb (http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release) ... Fetching http://cpanmetadb.cpanel.net/v1.0/package/Video::FFmpeg?cpanel_version=11.30.5.6&cpanel_tier=release (connected:1).......(request attempt 1/12)[email protected]%......request success......Done Source: fastest CPAN mirror ... --> Working on Video::FFmpeg Fetching http://cpan.schatt.com//authors/id/R/RA/RANDOMMAN/Video-FFmpeg-0.47.tar.gz ... 
Fetching http://cpan.schatt.com/authors/id/R/RA/RANDOMMAN/Video-FFmpeg-0.47.tar.gz (connected:1).......(request attempt 1/12)...Resolving cpan.schatt.com...(resolve attempt 1/65)......connecting to 66.249.128.125...@66.249.128.125......connected......receiving...25%...50%...75%...100%......request success......Done OK Unpacking Video-FFmpeg-0.47.tar.gz Video-FFmpeg-0.47/ Video-FFmpeg-0.47/Changes Video-FFmpeg-0.47/FFmpeg.xs Video-FFmpeg-0.47/MANIFEST Video-FFmpeg-0.47/META.yml Video-FFmpeg-0.47/Makefile.PL Video-FFmpeg-0.47/README Video-FFmpeg-0.47/lib/ Video-FFmpeg-0.47/lib/Video/ Video-FFmpeg-0.47/lib/Video/FFmpeg/ Video-FFmpeg-0.47/lib/Video/FFmpeg/AVFormat.pm Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/ Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Audio.pm Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Subtitle.pm Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream/Video.pm Video-FFmpeg-0.47/lib/Video/FFmpeg/AVStream.pm Video-FFmpeg-0.47/lib/Video/FFmpeg.pm Video-FFmpeg-0.47/ppport.h Video-FFmpeg-0.47/t/ Video-FFmpeg-0.47/t/Video-FFmpeg.t Video-FFmpeg-0.47/test Video-FFmpeg-0.47/test.mp4 Video-FFmpeg-0.47/typemap Entering Video-FFmpeg-0.47 Checking configure dependencies from META.yml META.yml not found or unparsable. Fetching META.yml from search.cpan.org Fetching http://search.cpan.org/meta/Video-FFmpeg-0.47/META.yml (connected:1).......(request attempt 1/12)...Resolving search.cpan.org...(resolve attempt 1/65)......connecting to 199.15.176.161...@199.15.176.161......connected......receiving...100%......request success......Done Configuring Video-FFmpeg-0.47 ... Running Makefile.PL Perl v5.10.0 required--this is only v5.8.8, stopped at Makefile.PL line 1. BEGIN failed--compilation aborted at Makefile.PL line 1. N/A ! Configure failed for Video-FFmpeg-0.47. See /home/.cpanm/build.log for details. Perl Expect failed with non-zero exit status: 256 All available perl module install methods have failed guide me how can i install ffmpeg in cPanel Thanks for advance.

    Read the article

  • Since upgrading to Solaris 11, my ARC size has consistently targeted 119MB, despite having 30GB RAM. What? Why?

    - by growse
    I ran a NAS/SAN box on Solaris 11 Express before Solaris 11 was released. The box is an HP X1600 with an attached D2700. In all, 12x 1TB 7200 SATA disks, 12x 300GB 10k SAS disks in separate zpools. Total RAM is 30GB. Services provided are CIFS, NFS and iSCSI. All was well, and I had a ZFS memory usage graph looking like this: a fairly healthy ARC size of around 23GB - making use of the available memory for caching. However, I then upgraded to Solaris 11 when that came out. Now, my graph looks like this. Partial output of arc_summary.pl is:

        System Memory:
            Physical RAM: 30701 MB
            Free Memory : 26719 MB
            LotsFree: 479 MB
        ZFS Tunables (/etc/system):
        ARC Size:
            Current Size: 915 MB (arcsize)
            Target Size (Adaptive): 119 MB (c)
            Min Size (Hard Limit): 64 MB (zfs_arc_min)
            Max Size (Hard Limit): 29677 MB (zfs_arc_max)

    It's targeting 119MB while sitting at 915MB. It's got 30GB to play with. Why? Did they change something?

    Edit: To clarify, arc_summary.pl is Ben Rockwood's, and the relevant lines generating the above stats are:

        my $mru_size = ${Kstat}->{zfs}->{0}->{arcstats}->{p};
        my $target_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c};
        my $arc_min_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_min};
        my $arc_max_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_max};
        my $arc_size = ${Kstat}->{zfs}->{0}->{arcstats}->{size};

    The Kstat entries are there, I'm just getting odd values out of them.

    Edit 2: I've just re-measured the ARC size with arc_summary.pl - I've verified these numbers with kstat:

        System Memory:
            Physical RAM: 30701 MB
            Free Memory : 26697 MB
            LotsFree: 479 MB
        ZFS Tunables (/etc/system):
        ARC Size:
            Current Size: 744 MB (arcsize)
            Target Size (Adaptive): 119 MB (c)
            Min Size (Hard Limit): 64 MB (zfs_arc_min)
            Max Size (Hard Limit): 29677 MB (zfs_arc_max)

    The thing that strikes me is that the Target Size is 119MB. Looking at the graph, it's targeted the exact same value (124.91M according to cacti, 119M according to arc_summary.pl - I think the difference is just 1024/1000 rounding issues) ever since Solaris 11 was installed. It looks like the kernel's making zero effort to shift the target size to anything different. The current size is fluctuating as the needs of the system (large) fight with the target size, and it appears equilibrium is between 700 and 1000MB. So the question is now a little more pointed - why is Solaris 11 hard-setting my ARC target size to 119MB, and how do I change it? Should I raise the min size to see what happens? I've stuck the output of kstat -n arcstats over at http://pastebin.com/WHPimhfg

    Edit 3: OK, weirdness now. I know flibflob mentioned that there was a patch to fix this. I haven't applied this patch yet (still sorting out internal support issues) and I've not applied any other software updates. Last Thursday, the box crashed - as in, completely stopped responding to everything. When I rebooted it, it came back up fine, but here's what my graph now looks like: it seems to have fixed the problem. This is proper la-la-land stuff now. I've literally no idea what's going on. :(
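
    On the last point (raising the minimum): the ARC floor can be lifted in /etc/system so the adaptive target cannot collapse back to ~119 MB. A hedged sketch only - the 4 GB figure is an arbitrary example, the change takes effect after a reboot, and it works around the symptom rather than the underlying bug:

        * /etc/system: raise the ARC minimum (value in bytes; 4294967296 = 4 GB)
        set zfs:zfs_arc_min = 4294967296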

    Read the article

  • Using SQL Execution Plans to discover the Swedish alphabet

    - by Rob Farley
    SQL Server is quite remarkable in a bunch of ways. In this post, I’m using the way that the Query Optimizer handles LIKE to keep it SARGable, the Execution Plans that result, Collations, and PowerShell to come up with the Swedish alphabet. SARGability is the ability to seek for items in an index according to a particular set of criteria. If you don’t have SARGability in play, you need to scan the whole index (or table if you don’t have an index). For example, I can find myself in the phonebook easily, because it’s sorted by LastName and I can find Farley in there by moving to the Fs, and so on. I can’t find everyone in my suburb easily, because the phonebook isn’t sorted that way. I can’t even find people who have six letters in their last name, because also the book is sorted by LastName, it’s not sorted by LEN(LastName). This is all stuff I’ve looked at before, including in the talk I gave at SQLBits in October 2010. If I try to find everyone who’s names start with F, I can do that using a query a bit like: SELECT LastName FROM dbo.PhoneBook WHERE LEFT(LastName,1) = 'F'; Unfortunately, the Query Optimizer doesn’t realise that all the entries that satisfy LEFT(LastName,1) = 'F' will be together, and it has to scan the whole table to find them. But if I write: SELECT LastName FROM dbo.PhoneBook WHERE LastName LIKE 'F%'; then SQL is smart enough to understand this, and performs an Index Seek instead. To see why, I look further into the plan, in particular, the properties of the Index Seek operator. The ToolTip shows me what I’m after: You’ll see that it does a Seek to find any entries that are at least F, but not yet G. There’s an extra Predicate in there (a Residual Predicate if you like), which checks that each LastName is really LIKE F% – I suppose it doesn’t consider that the Seek Predicate is quite enough – but most of the benefit is seen by its working out the Seek Predicate, filtering to just the “at least F but not yet G” section of the data. This got me curious though, particularly about where the G comes from, and whether I could leverage it to create the Swedish alphabet. I know that in the Swedish language, there are three extra letters that appear at the end of the alphabet. One of them is ä that appears in the word Västerås. It turns out that Västerås is quite hard to find in an index when you’re looking it up in a Swedish map. I talked about this briefly in my five-minute talk on Collation from SQLPASS (the one which was slightly less than serious). So by looking at the plan, I can work out what the next letter is in the alphabet of the collation used by the column. In other words, if my alphabet were Swedish, I’d be able to tell what the next letter after F is – just in case it’s not G. It turns out it is… Yes, the Swedish letter after F is G. But I worked this out by using a copy of my PhoneBook table that used the Finnish_Swedish_CI_AI collation. I couldn’t find how the Query Optimizer calculates the G, and my friend Paul White (@SQL_Kiwi) tells me that it’s frustratingly internal to the QO. He’s particularly smart, even if he is from New Zealand. To investigate further, I decided to do some PowerShell, leveraging the Get-SqlPlan function that I blogged about recently (make sure you also have the SqlServerCmdletSnapin100 snap-in added). I started by indicating that I was going to use Finnish_Swedish_CI_AI as my collation of choice, and that I’d start whichever letter cam straight after the number 9. 
I figure that this is a cheat’s way of guessing the first letter of the alphabet (but it doesn’t actually work in Unicode – luckily I’m using varchar not nvarchar. Actually, there are a few aspects of this code that only work using ASCII, so apologies if you were wanting to apply it to Greek, Japanese, etc). I also initialised my $alphabet variable. $collation = 'Finnish_Swedish_CI_AI'; $firstletter = '9'; $alphabet = ''; Now I created the table for my test. A single field would do, and putting a Clustered Index on it would suffice for the Seeks. Invoke-Sqlcmd -server . -data tempdb -query "create table dbo.collation_test (col varchar(10) collate $collation primary key);" Now I get into the looping. $c = $firstletter; $stillgoing = $true; while ($stillgoing) { I construct the query I want, seeking for entries which start with whatever $c has reached, and get the plan for it: $query = "select col from dbo.collation_test where col like '$($c)%';"; [xml] $pl = get-sqlplan $query "." "tempdb"; At this point, my $pl variable is a scary piece of XML, representing the execution plan. A bit of hunting through it showed me that the EndRange element contained what I was after, and that if it contained NULL, then I was done. $stillgoing = ($pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange -ne $null); Now I could grab the value out of it (which came with apostrophes that needed stripping), and append that to my $alphabet variable.   if ($stillgoing)   {  $c=$pl.ShowPlanXML.BatchSequence.Batch.Statements.StmtSimple.QueryPlan.RelOp.IndexScan.SeekPredicates.SeekPredicateNew.SeekKeys.EndRange.RangeExpressions.ScalarOperator.ScalarString.Replace("'","");     $alphabet += $c;   } Finally, finishing the loop, dropping the table, and showing my alphabet! } Invoke-Sqlcmd -server . -data tempdb -query "drop table dbo.collation_test;"; $alphabet; When I run all this, I see that the Swedish alphabet is ABCDEFGHIJKLMNOPQRSTUVXYZÅÄÖ, which matches what I see at Wikipedia. Interesting to see that the letters on the end are still there, even with Case Insensitivity. Turns out they’re not just “letters with accents”, they’re letters in their own right. I’m sure you gave up reading long ago, and really aren’t that fazed about the idea of doing this using PowerShell. I chose PowerShell because I’d already come up with an easy way of grabbing the estimated plan for a query, and PowerShell does allow for easy navigation of XML. I find the most interesting aspect of this as the fact that the Query Optimizer uses the next letter of the alphabet to maintain the SARGability of LIKE. I’m hoping they do something similar for a whole bunch of operations. Oh, and the fact that you know how to find stuff in the IKEA catalogue. Footnote: If you are interested in whether this works in other languages, you might want to consider the following screenshot, which shows that in principle, it should work with Japanese. It might be a bit harder to run this in PowerShell though, as I’m not sure how it translates. In Hiragana, the Japanese alphabet starts ?, ?, ?, ?, ?, ...

    Read the article

  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
    If you want to populate a source into multiple targets, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Let’s consider the Example Mapping below. If a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3), for example, it can be successfully inserted into TARGET_1 and TARGET_3, but failed to be inserted into TARGET_2, and the current Mapping Property TLO (target load order) is “TARGET_1 -> TARGET_2 -> TARGET_3”. What should Oracle Warehouse Builder do, in order to commit the appropriate data to all affected targets at the same time? If it doesn’t behave as you intended, the data could become inaccurate and possibly unusable.                                               Example Mapping In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirements. Below we will explore the combination of these two features and how they affect the results in the target tables Before going to the example, let’s review some of the terms we will be using (Details can be found in white paper Oracle® Warehouse Builder Data Modeling, ETL, and Data Quality Guide11g Release 2): Operating Modes: Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations. Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor. All subsequent statements are PL/SQL. Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately. Commit Strategies: Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of other targets. Use the automatic commit when the consequences of multiple targets being loaded unequally are not great or are irrelevant. Automatic correlated: It is a specialized type of automatic commit that applies to PL/SQL mappings with multiple targets only. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use the correlated commit when it is important to ensure that every row in the source affects all affected targets uniformly. Manual: select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data. Combination of the commit strategy and operating mode To understand the effects of each combination of operating mode and commit strategy, I’ll illustrate using the following example Mapping. Firstly we insert 100 rows into the SOURCE table and make sure that the 99th row and 100th row have the same ID value. And then we create a unique key constraint on ID column for TARGET_2 table. So while running the example mapping, OWB tries to load all 100 rows to each of the targets. But the mapping should fail to load the 100th row to TARGET_2, because it will violate the unique key constraint of table TARGET_2. 
With different combinations of Commit Strategy and Operating Mode, here are the results ¦ Set-based/ Correlated Commit: Configuration of Example mapping:                                                     Result:                                                      What’s happening: A single error anywhere in the mapping triggers the rollback of all data. OWB encounters the error inserting into Target_2, it reports an error for the table and does not load the row. OWB rolls back all the rows inserted into Target_1 and does not attempt to load rows to Target_3. No rows are added to any of the target tables. ¦ Row-based/ Correlated Commit: Configuration of Example mapping:                                                   Result:                                                  What’s happening: OWB evaluates each row separately and loads it to all three targets. Loading continues in this way until OWB encounters an error loading row 100th to Target_2. OWB reports the error and does not load the row. It rolls back the row 100th previously inserted into Target_1 and does not attempt to load row 100 to Target_3. Then, if there are remaining rows, OWB will continue loading them, resuming with loading rows to Target_1. The mapping completes with 99 rows inserted into each target. ¦ Set-based/ Automatic Commit: Configuration of Example mapping: Result: What’s happening: When OWB encounters the error inserting into Target_2, it does not load any rows and reports an error for the table. It does, however, continue to insert rows into Target_3 and does not roll back the rows previously inserted into Target_1. The mapping completes with one error message for Target_2, no rows inserted into Target_2, and 100 rows inserted into Target_1 and Target_3 separately. ¦ Row-based/Automatic Commit: Configuration of Example mapping: Result: What’s happening: OWB evaluates each row separately for loading into the targets. Loading continues in this way until OWB encounters an error loading row 100 to Target_2 and reports the error. OWB does not roll back row 100th from Target_1, does insert it into Target_3. If there are remaining rows, it will continue to load them. The mapping completes with 99 rows inserted into Target_2 and 100 rows inserted into each of the other targets. Note: Automatic Correlated commit is not applicable for row-based (target only). If you design a mapping with the row-based (target only) and correlated commit combination, OWB runs the mapping but does not perform the correlated commit. In set-based mode, correlated commit may impact the size of your rollback segments. Space for rollback segments may be a concern when you merge data (insert/update or update/insert). Correlated commit operates transparently with PL/SQL bulk processing code. The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator. If you want to practice in your own environment, you can follow the steps: 1. Import the MDL file: commit_operating_mode.mdl 2. Fix the location for oracle module ORCL and deploy all tables under it. 3. Insert sample records into SOURCE table, using below plsql code: begin     for i in 1..99     loop         insert into source values(i, 'col_'||i);     end loop;     insert into source values(99, 'col_99'); end; 4. Configure MAPPING_1 to any combinations of operating mode and commit strategy you want to test. And make sure feature TLO of mapping is open. 5. 
Deploy Mapping “MAPPING_1”. 6. Run the mapping and check the result.

    Read the article

  • How can I make a table move in JavaScript?

    - by Michal Skrzypek
    My problem is that I was creating a simple website the other day and I needed the content to move according to the button pressed. I managed to do so in CSS3, but the solution did not work for IE whatsoever. Therefore I would like to ask if there is a simple solution for that in js? I don't know js at all but I heard what I need is much easier in js than in css. Details: http://i42.tinypic.com/6yl4ia.png I need the table in the picture to move according to the buttons (which are labels to be exact). The visible area is a div. Here's the relevant code (without animation as I was not satisfied with it): body { background-color: #fff; color: #fff; padding:0px; } #bodywrapperfixed { width: 1248px; margin: 0px auto; position: relative; overflow: hidden; height: 730px; } #bodywrapper { display:block; background-color: #fff; width: 1248px; color: #59595B; padding-top:50px; font-family: 'Roboto', sans-serif; position: absolute; top:0px; left:0px; z-index:1; font-size: 60px; height:730px; } #bodywrapper img { width:400px; padding:15px 0px 20px 0px; } #texten { font-family: 'Roboto', sans-serif; font-size: 35px; padding:5px; } #textpl { font-family: 'Roboto', sans-serif; font-size: 25px; padding:5px; } table#linki { width: 110px; border: none; margin-top:15px; } label { display: block; height: 54px; width: 54px; color:#fff; font-family: 'Roboto', sans-serif; font-weight: 300; font-size: 35px; background-color: #117D10; text-align: center; padding:23px; } label:hover { background-color: #004F00; cursor: pointer; } input#pl { position: absolute; top: -9999px; left: -9999px; } input#en { position: absolute; top: -9999px; left: -9999px; } and the relevant HTML: <div id="bodywrapperfixed"> <div id="bodywrapperfloat"> <table id="ramka"> <tr> <td>random text</td> <td><div id="bodywrapper"> <center> <div id="texten"><div style="font-weight:300; display:inline-block;">Introducing the all-in-one entertainment system.</div><div style="font-weight:500; display:inline-block;">&nbsp;For everyone.</div></div> <div id="textpl"><div style="font-weight:300; display:inline-block;">Przedstawiamy zintegrowany system rozrywki.</div><div style="font-weight:500; display:inline-block;">&nbsp;&nbsp;Dla wszystkich.</div></div> <img src="imgs/xboxone.png"> <div id="texten"><div style="font-weight:300; display:inline-block;">Choose your version of the story:</div></div> <div id="textpl"><div style="font-weight:300; display:inline-block;">Wybierz swoja wersja opowiesci:</div></div> <table id="linki"> <tr> <td><label for="en">en</label><input id="en" type="checkbox"></td> <td><label for="pl">pl</label><input id="pl" type="checkbox"></td> </tr></table> </center> </div></td> <td>random text</td> </tr> </table> </div> </div> Here's what it looks like: http://ingame.lh.pl/thinkone/ Please help me.
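
    For what it's worth, a minimal plain-JavaScript sketch of the sliding part (no CSS3 transitions, so it also works in old IE). The element and input IDs are taken from the markup above; the target offsets are made-up values to tune to the layout:

        function slideTo(el, targetLeft) {
            // move el.style.left a few pixels per tick until it reaches targetLeft
            var step = function () {
                var current = parseInt(el.style.left || '0', 10);
                var delta = targetLeft - current;
                if (Math.abs(delta) <= 8) {
                    el.style.left = targetLeft + 'px';   // close enough: snap and stop
                    return;
                }
                el.style.left = (current + (delta > 0 ? 8 : -8)) + 'px';
                setTimeout(step, 15);
            };
            step();
        }

        // clicking a label forwards the click to its hidden input, so hook the inputs
        document.getElementById('en').onclick = function () {
            slideTo(document.getElementById('bodywrapper'), 0);
        };
        document.getElementById('pl').onclick = function () {
            slideTo(document.getElementById('bodywrapper'), -1248);
        };

    This works because #bodywrapper is already absolutely positioned inside #bodywrapperfixed; if a library is acceptable, jQuery's animate() does the same job with less code.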

    Read the article

  • A single request appears to have come from all the browsers? Should I be worried?

    - by HorusKol
    I was looking over my site access logs when I noticed a request with the following user agent string: "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; pl-PL; rv:1.8.1.24pre) Gecko/20100228 K-Meleon/1.5.4\",\"Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/540.0 (KHTML,like Gecko) Chrome/9.1.0.0 Safari/540.0\",\"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Comodo_Dragon/4.1.1.11 Chrome/4.1.249.1042 Safari/532.5\",\"Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.0.16) Gecko/2009122206 Firefox/3.0.16 Flock/2.5.6\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/533.1 (KHTML, like Gecko) Maxthon/3.0.8.2 Safari/533.1\",\"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.8pre) Gecko/20070928 Firefox/2.0.0.7 Navigator/9.0RC1\",\"Opera/9.99 (Windows NT 5.1; U; pl) Presto/9.9.9\",\"Mozilla/5.0 (Windows; U; Windows NT 6.1; zh-HK) AppleWebKit/533.18.1 (KHTML, like Gecko) Version/5.0.2 Safari/533.18.5\",\"Seamonkey-1.1.13-1(X11; U; GNU Fedora fc 10) Gecko/20081112\",\"Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 2.0.50727; SLCC2; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Zune 4.0; Tablet PC 2.0; InfoPath.3; .NET4.0C; .NET4.0E)\",\"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MS-RTC LM 8; .NET4.0C; .NET4.0E; InfoPath.3)" The request appears to have originated from 91.121.153.210 - which appears to be owned by these guys: http://www.medialta.eu/accueil.html I find this rather impressive - a request from 'all' user-agents. There's actually quite a few of these requests over at least the few days - so it naturally piqued my interested. Searching Google simply seems to produce a very long list of websites which make their Apache access logs publicly available... Is this some weird indication that we're being targeted? And by who?

    Read the article

  • Oracle Forms Migration to ADF - Webinar vom ORACLE Partner PITSS

    - by Thomas Leopold
      Tuesday, February 22, 2011 5:00 PM - 6:00 PM CET Free Webinar Re-Engineering Legacy Oracle Forms Migration from Forms to ADF - A Case Study Join Oracle's Grant Ronald and PITSS to see a software architecture comparison of Oracle Forms and ADF and a live step-by-step presentation on how to achieve a successful migration. Learn about various migration options, challenges and best practices to protect your current investment in Oracle Forms. PL/SQL is without match for what it does: manipulating data in the database. If you blindly migrate all your PL/SQL to Java you will, in all probability, end up with less maintainable and less efficient code. Instead you should consider which code it best left as PL/SQL..." Grant Ronald - "Migrating Oracle Forms to Fusion: Myth or Magic Bullet?" Re-Engineering existing business logic is mandatory for your legacy Forms application to take advantage of the new Software architectures like ADF. The PITSS.CON solution combines the deep understanding of Oracle Forms and Reports applications architecture with powerful re-engineering capabilities that allows the developer community to protect the investment in the existing Forms applications and to concentrate on fine-tuning and customization of the modernized functionality rather than manually recreating every module and business logic from bottom up. Registration: https://www2.gotomeeting.com/register/971702250   PITSS GmbHKönigsdorferstrasse 25D-82515 WolfratshausenDo not forget to check out these Free Webinars in March! Thursday, March 3, 2011 Upgrade and Modernize Your Application to Forms 11g Registration/Information Tuesday, March 15, 2011 Shaping the Future for your Oracle Forms Application:Forms 11g, ADF, APEX Registration/Information Tuesday, March 29, 2011 Oracle Forms Modernization to APEX Registration/Information Registration is limited, so sign up  today!Presented By:        Grant Ronald, Senior Group Product Manager,Oracle       Magdalena Serban, Product Manager,PITSS   Contact Us:            PITSS in Americas +1 248.740.0935 [email protected] www.pitssamerica.com       PITSS in Europe +49 (0) 717287 5200 [email protected] www.pitss.com   White Paper:      From Oracle Forms to Oracle ADF and JEE     © Copyright 2010 PITSS GmbH, Wolfratshausen, Stuttgart, München; Managing Directors: Dipl.-Ing. Andreas Gaede, Michael Kilimann, Dipl.-Ing. Dirk Fleischmann Commercial Register: HRB 125471 at District Court Munich. All rights reserved. Any duplication or further treatment in any medium, in parts or as a whole, requires a written agreement. If you do not want to receive invitations for events, meetings and seminars from us, then please click here.

    Read the article

  • Any reliable polygon normal calculation code?

    - by Jenko
    Do you have any reliable face normal calculation code? I'm using this but it fails when faces are 90 degrees upright or similar. // the normal point var x:Number = 0; var y:Number = 0; var z:Number = 0; // if is a triangle with 3 points if (points.length == 3) { // read vertices of triangle var Ax:Number, Bx:Number, Cx:Number; var Ay:Number, By:Number, Cy:Number; var Az:Number, Bz:Number, Cz:Number; Ax = points[0].x; Bx = points[1].x; Cx = points[2].x; Ay = points[0].y; By = points[1].y; Cy = points[2].y; Az = points[0].z; Bz = points[1].z; Cz = points[2].z; // calculate normal of a triangle x = (By - Ay) * (Cz - Az) - (Bz - Az) * (Cy - Ay); y = (Bz - Az) * (Cx - Ax) - (Bx - Ax) * (Cz - Az); z = (Bx - Ax) * (Cy - Ay) - (By - Ay) * (Cx - Ax); // if is a polygon with 4+ points }else if (points.length > 3){ // calculate normal of a polygon using all points var n:int = points.length; x = 0; y = 0; z = 0 // ensure all points above 0 var minx:Number = 0, miny:Number = 0, minz:Number = 0; for (var p:int = 0, pl:int = points.length; p < pl; p++) { var po:_Point3D = points[p] = points[p].clone(); if (po.x < minx) { minx = po.x; } if (po.y < miny) { miny = po.y; } if (po.z < minz) { minz = po.z; } } if (minx > 0 || miny > 0 || minz > 0){ for (p = 0; p < pl; p++) { po = points[p]; po.x -= minx; po.y -= miny; po.z -= minz; } } var cur:int = 1, prev:int = 0, next:int = 2; for (var i:int = 1; i <= n; i++) { // using Newell method x += points[cur].y * (points[next].z - points[prev].z); y += points[cur].z * (points[next].x - points[prev].x); z += points[cur].x * (points[next].y - points[prev].y); cur = (cur+1) % n; next = (next+1) % n; prev = (prev+1) % n; } } // length of the normal var length:Number = Math.sqrt(x * x + y * y + z * z); // if area is 0 if (length == 0) { return null; }else{ // turn large values into a unit vector x = x / length; y = y / length; z = z / length; }

    Read the article

  • New R Interface to Oracle Data Mining Available for Download

    - by charlie.berger
      The R Interface to Oracle Data Mining ( R-ODM) allows R users to access the power of Oracle Data Mining's in-database functions using the familiar R syntax. R-ODM provides a powerful environment for prototyping data analysis and data mining methodologies. R-ODM is especially useful for: Quick prototyping of vertical or domain-based applications where the Oracle Database supports the application Scripting of "production" data mining methodologies Customizing graphics of ODM data mining results (examples: classification, regression, anomaly detection) The R-ODM interface allows R users to mine data using Oracle Data Mining from the R programming environment. It consists of a set of function wrappers written in source R language that pass data and parameters from the R environment to the Oracle RDBMS enterprise edition as standard user PL/SQL queries via an ODBC interface. The R-ODM interface code is a thin layer of logic and SQL that calls through an ODBC interface. R-ODM does not use or expose any Oracle product code as it is completely an external interface and not part of any Oracle product. R-ODM is similar to the example scripts (e.g., the PL/SQL demo code) that illustrates the use of Oracle Data Mining, for example, how to create Data Mining models, pass arguments, retrieve results etc. R-ODM is packaged as a standard R source package and is distributed freely as part of the R environment's Comprehensive R Archive Network (CRAN). For information about the R environment, R packages and CRAN, see www.r-project.org. R-ODM is particularly intended for data analysts and statisticians familiar with R but not necessarily familiar with the Oracle database environment or PL/SQL. It is a convenient environment to rapidly experiment and prototype Data Mining models and applications. Data Mining models prototyped in the R environment can easily be deployed in their final form in the database environment, just like any other standard Oracle Data Mining model. What is R? R is a system for statistical computation and graphics. It consists of a language plus a run-time environment with graphics, a debugger, access to certain system functions, and the ability to run programs stored in script files. The design of R has been heavily influenced by two existing languages: Becker, Chambers & Wilks' S and Sussman's Scheme. Whereas the resulting language is very similar in appearance to S, the underlying implementation and semantics are derived from Scheme. R was initially written by Ross Ihaka and Robert Gentleman at the Department of Statistics of the University of Auckland in Auckland, New Zealand. Since mid-1997 there has been a core group (the "R Core Team") who can modify the R source code archive. Besides this core group many R users have contributed application code as represented in the near 1,500 publicly-available packages in the CRAN archive (which has shown exponential growth since 2001; R News Volume 8/2, October 2008). Today the R community is a vibrant and growing group of dozens of thousands of users worldwide. It is free software distributed under a GNU-style copyleft, and an official part of the GNU project ("GNU S"). Resources: R website / CRAN R-ODM

    Read the article

  • How to autostart Kupfer in Ubuntu 13.10?

    - by JJD
    I built Kupfer from source on Ubuntu 13.10 and installed it into ./local. In the preferences I checked Start automatically on login. However, Kupfer does not automatically start. The desktop file in ~/.local/share/applications/kupfer.desktop looks like this: [Desktop Entry] Version=1.0 Name=Kupfer Name[cs]=Kupfer Name[da]=Kupfer Name[de]=Kupfer Name[el]=Kupfer Name[es]=Kupfer Name[eu]=Kupfer Name[fr]=Kupfer Name[gl]=Kupfer Name[hu]=Kupfer Name[it]=Kupfer Name[ko]=?? Name[nb]=Kupfer Name[nl]=Kupfer Name[pl]=Kupfer Name[pt]=Kupfer Name[pt_BR]=Kupfer Name[ru]=Kupfer Name[sl]=Kupfer Name[sv]=Kupfer Name[tr]=Kupfer Name[zh_CN]=Kupfer GenericName=Application Launcher GenericName[ca]=Llançador d'aplicació GenericName[cs]=Spouštec aplikací GenericName[da]=Programopstarter GenericName[de]=Anwendungsstarter GenericName[el]=??????t?? efa?µ???? GenericName[es]=Lanzador de aplicaciones GenericName[eu]=Aplikazioen abiarazlea GenericName[fr]=Lanceur d'applications GenericName[gl]=Iniciador de aplicativos GenericName[hu]=Alkalmazásindító GenericName[it]=Lanciatore di applicazioni GenericName[ko]=?? ???? ?? ??? GenericName[nb]=Programstarter GenericName[nl]=Programmastarter GenericName[pl]=Aktywator programów GenericName[pt]=Lançador de Aplicações GenericName[pt_BR]=Lançador de aplicativos GenericName[ru]=???????? ??????? ?????????? GenericName[sl]=Zaganjalnik programov GenericName[sv]=Programstartare GenericName[tr]=Uygulama Çalistirici GenericName[zh_CN]=????? Comment=Convenient command and access tool for applications and documents Comment[cs]=Nástroj pro pohodlné provádení príkazu a prístup k aplikacím a dokumentum Comment[da]=Nemt kommando- og adgangsværktøj til programmer og dokumenter Comment[de]=Praktisches Befehls- und Zugriffswerkzeug für Anwendungen und Dokumente Comment[el]=?????? e??a?e?? e?t???? ?a? p??sßas?? ??a efa?µ???? ?a? ????afa Comment[es]=Herramienta para acceso y manejo de aplicaciones y documentos Comment[eu]=Komando eta atzipen tresna egokia aplikazio eta dokumentuentzat Comment[fr]=Outil pratique pour accéder à des documents et lancer des applications Comment[gl]=Ferramenta cómoda para controlar e acceder a aplicativos e documentos Comment[hu]=Kényelmes parancs és hozzáférési eszköz az alkalmazásokhoz és dokumentumokhoz Comment[it]=Comodo comando e strumento di accesso per applicazioni e documenti Comment[ko]=???? ???????? ??? ???? ??? ?? ? ?? ?? Comment[nb]=Praktiskt kommandoverktøy for programmer og dokumenter Comment[nl]=Handige opdracht- en toegangshulp voor programma's en documenten Comment[pl]=Wygodne narzedzie do uruchamiania programów i otwierania dokumentów Comment[pt]=Ferramenta conveniente para acesso e gestão de aplicações e documentos Comment[pt_BR]=Uma conveniente ferramenta de comando e acesso para aplicativos e documentos Comment[ru]=??????? ?????????? ??? ???????? ??????? ? ?????????? ? ?????????? Comment[sl]=Prikladno orodje za izvajanje ukazov in dostopa do programov in dokumentov Comment[sv]=Praktiskt kommandoprogram för åtkomst av program och dokument Comment[zh_CN]=???????????????? Icon=kupfer Exec=python /home/user/.local/share/kupfer/kupfer.py %F Type=Application Categories=Utility; StartupNotify=true X-UserData=$CONFIG/kupfer;$DATA/kupfer;$CACHE/kupfer Terminal=false

    Read the article

  • i get the exception org.hibernate.MappingException: No Dialect mapping for JDBC type: -9

    - by ramesh m
    I am using Hibernate and wrote a native SQL query; the query itself executes fine at the SQL Server command prompt.

        try {
            session = HibernateUtil.getInstance().getSession();
            transaction = session.beginTransaction();
            SQLQuery query = session.createSQLQuery("SELECT AP.PROJECT_NAME, AP.SKILLSET, PA.START_DATE, PA.END_DATE, RS.EMPLOYEE_ID, RS.EMPLOYEE_NAME, RS.REPORTING_PM FROM RESOURCE_MASTER RS,SHARED_PROPOSAL S, ACTUAL_PROPOSAL AP, PROJECT_APPROVED PA, PROJECT_ALLOCATION PL WHERE RS.EMPLOYEE_ID = PL.EMPLOYEE_ID AND PA.PROJECT_ID = PL.PROJECT_ID AND PA.SHARED_PROPOSAL_ID = S.SHARED_PROPOSAL_ID AND S.ACTUAL_PROPOSAL_ID=AP.ACTUAL_PROPOSAL_ID");
            List<Object[]> obj = query.list();
            Object[] object = new Object[arrayList.size()];
            for (int i = 0; i < arrayList.size(); i++) {
                object[i] = arrayList.get(i);
                System.out.println(object[i]);
            }
            arrayList.get(0);
            String name = (String) arrayList.get(0);
            logger.info("In find All searchDeveloper");
        } catch (Exception exception) {
            throw new PPAMException("Contact admin", "Problem retrieving resource master list", exception);
        }

    That is how I am using it, and when I run it I get this exception: org.hibernate.MappingException: No Dialect mapping for JDBC type: -9. The same query runs correctly at the SQL Server command prompt. I have mapped seven tables; if I remove the ACTUAL_PROPOSAL AP table, the query executes correctly. Please help me.
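
    JDBC type -9 is java.sql.Types.NVARCHAR, which some Hibernate SQL Server dialects do not map for native queries. Two commonly used workarounds, sketched for Hibernate 3.6-style APIs (older versions spell the type Hibernate.STRING instead of StandardBasicTypes.STRING): either declare the scalar types on the query, e.g. query.addScalar("PROJECT_NAME", StandardBasicTypes.STRING) for each selected column, or register the mapping once in a custom dialect:

        import java.sql.Types;
        import org.hibernate.dialect.SQLServerDialect;
        import org.hibernate.type.StandardBasicTypes;

        // Hypothetical dialect subclass: tell Hibernate to read NVARCHAR columns as strings.
        public class SqlServerUnicodeDialect extends SQLServerDialect {
            public SqlServerUnicodeDialect() {
                registerHibernateType(Types.NVARCHAR, StandardBasicTypes.STRING.getName());
            }
        }

    The class name here is hypothetical; point hibernate.dialect at it in the configuration instead of the stock SQLServerDialect.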

    Read the article

  • Package creation issues using SQL Developer

    - by Carter
    So I've never worked with stored procedures and don't have a whole lot of DB experience in general, and I've been assigned a task that requires I create a package, and I'm stuck. Using SQL Developer, I'm trying to create a package called JUMPTO with this code...

        create or replace package JUMPTO is
          type t_locations is ref cursor;
          procedure procGetLocations(locations out t_locations);
        end JUMPTO;

    When I run it, it spits out this PL/SQL code block...

        DECLARE
          LOCATIONS APPLICATION.JUMPTO.t_locations;
        BEGIN
          JUMPTO.PROCGET_LOCATIONS( LOCATIONS => LOCATIONS );
          -- Modify the code to output the variable
          -- DBMS_OUTPUT.PUT_LINE('LOCATIONS = ' || LOCATIONS);
        END;

    A tutorial I found said to take out the comment for that second line there. I've tried with and without the comment. When I hit "ok" I get the error...

        ORA-06550: line 2, column 32: PLS-00302: component 'JUMPTO' must be declared
        ORA-06550: line 2, column 13: PL/SQL: item ignored
        ORA-06550: line 6, column 18: PLS-00320: the declaration of the type of this expression is incomplete or malformed
        ORA-06550: line 5, column 3: PL/SQL: Statement ignored
        ORA-06512: at line 58

    I really don't have any idea what's going on; this is all completely new territory for me. I tried creating a body that just selected some stuff from the database, but nothing is working the way it seems like it should in my head. Can anyone give me any insight into this?
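
    For reference, a package spec like this needs a matching package body before the procedure can be called; the OPEN ... FOR is what actually populates the ref cursor. A minimal sketch, assuming a hypothetical LOCATIONS_TBL table and columns:

        create or replace package body JUMPTO is
          procedure procGetLocations(locations out t_locations) is
          begin
            open locations for
              select location_id, location_name
              from locations_tbl;          -- hypothetical table
          end procGetLocations;
        end JUMPTO;

    In SQL Developer or SQL*Plus it can then be exercised with a bind variable of type refcursor:

        variable rc refcursor
        exec JUMPTO.procGetLocations(:rc)
        print rc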

    Read the article

  • How do I use a dependency on a Perl module installed in a non-standard location?

    - by Kinopiko
    I need to install two Perl modules on a web host. Let's call them A::B and X::Y. X::Y depends on A::B (needs A::B to run). Both of them use Module::Install. I have successfully installed A::B into a non-system location using perl Makefile.PL PREFIX=/non/system/location make; make test; make install Now I want to install X::Y, so I try the same thing perl Makefile.PL PREFIX=/non/system/location The output is $ perl Makefile.PL PREFIX=/non/system/location/ Cannot determine perl version info from lib/X/Y.pm *** Module::AutoInstall version 1.03 *** Checking for Perl dependencies... [Core Features] - Test::More ...loaded. (0.94) - ExtUtils::MakeMaker ...loaded. (6.54 >= 6.11) - File::ShareDir ...loaded. (1.00) - A::B ...missing. ==> Auto-install the 1 mandatory module(s) from CPAN? [y] It can't seem to find A::B in the system, although it is installed, and when it tries to auto-install the module from CPAN, it tries to write it into the system directory (ignoring PREFIX). I have tried using variables like PERL_LIB and LIB on the command line, after PREFIX=..., but nothing I have done seems to work. I can do make and make install successfully, but I can't do make test because of this problem. Any suggestions? I found some advice at http://servers.digitaldaze.com/extensions/perl/modules.html to use an environment variable PERL5LIB, but this also doesn't seem to work: export PERL5LIB=/non/system/location/lib/perl5/ didn't solve the problem.
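
    A hedged sketch of the two usual ways around this; the exact subdirectory that a PREFIX install produces varies by perl version and platform, so check where A::B actually landed before trusting the example path below:

        # 1) Find the directory that actually contains A/B.pm under the prefix ...
        find /non/system/location -name B.pm
        # ... and put that directory on PERL5LIB before configuring X::Y,
        # for example (the 5.8.8 component is only illustrative):
        export PERL5LIB=/non/system/location/lib/perl5/site_perl/5.8.8
        perl Makefile.PL PREFIX=/non/system/location

        # 2) Or use INSTALL_BASE for both modules; it gives a predictable layout:
        perl Makefile.PL INSTALL_BASE=/non/system/location
        export PERL5LIB=/non/system/location/lib/perl5

    The local::lib module automates essentially the second setup if it is available on the host.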

    Read the article

  • Perl program - Dynamic Bootstrapping code

    - by mgj
    Hi. I need to understand how this particular program works. It seems quite complicated; could you please help me understand what this Perl program does? I am a beginner, so I can hardly tell what is happening in the code below. Any guidance or insight into this program is highly appreciated. Thank you. :) The program is called premove.pl.c and it is associated with one more program, premove.pl. Its code looks like this:

        #!perl
        open (newdata,">newdata.txt") || die("cant create new file\n"); #create passwd file
        $linedata = "";
        while($line=<>){
            chomp($line);
            #chop($line);
            print newdata $line."\n";
        }
        close(newdata);
        close(olddata);
        __END__

    I am not even sure how to run the two programs mentioned here. I also wonder what the extension of the first program signifies, since it has a "pl.c" extension; please let me know if you know what it could mean. I need to understand it soon, which is why I am posting this question; I am short of time, otherwise I would try to figure it out myself. This seems like a complex program for a beginner like me. Thank you again for your time.
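
    For what it's worth, a script like this is just a filter: the <> operator reads whatever files are named on the command line (or standard input), each line has its trailing newline stripped, and the lines are written into newdata.txt. A sketch of how it would typically be invoked (the input file name is made up):

        perl premove.pl.c somefile.txt          # copies the lines of somefile.txt into newdata.txt
        cat somefile.txt | perl premove.pl.c    # <> also reads standard input

    The ".pl.c" ending has no special meaning to Perl; the interpreter only looks at the contents of the file.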

    Read the article
