Search Results

Search found 9926 results on 398 pages for 'lookup tables'.

Page 12 of 398

  • Postcode and radius lookup recommendations

    - by WestDiscGolf
    I look after a number of divisional websites for a UK-based membership organisation, and what we want to provide, alongside other address functions, is a closest-member lookup for a web user from the websites themselves. A few use cases I want to cover: Case 1: the user puts in their post code and wants to see all the members in a 5/10/15/20/30/40 mile radius. Case 2: the member puts in an area (city, county, etc.) and gets a list of members in that area. Essentially what I'm looking for is a programmable API I can code against to do two things: look up a post code and return addresses (after picking a house number, for example), and search a post code plus radius (5 miles, 10 miles, etc.) to get a set of applicable post codes to then join onto the membership records in the database. Any recommendations? It can be a quarterly-update install on the server, or it can be a queryable web service. I'm open to suggestions. Thanks in advance.
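
    Whatever product ends up supplying the data, the radius search itself reduces to a distance test once each post code has a latitude/longitude. A minimal sketch of that test in C# (having coordinates per post code is an assumption about whatever lookup source is chosen, not a specific vendor's API):

        using System;

        static class GeoDistance
        {
            static double ToRadians(double degrees) => degrees * Math.PI / 180.0;

            // Great-circle (haversine) distance between two points, in miles.
            public static double Miles(double lat1, double lon1, double lat2, double lon2)
            {
                const double earthRadiusMiles = 3958.8;
                double dLat = ToRadians(lat2 - lat1);
                double dLon = ToRadians(lon2 - lon1);
                double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
                         + Math.Cos(ToRadians(lat1)) * Math.Cos(ToRadians(lat2))
                         * Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
                return earthRadiusMiles * 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));
            }
        }

        // Usage sketch: keep members whose post-code centroid falls within the radius.
        // var nearby = members.Where(m => GeoDistance.Miles(userLat, userLon, m.Lat, m.Lon) <= 10.0);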

    Read the article

  • DNS Lookup in ASP / ASP.Net

    - by Dave Forgac
    I have a Windows server that is intermittently losing the ability to look up DNS information. I'm trying to get to the root cause of the problem, but in the meantime I'd like to be able to monitor whether the server can perform lookups. Basically, it should attempt to look up some common hostnames and then display 'Success' if the lookups succeed. I see lots of examples of doing this with third-party components in ASP, but I would prefer to do it with a single ASP / ASP.Net script that would be portable and not require anything additional to be installed.
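
    For the lookup check itself, nothing beyond the framework's System.Net.Dns class is needed, so a single script with no extra components is realistic. A minimal sketch of the idea (the hostname list is an arbitrary example; in a real page the results would be written to the response rather than the console):

        using System;
        using System.Net;
        using System.Net.Sockets;

        class DnsCheck
        {
            static void Main()
            {
                // Dns.GetHostEntry throws SocketException when resolution fails.
                string[] hosts = { "www.google.com", "www.microsoft.com" };
                foreach (string host in hosts)
                {
                    try
                    {
                        IPHostEntry entry = Dns.GetHostEntry(host);
                        Console.WriteLine("{0}: Success ({1})", host, entry.AddressList[0]);
                    }
                    catch (SocketException ex)
                    {
                        Console.WriteLine("{0}: Failed ({1})", host, ex.Message);
                    }
                }
            }
        }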

    Read the article

  • Creation time of Innodb tables

    - by shantanuo
    The CREATE_TIME column of the TABLES table in INFORMATION_SCHEMA shows the same CREATE_TIME for all my InnoDB tables. It implies all these tables were created between 2010-03-26 06:52:00 and 2010-03-26 06:53:00, while actually they were created a few months ago. Does the CREATE_TIME field change automatically for InnoDB tables?

    Read the article

  • Preference values - static without tables using a model with virtual attributes

    - by Mike
    I'm trying to eliminate two tables from my database: message_sort_options and per_page_options. These tables basically just have 5 records each, which are options a user can set as their preference in a preferences table. The preferences table has columns like sort_preferences and per_page_preference, which both point to a record in one of the two tables containing the options. How can I set up the models with virtual attributes and fixed values for the options, eliminating table lookups every time the preferences are looked up?

    Read the article

  • how to update tables' structures while keeping current data

    - by Leon
    I have a C# application that uses tables from a SQL Server 2008 database (it runs on a standalone PC with a local SQL Server). Initially I install the database on this PC with some initial data (there are some tables the application uses that the user doesn't touch). The question is: how can I upgrade this database after the user has created some new data, without harming that data? I continue developing and may add new tables or stored procedures, or add columns to existing tables. Thanks in advance!
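
    One common approach is versioned, additive migration scripts: record the schema version in the database, and on upgrade run only the scripts newer than that version, each of which adds tables or columns without dropping anything. A minimal sketch of the idea (the SchemaVersion table, the script contents, and the connection string are illustrative assumptions, not a specific framework):

        using System.Data.SqlClient;

        string connectionString = "...";  // placeholder: your database connection string

        // Each entry adds structure only, so existing user data is preserved.
        var migrations = new[]
        {
            new { Version = 2, Sql = "ALTER TABLE Customers ADD Notes NVARCHAR(200) NULL" },
            new { Version = 3, Sql = "CREATE TABLE AuditLog (Id INT IDENTITY PRIMARY KEY, Entry NVARCHAR(400))" }
        };

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Assumes SchemaVersion was seeded with the initial version at install time.
            int current = (int)new SqlCommand(
                "SELECT MAX(Version) FROM SchemaVersion", conn).ExecuteScalar();

            foreach (var m in migrations)
            {
                if (m.Version <= current) continue;
                new SqlCommand(m.Sql, conn).ExecuteNonQuery();
                new SqlCommand(
                    "INSERT INTO SchemaVersion (Version) VALUES (" + m.Version + ")",
                    conn).ExecuteNonQuery();
            }
        }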

    Read the article

  • A few tables are still out of sync after running mk-table-sync

    - by smusumeche
    I have 1 master and 2 slaves, all running MySQL 5.1.42. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show as out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB   TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
        IRC  product            10     0         1         product_id = 147377 AND product_id < 162085
        IRC  post_order_survey  0      0         1         1=1
        IRC  mk_heartbeat       0      0         1         1=1
        IRC  mailing_list       0      0         1         1=1
        IRC  honey_pot_log      0      0         1         1=1
        IRC  product            12     0         1         product_id = 176793 AND product_id < 191501
        IRC  product            18     0         1         product_id = 265041
        IRC  orders             26     0         1         order_id = 694472
        IRC  orders_product     6      0         1         op_id = 935375

    Read the article

  • DNS Lookup in simple C#/asp.net ajax call is extremely slow

    - by Ryan
    I'm running this out of the VS 2008 debugger on Windows 7, running .NET 3.5. The idea was to make all ajax requests with jQuery only, rather than .NET, following some tutorials online. Default.aspx - HTML page; jQuery triggers a method in Default.aspx.cs: http://pastebin.com/pxBvKA2H Default.aspx.cs - C# webform, which just defines a GetDate function that only returns a string for now (trying to eliminate any possible issues) (can only post one hyperlink...) pastebin.com/pnHn50hu The ajax query takes longer than it should. Profiling with Firebug revealed that it took 1.03 s: 1 s DNS Lookup | 26 ms Waiting | 1 ms Receiving. EDIT: It continues to take the same set of times if you continue to click and resubmit the request. Is there anything I can do to cut down on the DNS lookup time, and what did I do wrong? Thanks for any help.

    Read the article

  • Most efficient way for a lookup/search in a huge list (python)

    - by user229269
    I just parsed a big file and created a list containing 42,000 strings/words. I want to query against this list to check whether a given word/string belongs to it. So my question is: what is the most efficient way to do such a lookup? A first approach is to sort the list (list.sort()) and then just use if word in list: print 'word', which is really trivial, and I am sure there is a better way. My goal is a fast lookup that finds whether a given string is in the list or not. Ideas for other data structures are welcome, though for now I want to avoid more sophisticated structures like tries. I am interested in hearing ideas (or tricks) about fast lookups, or any other Python library methods that might do the search faster than the plain in operator. Thanks in advance!
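
    The usual answer is a hash-based set, which makes each membership test O(1) on average instead of an O(n) list scan: in Python, build it once with words = set(word_list) and then test with word in words. The same idea sketched in C# with HashSet<string>:

        using System;
        using System.Collections.Generic;

        class SetLookup
        {
            static void Main()
            {
                // Building the set is O(n) once; each Contains is O(1) on average.
                var words = new HashSet<string>(new[] { "alpha", "beta", "gamma" });
                Console.WriteLine(words.Contains("beta"));   // True
                Console.WriteLine(words.Contains("delta"));  // False
            }
        }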

    Read the article

  • NHibernate: how to look up a specific date

    - by Daoming Yang
    How can I look up a specific date in NHibernate? I'm currently using this to look up one day's orders:

        ICriteria criteria = SessionManager.CurrentSession.CreateCriteria(typeof(Order))
            .Add(Expression.Between("DateCreated", date.Date.AddDays(-1), date.Date.AddDays(1)))
            .AddOrder(NHibernate.Criterion.Order.Desc("OrderID"));

    I tried the following code, but it did not bring back the data for me:

        Expression.Eq("DateCreated", date)
        Expression.Like("DateCreated", date)

    Note: the passed-in date value will be like 2010-04-03 00:00:00, while the actual date values in the database are like 2010-03-13 11:17:16.000. Can anyone let me know how to do this? Many thanks.
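
    Expression.Eq fails here because DateCreated stores a full timestamp, so equality only matches rows created at exactly midnight (and the Between version above also pulls in rows from the neighbouring days). A sketch of the usual fix, built on the question's own criteria code: filter on a half-open range covering the whole calendar day:

        // Match the whole day [date, date + 1 day): >= midnight, < next midnight.
        ICriteria criteria = SessionManager.CurrentSession.CreateCriteria(typeof(Order))
            .Add(Expression.Ge("DateCreated", date.Date))
            .Add(Expression.Lt("DateCreated", date.Date.AddDays(1)))
            .AddOrder(NHibernate.Criterion.Order.Desc("OrderID"));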

    Read the article

  • DNS Lookup in ASP.Net

    - by Dave Forgac
    I have a Windows server that is intermittently losing the ability to look up DNS information. I'm trying to get to the root cause of the problem, but in the meantime I'd like to be able to monitor whether the server can perform lookups. Basically, it should attempt to look up some common hostnames and then display 'Success' if the lookups succeed. I see lots of examples of doing this with third-party components in ASP.NET, but I would prefer to do it with a single ASP.Net script that would be portable and not require anything additional to be installed.

    Read the article

  • Lookup function not working (RS SP2)

    - by Al Reyes
    Hi, I made the upgrade to SP2. I'm trying to use the Lookup function to link data from two different servers. As a first, simple exercise I'm linking data from two datasets on the same server: one dataset with journals and the other with the account descriptions. The expression on a field of my table looks like this: =Lookup(Fields!ACTINDX.Value, Fields!ACTINDX.Value, Fields!ACTDESCR.Value, "ACCTINFO") I made sure of the names, using only uppercase for datasets and fields, but I'm receiving the following message when I try to preview: "An error occurred during local report processing. The definition of the report '/DETAIL' is invalid. The Value expression for the text box 'ACTINDX' refers to the field 'ACTDESCR'. Report item expressions can only refer to fields within the current dataset scope or, if inside an aggregate, the specified dataset scope." I'll appreciate any suggestions. Regards, Al

    Read the article

  • Ideal timeout period for dns lookup

    - by railscoder
    In my Rails app I do an nslookup using the Ruby library resolv. If a site like dgdfgdfgdfg.com is entered, it takes too long to resolve, in some instances around 20 seconds (mostly for non-existent sites). Because this causes the application to slow down, I thought of introducing a timeout period for the DNS lookup. What would be an ideal timeout period for the DNS lookup, so that resolution of actual sites doesn't fail? Would something like 10 seconds be fine?
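
    Whatever value is chosen, the lookup needs to run under a bounded wait so that a slow or non-existent domain cannot stall the request; in Ruby this is typically Timeout.timeout(n) { Resolv.getaddress(host) } from the standard library. The same pattern sketched in C#, with a task that is abandoned after the deadline:

        using System;
        using System.Net;
        using System.Threading.Tasks;

        // Run the lookup on a worker task and give up after a fixed deadline.
        // A short deadline (a few seconds) is usually plenty: healthy resolvers
        // tend to answer well under a second, so long waits mostly mean retries
        // against a non-existent domain.
        Task<IPHostEntry> lookup = Task.Run(() => Dns.GetHostEntry("example.com"));
        if (lookup.Wait(TimeSpan.FromSeconds(5)))
        {
            // (A failed lookup surfaces as an AggregateException from Wait/Result.)
            Console.WriteLine("Resolved: {0}", lookup.Result.HostName);
        }
        else
        {
            Console.WriteLine("Timed out; treat the host as unresolvable.");
        }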

    Read the article

  • best way to create tables with Doctrine?

    - by ajsie
    Assume that I start coding an application from scratch. When using Doctrine, is it best to create the tables manually in MySQL and then generate models from the tables, or the other way around, i.e. create the models in PHP and then generate tables from the models? And if I already have a database, will the generated models be optimal? I have heard some say it's best to create the database from scratch when using an ORM, so that the relations are optimized for OOD. Share your thoughts!

    Read the article

  • How To Delete Top 100 Rows From SQL Server Tables

    - by Gopinath
    If you want to delete the top 100 (or top n) records from a SQL Server table, it is very easy with the following query:

        DELETE FROM MyTable
        WHERE PK_Column IN (
            SELECT TOP 100 PK_Column
            FROM MyTable
            ORDER BY creation)

    Why Would You Require To Delete Top 100 Records? I often delete the top n records of a table when the number of rows is huge. Let's say I have 1,000,000,000 records in a table: deleting 10,000 rows at a time in a loop is faster than trying to delete all 1,000,000,000 at once. Whatever the reason may be, if you ever come across a requirement to delete a bunch of rows at a time, this query will be helpful to you.
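
    A sketch of the loop version mentioned above, reusing the article's own query pattern (C#; MyTable, PK_Column and the creation column come from the article's query, while the connection string is an assumed placeholder):

        using System.Data.SqlClient;

        string connectionString = "...";  // placeholder: your database connection string

        // Delete the oldest 10,000 rows per pass until nothing is left.
        // Small batches keep each transaction, and its log usage, bounded.
        const string batchDelete = @"
            DELETE FROM MyTable
            WHERE PK_Column IN (
                SELECT TOP 10000 PK_Column
                FROM MyTable
                ORDER BY creation)";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var cmd = new SqlCommand(batchDelete, conn);
            while (cmd.ExecuteNonQuery() > 0)
            {
                // Each iteration removes one batch; the loop ends when no rows remain
                // (or when a WHERE filter added to the inner SELECT matches nothing).
            }
        }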

    Read the article

  • SQL SERVER – Introduction to SQL Server 2014 In-Memory OLTP

    - by Pinal Dave
    In SQL Server 2014 Microsoft has introduced a new database engine component called In-Memory OLTP, aka project "Hekaton", which is fully integrated into the SQL Server Database Engine. It is optimized for OLTP workloads accessing memory-resident data. In-Memory OLTP lets us create memory-optimized tables, which in turn offer a significant performance improvement for typical OLTP workloads. The main objective of a memory-optimized table is to ensure that highly transactional tables can live in memory and remain in memory forever, without losing a single record. The most significant part is that it still supports the majority of Transact-SQL statements, and Transact-SQL stored procedures can be compiled to machine code for further performance improvements on memory-optimized tables. This engine is designed to ensure higher concurrency and minimal blocking: In-Memory OLTP alleviates the issue of locking by using a new type of multi-version optimistic concurrency control, and it substantially reduces waiting for log writes by generating far less log data and needing fewer log writes.

    Points to remember:

    - Memory-optimized tables refer to tables using the new data structures and keywords added as part of In-Memory OLTP.
    - Disk-based tables refer to the normal tables we have created in SQL Server since its inception. These tables use fixed-size 8 KB pages that are read from and written to disk as a unit.
    - Natively compiled stored procedures are a new object type, supported by the In-Memory OLTP engine, which compiles them into machine code to further improve data-access performance for memory-optimized tables. Natively compiled stored procedures can only reference memory-optimized tables; they can't reference any disk-based table.
    - Interpreted Transact-SQL stored procedures are what SQL Server has always used.
    - Cross-container transactions refer to transactions that reference both memory-optimized tables and disk-based tables.
    - Interop refers to interpreted Transact-SQL that references memory-optimized tables.

    Using In-Memory OLTP

    The In-Memory OLTP engine has been available as part of SQL Server 2014 since the June 2013 CTPs. Installation of In-Memory OLTP is part of the SQL Server setup application. The In-Memory OLTP components can only be installed with a 64-bit edition of SQL Server 2014; they are not available with 32-bit editions.

    Creating Databases

    Any database that will store memory-optimized tables must have a MEMORY_OPTIMIZED_DATA filegroup. This filegroup is specifically designed to store the checkpoint files needed by SQL Server to recover the memory-optimized tables, and although the syntax for creating the filegroup is almost the same as for creating a regular filestream filegroup, it must also specify the option CONTAINS MEMORY_OPTIMIZED_DATA. Here is an example of a CREATE DATABASE statement for a database that can support memory-optimized tables:

        CREATE DATABASE InMemoryDB
        ON PRIMARY (
            NAME = [InMemoryDB_data],
            FILENAME = 'D:\data\InMemoryDB_data.mdf',
            SIZE = 500MB),
        FILEGROUP [SampleDB_mod_fg] CONTAINS MEMORY_OPTIMIZED_DATA (
            NAME = [InMemoryDB_mod_dir],
            FILENAME = 'S:\data\InMemoryDB_mod_dir'),
            (NAME = [InMemoryDB_mod_dir2],
            FILENAME = 'R:\data\InMemoryDB_mod_dir')
        LOG ON (
            NAME = [SampleDB_log],
            FILENAME = 'L:\log\InMemoryDB_log.ldf',
            SIZE = 500MB)
        COLLATE Latin1_General_100_BIN2;

    The example code above creates files on three different drives (D:, S: and R:) for the data files and in-memory storage, so if you would like to run this code, kindly change the drive and folder locations as per your environment. Also notice that a binary Windows (non-SQL) collation was specified; a BIN2 collation is the only collation supported at this point for any indexes on memory-optimized tables. It is also possible to add a MEMORY_OPTIMIZED_DATA filegroup to an existing database; use the commands below to achieve the same:

        ALTER DATABASE AdventureWorks2012
            ADD FILEGROUP hekaton_mod CONTAINS MEMORY_OPTIMIZED_DATA;
        GO
        ALTER DATABASE AdventureWorks2012
            ADD FILE (NAME = 'hekaton_mod', FILENAME = 'S:\data\hekaton_mod')
            TO FILEGROUP hekaton_mod;
        GO

    Creating Tables

    There is no major syntactical difference between creating a disk-based table and a memory-optimized table, but there are a few restrictions and a few new essential extensions. Essentially, any memory-optimized table must use the MEMORY_OPTIMIZED = ON clause, as shown in the CREATE TABLE example below.

    DURABILITY clause (SCHEMA_AND_DATA or SCHEMA_ONLY): a memory-optimized table should always be defined with a DURABILITY value, which can be either SCHEMA_AND_DATA or SCHEMA_ONLY, the former being the default. A memory-optimized table defined with DURABILITY = SCHEMA_ONLY will not persist its data to disk, which means data durability is compromised, whereas DURABILITY = SCHEMA_AND_DATA ensures that data is persisted along with the schema.

    Indexing a memory-optimized table: a memory-optimized table must always have an index if it is created with DURABILITY = SCHEMA_AND_DATA, and this can be achieved by declaring a PRIMARY KEY constraint at the time of creating the table. The following example shows a PRIMARY KEY index created as a HASH index, for which a bucket count must also be specified:

        CREATE TABLE Mem_Table
        (
            [Name] VARCHAR(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH
                WITH (BUCKET_COUNT = 100000),
            [City] VARCHAR(32) NULL,
            [State_Province] VARCHAR(32) NULL,
            [LastModified] DATETIME NOT NULL
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    As you can see in the example above, we used the clause MEMORY_OPTIMIZED = ON to make sure the table is treated as a memory-optimized table and not just a normal table, and we used DURABILITY = SCHEMA_AND_DATA, which means it will persist data along with the metadata. Notice also that the table has a PRIMARY KEY declared up front, which is mandatory for memory-optimized tables. We will talk more about HASH indexes and BUCKET_COUNT in later articles on this topic, which will focus on row and index storage in memory-optimized tables, so stay tuned for that as well.

    Now that we have covered the basics of memory-optimized tables and the key things to remember while using them, let's explore the performance gains using examples. I will be using the database created earlier in this article, InMemoryDB, in the demo exercise below.

        USE InMemoryDB
        GO

        -- Create a disk-based table
        CREATE TABLE dbo.Disktable
        (
            Id INT IDENTITY,
            Name CHAR(40)
        )
        GO
        CREATE NONCLUSTERED INDEX IX_ID ON dbo.Disktable (Id)
        GO

        -- Create a memory-optimized table with a similar structure and DURABILITY = SCHEMA_AND_DATA
        CREATE TABLE dbo.Memorytable_durable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
        GO

        -- Create another memory-optimized table with a similar structure but DURABILITY = SCHEMA_ONLY
        CREATE TABLE dbo.Memorytable_nondurable
        (
            Id INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Name CHAR(40)
        ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
        GO

        -- Insert 100000 records into dbo.Disktable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Disktable (Name) VALUES ('sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END

        -- Do the same inserts for dbo.Memorytable_durable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_durable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END

        -- Finally, do the same inserts for dbo.Memorytable_nondurable and observe the time taken
        DECLARE @i_t BIGINT
        SET @i_t = 1
        WHILE @i_t <= 100000
        BEGIN
            INSERT INTO dbo.Memorytable_nondurable VALUES (@i_t, 'sachin' + CONVERT(VARCHAR, @i_t))
            SET @i_t += 1
        END

    The three inserts above took 1.20 minutes, 54 seconds, and 2 seconds respectively to insert 100,000 records on my machine with 8 GB RAM. This proves the point that memory-optimized tables can definitely help businesses achieve better performance for their highly transactional tables, and a memory-optimized table with DURABILITY = SCHEMA_ONLY is even faster, as it does not persist its data to disk at all. Koenig Solutions is one of the few organizations which offer IT training on SQL Server 2014 and all its updates. Now I leave the decision on using memory-optimized tables to you; I hope you liked this article and that it helped you understand the fundamentals of In-Memory OLTP.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Export MySQL database tables to PHP code to create the same tables in another database?

    - by chefnelone
    How do I export MySQL database tables to PHP code, so that I can create and populate the same tables in another database? I have a local database; I exported it to SQL syntax, and I get something like:

        CREATE TABLE `boletinSuscritos` (
            `id` int(11) NOT NULL AUTO_INCREMENT,
            `name` varchar(120) NOT NULL,
            `email` varchar(120) NOT NULL,
            `date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3;

        INSERT INTO `boletinSuscritos` VALUES(1, 'walter', '[email protected]', '2010-03-24 12:53:12');
        INSERT INTO `boletinSuscritos` VALUES(2, 'Paco', '[email protected]', '2010-03-24 12:56:56');

    but I need it to be (is there any way to export the tables in this way?):

        $sql = "CREATE TABLE boletinSuscritos (
            id int(11) NOT NULL AUTO_INCREMENT,
            name varchar(120) NOT NULL,
            email varchar(120) NOT NULL,
            date timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY ( id )
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=3";
        mysql_query($sql, $conexion);
        mysql_query("INSERT INTO boletinSuscritos VALUES(1, 'walter', '[email protected]', '2010-03-24 12:53:12')");
        mysql_query("INSERT INTO boletinSuscritos VALUES(2, 'Paco', '[email protected]', '2010-03-24 12:56:56')");

    Read the article

  • Part 4, Getting the conversion tables ready for CS to DNN

    - by Chris Hammond
    This is the fourth post in a series of blog posts about converting from CommunityServer to DotNetNuke. A brief background: I had a number of websites running on CommunityServer 2.1, and I decided it was finally time to ditch CommunityServer due to the change in their licensing model and pricing that made it not good for the small guy. This series of blog posts is about how to convert your CommunityServer-based sites to DotNetNuke. Previous posts: Part 1: An Introduction; Part 2: DotNetNuke Installation...(read more)

    Read the article

  • How to create a PeopleCode Application Package/Application Class using PeopleTools Tables

    - by Andreea Vaduva
    This article describes how, in PeopleCode (Release PeopleTools 8.50), to enable or disable a grid without touching each static column, using a dynamic Application Class. The goal is to disable a grid with three columns ("Effort Date", "Effort Amount" and "Charge Back") when the check box "Finished with task" is selected, without referencing each static column; this PeopleCode can be used dynamically with any grid. If the check box "Finished with task" is cleared, the content of the grid columns is editable (and the "+" and "-" buttons are available).

    So, you create an Application Package CLASS_EXTENSIONS that contains an Application Class EWK_ROWSET. This Application Class is defined with "Class extends Rowset", and you add two new properties, Enabled and Visible. After creating this Application Class, you use it in two PeopleCode events, RowInit and FieldChange. The code is very simple: you write only one command, &ERS2.Enabled = False, and the entire grid is disabled... and you can use this code with any grid!

    So, the complete PeopleCode to create the Application Package is (with explanations in [...]):

        ****** Package CLASS_EXTENSIONS ******                  [name of the package: CLASS_EXTENSIONS]

        -- Beginning of the declaration part --
        class EWK_ROWSET extends Rowset;                        [definition of class EWK_ROWSET as a subclass of class Rowset]
           method EWK_ROWSET(&RS As Rowset);                    [the constructor is the method with the same name as the class]
           property boolean Visible get set;
           property boolean Enabled get set;                    [definition of the property "Enabled" in read/write]
        private                                                 [before the word "private", all declarations are public]
           method SetDisplay(&DisplaySW As boolean, &PropName As string, &ChildSW As boolean);
           instance boolean &EnSW;
           instance boolean &VisSW;
           instance Rowset &NextChildRS;
           instance Row &NextRow;
           instance Record &NextRec;
           instance Field &NextFld;
           instance integer &RowCnt, &RecCnt, &FldCnt, &ChildRSCnt;
           instance integer &i, &j, &k;
           instance CLASS_EXTENSIONS:EWK_ROWSET &ERSChild;      [for recursion]
           Constant &VisibleProperty = "VISIBLE";
           Constant &EnabledProperty = "ENABLED";
        end-class;
        -- End of the declaration part --

        method EWK_ROWSET                                       [the constructor]
           /+ &RS as Rowset +/
           %Super = &RS;
        end-method;

        get Enabled
           /+ Returns Boolean +/
           Return &EnSW;
        end-get;

        set Enabled
           /+ &NewValue as Boolean +/
           &EnSW = &NewValue;
           %This.InsertEnabled = &EnSW;
           %This.DeleteEnabled = &EnSW;
           %This.SetDisplay(&EnSW, &EnabledProperty, False);    [this method is called when you set this property]
        end-set;

        get Visible
           /+ Returns Boolean +/
           Return &VisSW;
        end-get;

        set Visible
           /+ &NewValue as Boolean +/
           &VisSW = &NewValue;
           %This.SetDisplay(&VisSW, &VisibleProperty, False);
        end-set;

        method SetDisplay                                       [the most important method]
           /+ &DisplaySW as Boolean, +/
           /+ &PropName as String, +/
           /+ &ChildSW as Boolean +/                            [not used in our example]
           &RowCnt = %This.ActiveRowCount;
           &NextRow = %This.GetRow(1);                          [to know the structure of a row]
           &RecCnt = &NextRow.RecordCount;
           For &i = 1 To &RowCnt                                [loop over each row]
              &NextRow = %This.GetRow(&i);
              For &j = 1 To &RecCnt                             [loop over each record]
                 &NextRec = &NextRow.GetRecord(&j);
                 &FldCnt = &NextRec.FieldCount;
                 For &k = 1 To &FldCnt                          [loop over each field of the record]
                    &NextFld = &NextRec.GetField(&k);
                    Evaluate Upper(&PropName)
                    When = &VisibleProperty
                       &NextFld.Visible = &DisplaySW;
                       Break;
                    When = &EnabledProperty
                       &NextFld.Enabled = &DisplaySW;           [enable/disable each field of the record]
                       Break;
                    When-Other
                       Error "Invalid display property; Must be either VISIBLE or ENABLED"
                    End-Evaluate;
                 End-For;
              End-For;
              If &ChildSW = True Then                           [if recursion]
                 &ChildRSCnt = &NextRow.ChildCount;
                 For &j = 1 To &ChildRSCnt                      [loop over each child rowset]
                    &NextChildRS = &NextRow.GetRowset(&j);
                    &ERSChild = create CLASS_EXTENSIONS:EWK_ROWSET(&NextChildRS);
                    &ERSChild.SetDisplay(&DisplaySW, &PropName, &ChildSW);
                                                                [for each child rowset, call SetDisplay with the same parameters as the parent]
                 End-For;
              End-If;
           End-For;
        end-method;

        ****** End of the Package CLASS_EXTENSIONS ******

    About the Author: Pascal Thaler joined Oracle University in 2005, where he is a Senior Instructor. His area of expertise is Oracle PeopleSoft technology, and he delivers the following courses:

    - For Developers: PeopleTools Overview, PeopleTools I & II, Batch Application Engine, Language Oriented Object PeopleCode, Administration Security
    - For Administrators: Server Administration & Installation, Database Upgrade & Data Management Tools
    - For Interface Users: Integration Broker (Web Service)

    Read the article

  • Extracting Data from a Source System to History Tables

    - by Derek D.
    This is a topic I find very little information written about; however, it is very important that the extraction of data be done in a way that does not hinder the performance of the source system. In this example, the goal is to extract data from a source system into another database (or server), all [...]

    Read the article
