Search Results

Search found 59643 results on 2386 pages for 'data migration'.

  • Understanding EDI 997

    - by VishnuTiwariBlog
    Hi guys, this one is for EDI starters. Below are the complete segment and element details of the EDI 997. 997 Functional Acknowledgment transaction layout (M = mandatory, O = optional):

    010 ST - Transaction Set Header (M). Indicates the start of a transaction set and assigns a control number. Example: ST*997*382823~
        ST01 (M): Code uniquely identifying a transaction set.
        ST02 (M): Identifying control number that must be unique within the transaction set functional group, assigned by the originator.
    020 AK1 - Functional Group Response Header (M). Starts the acknowledgment of a functional group. Example: AK1*QM*2459823
        AK101: Code identifying a group of application-related transaction sets, e.g. IN = Invoice Information (810), SH = Ship Notice/Manifest (856).
        AK102: Assigned number originated and maintained by the sender.
    030 AK2 - Transaction Set Response Header (M). Starts the acknowledgment of a single transaction set. Example: AK2*856*001
        AK201 (M): Code uniquely identifying a transaction set, e.g. 810 = Invoice, 856 = Ship Notice/Manifest.
        AK202 (M): Identifying control number that must be unique within the transaction set functional group, assigned by the originator.
    040 AK3 - Data Segment Note (O). Reports errors in a data segment and identifies its location. Example: AK3*TD3*9
        AK301 Segment ID Code: Segment ID of the data segment in error (see Appendix A - Number 77).
        AK302 Segment Position in Transaction Set: Numerical count position of this data segment from the start of the transaction set; the transaction set header is count position 1.
    050 AK4 - Data Element Note (O). Reports errors in a data element or composite data structure and identifies its location. Example: AK4*2**2
        AK401 Position in Segment: Relative position of the simple data element (or of the composite data structure combined with the relative position of its component data element) in error; the count starts with 1 for the element immediately following the segment ID.
        AK402 Element Position in Segment: Relative position, within the data segment, of the element in error; same counting rule as AK401.
        AK403 Data Element Syntax Error Code: 1 = Mandatory Data Element Missing, 2 = Conditional Required Data Element Missing, 3 = Too Many Data Elements, 4 = Data Element Too Short, 5 = Data Element Too Long, 6 = Invalid Character in Data Element, 7 = Invalid Code Value, 8 = Invalid Date, 9 = Invalid Time, 10 = Exclusion Condition Violated.
        AK404 Copy of Bad Data Element: A copy of the data element in error.
    060 AK5 - Transaction Set Response Trailer (M). Acknowledges acceptance or rejection and reports errors in a transaction set. Examples: AK5*A~ or AK5*R*5~
        AK501 Transaction Set Acknowledgment Code: A = Accepted, E = Accepted But Errors Were Noted, R = Rejected.
        AK502 Transaction Set Syntax Error Code: 1 = Transaction Set Not Supported, 2 = Transaction Set Trailer Missing, 3 = Transaction Set Control Number in Header and Trailer Do Not Match, 4 = Number of Included Segments Does Not Match Actual Count, 5 = One or More Segments in Error, 6 = Missing or Invalid Transaction Set Identifier, 7 = Missing or Invalid Transaction Set Control Number.
    070 AK9 - Functional Group Response Trailer (M). Acknowledges acceptance or rejection of a functional group and reports the numbers of included, received, and accepted transaction sets. Examples: AK9*A*1*1*1~ or AK9*R*1*1*0~
        AK901 Functional Group Acknowledge Code: A = Accepted, E = Accepted But Errors Were Noted, R = Rejected.
        AK902 Number of Transaction Sets Included: Total number of transaction sets included in the functional group terminated by the trailer containing this data element.
        AK903 Number of Received Transaction Sets.
        AK904 Number of Accepted Transaction Sets in the functional group.
        AK905 Functional Group Syntax Error Code: 1 = Functional Group Not Supported, 2 = Functional Group Version Not Supported, 3 = Functional Group Trailer Missing, 4 = Group Control Number in the Functional Group Header and Trailer Do Not Agree, 5 = Number of Included Transaction Sets Does Not Match Actual Count, 6 = Group Control Number Violates Syntax.
    080 SE - Transaction Set Trailer (M). Indicates the end of the transaction set and provides the count of transmitted segments, including the beginning (ST) and ending (SE) segments. Example: SE*9*223~
        SE01 Number of Included Segments: Total number of segments in the transaction set, including ST and SE.
        SE02 Transaction Set Control Number: Identifying control number that must be unique within the transaction set functional group, assigned by the originator.
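
    Putting the segments together, a complete 997 that accepts a single 856 might look like this (a hand-built illustration; the control numbers are made up):

        ST*997*0001~
        AK1*SH*2459823~
        AK2*856*001~
        AK5*A~
        AK9*A*1*1*1~
        SE*6*0001~

    AK1 opens the acknowledgment of functional group 2459823, AK2 and AK5 accept transaction set 001, AK9 reports one transaction set included, received, and accepted, and SE counts the six segments from ST through SE while echoing the 0001 control number from ST.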

    Read the article

  • Getting UPK data into Excel

    - by maria.cozzolino(at)oracle.com
    Did you ever want someone to review your UPK outline outside of the Developer? You can send your outline to an Excel report, which can be distributed through email. Depending on how much additional data you want with your outline, there are two ways to do this task.

    Basic data - print a listing of all the items in the outline:
    • With your outline open, choose File > Print...
    • Choose the "Save document as" command on the right, and choose Excel (or .xlsx).
    • HINT: If you have not expanded your entire outline, it's faster to use the commands in the Developer to expand the entire outline. However, you can expand specific sections by clicking on them in the print preview.
    • NOTE: If you have the Details view displayed rather than the Player view, you can print all the data that appears in that view.

    Advanced data - if you want a more detailed report, you can use the HP Quality Center publishing style, which also creates an Excel file. This style contains a default set of fields for use with Quality Center, but any of the metadata fields can be added to the report, and it can be used for more than just importing into HP Quality Center. To add additional columns to the HP Quality Center publishing style:
    1. Make a copy of the publishing style. This ensures that you have a good copy to revert to if something goes wrong with your customizations, and also allows you to keep your modifications when the software is upgraded.
    2. Open the copy of the columnspec.xml file in your favorite XML editor - I use Notepad. (This file is located in a language-specific folder in the HP Quality Center publishing style.)
    3. Scroll down the columnspec file until you find the column to include. All the metadata fields that can be added to the report are listed in the columnspec file - you just need to tell the system to include the columns.
    4. You will see a series of per-column sections (a hypothetical fragment is sketched after this list).
    5. Change the value for "col export" to "yes". This will include the column in the Excel file.
    6. If desired, change the value for "Play_ModesColHeader" to whatever name you wish to appear in the Excel column heading.
    7. Save the columnspec file.
    8. Save the publishing style package.

    Now, when you publish for HP Quality Center, you will see your newly added columns. You can refer to the section on Customizing HP Quality Center Output in the Content Deployment Guide for additional customization details. Happy customization! I'd be interested in hearing what other uses you have for Excel reporting. Wishing you and yours a happy and healthy New Year! ~~Maria Cozzolino, Manager of Software Requirements and UI
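
    The XML snippet from the original post did not survive the copy, so the fragment below is only a hypothetical sketch of the shape to look for - a per-column section with an export switch and a column-header label; the real element and attribute names are whatever appears in your own columnspec.xml:

        <!-- hypothetical fragment; check your columnspec.xml for the actual names -->
        <column name="Play_Modes" col_export="no">
            <header>Play_ModesColHeader</header>
        </column>

    Flipping the export value to "yes" in the real file is what pulls the column into the Excel output (step 5), and the Play_ModesColHeader value becomes the column heading (step 6).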

    Read the article

  • Willy Rotstein on Supply Chain Planning

    - by sarah.taylor(at)oracle.com
    Each time a merchandiser, buyer or planner in Retail makes a business decision around assortment, inventory, pricing and promotions there is an opportunity to improve both Profitability and Customer Service. Improving decision making, however, has always been a tricky business for retailers. I have worked in this space for more than 15 years. I began my career as an academic, at Imperial College London, and then broadened this interest with Retailers, aiming to optimize their merchandising and supply chain decisions. Planning the business and optimizing profit is a complex process. The complexity arises from the variety of people involved, the large number of decisions to take across all business processes, the uncertainty intrinsic to the retail environment, as well as the volume of data available for analysis. Things are not getting any easier, either. The advent of multi-channel, social media and mobile is taking these complexities to a new level and presenting additional opportunities for those willing to exploit them. I guess it is due to the complexities of the decision making process that, over the last couple of years working with Oracle Retail, I have witnessed a clear trend around the deployment of planning systems. Retailers are aiming to simplify their decision making processes. They want to use one joined-up planning platform across the business and enhance it with "actionable" data mining and optimization techniques. At Oracle Retail, we have a vibrant community of international retailers who regularly come together to discuss the big issues in retail planning. It is a combination of fashion, grocery and speciality retailers, all sharing their best practice vision for planning and optimizing merchandise decisions. As part of the Retail Exchange program, at the recent National Retail Federation event in New York, I jointly hosted a Planning dinner with Peter Fitzgerald from Google UK, Retail Division. Those retailers from our international planning community who were in New York for the annual NRF event were able to attend. The group comprised some of Europe's great international retail brands. All sectors were represented, by organisations like Mango, LVMH, Ahold, Morrisons, Shop Direct and River Island. They confirmed the current importance of engaging with Planning and Optimization issues. In particular, the impact of the internet was a key topic. We had a great debate about new retail initiatives. Peter highlighted how mobility is changing retail - in particular with the new "local availability search" initiative. We also had an exciting discussion around the opportunities to improve merchandising using the new data that is becoming available from search, social media and ecommerce sites. It will be our focus to continue to help retailers translate this data into better results while keeping their business operations simple. New developments in "actionable" analytics and computing capacity make this a very exciting area today. Watch this space for my contributions on these topics, which will be made available through this blog. Oracle Retail has a strong Planning community. If you are a category manager, a planner, a buyer, a merchandiser, a retail supplier or any retail executive with a keen interest in planning, you would be very welcome to join Oracle Retail's Planning Community.
    As part of our community you will be able to join our in-person and virtual events, and download topical white papers and best practice information specifically tailored to your area of interest. If anyone would like to register their interest in joining our community of retailers discussing planning, please contact me at [email protected] - Willy Rotstein, Oracle Retail

    Read the article

  • SQL SERVER – Updating Data in A Columnstore Index

    - by pinaldave
    So far I have written two articles on Columnstore indexes, and both of them drew a very interesting readership. In fact, just recently I got a question on my previous article on the Columnstore index. Read the following two articles to get familiar with the Columnstore index; they give the context for the question a reader asked: SQL SERVER – Fundamentals of Columnstore Index and SQL SERVER – How to Ignore Columnstore Index Usage in Query. Here is the reader's question: "When I tried to update my table after creating the Columnstore index, it gives me an error. What should I do?"

    When a Columnstore index is created on a table, the table becomes read-only and does not allow any insert/update/delete. The basic understanding is that a Columnstore index is created on a table that is very large and holds lots of data; if a table is small enough, there is no need for one - a regular index should do. The reason the Columnstore index was needed is that the table was so big that retrieving the data was taking a really, really long time. Now, updating such a huge table is always a challenge by itself. If a Columnstore index exists on a table that needs to be updated, you should know that there are various ways to update it. The easiest way is to disable the index and re-enable it. Consider the following code:

    USE AdventureWorks
    GO
    -- Create new table
    CREATE TABLE [dbo].[MySalesOrderDetail](
        [SalesOrderID] [int] NOT NULL,
        [SalesOrderDetailID] [int] NOT NULL,
        [CarrierTrackingNumber] [nvarchar](25) NULL,
        [OrderQty] [smallint] NOT NULL,
        [ProductID] [int] NOT NULL,
        [SpecialOfferID] [int] NOT NULL,
        [UnitPrice] [money] NOT NULL,
        [UnitPriceDiscount] [money] NOT NULL,
        [LineTotal] [numeric](38, 6) NOT NULL,
        [rowguid] [uniqueidentifier] NOT NULL,
        [ModifiedDate] [datetime] NOT NULL
    ) ON [PRIMARY]
    GO
    -- Create clustered index
    CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
    ( [SalesOrderDetailID] )
    GO
    -- Create sample data
    -- WARNING: this query may run up to 2-10 minutes based on your system's resources
    INSERT INTO [dbo].[MySalesOrderDetail]
    SELECT S1.*
    FROM Sales.SalesOrderDetail S1
    GO 100
    -- Create Columnstore index
    CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
    ON [MySalesOrderDetail] (UnitPrice, OrderQty, ProductID)
    GO
    -- Attempt to update the table
    UPDATE [dbo].[MySalesOrderDetail]
    SET OrderQty = OrderQty + 1
    WHERE [SalesOrderID] = 43659
    GO
    /*
    It will throw the following error:
    Msg 35330, Level 15, State 1, Line 2
    UPDATE statement failed because data cannot be updated in a table with a columnstore index.
    Consider disabling the columnstore index before issuing the UPDATE statement,
    then rebuilding the columnstore index after UPDATE is complete.
    */

    A similar error also shows up for INSERT/DELETE. Here is the workaround: disable the Columnstore index, perform the update, then re-enable the Columnstore index:

    -- Disable the Columnstore index
    ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE
    GO
    -- Update the table
    UPDATE [dbo].[MySalesOrderDetail]
    SET OrderQty = OrderQty + 1
    WHERE [SalesOrderID] = 43659
    GO
    -- Rebuild the Columnstore index
    ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD
    GO

    This time no error is thrown and the update of the table goes through successfully.
    Let us clean up our tables using this code:

    -- Cleanup
    DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
    GO
    TRUNCATE TABLE dbo.MySalesOrderDetail
    GO
    DROP TABLE dbo.MySalesOrderDetail
    GO

    In the next post we will see how we can use a partition to update the Columnstore index. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Android - passing data between Activities

    - by Bill Osuch
    (To follow along with this, you should understand the basics of starting new activities: Link) The easiest way to pass data from one activity to another is to create your own custom Bundle and pass it to your new class. First, create two new activities called Search and SearchResults (make sure you add the second one you create to the AndroidManifest.xml file!), and create XML layout files for each. Search's file should look like this:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:orientation="vertical">
        <TextView
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:text="Name:"/>
        <EditText
            android:id="@+id/edittext"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"/>
        <TextView
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:text="ID Number:"/>
        <EditText
            android:id="@+id/edittext2"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"/>
        <Button
            android:id="@+id/btnSearch"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:text="Search" />
    </LinearLayout>

    and SearchResults' should look like this:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:orientation="vertical">
        <TextView
            android:id="@+id/txtName"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"/>
        <TextView
            android:id="@+id/txtState"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:text="No data"/>
    </LinearLayout>

    Next, we'll override the onCreate method of Search:

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.search);

        Button search = (Button) findViewById(R.id.btnSearch);
        search.setOnClickListener(new View.OnClickListener() {
            public void onClick(View view) {
                Intent intent = new Intent(Search.this, SearchResults.class);
                Bundle b = new Bundle();

                EditText txt1 = (EditText) findViewById(R.id.edittext);
                EditText txt2 = (EditText) findViewById(R.id.edittext2);

                b.putString("name", txt1.getText().toString());
                b.putInt("state", Integer.parseInt(txt2.getText().toString()));

                // Add the set of extended data to the intent and start it
                intent.putExtras(b);
                startActivity(intent);
            }
        });
    }

    This is very similar to the previous example, except here we're creating our own Bundle, adding some key/value pairs to it, and adding it to the intent.
    Now, to retrieve the data, we just need to grab the Bundle that was passed to the new Activity and extract our values from it:

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.search_results);

        Bundle b = getIntent().getExtras();
        int value = b.getInt("state", 0);
        String name = b.getString("name");

        TextView vw1 = (TextView) findViewById(R.id.txtName);
        TextView vw2 = (TextView) findViewById(R.id.txtState);

        vw1.setText("Name: " + name);
        vw2.setText("State: " + String.valueOf(value));
    }

    Read the article

  • How to store a shmup level?

    - by pek
    I am developing a 2D shmup (i.e. Aero Fighters) and I was wondering what the various ways to store a level are. Assuming that enemies are defined in their own XML file, how would you define when an enemy spawns in the level? Would it be based on time? Updates? Distance? Currently I do this based on "level time" (the amount of time the level has been running - pausing doesn't update the time). Here is an example (the serialization was done by XNA):

    <?xml version="1.0" encoding="utf-8"?>
    <XnaContent xmlns:level="pekalicious.xanor.XanorContentShared.content.level">
      <Asset Type="level:Level">
        <Enemies>
          <Enemy>
            <EnemyType>data/enemies/smallenemy</EnemyType>
            <SpawnTime>PT0S</SpawnTime>
            <NumberOfSpawns>60</NumberOfSpawns>
            <SpawnOffset>PT0.2S</SpawnOffset>
          </Enemy>
          <Enemy>
            <EnemyType>data/enemies/secondenemy</EnemyType>
            <SpawnTime>PT0S</SpawnTime>
            <NumberOfSpawns>10</NumberOfSpawns>
            <SpawnOffset>PT0.5S</SpawnOffset>
          </Enemy>
          <Enemy>
            <EnemyType>data/enemies/secondenemy</EnemyType>
            <SpawnTime>PT20S</SpawnTime>
            <NumberOfSpawns>10</NumberOfSpawns>
            <SpawnOffset>PT0.5S</SpawnOffset>
          </Enemy>
          <Enemy>
            <EnemyType>data/enemies/boss1</EnemyType>
            <SpawnTime>PT30S</SpawnTime>
            <NumberOfSpawns>1</NumberOfSpawns>
            <SpawnOffset>PT0S</SpawnOffset>
          </Enemy>
        </Enemies>
      </Asset>
    </XnaContent>

    Each Enemy element is basically a wave of a specific enemy type. The type is defined in EnemyType, while SpawnTime is the "level time" at which this wave should appear. NumberOfSpawns and SpawnOffset are the number of enemies that will show up and the time between each spawn, respectively. This could be a good idea or there could be better ones out there; I'm not sure, so I would like to see some opinions and ideas. I have two problems with this: spawning an enemy correctly and creating a level editor. The level editor is an entirely different problem (which I will probably post about in the future :P). As for spawning correctly, the problem lies in the fact that I have a variable update time, so I need to make sure I don't miss an enemy spawn because the spawn offset is too small or because the update took a little more time. I kinda fixed it for the most part, but it seems to me that the problem is with how I store the level. So, any ideas? Comments? Thank you in advance.
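
    A sketch of one way to make time-based spawning robust against a variable update interval (the names EnemyWave, Spawned and SpawnEnemy are placeholders, not from the post): compute each wave's next due time and keep spawning while the accumulated level time has passed it, so a long frame spawns everything it owes instead of skipping.

        // Called once per update with the accumulated "level time" in seconds.
        void UpdateSpawns(double levelTime)
        {
            foreach (EnemyWave wave in level.Enemies)
            {
                // Due time of this wave's next spawn.
                double due = wave.SpawnTime.TotalSeconds
                           + wave.Spawned * wave.SpawnOffset.TotalSeconds;

                // Catch up: a slow frame may owe more than one spawn.
                while (wave.Spawned < wave.NumberOfSpawns && levelTime >= due)
                {
                    SpawnEnemy(wave.EnemyType);
                    wave.Spawned++;
                    due += wave.SpawnOffset.TotalSeconds;
                }
            }
        }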

    Read the article

  • Achieve Named Criteria with multiple tables in EJB Data control

    - by Deepak Siddappa
    In EJB you create a named criteria using a sparse XML, and in the named criteria wizard only the attributes related to that particular entity are displayed, so you can filter results only on that particular entity bean. Take a scenario where we need to create a named criteria based on multiple tables using EJB. In BC4J we can achieve this by creating a view object based on multiple tables, so in this article we will try to achieve a named criteria based on multiple tables using EJB.

    Implementation steps:

    1. Create a Java EE web application with entities based on Departments and Employees, then create a session bean and a data control for the session bean.

    2. Create a Java bean named CustomBean and add the code below to the file. Here the bean takes three fields from each of the Departments and Employees tables:

    public class CustomBean {
        private BigDecimal departmentId;
        private String departmentName;
        private BigDecimal locationId;
        private BigDecimal employeeId;
        private String firstName;
        private String lastName;

        public CustomBean() {
            super();
        }

        public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; }
        public BigDecimal getDepartmentId() { return departmentId; }
        public void setDepartmentName(String departmentName) { this.departmentName = departmentName; }
        public String getDepartmentName() { return departmentName; }
        public void setLocationId(BigDecimal locationId) { this.locationId = locationId; }
        public BigDecimal getLocationId() { return locationId; }
        public void setEmployeeId(BigDecimal employeeId) { this.employeeId = employeeId; }
        public BigDecimal getEmployeeId() { return employeeId; }
        public void setFirstName(String firstName) { this.firstName = firstName; }
        public String getFirstName() { return firstName; }
        public void setLastName(String lastName) { this.lastName = lastName; }
        public String getLastName() { return lastName; }
    }

    3. Open the session EJB file, add the code below to the session bean, expose the method in the local/remote interface, and generate a data control for it. (Note: in the code below, "em" is an EntityManager.)

    public List<CustomBean> getCustomBeanFindAll() {
        String queryString =
            "select d.department_id, d.department_name, d.location_id, " +
            "e.employee_id, e.first_name, e.last_name " +
            "from departments d, employees e " +
            "where e.department_id = d.department_id";
        Query genericSearchQuery = em.createNativeQuery(queryString, "CustomQuery");
        List resultList = genericSearchQuery.getResultList();
        Iterator resultListIterator = resultList.iterator();
        List<CustomBean> customList = new ArrayList();
        while (resultListIterator.hasNext()) {
            Object col[] = (Object[]) resultListIterator.next();
            CustomBean custom = new CustomBean();
            custom.setDepartmentId((BigDecimal) col[0]);
            custom.setDepartmentName((String) col[1]);
            custom.setLocationId((BigDecimal) col[2]);
            custom.setEmployeeId((BigDecimal) col[3]);
            custom.setFirstName((String) col[4]);
            custom.setLastName((String) col[5]);
            customList.add(custom);
        }
        return customList;
    }

    4. Open the DataControls.dcx file and create a sparse XML for CustomBean. In the sparse XML navigate to the Named Criteria tab -> Bind Variable section and create two bind variables, deptId and fName. Then, under Named Criteria tab -> Named Criteria, create a named criteria and map the query attributes to the bind variables.

    5. In the ViewController create a .jspx page; from the Data Controls palette drop customBeanFindAll -> Named Criteria -> CustomBeanCriteria -> Query as an ADF Query Panel with Table.

    Run the .jspx page and enter values in the search form, with departmentId as 50 and firstName as "M". The named criteria will filter the query of the data source and display the matching rows.

    Read the article

  • Perl: comparing 2 data files as 2D arrays to find one-to-one matches [migrated]

    - by roman serpa
    I'm writing a program that uses combinations of variables (combiData.txt, 63 rows x a varying number of columns) to analyse a data table (j1j2_1.csv, 1000 rows x 19 columns), in order to count how many times each combination is repeated in the data table and which rows it comes from (for instance, tableData[row][4]). When I try to run it, I get the following messages:

    Use of uninitialized value $val in numeric eq (==) at rowInData.pl line 34.
    Use of reference "ARRAY(0x1a2eae4)" as array index at rowInData.pl line 56.
    Use of reference "ARRAY(0x1a1334c)" as array index at rowInData.pl line 56.
    Use of uninitialized value in subtraction (-) at rowInData.pl line 56.
    Modification of non-creatable array value attempted, subscript -1 at rowInData.pl line 56.
    nothing

    This is my code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $line_match;
    my $countTrue;

    open(FILE1, "<combiData.txt") or die "can't open file text1.txt\n";
    my @tableCombi;
    while (<FILE1>) {
        my @row = split(' ', $_);
        push(@tableCombi, \@row);
    }
    close FILE1 || die $!;

    open(FILE2, "<j1j2_1.csv") or die "can't open file text1.txt\n";
    my @tableData;
    while (<FILE2>) {
        my @row2 = split(/\s*,\s*/, $_);
        push(@tableData, \@row2);
    }
    close FILE2 || die $!;

    # Function that transforms a combiData.txt variable (position) to the real
    # value that I have to find in the data table.
    sub trueVal($) {
        my ($val) = $_[0];
        if    ($val == 7)  { return ('nonsynonymous_SNV'); }
        elsif ($val == 14) { return '1'; }
        elsif ($val == 15) { return '1'; }
        elsif ($val == 16) { return '1'; }
        elsif ($val == 17) { return '1'; }
        elsif ($val == 18) { return '1'; }
        elsif ($val == 19) { return '1'; }
        else  { print 'nothing'; }
    }

    # Function IntToStr (I'm not sure if it is necessary) that transforms values
    # to strings, to use 'eq' in the third loop for the array of combinations
    # compared with the data array.
    sub IntToStr {
        return "$_[0]";
    }

    for my $combi (@tableCombi) {
        $line_match = 0;
        for my $sheetData (@tableData) {
            $countTrue = 0;
            for my $cell (@$combi) {
                #my $temp = \$tableCombi[$combi][$cell];
                #if ( trueVal($tableCombi[$combi][$cell]) eq $tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ] ) {
                #if ( IntToStr(trueVal($$temp)) eq IntToStr( $tableData[$sheetData][ $$temp - 1 ] ) ) {
                if ( IntToStr(trueVal($tableCombi[$combi][$cell])) eq IntToStr($tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ]) ) {
                    $countTrue++;
                }
                if ($countTrue == @$combi) {
                    $line_match++;
                    #if ($line_match < 50) {
                    print $tableData[$sheetData][4] . " ";
                    #}
                }
            }
        }
        print $line_match . " \n";
    }

    Read the article

  • Non use of persisted data

    - by Dave Ballantyne
    Working at a client site (that in itself is good to say), I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. While optimizing a stored procedure, I found a piece of code similar to:

    SELECT BillToAddressID,
           rowguid,
           dbo.udfCleanGuid(rowguid)
    FROM sales.salesorderheader
    WHERE BillToAddressID = 985

    A lovely scalar UDF was being used - in actuality it was used as part of the WHERE clause, but it is simplified here. Normally I would use an inline table-valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case for a persisted column to improve performance. The supporting index was already defined as:

    CREATE INDEX idxBill ON sales.salesorderheader(BillToAddressID) INCLUDE (rowguid)

    and the function code is:

    CREATE FUNCTION udfCleanGuid(@GUID uniqueidentifier)
    RETURNS varchar(255)
    WITH SCHEMABINDING
    AS
    BEGIN
        DECLARE @RetStr varchar(255)
        SELECT @RetStr = CAST(@GUID AS varchar(255))
        SELECT @RetStr = REPLACE(@RetStr, '-', '')
        RETURN @RetStr
    END

    Executing the SELECT statement produced a plan with nothing surprising: a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column:

    ALTER TABLE sales.salesorderheader
    ADD CleanedGuid AS dbo.udfCleanGuid(rowguid) PERSISTED

    A subtle change to the SELECT statement...

    SELECT BillToAddressID, CleanedGuid
    FROM sales.salesorderheader
    WHERE BillToAddressID = 985

    ...and our new optimized plan looks not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen: the value was recalculated. Looking at the properties of the relevant compute scalar would confirm this, but a more graphic example shows up in the profiler SP:StatementCompleted event. Why did the recalculation happen? Remember the index definition: it has the original guid included to avoid the lookup. The optimizer knows this column will be passed into the UDF, ran through its logic, and decided that recalculating is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since they are invariably more expensive. Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option - there may be other code reliant on it. We are left with only one real option: add the persisted column into the index.

    DROP INDEX Sales.SalesOrderHeader.idxBill
    GO
    CREATE INDEX idxBill ON sales.salesorderheader(BillToAddressID) INCLUDE (rowguid, cleanedguid)

    Now if we repeat the statement:

    SELECT BillToAddressID, CleanedGuid
    FROM sales.salesorderheader
    WHERE BillToAddressID = 985

    we still have a compute scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

    Read the article

  • Is it possible to run Visual Studio Database Edition schema migrations from the command line?

    - by Damian Powell
    Visual Studio 2008 Database Edition (Data Dude) has the ability to perform schema comparisons between databases and generate a script which migrates from one database to the other. Is it possible to perform this comparison and generate the migration script from the command line? If so, what are the command line tools, and are the same tools used in equivalent versions of Visual Studio 2010?

    Read the article

  • Why "No android.content.SyncAdapter meta-data" when registering a sync-adapter?

    - by mobibob
    I am following the SampleSyncAdapter and upon startup, it appears that my SyncAdapter is not configured correctly. It reports an error trying to load its meta-data. How can I isolate the problem? You can see the other accounts in the system that register correctly. Logcat: 12-21 17:10:50.667 W/PackageManager( 121): Unable to load service info ResolveInfo{4605dcd0 com.myapp.syncadapter.MySyncAdapter p=0 o=0 m=0x108000} 12-21 17:10:50.667 W/PackageManager( 121): org.xmlpull.v1.XmlPullParserException: No android.content.SyncAdapter meta-data 12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.parseServiceInfo(RegisteredServicesCache.java:391) 12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.generateServicesMap(RegisteredServicesCache.java:260) 12-21 17:10:50.667 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache$1.onReceive(RegisteredServicesCache.java:110) 12-21 17:10:50.667 W/PackageManager( 121): at android.app.ActivityThread$PackageInfo$ReceiverDispatcher$Args.run(ActivityThread.java:892) 12-21 17:10:50.667 W/PackageManager( 121): at android.os.Handler.handleCallback(Handler.java:587) 12-21 17:10:50.667 W/PackageManager( 121): at android.os.Handler.dispatchMessage(Handler.java:92) 12-21 17:10:50.667 W/PackageManager( 121): at android.os.Looper.loop(Looper.java:123) 12-21 17:10:50.667 W/PackageManager( 121): at com.android.server.ServerThread.run(SystemServer.java:570) 12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.skype.contacts.sync, packageName=com.skype.raider 12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.twitter.android.auth.login, packageName=com.twitter.android 12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.example.android.samplesync, packageName=com.example.android.samplesync 12-21 17:10:50.747 W/PackageManager( 121): Unable to load service info ResolveInfo{460504b0 com.myapp.syncadapter.MySyncAdapter p=0 o=0 m=0x108000} 12-21 17:10:50.747 W/PackageManager( 121): org.xmlpull.v1.XmlPullParserException: No android.content.SyncAdapter meta-data 12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.parseServiceInfo(RegisteredServicesCache.java:391) 12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache.generateServicesMap(RegisteredServicesCache.java:260) 12-21 17:10:50.747 W/PackageManager( 121): at android.content.pm.RegisteredServicesCache$1.onReceive(RegisteredServicesCache.java:110) 12-21 17:10:50.747 W/PackageManager( 121): at android.app.ActivityThread$PackageInfo$ReceiverDispatcher$Args.run(ActivityThread.java:892) 12-21 17:10:50.747 W/PackageManager( 121): at android.os.Handler.handleCallback(Handler.java:587) 12-21 17:10:50.747 W/PackageManager( 121): at android.os.Handler.dispatchMessage(Handler.java:92) 12-21 17:10:50.747 W/PackageManager( 121): at android.os.Looper.loop(Looper.java:123) 12-21 17:10:50.747 W/PackageManager( 121): at com.android.server.ServerThread.run(SystemServer.java:570)
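
    For reference, the declaration this error points at is the <meta-data> element on the sync service in AndroidManifest.xml, plus the XML resource it references. The names below are illustrative, not taken from the failing app:

        <service android:name=".syncadapter.SyncService" android:exported="true">
            <intent-filter>
                <action android:name="android.content.SyncAdapter" />
            </intent-filter>
            <meta-data
                android:name="android.content.SyncAdapter"
                android:resource="@xml/syncadapter" />
        </service>

    and res/xml/syncadapter.xml:

        <sync-adapter xmlns:android="http://schemas.android.com/apk/res/android"
            android:contentAuthority="com.android.contacts"
            android:accountType="com.example.android.samplesync" />

    PackageManager throws "No android.content.SyncAdapter meta-data" when it resolves the service but cannot find (or parse) that meta-data entry, so the first things to check are that the <meta-data> element sits inside the <service> tag and that the referenced XML file exists.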

    Read the article

  • Failed to save to data store: The operation couldn’t be completed. (Cocoa error 133020.)

    - by poslinski.net
    I am working on a quite complex app with a huge sync procedure between the iPhone and a web server. I have no trouble adding records until I run the sync procedure in a separate thread, where it updates data on the server and sends it back to the iPhone. After this procedure, inserting new data causes an error such as this:

    2011-01-07 12:49:10.722 App[1987:207] Failed to save to data store: The operation couldn’t be completed. (Cocoa error 133020.)
    2011-01-07 12:49:10.724 App[1987:207] { conflictList = (
    "NSMergeConflict (0x5ac1ea0) for NSManagedObject (0x5a2d710) with objectID '0x5a27080 <x-coredata://E82E75ED-96DB-4CBF-9D15-9CC106AC0052/uzytkownicy/p10>' with oldVersion = 9 and newVersion = 21 and old object snapshot = {\n adres = \"<null>\";\n haslo = xxxxxxxxxxxxxxxxxxxxxx;\n \"id_uzytkownika\" = 3;\n imie = Jan;\n \"kod_jednorazowy\" = 0;\n komorka = \"<null>\";\n login = nowakjan;\n nazwisko = Nowak;\n pesel = 0;\n rodzaj = 2;\n \"stan_konta\" = 0;\n telefon = \"<null>\";\n \"uzytkownicy_uczniowie\" = \"<null>\";\n \"zmienna_losowa\" = 8G9e1;\n} and new cached row = {\n adres = \"<null>\";\n haslo = xxxxxxxxxxxxxxxxxxxxxx;\n \"id_uzytkownika\" = 3;\n imie = Jan;\n \"kod_jednorazowy\" = 0;\n komorka = \"<null>\";\n login = nowakjan;\n nazwisko = Nowak;\n pesel = 0;\n rodzaj = 2;\n \"stan_konta\" = 0;\n telefon = \"<null>\";\n \"uzytkownicy_uczniowie\" = \"<null>\";\n \"zmienna_losowa\" = 8G9e1;\n}",
    "NSMergeConflict (0xd266990) for NSManagedObject (0xcd05950) with objectID '0x5a453b0 <x-coredata://E82E75ED-96DB-4CBF-9D15-9CC106AC0052/uczniowie/p125>' with oldVersion = 5 and newVersion = 10 and old object snapshot = {\n adres = \"Warszawa; ul. Lwowska 32\";\n \"data_urodzenia\" = \"1997-02-01 23:00:00 +0000\";\n dysfunkcje = \"\";\n email = \"<null>\";\n frekwencja = 0;\n \"id_ucznia\" = 86;\n imie2 = Marian;\n \"imie_ucznia\" = \"S\\U0142awomir\";\n klasa = \"0x5a47820 <x-coredata://E82E75ED-96DB-4CBF-9D15-9CC106AC0052/zespoly/p9>\";\n komorka = \"<null>\";\n \"miejsce_urodzenia\" = Warszawa;\n \"nazwisko_ucznia\" = \"S\\U0142awek\";\n \"numer_ewidencyjny\" = 20;\n opiekun1 = \"Mariusz S\\U0142awek\";\n opiekun2 = \" \";\n pesel = 97020298919;\n plec = 1;\n telefon = 890000002;\n \"uzytkownicy_uczniowie\" = \"<null>\";\n \"web_klasa\" = 50;\n} and new cached row = {\n adres = \"Warszawa; ul. Lwowska 32\";\n \"data_urodzenia\" = \"1997-02-01 23:00:00 +0000\";\n dysfunkcje = \"\";\n email = \"<null>\";\n frekwencja = 0;\n \"id_ucznia\" = 86;\n imie2 = Marian;\n \"imie_ucznia\" = \"S\\U0142awomir\";\n klasa = \"0x5a8e7c0 <x-coredata://E82E75ED-96DB-4CBF-9D15-9CC106AC0052/zespoly/p9>\";\n komorka = \"<null>\";\n \"miejsce_urodzenia\" = Warszawa;\n \"nazwisko_ucznia\" = \"S\\U0142awek\";\n \"numer_ewidencyjny\" = 20;\n opiekun1 = \"Mariusz S\\U0142awek\";\n opiekun2 = \" \";\n pesel = 97020298919;\n plec = 1;\n telefon = 890000002;\n \"uzytkownicy_uczniowie\" = \"<null>\";\n \"web_klasa\" = 50;\n}",
    "NSMergeConflict (0xd2669b0) for NSManagedObject (0x5a44480) with objectID '0x5a47830 <x-coredata://E82E75ED-96DB-4CBF-9D15-9CC106AC0052/przedmioty/p12>' with oldVersion = 7 and newVersion = 15 and old object snapshot = {\n \"id_przedmiotu\" = 1;\n \"nazwa_przedmiotu\" = Historia;\n \"skrot_nazwy\" = Hist;\n} and new cached row = {\n \"id_przedmiotu\" = 1;\n \"nazwa_przedmiotu\" = Historia;\n \"skrot_nazwy\" = Hist;\n}"
    ); }

    I've been looking for a solution but without luck. Thank you in advance for any useful help.
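
    (Cocoa error 133020 is NSManagedObjectMergeError, an optimistic-locking failure: the background sync saved newer row versions, so the other context's snapshots are stale. One common remedy - sketched here as an assumption about this app, not a confirmed fix - is to give each NSManagedObjectContext an explicit merge policy so conflicting snapshots are resolved instead of failing the save:)

        // Let in-memory changes win over the stale persistent-store snapshot;
        // use NSMergeByPropertyStoreTrumpMergePolicy for the opposite bias.
        [managedObjectContext setMergePolicy:NSMergeByPropertyObjectTrumpMergePolicy];

    Observing NSManagedObjectContextDidSaveNotification from the background context and calling mergeChangesFromContextDidSaveNotification: on the main context is the other usual ingredient when two threads share one store.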

    Read the article

  • Setting up Eloquent relationships in Laravel for existing InnoDB relationships

    - by adam
    I have an initial migration that sets up two tables (users and projects) with an InnoDB relationship:

    $table->foreign('user_id')->references('id')->on('users');

    I have two Eloquent models set up, blank except for the relationship:

    return $this->has_many('Project');

    Do I definitely need to tell Eloquent about the relationship in both the models and the database? I'd assumed something as comprehensive as Laravel would infer it from the schema. Is there something I'm missing?
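
    For what it's worth, Eloquent declares relationships in the models and does not discover them from InnoDB foreign keys - the FK constraint only enforces integrity at the database level. A minimal sketch of both sides, in the Laravel 3-style syntax used above (file and class names assumed):

        // models/user.php
        class User extends Eloquent {
            public function projects() {
                return $this->has_many('Project');
            }
        }

        // models/project.php
        class Project extends Eloquent {
            public function user() {
                return $this->belongs_to('User');
            }
        }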

    Read the article

  • Upgrade TFS 2008 to 2010 on different server

    - by Chen
    Hi, I have been looking for a way to migrate and upgrade our TFS 2008 server to 2010, preferably without losing any data. I have been looking at the TFS Integration Platform (http://tfsintegration.codeplex.com/) and also the Visual Studio 2010 TFS Upgrade Guide (vs2010upgradeguide.codeplex.com). Looking at the document TFS Integration Platform - Migration Guidance.xps from the first link, it seems to suggest that I could preserve all the data by first migrating TFS 2008 from one server to the other and then upgrading TFS 2008 to 2010. Is this true? Thank you, Chen

    Read the article

  • large amount of data in many text files - how to process?

    - by Stephen
    Hi, I have large amounts of data (a few terabytes), and it keeps accumulating... It is contained in many tab-delimited flat text files (each about 30MB). Most of the task involves reading the data and aggregating (summing/averaging plus additional transformations) over observations/rows based on a series of predicate statements, and then saving the output as text, HDF5, or SQLite files, etc. I normally use R for such tasks, but I fear this may be a bit large. Some candidate solutions are to:
    1) write the whole thing in C (or Fortran);
    2) import the files (tables) into a relational database directly and then pull off chunks in R or Python (some of the transformations are not amenable to pure SQL solutions);
    3) write the whole thing in Python.
    Would (3) be a bad idea? I know you can wrap C routines in Python, but in this case, since there isn't anything computationally prohibitive (e.g., optimization routines that require many iterative calculations), I think I/O may be as much of a bottleneck as the computation itself. Do you have any recommendations on further considerations or suggestions? Thanks
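
    Regarding option (3): if the aggregation decomposes per file, a streaming pass keeps memory flat no matter how many terabytes accumulate. A minimal sketch - the grouping key, value column, and file layout are assumptions for illustration only:

        # Stream each 30MB tab-delimited file once, accumulating group sums/counts.
        import csv
        import glob
        from collections import defaultdict

        sums = defaultdict(float)
        counts = defaultdict(int)

        for path in glob.glob("data/*.txt"):
            with open(path) as f:
                for row in csv.reader(f, delimiter="\t"):
                    key = row[0]            # assumed grouping column
                    value = float(row[1])   # assumed numeric column
                    sums[key] += value
                    counts[key] += 1

        for key, total in sums.items():
            print(key, total / counts[key])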

    Read the article

  • iPhone SDK: Setting a relationship property object using Core Data?

    - by Harkonian
    I'm using Core Data in my app. I have two entities that are related: EntityA and EntityB. EntityA has a property of type "relationship" with EntityB. In addition, both of these entities are defined classes (not the default NSManagedObject). I'm inserting a new object into my data like this:

    EntityA *newEntityA = [NSEntityDescription insertNewObjectForEntityForName:@"EntityA"
                                                        inManagedObjectContext:self.managedObjectContext];
    newEntityA.name = @"some name";
    newEntityA.entityB.name = @"some other name";

    The problem is that entityB.name is null. Even if I add an NSLog() statement right after assigning the value, it is null. What is the proper way of setting the "name" property of EntityB when EntityB is a property of EntityA?
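
    A likely explanation (offered as a sketch, not a confirmed diagnosis): nothing ever creates the related EntityB object, and messaging nil in Objective-C is a silent no-op, so newEntityA.entityB.name = ... does nothing while entityB is nil. Creating and attaching the related object explicitly would look like:

        // Create the related object first, then hook it to the relationship.
        EntityB *b = [NSEntityDescription insertNewObjectForEntityForName:@"EntityB"
                                                   inManagedObjectContext:self.managedObjectContext];
        b.name = @"some other name";
        newEntityA.entityB = b;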

    Read the article

  • Sorting DB tables from least dependent to most dependent

    - by Sergey Mikhanov
    Hi community, I'm performing a data migration and the database I'm using only allows exporting and importing each table separately. In such a setup importing becomes a problem, since the order in which tables are imported matters (you have to import referenced tables before referencing ones). Is there any external tool that will list the database tables sorted from least dependent to most dependent? Thanks in advance.
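
    What this amounts to is a topological sort of the foreign-key graph, so any script that can read the FK metadata can emit the order. A minimal sketch (Kahn's algorithm; the dependency map is assumed to have been extracted from the catalog already):

        # deps maps each table to the set of tables it references.
        def import_order(deps):
            deps = {t: set(d) for t, d in deps.items()}
            order = []
            while deps:
                ready = [t for t, d in deps.items() if not d]
                if not ready:
                    raise ValueError("circular foreign-key reference")
                for t in sorted(ready):
                    order.append(t)
                    del deps[t]
                for d in deps.values():
                    d.difference_update(ready)
            return order

        print(import_order({"projects": {"users"}, "users": set()}))
        # ['users', 'projects'] - import users first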

    Read the article

  • How to replace the domain name in a Wordpress database?

    - by Cristian
    I have a WordPress database which was installed in a development environment... thus, all references to the site itself use a fixed IP address (say 192.168.16.2). Now, I have to migrate that database to a new WordPress installation on a hosting provider. The problem is that the SQL dump contains a lot of references to the IP address, and I have to replace them with: my_domain.com. I could use sed or some other command to change that from the command line; the problem is that a lot of the configuration data is stored in serialized form (strictly speaking, PHP-serialized data rather than JSON). As you know, those serialized arrays use things like s:4: to record how many characters an element has, and thus, if I just replace the IP with the domain name, the configuration entries will get corrupted. I used an app for Windows some years ago that allows changing values in a database while taking care of the serialized arrays. Unfortunately, I forgot the name of the app... so the question is: do you know any app that allows me to do what I want?
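
    The reason a plain search-and-replace corrupts things is exactly those s:<length>: prefixes: any replacement that changes a string's length must re-serialize so the lengths are rebuilt. A minimal sketch of that idea in PHP (a hypothetical helper, not the forgotten Windows app):

        <?php
        // Replace $old with $new inside a value that may be PHP-serialized.
        function deep_replace($old, $new, $data) {
            if (is_string($data)) return str_replace($old, $new, $data);
            if (is_array($data)) {
                foreach ($data as $k => $v) $data[$k] = deep_replace($old, $new, $v);
            }
            return $data;
        }

        function safe_replace($old, $new, $value) {
            $decoded = @unserialize($value);
            if ($decoded !== false || $value === serialize(false)) {
                // Serialized: edit as data, then re-serialize so s:N: lengths are correct.
                return serialize(deep_replace($old, $new, $decoded));
            }
            return str_replace($old, $new, $value);
        }

        echo safe_replace('192.168.16.2', 'my_domain.com',
                          'a:1:{s:4:"home";s:19:"http://192.168.16.2";}');
        // a:1:{s:4:"home";s:20:"http://my_domain.com";}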

    Read the article

  • Is there an existing solution to the multithreaded data structure problem?

    - by thr
    I've had the need for a multi-threaded data structure that supports these requirements:
    • Allows multiple concurrent readers and writers
    • Is sorted
    • Is easy to reason about
    Fulfilling multiple readers and one writer is a lot easier, but I really would want to allow multiple writers. I've been doing research into this area, and I'm aware of ConcurrentSkipList (by Lea, based on work by Fraser and Harris) as it's implemented in Java SE 6. I've also implemented my own version of a concurrent skip list based on "A Provably Correct Scalable Concurrent Skip List" by Herlihy, Lev, Luchangco and Shavit. These two implementations were developed by people that are light years smarter than me, but I still (somewhat ashamed, because it is amazing work) have to ask: are these the only two viable implementations of concurrent multi reader/writer data structures available today?
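
    For concreteness, the Java SE 6 implementation mentioned above ships as java.util.concurrent.ConcurrentSkipListMap (and ConcurrentSkipListSet); a tiny usage sketch:

        import java.util.concurrent.ConcurrentSkipListMap;

        public class SkipListDemo {
            public static void main(String[] args) {
                // Sorted, non-blocking, safe for many readers and writers at once.
                ConcurrentSkipListMap<Integer, String> map =
                        new ConcurrentSkipListMap<Integer, String>();
                map.put(3, "c");
                map.put(1, "a");
                map.put(2, "b");
                System.out.println(map.firstKey()); // 1 - iteration order is key-sorted
            }
        }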

    Read the article

  • How to auto-increment a reference number persistently when NSManagedObjects are created in Core Data

    - by KayKay
    In my application I am using Core Data to store information, and for saving this data to the server over the web I have to use MySQL. Basically, what I want is to keep track of the number of NSManagedObjects already created, and whenever I add a new NSManagedObject, to assign it - based on that count - an int value that will act as the primary key in MySQL. For example, if there are already 10 NSManagedObjects and I add a new one, it will be assigned "11" as its primary key. These values only ever increase, because objects are never deleted. My approach would be a static member in the application delegate whose initial value can be any integer, which is incremented by one (like auto-increment) every time a new NSManagedObject is created, and which is persistent. I am not clear on how to do this; please give me suggestions. Thanks in advance.
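
    One way to make the counter survive restarts without a separate static (a sketch; the entity and attribute names are placeholders): store the number as an integer attribute on the entity and fetch the current maximum when inserting.

        // Fetch the row with the highest primaryKey and add 1 to it.
        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        [request setEntity:[NSEntityDescription entityForName:@"MyEntity"
                                       inManagedObjectContext:context]];
        [request setSortDescriptors:[NSArray arrayWithObject:
            [[[NSSortDescriptor alloc] initWithKey:@"primaryKey"
                                         ascending:NO] autorelease]]];
        [request setFetchLimit:1];

        NSArray *result = [context executeFetchRequest:request error:NULL];
        NSInteger next = [result count] > 0
            ? [[[result objectAtIndex:0] valueForKey:@"primaryKey"] integerValue] + 1
            : 1;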

    Read the article

  • Properly binding data to a ComboBox and handling its events

    - by Wodzu
    Hi guys. I have a table in SQL Server which looks like this:

    ID  Code  Name     Surname
    1   MS    Mike     Smith
    2   JD    John     Doe
    3   UP    Unknown  Person

    and so on... Now I want to bind the data from this table to a ComboBox in such a way that the ComboBox displays the value from the Code column. I am doing the binding this way:

    SqlDataAdapter sqlAdapter = new SqlDataAdapter("SELECT * FROM dbo.Users ORDER BY Code", MainConnection);
    sqlAdapter.Fill(dsUsers, "Users");
    cbxUsers.DataSource = dsUsers.Tables["Users"];
    cmUsers = (CurrencyManager)cbxUsers.BindingContext[dsUsers.Tables["Users"]];
    cbxUsers.DisplayMember = "Code";

    And this code seems to work: I can scroll through the list of codes, and I can also start to type a code by hand and the ComboBox will autocomplete it for me. However, I wanted to put a label above the ComboBox to display the name and surname of the currently selected user code. My line of thought was: "So, I need to find an event which fires after the code changes in the ComboBox, and in that event I will get the current DataRow..." I browsed through the events of the ComboBox and tried many of them, but without success. For example:

    private void cbxUsers_SelectionChangeCommitted(object sender, EventArgs e)
    {
        if (cmUsers != null)
        {
            DataRowView drvCurrentRowView = (DataRowView)cmUsers.Current;
            DataRow drCurrentRow = drvCurrentRowView.Row;
            lblNameSurname.Text = Convert.ToString(drCurrentRow["Name"]) + " "
                                + Convert.ToString(drCurrentRow["Surname"]);
        }
    }

    This gives me strange results. Firstly, when I scroll with the mouse wheel it doesn't return the row which I am expecting to get: for example, on JD it shows me "Mike Smith", on MS it shows me "John Doe", and on UP it shows me "Mike Smith" again! The other problem is that when I start to type in the ComboBox and press Enter, the event doesn't trigger. However, everything works as expected when I bind data to lblNameSurname.Text this way:

    lblNameSurname.DataBindings.Add("Text", dsUsers.Tables["Users"], "Name");

    The problem here is that I can bind only one column, and I want to have two. I don't want to use two labels for it (one to display the name and the other to display the surname). So, what is the solution to my problem? Also, I have one question related to data selection in the ComboBox. Right now, when I type something in the ComboBox, it lets me type letters that do not lead to an existing entry. For example, I start to type "J" and instead of finishing with "D" so I would have "JD", I can type "Jsomerandomtexthere". The ComboBox allows that, but such an item does not exist in the list. In other words, I want the ComboBox to prevent the user from typing a code which is not in the list of codes. Thanks in advance for your time.
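
    On the two-column label question: one option (a sketch of the standard ADO.NET computed-column trick, not the only approach) is to add a DataColumn whose expression concatenates both names and bind the label to that single column - this keeps the single-binding approach that already works for you. A DropDownList style is the usual answer to the free-typing question:

        // Computed column: updates automatically as the bound position changes.
        dsUsers.Tables["Users"].Columns.Add("FullName", typeof(string), "Name + ' ' + Surname");
        lblNameSurname.DataBindings.Add("Text", dsUsers.Tables["Users"], "FullName");

        // Restrict input to items that exist in the list.
        cbxUsers.DropDownStyle = ComboBoxStyle.DropDownList;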

    Read the article
