Search Results

Search found 12215 results on 489 pages for 'identity column'.

Page 249/489 | < Previous Page | 245 246 247 248 249 250 251 252 253 254 255 256  | Next Page >

  • how to query an embedded entity by using a query builder

    - by user577719
    I've searched quite a while for an answer to this question. Here is the code in question:

        @Entity
        public class Person {
            @Id
            @GeneratedValue(strategy = GenerationType.IDENTITY)
            protected Integer id;

            @Column(nullable = true, length = 50)
            @Size(max = 50)
            private String name;

            @Embedded
            @Valid
            protected Adress adress;

            public void setId(Integer id) { this.id = id; }
            public Integer getId() { return this.id; }
            public void setName(String name) { this.name = name; }
            public String getName() { return this.name; }
            public void setAdress(Adress adress) { this.adress = adress; }
            public Adress getAdress() { return this.adress; }
        }

        @Embeddable
        public class Adress {
            @Column(nullable = false, length = 50)
            @Size(max = 50)
            @NotNull
            private String place;

            public void setPlace(String place) { this.place = place; }
            public String getPlace() { return this.place; }
        }

        public class PersonDaoJpa {
            public List<Person> findByPerson(final Person person) {
                CriteriaBuilder builder = this.entityManager.getCriteriaBuilder();
                CriteriaQuery<Person> query = builder.createQuery(Person.class);
                Root<Person> rootPerson = query.from(Person.class);
                List<Predicate> wherePredicates = new ArrayList<Predicate>();

                if (person.getName() != null) {
                    wherePredicates.add(
                        builder.like(builder.lower(rootPerson.<String>get("name")),
                                     person.getName().toLowerCase()));
                }

                Adress adress = person.getAdress();
                if (adress != null) {
                    if (adress.getPlace() != null) {
                        // this won't work
                        wherePredicates.add(
                            builder.like(builder.lower(rootPerson.<String>get("person.adress.place")),
                                         adress.getPlace().toLowerCase()));
                    }
                }

                Predicate whereClause = builder.and(wherePredicates.toArray(new Predicate[0]));
                query.where(whereClause);
                return this.entityManager.createQuery(query).getResultList();
            }
        }

    How can I access Adress.place through rootPerson? rootPerson.get("place") and rootPerson.get("adress.place") won't work...
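
    A likely fix (untested sketch): with the JPA Criteria API an embedded attribute is navigated as a path, one attribute per get() call, rather than as a dotted string. Something along these lines should work:

        Path<Adress> adressPath = rootPerson.get("adress");
        wherePredicates.add(
            builder.like(builder.lower(adressPath.<String>get("place")),
                         adress.getPlace().toLowerCase()));

    The equivalent inline form is rootPerson.get("adress").get("place"); only the dotted-string lookup is unsupported.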

    Read the article

  • The scroll viewer is not updating in silverlight

    - by Malcolm
    I have an image inside a ScrollViewer and a control for zooming the image. In the zoom event I change the scale of the image, as below:

        void zoomSlider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
        {
            scale.ScaleX = e.NewValue;
            scale.ScaleY = e.NewValue;
            // scroll is the name of the ScrollViewer
            scroll.UpdateLayout();
        }

    And the XAML below:

        <Grid x:Name="Preview" Grid.Column="1">
          <Border x:Name="OuterBorder" BorderThickness="1" BorderBrush="#A3A3A3">
            <Border x:Name="InnerBorder" BorderBrush="Transparent" Margin="2">
              <Grid Background="White">
                <Grid.ColumnDefinitions>
                  <ColumnDefinition Width="*"/>
                </Grid.ColumnDefinitions>
                <ScrollViewer x:Name="scroll" HorizontalScrollBarVisibility="Auto" Grid.Column="0"
                              VerticalScrollBarVisibility="Auto" HorizontalAlignment="Stretch"
                              VerticalAlignment="Stretch" Themes:ThemeManager.StyleKey="TreeScrollViewer">
                  <Image Source="../charge_chargeline.PNG">
                    <Image.RenderTransform>
                      <CompositeTransform x:Name="scale" />
                    </Image.RenderTransform>
                  </Image>
                </ScrollViewer>
                <Border HorizontalAlignment="Center" CornerRadius="0,0,2,2" Width="250" Height="24" VerticalAlignment="Top">
                  <Border.Background>
                    <LinearGradientBrush StartPoint="0,0" EndPoint="0,1">
                      <GradientStop Color="#CDD1D4" Offset="0.0"/>
                      <GradientStop Color="#C8CACD" Offset="1.0"/>
                    </LinearGradientBrush>
                  </Border.Background>
                  <ChargeEntry:Zoom x:Name="zoominout" />
                </Border>
              </Grid>
            </Border>
          </Border>
        </Grid>
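
    One hedged explanation for the ScrollViewer not reacting: a RenderTransform is applied after layout, so scaling the image never changes the size the ScrollViewer measures, and its extent and scrollbars stay the same. A common workaround (sketch only; it assumes the Image has x:Name="img" and that its natural size has been stored) is to change the element's layout size when zooming:

        void zoomSlider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
        {
            // Resizing the element itself makes the ScrollViewer re-measure its content,
            // so the scrollbars and extent follow the zoom level.
            img.Width = originalImageWidth * e.NewValue;
            img.Height = originalImageHeight * e.NewValue;
        }

    A LayoutTransformer from the Silverlight Toolkit is another way to get a layout-affecting scale.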

    Read the article

  • Validating a value for a DataColumn

    - by Richard Neil Ilagan
    Hello! I'm using a DataGrid with edit functionality in my project. It's handy, compared to having to edit its source data manually, but sadly that means I'll have to deal with validating user input a bit more. And my problem is basically just that. When I set my DataGrid to EDIT mode, modify the values and then set it to UPDATE, what is the best way to check whether a value I've entered is, in fact, compatible with the corresponding column's data type? i.e. (simple example):

        // assuming
        DataTable dt = new DataTable();
        dt.Columns.Add("aa", typeof(System.Int32));

        DataGrid dg = new DataGrid();
        dg.DataSource = dt;
        dg.DataBind();
        dg.UpdateCommand += dg_Update;

        // this is the update handler
        protected void dg_Update(object src, DataGridCommandEventArgs e)
        {
            string newValue = (someValueIEnteredInTextBox);
            // HOW DO I CHECK IF [newValue] IS COMPATIBLE WITH COLUMN "aa" ABOVE?
            dt.LoadDataRow(new object[] { newValue }, true);
        }

    Thanks guys. Any leads would be so much help.
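
    A hedged sketch of one way to test compatibility before loading the row: try converting the string to the column's declared DataType and treat a conversion failure as a validation error.

        // assumes: using System.Globalization;
        DataColumn col = dt.Columns["aa"];
        object typedValue = null;
        bool isValid = true;
        try
        {
            typedValue = Convert.ChangeType(newValue, col.DataType, CultureInfo.InvariantCulture);
        }
        catch (Exception)   // FormatException, InvalidCastException, OverflowException
        {
            isValid = false;    // surface a validation message instead of updating
        }

        if (isValid)
        {
            dt.LoadDataRow(new object[] { typedValue }, true);
        }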

    Read the article

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting some bigger project, the discussion arose how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push my opinion, which is that using an object relational mapper to the database can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-)

    However, I do not completely understand two of the main reasons for not using NHibernate or a similar project:

    1. One can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio.
    2. Debugging an ORM can be hard.

    So, of course I could just build my data access layer with a lot of SELECTs etc., but here I miss the advantage of automatic joins, lazy-loading proxy classes and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.)

    Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string's length to be automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection query based data access layer.

    Finally, are those arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database based enterprise application? Are there probably other arguments they/I might have missed?

    (I should probably add that I think this is like the first "big" .NET/C# based application which will require teamwork. Good practices, which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existent here up to now.)

    Read the article

  • How do I solve a datatable problem in JSF

    - by Nitesh Panchal
    I have a datatable on index.xhtml:

        <h:dataTable style="border: solid 2px black;"
                     value="#{IndexBean.bookList}" var="item"
                     binding="#{IndexBean.datatableBooks}">
            <h:column>
                <h:commandButton value="Edit" actionListener="#{IndexBean.editBook}">
                    <f:param name="index" value="#{IndexBean.datatableBooks.rowIndex}"/>
                </h:commandButton>
            </h:column>
        </h:dataTable>

    My bean:

        @ManagedBean(name="IndexBean")
        @ViewScoped
        public class IndexBean implements Serializable {
            private HtmlDataTable datatableBooks;

            public HtmlDataTable getDatatableBooks() {
                return datatableBooks;
            }

            public void setDatatableBooks(HtmlDataTable datatableBooks) {
                this.datatableBooks = datatableBooks;
            }

            public void editBook() throws IOException {
                int index = Integer.parseInt(FacesContext.getCurrentInstance()
                        .getExternalContext().getRequestParameterMap().get("index").toString());
                System.out.println(index);
            }
        }

    My problem is that I always get the same index in the server log even though I click different Edit buttons. (Imagine that there is a collection supplied to the datatable; I have not shown that in the bean.) If I change the scope from ViewScoped to RequestScoped it works fine. What can be the problem with @ViewScoped? Thanks in advance :)
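
    A hedged workaround sketch: instead of binding the table to the view-scoped bean and reading the row index from a request parameter, pass the row object itself to the bean, which sidesteps the index problem entirely (the Book type and selectedBook property are assumed additions):

        <h:commandButton value="Edit" actionListener="#{IndexBean.editBook}">
            <f:setPropertyActionListener target="#{IndexBean.selectedBook}" value="#{item}"/>
        </h:commandButton>

        // in the bean
        private Book selectedBook;
        public void setSelectedBook(Book selectedBook) { this.selectedBook = selectedBook; }

        public void editBook() {
            System.out.println(selectedBook);
        }

    Component bindings to view-scoped beans are a known source of odd behaviour in JSF 2.0, so dropping the binding attribute may be part of the fix as well.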

    Read the article

  • Multidimensional data structure?

    - by Austin Truong
    I need a multidimensional data structure with rows and columns, with these requirements:

    - Must be able to insert elements at any location in the data structure. Example: {A, B}: I want to insert C in between A and B, giving {A, C, B}.
    - Dynamic: I do not know the size of the data structure in advance.
    - Another example: I know the [row][col] of where I want to insert the element, e.g. insert("A", 1, 5), where "A" is the element to be inserted, 1 is the row, 5 is the column.

    EDIT: I want to be able to insert like this:

        static void Main(string[] args)
        {
            Program p = new Program();
            List<string> list = new List<string>();
            list.Insert(1, "HELLO");
            list.Insert(5, "RAWR");
            for (int i = 0; i < list.Count; i++)
            {
                Console.WriteLine(list[i]);
            }
            Console.ReadKey();
        }

    And of course this crashes with an out of bounds error. So in a sense I will have a user who will choose which ROW and COL to insert the element at.
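
    A hedged sketch of one way to get the "insert at arbitrary row/col" behaviour: a list of rows, where each row is padded out to the requested position before Insert is called, so out-of-range positions no longer throw.

        // Assumed helper, not an existing BCL API.
        static void InsertAt(List<List<string>> grid, string element, int row, int col)
        {
            while (grid.Count <= row)
                grid.Add(new List<string>());

            List<string> target = grid[row];
            while (target.Count < col)
                target.Add(null);           // pad empty cells up to the column

            target.Insert(col, element);    // shifts anything already at or after col
        }

    A Dictionary<int, Dictionary<int, string>> keyed by row and column is another option if the grid is sparse and shifting on insert is not wanted.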

    Read the article

  • creating arrays in for loops.... without creating an endless loop that ruins my day!

    - by Peter
    Hey guys, I'm starting with a CSV variable of column names. This is then exploded into an array, then counted and tossed into a for loop that is supposed to create another array. Every time I run it, it goes into an endless loop that just hammers away at my browser... until it dies. :( Here is the code:

        $columns = 'id, name, phone, blood_type';

        $column_array = explode(',', $columns);
        $column_length = count($column_array);

        // loop through the column length, create post vars and set default
        for ($i = 0; $i <= $column_length; $i++) {
            // create the array iSortCol_1 => $column_array[1]...
            $array[] = 'iSortCol_'.$i = $column_array[0];
        }

    What I would like to get out of all this is a new array that looks like so:

        $goal = array(
            "iSortCol_1" => "id",
            "iSortCol_2" => "name",
            "iSortCol_3" => "phone",
            "iSortCol_4" => "blood_type"
        );
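
    A hedged read of why this loops forever: the assignment inside the concatenation keeps reassigning $i, so 'iSortCol_'.$i = $column_array[0] sets $i to the string "id" on every pass, that string compares as 0 against $column_length, and the loop condition never fails (the <= would also overrun by one even without that). A minimal sketch of building the goal array, assuming the keys start at 1 as in the example:

        $column_array = array_map('trim', explode(',', $columns));
        $goal = array();
        foreach ($column_array as $i => $name) {
            $goal['iSortCol_' . ($i + 1)] = $name;  // $i is 0-based, keys are 1-based
        }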

    Read the article

  • Non standard interaction among two tables to avoid very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b determines ts univocally.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts values are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A, for each value of b. In other words, I would like to get a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So, my goal would be:

        C:
        ('a', 'x')    4
        ('a', 'y')    7
        ('a', 'z')    5
        ('b', 'x')    7
        ('b', 'z')    7
        ('c', 'y')    8

    I have some working code, but it is terribly slow:

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
            axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that now I have 1000 a's, assume the average number of b's per a to be constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
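
    A hedged sketch of one way to speed this up without a full merge: split B into per-group sorted arrays once, so each row lookup becomes a cheap dict access plus a searchsorted, instead of repeated .ix indexing.

        import numpy as np

        # one pass over B instead of one .ix lookup per row of A
        ts_by_a = {key: grp['ts'].values for key, grp in B.groupby(level=0)}

        def first_at_or_after(row):
            arr = ts_by_a[row['a']]
            # relies on the stated guarantee that a value >= row['ts'] always exists
            return arr[np.searchsorted(arr, row['ts'])]

        C = AA.copy()
        C['ts'] = C.apply(first_at_or_after, axis=1)
        C = C.set_index(['a', 'b'])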

    Read the article

  • StoreFilterField input doesn't react

    - by user1289877
    I'm trying to build a grid with built-in column filtering (using Sencha GXT). Here is my code:

        public Grid<Stock> createGrid() {
            // Columns definition
            ColumnConfig<Stock, String> nameCol =
                new ColumnConfig<Stock, String>(props.name(), 100, "Company");

            // Column model definition and creation
            List<ColumnConfig<Stock, ?>> cl = new ArrayList<ColumnConfig<Stock, ?>>();
            cl.add(nameCol);
            ColumnModel<Stock> cm = new ColumnModel<Stock>(cl);

            // Data populating
            ListStore<Stock> store = new ListStore<Stock>(props.key());
            store.addAll(TestData.getStocks());

            // Grid creation with data
            final Grid<Stock> grid = new Grid<Stock>(store, cm);
            grid.getView().setAutoExpandColumn(nameCol);
            grid.setBorders(false);
            grid.getView().setStripeRows(true);
            grid.getView().setColumnLines(true);

            // Filters definition
            StoreFilterField<Stock> filter = new StoreFilterField<Stock>() {
                @Override
                protected boolean doSelect(Store<Stock> store, Stock parent, Stock item, String filter) {
                    // Window.alert(String.valueOf("a"));
                    String name = item.getName();
                    name = name.toLowerCase();
                    if (name.startsWith(filter.toLowerCase())) {
                        return true;
                    }
                    return false;
                }
            };
            filter.bind(store);

            cm.addHeaderGroup(0, 0, new HeaderGroupConfig(filter, 1, 1));
            filter.focus();

            return grid;
        }

    My problem is that after I run this code I cannot type anything into the filter input. I'm using the test data and classes (Stock.java and StockProperties.java) from this example: http://sencha.com/examples-dev/#ExamplePlace:filtergrid I tried to put an alert in the doSelect method to check whether it was called, but it wasn't. Any idea will be welcome. Thanks.

    Read the article

  • ADO.NET DataTable/DataRow Thread Safety

    - by Allen E. Scharfenberg
    Introduction

    A user reported to me this morning that he was having an issue with inconsistent results (namely, column values sometimes coming out null when they should not be) in some parallel execution code that we provide as part of an internal framework. This code has worked fine in the past and has not been tampered with lately, but it got me thinking about the following snippet.

    Code Sample

        lock (ResultTable)
        {
            newRow = ResultTable.NewRow();
        }

        newRow["Key"] = currentKey;
        foreach (KeyValuePair<string, object> output in outputs)
        {
            object resultValue = output.Value;
            newRow[output.Name] = resultValue != null ? resultValue : DBNull.Value;
        }

        lock (ResultTable)
        {
            ResultTable.Rows.Add(newRow);
        }

    (No guarantees that that compiles; it was hand-edited to mask proprietary information.)

    Explanation

    We have this cascading type of locking code in other places in our system, and it works fine, but this is the first instance of cascading locking code that I have come across that interacts with ADO.NET. As we all know, members of framework objects are usually not thread safe (which is the case in this situation), but the cascading locking should ensure that we are not reading and writing to ResultTable.Rows concurrently. We are safe, right?

    Hypothesis

    Well, the cascading lock code does not ensure that we are not reading from or writing to ResultTable.Rows at the same time that we are assigning values to columns in the new row. What if ADO.NET uses some kind of buffer for assigning column values that is not thread safe--even when different object types are involved (DataTable vs. DataRow)? Has anyone run into anything like this before? I thought I would ask here at StackOverflow before beating my head against this for hours on end :)

    Conclusion

    Well, the consensus appears to be that changing the cascading lock to a full lock has resolved the issue. That is not the result that I expected, but the full lock version has not produced the issue after many, many, many tests. The lesson: be wary of cascading locks used on APIs that you do not control. Who knows what may be going on under the covers!
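
    For concreteness, a hedged sketch of the "full lock" variant that the conclusion describes: every interaction with the shared DataTable and the row it owns happens under one lock (the loop uses output.Key, since KeyValuePair has no Name property).

        lock (ResultTable)
        {
            DataRow newRow = ResultTable.NewRow();
            newRow["Key"] = currentKey;
            foreach (KeyValuePair<string, object> output in outputs)
            {
                newRow[output.Key] = output.Value ?? (object)DBNull.Value;
            }
            ResultTable.Rows.Add(newRow);
        }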

    Read the article

  • How to add an array value to the middle of an associative array?

    - by Citizen
    Let's say I have this array:

        $array = array('a'=>1, 'z'=>2, 'd'=>4);

    Later in the script, I want to add the value 'c'=>3 before 'z'. How can I do this?

    EDIT: Yes, the order is important. When I run a foreach() through the array, I do NOT want this newly added value added to the end of the array. I am getting this array from a mysql_fetch_assoc().

    EDIT 2: The keys I used above are placeholders. Using ksort() will not achieve what I want.

    EDIT 3: http://www.php.net/manual/en/function.array-splice.php#88896 accomplishes what I'm looking for, but I'm looking for something simpler.

    EDIT 4: Thanks for the downvotes. I gave feedback to your answers and you couldn't help, so you downvoted and requested to close the question because you didn't know the answer. Thanks.

    EDIT 5: Take a sample db table with about 30 columns. I get this data using mysql_fetch_assoc(). In this new array, after the columns 'pizza' and 'drink', I want to add a new column 'full_dinner' that combines the values of 'pizza' and 'drink', so that when I run a foreach() on the said array, 'full_dinner' comes directly after 'drink'.
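
    A hedged sketch of a simple rebuild that keeps the order and slips the new pair in before a chosen key:

        // Hypothetical helper, not a built-in.
        function insert_before(array $array, $before_key, $new_key, $new_value) {
            $result = array();
            foreach ($array as $key => $value) {
                if ($key === $before_key) {
                    $result[$new_key] = $new_value;
                }
                $result[$key] = $value;
            }
            return $result;
        }

        $array = insert_before($array, 'z', 'c', 3);
        // array('a'=>1, 'c'=>3, 'z'=>2, 'd'=>4)

    For "directly after 'drink'" (EDIT 5), the same idea applies with the new pair added right after the matching key instead of before it.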

    Read the article

  • excel graphs using perl

    - by user1822725
    I am facing a problem when I run the script; it gives an error like:

        Can't locate object method "add_chart" via package "Spreadsheet::WriteExcel" at chart_column.pl line 33.

    May I know what the problem is here? I am using perl, v5.8.5 built for x86_64-linux.

        #!/usr/bin/perl -w
        ###############################################################################
        #
        # A simple demo of Column charts in Spreadsheet::WriteExcel.
        #
        # reverse('©'), December 2009, John McNamara, [email protected]
        #
        use strict;
        use Spreadsheet::WriteExcel;

        my $workbook  = Spreadsheet::WriteExcel->new( 'chart_column.xls' );
        my $worksheet = $workbook->add_worksheet();
        my $bold      = $workbook->add_format( bold => 1 );

        # Add the worksheet data that the charts will refer to.
        my $headings = [ 'Category', 'Values 1', 'Values 2' ];
        my $data = [
            [ 2, 3, 4, 5, 6, 7 ],
            [ 1, 4, 5, 2, 1, 5 ],
            [ 3, 6, 7, 5, 4, 3 ],
        ];

        $worksheet->write( 'A1', $headings, $bold );
        $worksheet->write( 'A2', $data );

        ###############################################################################
        #
        # Example 1. A minimal chart.
        #
        my $chart1 = $workbook->add_chart( type => 'column' );

        # Add values only. Use the default categories.
        $chart1->add_series( values => '=Sheet1!$B$2:$B$7' );

        # Insert the chart into the main worksheet.
        $worksheet->insert_chart( 'E2', $chart1 );
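
    A hedged guess at the cause: add_chart() only exists in relatively recent releases of Spreadsheet::WriteExcel (the demo itself is dated December 2009), so an older module installed alongside that Perl 5.8.5 build would produce exactly this "Can't locate object method" error. A quick check and upgrade, assuming CPAN access:

        # print the installed module version
        perl -MSpreadsheet::WriteExcel -e 'print "$Spreadsheet::WriteExcel::VERSION\n"'

        # upgrade if it predates chart support
        cpan Spreadsheet::WriteExcel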

    Read the article

  • Best way to migrate export/import from SQL Server to oracle

    - by matao
    Hi guys! I'm faced with needing access for reporting to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle and I'd like some advice on the best way to go about it.

    The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side as the firewall is in the way. The data needs to be transformed in the process from a star schema to a de-normalised table ready for reporting.

    What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data from SQL Server into a flat file using the SQL Server equivalent of sqlplus as a scheduled task, dump it into a well-known location, then on the Oracle side have a cron job that copies down the file, loads it with SQL*Loader and rebuilds indexes etc.

    This is all doable, but very manual. Is there one tool, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love not to have to write the CSV dumping detail or the SQL*Loader script: just say "dump this view out to CSV" on one side, and on the other "truncate and insert into this table from CSV", and not worry about mapping column names and all the other arcane sqlldr voodoo... Best practices? Thoughts? Comments?

    Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column.

    Read the article

  • richfaces suggestionBox passing additional values to backing bean

    - by Martlark
    When using the RichFaces suggestionBox, how can you pass more than one id or value from the page with the text input to the suggestionBox backing bean, i.e. to show a list of suggested cities within a selected state? Here is my autocomplete method:

        public List<Suburb> autocomplete(Object suggest) {
            String pref = (String) suggest;
            ArrayList<Suburb> result = new ArrayList<Suburb>();
            Iterator<Suburb> iterator = getSuburbs().iterator();
            while (iterator.hasNext()) {
                Suburb elem = ((Suburb) iterator.next());
                if ((elem.getName() != null
                        && elem.getName().toLowerCase().indexOf(pref.toLowerCase()) == 0)
                        || "".equals(pref)) {
                    result.add(elem);
                }
            }
            return result;
        }

    As you can see, there is one value passed from the page, Object suggest, which is the text of the h:inputText (in the Facelets m:textFormRow):

        <m:textFormRow id="suburb" label="#{msgs.suburbPrompt}"
                       property="#{bean[dto].addressDTO.suburb}"
                       required="true" maxlength="100" size="30" />

        <rich:suggestionbox height="200" width="200" usingSuggestObjects="true"
                            suggestionAction="#{suburbsMBean.autocomplete}" var="suburb"
                            for="suburb" fetchValue="#{suburb.name}" id="suggestion">
            <h:column>
                <h:outputText value="#{suburb.name}" />
            </h:column>
        </rich:suggestionbox>

    Earlier in the page you can select a state, which I'd like to use to pare down the list of suburbs that the suggestion box displays.
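
    One hedged approach: keep the selected state in the backing bean (bound from the state selector's value, or pushed with a4j:support on its change event) and filter on it inside autocomplete, so nothing extra has to travel with the suggestion request. Sketch only; it assumes a getState() property on Suburb and a selectedState field on the bean:

        private String selectedState;  // bound to the state selector elsewhere on the page

        public List<Suburb> autocomplete(Object suggest) {
            String pref = ((String) suggest).toLowerCase();
            List<Suburb> result = new ArrayList<Suburb>();
            for (Suburb suburb : getSuburbs()) {
                boolean nameMatches = "".equals(pref)
                        || (suburb.getName() != null && suburb.getName().toLowerCase().startsWith(pref));
                boolean stateMatches = selectedState == null || selectedState.equals(suburb.getState());
                if (nameMatches && stateMatches) {
                    result.add(suburb);
                }
            }
            return result;
        }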

    Read the article

  • Compound Primary Key in Hibernate using Annotations

    - by Rich
    Hi, I have a table which uses two columns to represent its primary key: a transaction id and a sequence number. I tried what is recommended at http://docs.jboss.org/hibernate/stable/annotations/reference/en/html_single/#entity-mapping in section 2.2.3.2.2, but when I use the Hibernate session to commit this entity object, it leaves out the TXN_ID field in the insert statement and only includes the BA_SEQ field! What's going wrong? Here's the related code excerpt:

        @Id
        @Column(name="TXN_ID")
        private long txn_id;
        public long getTxnId() { return txn_id; }
        public void setTxnId(long t) { this.txn_id = t; }

        @Id
        @Column(name="BA_SEQ")
        private int seq;
        public int getSeq() { return seq; }
        public void setSeq(int s) { this.seq = s; }

    And here are some log statements to show what exactly happens to fail:

        In createKeepTxnId of DAO base class: about to commit Transaction :: txn_id->90625 seq->0
        ...<Snip>...
        Hibernate: insert into TBL (BA_ACCT_TXN_ID, BA_AUTH_SRVC_TXN_ID, BILL_SRVC_ID, BA_BILL_SRVC_TXN_ID,
            BA_CAUSE_TXN_ID, BA_CHANNEL, CUSTOMER_ID, BA_MERCHANT_FREETEXT, MERCHANT_ID, MERCHANT_PARENT_ID,
            MERCHANT_ROOT_ID, BA_MERCHANT_TXN_ID, BA_PRICE, BA_PRICE_CRNCY, BA_PROP_REQS, BA_PROP_VALS,
            BA_REFERENCE, RESERVED_1, RESERVED_2, RESERVED_3, SRVC_PROD_ID, BA_STATUS, BA_TAX_NAME,
            BA_TAX_RATE, BA_TIMESTAMP, BA_SEQ)
            values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        [WARN] util.JDBCExceptionReporter SQL Error: 1400, SQLState: 23000
        [ERROR] util.JDBCExceptionReporter ORA-01400: cannot insert NULL into ("SCHEMA"."TBL"."TXN_ID")

    The important thing to note is that I print out the entity object, which has a txn_id set, and the following insert statement does not include TXN_ID in the column list, so the NOT NULL table constraint rejects the query.
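
    A hedged sketch of the usual way to map a two-column key with annotations: either an @IdClass whose fields mirror the two @Id properties, or an @EmbeddedId. The @IdClass variant, with the entity and key class names invented for illustration:

        public class TxnKey implements Serializable {
            private long txn_id;
            private int seq;
            // a no-arg constructor plus equals() and hashCode() over both fields are required
        }

        @Entity
        @IdClass(TxnKey.class)
        public class SomeTxnEntity {
            @Id @Column(name = "TXN_ID") private long txn_id;
            @Id @Column(name = "BA_SEQ") private int seq;
            // ...
        }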

    Read the article

  • import csv file/excel into sql database asp.net

    - by kiev
    Hi everyone! I am starting a project with ASP.NET / Visual Studio 2008 / SQL Server 2000 (2005 in future) using C#. The tricky part for me is that the existing DB schema changes often, and the import files' columns will all have to be matched up with the existing DB schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the tables' schema with the column names I will use.)

    I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I explored the FileUpload .NET control, as well as some 3rd party upload controls such as SlickUpload to accomplish the upload, but the files uploaded should be < 500 MB.

    The next part is reading my CSV/Excel file and parsing it for display to the user so they can match it with our DB schema. I saw CsvReader and others, but Excel is more difficult since I will need to support different versions. Essentially, the user performing this import will insert and/or update several tables from this import file. There are other more advanced requirements like record matching and preview of the import records, but I wish to understand how to do this first.

    Update: I ended up using CsvReader from LumenWorks.Framework for reading the uploaded CSV files.
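
    A hedged sketch of reading an uploaded CSV with the LumenWorks CsvReader mentioned in the update (the file path and the schema-mapping step are placeholders):

        using System.IO;
        using LumenWorks.Framework.IO.Csv;

        using (var csv = new CsvReader(new StreamReader(uploadedFilePath), true))
        {
            string[] headers = csv.GetFieldHeaders();   // match these against the schema lookup table
            while (csv.ReadNextRecord())
            {
                for (int i = 0; i < csv.FieldCount; i++)
                {
                    string value = csv[i];
                    // map headers[i] to its destination column, then insert/update
                }
            }
        }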

    Read the article

  • Aligning decimal points in HTML

    - by ijw
    I have a table containing decimal numbers in one column. I'm looking to align them in a manner similar to a word processor's "decimal tab" feature, so that all the points sit on a vertical line. I have two possible solutions at the moment, but I'm hoping for something better...

    Solution 1: Split the numbers within the HTML, e.g.

        <td><div>1234</div><div class='dp'>.5</div></td>

    with

        .dp { width: 3em; }

    (Yes, this solution doesn't quite work as-is. The concept is, however, valid.)

    Solution 2: I found mention of

        <COL ALIGN="CHAR" CHAR=".">

    This is part of HTML4 according to the reference page, but it doesn't work in FF3.5, Safari 4 or IE7, which are the browsers I have to hand. It also has the problem that you can't pull out the numeric formatting to CSS (although, since it's affecting a whole column, I suppose that's not too surprising). Thus, anyone have a better idea?
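
    A hedged refinement of Solution 1 that does line the points up: make the integer and fraction parts fixed-width inline blocks aligned towards the decimal point (the widths here are assumptions, to be sized for the actual data):

        <td class="num"><span class="int">1234</span><span class="frac">.5</span></td>

        td.num       { white-space: nowrap; }
        td.num .int  { display: inline-block; width: 6em; text-align: right; }
        td.num .frac { display: inline-block; width: 3em; text-align: left; }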

    Read the article

  • MySQL Database is Indexed at Apache Solr, How to access it via URL

    - by Wasim
    data-config.xml:

        <dataConfig>
            <dataSource encoding="UTF-8" type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
                        url="jdbc:mysql://localhost:3306/somevisits" user="root" password=""/>
            <document name="somevisits">
                <entity name="login" query="select * from login">
                    <field column="sv_id" name="sv_id" />
                    <field column="sv_username" name="sv_username" />
                </entity>
            </document>
        </dataConfig>

    schema.xml:

        <?xml version="1.0" encoding="UTF-8" ?>
        <schema name="example" version="1.5">
            <fields>
                <field name="sv_id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
                <field name="username" type="string" indexed="true" stored="true" required="true"/>
                <field name="_version_" type="long" indexed="true" stored="true" multiValued="false"/>
                <field name="text" type="string" indexed="true" stored="false" multiValued="true"/>
            </fields>
            <uniqueKey>sv_id</uniqueKey>
            <types>
                <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
                <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
            </types>
        </schema>

    Solr successfully imported the MySQL database using the full import:

        http://[localSolr]:8983/solr/#/collection1/dataimport?command=full-import

    My question is: how do I access that imported MySQL data now?
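
    A hedged answer sketch: once the import has finished, the documents are served by the collection's select handler, so plain query URLs work, e.g.

        http://[localSolr]:8983/solr/collection1/select?q=*:*&wt=json
        http://[localSolr]:8983/solr/collection1/select?q=sv_id:123

    (Note that data-config.xml maps sv_username while schema.xml declares a field called username; those names likely have to agree before that column is searchable.)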

    Read the article

  • Performing a SVD on tweets. Memory problem

    - by plotti
    I have generated a huge CSV file as output from my POS tagging and stemming. It looks like this:

                     word1  word2  word3  ...  word14400
        person1      1      2      0           1
        person2      0      0      1           0
        ...
        person650

    It contains the word counts for each person; this gives me a characteristic vector for each person. I want to run an SVD on this beast, but it seems the matrix is too big to be held in memory to perform the operation.

    My question is: should I reduce the column count by removing words which have a column sum of, for example, 1, which means they have been used only once? Do I bias the data too much with that?

    I tried the RapidMiner approach of loading the CSV into a DB and then sequentially reading it in batches for processing, as RapidMiner proposes. But MySQL can't store that many columns in a table, and if I transpose the data and then re-transpose it on import, it also takes ages.

    So in general I am asking for advice on how to perform an SVD on such a corpus.
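
    A hedged sketch of one memory-friendly route, assuming Python with NumPy/SciPy is an option: keep the counts as a sparse matrix, since most of the 14,400 counts per person are zero, and compute a truncated SVD instead of a full one.

        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import svds

        # rows, cols, vals are assumed to be built while streaming the CSV,
        # so the dense 650 x 14400 matrix never has to exist in memory
        counts = sparse.csr_matrix((vals, (rows, cols)), shape=(650, 14400))

        # only the top k singular triplets; k is a tuning choice, not something from the question
        u, s, vt = svds(counts.asfptype(), k=100)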

    Read the article

  • Dynamically disable custom (VBA) Excel context menu buttons?

    - by Lopsided
    The Scenario

    Hi guys, I am about to add a few custom controls to the cell context menu in my Excel workbook using the instructions found on this MSDN page. The only problem I am having is that I need the items to only be enabled for a specific column/range of cells. I've looked around, and I've been unable to find any steps for this--there are some for VSTO development (written in C#), but that is not what I need. I plan to write this using the VBA IDE built into Office, and perhaps a bit of XML using the Custom UI Editor.

    The Question

    So basically, I'm looking for a way to run a function at the time the context menu is called (i.e., upon right-click) that validates the selection to make sure it is in the appropriate column. If it isn't, I would like my custom buttons to be greyed out.

    P.S. Please don't think I am asking you to write my code. Creating these buttons should be very simple, as I have created many before (albeit they were all Ribbon items), and I hope it is okay to ask for some quick assistance on this very specific issue. Thank you in advance!
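
    A hedged VBA sketch of the right-click approach (the Tag string and the target column are assumptions): re-evaluate the custom control's Enabled state in the workbook's BeforeRightClick event, based on whether the selection intersects the column of interest.

        ' In ThisWorkbook
        Private Sub Workbook_SheetBeforeRightClick(ByVal Sh As Object, ByVal Target As Range, Cancel As Boolean)
            Dim ctl As CommandBarControl
            For Each ctl In Application.CommandBars("Cell").Controls
                If ctl.Tag = "MyCustomButton" Then
                    ' enabled only when the right-clicked range touches column C
                    ctl.Enabled = Not Application.Intersect(Target, Sh.Columns("C")) Is Nothing
                End If
            Next ctl
        End Sub

    If the buttons are defined in context-menu XML via the Custom UI Editor instead, a getEnabled callback plus InvalidateControl serves the same purpose.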

    Read the article

  • Converting Single dimensional vector or array to two dimension in Matlab

    - by pac
    Well, I do not know if I used the exact term. I tried to find an answer on the net. Here is what I need. I have:

        a =
             1     4     7
             2     5     8
             3     6     9

    If I do a(4) the value is 4. So it is reading the first column top to bottom, then continuing to the next... I don't know why. However, what I need is to call it using two indices, as row and column: a(3,2) = 4, or even better, if I can call it in the following way: a{3}(2) = 4. What is this process really called (I want to learn) and how do I perform it in Matlab? I thought of a loop. Is there a built-in function? Thanks a lot.

    Check this:

        a =
            18    18    16    18    18    18    16     0     0     0
            16    16    18     0    18    16     0    18    18    16
            18     0    18    18     0    16     0     0     0    18
            18     0    18    18    16     0    16     0    18    18

        >> a(4)
        ans = 18
        >> a(5)
        ans = 18
        >> a(10)
        ans = 18

    I tried reshape; it is reshaping, not converting into 2 indices.
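
    A hedged sketch of the pieces involved: linear indexing walks down the columns, reshape turns a true vector into a rows-by-columns matrix, and ind2sub/sub2ind convert between a linear index and a (row, column) pair.

        v = [1 2 3 4 5 6 7 8 9];       % a real 1-D vector
        M = reshape(v, 3, 3);          % 3x3, filled column by column
        M(1, 2)                        % row 1, column 2 -> 4
        [r, c] = ind2sub(size(M), 4)   % linear index 4 corresponds to r = 1, c = 2

    For the a{3}(2) style of access, a cell array of columns (e.g. num2cell(M, 1)) would be needed; plain matrices only support the (row, column) form.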

    Read the article

  • Html.RadioButton group defaulting to 0

    - by awrigley
    Hi, a default value of 0 is creeping into a Surveys app I am developing, for no reason that I can see. The problem is as follows: I have a group of Html.RadioButtons that represent the possible values a user can choose to answer a survey question (1 == Not at all, 2 == A little, 3 == A lot). I have used a tinyint datatype, which does not allow null, to store the answer to each question. The view code looks like this:

        <ol class="SurveyQuestions">
        <% foreach (SurveyQuestion question in Model.Questions)
           {
               string col = question.QuestionColumn;
        %>
            <li><%= question.QuestionText %>
                <ul style="float:right;" class="MultiChoice">
                    <li><%= Html.RadioButton(col, "1") %></li>
                    <li><%= Html.RadioButton(col, "2") %></li>
                    <li><%= Html.RadioButton(col, "3") %></li>
                </ul>
                <%= Html.ValidationMessage(col, "*") %>
            </li>
        <% } %>
        </ol>

    [Note on the above code: each survey has about 70 questions, so I have put the question text in one table and store the results in a different table. I have put the Questions into my form view model (hence Model.Questions); the questions table has a field called QuestionColumn that allows me to link the answer table column up to the question, as shown above (<%= Html.RadioButton(col, "1") %>, etc.).]

    However, when the user DOESN'T answer the question, the value 0 is getting inserted into the database column. As a result, I don't get what I expect, i.e. a validation error. Nowhere have I stipulated a default value of 0 for the fields in the answers table. So what is happening? Any ideas?
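
    A hedged guess at the cause: when no radio button in a group is selected, the browser posts nothing for that field, so a non-nullable tinyint-backed property silently keeps its default value of 0 rather than producing a validation error. One sketch of a fix (property and key names are placeholders) is to make "no answer" representable and validate for it:

        // on the form view model: nullable, so an unanswered question stays null instead of 0
        public byte? Answer { get; set; }

        // in the controller action, before saving
        if (answer.Answer == null)
        {
            // same key that Html.ValidationMessage(col, "*") watches
            ModelState.AddModelError(questionColumn, "Please answer this question.");
        }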

    Read the article

  • how can i set up a uniqueness constraint in mysql for columns that can be null?

    - by user299689
    I know that in MySQL, UNIQUE constraints don't treat NULL values as equal. So if I have a unique constraint on ColumnX, then two separate rows can have a value of NULL for ColumnX without violating the constraint. How can I work around this? I can't just set the value to an arbitrary constant that I can flag, because ColumnX in my case is actually a foreign key to another table. What are my options here?

    Please note that this table also has an "id" column that is its primary key. Since I'm using Ruby on Rails, it's important to keep this id column as the primary key.

    Note 2: In reality, my unique key encompasses many columns, and some of them have to be null because they are foreign keys, and only one of them should be non-null. What I'm actually trying to do is to "simulate" a polymorphic relationship in a way that keeps referential integrity in the db, but using the technique outlined in the first part of the question asked here: http://stackoverflow.com/questions/922184/why-can-you-not-have-a-foreign-key-in-a-polymorphic-association

    Read the article

  • loop html table using jquery and for loop (not for each)

    - by Mandeep Singh
    I have an HTML table:

        <table id="mytab">
            <thead>
                <th>col1</th>
                <th>col2</th>
            </thead>
            <tbody>
                <tr>
                    <td>1</td>
                    <td>2</td>
                </tr>
                <tr>
                    <td>11</td>
                    <td>22</td>
                </tr>
            </tbody>
        </table>

    and I am writing the jQuery function:

        function LoopTable() {
            var $rows = $("#mytab tbody > tr");
            // here I have successfully found all the rows.
            // now I want to loop over the rows and find each column of the row
            for (var i = 0; i < $rows.length; i++) {
                // need something here to find the cell data
            }
        }

    What should I use to get the column values from each row?
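
    A hedged sketch of one way to read each cell inside that indexed for loop:

        function LoopTable() {
            var $rows = $("#mytab tbody > tr");
            for (var i = 0; i < $rows.length; i++) {
                // wrap the raw DOM row and look up its cells
                var $cells = $($rows[i]).find("td");
                for (var j = 0; j < $cells.length; j++) {
                    console.log("row " + i + ", col " + j + ": " + $($cells[j]).text());
                }
            }
        }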

    Read the article

  • How to create custom query for CollectionOfElements

    - by Shervin
    Hi. I have problems creating a custom query. These are the entities:

        @Entity
        public class ApplicationProcess {
            @CollectionOfElements
            private Set<Template> defaultTemplates;
            // more fields
        }

    And Template.java:

        @Embeddable
        @EqualsAndHashCode(exclude={"file", "selected", "used"})
        public class Template implements Comparable<Template> {

            @Setter
            private ApplicationProcess applicationProcess;

            @Setter
            private Boolean used = Boolean.valueOf(false);

            public Template() {
            }

            @Parent
            public ApplicationProcess getApplicationProcess() {
                return applicationProcess;
            }

            @Column(nullable = false)
            @NotNull
            public String getName() {
                return name;
            }

            @Column(nullable = true)
            public Boolean isUsed() {
                return used;
            }

            public int compareTo(Template o) {
                return getName().compareTo(o.getName());
            }
        }

    I want to create an update statement. I have tried these two:

        int v = entityManager.createQuery("update ApplicationProcess_defaultTemplates t set t.used = true "
                + "WHERE t.applicationProcess.id=:apId").setParameter("apId", ap.getId())
                .executeUpdate();

    which fails with:

        ApplicationProcess_defaultTemplates is not mapped [update ApplicationProcess_defaultTemplates t set t.used = true WHERE t.applicationProcess.id=:apId]

    And I have tried:

        int v = entityManager.createQuery("update Template t set t.used = true "
                + "WHERE t.applicationProcess.id=:apId").setParameter("apId", ap.getId())
                .executeUpdate();

    with the same kind of error:

        Template is not mapped [update Template t set t.used = true WHERE t.applicationProcess.id=:apId]

    Any ideas?

    UPDATE: I fixed it by creating a native query:

        int v = entityManager.createNativeQuery("update ApplicationProcess_defaultTemplates t set t.used=true where t.ApplicationProcess_id=:apId")
                .setParameter("apId", ap.getId()).executeUpdate();

    Read the article
