Search Results

Search found 42428 results on 1698 pages for 'database query'.


  • Android: forward search queries to one single activity that handles search

    - by Stefan Klumpp
    I have an activity handling search (ACTIVITY_1), which works perfectly when I use the search (via the SEARCH button on the phone) within/from this activity. However, when I use search from another activity (ACTIVITY_2..x) by implementing onNewIntent and forwarding the query string to my Search_Activity.class (ACTIVITY_1)

        @Override
        protected void onNewIntent(Intent intent) {
            Log.i(TAG, "onNewIntent()");
            if (Intent.ACTION_SEARCH.equals(intent.getAction())) {
                Log.i(TAG, "===== Intent: ACTION_SEARCH =====");
                Intent myIntent = new Intent(getBaseContext(), Search_Activity.class);
                myIntent.setAction(Intent.ACTION_SEARCH);
                myIntent.putExtra(SearchManager.QUERY, intent.getStringExtra(SearchManager.QUERY));
                startActivity(myIntent);
            }
        }

    it always pauses ACTIVITY_2 first and then goes to onCreate() of ACTIVITY_2. Why does it recreate my ACTIVITY_2 when it is already there, instead of going to onNewIntent directly? Is there another way I can forward search queries directly to ACTIVITY_1, for example via a setting in the Manifest.xml? Is it possible to forward all search queries automatically to ACTIVITY_1 without even implementing onNewIntent in all the other activities? Currently I have to put an <intent-filter> in every single activity to "activate" my custom search there and then forward the query, via onNewIntent (as shown above), to the activity that handles search.

        <activity android:name=".Another_Activity" android:theme="@style/MyTheme">
            <intent-filter>
                <action android:name="android.intent.action.SEARCH" />
                <category android:name="android.intent.category.DEFAULT" />
            </intent-filter>
            <meta-data android:name="android.app.searchable" android:resource="@xml/searchable" />
        </activity>

    Read the article

  • Inserting an image into sqlserver gives an "operand type clash"

    - by Termedi
    I'm trying to save an image in a SQL Server 2000 database. The data type of the column is image. Here is the code:

    Image upload:

        <?php
        include('config.php');
        if(is_uploaded_file($_FILES['userfile']['tmp_name'])) {
            $fileName = $_FILES['userfile']['name'];
            $tmpName  = $_FILES['userfile']['tmp_name'];
            $fileSize = $_FILES['userfile']['size'];
            $fileType = $_FILES['userfile']['type'];
            $size = filesize($tmpName);
            set_magic_quotes_runtime(0); // disable PHP's default escaping of special characters when reading external files
            $img_binaire = base64_encode(fread(fopen(str_replace("'","''",$tmpName), "r"), $size));
            $query = "INSERT INTO test_image (image_name, image_content, image_size) ".
                     "VALUES ('{$fileName}','{$img_binaire}', '{$size}')";
            odbc_exec($conn, $query) or die('Error, query failed');
            echo "<br>File $fileName uploaded<br>";
            echo "<br>File Size: $fileSize <br>";
        }
        ?>

    Image show:

        <?php
        include('config.php');
        $sql = "select * from test_image where id =2";
        $rsl = odbc_exec($conn, $sql);
        $image_info = odbc_fetch_array($rsl);
        //$count = sizeof($image_info['image_content']);
        //header('Accept-Ranges: bytes');
        //header('Content-Length: '.$image_info['image_size']);
        //header("Content-length: 17397");
        header('Content-Type: image/jpeg');
        echo base64_decode($image_info['image_content']);
        //echo bindec($image_info['image_content']);
        ?>

    It gives the following error:

        Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][ODBC SQL Server Driver][SQL Server]Operand type clash: text is incompatible with image, SQL state 22005 in SQLExecDirect in C:\xampp\htdocs\test\upload.php on line 25
        Error, query failed

    What do I need to do differently?
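    The clash happens because the base64 image content is concatenated into the SQL statement as a character literal, and SQL Server will not implicitly convert text to the image type; binding the raw bytes as a parameter sidesteps that conversion. Below is only a rough sketch of that idea, written in Python with pyodbc rather than the asker's PHP/ODBC setup; the connection details are placeholders and the table layout is taken from the question.

        import pyodbc

        # Placeholder connection string; adjust driver, server and credentials to your environment.
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=localhost;DATABASE=test;UID=sa;PWD=password")
        cursor = conn.cursor()

        with open("photo.jpg", "rb") as f:
            data = f.read()  # raw bytes; no base64 round-trip needed

        # The parameter markers let the driver send the bytes as binary data,
        # so there is no text -> image conversion for SQL Server to reject.
        cursor.execute(
            "INSERT INTO test_image (image_name, image_content, image_size) VALUES (?, ?, ?)",
            ("photo.jpg", pyodbc.Binary(data), len(data)),
        )
        conn.commit()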

    Read the article

  • How to avoid chaotic ASP.NET web application deployment?

    - by emzero
    Ok, so here's the thing. I'm developing an existing web application (it started out as an ASP classic app, so you can imagine :P) under ASP.NET 4.0 and SQL Server 2005. We are 4 developers using local instances of SQL Server 2005 Express, sharing the source code and a Visual Studio database project. This webapp has several "universes" (that's what we call them). Every universe has its own database (currently on the same server), but they all share the same schema (tables, sprocs, etc.) and the same source/site code. So deploying manually is really annoying, because I have to deploy the source code and then run the SQL scripts by hand on each database. I know that manual deployment can cause problems, so I'm looking for a way to automate it. We've recently created a Visual Studio database project to manage the schema and generate the diff-schema scripts against different targets. I have no idea how to put the pieces together. I would like to:

    - Have a way to make a "sync" deploy to a target server (thankfully I have full RDC access to the servers, so I can install things if required). By "sync" deploy I mean that I don't want to deploy the whole application every time, because it has lots of files; I just want to deploy the files that are new or have changed.
    - Generate diff-SQL update scripts for every database target and combine them into just one script. For this I should have a list of the database names somewhere.
    - Copy the site files and execute the generated SQL script in an easy and automated way.

    I've read about MSBuild, MS WebDeploy, NAnt, etc., but I don't really know where to start and I really want to get rid of this manual deploy. If there is a better and easier way of doing it than what I enumerated, I'll be pleased to read your opinions. I know this is not a very specific question, but I've googled a lot about it and I can't seem to figure out how to do it. I've never used any automation tool to deploy. Any help will be really appreciated. Thank you all, Regards

    Read the article

  • Self-referencing tables in Linq2Sql

    - by J-Man
    Hi, I've seen a lot of questions on self-referencing tables in Linq2Sql and how to eagerly load all child records for a particular root object. I've implemented a temporary solution by accessing all underlying properties, but as you can imagine this doesn't do the performance any good. The thing is, though, that all records are correlated with each other using a correlation GUID. Example below:

        RootElement   - Id: 1 - ParentId: null - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement1 - Id: 2 - ParentId: 1    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement2 - Id: 3 - ParentId: 2    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD
        ChildElement1 - Id: 4 - ParentId: 2    - CorrelationId: 4D68E512-4B55-44f4-BA5A-174B630A03DD

    In my case I do have access to the correlationId, so I can retrieve all of my records by performing the following query:

        from element in db.Elements
        where element.CorrelationId == '4D68E512-4B55-44f4-BA5A-174B630A03DD'
        select element;

    But, of course, I want these elements associated with each other when I execute this query:

        from element in db.Elements
        where element.CorrelationId == '4D68E512-4B55-44f4-BA5A-174B630A03DD' && element.ParentId == null
        select element;

    My question is: is it possible to use the results of the first query as some sort of 'caching mechanism' for the query where I get the root element? Thanks for the input. J.
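    Whatever the Linq2Sql-specific answer is, the underlying idea, one query on the correlation GUID and then associating parents and children in memory rather than issuing a query per level, can be sketched independently of LINQ. A rough illustration in Python; the row shape and field names are assumptions:

        from collections import defaultdict

        # Rows as they might come back from the single correlation-id query
        rows = [
            {"id": 1, "parent_id": None, "name": "RootElement"},
            {"id": 2, "parent_id": 1,    "name": "ChildElement1"},
            {"id": 3, "parent_id": 2,    "name": "ChildElement2"},
            {"id": 4, "parent_id": 2,    "name": "ChildElement1"},
        ]

        children = defaultdict(list)
        for row in rows:
            children[row["parent_id"]].append(row)  # group once, O(n)

        def attach(node):
            # Wire each node to its children without touching the database again
            node["children"] = [attach(child) for child in children[node["id"]]]
            return node

        roots = [attach(row) for row in children[None]]  # the ParentId == null elements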

    Read the article

  • Bypassing an ActiveX object when sending a request to a URL?

    - by Burak Dede
    Hi, I am trying to log in to a site with a request, using a username and password, and I can successfully log the user in. I then want to parse some HTML and send it to a database, but what the site returns is an ActiveX object that I cannot do anything with (by the way, I am using Java for this). What solution would you suggest for bypassing the ActiveX object? 1. The first idea that comes to mind is that I could write a C# application that uses the Internet Explorer DLL to log the user in, parse the HTML and send it to the database; then I could use the data from the database.

    Read the article

  • Sharing ASP.NET State databases between multiple apps

    - by MikeWyatt
    Is it better for a collection of ASP.NET web apps to share the same session database, or should each one have its own? If there is no significant difference, having a single database would be preferable due to easier maintenance. Background My team has an assortment of ASP.NET web apps, all written in either Monorail 1.1 or ASP.NET MVC 1.0. Each app currently uses a dedicated session state database. I'm working on adding a new site to that list, and am debating whether I should create another new session database, or just share an existing one with another app.

    Read the article

  • (Excel) VBA spin button that steps through date-time values in a SQL database

    - by Gulredy
    I have a SQL database table in MySQL with lots of rows holding varied date-time values. For example:

        2012-08-21 10:10:00  <-- around 12 rows with this date
        2012-08-21 15:31:00  <-- around 5 rows with this date
        2012-08-22 11:40:00  <-- around 10 rows with this date
        2012-08-22 12:17:00  <-- around 9 rows with this date
        2012-08-22 12:18:00  <-- around 7 rows with this date
        2012-08-25 07:21:00  <-- around 6 rows with this date

    If the user clicks the SpinButton1_SpinUp() or SpinButton1_SpinDown() button, it should do the following. SpinButton1_SpinUp() should filter out the data from the SQL table whose date is the next one after the date we are currently on. Example: we have currently selected 2012-08-21 15:31:00. The user hits the SpinUp button, and the program selects the rows whose date is the next higher value, in this case 2012-08-22 11:40:00. So when the user hits the SpinUp button, the data selected in the database changes from the rows with date 2012-08-21 15:31:00 to the rows with date 2012-08-22 11:40:00. SpinButton1_SpinDown() does exactly the reverse: when the user hits it, the selected data changes from the rows with date 2012-08-21 15:31:00 to the rows with date 2012-08-21 10:10:00. So the date we are currently on should probably be stored in a variable. But on a button hit, not just any higher or lower date should be selected, only the closest higher or the closest lower date. How can I do this? I hope I described my problem understandably. My native language is not English, so misunderstandings can occur; please ask if you don't understand something! Thank you for reading!
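    The stepping logic reduces to two queries: the smallest date-time greater than the current one (SpinUp) and the largest one smaller than it (SpinDown). A rough sketch of those queries follows, written in Python with MySQLdb just to keep the SQL readable; the table and column names, and the idea of re-querying the matching rows afterwards, are assumptions rather than part of the question:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
        cur = conn.cursor()
        current = "2012-08-21 15:31:00"  # the date-time the spin button is currently on

        # SpinUp: the closest later timestamp (None means we are already at the newest one)
        cur.execute("SELECT MIN(created_at) FROM readings WHERE created_at > %s", (current,))
        next_ts = cur.fetchone()[0]

        # SpinDown: the closest earlier timestamp
        cur.execute("SELECT MAX(created_at) FROM readings WHERE created_at < %s", (current,))
        prev_ts = cur.fetchone()[0]

        # Then select the rows for whichever timestamp was found and remember it as the new position
        if next_ts is not None:
            cur.execute("SELECT * FROM readings WHERE created_at = %s", (next_ts,))
            rows = cur.fetchall()
            current = str(next_ts)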

    Read the article

  • org.hibernate.hql.ast.QuerySyntaxException: TABLE NAME is not mapped

    - by Coronatus
    I have two models, Item and ShopSection. They have a many-to-many relationship. @Entity(name = "item") public class Item extends Model { @ManyToMany(cascade = CascadeType.PERSIST) public Set<ShopSection> sections; } @Entity(name = "shop_section") public class ShopSection extends Model { public List<Item> findActiveItems(int page, int length) { return Item.find("select distinct i from Item i join i.sections as s where s.id = ?", id).fetch(page, length); } } findActiveItems is meant to find items in a section, but I get this error: org.hibernate.hql.ast.QuerySyntaxException: Item is not mapped [select distinct i from Item i join i.sections as s where s.id = ?] at org.hibernate.hql.ast.util.SessionFactoryHelper.requireClassPersister(SessionFactoryHelper.java:180) at org.hibernate.hql.ast.tree.FromElementFactory.addFromElement(FromElementFactory.java:111) at org.hibernate.hql.ast.tree.FromClause.addFromElement(FromClause.java:93) at org.hibernate.hql.ast.HqlSqlWalker.createFromElement(HqlSqlWalker.java:322) at org.hibernate.hql.antlr.HqlSqlBaseWalker.fromElement(HqlSqlBaseWalker.java:3441) at org.hibernate.hql.antlr.HqlSqlBaseWalker.fromElementList(HqlSqlBaseWalker.java:3325) at org.hibernate.hql.antlr.HqlSqlBaseWalker.fromClause(HqlSqlBaseWalker.java:733) at org.hibernate.hql.antlr.HqlSqlBaseWalker.query(HqlSqlBaseWalker.java:584) at org.hibernate.hql.antlr.HqlSqlBaseWalker.selectStatement(HqlSqlBaseWalker.java:301) at org.hibernate.hql.antlr.HqlSqlBaseWalker.statement(HqlSqlBaseWalker.java:244) at org.hibernate.hql.ast.QueryTranslatorImpl.analyze(QueryTranslatorImpl.java:254) at org.hibernate.hql.ast.QueryTranslatorImpl.doCompile(QueryTranslatorImpl.java:185) at org.hibernate.hql.ast.QueryTranslatorImpl.compile(QueryTranslatorImpl.java:136) at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:101) at org.hibernate.engine.query.HQLQueryPlan.<init>(HQLQueryPlan.java:80) at org.hibernate.engine.query.QueryPlanCache.getHQLQueryPlan(QueryPlanCache.java:124) at org.hibernate.impl.AbstractSessionImpl.getHQLQueryPlan(AbstractSessionImpl.java:156) at org.hibernate.impl.AbstractSessionImpl.createQuery(AbstractSessionImpl.java:135) at org.hibernate.impl.SessionImpl.createQuery(SessionImpl.java:1770) at org.hibernate.ejb.AbstractEntityManagerImpl.createQuery(AbstractEntityManagerImpl.java:272) ... 8 more What am I doing wrong?

    Read the article

  • I have a problem with an include file

    - by user309381
    // this is initializer.php

        defined('DS')? null : define('DS',DIRECTORY_SEPARATOR);
        defined('SITE_ROOT')? null : define('SITE_ROOT',DS.'C:',DS.'wamp',DS.'www',DS.'photo_gallery');
        defined('LIB_PATH')?null:define('LIB_PATH',SITE_ROOT.DS.'includes');
        require_once(LIB_PATH.DS.'datainfo.php');
        require_once(LIB_PATH.DS.'function.php');
        require_once(LIB_PATH.DS.'session.php');
        require_once(LIB_PATH.DS.'database.php');
        require_once(LIB_PATH.DS.'user.php');

    // this is the other file where I include the PHP file, and where I get these errors:

        // ERROR: Use of undefined constant LIB_PATH - assumed 'LIB_PATH' in C:\wamp\www\photo_gallery\includes\database.php on
        // Notice: Use of undefined constant DS - assumed 'DS' in C:\wamp\www\photo_gallery\includes\database.php on

        include(LIB_PATH.DS."database.php")

    ?

    Read the article

  • SQL Server Multi-statement UDF - way to store data temporarily required

    - by Kharlos Dominguez
    Hello, I have a relatively complex query, with several self joins, which works on a rather large table. For that query to perform faster, I thus need to only work with a subset of the data. Said subset can range between 12,000 and 120,000 rows depending on the parameters passed. More details can be found here: http://stackoverflow.com/questions/3054843/sql-server-cte-referred-in-self-joins-slow As you can see, I was using a CTE to return the data subset before, which caused some performance problems as SQL Server was re-running the Select statement in the CTE for every join instead of simply running it once and reusing its data set. The alternative, using temporary tables, worked much faster (while testing the query in a separate window outside the UDF body). However, when I tried to implement this in a multi-statement UDF, I was harshly reminded by SQL Server that multi-statement UDFs do not support temporary tables for some reason... UDFs do allow table variables however, so I tried that, but the performance is absolutely horrible as it takes 1m40 for my query to complete, whereas the CTE version only took 40 minutes. I believe the table variables are slow for reasons listed in this thread: http://stackoverflow.com/questions/1643687/table-variable-poor-performance-on-insert-in-sql-server-stored-procedure The temporary table version takes around 1 second, but I can't make it into a function due to the SQL Server restrictions, and I have to return a table back to the caller. Considering that CTEs and table variables are both too slow, and that temporary tables are rejected in UDFs, what are my options for my UDF to perform quickly? Thanks a lot in advance.

    Read the article

  • environment change in rake task

    - by Mellon
    I am developing a Rails v2.3 app with a MySQL database and the mysql2 gem. I ran into a weird situation concerning changing the environment in a rake task. (All my settings and configuration for the environments and databases are correct; no problem there.) Here is my simple story: I have a rake task like the following:

        namespace :db do
          task :do_something => :environment do
            # 1. run under the 'development' environment
            my_helper.run_under_development_env

            # 2. change to the 'custom' environment
            RAILS_ENV='custom'
            Rake::Task['db:create']
            Rake::Task['db:migrate']

            # 3. change back to the 'development' environment
            RAILS_ENV='development'

            # 4. But it still runs in the 'custom' environment, why?
            my_helper.run_under_development_env
          end
        end

    The rake task is quite simple. What it does is:

    1. First, run a method from my_helper under the "development" environment.
    2. Then, change to the "custom" environment and run db:create and db:migrate. Up to this point everything is fine; the environment did change to "custom".
    3. Then, change back to the "development" environment.
    4. Run the helper method again under the "development" environment.

    But although I changed the environment back to "development" in step 3, the last method still runs in the "custom" environment. Why, and how do I get rid of this?

    --- P.S. ---
    I have also checked a post with a similar situation here, and tried to use the solution from it in step 2:

        ActiveRecord::Base.establish_connection('custom')
        Rake::Task['db:create']
        Rake::Task['db:migrate']

    to change the database connection instead of changing the environment, but db:create and db:migrate still run against the "development" database, although the linked post said they should run for the "custom" database... weird.

    Read the article

  • ado.net managing connections

    - by madlan
    Hi, I'm populating a listview with a list of databases on a selected SQL instance, then retrieving a value from each database (its internal product version; the column doesn't always exist). I'm calling the function below to populate the second column of the listview:

        item.SubItems.Add(DBVersionCheck(serverName, database.Name))

        Function DBVersionCheck(ByVal SelectedInstance As String, ByVal SelectedDatabase As String)
            Dim m_Connection As New SqlConnection("Server=" + SelectedInstance + ";User Id=sa;Password=password;Database=" + SelectedDatabase)
            Dim db_command As New SqlCommand("select Setting from SystemSettings where [Setting] = 'version'", m_Connection)
            Try
                m_Connection.Open()
                Return db_command.ExecuteScalar().trim
                m_Connection.Dispose()
            Catch ex As Exception
                'MessageBox.Show(ex.Message)
                Return "NA"
            Finally
                m_Connection.Dispose()
            End Try
        End Function

    This works fine except that it creates a connection to each database and leaves it open. My understanding is that Close()/Dispose() only releases the connection back to the pool in ADO rather than closing the actual connection to SQL. How would I close the actual connections after I've retrieved the value? Leaving these open will create hundreds of connections to databases that will probably not be used again for that session.

    Read the article

  • Are these tables too big for SQL Server or Oracle

    - by Jeffrey Cameron
    Hey all, I'm not much of a database guru so I would like some advice.

    Background: We have 4 tables that are currently stored in Sybase IQ. We don't currently have any choice over this; we're basically stuck with what someone else decided for us. Sybase IQ is a column-oriented database that is perfect for a data warehouse. Unfortunately, my project needs to do a lot of transactional updating (we're more of an operational database), so I'm looking for more mainstream alternatives.

    Question: Given these tables' dimensions, would anyone consider SQL Server or Oracle to be a viable alternative?

        Table 1 : 172 columns * 32 million rows
        Table 2 : 453 columns * 7 million rows
        Table 3 : 112 columns * 13 million rows
        Table 4 : 147 columns * 2.5 million rows

    Given the size of the data, what are the things I should be concerned about in terms of database choice, server configuration, memory, platform, etc.?

    Read the article

  • How to recalculate all-pairs shortest paths on-line when nodes are being removed?

    - by Pavel Shved
    Latest news about the underground bombing made me curious about the following problem. Assume we have a weighted undirected graph whose nodes are sometimes removed. The problem is to re-calculate the shortest paths between all pairs of nodes quickly after such removals. With a simple modification of the Floyd-Warshall algorithm we can calculate the shortest paths between all pairs. These paths may be stored in a table, where shortest[i][j] contains the index of the next node on the shortest path between i and j (or a NULL value if there's no path). The algorithm requires O(n³) time to build the table, and each query shortest(i,j) takes O(1). Unfortunately, we would have to re-run this algorithm after each removal. The other alternative is to run a graph search on each query. This way each removal takes zero time to update an auxiliary structure (because there is none), but each query takes O(E) time. What algorithm can be used to "balance" the query and update time for the all-pairs shortest-paths problem when nodes of the graph are being removed?
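    For reference, a small sketch of the table described above: Floyd-Warshall with a next-hop matrix for path reconstruction, written in Python. This is the O(n³) baseline that would have to be re-run after every removal, not an answer to the incremental-update question itself; the adjacency-matrix input format is an assumption.

        INF = float("inf")

        def floyd_warshall(w):
            """w[i][j] is the edge weight (INF if no edge, 0 on the diagonal)."""
            n = len(w)
            dist = [row[:] for row in w]
            next_hop = [[j if w[i][j] < INF else None for j in range(n)] for i in range(n)]
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        if dist[i][k] + dist[k][j] < dist[i][j]:
                            dist[i][j] = dist[i][k] + dist[k][j]
                            next_hop[i][j] = next_hop[i][k]
            return dist, next_hop

        def shortest_path(next_hop, i, j):
            # O(path length) per query, matching the shortest[i][j] table from the question
            if next_hop[i][j] is None:
                return None
            path = [i]
            while i != j:
                i = next_hop[i][j]
                path.append(i)
            return path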

    Read the article

  • SQL exception when transferring a project from USB to C:\

    - by jello
    I'm working on a C# Windows program with Visual Studio 2008. Usually I work from school, directly on my USB drive. But when I copy the folder to my hard drive at home, an SQL exception is unhandled whenever I try to write to the database. It is unhandled at the conn.Open(); line. Here is the unhandled exception:

        Database 'L:\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' already exists. Choose a different database name.
        Cannot attach the file 'C:\Documents and Settings\Administrator\My Documents\system\project\the_project\the_project\bin\Debug\PatientMonitoringDatabase.mdf' as database 'PatientMonitoringDatabase'.

    It's weird, because my connection string says |DataDirectory|, so it should work on any drive... Here is my connection string:

        string connStr = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " +
                         "Initial Catalog=PatientMonitoringDatabase; " +
                         "Integrated Security=True";

    Read the article

  • Run SQL scripts on windows mobile installer

    - by Guillermo Vasconcelos
    Hi, We are working on a Windows Mobile 6.5 application. The application is already installed in some devices, and we need to distribute a new version with changes in the database schema (we added a few tables). Is there a way to make a "patch" windows mobile installer that will replace the application and update the embedded SQL database with some scripts? In a normal windows installer we would create a custom action in the installation process to apply the changes in the database, but I'm not sure how to do that for Windows Mobile Thanks.

    Read the article

  • simpletest - Why does setReturnValue() seem to change behaviour depending on whether the test is run in isolation?

    - by JW
    I am using SimpleTest version 1.0.1 for a unit test. I create a new mock object within a test method and on it I do:

        $MockDbAdaptor->setReturnValue('query',1);

    Now, when I run this as a standalone unit test, my tested object is happy to see 1 returned when query() is called on the mock db adaptor. However, when this exact same test is run as part of my 'all_tests' TestSuite, the test fails. This happens because a call to the mock's query() method does not appear to return any value, thus causing my test subject to complain and trigger an unexpected exception that fails the test. So the behaviour of setReturnValue() seems to change depending on whether the test is run in isolation or not. I can get it to work in both standalone and TestSuite contexts by using this instead:

        $MockDbAdaptor->setReturnValueAt(0,'query',1);

    So my immediate problem can be fixed... but it feels like a hack. If I create a new mock within a test method, why is the setReturnValue() behaviour affected by the context in which the test class instance is run? It feels like a bug.

    Read the article

  • Mysqli results memory usage

    - by Poe
    Why is the memory consumption in this query continuing to rise as the internal pointer progresses through loop? How to make this more efficient and lean? $link = mysqli_connect(...); $result = mysqli_query($link,$query); // 403,268 rows in result set while ($row = mysqli_fetch_row($result)) { // print time, (get memory usage), -- row number } mysqli_free_result(); mysqli_close($link); /* 06:55:25 (1240336) -- Run query 06:55:26 (39958736) -- Query finished 06:55:26 (39958784) -- Begin loop 06:55:26 (39960688) -- Row 0 06:55:26 (45240712) -- Row 10000 06:55:26 (50520712) -- Row 20000 06:55:26 (55800712) -- Row 30000 06:55:26 (61080712) -- Row 40000 06:55:26 (66360712) -- Row 50000 06:55:26 (71640712) -- Row 60000 06:55:26 (76920712) -- Row 70000 06:55:26 (82200712) -- Row 80000 06:55:26 (87480712) -- Row 90000 06:55:26 (92760712) -- Row 100000 06:55:26 (98040712) -- Row 110000 06:55:26 (103320712) -- Row 120000 06:55:26 (108600712) -- Row 130000 06:55:26 (113880712) -- Row 140000 06:55:26 (119160712) -- Row 150000 06:55:26 (124440712) -- Row 160000 06:55:26 (129720712) -- Row 170000 06:55:27 (135000712) -- Row 180000 06:55:27 (140280712) -- Row 190000 06:55:27 (145560712) -- Row 200000 06:55:27 (150840712) -- Row 210000 06:55:27 (156120712) -- Row 220000 06:55:27 (161400712) -- Row 230000 06:55:27 (166680712) -- Row 240000 06:55:27 (171960712) -- Row 250000 06:55:27 (177240712) -- Row 260000 06:55:27 (182520712) -- Row 270000 06:55:27 (187800712) -- Row 280000 06:55:27 (193080712) -- Row 290000 06:55:27 (198360712) -- Row 300000 06:55:27 (203640712) -- Row 310000 06:55:27 (208920712) -- Row 320000 06:55:27 (214200712) -- Row 330000 06:55:27 (219480712) -- Row 340000 06:55:27 (224760712) -- Row 350000 06:55:27 (230040712) -- Row 360000 06:55:27 (235320712) -- Row 370000 06:55:27 (240600712) -- Row 380000 06:55:27 (245880712) -- Row 390000 06:55:27 (251160712) -- Row 400000 06:55:27 (252884360) -- End loop 06:55:27 (1241264) -- Free */
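    One common way to keep memory flat for a result set this large is an unbuffered (streaming) fetch, so rows are pulled from the server as the loop consumes them instead of being held client-side; in mysqli that is the MYSQLI_USE_RESULT mode of mysqli_query(). The same idea, sketched in Python with MySQLdb's server-side cursor (connection details and column names are assumptions):

        import MySQLdb
        import MySQLdb.cursors

        # SSCursor streams rows from the server instead of buffering the whole result client-side.
        conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb",
                               cursorclass=MySQLdb.cursors.SSCursor)
        cur = conn.cursor()
        cur.execute("SELECT id, payload FROM big_table")  # 400k+ rows

        for row in cur:
            pass  # handle each row here; memory stays roughly flat as rows are streamed

        cur.close()
        conn.close()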

    Read the article

  • Execute sybase stored proc from hibernate

    - by Padmanabh
    I am having issues with executing a simple Sybase stored proc from Hibernate. The procedure takes some input and returns one record. I tried the following tag in the Hibernate mapping file, with the Java code below it.

        <hibernate-mapping>
            <sql-query name="sybaseproc" callable="true">
                <return class="Myentity">
                    <return-property name="next" column="next"/>
                </return>
                { ? = call nextnum(?,?) }
            </sql-query>
        </hibernate-mapping>

    The Java code is as follows:

        Query q = session.getNamedQuery("sybaseproc");
        q.setString(0, "test");
        q.setInteger(1, new Integer(10));
        Myentity entity = (Myentity) q.uniqueResult();

    When I run my test, I get an error saying "Errors in Named Query sybaseproc" and the test does not run. Any help is appreciated. Thanks, Padmanabh

    Read the article

  • rich suggestions - why is the input null? (Seam framework)

    - by Cristian Boariu
    Hi, I'm trying to build a rich suggestion box and I do not understand WHY the input value is null... I mean, why the inputText value is not picked up when I enter something. The .xhtml code:

        <h:inputText value="#{suggestion.input}" id="text">
        </h:inputText>
        <rich:suggestionbox id="suggestionBoxId" for="text" tokens=",[]"
                            suggestionAction="#{suggestion.getSimilarSpacePaths()}" var="result"
                            fetchValue="#{result.path}" first="0" minChars="2"
                            nothingLabel="No similar space paths found" columnClasses="center">
            <h:column>
                <h:outputText value="#{result.path}" style="font-style:italic"/>
            </h:column>
        </rich:suggestionbox>

    and the action class:

        @Name("suggestion")
        @Scope(ScopeType.CONVERSATION)
        public class Suggestion {

            @In
            protected EntityManager entityManager;

            private String input;

            public String getInput() {
                return input;
            }

            public void setInput(final String input) {
                this.input = input;
            }

            public List<Space> getSimilarSpacePaths() {
                List<Space> suggestionsList = new ArrayList<Space>();
                if (!StringUtils.isEmpty(input) && !input.equals("/")) {
                    final Query query = entityManager.createNamedQuery("SpaceByPathLike");
                    query.setParameter("path", input + '%');
                    suggestionsList = (List<Space>) query.getResultList();
                }
                return suggestionsList;
            }
        }

    So, with input being null, suggestionsList is always empty... Why is the input's value not posted?

    Read the article

  • Handling national language prefix for checkconstraints

    - by Chris Chilvers
    I'm trying to create a check constraint such as CHECK Type IN (N'Create', N'Remove') for an enumeration's value. Sqlite complains about this syntax and only accepts CHECK Type IN ('Create', 'Remove'). The main database will be Sql Server 2005, but I use sqlite's in memory database for unit tests. Is there any way to get sqlite to recognise the national language (N) prefix? Alternatively, is there an easy way when using FluentNHibernate to adapt an nvarchar constant to match the database's dialect?

    Read the article

  • Convert or strip out "illegal" Unicode characters

    - by Oli
    I've got a database in MSSQL that I'm porting to SQLite/Django. I'm using pymssql to connect to the database and save a text field to the local SQLite database. However for some characters, it explodes. I get complaints like this: UnicodeDecodeError: 'ascii' codec can't decode byte 0x97 in position 1916: ordinal not in range(128) Is there some way I can convert the chars to proper unicode versions? Or strip them out?
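    Since the bytes come out of MSSQL, they are most likely in the database's legacy code page rather than ASCII; 0x97 is an em dash in cp1252, for example. Two usual options, sketched in Python (the cp1252 guess is an assumption about the source data):

        raw = b"text pulled via pymssql \x97 with legacy-codepage bytes"

        # Option 1: decode with the code page the data was actually stored in (assumed cp1252 here)
        text = raw.decode("cp1252")

        # Option 2: if the exact encoding is unknown, replace or drop anything that will not decode
        lossy = raw.decode("ascii", errors="replace")  # or errors="ignore" to strip such bytes out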

    Read the article

  • Using LinqExtender to make OData feed fails

    - by BurningIce
    A pretty simple question, has anyone here tried to make a OData feed based on a IQueryable created with LinqExtender? I have created a simple Linq-provider that supports Where, Select, OrderBy and Take and wanted to expose it as an OData Feed. I keep getting an error though, and the Exception is a NullReference with the following StackTrace at System.Data.Services.Serializers.Serializer.GetObjectKey(Object resource, IDataServiceProvider provider, String containerName) at System.Data.Services.Serializers.Serializer.GetUri(Object resource, IDataServiceProvider provider, ResourceContainer container, Uri absoluteServiceUri) at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, Type expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target) at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__0.MoveNext() at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer) at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved) at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved) at System.Data.Services.ResponseBodyWriter.Write(Stream stream) I've kinda narrowed it down to a issue where LinqExtender wraps every returned object, so that my object actually inherits itself - thats at least how it looks like in the debugger. These two queries are basicly the same. The first is the legacy-api where the OrderBy and Select is regular Linq to Objects. The second query is a "real" linq-provider made with LinqExtender. var db = CalendarDataProvider.GetCalendarEntriesByDate(DateTime.Now, DateTime.Now.AddMonths(1), Guid.Empty) .OrderBy(o => o.Title) .Select(o => new ODataCalendarEntry(o)); var query = new ODataCalendarEntryQuery() .Where(o => o.Start > DateTime.Now && o.End < DateTime.Now.AddMonths(1)) .OrderBy(o => o.Title); When returning db for the OData feed everything is fine, but returning query throws a NullRefenceException. I've tried all kind of tricks and even tried to project all the data into a new object like this, but still the same error return query.Select(o => new ODataCalendarEntry { Title = o.Title, Start = o.Start, End = o.End, Name = o.Name });

    Read the article

  • Django Threaded Commenting System

    - by Yasin Ozel
    (and sorry for my English) I am learning Python and Django. Now my challenge is developing a threaded generic comment system. There are two models, Post and Comment:

    - A Post can be commented on.
    - A Comment can be commented on (endless/threaded nesting).
    - There should not be an N+1 query problem in the system (no matter how many comments there are, the number of queries should not grow).

    My current models look like this:

        class Post(models.Model):
            title = models.CharField(max_length=100)
            content = models.TextField()
            child = generic.GenericRelation(
                'Comment',
                content_type_field='parent_content_type',
                object_id_field='parent_object_id'
            )

        class Comment(models.Model):
            content = models.TextField()
            child = generic.GenericRelation(
                'self',
                content_type_field='parent_content_type',
                object_id_field='parent_object_id'
            )
            parent_content_type = models.ForeignKey(ContentType)
            parent_object_id = models.PositiveIntegerField()
            parent = generic.GenericForeignKey(
                "parent_content_type", "parent_object_id")

    Are my models right? And how can I get all comments (with hierarchy) of a post, without an N+1 query problem?

    Note: I know mptt and other modules, but I want to learn this system.

    Edit: I ran "Post.objects.all().prefetch_related("child").get(pk=1)" and this gave me the post and its child comments. But when I want to get the child comments of a child comment, a new query runs. I can change the command to ...prefetch_related("child__child__child...")..., but then a new query still runs for every depth of the child-parent relationship. Is there anyone who has an idea about how to resolve this problem?
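    One way to satisfy the no-N+1 requirement regardless of depth is to fetch every comment of the thread with a single query and assemble the hierarchy in Python. The sketch below assumes a denormalised root_post foreign key on Comment so the whole thread can be selected at once; that field and the child_list attribute are assumptions, not part of the models above:

        from django.contrib.contenttypes.models import ContentType

        def build_comment_tree(post):
            # One query for the whole thread; root_post is the assumed denormalised FK on Comment.
            comments = list(Comment.objects.filter(root_post=post))

            by_parent = {}
            for c in comments:
                key = (c.parent_content_type_id, c.parent_object_id)
                by_parent.setdefault(key, []).append(c)

            def children_of(obj):
                ct = ContentType.objects.get_for_model(obj)  # cached by Django after the first call
                kids = by_parent.get((ct.id, obj.id), [])
                for kid in kids:
                    kid.child_list = children_of(kid)  # attach in memory, no further queries
                return kids

            post.child_list = children_of(post)
            return post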

    Read the article
