Search Results

Search found 37260 results on 1491 pages for 'command query responsibil'.

Page 380 of 1491

  • MySQL 1064 error, works in command line and phpMyAdmin; not in app

    - by hundleyj
    Here is my query:

        select * from
          (select *,
                  3956 * 2 * ASIN(SQRT(POWER(SIN(RADIANS(45.5200077 - lat)/ 2), 2)
                  + COS(RADIANS(45.5200077)) * COS(RADIANS(lat))
                  * POWER(SIN(RADIANS(-122.6942014 - lng)/2),2))) AS distance
           from stops
           order by distance, route asc) as p
        group by route, dir
        order by distance asc
        limit 10

    This works fine at the command line and in PHPMyAdmin. I'm using Dbslayer to connect to MySQL via my JavaScript backend, and the request is returning a 1064 error. Here is the encoded DBSlayer request string:

        http://localhost:9090/db?{%22SQL%22:%22select%20*%20from%20%28select%20*,%203956%20*%202%20*%20ASIN%28SQRT%28POWER%28SIN%28RADIANS%2845.5200077%20-%20lat%29/%202%29,%202%29%20+%20COS%28RADIANS%2845.5200077%29%29%20*%20COS%28RADIANS%28lat%29%29%20*%20POWER%28SIN%28RADIANS%28-122.6942014%20-%20lng%29/2%29,2%29%29%29%20AS%20distance%20from%20%60stops%60%20order%20by%20%60distance%60,%20%60route%60%20asc%29%20as%20p%20group%20by%20%60route%60,%20%60dir%60%20order%20by%20%60distance%60%20asc%20limit%2010%22}

    And the response:

        {"MYSQL_ERRNO" : 1064 , "MYSQL_ERROR" : "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(RADIANS(45.5200077)) * COS(RADIANS(lat)) * POWER(SIN(RADIANS(-122.6942014 - lng' at line 1" , "SERVER" : "trimet"}

    Thanks!
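
    One thing worth checking with a hand-built request string like the one above: a literal + in a URL query string is normally decoded as a space on the server side, so the addition operator inside the SQL can silently disappear before MySQL ever sees it. A minimal Python sketch (the DBSlayer endpoint and query are taken from the question; everything else is illustrative) that percent-encodes the whole SQL string, + included:

        from urllib.parse import quote
        import json

        sql = ("select * from (select *, 3956 * 2 * ASIN(SQRT(POWER(SIN(RADIANS(45.5200077 - lat)/2), 2) "
               "+ COS(RADIANS(45.5200077)) * COS(RADIANS(lat)) "
               "* POWER(SIN(RADIANS(-122.6942014 - lng)/2), 2))) AS distance "
               "from stops order by distance, route asc) as p "
               "group by route, dir order by distance asc limit 10")

        # quote() with safe='' percent-encodes every reserved character, including '+',
        # so the arithmetic operator survives URL decoding on the server.
        url = "http://localhost:9090/db?" + quote(json.dumps({"SQL": sql}), safe="")
        print(url)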

    Read the article

  • rookie MySQL question about paging; is one query enough?

    - by Camran
    I have managed to get paging to work, almost. I want to display to the user the total number of records found and the range currently displayed, e.g. "4000 found, displaying 0-100". I am testing this with a page size of 2 (because I don't have that many records, only about 20), so I am using LIMIT $start, $nr_results. Do I have to make two queries in order to display the results the way I want: one query fetching all records (and then mysql_num_rows to count them), and then the one with the LIMIT? I have this:

        mysql_num_rows($qry_result);
        $total_pages = ceil($num_total / $res_per_page); // $res_per_page == 2 and $num_total = 2
        if ($p - 10 < 1) {
            $pagemin = 1;
        } else {
            $pagemin = $p - 10;
        }
        if ($p + 10 > $total_pages) {
            $pagemax = $total_pages;
        } else {
            $pagemax = $p + 10;
        }

    Here is the query:

        SELECT mt.*, fordon.*, boende.*, elektronik.*, business.*, hem_inredning.*, hobby.*
        FROM classified mt
        LEFT JOIN fordon ON fordon.classified_id = mt.classified_id
        LEFT JOIN boende ON boende.classified_id = mt.classified_id
        LEFT JOIN elektronik ON elektronik.classified_id = mt.classified_id
        LEFT JOIN business ON business.classified_id = mt.classified_id
        LEFT JOIN hem_inredning ON hem_inredning.classified_id = mt.classified_id
        LEFT JOIN hobby ON hobby.classified_id = mt.classified_id
        ORDER BY modify_date DESC
        LIMIT 0, 2

    Thanks; if you need more input let me know. Basically the question is: do I have to make two queries?
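
    For what it's worth, the usual pattern is one cheap COUNT(*) query for the total plus the LIMIT query for the current page, rather than fetching everything just to count it. A rough Python sketch of that two-query approach (table and column names taken from the question; the pymysql driver and connection details are assumptions):

        import math
        import pymysql  # assumption: any DB-API driver works the same way

        conn = pymysql.connect(host="localhost", user="user", password="pw", db="site")
        cur = conn.cursor()

        res_per_page = 2
        page = 1

        # One lightweight query for the total number of records...
        cur.execute("SELECT COUNT(*) FROM classified")
        num_total = cur.fetchone()[0]
        total_pages = math.ceil(num_total / res_per_page)

        # ...and one LIMIT query for just the rows on the current page.
        start = (page - 1) * res_per_page
        cur.execute("SELECT * FROM classified ORDER BY modify_date DESC LIMIT %s, %s",
                    (start, res_per_page))
        rows = cur.fetchall()
        print(f"{num_total} found, displaying {start}-{start + len(rows)} ({total_pages} pages)")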

    Read the article

  • What are good strategies for organizing single class per query service layer?

    - by KallDrexx
    Right now my ASP.NET MVC application is structured as Controller - Services - Repositories. The services consist of aggregate-root classes that contain methods, where each method is a specific operation that gets performed, such as retrieving a list of projects, adding a new project, or searching for a project. The problem with this is that my service classes are becoming really fat, with a lot of methods. Right now I am separating the methods into categories with #region tags, but this is quickly getting out of control. I can definitely see it becoming hard to determine what functionality already exists and where modifications need to go. Since the methods in the service classes are isolated and don't really interact with each other, they could really be more standalone. After reading some articles, such as this one, I am thinking of following the single-query-per-class model, as it seems like a more organized solution: instead of trying to figure out which class and method you need to call to perform an operation, you just have to figure out the class. My only reservation with the single-query-per-class approach is that I need some way to organize the 50+ classes I will end up with. Does anyone have suggestions for strategies to organize this type of pattern?
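
    Not the only way to do it, but one common strategy for keeping 50+ single-purpose query classes manageable is to group them into one namespace (folder) per aggregate and give them a shared interface so they stay discoverable. A rough Python sketch of the shape; all names here are illustrative, not from the question:

        # queries/projects/list_projects.py, queries/projects/search_projects.py, ...
        # One package per aggregate, one class per operation.
        from abc import ABC, abstractmethod

        class Query(ABC):
            """Shared contract: callers only need to find the right class."""
            @abstractmethod
            def execute(self, **params):
                ...

        class ListProjects(Query):
            def __init__(self, repository):
                self._repository = repository

            def execute(self, **params):
                return self._repository.all()

        class SearchProjects(Query):
            def __init__(self, repository):
                self._repository = repository

            def execute(self, *, term, **params):
                return self._repository.search(term)

        # Usage: resolve the one class you need instead of hunting through a fat service.
        # results = SearchProjects(project_repository).execute(term="widgets")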

    Read the article

  • Adding defaults and indexes to a script/generate command in a Rails Template?

    - by charliepark
    I'm trying to set up a Rails Template that would allow for comprehensive set-up of a specific Rails app. Using Pratik Naik's overview (http://m.onkey.org/2008/12/4/rails-templates), I was able to set up a couple of scaffolds and models, with a line that looks something like this:

        generate("scaffold", "post", "title:string", "body:string")

    I'm now trying to add in Delayed Jobs, which normally has a migration file that looks like this:

        create_table :delayed_jobs, :force => true do |table|
          table.integer  :priority, :default => 0 # Allows some jobs to jump to the front of the queue
          table.integer  :attempts, :default => 0 # Provides for retries, but still fail eventually.
          table.text     :handler                 # YAML-encoded string of the object that will do work
          table.text     :last_error              # reason for last failure (See Note below)
          table.datetime :run_at                  # When to run. Could be Time.now for immediately, or sometime in the future.
          table.datetime :locked_at               # Set when a client is working on this object
          table.datetime :failed_at               # Set when all retries have failed (actually, by default, the record is deleted instead)
          table.string   :locked_by               # Who is working on this object (if locked)
          table.timestamps
        end

    So what I'm trying to do with the Rails template is add that :default => 0 into the master template file. I know that the rest of the template's command should look like this:

        generate("migration", "createDelayedJobs", "priority:integer", "attempts:integer", "handler:text", "last_error:text", "run_at:datetime", "locked_at:datetime", "failed_at:datetime", "locked_by:string")

    Where would I put (or, rather, what is the syntax to add) the :default values in that? And if I wanted to add an index, what's the best way to do that?

    Read the article

  • Inserting variables into a query string - it won't work!

    - by Jonesy
    Basically I have a query string that works fine when I hardcode the catalogue value; when I try adding it via a variable it just doesn't pick it up.

    This works:

        Dim WaspConnection As New SqlConnection("Data Source=JURA;Initial Catalog=WaspTrackAsset_NROI;User id=" & ConfigurationManager.AppSettings("WASPDBUserName") & ";Password='" & ConfigurationManager.AppSettings("WASPDBPassword").ToString & "';")

    This doesn't:

        Public Sub GetWASPAcr()
            connection.Open()
            Dim dt As New DataTable()
            Dim username As String = HttpContext.Current.User.Identity.Name
            Dim sqlCmd As New SqlCommand("SELECT WASPDatabase FROM dbo.aspnet_Users WHERE UserName = '" & username & "'", connection)
            Dim sqlDa As New SqlDataAdapter(sqlCmd)
            sqlDa.Fill(dt)
            If dt.Rows.Count > 0 Then
                For i As Integer = 0 To dt.Rows.Count - 1
                    If dt.Rows(i)("WASPDatabase") Is DBNull.Value Then
                        WASP = ""
                    Else
                        WASP = "WaspTrackAsset_" + dt.Rows(i)("WASPDatabase")
                    End If
                Next
            End If
            connection.Close()
        End Sub

        Dim WaspConnection As New SqlConnection("Data Source=JURA;Initial Catalog=" & WASP & ";User id=" & ConfigurationManager.AppSettings("WASPDBUserName") & ";Password='" & ConfigurationManager.AppSettings("WASPDBPassword").ToString & "';")

    When I debug, the catalog is empty in the query string, but the WASP variable holds the value "WaspTrackAsset_NROI". Any ideas why? Cheers, jonesy

    Read the article

  • SQL Server service broker reporting as off when I have written a query to turn it on

    - by dotnetdev
    I have made a small ASP.NET website. It uses SqlCacheDependency, but it throws this error:

        The SQL Server Service Broker for the current database is not enabled, and as a result query notifications are not supported. Please enable the Service Broker for this database if you wish to use notifications.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.InvalidOperationException: The SQL Server Service Broker for the current database is not enabled, and as a result query notifications are not supported. Please enable the Service Broker for this database if you wish to use notifications.

        Source Error:
        Line 12: System.Data.SqlClient.SqlDependency.Start(connString);

    This is the erroneous line in my global.asax. However, in SQL Server (2005) I enabled Service Broker like so (I connect and run the SQL Server service when I debug my site):

        ALTER DATABASE mynewdatabase SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE

    And this was successful. What am I missing? I am trying to use SQL cache dependency and have followed all the procedures. Thanks
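
    When chasing this kind of mismatch, it can help to confirm what the engine itself reports for the database, independently of the application. A small sketch; pyodbc and the connection string are assumptions, but is_broker_enabled is a standard sys.databases column:

        import pyodbc  # assumption: any SQL Server client library would do

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=.;DATABASE=master;Trusted_Connection=yes"
        )
        row = conn.cursor().execute(
            "SELECT name, is_broker_enabled FROM sys.databases WHERE name = ?",
            "mynewdatabase",
        ).fetchone()
        # 1 means the broker is on for that database; if the app's connection string
        # points at a different database, check that one instead.
        print(row.name, bool(row.is_broker_enabled))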

    Read the article

  • Specifying different initial values for fields in inherited models (django)

    - by Shawn Chin
    Question: What is the recommended way to specify an initial value for fields if one uses model inheritance and each child model needs to have different default values when rendering a ModelForm? Take for example the following models, where CompileCommand and TestCommand both need different initial values when rendered as a ModelForm.

        # ------ models.py
        class ShellCommand(models.Model):
            command = models.CharField(_("command"), max_length=100)
            arguments = models.CharField(_("arguments"), max_length=100)

        class CompileCommand(ShellCommand):
            # ... default command should be "make"

        class TestCommand(ShellCommand):
            # ... default: command = "make", arguments = "test"

    I am aware that one can use the initial={...} argument when instantiating the form; however, I would rather store the initial values within the context of the model (or at least within the associated ModelForm).

    My current approach: what I'm doing at the moment is storing an initial-value dict within Meta, and checking for it in my views.

        # ----- forms.py
        class CompileCommandForm(forms.ModelForm):
            class Meta:
                model = CompileCommand
                initial_values = {"command": "make"}

        class TestCommandForm(forms.ModelForm):
            class Meta:
                model = TestCommand
                initial_values = {"command": "make", "arguments": "test"}

        # ------ in views
        FORM_LOOKUP = {"compile": CompileCommandForm, "test": TestCommandForm}
        CmdForm = FORM_LOOKUP.get(command_type, None)
        # ...
        initial = getattr(CmdForm, "initial_values", {})
        form = CmdForm(initial=initial)

    This feels too much like a hack. I am eager for a more generic / better way to achieve this. Suggestions appreciated.

    Other attempts: I have toyed around with overriding the constructor for the submodels:

        class CompileCommand(ShellCommand):
            def __init__(self, *args, **kwargs):
                kwargs.setdefault('command', "make")
                super(CompileCommand, self).__init__(*args, **kwargs)

    and this works when I try to create an object from the shell:

        >>> c = CompileCommand(name="xyz")
        >>> c.save()
        <CompileCommand: 123>
        >>> c.command
        'make'

    However, this does not set the default value when the associated ModelForm is rendered, which unfortunately is what I'm trying to achieve.

    Update 2 (looks promising): I now have the following in forms.py, which allows me to set Meta.default_initial_values without needing extra code in the views.

        class ModelFormWithDefaults(forms.ModelForm):
            def __init__(self, *args, **kwargs):
                if hasattr(self.Meta, "default_initial_values"):
                    kwargs.setdefault("initial", self.Meta.default_initial_values)
                super(ModelFormWithDefaults, self).__init__(*args, **kwargs)

        class TestCommandForm(ModelFormWithDefaults):
            class Meta:
                model = TestCommand
                default_initial_values = {"command": "make", "arguments": "test"}

    Read the article

  • HTTP POST with URL query parameters -- good idea or not?

    - by Steven Huwig
    I'm designing an API to go over HTTP and I am wondering if using the HTTP POST command, but with URL query parameters only and no request body, is a good way to go.

    Considerations:

    - "Good Web design" requires non-idempotent actions to be sent via POST. This is a non-idempotent action.
    - It is easier to develop and debug this app when the request parameters are present in the URL.
    - The API is not intended for widespread use.
    - It seems like making a POST request with no body will take a bit more work, e.g. a Content-Length: 0 header must be explicitly added.
    - It also seems to me that a POST with no body is a bit counter to most developers' and HTTP frameworks' expectations.

    Are there any more pitfalls or advantages to sending parameters on a POST request via the URL query rather than the request body?

    Edit: The reason this is under consideration is that the operations are not idempotent and have side effects other than retrieval. See the HTTP spec:

        In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
        ...
        Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request. The methods GET, HEAD, PUT and DELETE share this property. Also, the methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently idempotent.
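
    For reference, a body-less POST with everything in the query string is easy to produce from most HTTP clients; a small Python sketch (hypothetical endpoint and parameters) using only the standard library:

        from urllib.parse import urlencode
        from urllib.request import Request, urlopen

        params = {"action": "activate", "id": "42"}  # illustrative parameters
        url = "https://api.example.com/items?" + urlencode(params)

        # Passing an empty byte string as the body makes urllib send a POST with
        # Content-Length: 0, i.e. no request body at all.
        req = Request(url, data=b"", method="POST")
        with urlopen(req) as resp:
            print(resp.status, resp.read())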

    Read the article

  • Help with mysql sum and group query and managing results for jquery graph.

    - by Scarface
    I have a system I am trying to design that will retrieve information from a database so that it can be plotted in a jQuery graph. I need to retrieve the information and somehow put it in the necessary coordinates format (for example, two coordinates: var d = [[1269417600000, 10],[1269504000000, 15]];). The table I am selecting from stores user votes with the fields: points_id (1 = vote up, 2 = vote down), user_id, timestamp, topic_id. What I need to do is select all the votes, somehow group them into their respective days, and then sum the difference between 1-votes and 2-votes for each day. I then need to display the data in the plotting format shown earlier, for example "April 1, 4 votes". The data needs to be separated by commas, except the last plot entry, so I am not sure how to approach that. Below is an example of the kind of thing I need, but it is not correct:

        echo "var d=[";
        $query = mysql_query(
            "SELECT *, SUM(IF(points_id = \"1\", 1, 0)) - SUM(IF(points_id = \"2\", 1, 0)) AS 'total'
             FROM points
             LEFT JOIN topic ON topic.topic_id = points.topic_id
             WHERE topic.creator = '$user'
             GROUP BY timestamp
             HAVING certain time interval"
        );
        while ($row = mysql_fetch_assoc($query)) {
            $timestamp = $row['timestamp'];
            $votes = $row['total'];
            echo "[$timestamp,$votes],";
        }
        echo "];";
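
    On the output-format side, building the coordinate list in code and serializing it in one go sidesteps the trailing-comma problem entirely. A rough Python sketch (column names from the question; the pymysql driver, connection details, and the assumption that timestamp is a Unix integer are mine); the day value is multiplied by 1000 because the jQuery plot expects milliseconds:

        import json
        import pymysql  # assumption: any MySQL driver

        conn = pymysql.connect(host="localhost", user="user", password="pw", db="site")
        cur = conn.cursor()

        # Net votes (up minus down) per calendar day for one topic creator.
        # If timestamp is a TIMESTAMP/DATETIME column, use DATE(timestamp) instead.
        cur.execute("""
            SELECT UNIX_TIMESTAMP(DATE(FROM_UNIXTIME(p.timestamp))) AS day,
                   SUM(IF(p.points_id = 1, 1, 0)) - SUM(IF(p.points_id = 2, 1, 0)) AS total
            FROM points p
            JOIN topic t ON t.topic_id = p.topic_id
            WHERE t.creator = %s
            GROUP BY day
            ORDER BY day
        """, ("someuser",))

        # json.dumps emits a valid array, so there is no trailing comma to strip.
        points = [[int(day) * 1000, int(total)] for day, total in cur.fetchall()]
        print("var d = " + json.dumps(points) + ";")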

    Read the article

  • Why does my jQuery/YQL call not return anything?

    - by tastyapple
    I'm trying to access YQL with jQuery but am not getting a response: http://jsfiddle.net/tastyapple/grMb3/ Anyone know why? $(function(){ $.extend( { _prepareYQLQuery: function (query, params) { $.each( params, function (key) { var name = "#{" + key + "}"; var value = $.trim(this); if (!value.match(/^[0-9]+$/)) { value = '"' + value + '"'; } query = query.replace(name, value); } ); return query; }, yql: function (query) { var $self = this; var successCallback = null; var errorCallback = null; if (typeof arguments[1] == 'object') { query = $self._prepareYQLQuery(query, arguments[1]); successCallback = arguments[2]; errorCallback = arguments[3]; } else if (typeof arguments[1] == 'function') { successCallback = arguments[1]; errorCallback = arguments[2]; } var doAsynchronously = successCallback != null; var yqlJson = { url: "http://query.yahooapis.com/v1/public/yql", dataType: "jsonp", success: successCallback, async: doAsynchronously, data: { q: query, format: "json", env: 'store://datatables.org/alltableswithkeys', callback: "?" } } if (errorCallback) { yqlJson.error = errorCallback; } $.ajax(yqlJson); return $self.toReturn; } } ); $.yql( "SELECT * FROM github.repo WHERE id='#{username}' AND repo='#{repository}'", { username: "jquery", repository: "jquery" }, function (data) { if (data.results.repository["open-issues"].content > 0) { alert("Hey dude, you should check out your new issues!"); } } ); });

    Read the article

  • Parallel processing from a command queue on Linux (bash, python, ruby... whatever)

    - by mlambie
    I have a list/queue of 200 commands that I need to run in a shell on a Linux server. I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer. When a process finishes I want the next command to be "popped" from the queue and executed. Does anyone have code to solve this problem? Further elaboration: there are 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done. The actual problem I'm trying to solve is using imapsync to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly.
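
    Since Python is on the question's list, here is one minimal way to do it with just the standard library: a pool of 10 workers pulls commands off the list, and each worker starts the next command as soon as its previous one finishes. The imapsync command line is illustrative only.

        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        # 200 shell command lines, e.g. one imapsync invocation per mailbox.
        commands = [f"imapsync --host1 oldmail --host2 newmail --user1 user{i} ..." for i in range(200)]

        def run(cmd):
            # shell=True because the queue holds ready-made shell command lines.
            return cmd, subprocess.run(cmd, shell=True).returncode

        # At most 10 commands run at any one time; when one finishes, its worker
        # immediately picks up the next command from the queue.
        with ThreadPoolExecutor(max_workers=10) as pool:
            for cmd, code in pool.map(run, commands):
                print(code, cmd)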

    Read the article

  • As3 & PHP URLEncoding problem!

    - by Jk_
    Hi everyone, I'm stuck with a stupid encoding problem. All my accented characters are displayed as weird ISO characters; for example, é is displayed as %E9. I send a string to my PHP file:

        XMLLoader.load(new URLRequest(online + "/query.php?Query=" + q));
        XMLLoader.addEventListener(Event.COMPLETE, XMLLoaded);

    When I trace q, I get the correct query: "INSERT INTO hello_world (message) values('éàaà');"

    My PHP file looks like this:

        <?php
        include("conection.php"); // Connecting to database
        $Q = $_GET['Query'];
        $query = $Q;
        $resultID = mysql_query($query) or die("Could not execute or probably SQL statement malformed (error): " . mysql_error());
        $xml_output  = "<?xml version=\"1.0\"?>\n"; // XML header
        $xml_output .= "<answers>\n";
        $xml_output .= "<lastID id=".'"'.mysql_insert_id().'"'." />\n";
        $xml_output .= "<query string=".'"'.$query.'"'." />\n";
        $xml_output .= "</answers>";
        echo $xml_output; // Output the XML
        ?>

    When I get the XML back into Flash, $query looks like this: "INSERT INTO hello_world (message) values('%E9%E0a%E0');" And these values are then stored in my DB, which is annoying. Any help would be appreciated! Cheers. Jk_

    Read the article

  • is it better to query database or grab from file? php & mysql

    - by pfunc
    I am keeping a large number of words in a database that I want to match articles against. I was thinking it would be better to keep these words in an array and grab that array whenever needed instead of querying the database every time (since the words won't be changing that much). Is there much performance difference in doing this? And if I were to do this, how do I write a script that writes the array to a new PHP file? I tried writing the array like so:

        while ($row = mysql_fetch_assoc($query)) {
            $newArray[] = $row;
        }
        $fp = fopen('noWordsArr.php', 'w');
        fwrite($fp, $newArray);
        fclose($fp);

    But all I get in the other file is "Array". So I figured I could write this and then have a cron job hit the file every few days or so in case things have changed. But I guess if there is no performance advantage then it probably won't be necessary, and I can just query the database every time I need to access the words.
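
    If the goal is simply to avoid hitting the database on every request, the usual trick is to dump the word list to a file in a machine-readable format rather than relying on the array's string form (which is why fwrite prints just "Array"). A small Python sketch of the idea; the table, column, and file names are assumptions:

        import json, os

        CACHE = "words_cache.json"

        def load_words(cursor):
            # Rebuild the cache from the database only when the file is missing;
            # a cron job could also delete or refresh it every few days.
            if not os.path.exists(CACHE):
                cursor.execute("SELECT word FROM words")
                words = [row[0] for row in cursor.fetchall()]
                with open(CACHE, "w") as fh:
                    json.dump(words, fh)  # properly serialized, not the array's string form
                return words
            with open(CACHE) as fh:
                return json.load(fh)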

    Read the article

  • What should I use to increase performance. View/Query/Temporary Table

    - by Shantanu Gupta
    I want to compare the performance of using views, temp tables, and direct queries in a stored procedure. I have a table that gets created every time a trigger fires. I know this trigger will be fired very rarely, and only once at setup time. Now I have to use that trigger-created table in many places for fetching data, and I can confirm that no one makes any changes to that table, i.e. it is a read-only table. I have to join this table's data with multiple other tables and fetch results for further queries, say:

        select * from triggertable

    Using a temp table:

        select ... into #tx from triggertable join t2 join t3 and so on
        select a, b, c from #tx  -- do something
        select d, e, f from #tx  -- do something
        -- and so on, around 6-7 queries in a row in a stored procedure.

    Using a view:

        create view viewname ( select ... from triggertable join t2 join t3 and so on )
        select a, b, c from viewname  -- do something
        select d, e, f from viewname  -- do something
        -- and so on, around 6-7 queries in a row in a stored procedure.

    This view can be used in other places as well, so I would create it at the database level rather than in the stored procedure.

    Using a direct query:

        select a, b, c from (select ... from triggertable join t2 join t3 join ...)  -- do something
        select d, e, f from (select ... from triggertable join t2 join t3 join ...)  -- do something
        -- and so on, around 6-7 queries in a row in a stored procedure.

    So I can use a view, a temporary table, or direct queries for all the upcoming queries. Which would be the best to use in this case?

    Read the article

  • In .NET, how do you send multiple arguments into a DOS command prompt?

    - by donde
    I am trying to execute DOS commands in ASP.NET 2.0. What I have now calls a BAT file which, in turn, calls a CMD file. It works (with the end result being a file gets ftp'ed). However, I'd like to dump the BAT and CMD files and run everything in .NET. What is the format of sending multiple arguments into the command window? Here is what I have now.

    The .NET code:

        System.Diagnostics.Process proc = new System.Diagnostics.Process();
        proc.EnableRaisingEvents = false;
        proc.StartInfo.FileName = "C:\\MyBat.BAT";
        proc.Start();
        proc.WaitForExit();

    The BAT file looks like this (all it does is run the CMD file):

        ftp.exe -s:C:\MyCMD.cmd

    And here is the content of the CMD file:

        open <my host>
        <my user name>
        <my pw>
        quote site cyl pri=1 sec=1 lrecl=1786 blksize=0 recfm=fb retpd=30
        put C:\MyDTLFile.dtl 'MyDTLFile.dtl'
        quit
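
    The shape of the problem (start a program and hand it separate arguments, here the -s: script switch for ftp.exe) is the same in any language; a small Python sketch of that shape, purely for illustration, reusing the file paths from above:

        import subprocess

        # Each argument is its own list element, so no intermediate .BAT file
        # is needed just to glue the program name and its switches together.
        result = subprocess.run(["ftp.exe", "-s:C:\\MyCMD.cmd"], capture_output=True, text=True)
        print(result.returncode, result.stdout)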

    Read the article

  • em.createQuery keeps returning null

    - by Developer106
    I have an application in which I use JSF 2.0 and EclipseLink. I have entities created for a database made in MySQL; I created these entities using NetBeans 7.1.2, so they get created automatically. Then I use session beans to work with these entities. The thing is, em.createQuery always returns null, even though I checked the NamedQueries in the entities and they match perfectly. A sample of the entities' named queries:

        @NamedQueries({
            @NamedQuery(name = "Users.findAll", query = "SELECT u FROM Users u"),
            @NamedQuery(name = "Users.findByUserId", query = "SELECT u FROM Users u WHERE u.userId = :userId"),
            @NamedQuery(name = "Users.findByUsername", query = "SELECT u FROM Users u WHERE u.username = :username"),
            @NamedQuery(name = "Users.findByEmail", query = "SELECT u FROM Users u WHERE u.email = :email"),

    Notice how I use the findByEmail query in the session bean:

        public Users findByEmail(String email){
            em.getTransaction().begin();
            String find = "Users.findByEmail";
            Query query = em.createNamedQuery(find);
            query.setParameter("email", email);
            Users user = (Users) query.getSingleResult();

    But it always returns null from em.createNamedQuery; I tried using .createQuery first but it was also no good. The stack trace of the exception:

        Caused by: java.lang.NullPointerException
            at com.readme.entities.sessionBeans.UsersFacade.findByEmail(UsersFacade.java:48)
            at com.readme.user.signup.SignupBean.checkAvailability(SignupBean.java:137)
            at com.readme.user.signup.SignupBean.save(SignupBean.java:146)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:601)

    What seems to be the problem here?

    Read the article

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using BiomaRt package. I have loci and want to retrieve some related information, let say description. I first try to use lapply but was surprise by the time needed for the task to be performed. I thus tried a more basic for-loop and get a faster result. Is that expected or is something wrong with my code or with my understanding of apply ? I read other posts dealing with *apply vs for-loop performance (Here, for example) and I was aware that improved performance should not be expected but I don't understand why performance here is actually lower. Here is a reproducible example. 1) Loading the library and selecting the database : library("biomaRt") athaliana <- useMart("plants_mart_14") athaliana <- useDataset("athaliana_eg_gene",mart=athaliana) 2) Querying the database : loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335", "at1g02790", "at1g03220", "at1g03230", "at1g04040", "at1g04110", "at1g05240" ) I create a function for the use in lapply : foo <- function(loci) { getBM("description","tair_locus",loci,athaliana) } When I use this function on the first element : > system.time(foo(cwp_loci[1])) utilisateur système écoulé 0.020 0.004 1.599 When I use lapply to retrieve the data for all values : > system.time(lapply(loci, foo)) utilisateur système écoulé 0.220 0.000 16.376 I then created a new function, adding a for-loop : foo2 <- function(loci) { for (i in loci) { getBM("description","tair_locus",loci[i],athaliana) } } Here is the result : > system.time(foo2(loci)) utilisateur système écoulé 0.204 0.004 10.919 Of course, this will be applied to a big list of loci, so the best performing option is needed. I thank you for assistance. EDIT Following recommendation of @MartinMorgan Simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better. > system.time(lapply(loci, foo)) utilisateur système écoulé 0.236 0.024 110.512 > system.time(foo2(loci)) utilisateur système écoulé 0.208 0.040 116.099 > system.time(foo(loci)) utilisateur système écoulé 0.028 0.000 6.193

    Read the article

  • Slow query. Wrong database structure?

    - by Tin
    I have a database with table that contains tasks. Tasks have a lifecycle. The status of the task's lifecycle can change. These state transitions are stored in a separate table tasktransitions. Now I wrote a query to find all open/reopened tasks and recently changed tasks but I already see with a rather small number of tasks (<1000) that execution time has becoming very long (0.5s). Tasks +-------------+---------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+---------+------+-----+---------+----------------+ | taskid | int(11) | NO | PRI | NULL | auto_increment | | description | text | NO | | NULL | | +-------------+---------+------+-----+---------+----------------+ Tasktransitions +------------------+-----------+------+-----+-------------------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------------+-----------+------+-----+-------------------+----------------+ | tasktransitionid | int(11) | NO | PRI | NULL | auto_increment | | taskid | int(11) | NO | MUL | NULL | | | status | int(11) | NO | MUL | NULL | | | description | text | NO | | NULL | | | userid | int(11) | NO | | NULL | | | transitiondate | timestamp | NO | | CURRENT_TIMESTAMP | | +------------------+-----------+------+-----+-------------------+----------------+ Query SELECT tasks.taskid,tasks.description,tasklaststatus.status FROM tasks LEFT OUTER JOIN ( SELECT tasktransitions.taskid,tasktransitions.transitiondate,tasktransitions.status FROM tasktransitions INNER JOIN ( SELECT taskid,MAX(transitiondate) AS lasttransitiondate FROM tasktransitions GROUP BY taskid ) AS tasklasttransition ON tasklasttransition.lasttransitiondate=tasktransitions.transitiondate AND tasklasttransition.taskid=tasktransitions.taskid ) AS tasklaststatus ON tasklaststatus.taskid=tasks.taskid WHERE tasklaststatus.status IS NULL OR tasklaststatus.status=0 or tasklaststatus.transitiondate>'2013-09-01'; I'm wondering if the database structure is best choice performance wise. Could adding indexes help? I already tried to add some but I don't see great improvements. +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment | +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ | tasktransitions | 0 | PRIMARY | 1 | tasktransitionid | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | taskid_date_ix | 1 | taskid | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | taskid_date_ix | 2 | transitiondate | A | 896 | NULL | NULL | | BTREE | | | | tasktransitions | 1 | status_ix | 1 | status | A | 3 | NULL | NULL | | BTREE | | | +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+ Any other suggestions?

    Read the article

  • How to pick a specific object from a Linq Query using Distinct?

    - by Holli
    I have a list with two or more objects of class Agent:

        Name = "A"; Priority = 0; ResultCount = 100
        Name = "B"; Priority = 1; ResultCount = 100

    Both objects have the same ResultCount. In that case I only need one object, not two or more. I did this with a LINQ query using Distinct and a custom-made comparer:

        IEnumerable<Agent> distinctResultsAgents =
            (from agt in distinctUrlsAgents select agt).Distinct(comparerResultsCount);

    With this query I get only one object from the list, but I never know which one. I don't want just any object; I want object "B", because its Priority is higher than object "A"'s. How can I do that? My custom comparer is very simple and has a method like this:

        public bool Equals(Agent x, Agent y)
        {
            if (x == null || y == null) return false;
            if (x.ResultCount == y.ResultCount) return true;
            return false;
        }
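
    The underlying operation (within each group of equal ResultCount, keep the element with the highest Priority) can be stated independently of the Distinct/comparer machinery. A small Python sketch of that grouping logic, purely for illustration:

        from collections import namedtuple

        Agent = namedtuple("Agent", "name priority result_count")
        agents = [Agent("A", 0, 100), Agent("B", 1, 100)]

        # Group by ResultCount, keeping the highest-priority agent per group.
        best = {}
        for agent in agents:
            current = best.get(agent.result_count)
            if current is None or agent.priority > current.priority:
                best[agent.result_count] = agent

        print(list(best.values()))  # [Agent(name='B', priority=1, result_count=100)]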

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example finding the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the explain:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow and how to make it faster?

    Read the article

  • How to combine the multiple part linq into one query?

    - by user2943399
    Operator should be ‘AND’ and not a ‘OR’. I am trying to refactor the following code and i understood the following way of writing linq query may not be the correct way. Can somone advice me how to combine the following into one query. AllCompany.Where(itm => itm != null).Distinct().ToList(); if (AllCompany.Count > 0) { //COMPANY NAME if (isfldCompanyName) { AllCompany = AllCompany.Where(company => company["Company Name"].StartsWith(fldCompanyName)).ToList(); } //SECTOR if (isfldSector) { AllCompany = AllCompany.Where(company => fldSector.Intersect(company["Sectors"].Split('|')).Any()).ToList(); } //LOCATION if (isfldLocation) { AllCompany = AllCompany.Where(company => fldLocation.Intersect(company["Location"].Split('|')).Any()).ToList(); } //CREATED DATE if (isfldcreatedDate) { AllCompany = AllCompany.Where(company => company.Statistics.Created >= createdDate).ToList(); } //LAST UPDATED DATE if (isfldUpdatedDate) { AllCompany = AllCompany.Where(company => company.Statistics.Updated >= updatedDate).ToList(); } //Allow Placements if (isfldEmployerLevel) { fldEmployerLevel = (fldEmployerLevel == "Yes") ? "1" : ""; AllCompany = AllCompany.Where(company => company["Allow Placements"].ToString() == fldEmployerLevel).ToList(); }

    Read the article

  • node.js: looping, email each matching row from query but it it emails the same user by # number of matches?

    - by udonsoup16
    I have a node.js email server that works fine however I noticed a problem. If the query string found 4 rows, it would send four emails but only to the first result instead of to each found email address. var querystring1 = 'SELECT `date_created`,`first_name`,`last_name`,`email` FROM `users` WHERE TIMESTAMPDIFF(SECOND,date_created, NOW()) <=298 AND `email`!="" LIMIT 0,5'; connection.query(querystring1, function (err, row) { if (err) throw err; for (var i in row) { // Email content; change text to html to have html emails. row[i].column name will pull relevant database info var sendit = { to: row[i].email, from: '******@******', subject: 'Hi ' + row[i].first_name + ', Thanks for joining ******!', html: {path:__dirname + '/templates/welcome.html'} }; // Send emails transporter.sendMail(sendit,function(error,response){ if (error) { console.log(error); }else{ console.log("Message sent1: " + row[i].first_name);} transporter.close();} ) }}); .... How do I have it loop through each found row and send a custom email using row data individual row data?

    Read the article

  • Disable a form and all contained elements until an ajax query completes (or another solution to prev

    - by Max Williams
    I have a search form with inputs and selects, and when any input/select is changed i run some js and then make an ajax query with jquery. I want to stop the user from making further changes to the form while the request is in progress, as at the moment they can initiate several remote searches at once, effectively causing a race between the different searches. It seems like the best solution to this is to prevent the user from interacting with the form while waiting for the request to come back. At the moment i'm doing this in the dumbest way possible by hiding the form before making the ajax query and then showing it again on success/error. This solves the problem but looks horrible and isn't really acceptable. Is there another, better way to prevent interaction with the form? To make things more complicated, to allow nice-looking selects, the user actually interacts with spans which have js hooked up to them to tie them to the actual, hidden, selects. So, even though the spans aren't inputs, they are contained in the form and represent the actual interactive elements of the form. Grateful for any advice - max. Here's what i'm doing now: function submitQuestionSearchForm(){ //bunch of irrelevant stuff var questionSearchForm = jQuery("#searchForm"); questionSearchForm.addClass("searching"); jQuery.ajax({ async: true, data: jQuery.param(questionSearchForm.serializeArray()), dataType: 'script', type: 'get', url: "/questions", success: function(msg){ //more irrelevant stuff questionSearchForm.removeClass("searching"); }, error: function(msg){ questionSearchForm.removeClass("searching"); } }); return true; }

    Read the article

  • Error : 'java' is not recognized as an internal or external command, operable program or batch file.

    - by Setu
    I have my application online on Google App Engine. When I deploy this application, this error is generated. I am using NetBeans 6.9. My JDK is installed at "C:\Program Files\Java\jdk1.6.0_23\". I have installed Google App Engine for Java. The application deploys and runs fine on localhost, and I am also able to start the Google App Engine server. I have set the environment variables as follows:

        1) classpath : C:\Program Files\Java\jdk1.6.0_23\bin;
        2) path      : C:\Program Files\Java\jdk1.6.0_23\bin;
        3) JAVA_HOME : C:\Program Files\Java\jdk1.6.0_23\bin;
        4) include   : C:\Program Files\Java\jdk1.6.0_23\include;
        5) lib       : C:\Program Files\Java\jdk1.6.0_23\lib;

    However, in system32 I am not finding java.exe, javac.exe or javawc.exe. Also, when running 1) java.exe, 2) javac.exe, and 3) java -version from the command line, they give proper output. How do I get my application to deploy properly?

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1).  You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it.   You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walkthrough some of the cool scenarios it enables.  SQL CE – What is it and why should you care? SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Easy Migration to SQL Server SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios.  For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure.  These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure.  You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure.  Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work. 
New Tooling Support for SQL CE in VS 2010 SP1 VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time.  With VS 2010 SP1 you can now: Create new SQL CE Databases Edit and Modify SQL CE Database Schema and Indexes Populate SQL CE Databases within Data Use the Entity Framework (EF) designer to create model layers against SQL CE databases Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects. Download You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download.  This is a separate download that enables the SQL CE tooling support for VS 2010 SP1. Walkthrough of Two Scenarios In this blog post I’m going to walkthrough how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walkthrough: How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control. How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code-First “auto-create” a SQL CE database for us based on our model classes.  We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it.  We’ll do all this within the context of an ASP.NET MVC based application. You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1). Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Form application.  We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control. Step 1: Create a new ASP.NET Web Forms Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project.  We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented: Step 2: Create a SQL CE Database Right click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box.  Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above a Store.sdf file was added to our project: Step 3: Adding a “Products” Table Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.  
Since it is a new database there are no tables within it: Right click on the “Tables” icon and choose the “Create Table” menu command to create a new database table.  We’ll name the new table “Products” and add 4 columns to it.  We’ll mark the first column as a primary key (and make it an identify column so that its value will automatically increment with each new row): When we click “ok” our new Products table will be created in the SQL CE database. Step 4: Populate with Data Once our Products table is created it will show up within the Server Explorer.  We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it: Step 5: Create an EF Model Layer We have a SQL CE database with some data in it – let’s now create an EF Model Layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command.  This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx” This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next.  Choose to use the Store.sdf SQL CE database we just created and then click next again.  The wizard will then ask you what database objects you want to import into your model.  Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer.  It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express.  The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application. Step 6: Compile the Project Before using your model layer you’ll need to build your project.  Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command. Step 7: Create a Page that Uses our EF Model Layer Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit the our Products data (via the EF Model Layer we just created). Right-click on the project and choose the Add->New Item command.  Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”.  Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading the new Page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design-view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop down option above.  This will bring up the below dialog which allows you to pick your Data Source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier.  This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.  
Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier.  Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products.  When you click “Finish” VS will wire-up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps we’ll do will be to click the “Enable Editing” checkbox on the Grid (which will cause the Grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the Grid. Step 8: Run the Application Let’s now run our application and browse to the /Products.aspx page that contains our GridView.  When we do so we’ll see a Grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF Model Layer, and ultimately save them within our SQL CE database. Learn More about using EF with ASP.NET Web Forms Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms.  The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can also now also be done with SQL CE.   Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3 We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database.  In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code first development”.  Code-First Development enables a pretty sweet development workflow.  It enables you to: Define your model objects by simply writing “plain old classes” with no base classes or visual designer required Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping Optionally auto-create a database based on the model classes you define – allowing you to start from code first I’ve done several blog posts about EF Code First in the past – I really think it is great.  The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE, enables a pretty nice workflow.  Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application. Step 1: Create a new ASP.NET MVC 3 Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project.  We’ll use the “Internet Project” template so that it has a default UI skin implemented: Step 2: Use NuGet to Install EFCodeFirst Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project.  We’ll use the Package Manager command shell to do this.  Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.  
Then type: install-package EFCodeFirst within the package manager console to download the EFCodeFirst library and have it be added to our project: When we enter the above command, the EFCodeFirst library will be downloaded and added to our application: Step 3: Build Some Model Classes Using a “code first” based development workflow, we will create our model classes first (even before we have a database).  We create these model classes by writing code. For this sample, we will right click on the “Models” folder of our project and add the below three classes to our project: The “Dinner” and “RSVP” model classes above are “plain old CLR objects” (aka POCO).  They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data-types.  No data persistence attributes or data code has been added to them.   The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database. Step 4: Listing Dinners We’ve written all of the code necessary to implement our model layer for this simple project.  Let’s now expose and implement the URL: /Dinners/Upcoming within our project.  We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and select the “Add->Controller” menu command.  We’ll name the Controller we want to create “DinnersController”.  We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above.  We will use a LINQ query to retrieve the data and pass it to a View to render with the code below: We’ll then right-click within our Upcoming method and choose the “Add-View” menu command to create an “Upcoming” view template that displays our dinners.  We’ll use the “empty” template option within the “Add View” dialog and write the below view template using Razor: Step 4: Configure our Project to use a SQL CE Database We have finished writing all of our code – our last step will be to configure a database connection-string to use. We will point our NerdDinners model class to a SQL CE database by adding the below <connectionString> to the web.config file at the top of our project: EF Code First uses a default convention where context classes will look for a connection-string that matches the DbContext class name.  Because we created a “NerdDinners” class earlier, we’ve also named our connectionstring “NerdDinners”.  Above we are configuring our connection-string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project. Step 5: Running our Application Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners: You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code-First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist.  Above we configured  EF Code-First to point at a SQL CE database in the \App_Data\ directory of our project.  When we ran our application, EF Code-First saw that the SQL CE database didn’t exist and automatically created it for us. 
Step 6: Using VS 2010 SP1 to Explore our newly created SQL CE Database Click the “Show all Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF code-first within the \App_Data\ folder: We can optionally right-click on the file and “Include in Project" to add it to our solution: We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. Below is the view we get when we double-click our NerdDinners.sdf SQL CE file.  We can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer.  Notice how two tables - Dinners and RSVPs – were automatically created for us within our SQL CE database.  This was done by EF Code First when we accessed the NerdDinners class by running our application above: We can right-click on a Table and use the “Show Table Data” command to enter some upcoming dinners in our database: We’ll use the built-in editor that VS 2010 SP1 supports to populate our table data below: And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up: Step 7: Changing our Model and Database Schema Let’s now modify the schema of our model layer and database, and walkthrough one way that the new VS 2010 SP1 Tooling support for SQL CE can make this easier.  With EF Code-First you typically start making database changes by modifying the model classes.  For example, let’s add an additional string property called “UrlLink” to our “Dinner” class.  We’ll use this to point to a link for more information about the event: Now when we re-run our project, and visit the /Dinners/Upcoming URL we’ll see an error thrown: We are seeing this error because EF Code-First automatically created our database, and by default when it does this it adds a table that helps tracks whether the schema of our database is in sync with our model classes.  EF Code-First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime.  Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this?  There are two approaches you can use today: Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB) Modify the schema of the existing database to make it in sync with the model classes (keeping/migrating the data within the existing DB) There are a couple of ways you can do the second approach above.  Below I’m going to show how you can take advantage of the new VS 2010 SP1 Tooling support for SQL CE to use a database schema tool to modify our database structure.  We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically. Step 8: Modify our SQL CE Database Schema using VS 2010 SP1 The new SQL CE Tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.  
To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog.  We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column.  Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click ok our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes.  As i mentioned earlier, when a database is automatically created by EF Code-First it adds a “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch we could do would be to update our view to check for the new UrlLink property and render a <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database: Summary SQL CE provides a free, embedded, database engine that you can use to easily enable database storage.  With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them.  This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option.  This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance.  SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application.  All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server.  This provides the flexibility to scale up your application starting from a small embedded database solution as needed. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article
