Search Results

Search found 19966 results on 799 pages for 'datetime query'.


  • jQuery addClass on $.post

    - by Tim
    I am basically trying to create a small registration form. If the username is already taken, I want to add the 'red' class; if not, then 'green'. The PHP here works fine, and returns either "YES" or "NO" to determine whether it's ok. The CSS:

        input { border:1px solid #ccc; }
        .red { border:1px solid #c00; }
        .green { border:1px solid green; background:#afdfaf; }

    The JavaScript I'm using is:

        $("#username").change(function() {
            var value = $("#username").val();
            if (value != '') {
                $.post("check.php", { value: value }, function(data) {
                    $("#test").html(data);
                    if (data == 'YES') {
                        $("#username").removeClass('red').addClass('green');
                    }
                    if (data == 'NO') {
                        $("#username").removeClass('green').addClass('red');
                    }
                });
            }
        });

    I've got the document.ready stuff too... It all works fine, because the #test div html changes to either "YES" or "NO", apart from the last part where I check what the value of the data is. Here is the PHP:

        $value = $_POST['value'];
        if ($value != "") {
            $sql = "select * FROM users WHERE username='" . $value . "'";
            $query = mysql_query($sql) or die("Could not match data because " . mysql_error());
            $num_rows = mysql_num_rows($query);
            if ($num_rows > 0) {
                echo "NO";
            } else {
                echo "YES";
            }
        }

    Read the article

  • VS2008 EF and non-CRUD SP usage

    - by SteveO
    Using an edmx version of EF. My returned data is a join between tables that has a COMPOUND filter on the primary table. In essence this query is going to return a SEGMENT of Law codes and descriptions that a user can tie to a Sex Offender report. I have a complex SP because Linq2SQL cannot pass in a BETWEEN statement, or at least that is how I understand the error. The code itself is broken up by '-' marks: 39-13-504 is "Aggravated Sexual Battery". The user wants to have a query with 4 params: 39, 13, 500, 599, i.e. get all codes from Title 39 and Chapter 13 with parts between 500 and 599. I have the SP in place to do the work; is there a way to consume the SP within the EF? I find many blogs about SPs that only cover CRUD operations as their use of an SP. That doesn't fit this need at all. I do not have a single table, but a join to the "prior selections" table that maps the key for the code. Any pointers on how to get a READ with an SP? TIA
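    For EF v1 (the VS2008 edmx designer), one hedged option is to drop to the store connection the ObjectContext already holds and read the SP's rows with plain ADO.NET, which sidesteps function-import mapping entirely. In the sketch below, the context, procedure, parameter, and column names are all hypothetical stand-ins, not a confirmed API for this schema:

        using System;
        using System.Data;
        using System.Data.Common;
        using System.Data.EntityClient;

        var ctx = new LawCodesEntities();                  // hypothetical generated context
        var conn = (EntityConnection)ctx.Connection;
        conn.Open();                                       // opens the underlying store connection too
        using (DbCommand cmd = conn.StoreConnection.CreateCommand())
        {
            cmd.CommandText = "dbo.GetLawCodesInRange";    // hypothetical SP doing the BETWEEN work
            cmd.CommandType = CommandType.StoredProcedure;
            foreach (var p in new[] { new { Name = "@Title", Value = 39 },
                                      new { Name = "@Chapter", Value = 13 },
                                      new { Name = "@PartFrom", Value = 500 },
                                      new { Name = "@PartTo", Value = 599 } })
            {
                var param = cmd.CreateParameter();
                param.ParameterName = p.Name;
                param.Value = p.Value;
                cmd.Parameters.Add(param);
            }
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader["Code"], reader["Description"]);
        }

    If the SP's columns line up exactly with an entity in the model, mapping it as a function import in the designer is the cleaner route; the ADO.NET fallback is just the one that always works for arbitrary result shapes.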

    Read the article

  • Lucene - querying with long strings

    - by Mikos
    I have an index with a field "Affiliation"; some example values are:

        "Stanford University School of Medicine, Palo Alto, CA USA",
        "Institute of Neurobiology, School of Medicine, Stanford University, Palo Alto, CA",
        "School of Medicine, Harvard University, Boston MA",
        "Brigham & Women's, Harvard University School of Medicine, Boston, MA",
        "Harvard University, Cambridge MA"

    and so on (the bottom line being that the affiliations are written in multiple ways with no apparent consistency). When I query the index on the affiliation field using, say, "School of Medicine, Stanford University, Palo Alto, CA" (with QueryParser) to find all Stanford-related documents, I get a lot of false +ves, presumably because of the presence of "School of Medicine" etc. (Note: I cannot use a phrase query because of the variability in the way affiliation is constructed.) I have tried the following:

    - Using a SpanNearQuery by splitting the search phrase on whitespace (here I get no results!)
    - Boosting (using ^) by splitting on the comma and boosting the last parts, such as "Palo Alto CA", with a much higher boost than the initial phrases. Here I still get lots of false +ves.

    Any suggestions on how to approach this? If SpanNearQuery is the way to go, any ideas on why I get 0 results?
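    One hedged direction, sketched below in Lucene.NET-style C# (the field analysis and exact API version are assumptions): build a BooleanQuery by hand so the distinctive token must match while the boilerplate tokens merely may, instead of letting QueryParser weight everything equally.

        using Lucene.Net.Index;
        using Lucene.Net.Search;

        // Require the token that actually identifies the institution...
        var query = new BooleanQuery();
        query.Add(new TermQuery(new Term("Affiliation", "stanford")),
                  BooleanClause.Occur.MUST);

        // ...and leave the boilerplate terms optional so they only affect ranking.
        foreach (var optional in new[] { "school", "medicine", "palo", "alto" })
            query.Add(new TermQuery(new Term("Affiliation", optional)),
                      BooleanClause.Occur.SHOULD);

        var hits = searcher.Search(query, 100);   // searcher: an already-open IndexSearcher

    On the SpanNearQuery front, zero results usually point at an analyzer mismatch: span terms must be the post-analysis tokens (lowercased, stopwords removed), not the raw phrase words, so it is worth checking what the analyzer actually emits for the field.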

    Read the article

  • LINQ to SQL: Too much CPU usage. What happens when there are multiple users?

    - by soldieraman
    I am using LINQ to SQL and seeing my CPU usage skyrocket; see the screenshot below. I have three questions:

    1. What can I do to reduce this CPU usage? I have done profiling and basically removed everything.
    2. Will making every LINQ to SQL statement into a compiled query help? I also find that even with compiled queries, simple statements like ByID() can take 3 milliseconds on a server with 3.25GB RAM at 3.17GHz; this will just become slower on a less powerful computer. Or will the compiled query get faster the more it is used?
    3. The CPU usage (on the local server) goes to 12-15% for a single user; will this multiply with the number of users accessing the server when the application is put on a live server? I.e. 2 users at a time will mean 15*2 = 30% CPU usage. If this is the case, is my application limited to a maximum of 4-5 users at a time? Or does LINQ to SQL / .NET share some of the CPU usage?
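    On question 2: a compiled query is translated from an expression tree to SQL once and then reused through the delegate you hold, so the win is the per-call translation cost; it does not keep improving with use. A minimal sketch (the context and entity names are hypothetical):

        using System;
        using System.Data.Linq;
        using System.Linq;

        static class Queries
        {
            // Compiled once; every call after the first skips expression-tree
            // translation and goes straight to parameterized SQL.
            public static readonly Func<MyDataContext, int, Product> ProductById =
                CompiledQuery.Compile((MyDataContext db, int id) =>
                    db.Products.Single(p => p.ID == id));
        }

        // Usage: var product = Queries.ProductById(db, 42);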

    Read the article

  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&A's here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations, I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain, which aren't in the repository, modifying the actual query. So, does anyone have any suggestions for how to write a clean and concise repository without doing this? One problem I see is that my repository will blow up with a ton of methods for the various things I need to filter my query on. Having a bunch of:

        IEnumerable GetProductsSinceDate(DateTime date);
        IEnumerable GetProductsByName(string name);
        IEnumerable GetProductsByID(int ID);

    If I was allowing IQueryable to be passed around, I could easily have a generic repository that looked like:

        public interface IRepository<T> where T : class
        {
            T GetById(int id);
            IQueryable<T> GetAll();
            void InsertOnSubmit(T entity);
            void DeleteOnSubmit(T entity);
            void SubmitChanges();
        }

    However, if you aren't using IQueryable, then methods like GetAll() aren't really practical, since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer, which received IQueryable results from the repository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter classes like GetProductByName, which will return a concrete type like IList or IEnumerable?
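    One hedged middle ground, sketched below with hypothetical names: keep the IQueryable strictly private to the repository, compose the filter and the paging inside it, and let only materialized lists leave. The named methods stay, but they all funnel through one private query, so the "ton of methods" become one-liners:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        public class ProductRepository
        {
            private readonly IQueryable<Product> products;   // handed in by the ORM context

            public ProductRepository(IQueryable<Product> products)
            {
                this.products = products;
            }

            public IList<Product> GetProductsSinceDate(DateTime date, int page, int pageSize)
            {
                return Find(p => p.CreatedOn >= date, page, pageSize);
            }

            public IList<Product> GetProductsByName(string name, int page, int pageSize)
            {
                return Find(p => p.Name == name, page, pageSize);
            }

            // The IQueryable never escapes: filtering and paging are composed
            // here, and ToList() is the only place the query executes.
            private IList<Product> Find(Expression<Func<Product, bool>> predicate,
                                        int page, int pageSize)
            {
                return products.Where(predicate)
                               .OrderBy(p => p.ID)     // LINQ to SQL needs an order before Skip
                               .Skip(page * pageSize)
                               .Take(pageSize)
                               .ToList();
            }
        }

    This keeps GetAll()-style laziness (nothing runs until ToList()) without letting callers reshape the query, at the cost of deciding the filter vocabulary up front.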

    Read the article

  • SQLCMD.EXE generates ugly report. How to format it?

    - by Juri Bogdanov
    I have a batch that runs a SQL query like:

        use [AxDWH_Central_Reporting]
        GO
        EXEC sp_spaceused @updateusage = N'TRUE'
        GO

    It displays 2 tables and generates an ugly report with some kind of unneeded 'P' letters. See below:

        Changed database context to 'AxDWH_Central_Reporting'.
        database_name Pdatabase_size Punallocated space
        --------------------------------------------------------------------------------------------------------------------------------P------------------P------------------
        AxDWH_Central_Reporting P10485.69 MB P7436.85 MB

        reserved Pdata Pindex_size Punused
        ------------------P------------------P------------------P------------------
        3121176 KB P3111728 KB P7744 KB P1704 KB

    I also tried to generate 1 table from this procedure with the next query:

        declare @dbname sysname, @dbsize bigint, @logsize bigint, @reservedpages bigint

        select @reservedpages = sum(a.total_pages)
        from sys.partitions p
        join sys.allocation_units a on p.partition_id = a.container_id
        left join sys.internal_tables it on p.object_id = it.object_id

        select @dbsize = sum(convert(bigint, case when status & 64 = 0 then size else 0 end)),
               @logsize = sum(convert(bigint, case when status & 64 <> 0 then size else 0 end))
        from dbo.sysfiles

        select 'database name' = db_name(),
               'database size' = ltrim(str((convert(dec(15,2), @dbsize) + convert(dec(15,2), @logsize)) * 8192 / 1048576, 15, 2) + ' MB'),
               'unallocated space' = ltrim(str((case when @dbsize >= @reservedpages then (convert(dec(15,2), @dbsize) - convert(dec(15,2), @reservedpages)) * 8192 / 1048576 else 0 end), 15, 2) + ' MB')

    But I got a similar ugly report:

        database name Pdatabase size Punallocated space
        --------------------------------------------------------------------------------------------------------------------------------P------------------P------------------
        master P5.75 MB P1.52 MB

        (1 rows affected)

    Is it possible to change the layout formatting for the report, to make it more beautiful?

    Read the article

  • How to best handle exceptions to repeating calendar events

    - by blcArmadillo
    I'm working on a project that will require me to implement a calendar. I'm trying to come up with a system that is very flexible: it can handle repeating events, exceptions to repeats, etc. I've looked at the schemas of applications like iCal, Lotus Notes, and Mozilla to get an idea of how to go about implementing such a system. Currently I'm having trouble deciding the best way to handle exceptions to repeating events. I've used databases quite a bit but don't have a ton of experience with really optimizing everything, so I'm not sure which of the two methods I'm considering would be optimal in terms of overall performance and the ability to query/search:

    1. Breaking the repeating event: changing the end date on the current row for the repeating event, inserting a new row with the exception, and adding another row continuing the old sequence.
    2. Simply adding an exception: adding a new row with some field that indicates it as an override.

    So here is why I can't decide. Method one will result in a lot more rows, since each edit requires 2 extra rows as opposed to only one row with the second method. On the other hand, I think the query to find an event would be much simpler, and thus possibly faster(?), using the first method. The second method seems like it will require more calculating on the application server, since once you get the data you'll have to remove the intersection of the two rows. I know databases are often the bottleneck for websites, and while I'm sure a lot of you are thinking either is fine because my project will probably never get large enough for the difference in efficiency to really matter, I'd still like to implement the best solution. So which method would you pick, or would you do something completely different? Also, as a side note, I'll be using MySQL and PHP. If there is another technology that you think would be better suited for this, especially in the database area, please mention it. Thanks for the advice.
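    To make the trade-off in method two concrete, here is a hedged sketch (in C#, though the poster's stack is PHP/MySQL; every type here is hypothetical) of the application-side merge it implies: expand the recurrence, then substitute any override row that lands on the same date.

        using System;
        using System.Collections.Generic;

        class Occurrence { public int EventId; public DateTime When; public string Title; }
        class RecurringEvent { public int Id; public DateTime Start; public TimeSpan Interval; public string Title; }

        static IEnumerable<Occurrence> Expand(RecurringEvent ev,
                                              IDictionary<DateTime, Occurrence> overridesByDate,
                                              DateTime from, DateTime to)
        {
            for (var d = ev.Start; d <= to; d = d.Add(ev.Interval))
            {
                if (d < from) continue;
                Occurrence overridden;
                // An override row with the same date wins over the generated occurrence.
                if (overridesByDate.TryGetValue(d.Date, out overridden))
                    yield return overridden;
                else
                    yield return new Occurrence { EventId = ev.Id, When = d, Title = ev.Title };
            }
        }

    Method one keeps this loop trivial (every row is literal) at the price of the extra rows; method two keeps the table small and pushes exactly this merge into code.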

    Read the article

  • Autocomplete or Select box? (design problem)

    - by Craig Whitley
    I'm working on a comparison website, so needless to say the search function is the primary feature of the site. I have two input text boxes and a search button. At the moment the input text boxes use Ajax to query the database and show a drop-down box, but I'm wondering if it would be more intuitive to use a select box instead? The second box is dependent on the first: when the first is selected, there's another Ajax query so only the available options for the first selection appear in the second autocomplete box.

    Autocomplete pros:
    - "Feels" right?
    - Looks more appealing than a select box (CSS design)?

    Autocomplete cons:
    - The user has to be instructed on how to use the search (made to think?)
    - Only really works off the bat with JavaScript enabled.
    - The user may get confused if they type in what they want and no box appears (i.e., no results).

    Select box pros:
    - Can bring up the list of options / know what's there from the outset.
    - We use select boxes every day (locations etc.), so we're used to how they work (more intuitive?).

    Select box cons:
    - Can look a little unaesthetic when there are too many options to choose from. I'm thinking at most around 100 options for my site over time.

    Any thoughts on how I could go about this would be appreciated!

    Read the article

  • Threading best practice when using SFTP in C#

    - by Christian
    Ok, this is more one of those "conceptual questions", but I hope I get some pointers in the right direction. First the desired scenario:

    - I want to query an SFTP server for directory and file lists
    - I want to upload or download files simultaneously

    Both things are pretty easy using the SFTP class provided by Tamir.SharpSsh, but if I only use one thread, it is kind of slow. Especially the recursion into subdirs gets very "UI blocking", because we are talking about 10,000s of directories. My basic approach is simple: create some kind of "pool" where I keep 10 open SFTP connections. Then query the first worker for a list of dirs. Once this list is obtained, send the next free workers (e.g. 1-10, the first one is also free again) to get the subdirectory details. As soon as a worker is free, send it for the subsubdirs. And so on... I know the ThreadPool and simple Threads, and did some tests. What confuses me a little bit is the following. I basically need:

    - A list of threads I create, say 10
    - Connect all threads to the server
    - If a connection drops, create a new thread / SFTP client
    - If there is work to do, take the first free thread and handle the work

    I am currently not sure about the implementation details, especially the "work to do" and the "maintain list of threads" parts. Is it a good idea to:

    - Enclose the work in an object containing a job description (path) and a callback
    - Send the threads into an infinite loop with a 100ms wait, waiting for work
    - If SFTP is dead, either revive it, or kill the whole thread and create a new one

    How do I encapsulate this: do I write my own "10ThreadsManager" or are there some out there? (A pool sketch follows below.) Ok, so far... Btw, I could also use PRISM events and commands, but I think the problem is unrelated. Perhaps the EventModel to signal a done processing of a "work package"... Thanks for any ideas, critique... Chris
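    A hedged sketch of the worker-pool shape described above (C#; the Sftp constructor and listing call mimic Tamir.SharpSsh but should be treated as assumptions, and the termination check is deliberately naive):

        using System;
        using System.Collections.Generic;
        using System.Threading;

        string host = "sftp.example.com", user = "u", pass = "p";   // placeholders

        var pending = new Queue<string>();       // directory paths still to list
        pending.Enqueue("/");
        var gate = new object();

        ThreadStart worker = delegate
        {
            var sftp = new Sftp(host, user, pass);   // one connection per thread
            sftp.Connect();
            while (true)
            {
                string path;
                lock (gate)
                {
                    if (pending.Count == 0) break;   // naive: a real pool should wait
                    path = pending.Dequeue();        // while peers may still enqueue
                }
                foreach (object entry in sftp.GetFileList(path))
                    lock (gate) pending.Enqueue(path + "/" + entry);
            }
            sftp.Close();
        };

        var threads = new List<Thread>();
        for (int i = 0; i < 10; i++) threads.Add(new Thread(worker));
        threads.ForEach(t => t.Start());
        threads.ForEach(t => t.Join());

    The "work package with callback" idea from the question fits here naturally: replace the raw string queue with a small job object (path plus callback), and the reconnect-on-drop logic wraps the sftp calls in a try/catch that rebuilds the client.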

    Read the article

  • How to configure replication? - This database is not enabled for publication.

    - by truthseeker
    Hi, I'm trying to configure replication on SQL Server 2005. I can do it using the wizard, but when I try to run the scripts generated by this wizard, the following error messages appear:

        Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addpublication, Line 159
        This database is not enabled for publication.
        Msg 18757, Level 16, State 1, Procedure sp_MSrepl_addpublication_snapshot, Line 66
        Unable to execute procedure. The database is not published. Execute the procedure in a database that is published for replication.
        Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addarticle, Line 168
        This database is not enabled for publication.
        Msg 14294, Level 16, State 1, Procedure sp_verify_job_identifiers, Line 25
        Supply either @job_id or @job_name to identify the job.

    It's a bit strange, because when I run this query on a database where I clicked and then removed a publication, everything goes well. The problem occurs when I use my query on a new database. What is more, I'm using the sp_replicationdboption stored procedure. When I try to run it, it says:

        The replication option 'publish' of database 'ReplicationTest00' has already been set to true.

    Please help me resolve this issue.

    Read the article

  • python urllib post question

    - by paul
    Hello all, I'm making a simple Python POST script, but it is not working well. There are 2 parts to the login. The first login uses 'http://mybuddy.buddybuddy.co.kr/userinfo/UserInfo.asp', and the second login uses 'http://user.buddybuddy.co.kr/usercheck/UserCheckPWExec.asp'. I can log in on the first login page, but I couldn't log in on the second page; it returns an error like 'illegal access'. I heard this is related to cookies, but I don't know how to implement a fix for this problem. If anyone can help me, much appreciated!! Thanks!

        import re, sys, os, mechanize, urllib, time
        import datetime, socket

        params = urllib.urlencode({'ID': 'ph896011', 'PWD': 'pk1089'})
        rq = mechanize.Request("http://mybuddy.buddybuddy.co.kr/userinfo/UserInfo.asp", params)
        rs = mechanize.urlopen(rq)
        data = rs.read()
        logged_fail = r';history.back();</script>' in data
        if not logged_fail:
            print 'login success'
            try:
                # Second request: the suspicion above is that the session cookie
                # from the first login never reaches this request.
                params = urllib.urlencode({'PASSWORD': 'pk1089'})
                rq = mechanize.Request("http://user.buddybuddy.co.kr/usercheck/UserCheckPWExec.asp", params)
                rs = mechanize.urlopen(rq)
                data = rs.read()
                print data
            except:
                print 'error'

    Read the article

  • NHibernate - joining on a subquery using ICriteria

    - by owensymes.mp
    I have a SQL query that I need to represent using NHibernate's ICriteria API:

        SELECT u.Id as Id, u.Login as Login, u.FirstName as FirstName, u.LastName as LastName,
               gm.UserGroupId_FK as UserGroupId, inner_q.Data1, inner_q.Data2, inner_q.Data3
        FROM dbo.User u
        inner join dbo.GroupMember gm on u.Id = gm.UserAnchorId_FK
        left join (
            SELECT di.UserAnchorId_FK,
                   sum(di.Data1) as Data1,
                   sum(di.Data2) as Data2,
                   sum(di.Data3) as Data3
            FROM dbo.DailyInfo di
            WHERE di.Date between '2009-04-01' and '2009-06-01'
            GROUP BY di.UserAnchorId_FK
        ) inner_q ON inner_q.UserAnchorId_FK = u.Id
        WHERE gm.UserGroupId_FK = 195

    Attempts so far have included mapping 'User' and 'DailyInfo' classes (my entities) and making a DailyInfo object a property of the User object. However, how to map the foreign key relationship between them is still a mystery, i.e.:

        <one-to-one></one-to-one>
        <one-to-many></one-to-many>
        <generator class="foreign"><param name="property">Id</param></generator> (!)

    Solutions on the web generally deal with subqueries within a WHERE clause; however, I need to left join on this subquery instead, to ensure NULL values are returned for rows that do not join. I have the feeling that I should be using a Criteria for the outer query, then forming a 'join' with a DetachedCriteria to represent the subquery?
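    If the ICriteria/DetachedCriteria route stays blocked (DetachedCriteria subqueries attach in the WHERE clause, not as a joined derived table), one hedged fallback is NHibernate's native SQL door, sketched below; the session variable and parameter names are assumptions:

        // ISession session = ...;  obtained from the session factory
        var sql = @"SELECT u.Id, u.Login, u.FirstName, u.LastName,
                           gm.UserGroupId_FK, inner_q.Data1, inner_q.Data2, inner_q.Data3
                    FROM dbo.[User] u
                    INNER JOIN dbo.GroupMember gm ON u.Id = gm.UserAnchorId_FK
                    LEFT JOIN (SELECT di.UserAnchorId_FK,
                                      SUM(di.Data1) AS Data1, SUM(di.Data2) AS Data2, SUM(di.Data3) AS Data3
                               FROM dbo.DailyInfo di
                               WHERE di.Date BETWEEN :fromDate AND :toDate
                               GROUP BY di.UserAnchorId_FK) inner_q
                      ON inner_q.UserAnchorId_FK = u.Id
                    WHERE gm.UserGroupId_FK = :groupId";

        var rows = session.CreateSQLQuery(sql)
                          .SetDateTime("fromDate", new DateTime(2009, 4, 1))
                          .SetDateTime("toDate", new DateTime(2009, 6, 1))
                          .SetInt32("groupId", 195)
                          .List();              // each element is an object[] of the selected columns

    It gives up portability for exactly one query while the rest of the mappings stay as they are, which is often an acceptable trade for a reporting-shaped join like this one.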

    Read the article

  • Why shouldn't I always use nullable types in C#?

    - by Matthew Vines
    I've been searching for some good guidance on this since the concept was introduced in .NET 2.0. Why would I ever want to use non-nullable data types in C#? (A better question is why I wouldn't choose nullable types by default, and only use non-nullable types when that explicitly makes sense.) Is there a significant performance hit to choosing a nullable data type over its non-nullable peer? I much prefer to check my values against null instead of Guid.Empty, string.Empty, DateTime.MinValue, <= 0, etc., and to work with nullable types in general. And the only reason I don't choose nullable types more often is the itchy feeling in the back of my head that makes me feel like it's more than backwards compatibility that forces that extra '?' character to explicitly allow a null value. Is there anybody out there who always (or almost always) chooses nullable types rather than non-nullable types? Thanks for your time,
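    For the performance part of the question: a Nullable<T> is a struct that pairs the value with a HasValue flag, so it costs extra space, a guard on every use, and lifted operators that silently propagate null. A small illustration in plain C# (standard behavior, nothing assumed):

        int total = 0;
        int? maybe = null;

        total += 5;                          // non-nullable: always usable, no guard

        if (maybe.HasValue)                  // nullable: every use needs a guard...
            total += maybe.Value;
        total += maybe ?? 0;                 // ...or a ?? fallback

        int? sum = maybe + 5;                // lifted operator: null + 5 is null, not 5
        Console.WriteLine(sum.HasValue);     // False - a bug this quiet is the real cost

    The guard-or-propagate pattern, more than raw speed, is the usual argument for keeping non-nullable types wherever "no value" is not actually meaningful.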

    Read the article

  • T-SQL While Loop and concatenation

    - by JustinT
    I have a SQL query that is supposed to pull out records, concatenate each one to a string, then output that string. The important part of the query is below:

        DECLARE @counter int;                  SET @counter = 1;
        DECLARE @tempID varchar(50);           SET @tempID = '';
        DECLARE @tempCat varchar(255);         SET @tempCat = '';
        DECLARE @tempCatString varchar(5000);  SET @tempCatString = '';

        WHILE @counter <= @tempCount
        BEGIN
            SET @tempID = (SELECT [Val] FROM #vals WHERE [ID] = @counter);
            -- If no Categories row matches, this SELECT leaves @tempCat NULL,
            -- and concatenating NULL below makes @tempCatString NULL.
            SET @tempCat = (SELECT [Description] FROM [Categories] WHERE [ID] = @tempID);
            print @tempCat;
            SET @tempCatString = @tempCatString + '<br/>' + @tempCat;
            SET @counter = @counter + 1;
        END

    When the script runs, @tempCatString outputs as NULL, while @tempCat always outputs correctly. Is there some reason that concatenation won't work inside a WHILE loop? That seems wrong, since incrementing @counter works perfectly. So is there something else I'm missing?

    Read the article

  • Kill a node in dojo.dnd.Source?

    - by Soulhuntre
    Related to my SO issue at http://stackoverflow.com/questions/3010996/dojo-extending-dojo-dnd-source-move-not-happening-ideas/3012518#3012518 - I am now almost done. I have a dnd.Source derived class (we can consider it a dnd.Source for now) that has within it a node with a specific class:

        function declare_mockupSmartDndUl() {
            dojo.require("dojo.dnd.Source");
            dojo.provide("mockup.SmartDndUl");
            dojo.declare("mockup.SmartDndUl", dojo.dnd.Source, {
                markupFactory: function(params, node) {
                    //params._skipStartup = true;
                    return new mockup.SmartDndUl(node, params);
                },
                onDropExternal: function(source, nodes, copy) {
                    console.debug('onDropExternal called...');
                    // dojo.destroy(this.getAllNodes().query(".dndInstructions"));
                    this.inherited(arguments);
                    var x = source.getAllNodes().length;
                    if (x == 0) {
                        newnode = document.createElement('li');
                        newnode.innerHTML = "Hello!";
                        dojo.addClass(newnode, "dndInstructions");
                        source.node.appendChild(newnode);
                    }
                    return true;
                    // return dojo.dnd.Source.prototype.onDropExternal.call(this, source, nodes, copy);
                }
            });
        }

    You can see the place I mean from the dojo.destroy that is commented out, because it was totally n00b :) If I do this:

        var y = this.getAllNodes().query(".dndInstructions")

    the NodeList in y absolutely does contain the node. Now I need to kill it, nuke it, get it out of there: out of the dnd.Source, out of the DOM... gone. Any ideas how to do it safely? It will be the ONLY node in the list at the time we do whatever it is we are going to do to kill the thing. Thanks!

    Read the article

  • RadControl DateTimePicker Selecting new time doesn't remove highlight from previous selection

    - by Jason Beck
    This is not browser-specific: the behavior exists in Firefox and IE. The RadControl is being used within a User Control in a SiteFinity site. Very little customization has been done to the control:

        <telerik:RadDateTimePicker ID="RadDateTimePicker1" runat="server" MinDate="2010/1/1" Width="250px">
            <ClientEvents></ClientEvents>
            <TimeView starttime="08:00:00" endtime="20:00:00" interval="02:00:00"></TimeView>
            <DateInput runat="server" ID="DateInput"></DateInput>
        </telerik:RadDateTimePicker>

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                RadDateTimePicker1.MinDate = DateTime.Now;
            }
        }

    Read the article

  • undefined symbol: PyUnicodeUCS2_Decode whilst trying to install psycopg2

    - by Marco Fucci
    I'm getting an error whilst trying to install psycopg2 on Ubuntu 9.10 64-bit:

        >>> import psycopg2
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "psycopg2/__init__.py", line 69, in <module>
            from _psycopg import BINARY, NUMBER, STRING, DATETIME, ROWID
        ImportError: psycopg2/_psycopg.so: undefined symbol: PyUnicodeUCS2_Decode

    I've tried downloading the package from http://initd.org/pub/software/psycopg/ and installing it. I've tried easy_install too. There is no error during the installation. It's quite weird, as my Python (2.6.2) has been compiled with UCS4, so the installation should just work without problems. Any help would be appreciated. Cheers

    Read the article

  • ASP.NET MVC, Url Routing: Maximum Path (URL) Length

    - by Martin Aatmaa
    The Scenario: I have an application where we took the good old query string URL structure:

        ?x=1&y=2&z=3&a=4&b=5&c=6

    and changed it into a path structure:

        /x/1/y/2/z/3/a/4/b/5/c/6

    We're using ASP.NET MVC and (naturally) ASP.NET routing.

    The Problem: our parameters are dynamic, and there is (theoretically) no limit to the number of parameters that we need to accommodate. This is all fine until we got hit by the following train:

        HTTP Error 400.0 - Bad Request
        ASP.NET detected invalid characters in the URL.

    IIS would throw this error when our URL got past a certain length.

    The Nitty Gritty: here's what we found out. This is not an IIS problem. IIS does have a max path length limit, but the above error is not it (see "How to Use Request Filtering" on learn.iis.net, section "Filter Based on Request Limits"). If the path were too long for IIS, it would throw a 404.14, not a 400.0. Besides, the IIS max path (and query string) lengths are configurable:

        <requestLimits maxAllowedContentLength="30000000" maxUrl="260" maxQueryString="25" />

    This is an ASP.NET problem. After some poking around (see the IIS forums thread "ASP.NET 2.0 maximum URL length?", http://forums.iis.net/t/1105360.aspx), it turns out that this is an ASP.NET (well, .NET really) problem. The shit of the matter is that, as far as I can tell, ASP.NET cannot handle paths longer than 260 characters. The nail in the coffin is that this is confirmed by Phil the Haack himself (Stack Overflow question 265251, "ASP.NET url MAX_PATH limit").

    The Question: so what's the question? The question is, how big of a limitation is this? For my app, it's a deal killer. For most apps, it's probably a non-issue. What about disclosure? Nowhere where ASP.NET routing is mentioned have I ever heard a peep about this limitation. The fact that ASP.NET MVC uses ASP.NET routing makes the impact of this even bigger. What do you think?
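    One hedged workaround sketch, using only the standard ASP.NET routing API (the route, controller, and parameter names are hypothetical): funnel the dynamic filters through a single catch-all token and parse them yourself. The 260-character path limit still applies to the whole URL, so very long filter sets ultimately have to travel as a query string or POST body instead.

        // In Global.asax RegisterRoutes: {*filters} swallows the remainder of the
        // path, so any number of segments reaches one action as a single string.
        routes.MapRoute(
            "DynamicFilters",
            "products/filter/{*filters}",
            new { controller = "Products", action = "Filter" });

        // ProductsController:
        public ActionResult Filter(string filters)
        {
            var parts = (filters ?? string.Empty)
                .Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);
            var pairs = new Dictionary<string, string>();
            for (int i = 0; i + 1 < parts.Length; i += 2)
                pairs[parts[i]] = parts[i + 1];      // x/1/y/2 -> {x:1, y:2}
            return View(pairs);
        }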

    Read the article

  • How can parallelism affect number of results?

    - by spender
    I have a fairly complex query that looks something like this:

        create table Items(SomeOtherTableID int, SomeField int)
        create table SomeOtherTable(Id int, GroupID int)

        with cte1 as (
            select SomeOtherTableID, COUNT(*) SubItemCount
            from Items t
            where t.SomeField is not null
            group by SomeOtherTableID
        ), cte2 as (
            select tc.SomeOtherTableID,
                   ROW_NUMBER() over (partition by a.GroupID order by tc.SubItemCount desc) SubItemRank
            from Items t
            inner join SomeOtherTable a on a.Id = t.SomeOtherTableID
            inner join cte1 tc on tc.SomeOtherTableID = t.SomeOtherTableID
            where t.SomeField is not null
        ), cte3 as (
            select SomeOtherTableID from cte2 where SubItemRank = 1
        )
        select *
        from cte3 t1
        inner join cte3 t2 on t1.SomeOtherTableID < t2.SomeOtherTableID
        option (maxdop 1)

    The query is such that cte3 is filled with 6222 distinct results. In the final select, I am performing a cross join of cte3 with itself (so that I can compare every value in the table with every other value in the table at a later point). Notice the final line:

        option (maxdop 1)

    Apparently, this switches off parallelism. So, with 6222 result rows in cte3, I would expect (6222*6221)/2, or 19353531, results in the subsequent cross-joining select, and with the final maxdop line in place, that is indeed the case. However, when I remove the maxdop line, the number of results jumps to 19380454. I have 4 cores on my dev box. WTF? Can anyone explain why this is? Do I need to reconsider previous queries that cross join in this way?

    Read the article

  • Getting Started with CacheMoney

    - by Matt Grande
    I recently installed cache-money. After some difficulties getting memcached and cache-money set up, I thought I had it working: it cached the one query on my login page fine. Then I log in, go to my message index page, and get this error:

        indices delegated to @cache_config.indices, but @cache_config is nil:
        Slug(id: integer, name: string, sluggable_id: integer, sequence: integer,
             sluggable_type: string, scope: string, created_at: datetime)

    Searching for the first part of that error message returns 0 hits on Google, so I'm at a loss as to where to even begin. Any suggestions?

    Read the article

  • F# Optional Record Field

    - by akaphenom
    I have an F# record type and want one of the fields to be optional:

        type legComponents = {
            shares : int<share> ;
            price : float<dollar / share> ;
            totalInvestment : float<dollar> ;
        }

        type tradeLeg = {
            id : int ;
            tradeId : int ;
            legActivity : LegActivityType ;
            actedOn : DateTime ;
            estimates : legComponents ;
            ?actuals : legComponents ;
        }

    In the tradeLeg type I would like the actuals field to be optional. I can't seem to figure it out, nor can I seem to find a reliable example on the web. It seems like this should be easy, like:

        let ?t : int = None

    but I really can't seem to get this to work. Ugh - thank you, T

    Read the article

  • Linq to SQL not inserting data into the DB

    - by Jesus Rodriguez
    Hello! I have a little / weird behaviour here, and I've been looking over the internet and SO and didn't find an answer. I have to admit that this is my first time actually using databases; I know SQL, but have never used a database in an application. Anyway, I have a problem with my app inserting data. I created a very simple project for testing, and no solution yet. I have an example database in SQL Server:

        Id - int (identity, primary key)
        Name - nchar(10) (not null)

    The table is called "Person", simple as pie. I have this:

        static void Main(string[] args)
        {
            var db = new ExampleDBDataContext {Log = Console.Out};
            var jesus = new Person {Name = "Jesus"};
            db.Persons.InsertOnSubmit(jesus);
            db.SubmitChanges();

            var query = from person in db.Persons
                        select person;
            foreach (var p in query)
            {
                Console.WriteLine(p.Name);
            }
        }

    As you can see, nothing strange. It shows Jesus in the console. But if you look at the table data, there is no data, just empty. I comment out the object creation and insertion, and the foreach doesn't print a thing (normal, there is no data in the database). The weird thing is that I created a row in the database manually and the Id was 2, not 1 (was LINQ really playing with the database but it didn't create the row?). Here is the log:

        INSERT INTO [dbo].Person VALUES (@p0)
        SELECT CONVERT(Int,SCOPE_IDENTITY()) AS [value]
        -- @p0: Input NChar (Size = 10; Prec = 0; Scale = 0) [Jesus]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926

        SELECT [t0].[Id], [t0].[Name]
        FROM [dbo].[Person] AS [t0]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926

    I am really confused. All the blogs / books use this kind of snippet to insert an element into a database. Thank you for helping.
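    A hedged first check, using only the standard DataContext.Connection property: when inserts "vanish" like this even though the log shows the INSERT and the identity moved forward, the context is quite often pointed at a different copy of the database than the one being inspected (classically, an .mdf copied into bin\Debug by an AttachDbFilename-style connection string). Printing the connection string shows which file or server the rows actually went to:

        var db = new ExampleDBDataContext();
        Console.WriteLine(db.Connection.ConnectionString);   // which database did the insert hit?

    If it names a file path rather than the server database opened in the designer, the two are separate copies, which would match every symptom described: the query reads its own insert back, the identity advances, and the "other" database stays empty.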

    Read the article

  • Designing a fluid Javascript interface to abstract away the asynchronous nature of AJAX

    - by Anurag
    How would I design an API to hide the asynchronous nature of AJAX and HTTP requests, or basically delay it, to provide a fluent interface? To show an example from Twitter's new Anywhere API:

        // get @ded's first 20 statuses, filter only the tweets that
        // mention photography, and render each into an HTML element
        T.User.find('ded').timeline().first(20).filter(filterer).each(function(status) {
            $('div#tweets').append('<p>' + status.text + '</p>');
        });

        function filterer(status) {
            return status.text.match(/photography/);
        }

    vs. this (where the asynchronous nature of each call is clearly visible):

        T.User.find('ded', function(user) {
            user.timeline(function(statuses) {
                statuses.first(20).filter(filterer).each(function(status) {
                    $('div#tweets').append('<p>' + status.text + '</p>');
                });
            });
        });

    It finds the user, gets their tweet timeline, filters only the first 20 tweets, applies a custom filter, and ultimately uses the callback function to process each tweet. I am guessing that a well-designed API like this should work like a query builder (think ORMs), where each function call builds the query (the HTTP URL in this case) until it hits a looping function such as each/map/etc.; at that point the HTTP call is made and the passed-in function becomes the callback. An easy development route would be to make each AJAX call synchronous, but that's probably not the best solution. I am interested in figuring out a way to make it asynchronous, and still hide the asynchronous nature of AJAX.

    Read the article

  • Child objects in MongoDB

    - by Jeremy B.
    I have been following along with Rob Conery's Linq for MongoDB and have come across a question. In his example he shows how you can easily nest a child object. For my current experiment I have the following structure:

        class Content
        {
            ...
            Profile Profile { get; set; }
        }

        class Profile
        {
            ...
        }

    This works great when looking at content items. The dilemma I'm facing now is what to do if I want to treat the Profile as an atomic object. As it stands, it appears I cannot query the Profile object directly; it only comes packaged inside Content results. If I want it to be inclusive, but also be able to query just Profile, my first instinct would be to make Profiles a top-level object and then create a foreign-key-like structure under the Content class to tie the two together. To me that feels like falling back on RDBMS practices, and that feels like it goes against the spirit of Mongo. How would you treat an object you need to act upon independently, yet also want as a child object of another object?
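    For comparison, a sketch of the reference-style alternative described above (hypothetical names; ObjectId stands in for whatever id type the driver exposes): Profile lives in its own collection, and Content stores only the id.

        class Profile
        {
            public ObjectId Id { get; set; }
            // ... profile fields ...
        }

        class Content
        {
            public ObjectId Id { get; set; }
            public ObjectId ProfileId { get; set; }   // manual reference instead of embedding
            // ... content fields ...
        }

    Loading a content item's profile then takes a second query by ProfileId rather than a join. In document stores this is the conventional trade-off: embed when the child only ever appears inside the parent, reference when the child must also stand alone, and some schemas do both by embedding a denormalized copy while keeping the authoritative document in its own collection.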

    Read the article

  • Java (Tomcat): how to configure a cookieless subdomain to serve static content

    - by Webinator
    One of the tips given by both Google and Yahoo! to speed up webpage loading is to configure a cookieless subdomain to serve static content. How do you configure a "cookieless subdomain" using Tomcat in standalone mode? (This question is not about how to use Apache to serve static content in a cookieless way, but about how to do it with Tomcat standalone.) Note that I don't care about filters supporting If-Modified-Since, nor about filters supporting gzipping: the static content I'm serving is forever cacheable (or its name will change), and it is already compressed data (so gzip would only slow down the transfer). Do I need two different Tomcat webapps (one "cookiefull" and one "cookieless")? Do I need two different servlets? (As of now I've got only one dispatcher/controller servlet.) Why would a "regular" link to, say, a static image be called in a cookiefull way when it is on the same domain as the main webapp, and then be called in a "cookieless" way when it is on a subdomain? I don't understand exactly what is going on: is it the browser that decides whether to append cookies to the request? If so, why would it not append the cookies to a static request on a "cookieless" subdomain? Any example as to what is going on behind the scenes is most welcome :)

    Read the article
