Search Results

Search found 21434 results on 858 pages for 'query master'.


  • How to best handle exception to repeating calendar events

    - by blcArmadillo
    I'm working on a project that will require me to implement a calendar. I'm trying to come up with a system that is very flexible: it can handle repeating events, exceptions to repeats, etc. I've looked at the schemas of applications like iCal, Lotus Notes, and Mozilla to get an idea of how to go about implementing such a system. Currently I'm having trouble deciding on the best way to handle exceptions to repeating events. I've used databases quite a bit but don't have much experience with really optimizing everything, so I'm not sure which of the two methods I'm considering would be optimal in terms of overall performance and the ability to query/search:

    1. Breaking the repeating event: change the end date on the current row for the repeating event, insert a new row with the exception, and add another row continuing the old sequence.
    2. Simply adding an exception: add a new row with some field that flags it as an override.

    Here is why I can't decide. Method one will result in a lot more rows, since each edit requires two extra rows as opposed to only one row with the second method. On the other hand, I think the query to find an event would be much simpler, and thus possibly faster, with the first method. The second method seems like it will require more computation on the application server, since once you get the data you'll have to remove the intersection of the two rows. I know databases are often the bottleneck for websites, and while I'm sure a lot of you are thinking either is fine because my project will probably never get large enough for the difference in efficiency to really matter, I'd still like to implement the best solution. So which method would you guys pick, or would you do something completely different? Also, as a side note, I'll be using MySQL and PHP. If there is another technology that you think would be better suited for this, especially in the database area, please mention it. Thanks for the advice.
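    For concreteness, here is a minimal sketch of the second approach (an override table) in MySQL. All table and column names are hypothetical, not taken from any of the schemas mentioned above:

        -- One row per event or repeating series.
        CREATE TABLE events (
            id           INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            title        VARCHAR(255) NOT NULL,
            starts_at    DATETIME NOT NULL,
            repeat_rule  VARCHAR(64) NULL,   -- e.g. 'WEEKLY'; NULL means a one-off event
            repeat_until DATETIME NULL       -- NULL means the series repeats forever
        );

        -- One row per overridden occurrence of a series.
        CREATE TABLE event_overrides (
            event_id      INT UNSIGNED NOT NULL,  -- FK to events.id
            occurs_on     DATE NOT NULL,          -- which occurrence is overridden
            new_starts_at DATETIME NULL,          -- moved occurrence; NULL if cancelled
            is_cancelled  TINYINT(1) NOT NULL DEFAULT 0,
            PRIMARY KEY (event_id, occurs_on)
        );

    Expanding a series for a date range then means generating the occurrences from repeat_rule in PHP and LEFT JOINing event_overrides on (event_id, occurs_on) to apply moves and cancellations; that is exactly the extra application-server work the second method trades for fewer rows.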


  • Autocomplete or Select box? (design problem)

    - by Craig Whitley
    I'm working on a comparison website, so needless to say the search function is the primary feature of the site. I have two input text boxes and a search button. At the moment, the input text boxes use Ajax to query the database and show a drop-down box, but I'm wondering if it would be more intuitive to use a select box instead? The second box is dependent on the first: when the first is selected, another Ajax query runs so that only the options available for the first selection appear in the second autocomplete box.

    Autocomplete pros:
    - "Feels" right?
    - Looks more appealing than a select box (CSS design)?
    Cons:
    - The user has to be instructed on how to use the search (made to think?)
    - Only really works off the bat with JavaScript enabled.
    - The user may get confused if they type in what they want and no box appears (i.e., no results).

    Select box pros:
    - Can bring up the list of options / know what's there from the outset.
    - We use select boxes every day (locations etc.), so we're used to how they work (more intuitive?).
    Cons:
    - Can look a little unappealing when there are too many options to choose from. I'm thinking at most around 100 options for my site over time.

    Any thoughts on how I could go about this would be appreciated!


  • Threading best practice when using SFTP in C#

    - by Christian
    OK, this is more one of these "conceptual questions", but I hope I get some pointers in the right direction. First the desired scenario:

    - I want to query an SFTP server for directory and file lists
    - I want to upload or download files simultaneously

    Both things are pretty easy using the SFTP class provided by Tamir.SharpSsh, but with only one thread it is kind of slow. Especially the recursion into subdirectories gets very "UI blocking", because we are talking about 10,000s of directories. My basic approach is simple: create some kind of "pool" where I keep 10 open SFTP connections. Then query the first worker for a list of dirs. Once that list is obtained, send the next free workers (e.g. 1-10; the first one is also free again) to get the subdirectory details. As soon as a worker is free, send it for the sub-subdirectories. And so on...

    I know the ThreadPool and simple Threads, and did some tests. What confuses me a little bit is the following. I basically need:

    - A list of threads I create, say 10
    - To connect all threads to the server
    - If a connection drops, to create a new thread / SFTP client
    - If there is work to do, to take the first free thread and hand it the work

    I am currently not sure about the implementation details, especially the "work to do" and the "maintain list of threads" parts. Is it a good idea to:

    - Enclose the work in an object containing a job description (path) and a callback
    - Send the threads into an infinite loop with a 100 ms wait, waiting for work
    - If SFTP is dead, either revive it, or kill the whole thread and create a new one

    And how do I encapsulate this: do I write my own "10ThreadsManager", or are there some out there already? OK, so far... By the way, I could also use PRISM events and commands, but I think the problem is unrelated. Perhaps the EventModel to signal completed processing of a "work package"... Thanks for any ideas and criticism. Chris


  • Transferring FSMO roles over VPN

    - by Tom Bowman
    I have a server located at one of our offices which is quite old and is due to be upgraded soon. This server holds the FSMO roles. I have another server in another office; both are DCs in the same domain, both are replicated, and both run Server 2003 Standard. I need to transfer the FSMO roles from the old server to the one I have in the other office before I upgrade. I am also looking at bringing in an Exchange 2010 server, but I can't install/configure that until I transfer the roles, as it needs to be at the same site as the schema master. My question really is: as both servers replicate over a VPN, how quickly will the roles transfer, and will there be downtime? I need to make sure that while the transfer is running, both servers will service logons and share files. Or would it be better to do it out of hours? Many thanks, and apologies if I've missed anything out. Regards, Tom


  • How to configure replication? - This database is not enabled for publication.

    - by truthseeker
    Hi, I'm trying to configure replication on SQL Server 2005. I can do it using the wizard, but when I try to run the scripts generated by the wizard, these error messages appear:

        Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addpublication, Line 159
        This database is not enabled for publication.
        Msg 18757, Level 16, State 1, Procedure sp_MSrepl_addpublication_snapshot, Line 66
        Unable to execute procedure. The database is not published. Execute the procedure in a database that is published for replication.
        Msg 14013, Level 16, State 1, Procedure sp_MSrepl_addarticle, Line 168
        This database is not enabled for publication.
        Msg 14294, Level 16, State 1, Procedure sp_verify_job_identifiers, Line 25
        Supply either @job_id or @job_name to identify the job.

    It's a bit strange, because when I run this query against a database where I clicked through and then removed a publication, everything goes well. The problem occurs when I use my query on a new database. What's more, I'm using the sp_replicationdboption stored procedure, and when I try to run it, it says:

        The replication option 'publish' of database 'ReplicationTest00' has already been set to true.

    Please help me resolve this issue.
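    A minimal sketch of the usual ordering, assuming the target database is the ReplicationTest00 named in the error text (whether database context is the actual cause here is a guess): publishing is enabled from master, and the wizard-generated publication procedures then run in the context of the database being published:

        USE master;
        EXEC sp_replicationdboption
            @dbname  = N'ReplicationTest00',
            @optname = N'publish',
            @value   = N'true';

        USE ReplicationTest00;
        -- ...the generated sp_addpublication / sp_addarticle calls go here...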


  • Error exporting data using MySQL Workbench

    - by Rajneesh Rana
    Hi, I had been getting a version-mismatch warning when trying to export a data dump using MySQL Workbench, so I copied mysqldump from the MySQL server folder and placed it in the Workbench folder. Now when I try to export data I get the error "Operation failed with exitcode -1073741819". Here is an entry from the log:

        16:31:25 Dumping wordpress (wp_posts)
        Running: "mysqldump.exe" --defaults-extra-file="c:\docume~1\rajneesh.r\locals~1\temp\1\tmpxau7tz" --no-create-info=FALSE --order-by-primary=FALSE --force=FALSE --no-data=FALSE --tz-utc=TRUE --flush-privileges=FALSE --compress=FALSE --replace=FALSE --host=localhost --insert-ignore=FALSE --extended-insert=TRUE --user=root --quote-names=TRUE --hex-blob=FALSE --complete-insert=FALSE --add-locks=TRUE --port=3306 --disable-keys=TRUE --delayed-insert=FALSE --create-options=TRUE --delete-master-logs=FALSE --comments=TRUE --default-character-set=utf8 --max_allowed_packet=1G --flush-logs=FALSE --dump-date=TRUE --lock-tables=TRUE --allow-keywords=FALSE --events=FALSE "wordpress" "wp_posts"
        Operation failed with exitcode -1073741819

    Please help me with these issues. Thank you


  • NHibernate - joining on a subquery using ICriteria

    - by owensymes.mp
    I have a SQL query that I need to represent using NHibernate's ICriteria API:

        SELECT u.Id as Id, u.Login as Login, u.FirstName as FirstName,
               u.LastName as LastName, gm.UserGroupId_FK as UserGroupId,
               inner.Data1, inner.Data2, inner.Data3
        FROM dbo.User u
        inner join dbo.GroupMember gm on u.Id = gm.UserAnchorId_FK
        left join (
            SELECT di.UserAnchorId_FK,
                   sum(di.Data1) as Data1,
                   sum(di.Data2) as Data2,
                   sum(di.Data3) as Data3
            FROM dbo.DailyInfo di
            WHERE di.Date between '2009-04-01' and '2009-06-01'
            GROUP BY di.UserAnchorId_FK
        ) inner ON inner.UserAnchorId_FK = u.Id
        WHERE gm.UserGroupId_FK = 195

    Attempts so far have included mapping 'User' and 'DailyInfo' classes (my entities) and making a DailyInfo object a property of the User object. However, how to map the foreign key relationship between them is still a mystery, i.e.:

        <one-to-one></one-to-one>
        <one-to-many></one-to-many>
        <generator class="foreign"><param name="property">Id</param></generator> (!)

    Solutions on the web generally deal with subqueries within a WHERE clause; however, I need to left join on this subquery instead, to ensure NULL values are returned for rows that do not join. I have the feeling that I should be using a Criteria for the outer query, then forming a 'join' with a DetachedCriteria to represent the subquery?


  • T-SQL While Loop and concatenation

    - by JustinT
    I have a SQL query that is supposed to pull out a record, concatenate each result onto a string, and then output that string. The important part of the query is below.

        DECLARE @counter int;                 SET @counter = 1;
        DECLARE @tempID varchar(50);          SET @tempID = '';
        DECLARE @tempCat varchar(255);        SET @tempCat = '';
        DECLARE @tempCatString varchar(5000); SET @tempCatString = '';

        WHILE @counter <= @tempCount
        BEGIN
            SET @tempID = (SELECT [Val] FROM #vals WHERE [ID] = @counter);
            SET @tempCat = (SELECT [Description] FROM [Categories] WHERE [ID] = @tempID);
            print @tempCat;
            SET @tempCatString = @tempCatString + '<br/>' + @tempCat;
            SET @counter = @counter + 1;
        END

    When the script runs, @tempCatString outputs as null, while @tempCat always outputs correctly. Is there some reason that concatenation won't work inside a WHILE loop? That seems wrong, since incrementing @counter works perfectly. So is there something else I'm missing?
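    One property worth illustrating here as a sketch (not necessarily the asker's actual fix): in T-SQL, concatenating NULL with a string yields NULL by default, so a single iteration where the Categories lookup finds no row turns the whole accumulated string NULL, even though PRINT of the other iterations looks fine. An ISNULL guard makes the difference visible:

        -- Hypothetical guard: if the SELECT returns no row, @tempCat becomes NULL,
        -- and @tempCatString + '<br/>' + NULL is NULL. ISNULL keeps the string alive.
        SET @tempCat = ISNULL(
            (SELECT [Description] FROM [Categories] WHERE [ID] = @tempID), '');
        SET @tempCatString = @tempCatString + '<br/>' + @tempCat;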


  • How to recover bitlocker encrypted partition that is now 'unallocated'/'free space'?

    - by Atishay Jain
    My hard drive had 5 partitions, including one (some 4-5 GB) BitLocker-encrypted one. When I used Disk Management I could view 2 partitions (24.4 GB and 8.94 GB) in green colour, labeled as empty space. I wanted to merge them, so I used MiniTool Partition Wizard for the purpose. I don't know what that software did, but all I was left with was 2 partitions and lots of green free space. I recovered 2 partitions using EaseUS Partition Master, but the BitLocker-encrypted partition cannot be found by it (nor by MiniTool Partition Recovery). Now Disk Management shows 2 free-space partitions of 28.36 GB and 8.94 GB respectively. Here is a screenshot: http://s14.postimage.org/4tvij041t/Screen_Shot003.jpg

    Please tell me a way to recover the BitLocker-encrypted partition that is showing as free space in Disk Management. P.S. It contains very important data.


  • Linux Ubuntu: Updating the GRUB menu

    - by Mr X
    Apparently there are 2 versions of Linux on my laptop (I have a dual-boot system with Linux and Windows 8, but Linux is the primary OS). One of them uses kernel version 3.11.0, whose headers and source code are incompatible with my wireless driver. My laptop is a Toshiba Satellite with a Realtek RTL8188CE wireless LAN. The newer kernel version is still the first option on the menu, so to get to a working Linux I use "Previous versions of Linux", which runs kernel version 3.2.0-55. What can I do to update the GRUB menu so that the Linux version with the 3.2.0-55 kernel appears as the primary option? Do I need to get rid of/uninstall the newer kernel version? How can I do this without screwing up Linux entirely?


  • Kill a node in dojo.dnd.Source?

    - by Soulhuntre
    Related to my SO issue at http://stackoverflow.com/questions/3010996/dojo-extending-dojo-dnd-source-move-not-happening-ideas/3012518#3012518 I am now almost done. I have a dnd.Source-derived class (we can consider it a dnd.Source for now) that has within it a node with a specific class:

        function declare_mockupSmartDndUl(){
            dojo.require("dojo.dnd.Source");
            dojo.provide("mockup.SmartDndUl");
            dojo.declare("mockup.SmartDndUl", dojo.dnd.Source, {
                markupFactory: function(params, node){
                    //params._skipStartup = true;
                    return new mockup.SmartDndUl(node, params);
                },
                onDropExternal: function(source, nodes, copy){
                    console.debug('onDropExternal called...');
                    // dojo.destroy(this.getAllNodes().query(".dndInstructions"));
                    this.inherited(arguments);
                    var x = source.getAllNodes().length;
                    if( x == 0 ){
                        newnode = document.createElement('li');
                        newnode.innerHTML = "Hello!";
                        dojo.addClass(newnode, "dndInstructions");
                        source.node.appendChild(newnode);
                    }
                    return true;
                    // return dojo.dnd.Source.prototype.onDropExternal.call(this, source, nodes, copy);
                }
            });
        }

    You can see the place I mean from the dojo.destroy that is commented out, because it was totally n00b :) If I do

        var y = this.getAllNodes().query(".dndInstructions")

    the nodelist in y absolutely does contain the node. Now I need to kill it, nuke it, get it out of there: out of the dnd.Source, out of the DOM... gone. Any ideas how to do it safely? It will be the ONLY node in the list at the time we do whatever it is we are going to do to kill the thing. Thanks!


  • ASP.NET MVC, Url Routing: Maximum Path (URL) Length

    - by Martin Aatmaa
    The scenario: I have an application where we took the good old query-string URL structure:

        ?x=1&y=2&z=3&a=4&b=5&c=6

    and changed it into a path structure:

        /x/1/y/2/z/3/a/4/b/5/c/6

    We're using ASP.NET MVC and (naturally) ASP.NET routing.

    The problem: our parameters are dynamic, and there is (theoretically) no limit to the number of parameters that we need to accommodate. This is all fine until we got hit by the following train:

        HTTP Error 400.0 - Bad Request
        ASP.NET detected invalid characters in the URL.

    IIS would throw this error when our URL got past a certain length.

    The nitty gritty: here's what we found out. This is not an IIS problem. IIS does have a max path length limit, but the above error is not it (see learn.iis.net, "How to Use Request Filtering", section "Filter Based on Request Limits"). If the path were too long for IIS, it would throw a 404.14, not a 400.0. Besides, the IIS max path (and query string) lengths are configurable:

        <requestLimits maxAllowedContentLength="30000000" maxUrl="260" maxQueryString="25" />

    This is an ASP.NET problem. After some poking around (IIS forums thread "ASP.NET 2.0 maximum URL length?", http://forums.iis.net/t/1105360.aspx), it turns out that this is an ASP.NET (well, .NET really) problem. The crux of the matter is that, as far as I can tell, ASP.NET cannot handle paths longer than 260 characters. The nail in the coffin is that this is confirmed by Phil the Haack himself (Stack Overflow question 265251, "ASP.NET url MAX_PATH limit").

    The question: so what's the question? The question is, how big of a limitation is this? For my app, it's a deal killer. For most apps, it's probably a non-issue. What about disclosure? Nowhere that ASP.NET routing is mentioned have I ever heard a peep about this limitation. The fact that ASP.NET MVC uses ASP.NET routing makes the impact of this even bigger. What do you think?


  • How can parallelism affect number of results?

    - by spender
    I have a fairly complex query that looks something like this:

        create table Items(SomeOtherTableID int, SomeField int);
        create table SomeOtherTable(Id int, GroupID int);

        with cte1 as (
            select SomeOtherTableID, COUNT(*) SubItemCount
            from Items t
            where t.SomeField is not null
            group by SomeOtherTableID
        ), cte2 as (
            select tc.SomeOtherTableID,
                   ROW_NUMBER() over (partition by a.GroupID order by tc.SubItemCount desc) SubItemRank
            from Items t
            inner join SomeOtherTable a on a.Id = t.SomeOtherTableID
            inner join cte1 tc on tc.SomeOtherTableID = t.SomeOtherTableID
            where t.SomeField is not null
        ), cte3 as (
            select SomeOtherTableID
            from cte2
            where SubItemRank = 1
        )
        select *
        from cte3 t1
        inner join cte3 t2 on t1.SomeOtherTableID < t2.SomeOtherTableID
        option (maxdop 1)

    The query is such that cte3 is filled with 6222 distinct results. In the final select, I am performing a cross join of cte3 with itself (so that I can compare every value in the table with every other value in the table at a later point). Notice the final line:

        option (maxdop 1)

    Apparently, this switches off parallelism. So, with 6222 result rows in cte3, I would expect (6222*6221)/2, or 19353531, results in the subsequent cross-joining select, and with the final maxdop line in place, that is indeed the case. However, when I remove the maxdop line, the number of results jumps to 19380454. I have 4 cores on my dev box. WTF? Can anyone explain why this is? Do I need to reconsider previous queries that cross join in this way?
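    One observation worth adding as a sketch (it may or may not be the cause here): the ROW_NUMBER ordering above uses only tc.SubItemCount, which is not unique, so when several rows in a GroupID tie on SubItemCount, which of them receives SubItemRank = 1 is not deterministic and can differ between execution plans. Appending a unique tie-breaker pins the ranking down regardless of the plan chosen:

        -- Same ranking as above, but with a unique tie-breaker column appended,
        -- so the rank-1 row per GroupID is the same in every plan:
        ROW_NUMBER() over (
            partition by a.GroupID
            order by tc.SubItemCount desc, tc.SomeOtherTableID asc
        ) SubItemRank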


  • Settings on php.ini ignored

    - by bfavaretto
    I can't get my server to obey the settings in php.ini (I'm trying to change memory_limit and upload_max_filesize). As far as I can tell, I'm editing the correct file; phpinfo() gives:

        Loaded Configuration File    /etc/php.ini

    The file permissions are 644. There are also some extra .ini files in /etc/php.d, but none include any of the keys I'm trying to change. No matter what I do, phpinfo() reports the default values in both the "Local" and "Master" columns. I also scanned my Apache config files, but found nothing related to PHP (besides loading the PHP module). The only way I was able to change those settings was by adding some php_value lines to my .htaccess. Is there something obvious I'm missing? This is a virtual server, and I can perform root commands with sudo. I'm running Apache 2.1.3 and PHP 5.3.3. System info (from uname -a) is:

        Linux sesctbapp01 2.6.18-308.1.1.el5 #1 SMP Wed Mar 7 04:16:51 EST 2012 x86_64


  • Linq to SQL not inserting data into the DB

    - by Jesus Rodriguez
    Hello! I have a little/weird behaviour here, and I've looked over the internet and SO and didn't find an answer. I have to admit that this is my first time using databases: I know SQL, but have never actually used it in an application. Anyway, I have a problem with my app inserting data. I created a very simple project for testing, and no solution yet. I have an example database on SQL Server:

        Id   - int (identity, primary key)
        Name - nchar(10) (not null)

    The table is called "Person", simple as pie. I have this:

        static void Main(string[] args)
        {
            var db = new ExampleDBDataContext {Log = Console.Out};
            var jesus = new Person {Name = "Jesus"};
            db.Persons.InsertOnSubmit(jesus);
            db.SubmitChanges();

            var query = from person in db.Persons select person;
            foreach (var p in query)
            {
                Console.WriteLine(p.Name);
            }
        }

    As you can see, nothing strange. It shows Jesus in the console. But if you look at the table data, there is no data, just empty. If I comment out the object creation and insertion, the foreach doesn't print a thing (normal, as there is no data in the database). The weird thing is that I created a row in the database manually and the Id was 2, not 1 (was LINQ really talking to the database but not creating the row?). Here is the log:

        INSERT INTO [dbo].Person VALUES (@p0)
        SELECT CONVERT(Int,SCOPE_IDENTITY()) AS [value]
        -- @p0: Input NChar (Size = 10; Prec = 0; Scale = 0) [Jesus]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926

        SELECT [t0].[Id], [t0].[Name]
        FROM [dbo].[Person] AS [t0]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926

    I am really confused. All the blogs/books use this kind of snippet to insert an element into a database. Thank you for helping.


  • How can I connect to my ACT database to export data?

    - by Adam Gessel
    I am trying to export data from the MSSQL server that ACT uses (ACT 2005). I have tried tons of different things: starting the MSSQL server in single-user mode (still can't log in), copying the .mdf files from it and putting them on another server (it complains that master.mdf, and almost every other file, has the same name as an existing database), and putting Administrator in the group that the MSSQL instance runs under. Nothing seems to work! Can anybody with experience with this help me out? Thanks!
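    For the name-collision part specifically, a hedged sketch (the file names and paths here are hypothetical, and only the ACT user database should be copied, not system databases like master.mdf): SQL Server can attach copied data files under a brand-new database name, which sidesteps the "same name as another database" complaint:

        -- Attach the copied ACT data/log files as a new database named ACTExport:
        CREATE DATABASE ACTExport
        ON (FILENAME = 'C:\temp\ACT2005Data.mdf'),
           (FILENAME = 'C:\temp\ACT2005Data_log.ldf')
        FOR ATTACH;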


  • Child objects in MongoDB

    - by Jeremy B.
    I have been following along with Rob Conery's LINQ for MongoDB and have come across a question. In the example he shows how you can easily nest a child object. For my current experiment I have the following structure:

        class Content {
            ...
            Profile Profile { get; set; }
        }

        class Profile {
            ...
        }

    This works great when looking at content items. The dilemma I'm facing now is what to do if I want to treat the Profile as an atomic object. As it stands, it appears that I cannot query the Profile object directly; it only comes packaged with Content results. If I want it to be inclusive, but also be able to query on just Profile, my first instinct would be to make Profiles a top-level object and then create a foreign-key-like structure under the Content class to tie the two together. To me it feels like I'm falling back on RDBMS practices, and that feels like going against the spirit of Mongo. How would you treat an object you need to act upon independently, yet also want as a child object of another object?


  • Designing a fluid Javascript interface to abstract away the asynchronous nature of AJAX

    - by Anurag
    How would I design an API to hide the asynchronous nature of AJAX and HTTP requests, or basically delay it to provide a fluent interface? To show an example from Twitter's new Anywhere API:

        // get @ded's first 20 statuses, filter only the tweets that
        // mention photography, and render each into an HTML element
        T.User.find('ded').timeline().first(20).filter(filterer).each(function(status) {
            $('div#tweets').append('<p>' + status.text + '</p>');
        });

        function filterer(status) {
            return status.text.match(/photography/);
        }

    versus this (where the asynchronous nature of each call is clearly visible):

        T.User.find('ded', function(user) {
            user.timeline(function(statuses) {
                statuses.first(20).filter(filterer).each(function(status) {
                    $('div#tweets').append('<p>' + status.text + '</p>');
                });
            });
        });

    It finds the user, gets their tweet timeline, filters only the first 20 tweets, applies a custom filter, and ultimately uses the callback function to process each tweet. I am guessing that a well-designed API like this should work like a query builder (think ORMs), where each function call builds the query (an HTTP URL in this case) until it hits a looping function such as each/map/etc.; then the HTTP call is made and the passed-in function becomes the callback. An easy development route would be to make each AJAX call synchronous, but that's probably not the best solution. I am interested in figuring out a way to make it asynchronous, and still hide the asynchronous nature of AJAX.


  • Java (Tomcat): how to configure a cookieless subdomain to serve static content

    - by Webinator
    One of the tips given by both Google and Yahoo! to speed up webpage loading is to configure a cookieless subdomain to serve static content. How do you configure a "cookieless subdomain" using Tomcat in standalone mode? (This question is not about how to use Apache to serve static content in a cookieless way, but about how to do it with Tomcat standalone.) Note that I don't care about filters supporting If-Modified-Since, nor about filters supporting gzipping: the static content I'm serving is forever cacheable (or its name will change), and it is already compressed data (so gzip would only slow down the transfer).

    Do I need two different Tomcat webapps (one "cookiefull" and one "cookieless")? Do I need two different servlets? (As of now I've got only one dispatcher/controller servlet.) Why would a "regular" link to, say, a static image be requested in a cookiefull way when it is on the same domain as the main webapp, and then in a "cookieless" way when it is on a subdomain? I don't understand exactly what is going on: is it the browser that decides whether to attach cookies to the request? If so, why would it not attach the cookies to a static request on a "cookieless" subdomain? Any example as to what is going on behind the scenes is most welcome :)


  • Standard packages list

    - by Valintinr
    I'm learning the Puppet system and now need to do the following task. We have a few servers with the same OS (Altlinux p6, t6) as puppet agents, and a puppet master. Some packages are installed on the agents, e.g. 200 packages on the first, 300 on the second, and so on, but only 180 of them are necessary. We know the names of the necessary packages, but we don't know the names of the others (the unnecessary packages). So the task: how can I check for and install (if not yet installed) the necessary packages, and delete the other packages (whose names we don't know)? Help please. WBR, Valentin


  • PostgreSQL has no service name on CentOS

    - by Kyle MacFarlane
    I installed PostgreSQL in a pretty standard way on CentOS 5.5:

        rpm -ivh http://yum.pgrpms.org/reporpms/9.0/pgdg-centos-9.0-2.noarch.rpm
        yum install postgresql90-server postgresql90-contrib
        chkconfig postgresql-90 on
        /etc/init.d/postgresql-90 initdb

    But for some reason I can't use it with the service command, because it has no name. E.g. if I do

        service --status-all

    I get back the following:

        master (pid 3095) is running...
        (pid 3009) is running...
        rdisc is stopped

    Or even just /etc/init.d/postgresql-90 status:

        (pid 3009) is running...

    So how can I give it a name, so that I don't have to type out the whole init script path each time?


  • SQL - Count grouped entries and then get the max values grouped by date

    - by Marcus
    Hello, I am out of ideas for how to write the right SQL statement. I've got a SQLite table holding every played track in a row, with the played date/time. Now I want to count the plays of all artists, grouped by day, and then find the artist with the maximum play count per day. I used this query:

        SELECT COUNT(ARTISTID) AS artistcount,
               ARTIST AS artistname,
               strftime('%Y-%m-%d', playtime) AS day_played
        FROM playcount
        GROUP BY artistname

    to get this result:

        "93"|"The Skygreen Leopards"|"2010-06-16"
        "2" |"Arcade Fire"          |"2010-06-15"
        "2" |"Dead Kennedys"        |"2010-06-15"
        "2" |"Wolf People"          |"2010-06-15"
        "3" |"16 Horsepower"        |"2010-06-15"
        "3" |"Alela Diane"          |"2010-06-15"
        "46"|"Motorama"             |"2010-06-15"
        "1" |"Ariel Pink's Haunted" |"2010-06-14"

    I then tried to query this virtual table, but I always get false results in artistname:

        SELECT MAX(artistcount), artistname, day_played
        FROM (
            SELECT COUNT(ARTISTID) AS artistcount,
                   ARTIST AS artistname,
                   strftime('%Y-%m-%d', playtime) AS day_played
            FROM playcount
            GROUP BY artistname
        )
        GROUP BY strftime('%Y-%m-%d', day_played)

    This results in:

        "93"|"lilium"     |"2010-06-16"
        "46"|"Wolf People"|"2010-06-15"
        "30"|"of Montreal"|"2010-06-14"

    but the artist name is wrong. I think that, through the grouping by day, it just uses the last artist, or something like that. I tested things like INNER JOIN and GROUP BY ... HAVING by trial and error, and I read examples of similar issues, but I always get lost in column names and such (I am a bit burned out). I hope someone can give me a hint. Thanks, m
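    For reference, a sketch of the usual "greatest-n-per-group" shape for this, reusing the column names above. Note that the inner query here groups by both artist and day (the original groups by artist only, which mixes days together), and that ties on a day would still return multiple rows:

        SELECT t.artistcount, t.artistname, t.day_played
        FROM (
            SELECT COUNT(ARTISTID) AS artistcount,
                   ARTIST AS artistname,
                   strftime('%Y-%m-%d', playtime) AS day_played
            FROM playcount
            GROUP BY artistname, day_played
        ) t
        JOIN (
            -- Per-day maximum over the same per-artist/per-day counts.
            SELECT day_played, MAX(artistcount) AS maxcount
            FROM (
                SELECT COUNT(ARTISTID) AS artistcount,
                       ARTIST AS artistname,
                       strftime('%Y-%m-%d', playtime) AS day_played
                FROM playcount
                GROUP BY artistname, day_played
            )
            GROUP BY day_played
        ) m ON m.day_played = t.day_played AND m.maxcount = t.artistcount;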


  • "Executing SQL directly; no cursor" error when using SCOPE_IDENTITY

    - by Chris
    There wasn't much on Google about this error, so I'm asking here. I'm switching a PHP web application from MySQL to SQL Server 2008 (using ODBC, not php_mssql). Running queries and everything else isn't a problem, but when I try to use SCOPE_IDENTITY (or any similar function), I get the error "Executing SQL directly; no cursor". I'm doing this immediately after an insert, so it should still be in scope. Running the same insert statement and then querying for the insert ID works fine in SQL Server Management Studio. Here's my code right now (everything else in the database wrapper class works fine for other queries, so I'll assume it isn't relevant):

        function insert_id(){
            $x = $this->query_first("SELECT SCOPE_IDENTITY('session_log') as insert_id");
            echo "($x)";
            return $x;
        }

    query_first is a function that returns the first field of the first result of a query (basically the equivalent of ExecuteScalar() in .NET). The full error message:

        Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][SQL Server Native Client 10.0][SQL Server]Executing SQL directly; no cursor., SQL state 01000 in SQLExecDirect in C:[...]\Database_MSSQL.php on line 110
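    Two SQL Server details worth sketching here (the column name below is hypothetical): SCOPE_IDENTITY() takes no argument at all (IDENT_CURRENT('session_log') is the variant that takes a table name), and it only reports inserts made in the same scope, so the usual shape is to send the INSERT and the SELECT together as one batch:

        -- One batch: the SELECT sees the INSERT because they share a scope.
        INSERT INTO session_log (user_name) VALUES ('example');
        SELECT SCOPE_IDENTITY() AS insert_id;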


  • Why is my Lucene index getting locked?

    - by Andrew Bullock
    I had an issue with my search not returning the results I expect. I tried to run Luke on my index, but it said the index was locked and that I needed to Force Unlock it (I'm not a Jedi/Sith, though). I tried to delete the index folder and run my recreate-indicies application, but the folder was locked. Using Unlocker, I've found that there are about 100 entries of w3wp.exe (same PID, different handle) with a lock on the index. What's going on? I'm doing this in my NHibernate configuration:

        c.SetListener(ListenerType.PostUpdate, new FullTextIndexEventListener());
        c.SetListener(ListenerType.PostInsert, new FullTextIndexEventListener());
        c.SetListener(ListenerType.PostDelete, new FullTextIndexEventListener());

    And here is the only place I query the index:

        var fullTextSession = NHibernate.Search.Search.CreateFullTextSession(this.unitOfWork.Session);
        var fullTextQuery = fullTextSession.CreateFullTextQuery(query, typeof (Person));
        fullTextQuery.SetMaxResults(100);
        return fullTextQuery.List<Person>();

    What's going on? What am I doing wrong? Thanks


  • Exchange Server 2003 Replication

    - by Campo
    We have 2 Exchange 2003 servers. One is the master and currently hosts all the mailboxes and public stores. I would like to set up a standby Exchange server that replicates all the mailboxes and public stores from the primary Exchange server, so that if the primary goes down we can fail over to the standby. I have searched all over for some guidance but have not been able to find anything detailing this for Exchange 2003. Any help is much appreciated.

