Search Results

Search found 62606 results on 2505 pages for 'sql files'.

  • What's the best way to store sort order in SQL?

    - by Duracell
    The guys at the top want sort order to be customizable in our app. So I have a table that effectively defines the data type. What is the best way to store our sort order? If I just created a new column called 'Order' or something, every time I changed the position of one row I imagine I would have to update the order of every other row to keep the ordering consistent. Is there a better way to do it?
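
    A common way to avoid renumbering on every change is to leave gaps in the sort values, so moving a row usually touches only that row. A minimal sketch, assuming a hypothetical Items table (names are placeholders, not from the question):

        -- Sort values assigned in gaps of 10.
        CREATE TABLE Items (
            ItemId    INT NOT NULL PRIMARY KEY,
            Name      VARCHAR(50) NOT NULL,
            SortOrder INT NOT NULL    -- 10, 20, 30, ...
        );

        -- Move item 7 between the rows sorted at 20 and 30:
        UPDATE Items SET SortOrder = 25 WHERE ItemId = 7;

    Reads stay a simple ORDER BY SortOrder; only when a gap is exhausted does a one-pass renumbering become necessary.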

  • MySQL query lag time / deadlock?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated by each query? I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        // Read the id of the last item claimed by any instance.
        function getCurrentItem() {
            $sql = "SELECT currentItemId from settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        // Record the id of the item this instance is about to process.
        function setCurrentItem($id) {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the
            // script, and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;

            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. This makes me think that there is some lag time between the UPDATE queries being run by the PHP scripts and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get only the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
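
    The read-then-update sequence above is not atomic: two instances can both read the same currentItemId before either one writes. Assuming the tables are InnoDB, one way to close that window is a locking read inside a transaction, so each instance claims an id while the others block; a sketch:

        START TRANSACTION;

        -- Blocks other transactions' FOR UPDATE reads on this row until COMMIT.
        SELECT currentItemId FROM settings FOR UPDATE;

        -- Claim the next item (42 is a placeholder for the id being claimed).
        UPDATE settings SET currentItemId = 42;

        COMMIT;

    The PHP wrapper would run getCurrentItem/setCurrentItem inside one such transaction rather than as two independent auto-committed queries.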

  • Can I see all SQL statements sent over an ODBC connection?

    - by Dave Cameron
    I'm working with a third-party application that uses ODBC to connect to, and alter, a database. During certain failure modes, the end results are not what I expect. To understand it better, I'd like some way of inspecting all the statements sent to the database. Is there a way to do this with ODBC? I know with JDBC I could use http://www.p6spy.com/ to see all statements sent, for example when debugging Hibernate. p6spy is a "proxy" driver that records commands sent and forwards them on to the real JDBC driver. Another possibility might be a protocol sniffer that would capture statements over the wire, although I'm unsure whether ODBC includes a standard wire protocol or only specifies the API. Does anyone know of existing tools that would allow me to do either of these things? Alternatively, is there another approach I could take?
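
    One built-in option is the ODBC Driver Manager's trace, which logs every ODBC call made through it, including the statement text handed to SQLExecDirect/SQLPrepare. On Windows it is switched on from the Tracing tab of the ODBC Data Source Administrator; with unixODBC it is a sketch like this in odbcinst.ini (paths vary by install):

        [ODBC]
        Trace     = Yes
        TraceFile = /tmp/odbc.log

    The trace is verbose (one entry per API call), but the SQL statements appear in it verbatim.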

  • How do you transfer data from SQL Server to db4o?

    - by dde
    I came across this question after searching for an ODBC or JDBC driver. To my surprise, since I am new to db4o, I found there are tools to browse db4o, including NetBeans and Eclipse plug-ins. However, when it comes to the question at hand, I only found one company, and its product is not being sold nor demoed (which makes me think it is not ready yet). So, how do you transfer data? Is there a tool or script I have not found yet?

  • Can't export performance data from VirtualCenter

    - by kyrathy
    First, let me describe my environment: VirtualCenter is installed on Windows 2008, and the database that VC uses is SQL 2008. What I really want to ask is this: when I use the vSphere Client to connect to VC, the performance chart can only show "realtime", whether I just want to view the chart or export the performance log. When I manually try to export performance data and select a time range of 1 hour, 1 day, 1 month, or from a to b, it shows "No performance data to report for selected objects"; only selecting "realtime" exports data normally. Before I installed vSphere 4, I installed SQL 2008 and used the schema from the install CD (I followed the steps to create the SQL DB for vSphere). Could anybody help me solve this problem? If any information is needed, just tell me and I will provide it. Thanks a lot.

  • SQL: select random row from table where the ID of the row isn't in another table?

    - by johnrl
    I've been looking at fast ways to select a random row from a table and have found the following site: http://74.125.77.132/search?q=cache:http://jan.kneschke.de/projects/mysql/order-by-rand/&hl=en&strip=1 What I want to do is select a random url from my table 'urls' that I DON'T have in my other table 'urlinfo'. The query I am using now selects a random url from 'urls', but I need it modified to only return a random url that is NOT in the 'urlinfo' table. Here's the query:

        SELECT url
        FROM urls
        JOIN (SELECT CEIL(RAND() * (SELECT MAX(urlid) FROM urls)) AS urlid) AS r2
        USING (urlid);

    And the two tables:

        CREATE TABLE urls (
            urlid INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            url VARCHAR(255) NOT NULL
        ) ENGINE=INNODB;

        CREATE TABLE urlinfo (
            urlid INT NOT NULL PRIMARY KEY,
            urlinfo VARCHAR(10000),
            FOREIGN KEY (urlid) REFERENCES urls (urlid)
        ) ENGINE=INNODB;
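
    A simple way to keep the exclusion inside the query is an anti-join against urlinfo; it is slower than the CEIL(RAND() * MAX(urlid)) trick because ORDER BY RAND() examines every candidate row, but it is correct regardless of gaps in the ids. A sketch:

        SELECT u.url
        FROM urls AS u
        LEFT JOIN urlinfo AS i ON i.urlid = u.urlid
        WHERE i.urlid IS NULL     -- keep only urls with no urlinfo row
        ORDER BY RAND()           -- simple but scans the candidate set
        LIMIT 1;

    A faster hybrid would keep the random-urlid probe from the question and simply retry whenever the picked urlid already exists in urlinfo.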

  • How can I work around SQL Server - Inline Table-Valued Function execution plan variation based on parameters

    - by Ovidiu Pacurar
    Here is the situation: I have a table-valued function with a datetime parameter, let's say tdf(p_date), that filters about two million rows, selecting those with column Date smaller than p_date, and computes some aggregate values on other columns. It works great, but if p_date is a custom scalar-valued function (returning the end of day, in my case), the execution plan is altered and the query goes from 1 second to 1 minute of execution time. A proof-of-concept table - 1K products, 2M rows:

        CREATE TABLE [dbo].[POC](
            [Date] [datetime] NOT NULL,
            [idProduct] [int] NOT NULL,
            [Quantity] [int] NOT NULL
        ) ON [PRIMARY]

    The inline table-valued function:

        CREATE FUNCTION tdf (@p_date datetime)
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT idProduct, SUM(Quantity) AS TotalQuantity, max(Date) as LastDate
            FROM POC
            WHERE (Date < @p_date)
            GROUP BY idProduct
        )

    The scalar-valued function:

        CREATE FUNCTION [dbo].[EndOfDay] (@date datetime)
        RETURNS datetime
        AS
        BEGIN
            DECLARE @res datetime
            -- Zero out hours, minutes, seconds and ms, add one day, step back one second.
            SET @res = dateadd(second, -1, dateadd(day, 1,
                       dateadd(ms, -datepart(ms, @date),
                       dateadd(ss, -datepart(ss, @date),
                       dateadd(mi, -datepart(mi, @date),
                       dateadd(hh, -datepart(hh, @date), @date))))))
            RETURN @res
        END

    Query 1 - working great:

        SELECT * FROM [dbo].[tdf] (getdate())

    The end of the execution plan: Stream Aggregate (Cost 13%) <--- Clustered Index Scan (Cost 86%)

    Query 2 - not so great:

        SELECT * FROM [dbo].[tdf] (dbo.EndOfDay(getdate()))

    The end of the execution plan: Stream Aggregate (Cost 4%) <--- Filter (Cost 12%) <--- Clustered Index Scan (Cost 86%)
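
    A workaround that usually restores the first plan is to evaluate the scalar function once, ahead of the query, so the optimizer sees a plain variable rather than a scalar-UDF expression it cannot fold; a sketch:

        DECLARE @p_date datetime
        SET @p_date = dbo.EndOfDay(getdate())   -- evaluated exactly once

        SELECT * FROM [dbo].[tdf] (@p_date)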

  • Using Visual Studio .ncb file for reflection

    - by Rushi
    I am developing a visual game level editor in C++. For this I want a reflection (RTTI) mechanism, to know class attributes at runtime. I am currently using PDB files for this, but using a PDB I couldn't retrieve the actual code line for the extra information, in comment format, that is given for an attribute. Visual Studio uses NCB files for IntelliSense. So would it be a better idea to use NCB instead of PDB? If yes, how do I retrieve information from NCB files? Is there any SDK like the DIA SDK?

  • SQL Reporting Services: Why does my report shrink when it's emailed?

    - by Josh Yeager
    I created a simple report and uploaded it to my report server. It looks correct on the report server, but when I set up an email subscription, the report is much narrower than it is supposed to be. Here is what the report looks like in the designer. It looks similar when I view it on the report server: [http://img58.imageshack.us/img58/4893/designqj3.png] Here is what the email looks like: [http://img58.imageshack.us/img58/9297/emailmy8.png] Does anyone know why this is happening?

  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. The process usually runs fine without any issues but every once in a while we have an issue of attempting to process a file that is not completely transferred. The external source SCPs these files from a UNIX server to our Windows server. From there we try to process the files. Is there a way to check to see if a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?

  • combine two GCC compiled .o object files into a third .o file

    - by ~lucian.grijincu
    How does one combine two GCC-compiled .o object files into a third .o file?

        $ gcc -c a.c -o a.o
        $ gcc -c b.c -o b.o
        $ ??? a.o b.o -o c.o
        $ gcc c.o other.o -o executable

    If you have access to the source files, the -combine GCC flag will merge the source files before compilation:

        $ gcc -c -combine a.c b.c -o c.o

    However, this only works for source files, and GCC does not accept .o files as input for this command. Normally, linking .o files does not work properly, as you cannot use the output of the linker as input for it. The result is a shared library, and it is not linked statically into the resulting executable.

        $ gcc -shared a.o b.o -o c.o
        $ gcc c.o other.o -o executable
        $ ./executable
        ./executable: error while loading shared libraries: c.o: cannot open shared object file: No such file or directory

        $ file c.o
        c.o: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped
        $ file a.o
        a.o: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), not stripped
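
    For the record, the usual tool for this is a relocatable ("partial") link, whose output is still a valid .o; a sketch, assuming GNU binutils:

        $ ld -r a.o b.o -o c.o    # -r keeps the result relocatable
        $ gcc c.o other.o -o executable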

  • In SQL / MySQL, can a Left Outer Join be used to find duplicates when there is no Primary ID?

    - by Jian Lin
    I would like to try using an outer join to find duplicates in a table. If the table has a primary index ID, then the following outer join can find the duplicate names:

        mysql> select * from gifts;
        +--------+------------+-----------------+---------------------+
        | giftID | name       | filename        | effectiveTime       |
        +--------+------------+-----------------+---------------------+
        |      2 | teddy bear | bear.jpg        | 2010-04-24 04:36:03 |
        |      3 | coffee     | coffee123.jpg   | 2010-04-24 05:10:43 |
        |      6 | beer       | beer_glass.png  | 2010-04-24 05:18:12 |
        |     10 | heart      | heart_shape.jpg | 2010-04-24 05:11:29 |
        |     11 | ice tea    | icetea.jpg      | 2010-04-24 05:19:53 |
        |     12 | cash       | cash.png        | 2010-04-24 05:27:44 |
        |     13 | chocolate  | choco.jpg       | 2010-04-25 04:04:31 |
        |     14 | coffee     | latte.jpg       | 2010-04-27 05:49:52 |
        |     15 | coffee     | espresso.jpg    | 2010-04-27 06:03:03 |
        +--------+------------+-----------------+---------------------+
        9 rows in set (0.00 sec)

        mysql> select * from gifts g1
               LEFT JOIN (select * from gifts group by name) g2
               on g1.giftID = g2.giftID
               where g2.giftID IS NULL;
        +--------+--------+--------------+---------------------+--------+------+----------+---------------+
        | giftID | name   | filename     | effectiveTime       | giftID | name | filename | effectiveTime |
        +--------+--------+--------------+---------------------+--------+------+----------+---------------+
        |     14 | coffee | latte.jpg    | 2010-04-27 05:49:52 |   NULL | NULL | NULL     | NULL          |
        |     15 | coffee | espresso.jpg | 2010-04-27 06:03:03 |   NULL | NULL | NULL     | NULL          |
        +--------+--------+--------------+---------------------+--------+------+----------+---------------+
        2 rows in set (0.00 sec)

    But what if the table doesn't have a primary index ID - can an outer join still be used to find duplicates?
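
    For reference, without any id column a plain aggregate still finds the duplicated names directly; a sketch:

        SELECT name, COUNT(*) AS copies
        FROM gifts
        GROUP BY name
        HAVING COUNT(*) > 1;   -- names appearing more than once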

  • T-SQL - Left Outer Joins - Filters in the where clause versus the on clause.

    - by Greg Potter
    I am trying to compare two tables to find rows in each table that are not in the other. Table 1 has a groupby column to create 2 sets of data within table one:

        groupby     number
        ----------- -----------
        1           1
        1           2
        2           1
        2           2
        2           4

    Table 2 has only one column:

        number
        -----------
        1
        3
        4

    So Table 1 has the values 1,2,4 in group 2 and Table 2 has the values 1,3,4. I expect the following result when joining for group 2:

        Table 1 LEFT OUTER Join Table 2

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        2           2           NULL

        Table 2 LEFT OUTER Join Table 1

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        NULL        NULL        3

    The only way I can get this to work is if I put a WHERE clause for the first join:

        PRINT 'Table 1 LEFT OUTER Join Table 2, with WHERE clause'
        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table1
        LEFT OUTER join table2
        --******************************
        on table1.number = table2.number
        --******************************
        WHERE table1.groupby = 2
        AND table2.number IS NULL

    and a filter in the ON for the second:

        PRINT 'Table 2 LEFT OUTER Join Table 1, with ON clause'
        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join table1
        --******************************
        on table2.number = table1.number
        AND table1.groupby = 2
        --******************************
        WHERE table1.number IS NULL

    Can anyone come up with a way of putting the filter in the where clause rather than the on clause? The context of this is that I have a staging area in a database and I want to identify new records and records that have been deleted. The groupby field is the equivalent of a batch id for an extract, and I am comparing the latest extract in a temp table to the batch from yesterday stored in a partitioned table, which also holds all the previously extracted batches. Code to create tables 1 and 2:

        create table table1 (number int, groupby int)
        create table table2 (number int)

        insert into table1 (number, groupby) values (1, 1)
        insert into table1 (number, groupby) values (2, 1)
        insert into table1 (number, groupby) values (1, 2)
        insert into table2 (number) values (1)
        insert into table1 (number, groupby) values (2, 2)
        insert into table2 (number) values (3)
        insert into table1 (number, groupby) values (4, 2)
        insert into table2 (number) values (4)
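
    One option that keeps both filters in WHERE clauses is to push the batch filter into a derived table, so the outer join only ever sees the group 2 rows of table1; a sketch for the second query:

        select t1.groupby as [T1_Groupby],
               t1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join (select * from table1 where groupby = 2) t1
        on table2.number = t1.number
        WHERE t1.number IS NULL

    The same derived table works symmetrically for the first query, since the groupby condition is applied before the join instead of during or after it.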

  • SQL Server 2005. Full Text Search. Need Thesaurus working with NEAR/AND/OR keywords

    - by user305924
    Does anyone know if it's possible to do a thesaurus search together with the NEAR or AND/OR keywords? Here is an example of the type of query I want to run:

        SELECT Title, RANK
        FROM Item
        INNER JOIN CONTAINSTABLE(Item, Title,
            'FORMSOF(Thesaurus, "red" NEAR "wine")') AS KEY_TBL
            ON Item.ItemID = KEY_TBL.[KEY]
        ORDER BY RANK DESC

    ...but I get the error message:

        Syntax error near 'NEAR' in the full-text search condition 'FORMSOF(Thesaurus, "red" NEAR "wine")'.
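
    As far as the CONTAINS grammar goes, FORMSOF accepts only simple terms as arguments, so NEAR cannot appear inside it; whole FORMSOF terms can, however, be combined with AND/OR. A sketch that approximates "red" NEAR "wine" with AND (proximity is lost, thesaurus expansion is kept):

        SELECT Title, RANK
        FROM Item
        INNER JOIN CONTAINSTABLE(Item, Title,
            'FORMSOF(Thesaurus, red) AND FORMSOF(Thesaurus, wine)') AS KEY_TBL
            ON Item.ItemID = KEY_TBL.[KEY]
        ORDER BY RANK DESC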

  • C1083 : Permission denied on .sbr files

    - by speps
    Hello, I am using Visual Studio 2005 (with SP1) and I am getting weird errors concerning .sbr files. These files, as I read on MSDN, are intermediate files used by BSCMAKE to generate a .bsc file. The errors I get are, for example (on different builds):

        11>string.cpp : fatal error C1083: Impossible d'ouvrir le fichier généré(e) par le compilateur : '.\debug\String.sbr' : Permission denied
        58>type.cpp : fatal error C1083: Impossible d'ouvrir le fichier généré(e) par le compilateur : '.\Debug/Type.sbr' : Permission denied

    (Translation: cannot open compiler-generated file.) It seems to be consistent (I have at least 5 or 6 examples like this) with a .cpp file being compiled twice in the same project, respectively:

        11>String.cpp
        *some warnings, 2 lines*
        11>String.cpp

        58>Type.cpp
        *some warnings and other files compiled, a lot of lines*
        58>Type.cpp

    I already checked the .vcproj files for duplicate entries, and that does not seem to be the problem. I would appreciate any help regarding this issue. Deactivating the build of .bsc files seems to be a workaround, but maybe someone has better information than this. Thanks.

  • Looking for a suitable backup solution: Mac OS X to offsite CentOS 6 server, 1TB of working data

    - by Brady
    I'll start by saying what we have in place currently:

    - On-site file server (Mac OS X Server) used by GFX designers, with a working set of 1TB of data.
    - Off-site server with 2TB of available storage (CentOS 6).
    - The Mac OS X server rsyncs data to the off-site server every 6 hours (rsync -avz --delete --progress -e ssh ...).
    - The Mac OS X server does a full data backup to LTO-4 tape on a 10-day recycle (Mon-Fri for 2 weeks).
    - rsync pushes about 60GB of file changes a day.

    The problem:

    - The on-site tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape.
    - A full backup is incredibly slow.
    - Getting people to remember to change the tape is a pain in the backside; it often gets forgotten, etc.

    The quick solution: buy an LTO-5 drive and tapes. However, this has been turned down because of the cost...

    What I would like:

    - Something that works the same way rsync works: only changed data is sent over the wire, and it can be scheduled to run multiple times during the day.
    - Data that is sent is compressed and sent over SSH.
    - Something that keeps a 14-day retention but doesn't keep duplicate data. So, as an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the off-site server.
    - Works with the Mac OS X server and the CentOS 6 server.
    - Doesn't cost an arm and a leg; it must be cheaper than buying an LTO-5 drive with tapes (around £1500).
    - Can be set up to run autonomously.
    - Has some sort of control panel that allows an admin to easily restore a file/folder.

    Any recommendations?
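
    One low-cost fit for most of these points is rsync with --link-dest: each day becomes its own snapshot directory, and unchanged files are hard links into the previous snapshot, so only the ~60GB of daily changes consumes new space. A rough sketch, run on the CentOS side (host and paths are placeholders):

        #!/bin/sh
        # Daily snapshot: unchanged files hard-link back to yesterday's copy.
        TODAY=$(date +%F)
        YESTERDAY=$(date -d yesterday +%F)

        rsync -az --delete \
              --link-dest=/backup/$YESTERDAY \
              -e ssh user@macserver:/data/ /backup/$TODAY/

        # Drop snapshots older than 14 days.
        find /backup -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +

    With 1TB of working data plus 14 days of ~60GB deltas this lands near the 1.84TB estimate, and restores are plain directory copies.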

  • Please help debug this ASP.Net [VB] code. Trying to write to text file from SQL Server DB.

    - by NJTechGuy
    I am a PHP programmer. I have no .Net coding experience (last saw it 4 years ago), and I'm not interested in the code-behind model since this is a quick temporary hack. What I am trying to do is generate an output.txt file whenever the user submits new data, so an output.txt file, if it exists, should be replaced with the new one. I want to write data in this format:

        123|Java Programmer|2010-01-01|2010-02-03
        124|VB Programmer|2010-01-01|2010-02-03
        125|.Net Programmer|2010-01-01|2010-02-03

    I don't know VB, so I'm not sure about the string manipulations. Hope a kind soul can help me with this. I will be grateful to you. Thank you :)

        <%@ Import Namespace="System.IO" %>
        <%@ Import Namespace="System.Data" %>
        <%@ Import Namespace="System.Data.SqlClient" %>

        <script language="vb" runat="server">
        sub Page_Load(sender as Object, e as EventArgs)
            Dim sqlConn As New SqlConnection("Data Source=winsqlus04.1and1.com;Initial Catalog=db28765269;User Id=dbo2765469;Password=ByhgstfH;")
            Dim myCommand As SqlCommand
            Dim dr As SqlDataReader

            Dim FILENAME as String = Server.MapPath("Output4.txt")
            Dim objStreamWriter as StreamWriter
            ' If Len(Dir$(FILENAME)) > 0 Then Kill(FILENAME)
            objStreamWriter = File.AppendText(FILENAME)

            Try
                sqlConn.Open() 'opening the connection
                myCommand = New SqlCommand("SELECT id, title, CONVERT(varchar(10), expirydate, 120) AS [expirydate], CONVERT(varchar(10), creationdate, 120) AS [createdate] from tblContact where flag = 0 AND ACTIVE = 1", sqlConn)
                'executing the command and assigning it to connection
                dr = myCommand.ExecuteReader()
                While dr.Read()
                    objStreamWriter.WriteLine("JobID: " & dr(0).ToString())
                    objStreamWriter.WriteLine("JobID: " & dr(2).ToString())
                    objStreamWriter.WriteLine("JobID: " & dr(3).ToString())
                End While
                dr.Close()
                sqlConn.Close()
            Catch x As Exception
            End Try
            objStreamWriter.Close()

            Dim objStreamReader as StreamReader
            objStreamReader = File.OpenText(FILENAME)
            Dim contents as String = objStreamReader.ReadToEnd()
            lblNicerOutput.Text = contents.Replace(vbCrLf, "<br>")
            objStreamReader.Close()
        end sub
        </script>

        <asp:label runat="server" id="lblNicerOutput" Font-Name="Verdana" />
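
    Not a tested fix, but the pipe-delimited output appears to need two changes: open the file with CreateText (which truncates an existing file) instead of AppendText, and write one line per row with the fields joined by "|". Along these lines (the field order is a guess - adjust as needed):

        ' CreateText replaces any existing Output4.txt instead of appending to it.
        objStreamWriter = File.CreateText(FILENAME)
        ...
        While dr.Read()
            ' id|title|creationdate|expirydate
            objStreamWriter.WriteLine(dr(0).ToString() & "|" & dr(1).ToString() & _
                "|" & dr(3).ToString() & "|" & dr(2).ToString())
        End While

    The empty Catch block is also worth removing while debugging, since it silently swallows any SQL error.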

  • In SQL, what's the difference between count(column) and count(*)?

    - by Bill the Lizard
    I have the following query:

        select column_name, count(column_name)
        from table
        group by column_name
        having count(column_name) > 1;

    What would be the difference if I replaced all calls to count(column_name) with count(*)? This question was inspired by a previous one. Edit: To clarify the accepted answer (and maybe my question): using count(*) in this case returns an extra row in the result that contains a null and the count of null values in the column.
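
    The short version: count(*) counts rows, count(column_name) counts non-null values, and GROUP BY puts all the nulls into one group. A small sketch showing the extra row:

        CREATE TABLE t (c VARCHAR(10));
        INSERT INTO t VALUES ('a');
        INSERT INTO t VALUES ('a');
        INSERT INTO t VALUES (NULL);
        INSERT INTO t VALUES (NULL);

        SELECT c, COUNT(c) AS cnt_col, COUNT(*) AS cnt_star
        FROM t
        GROUP BY c;
        -- 'a'  : cnt_col = 2, cnt_star = 2
        -- NULL : cnt_col = 0, cnt_star = 2
        -- HAVING COUNT(*) > 1 keeps the NULL group; HAVING COUNT(c) > 1 drops it.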

  • Can I lock a record from a join SQL statement using ROWLOCK, UPDLOCK?

    - by Andrea.Ko
    I have a stored procedure to get the data I want:

        SELECT a.SONum, a.Seq1, a.SptNum, a.Qty1, a.SalUniPriP, a.PayNum, a.InvNum, a.BLNum,
               c.ETD, c.ShpNum, f.IssBan
        FROM OrdD a
        JOIN OrdH b ON a.SONum = b.SONum
        LEFT JOIN Invh c ON a.InvNum = c.InvNum
        LEFT JOIN cus d ON b.CusCod = d.CusCod
        LEFT JOIN BL e ON a.BLNum = e.BLNum
        LEFT JOIN PayMasH f ON f.PayNum = a.PayNum
        LEFT JOIN Shipment g ON g.ShpNum = c.ShpNum
        WHERE b.CusCod IN (SELECT CusCod FROM UsrInc WHERE UseID=@UserID and UseLev=@UserLvl)
        AND d.CusGrp = @CusGrp

    After I get those records into a cursor, I use ROWLOCK, UPDLOCK to lock all the related invoice numbers:

        SELECT InvNum FROM Invh WITH (ROWLOCK,UPDLOCK) WHERE InvNum =

    Can I issue the lock on the Invh table at the point where I select from the several tables using the join command? Any advice, please!
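
    Table hints apply per table reference, so the hint can sit directly on Invh inside the join itself; UPDLOCK locks are held to the end of the transaction, so the select needs to run inside one. A sketch (columns trimmed for brevity):

        BEGIN TRANSACTION

        SELECT a.SONum, a.InvNum, c.ETD
        FROM OrdD a
        JOIN OrdH b ON a.SONum = b.SONum
        LEFT JOIN Invh c WITH (ROWLOCK, UPDLOCK)   -- locks only the Invh rows read
            ON a.InvNum = c.InvNum

        -- ... work with the locked invoices here ...

        COMMIT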

  • SQLAlchemy MVC and cross-controller joins

    - by Khorkrak
    When using SQLAlchemy to abstract your data access layer, and using controllers as the way to access objects from that abstraction layer, how should joins be handled?

    So, for example, say you have an Orders controller class that manages Order objects, such that it provides getOrder, saveOrder, etc. methods, and likewise a similar controller for User objects. First of all, do you even need these controllers? Should you instead just treat SQLAlchemy as "the" thing for handling data access? Why bother with object-oriented controller stuff there, when you instead have a clean declarative way to obtain and persist objects without having to write SQL directly either? Well, one reason could be that perhaps you may want to replace SQLAlchemy with direct SQL, or Storm, or whatever else; having controller classes there to act as an intermediate layer helps limit what would need to change then.

    Anyway, back to the main question: assuming you have these two controllers, now let's say you want the list of orders for a certain set of users meeting some criteria. How do you go about doing this? Generally you don't want the controllers crossing domains - the Orders controller knows only about Orders and the User controller just about Users; they don't mess with each other. You also don't want to go fetch all the Users that match and then feed a big list of user ids to the Orders controller to go find the matching Orders. What's needed is a join.

    Here's where I'm stuck: that seems to mean either the controllers must cross domains, or perhaps they should be done away with altogether and you simply do the join via SQLAlchemy directly and get the resulting User and/or Order objects as needed. Thoughts?
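
    For the concrete join, SQLAlchemy can express it in one query without either side hand-writing SQL; a minimal self-contained sketch (models and attribute names are hypothetical, pre-2.0 declarative style):

        from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
        from sqlalchemy.orm import relationship, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class User(Base):
            __tablename__ = "users"
            id = Column(Integer, primary_key=True)
            country = Column(String)

        class Order(Base):
            __tablename__ = "orders"
            id = Column(Integer, primary_key=True)
            user_id = Column(Integer, ForeignKey("users.id"))
            user = relationship(User)

        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)
        session = sessionmaker(bind=engine)()

        # The join happens in the database, in one round trip - no need to
        # fetch matching user ids first and feed them to a second query.
        orders = (
            session.query(Order)
            .join(User, Order.user_id == User.id)
            .filter(User.country == "US")
            .all()
        )

    A middle ground for the controller question is to let the Orders controller own this query but accept User-level filter criteria as parameters, so neither controller reaches into the other's tables directly.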

  • LINQ TO SQL error: An attempt has been made to Attach or Add an entity that is not new...

    - by Collin Estes
    "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." I have scene a lot of solutions dealing with the Attach() method but I'm just trying to add in a new record. Not sure what is going on. Here is my code, It is failing on the star'd line.: try { LINQDataContext datacontext = new LINQDataContext(); TrackableItem ti = datacontext.TrackableItems.FirstOrDefault(_t => _t.pkId == obj.fkTrackableItemId); arcTrackableItem ati = new arcTrackableItem(); ati.barcode = ti.barcode; ati.dashNumber = ti.dashNumber; ati.dateDown = ti.dateDown; ati.dateUp = ti.dateUp; ati.fkItemStatusId = ti.fkItemStatusId; ati.fkItemTypeId = ti.fkItemTypeId; ati.partNumber = ti.partNumber; ati.serialNumber = ti.serialNumber; ati.archiveDate = DateTime.Now; datacontext.arcTrackableItems.InsertOnSubmit(ati); datacontext.SubmitChanges(); arcPWR aItem = new arcPWR(); aItem.comments = obj.comments; aItem.fkTrackableItemId = ati.pkId; aItem.fkPWRStatusId = obj.fkPWRStatusId; aItem.PwrStatus = obj.PwrStatus; **datacontext.arcPWRs.InsertOnSubmit(aItem);** datacontext.SubmitChanges();
