Search Results

Search found 22358 results on 895 pages for 'django raw query'.

Page 135/895

  • How do I break down MySQL query results into categories, each with a specific number of rows?

    - by Mel
    Hello. Problem: I want to list n games from each genre (order not important). The following MySQL query resides inside a ColdFusion function. It is meant to list all games for a platform (for example, all PS3 games, all Xbox 360 games, and so on); the PlatformID is passed through the URL. I have 9 genres, and I would like to list 10 games from each genre. SELECT games.GameID AS GameID, games.GameReleaseDate AS rDate, titles.TitleName AS tName, titles.TitleShortDescription AS sDesc, genres.GenreName AS gName, platforms.PlatformID, platforms.PlatformName AS pName, platforms.PlatformAbbreviation AS pAbbr FROM (((games join titles on((games.TitleID = titles.TitleID))) join genres on((genres.GenreID = games.GenreID))) join platforms on((platforms.PlatformID = games.PlatformID))) WHERE (games.PlatformID = '#ARGUMENTS.PlatformID#') ORDER BY GenreName ASC, GameReleaseDate DESC Once the query results come back I group them in ColdFusion as follows: <cfoutput query="ListGames" group="gName"> (outer loop, which lists genres) #ListGames.gName# <cfoutput> (nested loop, which lists games) #ListGames.tName# </cfoutput> </cfoutput> The problem is that I only want 10 games from each genre to be listed. If I place a LIMIT of 50 in the SQL, I get roughly 50 games of the same genre (depending on how many games that genre has). The second issue is that I don't want the overhead of querying the database for all games when each visitor will only look at a few. What is the correct way to do this? Many thanks!
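
    For illustration only, here is a minimal sketch of the "at most N rows per group" idea in Python rather than ColdFusion, assuming MySQL 8+ window functions and a plain DB-API connection; the helper name and trimmed column list are hypothetical, while the table and column names come from the post.

    ```python
    # Hypothetical sketch: fetch at most 10 games per genre for one platform in a
    # single round trip, so the database never ships every game for the platform.
    PER_GENRE_SQL = """
    SELECT GameID, rDate, tName, gName
    FROM (
        SELECT g.GameID,
               g.GameReleaseDate AS rDate,
               t.TitleName       AS tName,
               ge.GenreName      AS gName,
               ROW_NUMBER() OVER (PARTITION BY ge.GenreID
                                  ORDER BY g.GameReleaseDate DESC) AS rn
        FROM games g
        JOIN titles t  ON t.TitleID  = g.TitleID
        JOIN genres ge ON ge.GenreID = g.GenreID
        WHERE g.PlatformID = %s
    ) ranked
    WHERE rn <= 10
    ORDER BY gName, rDate DESC
    """

    def games_per_genre(conn, platform_id):
        """`conn` is any DB-API connection (MySQLdb, mysql.connector, ...)."""
        cur = conn.cursor()
        try:
            cur.execute(PER_GENRE_SQL, (platform_id,))
            return cur.fetchall()
        finally:
            cur.close()
    ```

    With 9 genres this returns at most 90 rows, which also addresses the second concern about pulling every game when a visitor only looks at a few.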

    Read the article

  • Does the query plan optimizer work well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQLSERVER 2005, I'm using table-valued function as a convenient way to perform arbitrary aggregation on subset data from large table (passing date range or such parameters). I'm using theses inside larger queries as joined computations and I'm wondering if the query plan optimizer work well with them in every condition or if I'm better to unnest such computation in my larger queries. Does query plan optimizer unnest table-valued functions if it make sense? If it doesn't, what do you recommend to avoid code duplication that would occur by manually unnesting them? If it does, how do you identify that from the execution plan? code sample: create table dbo.customers ( [key] uniqueidentifier , constraint pk_dbo_customers primary key ([key]) ) go /* assume large amount of data */ create table dbo.point_of_sales ( [key] uniqueidentifier , customer_key uniqueidentifier , constraint pk_dbo_point_of_sales primary key ([key]) ) go create table dbo.product_ranges ( [key] uniqueidentifier , constraint pk_dbo_product_ranges primary key ([key]) ) go create table dbo.products ( [key] uniqueidentifier , product_range_key uniqueidentifier , release_date datetime , constraint pk_dbo_products primary key ([key]) , constraint fk_dbo_products_product_range_key foreign key (product_range_key) references dbo.product_ranges ([key]) ) go . /* assume large amount of data */ create table dbo.sales_history ( [key] uniqueidentifier , product_key uniqueidentifier , point_of_sale_key uniqueidentifier , accounting_date datetime , amount money , quantity int , constraint pk_dbo_sales_history primary key ([key]) , constraint fk_dbo_sales_history_product_key foreign key (product_key) references dbo.products ([key]) , constraint fk_dbo_sales_history_point_of_sale_key foreign key (point_of_sale_key) references dbo.point_of_sales ([key]) ) go create function dbo.f_sales_history_..snip.._date_range ( @accountingdatelowerbound datetime, @accountingdateupperbound datetime ) returns table as return ( select pos.customer_key , sh.product_key , sum(sh.amount) amount , sum(sh.quantity) quantity from dbo.point_of_sales pos inner join dbo.sales_history sh on sh.point_of_sale_key = pos.[key] where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound group by pos.customer_key , sh.product_key ) go -- TODO: insert some data -- this is a table containing a selection of product ranges declare @selectedproductranges table([key] uniqueidentifier) -- this is a table containing a selection of customers declare @selectedcustomers table([key] uniqueidentifier) declare @low datetime , @up datetime -- TODO: set top query parameters . select saleshistory.customer_key , saleshistory.product_key , saleshistory.amount , saleshistory.quantity from dbo.products p inner join @selectedproductranges productrangeselection on p.product_range_key = productrangeselection.[key] inner join @selectedcustomers customerselection on 1 = 1 inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory on saleshistory.product_key = p.[key] and saleshistory.customer_key = customerselection.[key] I hope the sample makes sense. Much thanks for your help!

    Read the article

  • Big Data – Buzz Words: What is NewSQL – Day 10 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the relational database. In this article we will take a quick look at what NewSQL is. What is NewSQL? NewSQL is shorthand for a new generation of scalable, high-performance SQL database vendors. The products sold by NewSQL vendors are horizontally scalable. NewSQL is not a kind of database; it describes vendors whose emerging data products keep relational database properties (ACID, transactions, etc.) while delivering high performance. Products from NewSQL vendors usually keep data in memory for speedy access and offer immediate scalability. The term NewSQL was coined by 451 Group analyst Matthew Aslett in a blog post. On the definition of NewSQL, Aslett writes: “NewSQL” is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as ‘ScalableSQL’ to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term ‘NewSQL’ in the new report. And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL. In other words, NewSQL incorporates the concepts and principles of Structured Query Language (SQL) and NoSQL systems: it combines the reliability of SQL with the speed and performance of NoSQL. Categories of NewSQL: there are three major categories of NewSQL. New Architecture – each node owns a subset of the data, and queries are split into smaller queries that are sent to the nodes to process their slice of the data (e.g. NuoDB, Clustrix, VoltDB). MySQL Engines – highly optimized storage engines behind the familiar MySQL interface (e.g. InnoDB, Akiban). Transparent Sharding – systems that automatically split a database across multiple nodes (e.g. ScaleArc). Summary: in simple words, NewSQL is a kind of database that follows relational database principles and provides scalability like NoSQL. Tomorrow: in tomorrow's blog post we will discuss the role of cloud computing in Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Detect Virtual Log Files (VLF) in LDF

    - by pinaldave
    In one of my recent training engagements, I was asked whether it is true that there are multiple small log files inside the large log file (LDF). I found this question very interesting, as the answer is yes. Multiple small Virtual Log Files, commonly known as VLFs, together make up an LDF file. VLFs are written sequentially, and as a result the LDF file is written sequentially as well. This leads to another discussion: in most cases one does not need more than one log file. In short, you can use the following DBCC command to find out how many Virtual Log Files (VLFs) are present in your log file. DBCC LOGINFO The result of the above command will look something like the following image. In the Status column, a value of 2 means the VLF is active and a value of 0 means it is inactive. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Template Browser – A Very Important and Useful Feature of SSMS

    - by pinaldave
    Let me start today's blog post with a direct question: how many of you have ever used Template Browser? Template Browser is a very important and useful feature of SQL Server Management Studio (SSMS). Every time I talk about SQL Server, someone asks why SSMS does not include step-by-step procedures for its features. Honestly, every time I get this question, the question I ask back is: how many of you have ever used Template Browser? Most of the time the answer is either no, or that they have not heard of the feature. One person asked me back – have you ever written about it on your blog? I had not yet written about it. Basically there is not much to write about it; it is a pretty straightforward feature, like any other, and it is genuinely difficult to elaborate on. However, I will try to give a quick introduction to this feature. Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are available for Analysis Services as well. The template scripts contain parameters to help you customize the code. You can use the Replace Template Parameters dialog box to insert values into the script. Additionally, users can create new custom templates with their own folder structure. To open a template from Template Explorer, go to the View menu >> Template Explorer, or type CTRL+ALT+L. You will see a list of categories; click on any category and expand the folder structure. For our sample, let us expand the Index folder. In this folder you will notice various T-SQL scripts. These scripts can be opened by double-clicking, or they can be dragged into the editor area and modified as needed. The sample template is now available in the query editor area with all the necessary parameter placeholders. You can replace each parameter by typing CTRL+SHIFT+M, or by going to Query Menu >> Specify Values for Template Parameters. This shows the Specify Values for Template Parameters dialog box; accept each value or replace it with a new one. This gets your script ready to go. Check it one more time and change the script to fit your requirements. I personally use Template Explorer for two things. The first is obviously templates, but the hidden and important second use is learning new features and T-SQL commands. There is so much to learn and so little time. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Did You Know? More online seminars!

    - by Kalen Delaney
    I am in Tucson again, having just recorded two more online workshops to be broadcast by SSWUG. We haven't set the dates yet, but we are thinking about offering a special package deal for the two of them. The topics really are related and I think they would work well together. They are both on aspects of Query Processing. The first was on how to interpret Query Plans and is an introduction to the topic. However, it only includes a discussion of how SQL Server actually processes your queries. For example,...(read more)

    Read the article

  • SQL SERVER – Retrieving Random Rows from Table Using NEWID()

    - by pinaldave
    I have previously written about how to get random rows from SQL Server: SQL SERVER – Generate A Single Random Number for Range of Rows of Any Table – Very Interesting Question from Reader, and SQL SERVER – Random Number Generator Script – SQL Query. However, I have not blogged about the following trick before, so let me share it here as well. You can retrieve random rows using the following methods. USE AdventureWorks2012 GO -- Method 1 SELECT TOP 100 * FROM Sales.SalesOrderDetail ORDER BY NEWID() GO -- Method 2 SELECT TOP 100 * FROM Sales.SalesOrderDetail ORDER BY CHECKSUM(NEWID()) GO You will notice that using NEWID() in the ORDER BY clause returns random rows in the result set. How many of you knew this trick? You can run the above script multiple times and it will return different random rows every single time. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • MySQL: ROLLBACK for multiple queries

    - by Raj
    Hi, I have more than three MySQL queries in a PHP script triggered by a scheduled task. If a query hits an error, the script throws an exception and rolls back that MySQL query. That works fine. However, if the first query succeeds but the second one fails and throws an exception, it rolls back the second query but not the first. I am using begin_trans(), commit() and rollback() for individual queries because sometimes I need to roll back one query and sometimes all of them. Is there any way to roll back either one query or all queries? Thanks in advance. UPDATE: I got it working. There was no problem with begin_trans(), commit() and rollback(); the database connection config was different for one query than for the others – crazy code without any comments!
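
    For comparison, a hedged sketch of the "one statement, some statements, or all of them" rollback pattern in Python with Django's transaction API (the thread above is PHP, and the table names below are made up). Everything inside one atomic() block shares a single connection and transaction, which is exactly the condition the mismatched connection configs in the update violated.

    ```python
    # Illustrative only: three raw statements that commit or roll back together.
    from django.db import connection, transaction

    def nightly_cleanup():
        with transaction.atomic():                 # outer transaction: all or nothing
            with connection.cursor() as cur:
                cur.execute("UPDATE jobs SET state = 'archived' WHERE finished_at IS NOT NULL")
                cur.execute("DELETE FROM job_logs WHERE created < NOW() - INTERVAL 90 DAY")
                try:
                    # A nested atomic() is a savepoint: if only this statement fails,
                    # only this statement is undone -- the "one query or all queries"
                    # behaviour the asker describes.
                    with transaction.atomic():
                        cur.execute("INSERT INTO audit_log (note) VALUES ('nightly cleanup ran')")
                except Exception:
                    pass
    ```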

    Read the article

  • Packaging a web application for deploying at customer site

    - by chitti
    I want to develop a Django web app that would get deployed at the customer's site. The app would run in the customer's private cloud environment (an ESX server here) and would use a MySQL database. The problem is that I would not have direct access to, or control of, the web app. My question is: how do I package such a web application, with its database and other components, so that it is easy to upgrade/update the app and its database in the future? Right now my idea is to provide a VM with the Django app and database already set up; the customer can just start the VM and have the web app running. What other options should I consider?

    Read the article

  • Help with MySQL query... Need help ordering a group of rows

    - by user156814
    I can explain it best by describing the query I have and what I need. I need to get a group of items from the database, grouped by category, manufacturer, and year made. The groups need to be sorted by the total number of items within each group; that part is handled by the query below. Secondly, I need to show an image of the most expensive item in the group, which is why I use MAX(items.current_price). I thought MAX() returned the ENTIRE row corresponding to the largest column value. I was wrong: MAX only returns the numeric value of the largest price, so the query does not work well for that. SELECT items.id, items.year, items.manufacturer, COUNT(items.id) AS total, MAX(items.current_price) AS price, items.gallery_url, FROM ebay AS items WHERE items.primary_category_id = 213 AND items.year <> '' AND items.manufacturer <> '' AND items.bad_item <> 1 GROUP BY items.primary_category_id, items.manufacturer, items.year ORDER BY total DESC, price ASC LIMIT 10 If that doesn't explain it well, the results should look something like this: id 10548, year 1989, manufacturer bowman, total 451, price 8500.00 (the price of the most expensive item in the group, not the price of item 10548), gallery_url http://ebay.xxxxx (the image of item 10548). A little help please. Thanks
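
    A sketch of one way around the MAX() issue: rank rows inside each (manufacturer, year) group and keep only the top-priced one, so the id, price, and gallery_url all come from the same row. It assumes window-function support (MySQL 8+); the column names follow the post, everything else is illustrative.

    ```python
    # Kept as a plain string so it can be fed to any DB-API cursor.
    TOP_ITEM_PER_GROUP_SQL = """
    SELECT id, year, manufacturer, total, current_price AS price, gallery_url
    FROM (
        SELECT i.id, i.year, i.manufacturer, i.current_price, i.gallery_url,
               COUNT(*)     OVER (PARTITION BY i.manufacturer, i.year) AS total,
               ROW_NUMBER() OVER (PARTITION BY i.manufacturer, i.year
                                  ORDER BY i.current_price DESC)       AS rn
        FROM ebay i
        WHERE i.primary_category_id = 213
          AND i.year <> '' AND i.manufacturer <> '' AND i.bad_item <> 1
    ) ranked
    WHERE rn = 1            -- the single most expensive row of each group
    ORDER BY total DESC, price ASC
    LIMIT 10
    """
    ```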

    Read the article

  • How to differentiate between the same field names of two tables in a SELECT query?

    - by developer
    I have more than two tables in my database and all of them contain the same field names: table A, table B, and table C each have field1, field2, field3, and so on. I have to write a SELECT query which gets almost all of the same fields from these 3 tables. I am using something like this: select a.field1, a.field2, a.field3, b.field1, b.field2, b.field3, c.field1, c.field2, c.field3 from table A as a, table B as b, table C as c where so and so. But when I print field1's value, it gives me the last table's value. How can I get all the values from the three tables when they share the same field names? Do I have to write an individual query for every table, or is there a way of fetching them all in a single query?
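
    The usual fix is to give each column its own alias so nothing gets overwritten when the rows are read back by name. A tiny self-contained Python illustration (sqlite3 is used only so the snippet runs anywhere; the aliasing works the same way in MySQL/PHP):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE a (field1 TEXT); CREATE TABLE b (field1 TEXT); CREATE TABLE c (field1 TEXT);
        INSERT INTO a VALUES ('from A'); INSERT INTO b VALUES ('from B'); INSERT INTO c VALUES ('from C');
    """)
    conn.row_factory = sqlite3.Row          # rows become addressable by column name

    row = conn.execute("""
        SELECT a.field1 AS a_field1,
               b.field1 AS b_field1,
               c.field1 AS c_field1
        FROM a, b, c
    """).fetchone()

    # Each table's value keeps its own name instead of the last field1 winning.
    print(row["a_field1"], row["b_field1"], row["c_field1"])
    ```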

    Read the article

  • How to remove/hide <div></div> tags only without the content?

    - by candies
    For example I have: <div id ="test">[the content here]</div> The content inside the div tags appears after I request it via AJAX using the div's id. This is the code: function dinamic(add) { var kode = add.value; if (!kode) return; xmlhttp2.open('get', '../template/get_id.php?kode='+kode, true); xmlhttp2.onreadystatechange = function() { if ((xmlhttp2.readyState == 4) && (xmlhttp2.status == 200)) { var add = document.getElementById("test"); add.innerHTML = xmlhttp2.responseText; } return false; } xmlhttp2.send(null); } So it ends up as <div id="test">A</div> I'd like to put the content of the div – A – into a MySQL query: $test = $_GET['test']; $query = "select * from example where category='$test'"; I tried making a $test variable from the div id to get the content, but the query returns nothing for that category. I tried again, putting the div itself into the query: $query = "select * from example where category='<div id=\"test\">A</div>'"; Yes, it works. But when I run the query in Navicat I get no results, because there are spaces between the A and the <div> and </div>. How can I remove/hide only the div tags so that only the content remains? > $query = "select * from example where category='A'"; < Edit: If I echo the query in Firefox it shows "$query = "select * from example where category='[space]A[space]'";" and looking at it in Firebug it shows "$query = "select * from example where category='<div id="test">A</div>'";" So my guess is that the query returns nothing in Navicat because of the spaces around the A ([space]A[space]); I just have no idea how to remove/hide the div tags. I want to end up with "$query = "select * from example where category='A'";" Thanks.
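
    A hedged sketch of the two usual fixes, shown in Python rather than the PHP/JavaScript above: strip the tag wrapper (and the surrounding whitespace that was breaking the match) before the value goes anywhere near SQL, and bind it as a parameter instead of interpolating it into the query string. Names are illustrative.

    ```python
    import re

    def extract_category(ajax_fragment):
        # Drop any tags, then trim the stray spaces around the remaining text.
        return re.sub(r"<[^>]+>", "", ajax_fragment).strip()

    category = extract_category('<div id="test"> A </div>')   # -> 'A'

    # With a DB-API cursor, bind the cleaned value instead of building the string:
    # cur.execute("SELECT * FROM example WHERE category = %s", (category,))
    ```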

    Read the article

  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following: SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date, INTO [#Gadget] FROM task t SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) as Client FROM [#Gadget] order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END , Date ASC DROP TABLE [#Gadget] (I have removed the complex subquery because I don't think it's relevant, other than to explain why this query has been written as a two-stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using subqueries, as: SELECT TOP 500 TaskID, Task, Tracker, ClientID, dbo.GetClientDisplayName(ClientID) FROM ( SELECT t.TaskID, t.Name as Task, '' as Tracker, t.ClientID, (<complex subquery>) Date, FROM task t ) as sub order by CASE WHEN Date IS NULL THEN 1 ELSE 0 END , Date ASC This would give the optimiser better information to work out what was going on and avoid any temporary tables, so it should be faster. But it turns out it is a lot slower: 8 seconds vs under 5 seconds. I can't work out why this would be the case, as everything I know about databases implies that subqueries would always be faster than using temporary tables. Can anyone explain what could be going on?

    Read the article

  • mysql query trying to search by alias involving CASES and aggregate functions UGH!

    - by dqhendricks
    I have two tables, left joined. The query is grouped by the left table's ID column. The right table has a date column called close_date. The problem: if a left table record has any right table records that have not been closed (that is, with a close_date of 0000-00-00), then I do not want that left table record to be shown at all; and if it has NO right table records with a close_date of 0000-00-00, I would like only the right table record with the MAX close_date to be returned. So for simplicity's sake, let's say the tables look like this: Table1: id 1, 2. Table2 (table1_id | close_date): 1 | 0000-00-00, 1 | 2010-01-01, 2 | 2010-01-01, 2 | 2010-01-02. I would like the query to return only this: Table1.id | Table2.close_date: 2 | 2010-01-02. I tried to come up with an answer using aliased CASE expressions and aggregate functions, but I could not search by the result, and I was trying not to write a three-mile-long query to solve the problem. I looked through a few of the related posts on here, but none seem to meet the criteria of this particular case. Any pushes in the right direction would be greatly appreciated. Thanks!
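
    One way to say "skip any id that still has an open row, otherwise keep only its latest close_date" is a grouped join with a HAVING filter; MySQL lets SUM() count the boolean comparison directly. Table and column names follow the post, the rest is a sketch.

    ```python
    # Against the sample data in the post this returns a single row: (2, '2010-01-02').
    LATEST_CLOSED_SQL = """
    SELECT t1.id, MAX(t2.close_date) AS close_date
    FROM table1 t1
    JOIN table2 t2 ON t2.table1_id = t1.id
    GROUP BY t1.id
    HAVING SUM(t2.close_date = '0000-00-00') = 0   -- no still-open rows in the group
    """
    ```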

    Read the article

  • jquery-autocomplete does not work with my django app.

    - by HWM-Rocker
    Hi everybody, I have a problem with the jquery-autocomplete pluging and my django script. I want an easy to use autocomplete plugin. And for what I see this (http://code.google.com/p/jquery-autocomplete/) one seems very usefull and easy. For the django part I use this (http://code.google.com/p/django-ajax-selects/) I modified it a little, because the out put looked a little bit weired to me. It had 2 '\n' for each new line, and there was no Content-Length Header in the response. First I thought this could be the problem, because all the online examples I found had them. But that was not the problem. I have a very small test.html with the following body: <body> <form action="" method="post"> <p><label for="id_tag_list">Tag list:</label> <input id="id_tag_list" name="tag_list" maxlength="200" type="text" /> </p> <input type="submit" value="Submit" /> </form> </body> And this is the JQuery call to add autocomplete to the input. function formatItem_tag_list(bla,row) { return row[2] } function formatResult_tag_list(bla,row) { return row[1] } $(document).ready(function(){ $("input[id='id_tag_list']").autocomplete({ url:'http://gladis.org/ajax/tag', formatItem: formatItem_tag_list, formatResult: formatResult_tag_list, dataType:'text' }); }); When I'm typing something inside the Textfield Firefox (firebug) and Chromium-browser indicates that ther is an ajax call but with no response. If I just copy the line into my browser, I can see the the response. (this issue is solved, it was a safety feature from ajax not to get data from another domain) For example when I am typing Bi in the textfield, the url "http://gladis.org/ajax/tag?q=Bi&max... is generated. When you enter this in your browser you get this response: 4|Bier|Bier 43|Kolumbien|Kolumbien 33|Namibia|Namibia Now my ajax call get the correct response, but there is still no list showing up with all the possible entries. I tried also to format the output, but this doesn't work either. I set brakepoints to the function and realized that they won't be called at all. Here is a link to my minimum HTML file http://gladis.org/media/input.html Has anybody an idea what i did wrong. I also uploaded all the files as a small zip at http://gladis.org/media/example.zip. Thank you for your help! [Edit] here is the urls conf: (r'^ajax/(?P<channel>[a-z]+)$', 'ajax_select.views.ajax_lookup'), and the ajax lookup channel configuration AJAX_LOOKUP_CHANNELS = { # the simplest case, pass a DICT with the model and field to search against : 'tag' : dict(model='htags.Tag', search_field='text'), } and the view: def ajax_lookup(request,channel): """ this view supplies results for both foreign keys and many to many fields """ # it should come in as GET unless global $.ajaxSetup({type:"POST"}) has been set # in which case we'll support POST if request.method == "GET": # we could also insist on an ajax request if 'q' not in request.GET: return HttpResponse('') query = request.GET['q'] else: if 'q' not in request.POST: return HttpResponse('') # suspicious query = request.POST['q'] lookup_channel = get_lookup(channel) if query: instances = lookup_channel.get_query(query,request) else: instances = [] results = [] for item in instances: results.append(u"%s|%s|%s" % (item.pk,lookup_channel.format_item(item),lookup_channel.format_result(item))) ret_string = "\n".join(results) resp = HttpResponse(ret_string,mimetype="text/html") resp['Content-Length'] = len(ret_string) return resp

    Read the article

  • How to print PCL directly to the printer? (VB.NET, VS 2010)

    - by Justin
    Good day, I've been trying to upgrade an old Print Program that prints raw pcl directly to the printer. The print program takes a PCL Mask and a PCL Batch Spool File (just pcl with page turn commands, I think) merges them and sends them off to the printer. I'm able to send to the Printer a file stream of PCL but I get mixed results and i do not understand why printing is so difficult under .net. I mean yes there is the PrintDocument class, but to print PCL... Let's just say I'm ready to detach my printer from the network and burn it a live. Here is my class PrintDirect (rather a hybrid class) Imports System Imports System.Text Imports System.Runtime.InteropServices <StructLayout(LayoutKind.Sequential)> _ Public Structure DOCINFO <MarshalAs(UnmanagedType.LPWStr)> _ Public pDocName As String <MarshalAs(UnmanagedType.LPWStr)> _ Public pOutputFile As String <MarshalAs(UnmanagedType.LPWStr)> _ Public pDataType As String End Structure 'DOCINFO Public Class PrintDirect <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=False, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function OpenPrinter(ByVal pPrinterName As String, ByRef phPrinter As IntPtr, ByVal pDefault As Integer) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=False, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function StartDocPrinter(ByVal hPrinter As IntPtr, ByVal Level As Integer, ByRef pDocInfo As DOCINFO) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function StartPagePrinter(ByVal hPrinter As IntPtr) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Ansi, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function WritePrinter(ByVal hPrinter As IntPtr, ByVal data As String, ByVal buf As Integer, ByRef pcWritten As Integer) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function EndPagePrinter(ByVal hPrinter As IntPtr) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function EndDocPrinter(ByVal hPrinter As IntPtr) As Long End Function <DllImport("winspool.drv", CharSet:=CharSet.Unicode, ExactSpelling:=True, CallingConvention:=CallingConvention.StdCall)> _ Public Shared Function ClosePrinter(ByVal hPrinter As IntPtr) As Long End Function 'Helps uses to work with the printer Public Class PrintJob ''' <summary> ''' The address of the printer to print to. ''' </summary> ''' <remarks></remarks> Public PrinterName As String = "" Dim lhPrinter As New System.IntPtr() ''' <summary> ''' The object deriving from the Win32 API, Winspool.drv. ''' Use this object to overide the settings defined in the PrintJob Class. ''' </summary> ''' <remarks>Only use this when absolutly nessacary.</remarks> Public OverideDocInfo As New DOCINFO() ''' <summary> ''' The PCL Control Character or the ascii escape character. \x1b ''' </summary> ''' <remarks>ChrW(27)</remarks> Public Const PCL_Control_Character As Char = ChrW(27) ''' <summary> ''' Opens a connection to a printer, if false the connection could not be established. 
''' </summary> ''' <returns></returns> ''' <remarks></remarks> Public Function OpenPrint() As Boolean Dim rtn_val As Boolean = False PrintDirect.OpenPrinter(PrinterName, lhPrinter, 0) If Not lhPrinter = 0 Then rtn_val = True End If Return rtn_val End Function ''' <summary> ''' The name of the Print Document. ''' </summary> ''' <value></value> ''' <returns></returns> ''' <remarks></remarks> Public Property DocumentName As String Get Return OverideDocInfo.pDocName End Get Set(ByVal value As String) OverideDocInfo.pDocName = value End Set End Property ''' <summary> ''' The type of Data that will be sent to the printer. ''' </summary> ''' <value>Send the print data type, usually "RAW" or "LPR". To overide use the OverideDocInfo Object</value> ''' <returns>The DataType as it was provided, if it is not part of the enumeration it is set to "UNKNOWN".</returns> ''' <remarks></remarks> Public Property DocumentDataType As PrintDataType Get Select Case OverideDocInfo.pDataType.ToUpper Case "RAW" Return PrintDataType.RAW Case "LPR" Return PrintDataType.LPR Case Else Return PrintDataType.UNKOWN End Select End Get Set(ByVal value As PrintDataType) OverideDocInfo.pDataType = [Enum].GetName(GetType(PrintDataType), value) End Set End Property Public Property DocumentOutputFile As String Get Return OverideDocInfo.pOutputFile End Get Set(ByVal value As String) OverideDocInfo.pOutputFile = value End Set End Property Enum PrintDataType UNKOWN = 0 RAW = 1 LPR = 2 End Enum ''' <summary> ''' Initializes the printing matrix ''' </summary> ''' <param name="PrintLevel">I have no idea what the hack this is...</param> ''' <remarks></remarks> Public Sub OpenDocument(Optional ByVal PrintLevel As Integer = 1) PrintDirect.StartDocPrinter(lhPrinter, PrintLevel, OverideDocInfo) End Sub ''' <summary> ''' Starts the next page. ''' </summary> ''' <remarks></remarks> Public Sub StartPage() PrintDirect.StartPagePrinter(lhPrinter) End Sub ''' <summary> ''' Writes a string to the printer, can be used to write out a section of the document at a time. ''' </summary> ''' <param name="data">The String to Send out to the Printer.</param> ''' <returns>The pcWritten as an integer, 0 may mean the writter did not write out anything.</returns> ''' <remarks>Warning the buffer is automatically created by the lenegth of the string.</remarks> Public Function WriteToPrinter(ByVal data As String) As Integer Dim pcWritten As Integer = 0 PrintDirect.WritePrinter(lhPrinter, data, data.Length - 1, pcWritten) Return pcWritten End Function Public Sub EndPage() PrintDirect.EndPagePrinter(lhPrinter) End Sub Public Sub CloseDocument() PrintDirect.EndDocPrinter(lhPrinter) End Sub Public Sub ClosePrint() PrintDirect.ClosePrinter(lhPrinter) End Sub ''' <summary> ''' Opens a connection to a printer and starts a new document. ''' </summary> ''' <returns>If false the connection could not be established. </returns> ''' <remarks></remarks> Public Function Open() As Boolean Dim rtn_val As Boolean = False rtn_val = OpenPrint() If rtn_val Then OpenDocument() End If Return rtn_val End Function ''' <summary> ''' Closes the document and closes the connection to the printer. ''' </summary> ''' <remarks></remarks> Public Sub Close() CloseDocument() ClosePrint() End Sub End Class End Class 'PrintDirect Here is how I print my file. I'm simple printing the PCL Masks, to show proof of concept. But I can't even do that. I can effectively create PCL and send it the printer without reading the file and it works fine... 
Plus I get mixed results with different stream reader text encoding as well. Dim pJob As New PrintDirect.PrintJob pJob.DocumentName = " test doc" pJob.DocumentDataType = PrintDirect.PrintJob.PrintDataType.RAW pJob.PrinterName = sPrinter.GetDevicePath '//This is where you'd stick your device name, sPrinter stands for Selected Printer from a dropdown (combobox). 'pJob.DocumentOutputFile = "C:\temp\test-spool.txt" If Not pJob.OpenPrint() Then MsgBox("Unable to connect to " & pJob.PrinterName, MsgBoxStyle.OkOnly, "Print Error") Exit Sub End If pJob.OpenDocument() pJob.StartPage() Dim sr As New IO.StreamReader(Me.txtFile.Text, Text.Encoding.ASCII) '//I've got best results with ASCII before, but only mized. Dim line As String = sr.ReadLine 'Fix for fly code on first run 'If line = 27 Then line = PrintDirect.PrintJob.PCL_Control_Character Do While (Not line Is Nothing) pJob.WriteToPrinter(line) line = sr.ReadLine Loop 'pJob.WriteToPrinter(ControlChars.FormFeed) pJob.EndPage() pJob.CloseDocument() sr.Close() sr.Dispose() sr = Nothing pJob.Close() I was able to get to print the mask, in the morning... now I'm getting strange printer characters scattered through 5 pages... I'm totally clueless. I must have changed something or the printer is at fault. You can access my test Check Mask, here http://kscserver.com/so/chk_mask.zip .

    Read the article

  • How to make a simple graphical interface in C# for DCRAW

    - by Espinas.iss
    Hello, I have a problem. I need to make a simple GUI in Visual Studio 2008 using C# that uses Dave Coffin's DCRAW, which is written in C, but I don't know how to "connect" the dcraw.c file (the DCRAW source code) with C#. UFRAW is an example of a graphical interface that uses dcraw, but I can't find its source code. My application should be very simple: recognize a raw file on a digital camera or any disk and apply one interpolation algorithm to that raw file.

    Read the article

  • Slow MySQL query....only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs quicker than a second, and other times takes 1 to 10 minutes to run. Here's the entry from the slow query log: # Query_time: 543 Lock_time: 0 Rows_sent: 0 Rows_examined: 124948974 use statsdb; SELECT count(distinct Visits.visitorid) as 'uniques' FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime>=1275721200 and visittime<=1275807599 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9 AND Visits.visitorid NOT IN (SELECT Visits.visitorid FROM Visits,Visitors WHERE Visits.visitorid=Visitors.visitorid and candidateid in (32) and visittime<1275721200 and (omit=0 or omit>=1275807599) AND Visitors.segmentid=9); It's basically counting unique visitors, and it's doing that by counting the visitors for today and then substracting those that have been here before. If you know of a better way to do this, let me know. I just don't understand why sometimes it can be so quick, and other times takes so long - even with the same exact query under the same server load. Here's the EXPLAIN on this query. As you can see it's using the indexes I've set up: id select_type table type possible_keys key key_len ref rows Extra 1 PRIMARY Visits range visittime_visitorid,visitorid visittime_visitorid 4 NULL 82500 Using where; Using index 1 PRIMARY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where 2 DEPENDENT SUBQUERY Visits ref visittime_visitorid,visitorid visitorid 8 func 1 Using where 2 DEPENDENT SUBQUERY Visitors eq_ref PRIMARY,cand_visitor_omit PRIMARY 8 statsdb.Visits.visitorid 1 Using where I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time since 90% of the time the old query returned much quicker. Two seconds per query is too long because we are calling the query up to 50 times per page load, with different time periods. Could the quick behavior be due to the query being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time. Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing? In case you want server info: Accessing via PHP 4.4.4 MySQL 4.1.22 All tables are InnoDB We run optimize table on all tables weekly The sum of both the tables used in the query is 500 MB MySQL config: key_buffer = 350M max_allowed_packet = 16M thread_stack = 128K sort_buffer = 14M read_buffer = 1M bulk_insert_buffer_size = 400M set-variable = max_connections=150 query_cache_limit = 1048576 query_cache_size = 50777216 query_cache_type = 1 tmp_table_size = 203554432 table_cache = 120 thread_cache_size = 4 wait_timeout = 28800 skip-external-locking innodb_file_per_table innodb_buffer_pool_size = 3512M innodb_log_file_size=100M innodb_log_buffer_size=4M
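
    One avenue worth benchmarking (a sketch, not a verified fix) is replacing the dependent NOT IN subquery with a correlated NOT EXISTS anti-join, which older MySQL versions often plan more predictably. It assumes candidateid lives on Visitors and omit on Visits (both are unqualified in the post) and that visitorid is unique in Visitors, which is why the inner re-join to Visitors is dropped; EXPLAIN both forms before trusting it.

    ```python
    # Kept as a string for an EXPLAIN/benchmark harness; %(start)s / %(end)s are
    # the visittime bounds the report passes in.
    NEW_UNIQUES_SQL = """
    SELECT COUNT(DISTINCT v.visitorid) AS uniques
    FROM Visits v
    JOIN Visitors vr ON vr.visitorid = v.visitorid
    WHERE vr.candidateid IN (32)
      AND vr.segmentid = 9
      AND v.visittime BETWEEN %(start)s AND %(end)s
      AND (v.omit = 0 OR v.omit >= %(end)s)
      AND NOT EXISTS (
            SELECT 1
            FROM Visits prev
            WHERE prev.visitorid = v.visitorid
              AND prev.visittime < %(start)s
              AND (prev.omit = 0 OR prev.omit >= %(end)s)
          )
    """
    ```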

    Read the article

  • Five Query Optimizations in MySQL

    Query optimization is an often overlooked part of applications. Sean Hull encourages at least some attention to query optimization up front and helps you identify some of the more common optimizations you may run across.

    Read the article

  • How to build a Query Template Explorer

    Having introduced his cross-platform Query Template solution, Michael now gives us the technical details on how to integrate his .NET controls into applications both simple and complex. With screenshots and code samples, this has everything you need to build your own powerful SQL editor or Query Template explorer.

    Read the article

  • Search For a Query in RDL Files with PowerShell

    - by AllenMWhite
    In tracking down poorly performing queries for clients I often encounter the query text in a trace file I've captured, but don't know the source of the query. I've found that many of the poorest performing queries are those written into the reports the business users need to make their decisions. If I can't figure out where they came from, usually years after the queries were written, I can't fix them. First thing I did was find a great utility called RSScripter , which opens up a Windows dialog...(read more)
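
    The same hunt can be scripted outside PowerShell as well; here is a rough Python equivalent that walks a folder of .rdl files (report definitions are plain XML) and lists the reports whose CommandText contains a given fragment. The paths and default fragment are placeholders.

    ```python
    import sys
    from pathlib import Path
    import xml.etree.ElementTree as ET

    def reports_containing(fragment, root_dir="."):
        hits = []
        for rdl in Path(root_dir).rglob("*.rdl"):
            try:
                tree = ET.parse(rdl)
            except ET.ParseError:
                continue                      # skip malformed files, keep scanning
            for node in tree.iter():
                # RDL namespaces differ between SSRS versions, so match the local tag name.
                if node.tag.endswith("CommandText") and node.text and fragment.lower() in node.text.lower():
                    hits.append(str(rdl))
                    break
        return hits

    if __name__ == "__main__":
        for path in reports_containing(sys.argv[1] if len(sys.argv) > 1 else "SELECT"):
            print(path)
    ```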

    Read the article

  • SQL Server Prefetch and Query Performance

    Prefetching can make a surprising difference to SQL Server query execution times where there is a high incidence of waiting for disk i/o operations, but the benefits come at a cost. Mostly, the Query Optimizer gets it right, but occasionally there are queries that would benefit from tuning.

    Read the article

  • Activate a python virtual environment using activate_this.py in a fabfile on Windows

    - by Rudy Lattae
    I have a Fabric task that needs to access the settings of my Django project. On Windows, I'm unable to install Fabric into the project's virtualenv (issues with Paramiko + pycrypto deps). However, I am able to install Fabric in my system-wide site-packages, no problem. I have installed Django into the project's virtualenv and I am able to use all the " python manage.py" commands easily when I activate the virtualenv with the "VIRTUALENV\Scripts\activate.bat" script. I have a fabric tasks file (fabfile.py) in my project that provides tasks for setup, test, deploy, etc. Some of the tasks in my fabfile need to access the settings of my django project through "from django.conf import settings". Since the only usable Fabric install I have is in my system-wide site-packages, I need to activate the virtualenv within my fabfile so django becomes available. To do this, I use the "activate_this" module of the project's virtualenv in order to have access to the project settings and such. Using "print sys.path" before and after I execute activate_this.py, I can tell the python path changes to point to the virtualenv for the project. However, I still cannot import django.conf.settings. I have been able to successfully do this on *nix (Ubuntu and CentOS) and in Cygwin. Do you use this setup/workflow on Windows? If so Can you help me figure out why this wont work on Windows or provide any tips and tricks to get around this issue? Thanks and Cheers. REF: http://virtualenv.openplans.org/#id9 | Using Virtualenv without bin/python Local development environment: Python 2.5.4 Virtualenv 1.4.6 Fabric 0.9.0 Pip 0.6.1 Django 1.1.1 Windows XP (SP3)
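
    For reference, a minimal sketch of the activate_this.py pattern inside a fabfile on Windows, matching the Python 2 versions in the post; the virtualenv path and settings module are placeholders. Two details often bite here: activation has to happen before django.conf is imported anywhere, and DJANGO_SETTINGS_MODULE still has to be set, since activate_this only rewrites sys.path.

    ```python
    import os

    VENV_ACTIVATE = os.path.join("C:\\", "envs", "myproject", "Scripts", "activate_this.py")

    def use_virtualenv():
        # Python 2 style, as in the virtualenv docs of that era.
        execfile(VENV_ACTIVATE, dict(__file__=VENV_ACTIVATE))
        os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

    def show_debug_flag():
        use_virtualenv()
        from django.conf import settings      # import after activation, not at module top
        print(settings.DEBUG)
    ```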

    Read the article

  • sorl-thumbnail unit tests fail by 1 pixel (!)

    - by stevejalim
    Hi I'm using sorl-thumbnail in a Django 1.2 (currently 1.2 RC) project and getting a surprising failure of four of sorl's built-in unit tests. Essentially, the resized images are all 1px shorter than the unit tests expect them to be. See below for details I'm developing on OSX 10.5.8 (not Snow Leopard) with Python 2.5.1 (r251:54863, Feb 6 2009, 19:02:12) and PIL 1.1.6. Any thoughts what might be up? Cheers Steve ====================================================================== FAIL: test_extension (sorl.thumbnail.tests.fields.FieldTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/fields.py", line 66, in test_extension self.verify_thumbnail((50, 37), thumb, expected_filename) File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/base.py", line 92, in verify_thumbnail self.assertEqual(image.size, expected_size) AssertionError: (50, 38) != (50, 37) ====================================================================== FAIL: test_thumbnail (sorl.thumbnail.tests.fields.ImageWithThumbnailsFieldTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/fields.py", line 111, in test_thumbnail self.verify_thumbnail((50, 37), thumb, expected_filename) File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/base.py", line 92, in verify_thumbnail self.assertEqual(image.size, expected_size) AssertionError: (50, 38) != (50, 37) ====================================================================== FAIL: testTag (sorl.thumbnail.tests.templatetags.ThumbnailTagTest) ---------------------------------------------------------------------- Traceback (most recent call last): File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/templatetags.py", line 118, in testTag self.verify_thumbnail((90, 67), expected_filename=expected_fn) File "/usr/local/django/myprojectnamehere/lib/sorl/thumbnail/tests/base.py", line 92, in verify_thumbnail self.assertEqual(image.size, expected_size) AssertionError: (90, 68) != (90, 67)
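
    The consistent off-by-one looks like a rounding difference in how the thumbnail height is computed by this particular Python/PIL combination; a quick illustration, assuming a 500x375 source image (the real source size is not shown in the post):

    ```python
    # How truncation vs. rounding disagree by exactly one pixel.
    width, height = 500, 375          # assumed source size, not taken from the post
    target_width = 50

    exact_height = height * (target_width / float(width))   # 37.5

    print(int(exact_height))          # 37  -> what the bundled tests expect
    print(int(round(exact_height)))   # 38  -> what this environment produces
    ```

    If that is the cause, pinning the PIL version the test suite was written against (or adjusting the expected sizes) is the usual way out.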

    Read the article

  • How to make this special many-to-many field validation using the Django ORM?

    - by e-satis
    I have the following model: class Step(models.Model): order = models.IntegerField() latitude = models.FloatField() longitude = models.FloatField() date = DateField(blank=True, null=True) class Journey(models.Model): boat = models.ForeignKey(Boat) route = models.ManyToManyField(Step) departure = models.ForeignKey(Step, related_name="departure_of", null=True) arrival = models.ForeignKey(Step, related_name="arrival_of", null=True) I would like to implement the following check: # If there are fewer than two steps, raise ValidationError. routes = tuple(self.route.order_by("date")) if len(routes) <= 1: raise ValidationError("There must be at least two steps in the route") # save the first and the last step as departure and arrival self.departure = routes[0] self.arrival = routes[-1] # departure and arrival must at least have a date if not (self.departure.date or self.arrival.date): raise ValidationError("There must be a departure and an arrival date. " "Please set the date field for the first and last Step of the Journey") # departure must occur before arrival if not (self.departure.date > self.arrival.date): raise ValidationError("Departure must take place the same day or any date before arrival. " "Please set the date field accordingly for the first and last Step of the Journey") I tried to do that by overloading save(). Unfortunately, Journey.route is empty in save(); what's more, Journey.id doesn't exist yet. I didn't try django.db.models.signals.post_save, but I suppose it will fail because Journey.route is empty there as well (when does it get filled, anyway?). I see a solution in django.db.models.signals.m2m_changed, but there are a lot of steps (thousands), and I want to avoid performing an operation for every single one of them.
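
    For what it's worth, m2m_changed does not have to mean one operation per Step: the post_add/post_remove actions fire once per bulk .add()/.remove()/.clear() call with the whole pk_set, so the validation can run in a single pass. A hedged sketch, assumed to sit below the Journey model in models.py; the checks follow the post's error messages (the posted save() version appears to have the date comparison inverted).

    ```python
    from django.core.exceptions import ValidationError
    from django.db.models.signals import m2m_changed

    def validate_route(sender, instance, action, **kwargs):
        if action not in ("post_add", "post_remove", "post_clear"):
            return                                    # one pass per bulk change, not per Step
        steps = list(instance.route.order_by("date"))
        if len(steps) <= 1:
            raise ValidationError("There must be at least two steps in the route")
        departure, arrival = steps[0], steps[-1]
        if not (departure.date and arrival.date):
            raise ValidationError("The first and last Step of the Journey need a date")
        if departure.date > arrival.date:
            raise ValidationError("Departure must take place on or before the arrival date")
        # Persist the endpoints without calling save() again.
        Journey.objects.filter(pk=instance.pk).update(departure=departure, arrival=arrival)

    m2m_changed.connect(validate_route, sender=Journey.route.through)
    ```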

    Read the article
