Search Results

Search found 31328 results on 1254 pages for 'sql join'.


  • Interview with Geoff Bones, developer on SQL Storage Compress

    - by red(at)work
    How did you come to be working at Red Gate?
    I've been working at Red Gate for nine months; before that I had been at a multinational engineering company. A number of my colleagues had left to work at Red Gate and spoke very highly of it, but I was happy in my role and thought, 'It can't be that great there, surely? They'll be back!' Then one day I visited to catch up with them over lunch in the Red Gate canteen. I was so impressed with what I found there that, three days later, I'd applied for a role as a developer.

    And how did you get into software development?
    My first job out of university was working as a systems programmer on IBM mainframes. This was quite a while ago: there was a lot of assembler and loading programs from tape drives and that kind of stuff. I learned a lot about how computers work, and this stood me in good stead when I moved over to development in the 90s.

    What's the best thing about working as a developer at Red Gate?
    Where should I start? One of the great things as a developer at Red Gate is the useful feedback and close contact we have with the people who use our products, either directly at trade shows and other events or through information coming through the product managers. The company's whole ethos is built around assisting the user, and this is in stark contrast to my previous development roles. We aim to produce tools that people really want to use, that they enjoy using, and, as a developer, this is a great thing to aim for and a great feeling when we get it right. At Red Gate we also try to cut out the things that distract and stop us doing our jobs. As a developer, this means that I can focus on the code and the product I'm working on, knowing that others are doing a first-class job of making sure that the builds are running smoothly and that I'm getting great feedback from the testers. We keep our process light and effective, as we want to produce great software more than we want to produce great audit trails.

    Tell us a bit about the products you are currently working on.
    You mean HyperBac? First let me explain a bit about what HyperBac is. At heart it's a compression and encryption technology, but with a few added features that open up a wealth of really exciting possibilities. Right now we have the HyperBac technology in just three products: SQL HyperBac, SQL Virtual Restore and SQL Storage Compress, but we're only starting to develop what it can do. My personal favourite is SQL Virtual Restore; for example, I love the way you can use it to run independent test databases that are all backed by a single compressed backup. I don't think the market yet realises the kind of things you can do once you are using these products. On the other hand, the benefits of SQL Storage Compress are straightforward: run your databases but use only 20% of the disk space. Databases are getting larger and larger, and, as they do, so does your ROI.

    What's a typical day for you?
    My days are pretty varied. We have our daily team stand-up meeting and then sometimes I will work alone on a current issue, or I'll be pair programming with one of my colleagues. From time to time we give half a day up to future planning with the team, when we look at the long- and short-term aims for the product and work out the development priorities. I also get to go to conferences and events, which is unusual for a development role and gives me the chance to meet and talk to our customers directly.

    Have you noticed anything different about developing tools for DBAs rather than other kinds of IT user?
    It seems to me that DBAs are quite independent-minded; they know exactly what the problem they are facing is, and often have a solution in mind before they begin to look for what's on the market. This means that they're likely to cherry-pick tools from a range of vendors, picking the ones that are the best fit for them and that disrupt their environments the least. When I've met with DBAs, I've often been very impressed at their ability to summarise their setup, the issues, the obstacles they face when implementing a tool and their plans for their environment. It's easier to develop products for this audience as they give such a detailed overview of their needs, and I feel I understand their problems.

    Read the article

  • Building dynamic OLAP data marts on-the-fly

    - by DrJohn
    At the forthcoming SQLBits conference, I will be presenting a session on how to dynamically build an OLAP data mart on-the-fly. This blog entry is intended to clarify exactly what I mean by an OLAP data mart, why you may need to build them on-the-fly and finally outline the steps needed to build them dynamically. In subsequent blog entries, I will present exactly how to implement some of the techniques involved.

    What is an OLAP data mart?
    In data warehousing parlance, a data mart is a subset of the overall corporate data provided to business users to meet specific business needs. Of course, the term does not specify the technology involved, so I coined the term "OLAP data mart" to identify a subset of data which is delivered in the form of an OLAP cube which may be accompanied by the relational database upon which it was built. To clarify, the relational database is specifically created and loaded with the subset of data, and then the OLAP cube is built and processed to make the data available to the end-users via standard OLAP client tools.

    Why build OLAP data marts?
    Market research companies sell data to their clients to make money. To gain competitive advantage, market research providers like to "add value" to their data by providing systems that enhance analytics, thereby allowing clients to make best use of the data. As such, OLAP cubes have become a standard way of delivering added value to clients. They can be built on-the-fly to hold specific data sets and meet particular needs, and then hosted on a secure intranet site for remote access, or shipped to clients' own infrastructure for hosting. Even better, they support a wide range of different tools for analytical purposes, including the ever popular Microsoft Excel.

    Extension Attributes: The Challenge
    One of the key challenges in building multiple OLAP data marts based on the same 'template' is handling extension attributes. These are attributes that meet the client's specific reporting needs, but do not form part of the standard template. Now clearly, these extension attributes have to come into the system via additional files and ultimately be added to relational tables so they can end up in the OLAP cube. However, processing these files and filling dynamically altered tables with SSIS is a challenge, as SSIS packages tend to break as soon as the database schema changes. There are two approaches to this: (1) dynamically build an SSIS package in memory to match the new database schema using C#, or (2) have the extension attributes provided as name/value pairs so the file's schema does not change and can easily be loaded using SSIS. The problem with the first approach is the complexity of writing an awful lot of complex C# code. The problem with the second approach is that name/value pairs are useless to an OLAP cube, so they have to be pivoted back into a proper relational table somewhere in the data load process WITHOUT breaking SSIS. How this can be done will be part of a future blog entry.

    What is involved in building an OLAP data mart?
    There are a great many steps involved in building OLAP data marts on-the-fly. The key point is that all the steps must be automated to allow for the production of multiple OLAP data marts per day (i.e. many thousands, each with its own specific data set and attributes). Now most of these steps have a great deal in common with standard data warehouse practices. The key difference is that the databases are all built to order.
    The only permanent database is the metadata database (shown in orange), which holds all the metadata needed to build everything else (i.e. client orders, configuration information, connection strings, client-specific requirements and attributes, etc.). The staging database (shown in red) has a short life: it is built, populated and then ripped down as soon as the OLAP data mart has been populated. In the diagram below, the OLAP data mart comprises the two blue components: the Data Mart, which is a relational database, and the OLAP Cube, which is an OLAP database implemented using Microsoft Analysis Services (SSAS). The client may receive just the OLAP cube or both components together, depending on their reporting requirements.

    So, in broad terms, the steps required to fulfil a client order are as follows:

    Step 1: Prepare metadata
    - Create a set of database names unique to the client's order
    - Modify all package connection strings to be used by SSIS to point to the new databases and file locations

    Step 2: Create relational databases
    - Create the staging and data mart relational databases using dynamic SQL, and set the database recovery mode to SIMPLE as we do not need the overhead of logging anything (see the sketch at the end of this entry)
    - Execute SQL scripts to build all database objects (tables, views, functions and stored procedures) in the two databases

    Step 3: Load staging database
    - Use SSIS to load all data files into the staging database in a parallel operation
    - Load extension files containing name/value pairs; these will provide client-specific attributes in the OLAP cube

    Step 4: Load data mart relational database
    - Load the data from staging into the data mart relational database, again in parallel where possible
    - Allocate surrogate keys and use SSIS to perform surrogate key lookups during the load of fact tables

    Step 5: Load extension tables & attributes
    - Pivot the extension attributes from their native name/value pairs into proper relational tables
    - Add the extension attributes to the views used by the OLAP cube

    Step 6: Deploy & process OLAP cube
    - Deploy the OLAP database directly to the server using a C# script task in SSIS
    - Modify the connection string used by the OLAP cube to point to the data mart relational database
    - Modify the cube structure to add the extension attributes to both the data source view and the relevant dimensions
    - Remove any standard attributes that are not required
    - Process the OLAP cube

    Step 7: Backup and drop databases
    - Drop the staging database as it is no longer required
    - Back up the data mart relational and OLAP databases and ship these to the client's infrastructure
    - Drop the data mart relational and OLAP databases from the build server
    - Mark the order complete and start processing the next order, ad infinitum

    So my future blog posts and my forthcoming session at the SQLBits conference will all focus on some of the more interesting aspects of building OLAP data marts on-the-fly, such as handling the load of extension attributes and how to dynamically alter the structure of an OLAP cube using C#.
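    To make Step 2 concrete, here is a minimal sketch of the dynamic SQL involved; the database name and the way it is derived are hypothetical, as in the real system both would come from the metadata database:

        DECLARE @db sysname = N'DataMart_Order_0042';  -- hypothetical name, generated per order
        DECLARE @sql nvarchar(max);

        -- create the order-specific database
        SET @sql = N'CREATE DATABASE ' + QUOTENAME(@db) + N';';
        EXEC sp_executesql @sql;

        -- SIMPLE recovery: no logging overhead for a throwaway build database
        SET @sql = N'ALTER DATABASE ' + QUOTENAME(@db) + N' SET RECOVERY SIMPLE;';
        EXEC sp_executesql @sql;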

    Read the article

  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SQLChicken on Twitter. It's a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we'll be talking about database testing and refactoring. In 3 posts we'll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Tools of the trade. This is the second part of the series, and in it we'll take a look at what database refactoring is and why we do it.

    Why refactor a database
    To know why we refactor, we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior before refactoring. If you haven't, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more reason to have a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them and resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white box tests will break when we refactor.

    What to refactor in a database
    Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.

    Adding or removing database schema objects
    Adding, removing or changing table columns in any way, adding constraints, keys, etc... All of these can be counted as internal changes not visible to the data consumer. But each of these carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint shows duplicated data that shouldn't exist. Foreign keys break a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With the proper database abstraction layer fully covered with black box tests we can make sure something like that does not happen (hopefully at all).

    Changing physical structures
    Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database. But the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won't that make users happy? But what if that index causes our write operations to crawl to a stop? Again we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!

    Fixing bad code
    We all have some bad code in our systems. We usually refer to such code as code smells, as they violate good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc... Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about that until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail (a short demonstration appears at the end of this entry). Go read about it, it's informative. Refactoring this includes replacing the * with column names and most likely a change to the application using the database.

    Breaking apart huge stored procedures
    Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database. That's the application's part. Refactoring such behemoths requires writing lots of edge case tests for the stored procedure's input/output behavior and then starting to refactor it. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code it's best to transfer that logic into the applications consuming it. This only leaves the stored procedure with common data manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.

    Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code, the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.
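    As a footnote, here is a minimal sketch of the SELECT * view refresh problem mentioned above; the table and view names are made up for the demonstration:

        -- a view defined with SELECT * captures the column list at creation time
        CREATE TABLE dbo.Orders (OrderID int, Amount money);
        GO
        CREATE VIEW dbo.vOrders AS SELECT * FROM dbo.Orders;
        GO
        ALTER TABLE dbo.Orders ADD CustomerID int;
        GO
        SELECT * FROM dbo.vOrders;          -- still returns only OrderID and Amount
        EXEC sp_refreshview N'dbo.vOrders'; -- rebuilds the view metadata
        SELECT * FROM dbo.vOrders;          -- now includes CustomerID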

    Read the article

  • Nhibernate upgraded getting 'Antlr.Runtime.NoViableAltException' on outer join using *=

    - by user86431
    So we upgraded to newer NHibernate and Fluent NHibernate, and now I'm getting this exception:

    Failed NHibernate.Hql.Ast.ANTLR.QuerySyntaxException: Exception of type 'Antlr.Runtime.NoViableAltException' was thrown. near line 1, column 459

    on this HQL, which worked fine before the upgrade:

        SELECT s.StudId, s.StudLname, s.StudFname, s.StudMi, s.Ssn, s.Sex, s.Dob,
               et.EnrtypeId, et.Active, et.EnrId,
               sss.StaffLname, sss.StaffFname, sss.StaffMi, vas.CurrentAge
        FROM CIS3G.Jcdc.EO.StudentEO s,
             CIS3G.Jcdc.EO.EnrollmentEO e,
             CIS3G.Jcdc.EO.EnrollmentTypeEO et,
             CIS3G.Jcdc.EO.VwStaffStudentStaffEO sss,
             CIS3G.Jcdc.EO.VwAgeStudentEO vas
        WHERE ( e.EnrId = et.EnrId )
          AND ( s.StudId = vas.StudId )
          AND ( s.StudId = e.StudId )
          AND ( et.EnrtypeId *= sss.EnrtypeId )
          AND ( Isnull ( sss.StudStaffRoleCd , 1044 ) = 1044 )
          AND ( s.StudId = 4000 )

    Clearly it does not like the *= syntax. I tried rewriting it as an ANSI SQL outer join, with no joy. Can anyone tell me what I need to change the SQL to so I can get the outer join to work correctly? Thanks, Eric
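    For reference, *= is the legacy T-SQL/Sybase left outer join operator, which the newer ANTLR-based HQL parser no longer accepts. The ANSI form moves the outer-joined table into a LEFT OUTER JOIN and its filter into the ON clause. A sketch in plain SQL (the table names are stand-ins for whatever the entities map to; note that classic HQL only supports theta-style inner joins between entities with no mapped association, so the outer join may require a mapped association or a native SQL query):

        SELECT s.StudId, s.StudLname, s.StudFname, s.StudMi, s.Ssn, s.Sex, s.Dob,
               et.EnrtypeId, et.Active, et.EnrId,
               sss.StaffLname, sss.StaffFname, sss.StaffMi, vas.CurrentAge
        FROM Student s
        JOIN Enrollment e      ON s.StudId = e.StudId
        JOIN EnrollmentType et ON e.EnrId  = et.EnrId
        JOIN VwAgeStudent vas  ON s.StudId = vas.StudId
        LEFT OUTER JOIN VwStaffStudentStaff sss
               ON  et.EnrtypeId = sss.EnrtypeId
               -- the filter on the outer table belongs in the ON clause,
               -- otherwise unmatched rows are discarded again
               AND ISNULL(sss.StudStaffRoleCd, 1044) = 1044
        WHERE s.StudId = 4000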

    Read the article

  • mysql true row merge... not just a union

    - by panofish
    What is the MySQL I need to achieve the result below, given these 2 tables:

    table1:
    +----+-------+
    | id | name  |
    +----+-------+
    | 1  | alan  |
    | 2  | bob   |
    | 3  | dave  |
    +----+-------+

    table2:
    +----+---------+
    | id | state   |
    +----+---------+
    | 2  | MI      |
    | 3  | WV      |
    | 4  | FL      |
    +----+---------+

    I want to create a temporary view that looks like this desired result:

    +----+---------+---------+
    | id | name    | state   |
    +----+---------+---------+
    | 1  | alan    |         |
    | 2  | bob     | MI      |
    | 3  | dave    | WV      |
    | 4  |         | FL      |
    +----+---------+---------+

    I tried a MySQL union but the following result is not what I want.

        create view table3 as
        (select id, name, "" as state from table1)
        union
        (select id, "" as name, state from table2)

    table3 union result:

    +----+---------+---------+
    | id | name    | state   |
    +----+---------+---------+
    | 1  | alan    |         |
    | 2  | bob     |         |
    | 3  | dave    |         |
    | 2  |         | MI      |
    | 3  |         | WV      |
    | 4  |         | FL      |
    +----+---------+---------+

    First suggestion results:

        SELECT * FROM table1 LEFT OUTER JOIN table2 USING (id)
        UNION
        SELECT * FROM table1 RIGHT OUTER JOIN table2 USING (id)

    +----+---------+---------+
    | id | name    | state   |
    +----+---------+---------+
    | 1  | alan    |         |
    | 2  | bob     | MI      |
    | 3  | dave    | WV      |
    | 2  | MI      | bob     |
    | 3  | WV      | dave    |
    | 4  | FL      |         |
    +----+---------+---------+
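    For what it's worth, MySQL has no FULL OUTER JOIN, and SELECT * after a RIGHT JOIN ... USING reorders the columns, which is why the second half of the suggested union comes out swapped. The usual workaround is to list the columns explicitly in both halves, letting UNION remove the duplicated matched rows. A sketch:

        SELECT t1.id, t1.name, t2.state
        FROM table1 t1 LEFT JOIN table2 t2 ON t1.id = t2.id
        UNION
        SELECT t2.id, t1.name, t2.state
        FROM table1 t1 RIGHT JOIN table2 t2 ON t1.id = t2.id;
        -- yields (1, alan, NULL), (2, bob, MI), (3, dave, WV), (4, NULL, FL)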

    Read the article

  • MySQL calling in Username to show instead of ID!

    - by Jess
    I have a users table, books table and authors table. An author can have many books, while a user can also have many books. (This is how my DB is currently set up.) As I'm pretty new to this, so far my setup is like bookview.php?book_id=23, reached from the author's page after seeing all books for that author. The single book's details are all displayed on this new page. I can get the output to display the user ID associated with the book, but not the user name, and this also applies to the author's name: I can get the author ID to display, but not the name, so somewhere in the query below I am not calling in the correct values:

        SELECT users.user_id, authors.author_id, books.book_id,
               books.bookname, books.bookprice, books.bookplot
        FROM books
        INNER JOIN authors ON books.book_id = authors.book_id
        INNER JOIN users ON books.book_id = users.user_id
        WHERE books.book_id=" . $book_id;

    Could someone help me correct this so I can display the author name and user name both associated with the book? Thanks for the help :)
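    A minimal sketch of a fix, assuming books carries author_id and user_id foreign keys and that the name columns are called username and author_name (all hypothetical, since the question does not show the schema). The original ON clauses compare book_id to author_id/user_id, which only matches by numeric coincidence; the joins should follow the foreign keys, and the name columns must appear in the SELECT list:

        -- hypothetical columns: books.author_id, books.user_id,
        -- users.username, authors.author_name
        SELECT u.username, a.author_name,
               b.book_id, b.bookname, b.bookprice, b.bookplot
        FROM books b
        INNER JOIN authors a ON b.author_id = a.author_id
        INNER JOIN users   u ON b.user_id   = u.user_id
        WHERE b.book_id = 23;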

    Read the article

  • How to join two wav files using Python?

    - by kaushik
    I am using the Python programming language and I want to join two wav files, one at the end of the other. There is a question in the forum which suggests how to merge two wav files, i.e. add the contents of one wav file at a certain offset, but I want to join the two wav files end to end. I also had a problem playing my own wav file using the winsound module. I was able to play the sound by using time.sleep for a certain time before playing any Windows sound; the disadvantage with this is that if I want to play a sound longer than time.sleep(N), the Windows sound will just overlap it after N seconds and the winsound playback stops. Can anyone help? Please kindly suggest how to solve these problems. Thanks in advance.

    Read the article

  • What are some useful SQL statements that should be known by all developers who may touch the Back en

    - by Jian Lin
    What are some useful SQL statements that should be known by all developers who may touch the back end side of the project? (Update: just like in algorithms, we know there are sorting problems and shuffling problems, and we know some solutions to them. This question is aiming at the same thing.) For example, ones I can think of are:
    - Get a list of Employees and their boss. Or one with the employee's salary greater than the boss's. (Self-join)
    - Get a list of the most popular Classes registered by students, from the greatest number to the smallest. (Count, group by, order by)
    - Get a list of Classes that are not registered by any students. (Outer join and check whether the match is NULL, or get from the Classes table all ClassIDs which are NOT IN (a subquery to get all ClassIDs from the Registrations table))
    Are there some SQL statements that should be under the sleeve of all developers that might touch back end data?
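    By way of illustration, here are sketches of the three examples against hypothetical tables Employees(emp_id, name, salary, boss_id), Classes(class_id, title) and Registrations(class_id, student_id); the table and column names are made up:

        -- 1. Employees and their boss (self-join)
        SELECT e.name AS employee, b.name AS boss
        FROM Employees e
        JOIN Employees b ON e.boss_id = b.emp_id;
        -- add: WHERE e.salary > b.salary   for employees out-earning their boss

        -- 2. Most popular classes, largest enrolment first
        SELECT c.title, COUNT(*) AS registrations
        FROM Registrations r
        JOIN Classes c ON c.class_id = r.class_id
        GROUP BY c.title
        ORDER BY registrations DESC;

        -- 3. Classes with no registrations (outer join + NULL test)
        SELECT c.title
        FROM Classes c
        LEFT JOIN Registrations r ON r.class_id = c.class_id
        WHERE r.class_id IS NULL;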

    Read the article

  • AspectJ join point with simple types

    - by Jon
    Hi! Are there defined join points in arithmetic that I can catch? Something like:

        int a = 4;
        int b = 2;
        int c = a + b;

    Can I make a pointcut that catches any one of those lines? And what context will I be able to get? I would like to add a before() to all int/float/double manipulation done in a particular method on a class; is that possible? I see in the AspectJ docs that there are defined join points for object initialization and method calls. Is declaring an int an object initialization, and does the + operator count as a method call? Thanks!

    Read the article

  • JOIN two tables to show already purchased items

    - by Norbert
    I have a table where I keep all my templates:

    templates: template_id, template_name, template_price

    These templates can be purchased by a registered user and then are inserted in the payments table:

    payments: payment_id, template_id, user_id

    Is there a way to join these two tables and get not just a list of templates that have been purchased by a certain user, but all the templates? And then figure out from there which ones have already been purchased? I used this SELECT, but only the ones that the user bought showed up. I would like to have all the rows from templates, but empty in case the user_id doesn't match.

        SELECT *
        FROM templates
        LEFT JOIN payments ON templates.template_id = payments.template_id
        WHERE user_id = 2
        GROUP BY templates.template_id
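    The usual trick here is to move the user filter from the WHERE clause into the join condition: filtering on payments.user_id in WHERE discards the unmatched (NULL-extended) rows and effectively turns the LEFT JOIN back into an inner join. A sketch:

        SELECT t.template_id, t.template_name, t.template_price,
               p.payment_id   -- NULL when this user has not bought the template
        FROM templates t
        LEFT JOIN payments p
               ON t.template_id = p.template_id
              AND p.user_id = 2
        ORDER BY t.template_id;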

    Read the article

  • SQL join from multiple tables

    - by Kenny Anderson
    Hi all. We've got a system (MS SQL 2008 R2-based) that has a number of "input" databases and one "output" database. I'd like to write a query that will read from the output DB and JOIN it to data in one of the source DBs. However, the source table may be one or more individual tables :( The name of the source DB is included in the output DB; ideally, I'd like to do something like the following (pseudo-SQL ahoy):

        SELECT output.UID, output.description, input.data
        FROM output.dbo.description
        LEFT JOIN (SELECT input.UID, input.data
                   FROM [output.sourcedb].dbo.datatable) AS input
               ON input.UID = output.UID

    Is there any way to do something like the above - "dynamically" specify the database and table to be joined on for each row in the query?
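    For what it's worth, T-SQL cannot resolve a database name stored in a column at statement compile time, so a truly per-row join is out; the usual workaround is to build and run the statement once per source database with dynamic SQL. A rough sketch reusing the names from the question (the sourcedb column is an assumption about where the source DB name lives):

        DECLARE @sourcedb sysname, @sql nvarchar(max);

        -- simplified: assumes one source DB name per pass,
        -- iterate over the distinct names in practice
        SELECT @sourcedb = sourcedb FROM output.dbo.description;

        SET @sql = N'SELECT o.UID, o.description, i.data
                     FROM output.dbo.description AS o
                     LEFT JOIN ' + QUOTENAME(@sourcedb) + N'.dbo.datatable AS i
                            ON i.UID = o.UID';
        EXEC sp_executesql @sql;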

    Read the article

  • Custom OData operation / customize EF model to hide join table in many-to-many relationship

    - by AC
    I've got a data model that has two tables with a join table for a many-to-many relationship, and I'm creating an OData service to expose the data for CRUD ops in a Silverlight app. What I'd like to do is abstract the join table away from the service. I'm not sure if the best way to do this would be in the model (using EF in .NET 3.5 SP1) or if I should do it with a custom service operation. If I do it in the EF model (not sure how I'd do this), then the OOTB WCF Data Services stuff would make it easy to say [..]/Courses(1)/Modules ... otherwise I'd have to create a custom operation to do this. Is it possible to do this in the EF model and if so, how does that work?

    Read the article

  • MySQL: INNER JOIN

    - by ABC
    I have a table which contains a UserId and his FriendIds, like:

    ----------------------------------------------
    UserFriendsId | UserId | FriendId
    ----------------------------------------------
                1 |      1 |        2
                2 |      1 |        3
                3 |      2 |        1
                4 |      2 |        3
    ----------------------------------------------

    This table data shows that User-1 & User-2 are friends & they also have friendships with User-3. Now I want to find the common friend(s) among UserId 1 & UserId 2. For e.g., in a sentence my query is: User 1 & User 2 have 1 common friend, FriendId 3. For this I used a SQL query with an INNER JOIN:

        SELECT t1.*
        FROM userfriends t1
        INNER JOIN userfriends t2 ON t1.FriendId = t2.FriendId
        WHERE t1.UserId = 2

    But it does not return the required result.
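    The self-join is the right idea, but each side of it needs to be pinned to one of the two users; with only t1.UserId = 2 the join returns every pair of rows that happen to share a FriendId. A sketch of the fix:

        SELECT t1.FriendId
        FROM userfriends t1
        INNER JOIN userfriends t2 ON t1.FriendId = t2.FriendId
        WHERE t1.UserId = 1
          AND t2.UserId = 2;
        -- returns 3 for the sample data: the one friend common to users 1 and 2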

    Read the article

  • Join with ADO.NET Linq to Entity in C#

    - by aladdin
    Hello, I'm trying to migrate a system to ADO.NET Entity Framework. I have 3 tables:

    A => (Id, Name, ...)
    B => (Id, Domain, ...)
    C => (IdA, IdB)

    VS.NET generates 2 entities, A and B, and both have a reference to the other table, but this reference is a collection. I need to make a join between the tables:

        from a in A
        join b in B on a.? equals b.?
        where condition
        select new { Name = a.Name, Domain = b.Domain };

    I can't do that by following the reference in the entity, but as the problem grows this can become a problem. Any help?

    Read the article

  • velocity: join optional fields with a separator/prefix

    - by SlowStrider
    What would be the most concise/readable way in a Velocity template to join multiple fields with a separator while leaving out empty or null Strings, without adding excess separators? As an example, we have a tooltip for appointments that goes like:

    Appointment ($number) [with $employee] [-] [$remarks] [-] [$roomToVisit]

    where I used brackets to indicate optional data. When filled in it would normally show as:

    Appointment (3) with John - ballroom - serve Java coffee

    When $remarks is empty but $roomToVisit is not, this becomes:

    Appointment (3) with John - ballroom

    When $remarks is "serve Java coffee" but $roomToVisit is empty we get:

    Appointment (3) with John - serve Java coffee

    When both are empty:

    Appointment (3) with John

    Bonus: also make the field prefix optional. When only $employee is empty we should get:

    Appointment (2) serve Java coffee - ballroom

    Ideally I would like the Velocity template to look very similar to the first code box. If this is not possible, how would you achieve this with a minimum of distracting code tags? Similar ideas (the first is much more verbose): "Join with intelligent separators" and "velocity: do something except in last loop iteration".

    Read the article

  • Need help optimizing MYSQL query with join

    - by makeee
    I'm doing a join between the "favorites" table (3 million rows) and the "items" table (600k rows). The query is taking anywhere from .3 seconds to 2 seconds, and I'm hoping I can optimize it some. Favorites.faver_profile_id and Items.id are indexed. Instead of using the faver_profile_id index I created a new index on (faver_profile_id, id), which eliminated the filesort needed when sorting by id. Unfortunately this index doesn't help at all and I'll probably remove it (yay, 3 more hours of downtime to drop the index..). Any ideas on how I can optimize this query? In case it helps: Favorite.removed and Item.removed are "0" 98% of the time. Favorite.collection_id is NULL about 80% of the time.

        SELECT `Item`.`id`, `Item`.`source_image`, `Item`.`cached_image`,
               `Item`.`source_title`, `Item`.`source_url`, `Item`.`width`,
               `Item`.`height`, `Item`.`fave_count`, `Item`.`created`
        FROM `favorites` AS `Favorite`
        LEFT JOIN `items` AS `Item`
               ON (`Item`.`removed` = 0 AND `Favorite`.`notice_id` = `Item`.`id`)
        WHERE ((`faver_profile_id` = 1) AND (`collection_id` IS NULL)
               AND (`Favorite`.`removed` = 0) AND (`Item`.`removed` = '0'))
        ORDER BY `Favorite`.`id` desc
        LIMIT 50;
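    One observation, offered as a sketch rather than a guaranteed fix: the WHERE clause tests Item.removed, so rows where the LEFT JOIN found no item are discarded anyway, and the join can be written as an INNER JOIN, which gives the optimizer more freedom. A composite index on the favorites filter columns plus id (a hypothetical name below) would also cover both the filter and the sort:

        -- equivalent query: the Item.removed filter already eliminates
        -- NULL-extended rows, so INNER JOIN preserves the results
        SELECT `Item`.`id`, `Item`.`source_image`, `Item`.`cached_image`,
               `Item`.`source_title`, `Item`.`source_url`, `Item`.`width`,
               `Item`.`height`, `Item`.`fave_count`, `Item`.`created`
        FROM `favorites` AS `Favorite`
        INNER JOIN `items` AS `Item`
                ON `Item`.`id` = `Favorite`.`notice_id`
               AND `Item`.`removed` = 0
        WHERE `Favorite`.`faver_profile_id` = 1
          AND `Favorite`.`collection_id` IS NULL
          AND `Favorite`.`removed` = 0
        ORDER BY `Favorite`.`id` DESC
        LIMIT 50;

        -- hypothetical supporting index: filter columns first, id last for the sort
        ALTER TABLE `favorites`
          ADD INDEX `idx_profile_coll_removed_id`
              (`faver_profile_id`, `collection_id`, `removed`, `id`);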

    Read the article

  • mysql left join

    - by user1019538
    I have two tables, one is index and the other is price, structured as under:

    table: index, columns: trandate, indexcode
    table: price, columns: trandate, symbol, price

    I want to know the missing prices. I issue the query:

        select i.trandate, i.indexcode, p.trandate, p.price
        from index i
        left join price p on i.trandate = p.trandate
        where p.symbol = 'ABC' and indexcode = 'New'

    The above query does not show the null dates even though various prices are missing in the price table. The only reason I can think of is that the index table does not have the symbol field, that's why... but as per theory, if you want to show all the rows of one table and only the matching values of another table, then use a left or right join query. Please, can anybody help?
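    The standard fix: a condition on the right-hand table in the WHERE clause (p.symbol = 'ABC') throws away exactly the NULL-extended rows the LEFT JOIN produced, so it has to move into the ON clause. A sketch (note that index is a reserved word in MySQL, so the table name needs backquotes):

        SELECT i.trandate, i.indexcode, p.trandate, p.price
        FROM `index` i
        LEFT JOIN price p
               ON i.trandate = p.trandate
              AND p.symbol = 'ABC'     -- filter the joined side here, not in WHERE
        WHERE i.indexcode = 'New';     -- filters on the left side stay in WHERE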

    Read the article

  • For improving the join of two wave files

    - by kaki
    I want to get the values of the last 30 frames of the first wav file and the first 30 frames of the second wav file in integer format, stored in a list or array. I have written the code for joining, but during this manipulation I am getting the data in byte format, and when I tried to convert it to integers I couldn't. As said before, I want the frame detail of the first 30 and last 30 frames in integer format; by performing other operations on them the join can be more successful. Looking for your help in this, please... Thanking you.

        import wave

        # the two files to join, in the order they should appear in the output
        m = ['C:/begpython/S0001_0002.wav', 'C:/begpython/S0001_0001.wav']
        infiles = [m[1], m[0]]
        outfile = "C:/begpython/S0001_00367.wav"

        data = []
        for infile in infiles:
            w = wave.open(infile, 'rb')
            # getnframes is a method: without the parentheses the original
            # code stored the bound method itself instead of the frame count
            nframes = w.getnframes()
            data.append([w.getparams(), w.readframes(nframes)])
            # readframes returns a byte string; in Python 2 the integer
            # values are recovered with ord(), e.g. [ord(c) for c in frames]
            w.close()

        output = wave.open(outfile, 'wb')
        output.setparams(data[0][0])
        # write one chunk per input file; the original indexed data[2][1],
        # which raises IndexError with fewer than three inputs
        for params, frames in data:
            output.writeframes(frames)
        output.close()

    Read the article

  • group by, order by, with join

    - by Scarface
    Hey guys, quick question. I have this query, and I am trying to get the latest comment for each topic and then sort those results in descending order (therefore one comment per topic). I have what I think should work, but my join always messes my results up. Somehow, it seems to have sorted the end results properly, but has not taken the latest comment from each topic; instead it seems to have just taken a random comment. If anyone has any ideas, I would really appreciate any advice.

        SELECT *
        FROM comments
        JOIN topic ON topic.topic_id = comments.topic_id
        WHERE topic.creator = 'admin'
        GROUP BY comments.topic_id
        ORDER BY comments.time DESC

    Table comments is structured like: id, time, user, message, topic_id
    Table topic is structured like: topic_id, subject_id, topic_title, creator, timestamp, description
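    This is the classic greatest-n-per-group problem: MySQL's loose GROUP BY picks an arbitrary row from each group before ORDER BY runs. One standard fix, sketched against the columns above, is to join to a subquery that finds each topic's latest comment time:

        SELECT c.*, t.*
        FROM comments c
        JOIN (SELECT topic_id, MAX(time) AS latest_time
              FROM comments
              GROUP BY topic_id) latest
          ON latest.topic_id = c.topic_id
         AND latest.latest_time = c.time
        JOIN topic t ON t.topic_id = c.topic_id
        WHERE t.creator = 'admin'
        ORDER BY c.time DESC;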

    Read the article

  • Compare two object lists with LINQ on specific property

    - by Niklas
    I have these two lists (where the Value in a SelectListItem is a bookingid): List<SelectListItem> selectedbookings; List<Booking> availableBookings; I need to find the ids from selectedBookings that are not in availableBookings. The LINQ join below will only get me the bookingids that are in availableBookings, and I'm not sure how to do it the other way around. != won't work since I'm comparing strings. results = ( from s in selectedbookings join a in availableBookings on s.bookingID.ToString() equals a.Value select s);

    Read the article

  • Performing LINQ Self Join

    - by senfo
    I'm not getting the results I want for a query I'm writing in LINQ using the following:

        var config = (from ic in repository.Fetch()
                      join oc in repository.Fetch() on ic.Slot equals oc.Slot
                      where ic.Description == "Input" && oc.Description == "Output"
                      select new Config
                      {
                          InputOid = ic.Oid,
                          OutputOid = oc.Oid
                      }).Distinct();

    The following SQL returns 53 rows (which is correct), but the above LINQ returns 96 rows:

        SELECT DISTINCT ic.Oid AS InputOid, oc.Oid AS OutputOid
        FROM dbo.Config AS ic
        INNER JOIN dbo.Config AS oc ON ic.Slot = oc.Slot
        WHERE ic.Description = 'Input'
          AND oc.Description = 'Output'

    How would I replicate the above SQL in a LINQ query? Update: I don't think it matters, but I'm working with LINQ to Entities 4.0.

    Read the article

  • Hibernate - join unrelated objects

    - by CuriousMind
    I have a requirement wherein I have to join two unrelated objects using Hibernate HQL. Here are the sample POJO classes:

        class Product {
            int product_id;
            String name;
            String description;
        }

        class Item {
            int item_id;
            String name;
            String description;
            int quantity;
            int product_id; // note that there is no composed Product object
        }

    Now I want to perform a query like:

        select * from Product p left outer join Item i on p.product_id = i.item_id

    I want a multidimensional array as the output of this query, so that I can have separate instances of Product and Item instead of one composed in another. Is there any way to do this in Hibernate?
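    A sketch of the options, hedged because they depend on the Hibernate version: classic HQL can only join unrelated entities theta-style, which gives an inner join and returns each row as an Object[] holding the two separate instances, which is essentially the multidimensional result asked for. A left outer join between unmapped entities needs either a mapped association or a native SQL query (much later Hibernate versions, 5.1+, also allow an explicit join ... on between unrelated entities):

        -- theta-style HQL: inner join only; each result row is an Object[]
        -- with row[0] a Product instance and row[1] an Item instance
        select p, i from Product p, Item i where p.product_id = i.product_id

        -- for the LEFT OUTER JOIN, fall back to a native SQL query mapped to
        -- entities via session.createSQLQuery(...)
        --        .addEntity("p", Product.class).addEntity("i", Item.class)
        -- (table names here are assumed to mirror the class names)
        SELECT {p.*}, {i.*}
        FROM product p LEFT OUTER JOIN item i ON p.product_id = i.product_id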

    Read the article

  • SUM of column with Left Outer Join

    - by Matt
    I am trying to get the count of all records that have at least one person who is authorized on the record. Basically, a record can have more than one person associated with it. I want to return the count of total records, a count of total Authorized records where at least 1 person is authorized, and a count of total NotAuthorized records where no person associated with the record is authorized. It doesn't matter if one person is authorized per record or if 3 people are authorized for that record; that should add 1 to the Authorized counter. The current query is incrementing Authorized and NotAuthorized for each person added per record, rather than once per record. If no people are assigned to the record, that should also count towards NotAuthorized.

        SELECT Count(DISTINCT Record.RecordID) AS TotalRecords,
               SUM(CASE WHEN People.PersonLevel = 1 THEN 1 ELSE 0 END) AS Authorized,
               SUM(CASE WHEN People.PersonLevel <> 1 THEN 1 ELSE 0 END) AS NotAuthorized
        FROM Record
        LEFT OUTER JOIN RecordPeople ON Record.RecordID = RecordPeople.RecordID
        LEFT OUTER JOIN People ON RecordPeople.PersonID = People.PersonID
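    A sketch of one way to count per record rather than per person: count distinct RecordIDs inside a conditional aggregate, so a record with three authorized people still counts once, and records with no people at all fall out as not authorized (a CASE with no ELSE yields NULL, which COUNT ignores):

        SELECT COUNT(DISTINCT Record.RecordID) AS TotalRecords,
               COUNT(DISTINCT CASE WHEN People.PersonLevel = 1
                                   THEN Record.RecordID END) AS Authorized,
               COUNT(DISTINCT Record.RecordID)
                 - COUNT(DISTINCT CASE WHEN People.PersonLevel = 1
                                       THEN Record.RecordID END) AS NotAuthorized
        FROM Record
        LEFT OUTER JOIN RecordPeople ON Record.RecordID = RecordPeople.RecordID
        LEFT OUTER JOIN People ON RecordPeople.PersonID = People.PersonID;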

    Read the article
