Search Results

Search found 10101 results on 405 pages for 'temporary tables'.


  • SQL Server: SELECT rows with MAX(Column A), MAX(Column B), DISTINCT by related columns

    - by z531
    Scenario: I have four tables (A, B, C and D), each with the columns MasterID, Added Date, Added By, Updated Date, Updated By.

    Table A:
        1, 1/1/2010, 'Fred', null, null
        2, 1/2/2010, 'Barney', 'Mr. Slate', 1/7/2010
        3, 1/3/2010, 'Noname', null, null

    Table B:
        1, 1/3/2010, 'Wilma', 'The Great Kazoo', 1/5/2010
        2, 1/4/2010, 'Betty', 'Dino', 1/4/2010

    Table C:
        1, 1/5/2010, 'Pebbles', null, null
        2, 1/6/2010, 'BamBam', null, null

    Table D:
        1, 1/2/2010, 'Noname', null, null
        3, 1/4/2010, 'Wilma', null, null

    I need to return the max Added Date and the corresponding user, and the max Updated Date and the corresponding user, for each distinct MasterID when tables A, B, C and D are UNION'ed, i.e.:

        1, 1/5/2010, 'Pebbles', 'The Great Kazoo', 1/5/2010
        2, 1/6/2010, 'BamBam', 'Mr. Slate', 1/7/2010
        3, 1/4/2010, 'Wilma', null, null

    I know how to do this with one date/user per row, but with two is beyond me. The DBMS is SQL Server 2005; a T-SQL solution is preferred. Thanks in advance, Dave
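    One way to approach this (a sketch, not from the original thread) is to UNION ALL the four tables once, rank the rows per MasterID twice - once by Added Date and once by Updated Date - and then join the two rankings. The column names AddedDate/AddedBy/UpdatedDate/UpdatedBy and table names TableA..TableD are assumed stand-ins for the actual objects:

        WITH AllRows AS (
            SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM TableA
            UNION ALL
            SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM TableB
            UNION ALL
            SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM TableC
            UNION ALL
            SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM TableD
        ),
        Added AS (   -- latest addition per MasterID
            SELECT MasterID, AddedDate, AddedBy,
                   ROW_NUMBER() OVER (PARTITION BY MasterID ORDER BY AddedDate DESC) AS rn
            FROM AllRows
        ),
        Updated AS ( -- latest update per MasterID (rows never updated are skipped)
            SELECT MasterID, UpdatedDate, UpdatedBy,
                   ROW_NUMBER() OVER (PARTITION BY MasterID ORDER BY UpdatedDate DESC) AS rn
            FROM AllRows
            WHERE UpdatedDate IS NOT NULL
        )
        SELECT a.MasterID, a.AddedDate, a.AddedBy, u.UpdatedBy, u.UpdatedDate
        FROM Added a
        LEFT JOIN Updated u
               ON u.MasterID = a.MasterID AND u.rn = 1
        WHERE a.rn = 1;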


  • Manipulate COBOL data structure

    - by Morewinder
    Hello. I would like some information on manipulating tables. I have run into a few problems with a piece of COBOL code like the one below:

        01 TABLE-1.
           05 STRUCT-1 OCCURS 25 TIMES.
              10 VALUE-1 PIC AAA.
              10 VALUE-2 PIC 9(5)V999.
           05 NUMBER-OF-OCCURS PIC 99.

    How do you update values (update a VALUE-2 when you know the corresponding VALUE-1)? How do you look up a value and add a new one? Thanks a lot!
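    For reference, a minimal sketch of one way to do the lookup, the update and the append with a subscripted PERFORM loop. The index item WS-IDX, the search key WS-KEY and WS-NEW-VALUE are assumed additions that are not in the original snippet:

               WORKING-STORAGE SECTION.
               01  WS-IDX        PIC 99.
               01  WS-KEY        PIC AAA.
               01  WS-NEW-VALUE  PIC 9(5)V999.

               PROCEDURE DIVISION.
              *    Update VALUE-2 of every entry whose VALUE-1 matches WS-KEY.
                   PERFORM VARYING WS-IDX FROM 1 BY 1
                           UNTIL WS-IDX > NUMBER-OF-OCCURS
                       IF VALUE-1 (WS-IDX) = WS-KEY
                           MOVE WS-NEW-VALUE TO VALUE-2 (WS-IDX)
                       END-IF
                   END-PERFORM

              *    Append a new entry, respecting the OCCURS 25 TIMES bound.
                   IF NUMBER-OF-OCCURS < 25
                       ADD 1 TO NUMBER-OF-OCCURS
                       MOVE WS-KEY       TO VALUE-1 (NUMBER-OF-OCCURS)
                       MOVE WS-NEW-VALUE TO VALUE-2 (NUMBER-OF-OCCURS)
                   END-IF
                   .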


  • Linq join with COUNT

    - by shivesh
    I have 2 tables, Forums and Posts. I want to retrieve all Forums fields plus one extra field: the count of all posts that belong to each forum. I have this for now:

        var v = (from forum in Forums
                 join post in Posts on forum.ForumID equals post.Forum.ForumID
                 select new
                 {
                     forum,              // need to retrieve all fields/columns from forum
                     PostCount = ...     // count all posts that belong to this forum, but only if post.Showit == 1
                 }).Distinct()

    The join must be a left join: if there are no posts that belong to some forum, the forum's fields should still be retrieved, but PostCount should be 0. The result set must be distinct (the join gives me the full cross product... or whatever it's called).
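    A sketch of one way to get the left-join-with-count behaviour (not from the original post): drop the explicit join and use a correlated Count, which naturally yields 0 for forums with no posts and needs no Distinct. The Showit/ForumID property names are taken from the question:

        // Each forum together with the number of its visible posts; forums with no posts get 0.
        var forumsWithCounts =
            from forum in Forums
            select new
            {
                Forum = forum,
                PostCount = Posts.Count(p => p.Forum.ForumID == forum.ForumID
                                             && p.Showit == 1)
            };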


  • L2E many to many query

    - by 5YrsLaterDBA
    I have four tables:

        Users:           userId (pk), privilegeGroupId (fk)
        PrivilegeGroups: privilegeGroupId (pk), name
        rdPrivileges:    privilegeId (pk), code
        LinkPrivilege:   privilegeId (pk, fk), privilegeGroupId (pk, fk)

    L2E will not create a LinkPrivilege entity for me, so we only have the Users, PrivilegeGroups and rdPrivileges entities. PrivilegeGroups and rdPrivileges are in a many-to-many relationship. What I need to do is retrieve all code values from the rdPrivileges table based on a passed-in userId. How can I do it?

    EDIT - working code:

        var acc = from u in db.Users
                  from pg in db.PrivilegeGroups
                  from p in pg.rdPrivileges
                  where u.UserId == userId
                        && u.PrivilegeGroups.PrivilegeGroupId == pg.PrivilegeGroupId
                  select p.Code;


  • Entity Relationship diagram - Composition

    - by GigaPr
    Hi, I am implementing a small database (university project) and I am facing the following problem. I created a class diagram with a class Train {Id, Name, Details} and a class RollingStock, which is then generalized into Locomotive and FreightWagon. A train is composed of multiple RollingStock items at a certain time (on different days the rolling stock will compose a different train). I represented the train - rolling stock relationship as a filled diamond (UML composition), but I still have a many-to-many relationship between the two tables, so I guess I have to create an additional table, train_RollingStock, to resolve it. But how do I represent the composition? Can I still use the filled diamond? If yes, on which side? Thanks


  • WPF - Virtualizing an ItemsControl?

    - by Rachel
    I have an ItemsControl containing a list of data that I would like to virtualize; however, VirtualizingStackPanel.IsVirtualizing="True" does not seem to work with an ItemsControl. Is this really the case, or is there another way of doing this that I am not aware of? To test I have been using the following block of code:

        <ItemsControl VirtualizingStackPanel.IsVirtualizing="True"
                      ItemsSource="{Binding Path=AccountViews.Tables[0]}">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <TextBlock Initialized="TextBlock_Initialized"
                               Margin="5,50,5,50"
                               Text="{Binding Path=Name}" />
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>

    If I change the ItemsControl to a ListBox, I can see that the Initialized handler only runs a handful of times (the huge margins are just so I only have to go through a few records); however, as an ItemsControl every item gets initialized. I have tried setting the ItemsControl's ItemsPanelTemplate to a VirtualizingStackPanel, but that doesn't seem to help.
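    For reference, the approach that usually makes an ItemsControl virtualize (a sketch, not quoted from the thread) combines three things: a VirtualizingStackPanel as the items panel, a control template whose ScrollViewer owns the scrolling, and CanContentScroll="True" so scrolling is logical (per item) rather than pixel-based:

        <ItemsControl ItemsSource="{Binding Path=AccountViews.Tables[0]}"
                      VirtualizingStackPanel.IsVirtualizing="True"
                      ScrollViewer.CanContentScroll="True">
            <!-- 1. Virtualizing panel -->
            <ItemsControl.ItemsPanel>
                <ItemsPanelTemplate>
                    <VirtualizingStackPanel />
                </ItemsPanelTemplate>
            </ItemsControl.ItemsPanel>
            <!-- 2. Template with a ScrollViewer doing logical (item-based) scrolling -->
            <ItemsControl.Template>
                <ControlTemplate TargetType="ItemsControl">
                    <ScrollViewer CanContentScroll="True">
                        <ItemsPresenter />
                    </ScrollViewer>
                </ControlTemplate>
            </ItemsControl.Template>
        </ItemsControl>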


  • A question about "empty" lists in Python

    - by bitrex
    I've started teaching myself Python, and as an exercise I've set myself the task of generating lookup tables I need for another project. I need to generate a list of 256 elements in which each element is the value of math.sin(2pi/256). The problem is I don't know how to generate a list initialized to "dummy" values that I can then step through with a for loop and assign the values of the sin function. Using list = [] seems to create an "empty" list with no elements, so I get a "list assignment index out of range" error in the loop. Is there a way to do this other than explicitly writing a list declaration containing 256 elements, all with "0" as a value? Thanks!
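    For reference, a minimal sketch of the two usual idioms - pre-sizing the list with placeholder zeros, or building it directly with a list comprehension. The 2*pi*i/256 argument is an assumption about what each table entry is meant to hold:

        import math

        N = 256

        # Option 1: pre-size the list with dummy zeros, then fill it in a for loop.
        table = [0.0] * N
        for i in range(N):
            table[i] = math.sin(2 * math.pi * i / N)

        # Option 2: build the list in one go with a comprehension (no dummy values needed).
        table = [math.sin(2 * math.pi * i / N) for i in range(N)]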


  • MySQLi PHP: Check if SQL INSERT query was fully successful using MySQLi

    - by Jonathan
    Hi, I have this big function that gathers a lot of different data and inserts it into multiple tables. Not all the data is always available, so not all the SQL INSERT queries succeed. I need to check which INSERT queries were fully successful and which weren't, so I can do something with that data (like inserting it into a log table or similar). Just to give you an example of how I think it can be done:

        $sql = 'INSERT INTO data_table (ID, column1, column2) VALUES (?, ?, ?)';
        if ($stmt->prepare($sql)) {
            $stmt->bind_param('iss', $varID, $var1, $var2);
            if ($stmt->execute()) {
                $success = TRUE; // or something like that
            }
        }

    I'm not completely sure this is the best way, or whether it really shows that the data was inserted into the table... Any suggestions?
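    A sketch of one way to make the check explicit (it assumes the same $stmt prepared-statement object as above plus a $mysqli connection object): after execute(), mysqli exposes affected_rows and error, which tell you whether the row actually went in and, if not, why:

        <?php
        // Sketch: record the outcome of each INSERT so failures can be logged afterwards.
        $results = array();

        $sql = 'INSERT INTO data_table (ID, column1, column2) VALUES (?, ?, ?)';
        if ($stmt->prepare($sql)) {
            $stmt->bind_param('iss', $varID, $var1, $var2);
            if ($stmt->execute() && $stmt->affected_rows === 1) {
                $results['data_table'] = true;       // the row was inserted
            } else {
                $results['data_table'] = false;
                $lastError = $stmt->error;           // e.g. write this to a log table
            }
        } else {
            $results['data_table'] = false;
            $lastError = $mysqli->error;             // prepare() itself failed
        }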


  • general database modeling and django specific modeling

    - by Shreko
    I'm wondering what is the best way to model something like the following. Let's say my company sells metal bars (parameters/fields are: length, profile_type, quantity, etc.) of different profiles, where a profile may be a pipe (pipe_diameter, wall_thickness) or a hollow_rectangle (base, height, wall_thickness), or maybe some other profile with different parameters. Let's say the maximum number of profiles would be 12, each profile having between 2 and 5 parameters. Should everything be in a single table, like table_bars (id, length, quantity, profile_type, pipe_diameter, wall_thickness, base, height, etc.), where profile_type would be pipe, rectangle, etc.? Or should every shape have its own table with its own parameters, keeping only id, length, quantity, profile_type and profile_id in table_bars? And are there any Django-specific issues if multiple tables are the best answer? Thanks
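    One common way to express the "each profile has its own table" option in Django (a sketch only; the model and field names are invented for illustration) is multi-table inheritance: a base Profile model, one subclass per shape, and a ForeignKey from Bar to the base:

        from django.db import models

        class Profile(models.Model):
            """Base table; each concrete shape gets its own table joined by a one-to-one key."""
            name = models.CharField(max_length=50)

        class PipeProfile(Profile):
            pipe_diameter = models.DecimalField(max_digits=8, decimal_places=2)
            wall_thickness = models.DecimalField(max_digits=8, decimal_places=2)

        class HollowRectangleProfile(Profile):
            base = models.DecimalField(max_digits=8, decimal_places=2)
            height = models.DecimalField(max_digits=8, decimal_places=2)
            wall_thickness = models.DecimalField(max_digits=8, decimal_places=2)

        class Bar(models.Model):
            profile = models.ForeignKey(Profile, on_delete=models.PROTECT)
            length = models.DecimalField(max_digits=10, decimal_places=2)
            quantity = models.PositiveIntegerField()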


  • .DBML file and LINQ to SQL

    - by Rishabh Ohri
    In my DBML file I have mapped some tables and stored procedures, and the stored procedures' return type is ISingleResult<T>, where T is some mapped table. But I want to put the data into my own hand-written entities rather than the LINQ to SQL-generated entities. The entities created by me mirror the mapped table entities, and they are used when we send data across a web service. So, how can I proceed with creating a wrapper around the DBML file so that I always get data back in my own entities?
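    A sketch of the usual wrapper pattern (the CustomerDto, Customer, MyDataContext and GetCustomerById names are invented for illustration): keep the generated entities inside the data layer and project each ISingleResult<T> into your own type before it leaves the wrapper:

        using System.Linq;

        // Your own serializable entity, independent of the LINQ to SQL designer classes.
        public class CustomerDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class CustomerRepository
        {
            private readonly MyDataContext _db = new MyDataContext();

            // Wraps a mapped stored procedure that returns ISingleResult<Customer>.
            public CustomerDto GetCustomerById(int id)
            {
                Customer row = _db.GetCustomerById(id).SingleOrDefault();
                if (row == null) return null;

                // Copy the generated entity into your own type before returning it.
                return new CustomerDto { Id = row.CustomerId, Name = row.Name };
            }
        }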


  • Microsoft Sync Framework - How to reprovision a table (or entire scope) after schema changes?

    - by Rabbi
    B"H I have already set up syncing with Microsoft Sync Framework, and now I need to add fields to a table. How do I re-provision the databases? The setup is exceedingly simple:

    - Two SQL Express 2008 servers
    - The scope includes the entire database
    - Using Microsoft Sync Framework 2.0
    - Synchronizing by direct access, using the standard new SqlSyncProvider

    Do I make the structural changes at both ends, or do I only change one server and let Sync Framework somehow propagate the change? Do I need to delete the _tracking tables and/or the stored procedures? How about the triggers? Has anyone been using the Sync Framework? Please help.


  • MS SQL Bridge Table Constraints

    - by greg
    Greetings - I have a table of Articles and a table of Categories. An Article can be used in many Categories, so I have created a table of ArticleCategories like this:

        BridgeID   int (PK)
        ArticleID  int
        CategoryID int

    Now I want to create constraints/relationships such that the ArticleID-CategoryID combinations are unique AND the IDs must exist in the respective primary key tables (Articles and Categories). I have tried using both the VS2008 Server Explorer and Enterprise Manager (SQL 2005) to create the FK relationships, but the result always prevents duplicate ArticleIDs in the bridge table, even when the CategoryID is different. I am pretty sure I am doing something obviously wrong, but I appear to have a mental block at this point. Can anyone tell me please how this should be done? Greatly appreciated!
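    For reference, a T-SQL sketch of one way to declare the bridge table: foreign keys to both parent tables plus a unique constraint on the ArticleID/CategoryID pair (the constraint names and the parent key column names are assumptions):

        CREATE TABLE dbo.ArticleCategories
        (
            BridgeID   int IDENTITY(1,1) NOT NULL
                CONSTRAINT PK_ArticleCategories PRIMARY KEY,
            ArticleID  int NOT NULL
                CONSTRAINT FK_ArticleCategories_Articles
                    REFERENCES dbo.Articles (ArticleID),
            CategoryID int NOT NULL
                CONSTRAINT FK_ArticleCategories_Categories
                    REFERENCES dbo.Categories (CategoryID),
            -- The pair must be unique; individual ArticleIDs may repeat across categories.
            CONSTRAINT UQ_ArticleCategories_Article_Category UNIQUE (ArticleID, CategoryID)
        );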


  • Using nginx's proxy_redirect when the response location's domain varies

    - by Chalky
    I am making a web app using SoundCloud's API. Requesting an MP3 to stream involves two requests. I'll give an example. Firstly:

        http://api.soundcloud.com/tracks/59815100/stream

    This returns a 302 with a temporary link to the actual MP3 (which varies each time), for example:

        http://ec-media.soundcloud.com/xYZk0lr2TeQf.128.mp3?ff61182e3c2ecefa438cd02102d0e385713f0c1faf3b0339595667fd0907ea1074840971e6330e82d1d6e15dd660317b237a59b15dd687c7c4215ca64124f80381e8bb3cb5&AWSAccessKeyId=AKIAJ4IAZE5EOI7PA7VQ&Expires=1347621419&Signature=Usd%2BqsuO9wGyn5%2BrFjIQDSrZVRY%3D

    The issue I had was that I am attempting to load the MP3 via JavaScript's XMLHttpRequest, and for security reasons the browser can't follow the 302, as ec-media.soundcloud.com does not set a header saying it is safe for the browser to access via XMLHttpRequest. So instead of using the SoundCloud URL, I set up two locations in nginx, so the browser only interacts with the server my app is hosted on and no security errors come up:

        location /soundcloud/tracks/ {
            # rewrite URL to match api.soundcloud.com's URL structure
            rewrite \/soundcloud\/tracks\/(\d*) /tracks/$1/stream break;
            proxy_set_header Host api.soundcloud.com;
            proxy_pass http://api.soundcloud.com;
            # the 302 will redirect to /soundcloud/media instead of the original domain
            proxy_redirect http://ec-media.soundcloud.com /soundcloud/media;
        }

        location /soundcloud/media/ {
            rewrite \/soundcloud\/media\/(.*) /$1 break;
            proxy_set_header Host ec-media.soundcloud.com;
            proxy_pass http://ec-media.soundcloud.com;
        }

    So myserver/soundcloud/tracks/59815100 returns a 302 to myserver/soundcloud/media/xYZk0lr2TeQf.128.mp3...etc, which then forwards the MP3 on. This works! However, I have hit a snag. Sometimes the 302 location is not ec-media.soundcloud.com, it's ak-media.soundcloud.com. There are possibly even more servers out there, and presumably more could appear at any time. Is there any way I can handle an arbitrary 302 location without having to manually enter each possible variation? Or is it possible for nginx to handle the redirect itself and return the response of the second step, so that myserver/soundcloud/tracks/59815100 follows the 302 behind the scenes and returns the MP3? The browser automatically follows the redirect, so I can't do anything with the initial response on the client side. I am new to nginx and in a bit over my head, so apologies if I've missed something obvious or it's beyond the scope of nginx. Thanks a lot for reading.
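    Not from the original thread, but one hedged way to handle an arbitrary redirect host (assuming nginx 1.1.11 or later for the regex form of proxy_redirect, and a reachable DNS resolver, which proxy_pass with a variable requires) is to fold the upstream hostname into the rewritten path and read it back out in a single media location:

        location /soundcloud/tracks/ {
            rewrite ^/soundcloud/tracks/(\d*)$ /tracks/$1/stream break;
            proxy_set_header Host api.soundcloud.com;
            proxy_pass http://api.soundcloud.com;
            # Rewrite the 302 from ANY *.soundcloud.com media host to /soundcloud/media/<host>/<rest>
            proxy_redirect ~^https?://([^/]+\.soundcloud\.com)/(.*)$ /soundcloud/media/$1/$2;
        }

        # Pull the host back out of the path and proxy to it dynamically.
        location ~ ^/soundcloud/media/(?<sc_host>[^/]+)/(?<sc_uri>.*)$ {
            resolver 8.8.8.8;          # required because proxy_pass below uses a variable
            proxy_set_header Host $sc_host;
            proxy_pass http://$sc_host/$sc_uri$is_args$args;
        }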


  • multiple pivot table consolidation to another pivot table

    - by phill
    I have two SQL Server views being drawn into two separate worksheets as pivot tables in an Excel 2007 file. The results on worksheet1 include example data:

        company_name, tickets, month, year
        company1, 3, 1, 2009
        company2, 4, 1, 2009
        company3, 5, 1, 2009
        company3, 2, 2, 2009

    Results from worksheet2 include example data:

        company_name, month, year, fee
        company1, 1, 2009, 2.00
        company2, 1, 2009, 3.00
        company3, 1, 2009, 4.00
        company3, 2, 2009, 2.00

    I would like the results of one worksheet to be reflected in the pivot table of the other, matched to their corresponding companies. For example, in this case:

        company_name, tickets, month, year, fee
        company1, 3, 1, 2009, 2
        company2, 4, 1, 2009, 3
        company3, 5, 1, 2009, 4
        company3, 2, 2, 2009, 2

    Is there a way to do this without VBA? Thanks in advance
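    Since both pivot tables are fed by SQL Server views, one hedged option (the view names vw_Tickets and vw_Fees are assumptions) is to join the two views in a third view and point a single pivot table at that, so nothing has to be consolidated inside Excel:

        -- Sketch: combine tickets and fees server-side, then build one pivot table on this view.
        CREATE VIEW dbo.vw_TicketsWithFees
        AS
        SELECT t.company_name,
               t.tickets,
               t.[month],
               t.[year],
               f.fee
        FROM dbo.vw_Tickets AS t
        LEFT JOIN dbo.vw_Fees AS f
               ON  f.company_name = t.company_name
               AND f.[month] = t.[month]
               AND f.[year]  = t.[year];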


  • NHibernate: How to save list in the database?

    - by Anry
    I have the tables Order, Transaction and Payment. The Order class has the properties:

        public virtual Guid Id { get; set; }
        public virtual DateTime Created { get; set; }
        ...

    I added the properties:

        public virtual IList<Transaction> Transactions { get; set; }
        public virtual IList<Payment> Payments { get; set; }

    These properties hold records from the [Transaction] and [Payment] tables. How do I save these lists to the database?
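    A sketch of one way to persist the collections with hbm.xml mappings (the OrderId foreign-key column name is an assumption): map each list as a cascading one-to-many bag inside the Order mapping, so saving the Order also saves everything in both lists:

        <!-- Inside Order.hbm.xml, a sketch: -->
        <bag name="Transactions" cascade="all-delete-orphan">
          <key column="OrderId" />
          <one-to-many class="Transaction" />
        </bag>

        <bag name="Payments" cascade="all-delete-orphan">
          <key column="OrderId" />
          <one-to-many class="Payment" />
        </bag>

    With the cascades in place, session.Save(order) followed by committing the transaction persists the order and every item in both lists in one unit of work.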


  • OCZ Vertex 2 not recognized by Ubuntu installer

    - by Zsub
    As I boot into the Ubuntu 10.10 (or 11.04, doesn't matter) live environment or installer, it just refuses to recognise my Vertex 2. It reports the disk as ATA and not supporting SMART, shows no serial number, and doesn't list the size correctly. All fdisk tells me is "Unable to read /dev/sda" (it's the only storage in the PC). I'm now running a temporary install of Windows 7 off of it, which worked like a charm, so where am I going wrong with Ubuntu...

    Specs: Asus M4N68T-M LE V2 (BIOS 0702, most recent), OCZ Vertex 2 SSD 60 GB, AMD Athlon II X4 640, Patriot PSD34G13332 4GB DDR3 RAM (two banks)

    EDIT: I installed a second drive, installed Ubuntu on that and booted, and it recognised the SSD just fine. I'm now trying to apt-get upgrade the live environment. I wonder if there is any way to sort of install Ubuntu from Ubuntu (I boot into the working install on the other drive, install it on the SSD and then boot from the SSD).

    EDIT2: Ok, so that doesn't work. The install detects the SSD; however, it cannot format it.

    EDIT3: After a fresh boot I can read out SMART data and even perform a read benchmark, but if I try to format it, or do a write benchmark, it'll crap out, and after that it says SMART is not supported. So basically it seems I can't write to the disk, as it will stop working when I do. I will try to run repeated read benchmarks to see if that has any effect.

    EDIT4: I'm running several read benchmarks on the drive right now; they give results that are to be expected from an SSD. If the read benchmarks don't fail, I can use fdisk on the disk, but it is now stuck trying to re-read the partition table after issuing the 'w' command.

    EDIT5: Parted Magic did recognize the drive, and with hdparm -I it could even tell me the drive was in a frozen state. I power-cycled it (just pull out the plug from the SSD and plug it back in) and it wasn't frozen anymore. After that I could upgrade the firmware on the drive (still using Parted Magic) and format it to ext4. After I rebooted into the Ubuntu installer, it wouldn't get recognized, and hdparm didn't want to talk to it, saying "HDIO_DRIVE_CMD(identify) failed: Invalid exchange".

    EDIT6: For some reason, if I enable one of the RAID controllers (the one the SSD is connected to, obviously), Ubuntu will let me format it, mount it and write to it. The installer also recognizes it. However, if the RAID controller is enabled but no array is defined, the motherboard can't boot from it :(


  • HTML table headers always visible at top of window when viewing a large table

    - by Craig McQueen
    I would like to be able to "tweak" an HTML table's presentation to add a single feature: when scrolling down through the page so that the table is on the screen but the header rows are off-screen, I would like the headers to remain visible at the top of the viewing area. This would be conceptually like the "freeze panes" feature in Excel. However, an HTML page might contain several tables in it and I only would want it to happen for the table that is currently in-view, only while it is in-view. Note: I've seen one solution where the table data area is made scrollable while the headers do not scroll. That's not the solution I'm looking for.
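    Not part of the original question, but for reference: in current browsers this can now be done purely in CSS with position: sticky on the header cells (at the time the question was asked a JavaScript solution would have been required). A minimal sketch, using an assumed sticky-headers class so only the tables you opt in keep their headers pinned while they are in view:

        /* Keep a table's header row visible while its body scrolls past it. */
        table.sticky-headers thead th {
            position: sticky;
            top: 0;
            background: #fff;   /* opaque background so body rows don't show through */
            z-index: 1;
        }

    Each table marked with <table class="sticky-headers"> then pins its own header independently, which matches the "only while it is in view" requirement.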


  • Run a shell command with arguments from powershell script

    - by Mike Weerasinghe
    Hello, I need to extract and save some tables from a remote SQL database using bcp. I would like to write a PowerShell script to invoke bcp for each table and save the data. So far I have this script that creates the necessary args for bcp. However, I can not figure out how to pass the args to bcp. Every time I run the script it just shows the bcp help instead. This must be something really easy that I am not getting.

        # command I am trying to reproduce:
        # bcp database.dbo.tablename out c:\temp\users.txt -N -t, -U uname -P pwd -S <servername>

        $bcp_path = "C:\Program Files\Microsoft SQL Server\90\Tools\Binn\bcp.exe"

        $serverinfo = @{}
        $serverinfo.add('table','database.dbo.tablename')
        $serverinfo.add('uid','uname')
        $serverinfo.add('pwd','pwd')
        $serverinfo.add('server','servername')

        $out_path = "c:\Temp\db\"

        $args = "$($serverinfo['table']) out $($out_path)test.dat -N -t, -U $($serverinfo['uid']) -P $($serverinfo['pwd']) -S $($serverinfo['server'])"

        # this is the part I can't figure out
        & $bcp_path $args
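    For reference, a sketch of the usual fix (not from the original post): a single string reaches bcp as one argument, which is why it falls back to printing its help. Build an array instead, so each element becomes its own command-line argument, and avoid assigning to $args, which is an automatic variable in PowerShell:

        # Sketch: pass each bcp argument as a separate array element.
        $bcp_path = "C:\Program Files\Microsoft SQL Server\90\Tools\Binn\bcp.exe"
        $out_file = Join-Path "c:\Temp\db" "test.dat"

        $bcpArgs = @(
            $serverinfo['table'], 'out', $out_file,
            '-N', '-t,',
            '-U', $serverinfo['uid'],
            '-P', $serverinfo['pwd'],
            '-S', $serverinfo['server']
        )

        & $bcp_path $bcpArgs    # PowerShell expands the array into separate arguments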


  • Tutorials for .NET database app using SQLite

    - by ChrisC
    I have some MS Access experience, and had a class on console C++ apps, and now I am trying to develop my first program. It's a little C# db app. I have the db tables and columns planned and keyed into VS, but that's where I'm stuck. I need C#/VS tutorials that will guide me on configuring relationships, datatyping, etc., on the db so I can get it ready for testing of the schema. The only tutorials I've been able to find either talk about general db basics (i.e., not helping me with VS/C#) or about C# communication with an existing SQL db. Thank you. (In case it matters, I'm using the open source System.Data.SQLite (sqlite.phxsoftware.com) for the db. I chose it over SQL Server CE after seeing a comparison between the two. Also, I wanted a server-less version of SQL because this little app will be on other people's computers and I want to do as little support as possible.)


  • How to read XML from the internet using a Web Proxy?

    - by Mark Allison
    This is a follow-up to this question: How to load XML into a DataTable? I want to read an XML file on the internet into a DataTable. The XML file is here: http://rates.fxcm.com/RatesXML If I do:

        public DataTable GetCurrentFxPrices(string url)
        {
            WebProxy wp = new WebProxy("http://mywebproxy:8080", true);
            wp.Credentials = CredentialCache.DefaultCredentials;
            WebClient wc = new WebClient();
            wc.Proxy = wp;
            MemoryStream ms = new MemoryStream(wc.DownloadData(url));
            DataSet ds = new DataSet("fxPrices");
            ds.ReadXml(ms);
            DataTable dt = ds.Tables["Rate"];
            return dt;
        }

    it works fine. I'm struggling with how to use the default proxy set in Internet Explorer. I don't want to hard-code the proxy. I also want the code to work if no proxy is specified in Internet Explorer.
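    A sketch of one way to pick up the IE/system proxy automatically (not from the original post): WebRequest.GetSystemWebProxy() returns whatever proxy the system is configured with, and a WebClient with no Proxy set already uses WebRequest.DefaultWebProxy, so attaching default credentials is usually all that is needed. The method still works when no proxy is configured, because requests then go direct:

        using System.Data;
        using System.IO;
        using System.Net;

        public DataTable GetCurrentFxPrices(string url)
        {
            WebClient wc = new WebClient();

            // Use whatever proxy Internet Explorer / the system is configured with.
            IWebProxy systemProxy = WebRequest.GetSystemWebProxy();
            systemProxy.Credentials = CredentialCache.DefaultCredentials;
            wc.Proxy = systemProxy;    // if no proxy is configured, requests go direct

            using (MemoryStream ms = new MemoryStream(wc.DownloadData(url)))
            {
                DataSet ds = new DataSet("fxPrices");
                ds.ReadXml(ms);
                return ds.Tables["Rate"];
            }
        }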


  • getting a blank data report vb6

    - by arvind
    Hi, I am new to VB6. I am working on creating an invoice generation application, and I am using a data report to show the generated invoice. The step-by-step working of the process is: enter the data into the Invoice and ItemsInvoice tables; then get the max Id (using an Adodc) from the database to show the last generated invoice; then pass the max Id as a parameter to the data report, which shows the invoice for that invoice Id. It works fine when I generate the first invoice. But for the second invoice, without closing the application, I get a blank data report. For the data report I am using a DataEnvironment. I am guessing the data report is blank because there was no record for that Id, but the record actually is inserted in the database. Please help me.
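    One common cause (not confirmed in the thread, so treat this as an assumption) is that the DataEnvironment command's recordset is still open from the first invoice, so the second run shows stale or empty data. A sketch, with DataEnvironment1, cmdInvoice, its rscmdInvoice recordset, lngMaxInvoiceId and rptInvoice as assumed names:

        ' Close the DataEnvironment recordset before re-running the command,
        ' otherwise the second report can come up blank.
        If DataEnvironment1.rscmdInvoice.State = adStateOpen Then
            DataEnvironment1.rscmdInvoice.Close
        End If

        DataEnvironment1.cmdInvoice lngMaxInvoiceId   ' re-open with the new invoice Id
        rptInvoice.Show                               ' the DataReport bound to cmdInvoice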


  • Is it a Good Practice to Write HTML Using a StringBuilder in my ASP.NET Codebehind?

    - by d3020
    I'm interested to hear other developers' opinions on an approach that I typically take. I have a web application, ASP.NET 2.0, C#. What I usually do to write out drop-downs, tables, input controls, etc. is, in the code-behind, use a StringBuilder and write out something like sb.Append("..."). I don't find myself using too many .NET controls, as I typically write out the HTML in the code-behind. When I want to use jQuery or call JavaScript I just put that function call in my sb.Append tag, like sb.Append("td ... onblur='fnCallJS()' ..."). I've gotten pretty comfortable with this approach. For data access I use EntitySpaces. I'm just kind of curious if this sort of approach is horribly wrong, OK depending on the context, good, time to learn 3.0, etc. I'm interested in learning and was just looking for some input.


  • L2E delete exception

    - by 5YrsLaterDBA
    I have the following code to delete a user from the database:

        try
        {
            var user = from u in db.Users
                       where u.Username == username
                       select u;
            if (user.Count() > 0)
            {
                db.DeleteObject(user.First());
                db.SaveChanges();
            }
        }

    but I got an exception like this:

        at System.Data.Mapping.Update.Internal.UpdateTranslator.Update(IEntityStateManager stateManager, IEntityAdapter adapter)
        at System.Data.EntityClient.EntityAdapter.Update(IEntityStateManager entityCache)
        at System.Data.Objects.ObjectContext.SaveChanges(Boolean acceptChangesDuringSave)
        at System.Data.Objects.ObjectContext.SaveChanges()
        at MyCompany.SystemSoftware.DQMgr.User.DeleteUser(String username) in C:\workspace\SystemSoftware\SystemSoftware\src\dqm\User.cs:line 479

    The Users table is referenced by a few other tables. Is it probably caused by a foreign key constraint?


  • Flash was "not designed to function across LANs". Any workarounds?

    - by Triynko
    See: http://helpx.adobe.com/flash/kb/problems-using-flash-authoring-across.html

    Issue: When using Adobe Flash across a local area network (LAN) and networked drives/folders, you may experience any of the following problems:

    - Flash crashes while performing a test movie on FLA files located on a networked drive or folder.
    - FLA files get corrupted when opening from or saving to networked drives or folders.
    - Flash does not reflect changes in a custom class after compiling.
    - Flash, Flash Video Encoder, or Adobe Media Encoder crashes or corrupts Flash Video (FLV) files while encoding source located on networked drives or folders.
    - Flash Video Encoder or Adobe Media Encoder crashes or corrupts FLV files where the output folder is a networked drive or folder.
    - Published Flash Player (SWF) files and projectors are unable to load content located on networked drives or folders.
    - More than one instance of a SWF or projector on client machines cannot play back FLV files located on a networked drive or folder.

    Reason: The Adobe Flash IDE, FLV Encoder, Adobe Media Encoder and Flash Player were not designed to function across LANs.

    Solution: Use of Flash files across local networks is not supported in any context. Published content should access data through a web server. All file sources should be opened and saved on the local system. Using Flash in such a scenario for project collaboration or content deployment is highly discouraged and may corrupt your source files. If you need to work in a collaborative environment or store source files on a server, use the project panel and/or a third-party version control system.

    SERIOUSLY? I cannot work on files located on a mapped network drive? How did they mess that one up? Does the Flash IDE really open the source file and wipe it clean to do the saving, rather than saving a copy first and then replacing it as an atomic file system operation? How hard would it be for them to make a dummy temporary file for saving and then issue a MOVE command? Any workarounds for this, like something that can make a network drive as stable as a local drive, like some kind of automatic local caching and synching?


  • Best way to produce automated exports in tab-delimited form from Teradata?

    - by Cade Roux
    I would like to be able to produce a file by running a command or batch which basically exports a table or view (SELECT * FROM tbl) in text form (default conversions to text for dates, numbers, etc. are fine): tab-delimited, with NULLs converted to empty fields (i.e. a NULL column would have nothing between the tab characters), with appropriate line termination (CRLF or Windows), and preferably also with column headings. This is the same export I can get in SQL Assistant 12.0 by choosing the export option, using a tab delimiter, setting my NULL value to '' and including column headings. I have been unable to find the right combination of options - the closest I have gotten is building a single column with CAST and '09'XC, but the rows still have a leading 2-byte length indicator in most settings I have tried. I would prefer not to have to build large strings for the various different tables.

