Search Results

Search found 9929 results on 398 pages for 'azure tables'.

Page 344/398

  • SQL Server Long Query

    - by thormj
    Ok... I don't understand why this query is taking so long (MSSQL Server 2005): [Typical output 3K rows, 5.5 minute execution time] SELECT dbo.Point.PointDriverID, dbo.Point.AssetID, dbo.Point.PointID, dbo.Point.PointTypeID, dbo.Point.PointName, dbo.Point.ForeignID, dbo.Pointtype.TrendInterval, coalesce(dbo.Point.trendpts,5) AS TrendPts, LastTimeStamp = PointDTTM, LastValue=PointValue, Timezone FROM dbo.Point LEFT JOIN dbo.PointType ON dbo.PointType.PointTypeID = dbo.Point.PointTypeID LEFT JOIN dbo.PointData ON dbo.Point.PointID = dbo.PointData.PointID AND PointDTTM = (SELECT Max(PointDTTM) FROM dbo.PointData WHERE PointData.PointID = Point.PointID) LEFT JOIN dbo.SiteAsset ON dbo.SiteAsset.AssetID = dbo.Point.AssetID LEFT JOIN dbo.Site ON dbo.Site.SiteID = dbo.SiteAsset.SiteID WHERE onlinetrended =1 and WantTrend=1 PointData is the biggun, but I thought its definition should allow me to pick up what I want easily enough: CREATE TABLE [dbo].[PointData]( [PointID] [int] NOT NULL, [PointDTTM] [datetime] NOT NULL, [PointValue] [real] NULL, [DataQuality] [tinyint] NULL, CONSTRAINT [PK_PointData_1] PRIMARY KEY CLUSTERED ( [PointID] ASC, [PointDTTM] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO CREATE NONCLUSTERED INDEX [IX_PointDataDesc] ON [dbo].[PointData] ( [PointID] ASC, [PointDTTM] DESC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO PointData is 550M rows, and Point (source of PointID) is only 28K rows. I tried making an Indexed View, but I can't figure out how to get the Last Timestamp/Value out of it in a compatible way (no Max, no subquery, no CTE). This runs twice an hour, and after it runs I put more data into those 3K PointID's that I selected. I thought about creating LastTime/LastValue tables directly into Point, but that seems like the wrong approach. Am I missing something, or should I rebuild something? (I'm also the DBA, but I know very little about A'ing a DB!)
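
    A rewrite that is often suggested for this kind of "latest row per parent" lookup is OUTER APPLY with TOP 1, which lets the optimizer do one backward seek on the existing IX_PointDataDesc index per point instead of re-evaluating the correlated MAX subquery. A minimal sketch, assuming the same tables and columns as above (Site/SiteAsset joins and a few output columns omitted for brevity; untested against this schema):

        SELECT p.PointDriverID, p.AssetID, p.PointID, p.PointTypeID, p.PointName,
               pt.TrendInterval, COALESCE(p.trendpts, 5) AS TrendPts,
               pd.PointDTTM AS LastTimeStamp, pd.PointValue AS LastValue
        FROM dbo.Point p
        LEFT JOIN dbo.PointType pt ON pt.PointTypeID = p.PointTypeID
        OUTER APPLY (
            SELECT TOP 1 d.PointDTTM, d.PointValue
            FROM dbo.PointData d
            WHERE d.PointID = p.PointID
            ORDER BY d.PointDTTM DESC   -- matches IX_PointDataDesc (PointID ASC, PointDTTM DESC)
        ) pd
        WHERE onlinetrended = 1 AND WantTrend = 1;

    OUTER APPLY is available from SQL Server 2005 onward, so it should be usable here.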

    Read the article

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size around 120,000 or 150,000 rows, I see a dramatic performance cost in doing any query in the table. For comparison, I have another table, which grows in size at about the same rate, but only contain scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the xml columns, i am only using regular SQL query statements. Some specifics: both tables contain a DateTime field called SampleTime. a statement like (it's in a stored procedure but I show you the actual statement) SELECT MAX(sampleTime) SampleTime FROM dbo.MyRecords WHERE PlacementID=@somenumber takes 0 seconds on the table without xml columns, and anything from 13 to 20 seconds on the table with XML columns. That depends on which drive I set my database on. At the moment it sits on a different spindle (not C:) and it takes 13 seconds. Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL 2008 EXPRESS and the full-blown SQL Server 2008, that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc.. I'd appreciate some insight into that, thanks! Sam
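
    If dbo.MyRecords has no index leading on PlacementID, that MAX forces a scan of the whole table, and with three XML columns each row is likely far wider than in the scalar-only table. A narrow covering index would answer the query without touching the XML data at all; a sketch, assuming the table and column names quoted above:

        -- The MAX(SampleTime) for one PlacementID becomes a single seek on this
        -- narrow index instead of a scan over the wide XML-bearing rows.
        CREATE NONCLUSTERED INDEX IX_MyRecords_PlacementID_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime DESC);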

    Read the article

  • Query broke down and left me stranded in the woods

    - by user1290323
    I am trying to execute a query that deletes all files from the images table that do not exist in the filters tables. I am skipping 3,500 of the latest files in the database as to sort of "Trim" the table back to 3,500 + "X" amount of records in the filters table. The filters table holds markers for the file, as well as the file id used in the images table. The code will run on a cron job. My Code: $sql = mysql_query("SELECT * FROM `images` ORDER BY `id` DESC") or die(mysql_error()); while($row = mysql_fetch_array($sql)){ $id = $row['id']; $file = $row['url']; $getId = mysql_query("SELECT `id` FROM `filter` WHERE `img_id` = '".$id."'") or die(mysql_error()); if(mysql_num_rows($getId) == 0){ $IdQue[] = $id; $FileQue[] = $file; } } for($i=3500; $i<$x; $i++){ mysql_query("DELETE FROM `images` WHERE id='".$IdQue[$i]."' LIMIT 1") or die("line 18".mysql_error()); unlink($FileQue[$i]) or die("file Not deleted"); } echo ($i-3500)." files deleted."; Output: 0 files deleted. Database contents: images table: 10,000 rows filters table: 63 rows Amount of rows in filters table that contain an images table id: 63 Execution time of php script: 4 seconds +/- 0.5 second Relevant DB structure TABLE: images id url etc... TABLE: filter id img_id (CONTAINS ID FROM images table) etc...
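
    The per-row lookup loop could be collapsed into one set-based statement; a rough sketch of the SQL side, assuming the id/img_id columns described above (the derived table keeps the 3,500 newest images, and the orphaned files would still have to be selected and unlink()ed separately in PHP):

        -- Delete images that have no filter row and are not among the 3,500 newest
        DELETE i
        FROM images AS i
        LEFT JOIN filter AS f ON f.img_id = i.id
        WHERE f.img_id IS NULL
          AND i.id NOT IN (
              SELECT keep.id
              FROM (SELECT id FROM images ORDER BY id DESC LIMIT 3500) AS keep
          );

    Wrapping the LIMIT query in a derived table is what lets MySQL reference the images table inside a DELETE against the same table.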

    Read the article

  • Oracle ODBC x64 - getting 0 when selecting a number(9) column

    - by MatsL
    I'm having a really weird problem with a third party web service that uses an ODBC connection to Oracle 10.2.0.3.0. I've written a .NET client that generates the same SQL as the web service so I can find out what's going on. The web service is hosted by IIS 6 that's in x64 mode so we use Oracle x64 client. The oracle client version is 10.2.0.1.0. I have a table that looks like this (I've removed some columns and names): SQL> describe tablename; Name Null? Type ----------------------------------------- -------- ---------------------------- KOD VARCHAR2(30) ORDNING NUMBER(5) AVGIFT NUMBER(9) I then in SQL*Plus issue the following statement: SELECT KOD as kod, AVGIFT as riskPoang FROM tablename Where upper(KODTYP) = 'OBJLIVSV_RISKVERKSAMTYP' ORDER BY ORDNING And I get the following result: KOD RISKPOANG ------------------------------ ---------- Hög risk 55 Mellan risk 35 Låg risk 15 Mycket låg risk 5 But when I execute the exact same SQL using the same DSN on the same machine I get this: Values Kod: Hög risk RiskPoäng: 0 Kod: Mellan risk RiskPoäng: 0 Kod: Låg risk RiskPoäng: 0 Kod: Mycket låg risk RiskPoäng: 0 If I first cast the number to varchar and then back again to number, like this: SELECT KOD as kod, to_number(to_char(AVGIFT, '99'), '9999999999') as riskPoang FROM tablename Where upper(KODTYP) = 'OBJLIVSV_RISKVERKSAMTYP' ORDER BY ORDNING I get the correct result: Values Kod: Hög risk RiskPoäng: 55 Kod: Mellan risk RiskPoäng: 35 Kod: Låg risk RiskPoäng: 15 Kod: Mycket låg risk RiskPoäng: 5 Has anyone else experiences this? It's incredibly annoying and I'm completely stuck and not sure what to do next. We have a third party web service that use these tables so I must get the original SQL-statement to work since I can't modify its code. And pointers are greatly appreciated! :-) Best regards, Mats

    Read the article

  • How to not use JavaScript within the elements' event attributes but still load via AJAX

    - by thecoshman
    I am currently loading HTML content via AJAX. I have code for things on different elements' onclick attributes (and other event attributes). It does work, but I am starting to find that the code is getting rather large and hard to read. I have also read that it is considered bad practice to have the event code 'inline' like this, and that I should really do it via element.onclick = foobar and have foobar defined somewhere else. I understand how with a static page it is fairly easy to do this: just have a script tag at the bottom of the page and have it executed once the page is loaded. This can then attach any and all events as you need them. But how can I get this sort of effect when loading content via AJAX? There is also the slight complication that the content loaded can vary depending on what is in the database; sometimes certain sections of HTML, such as tables of results, will not even be displayed and there will be something else entirely. I can post some samples of code if anybody needs them, but I have no idea what sort of things would help people with this one. I will point out that I am already using jQuery, so if it has some helpful little functions that would be rather sweet.

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    need help with logging all activities on a site as well as database changes. requirements: * should be in database * should be easily searchable by initiator (user name / session id), event (activity type) and event parameters i can think of a database design but either it involves a lot of tables (one per event) so i can log each of the parameters of an event in a separate field OR it involves one table with generic fields (7 int numeric and 7 text types) and log everything in one table with event type field determining what parameter got written where (and hoping that i don't need more than 7 fields of a certain type, or 8 or 9 or whatever number i choose)... example of entries (the usual things): [username] login failed @datetime [username] login successful @datetime [username] changed password @datetime, estimated security of password [low/ok/high/perfect] @datetime [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime [username] changed profile name from [old name] to [new name] @datetime [username] verified name with [credit card type] credit card @datetime datbase table [table name] purged of old entries @datetime via automated process etc... so anyone dealt with this before? any best practices / links you can share? i've seen it done with the generic solution mentioned above, but somehow that goes against what i learned from database design, but as you can see the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT i do LOVE the one event per table solution more than the generic one. any thoughts? edit: also, is there maybe an authoritative list of such (likely) events somewhere? thnx stack overflow says: the question you're asking appears subjective and is likely to be closed. my answer: probably is subjective, but it is directly related to my issue i have with designing a database / writing my code, so i'd welcome any help. also i tried narrowing down the ideas to 2 so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.
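
    There is a common middle ground between the two designs described here: one narrow event table plus a name/value parameter table. New event types then need no schema change, and each parameter stays individually searchable instead of living in one of 7+7 generic columns. A sketch with made-up table and column names (MySQL-flavoured DDL, purely illustrative):

        CREATE TABLE event_log (
            event_id    BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            event_type  VARCHAR(50)  NOT NULL,  -- 'login_failed', 'password_changed', 'search_click', ...
            user_name   VARCHAR(100) NULL,      -- initiator (or NULL for automated processes)
            session_id  VARCHAR(100) NULL,
            created_at  DATETIME     NOT NULL
        );

        CREATE TABLE event_log_param (
            event_id    BIGINT       NOT NULL,  -- FK to event_log.event_id
            param_name  VARCHAR(50)  NOT NULL,  -- 'old_name', 'search_string', 'result_id', ...
            param_value VARCHAR(255) NULL,
            PRIMARY KEY (event_id, param_name)
        );

    Searching by initiator or event type hits event_log directly, and searching by a parameter value is one join through event_log_param.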

    Read the article

  • Overflow in table cells

    - by Ezdaroth
    I need to create a chat layout that uses all the available space and scales nicely, but has few fixed sizes. Here's the structure: <table style="width: 100%; height: 100%"> <tr> <td></td> <td style="width: 200px; background: red;"></td> </tr> <tr> <td style="height: 100px; background: blue"></td> <td></td> </tr> </table> However, I want to place a lot of content in the first table cell and I want it to scroll, so it won't expand the table. Is it possible to make it overflow properly, without having a fixed height for the cell? Simply adding overflow: auto doesn't seem to work. PS. I hate tables, but can't figure out a very clean and cross-browser way to do a layout like this with divs and css. If someone can come up with one, I'll gladly use it.

    Read the article

  • Fast, easy, and secure method to perform DB actions with GET

    - by rob - not a robber
    Hey All, Sort of a methods/best practices question here that I am sure has been addressed, yet I can't find a solution based on the vague search terms I enter. I know starting off the question with "Fast and easy" will probably draw out a few sighs, so my apologies. Here is the deal. I have a logged in area where an ADMIN can do a whole host of POST operations to input data relating to their profile. The way I have data structured is pretty distinct and well segmented in most tables as it relates to the ID of the admin. Now, I have a table where I dump one type of data into and differentiate this data by assigning the ADMIN's unique ID to each record. In other words, all ADMINs have this one type of data writing to this table. I just differentiate by the ADMIN ID with each record. I was planning on letting the ADMIN remove these records by clicking on a link with a query string - obviously using GET. Obviously, the query structure is in the link so any logged in admin could then exploit the URL and delete a competitor's records. Is the only way to safely do this through POST or should I pass through the session info that includes password and validate it against the ADMIN ID that is requesting the delete? This is obviously much more work for me. As they said in the auto repair biz I used to work in... there are 3 ways to do a job: Fast, Good, and Cheap. You can only have two at a time. Fast and cheap will not be good. Good and cheap will not have fast turnaround. Fast and good will NOT be cheap. haha I guess that applies here... can never have Fast, Easy and Secure all at once ;) Thanks in advance...
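
    Whichever verb is used, the usual cheap-and-safe trick is to scope every destructive statement by the admin ID taken from the server-side session, never from the URL, so a tampered query string can only ever touch the requester's own rows. A sketch, assuming a records table with an admin_id column as described:

        -- id comes from the link; admin_id comes from the session on the server.
        -- A guessed or edited URL therefore cannot reach another admin's records.
        DELETE FROM records
        WHERE id = ?          -- bound from the query-string parameter
          AND admin_id = ?;   -- bound from the logged-in session, not from the request

    Using POST (or at least a per-request token) for deletes is still worth doing, since GET links can be prefetched or crawled, but re-validating the password on every request should not be necessary.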

    Read the article

  • Group / User based security. Table / SQL question

    - by Brett
    Hi, I'm setting up a group / user based security system. I have 4 tables as follows: user, groups, group_user_mappings, acl, where acl is the mapping between an item_id and either a group or a user. The way I've done the acl table, I have 3 columns of note (actually a 4th one as an auto-id, but that is irrelevant): col 1 item_id (item to access), col 2 user_id (user that is allowed to access), col 3 group_id (group that is allowed to access). So for example: (item1, peter, null), (item2, null, group1), (item3, jane, null) - either the acl row will give access to a user or to a group. Any one line in the ACL table will either have an item - user mapping, or an item - group mapping. If I want a query that returns all objects a user has access to, I think I need a SQL query with a UNION, because I need 2 separate queries that join like: item - acl - group - user AND item - acl - user. This I guess will work OK. Is this how it's normally done? Am I doing this the right way? Seems a little messy. I was thinking I could get around it by creating a single user group for each person, so I only ever deal with groups in my SQL, but this seems a little messy as well.
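
    For what it's worth, the UNION version usually comes out fairly compact; a rough sketch, assuming group_user_mappings carries user_id and group_id columns (the names are guesses):

        -- Items granted to the user directly
        SELECT a.item_id
        FROM acl AS a
        WHERE a.user_id = ?

        UNION

        -- Items granted to any group the user belongs to
        SELECT a.item_id
        FROM acl AS a
        JOIN group_user_mappings AS gu ON gu.group_id = a.group_id
        WHERE gu.user_id = ?;

    The single-member-group idea would indeed remove the UNION, at the cost of keeping those one-person groups in sync with the user table.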

    Read the article

  • Alignment in assembly

    - by jena
    Hi, I'm spending some time on assembly programming (Gas, in particular) and recently I learned about the align directive. I think I've understood the very basics, but I would like to gain a deeper understanding of its nature and when to use alignment. For instance, I wondered about the assembly code of a simple C++ switch statement. I know that under certain circumstances switch statements are based on jump tables, as in the following few lines of code: .section .rodata .align 4 .align 4 .L8: .long .L2 .long .L3 .long .L4 .long .L5 ... .align 4 aligns the following data on the next 4-byte boundary which ensures that fetching these memory locations is efficient, right? I think this is done because there might be things happening before the switch statement which caused misalignment. But why are there actually two calls to .align? Are there any rules of thumb when to call .align or should it simply be done whenever a new block of data is stored in memory and something prior to this could have caused misalignment? In case of arrays, it seems that alignment is done on 32-byte boundaries as soon as the array occupies at least 32 byte. Is it more efficient to do it this way or is there another reason for the 32-byte boundary? I'd appreciate any explanation or hint on literature.

    Read the article

  • Reading/Writing DataTables to and from an OleDb Database LINQ

    - by jsmith
    My current project is to take information from an OleDbDatabase and .CSV files and place it all into a larger OleDbDatabase. I have currently read in all the information I need from both .CSV files, and the OleDbDatabase into DataTables.... Where it is getting hairy is writing all of the information back to another OleDbDatabase. Right now my current method is to do something like this: OleDbTransaction myTransaction = null; try { OleDbConnection conn = new OleDbConnection("PROVIDER=Microsoft.Jet.OLEDB.4.0;" + "Data Source=" + Database); conn.Open(); OleDbCommand command = conn.CreateCommand(); string strSQL; command.Transaction = myTransaction; strSQL = "Insert into TABLE " + "(FirstName, LastName) values ('" + FirstName + "', '" + LastName + "')"; command.CommandType = CommandType.Text; command.CommandText = strSQL; command.ExecuteNonQuery(); conn.close(); catch (Exception) { // IF invalid data is entered, rolls back the database myTransaction.Rollback(); } Of course, this is very basic and I'm using an SQL command to commit my transactions to a connection. My problem is I could do this, but I have about 200 fields that need inserted over several tables. I'm willing to do the leg work if that's the only way to go. But I feel like there is an easier method. Is there anything in LINQ that could help me out with this?

    Read the article

  • Problem importing Oracle .dmp file

    - by BitFiddler
    So I have looked at all the suggested ways of importing .dmp files and non of them seem to answer this question: where does the data go once you import it? Context: I created a user like so: SQL> create user IMPORTER identified by "12345"; SQL> grant connect, unlimited tablespace, resource to IMPORTER; I then ran the 'imp' command as follows: C:\>imp system/password FROMUSER=OVIEDOE TOUSER=IMPORTER file=c:\database1.dmp Now there were 9 .dmp files, after each one it asked me for the next one and then I received the message "Import terminated successfully with warnings." The warning was: Warning: the objects were exported by OVIEDOE, not by you import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set export client uses WE8ISO8859P1 character set (possible charset conversion) IMP-00046: using FILESIZE value from export file of 2147483648 Now it says it was terminated successfully so my assumption (I am new to oracle so this may be wrong) is that the data was loaded. However, when I use SQL developer to connect to the database and look under the 'tables' node under the IMPORTER user, there is nothing there. What is going on? Did the data load? If so, where can I find it?
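
    One quick way to confirm where (or whether) the rows landed is to ask the data dictionary from a privileged connection, rather than relying on SQL Developer's Tables node, which only shows the schema you are connected as. A sketch:

        -- Run as SYSTEM (or another privileged user): shows which schema owns the imported tables.
        -- num_rows comes from optimizer statistics and may be NULL until the tables are analyzed.
        SELECT owner, table_name, num_rows
        FROM   all_tables
        WHERE  owner IN ('IMPORTER', 'OVIEDOE')
        ORDER  BY owner, table_name;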

    Read the article

  • Character Set Issues when Upgrading from Symfony 2.0.* to Symfony 2.1.*?

    - by Adam Stacey
    I have recently upgraded my staging test site to the latest version of Symfony and updated all the vendors using composer as instructed in the upgrade document that comes with the download. Everything has all updated fine, but I have noticed now that some bits of HTML are not displaying in the Twig templates. I did a comparison with the current live site and it appears to be a character set issue. As an example I had a drop down list that had the following value in: Kitchen Ducting > Ducting Kits > Ducting Kit 4” / 100mm In the updated site the drop-down list item just appeared blank. When I used Twig's raw function it then displayed the item again, but with the dreaded question mark in a black diamond. Kitchen Ducting > Ducting Kits > Ducting Kit 4? / 100mm Things that you should know that may help: The staging test site and live site are both on the same server. In my httpd.conf file I have 'AddDefaultCharset utf-8'. In my php.ini file I have 'default_charset = "utf-8"'. The HTML file served has the Content-Type meta tag 'content="text/html; charset=utf-8"' My database is InnoDB and uses 'utf8' as the default character set and 'utf8_general_ci' as default collation. All tables in the database also use the defaults. I looked into BOM with UTF8, but could not work out if that was a problem or not?
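
    One thing worth checking while bisecting this is whether the bytes stored in MySQL are really utf8, or whether some column-level settings drifted from the table defaults; a quick data-dictionary check (the database name below is a placeholder):

        -- Column-level character sets can differ from the table/database defaults
        SELECT table_name, column_name, character_set_name, collation_name
        FROM   information_schema.columns
        WHERE  table_schema = 'your_database'
          AND  character_set_name IS NOT NULL;

    If the stored data turns out to be fine, the problem is more likely on the PHP/Doctrine connection-charset side of the upgraded application than in the tables themselves.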

    Read the article

  • Change with jQuery a cell of a table created with JSF

    - by perissf
    From within a xhtml page created with JSF, I need to use JavaScript / jQuery for changing the content of a cell of a table. I know how to assign a unique id to the div containing the table, and to the tbody. I can also assign unique class names to the div itself and to the target column. The target row is identified by the data-rk attribute. <div id="tabForm:centerTabView:personsTable" class="ui-datatable ui-widget personsTable"> <table role="grid"> <tbody id="tabForm:centerTabView:personsTable_data" > <tr data-rk="2" > <td ... /> <td class="lastNameCol" role="gridcell"> <div> To Be Edited </div> </td> <td ... /> </tr> <tr ... /> </tbody> </table> </div> I have tried with many combinations of different jQuery selectors, but I am really lost. I need to search my target row and my target column inside that particular div or inside that particular table, because the xhtml page may contain other tables with different unique ids (and accidentally with the same row and column ids).

    Read the article

  • How can I make this SQL query more efficient? PHP.

    - by Alan Grant
    Hi all, I have a system whereby a user can view categories that they've subscribed to individually, and also those that are available in the region they belong in by default. So, the tables are as follows: Categories UsersCategories RegionsCategories I'm querying the db for all the categories within their region, and also all the individual categories that they've subscribed to. My query is as follows: Select * FROM (categories c) LEFT JOIN users_categories uc on uc.category_id = c.id LEFT JOIN regions_categories rc on rc.category_id = c.id WHERE (rc.region_id = ? OR uc.user_id = ?) At least I believe that's the query, I'm creating it using Cake's ORM layer, so the exact one is: $conditions = array( array( "OR" => array ( 'RegionsCategories.region_id' => $region_id, 'UsersCategories.user_id' => $user_id ) )); $this->find('all', $conditions); This turns out to be incredibly slow (sometimes around 20 seconds or so. Each table has around 5,000 rows). Is my design at fault here? How can I retrieve both the users' individual categories and those within their region all in one query without it taking ages? Thanks!
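
    The OR spanning two different LEFT JOINed tables is typically what stops MySQL from using either index here; splitting it into a UNION of two straight joins (with indexes on regions_categories(region_id, category_id) and users_categories(user_id, category_id)) usually helps a lot. A sketch of the raw SQL, untested against this schema:

        -- Categories visible through the user's region
        SELECT c.*
        FROM categories AS c
        JOIN regions_categories AS rc ON rc.category_id = c.id
        WHERE rc.region_id = ?

        UNION   -- removes duplicates where a category matches both branches

        -- Categories the user subscribed to individually
        SELECT c.*
        FROM categories AS c
        JOIN users_categories AS uc ON uc.category_id = c.id
        WHERE uc.user_id = ?;

    CakePHP's ORM will keep generating the OR form from those conditions, so this would probably need to be two separate finds merged in PHP, or a custom query.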

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and dbml in visual studio 2010 using its wizards. Everything was working fine until i checked the tables data (also in visual studio server explorer) and none of my updates were there. using (var context = new CenasDataContext()) { context.Log = Console.Out; context.Cenas.InsertOnSubmit(new Cena() { id = 1}); context.SubmitChanges(); } This is the code i am using to update my database. At this point my database has one table with one field (PK) named ID. *INSERT INTO [dbo].Cenas VALUES (@p0) -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1] -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1* This is LOG from the execution (printed the context log into the console). The problem i'm having is that these updates are not persistent in the database. I mean that when i query my database (visual studio server explorer - new query) i see the table is empty, every time. I am using a SQL Server database file (.mdf).

    Read the article

  • SQL Server INSERT ... SELECT Statement won't parse

    - by Jim Barnett
    I am getting the following error message with SQL Server 2005 Msg 120, Level 15, State 1, Procedure usp_AttributeActivitiesForDateRange, Line 18 The select list for the INSERT statement contains fewer items than the insert list. The number of SELECT values must match the number of INSERT columns. I have copy and pasted the select list and insert list into excel and verified there are the same number of items in each list. Both tables an additional primary key field with is not listed in either the insert statement or select list. I am not sure if that is relevant, but suspicious it may be. Here is the source for my stored procedure: CREATE PROCEDURE [dbo].[usp_AttributeActivitiesForDateRange] ( @dtmFrom DATETIME, @dtmTo DATETIME ) AS BEGIN SET NOCOUNT ON; DECLARE @dtmToWithTime DATETIME SET @dtmToWithTime = DATEADD(hh, 23, DATEADD(mi, 59, DATEADD(s, 59, @dtmTo))); -- Get uncontested DC activities INSERT INTO AttributedDoubleClickActivities ([Time], [User-ID], [IP], [Advertiser-ID], [Buy-ID], [Ad-ID], [Ad-Jumpto], [Creative-ID], [Creative-Version], [Creative-Size-ID], [Site-ID], [Page-ID], [Country-ID], [State Province], [Areacode], [OS-ID], [Domain-ID], [Keyword], [Local-User-ID], [Activity-Type], [Activity-Sub-Type], [Quantity], [Revenue], [Transaction-ID], [Other-Data], Ordinal, [Click-Time], [Event-ID]) SELECT [Time], [User-ID], [IP], [Advertiser-ID], [Buy-ID], [Ad-ID], [Ad-Jumpto], [Creative-ID], [Creative-Version], [Creative-Size-ID], [Site-ID], [Page-ID], [Country-ID], [State Province], [Areacode], [OS-ID], [Domain-ID], [Keyword], [Local-User-ID] [Activity-Type], [Activity-Sub-Type], [Quantity], [Revenue], [Transaction-ID], [Other-Data], REPLACE(Ordinal, '?', '') AS Ordinal, [Click-Time], [Event-ID] FROM Activity_Reports WHERE [Time] BETWEEN @dtmFrom AND @dtmTo AND REPLACE(Ordinal, '?', '') IN (SELECT REPLACE(Ordinal, '?', '') FROM Activity_Reports WHERE [Time] BETWEEN @dtmFrom AND @dtmTo EXCEPT SELECT CONVERT(VARCHAR, TripID) FROM VisualSciencesActivities WHERE [Time] BETWEEN @dtmFrom AND @dtmTo); END GO
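
    For what it's worth, the two lists only look equal: in the SELECT list there is no comma between [Local-User-ID] and [Activity-Type], so SQL Server parses [Activity-Type] as a column alias for [Local-User-ID] and the SELECT comes up one item short of the INSERT list. A minimal illustration of the parsing difference:

        -- One column, aliased 'Activity-Type' (this is how the procedure currently parses)
        SELECT [Local-User-ID] [Activity-Type]
        FROM Activity_Reports;

        -- Two columns, which is what the INSERT column list expects
        SELECT [Local-User-ID], [Activity-Type]
        FROM Activity_Reports;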

    Read the article

  • How to reference using Entity Framework and Asp.Net Mvc 2

    - by Picflight
    Tables CREATE TABLE [dbo].[Users]( [UserId] [int] IDENTITY(1,1) NOT NULL, [UserName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [Email] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [BirthDate] [smalldatetime] NULL, [CountryId] [int] NULL, CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED ([UserId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] CREATE TABLE [dbo].[TeamMember]( [UserId] [int] NOT NULL, [TeamMemberUserId] [int] NOT NULL, [CreateDate] [smalldatetime] NOT NULL CONSTRAINT [DF_TeamMember_CreateDate] DEFAULT (getdate()), CONSTRAINT [PK_TeamMember] PRIMARY KEY CLUSTERED ([UserId] ASC, [TeamMemberUserId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] dbo.TeamMember has both UserId and TeamMemberUserId as the index key. My goal is to show a list of Users on my View. In the list I want to flag, or highlight the Users that are Team Members of the LoggedIn user. My ViewModel public class UserViewModel { public int UserId { get; private set; } public string UserName { get; private set; } public bool HighLight { get; private set; } public UserViewModel(Users users, bool highlight) { this.UserId = users.UserId; this.UserName = users.UserName; this.HighLight = highlight; } } View <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MvcPaging.IPagedList<MyProject.Mvc.Models.UserViewModel>>" %> <% foreach (var item in Model) { %> <%= item.UserId %> <%= item.UserName %> <%if (item.HighLight) { %> Team Member <% } else { %> Not Team Member <% } %> How do I toggle the TeamMember or Not If I add dbo.TeamMember to the EDM, there are no relationships on this table, how will I wire it to Users object? So I am comparing the LoggedIn UserId with this list(SELECT TeamMemberUserId FROM TeamMember WHERE UserId = @LoggedInUserId)
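
    Whatever the EF mapping ends up looking like, the result set it has to produce is just Users left-joined to TeamMember filtered by the logged-in user; a T-SQL sketch of that shape (which could also back a view for the EDM to map):

        -- HighLight = 1 when the listed user is a team member of the logged-in user
        SELECT u.UserId,
               u.UserName,
               CASE WHEN tm.TeamMemberUserId IS NULL THEN 0 ELSE 1 END AS HighLight
        FROM dbo.Users AS u
        LEFT JOIN dbo.TeamMember AS tm
               ON tm.TeamMemberUserId = u.UserId
              AND tm.UserId = @LoggedInUserId;

    In LINQ the same shape would be a group join with DefaultIfEmpty over those two keys, projected into UserViewModel.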

    Read the article

  • mysqldb interfaceError

    - by Johanna
    I have a very weird probleme with mysqldb (mysql module for python). I have a file with queries for inserting records in tables. If I call the functions from the file, it works just fine. But when I try to call one of the functions from another file it throws me a _mysql_exception.InterfaceError: (0, '') I really don't get what I'm doing wrong here.. I call the function from buildDB.py : import create create.newFormat("HD", 0,0,0) The function newFormat(..) is in create.py (imported) : from Database import Database db = Database() def newFormat(name, width=0, height=0, fps=0): format_query = "INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('"+name+"',"+str(width)+","+str(height)+","+str(fps)+");" db.execute(format_query) And the class Databse is the following : import MySQLdb from MySQLdb.constants import FIELD_TYPE class Database(): def __init__(self): server = "localhost" login = "seq" password = "seqmanager" database = "Sequence" my_conv = { FIELD_TYPE.LONG: int } self.conn = MySQLdb.connection(host=server, user=login, passwd=password, db=database, conv=my_conv) # self.cursor = self.conn.cursor() def close(self): self.conn.close() def execute(self, query): self.conn.query(query) (I put only relevant code) Traceback : Z:\sequenceManager\mysql>python buildDB.py D:\ProgramFiles\Python26\lib\site-packages\MySQLdb\__init__.py:34: DeprecationWa rning: the sets module is deprecated from sets import ImmutableSet INSERT INTO Format (form_name, form_width, form_height, form_fps) VALUES ('HD',0 ,0,0); Traceback (most recent call last): File "buildDB.py", line 182, in <module> create.newFormat("HD") File "Z:\sequenceManager\mysql\create.py", line 52, in newFormat db.execute(format_query) File "Z:\sequenceManager\mysql\Database.py", line 19, in execute self.conn.query(query) _mysql_exceptions.InterfaceError: (0, '') The warning has never been a problem before so I don't think it's related.

    Read the article

  • World's Most Challenging MySQL SQL Query (at least I think so...)

    - by keruilin
    Whoever answers this question can claim credit for solving the world's most challenging SQL query, according to yours truly. Working with 3 tables: users, badges, awards. Relationships: user has many awards; award belongs to user; badge has many awards; award belongs to badge. So badge_id and user_id are foreign keys in the awards table. The business logic at work here is that every time a user wins a badge, he/she receives it as an award. A user can be awarded the same badge multiple times. Each badge is assigned a designated point value (point_value is a field in the badges table). For example, BadgeA can be worth 500 Points, BadgeB 1000 Points, and so on. As further example, let's say UserX won BadgeA 10 times and BadgeB 5 times. BadgeA being worth 500 Points, and BadgeB being worth 1000 Points, UserX has accumulated a total of 10,000 Points ((10 x 500) + (5 x 1000)). The end game here is to return a list of top 50 users who have accumulated the most badge points. Can you do it?
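
    Assuming awards carries user_id and badge_id and badges carries point_value as described (the other column names below are guesses), the whole thing reduces to a join, a SUM and a LIMIT:

        -- Top 50 users by total badge points; a badge awarded N times contributes N * point_value
        SELECT u.id,
               u.name,                        -- adjust to the real users column
               SUM(b.point_value) AS total_points
        FROM users  AS u
        JOIN awards AS a ON a.user_id = u.id
        JOIN badges AS b ON b.id      = a.badge_id
        GROUP BY u.id, u.name
        ORDER BY total_points DESC
        LIMIT 50;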

    Read the article

  • Finding Specific Descendant Nodes With XSL:Key

    - by DBA_Alex
    Given the following code: <database> <table name="table1"> <column name="id"/> <column name="created_at"/> </table> <table name="table2"> <column name="id"/> <column name="updated_at"/> </table> </database> I want to be able to test using an xsl:key if specific table has a specific column by name. For instance, I want to know if a table has a 'created_at' column so I can write specific code within the transformation. I've gotten a general key that will test if any table has a given column by name, but have not figured out how to make it specific to the table the transformation is currently working with. <xsl:key name="columnTest" match="column" use="@name"/> <xsl:for-each select="database/table"> <xsl:choose> <xsl:when test="key('columnTest', 'created_at')"> <xsl:text>true</xsl:text> </xsl:when> <xsl:otherwise> <xsl:text>false</xsl:text> </xsl:otherwise> </xsl:choose> </xsl:for-each> So I end up with 'true' for all the tables. Any guidance would be appreciated.

    Read the article

  • database structure

    - by jindalsyogesh
    I have a table named ActivityRecording. This table currently has 500,000 records. I need to add a lot of new inputs that relate to the ActivityRecording table. The relation of ActivityRecording to these new input fields is 1 to 0..1. So, what's going to happen on screen is: when a user fills in the ActivityRecording data, he will then be taken to a new page, and this page will show a form based on the user's input (from a dropdown named service) in ActivityRecording. There will be 6 different kinds of form (each form will have 7-8 inputs, which include textareas of size 5kb, textboxes and checkboxes). So, for one ActivityRecording the user will fill in one out of the 6 forms. There are two ways I know of (there could be more) to design the data structure: (1) add all the inputs from all 6 forms into the ActivityRecording table, so the columns belonging to 5 of the forms will be null in this table and only the columns belonging to one form will have values; or (2) add 6 new tables (one for each form) and add 6 foreign key columns to the ActivityRecording table, so out of the 6 foreign keys, 5 will be null and one will actually point to a table. Which approach is the better data structure design? Please take into consideration that the number of rows in this table is 500,000 and is expected to grow at a faster rate now.
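
    A variant of option (2) avoids the six nullable foreign keys on ActivityRecording altogether: give each form its own table whose primary key is also the foreign key back to ActivityRecording, so a recording has at most one row in exactly one of the six tables and the parent table never changes. A sketch (T-SQL-flavoured; the Id primary key on the parent and the form columns are made up):

        -- One table per form type; shared primary key, no nullable FK columns anywhere
        CREATE TABLE ActivityRecordingServiceA (
            ActivityRecordingId INT NOT NULL PRIMARY KEY
                REFERENCES ActivityRecording (Id),
            LongText1  VARCHAR(5000) NULL,   -- the ~5 KB textareas
            LongText2  VARCHAR(5000) NULL,
            SomeFlag   BIT NULL,
            SomeValue  VARCHAR(255) NULL
        );
        -- ActivityRecordingServiceB ... ServiceF follow the same pattern with their own fields.

    Queries then only ever join the one form table that matches the service dropdown, and the 500,000-row parent table stays narrow.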

    Read the article

  • Mapping enum with fluent nhibernate

    - by Puneet
    I am following the http://wiki.fluentnhibernate.org/Getting%5Fstarted tutorial to create my first NHibernate project with Fluent NHibernate I have 2 tables 1) Account with fields Id AccountHolderName AccountTypeId 2) AccountType with fields Id AccountTypeName Right now the account types can be Savings or Current So the table AccountTypes stores 2 rows 1 - Savings 2 - Current For AccoutType table I have defined enum public enum AccountType { Savings=1, Current=2 } For Account table I define the entity class public class Account { public virtual int Id {get; private set;} public virtual string AccountHolderName {get; set;} public virtual string AccountType {get; set;} } The fluent nhibernate mappings are: public AgencyMap() { Id(o => o.Id); Map(o => o.AccountHolderName); Map(o => o.AccountType); } When I try to run the solution, it gives an exception - InnerException = {"(XmlDocument)(2,4): XML validation error: The element 'class' in namespace 'urn:nhibernate-mapping-2.2' has incomplete content. List of possible elements expected: 'meta, subselect, cache, synchronize, comment, tuplizer, id, composite-id' in namespace 'ur... I guess that is because I have not speciofied any mapping for AccountType. The questions are: How can I use AccountType enum instead of a AccountType class? Maybe I am going on wrong track. Is there a better way to do this? Thanks!

    Read the article

  • Stored Procedure To Search the AccessRights given to the Users.

    - by thevan
    Hi, I want to display the Access Rights given to the Users for the particular module. I have Seven Tables such as RoleAccess, Roles, Functions, Module, SubModule, Company and Unit. RoleAccess is the Main Table. The AccessRights given will be stored in the RoleAccess Table only. RoleAccess Table has the following columns such as RoleID, CompanyID, UnitID, FunctionID, ModuleID, SubModuleID, Create, Update, Delete, Read, Approve. Here Create_f, Update_f, Delete_f, Read_f and Approve_f are flags. Company Table has two columns such as CompanyID and CompanyName. Unit Table has three columns such as UnitID, UnitName and CompanyID. Roles Table has four columns such as RoleID, RoleName, CompanyID and UnitID. Module Table has two columns such as ModuleID and ModuleName. SubModule Table has three columns such as ModuleID, SubModuleID, SubModuleName. Functions Table has five columns such as FunctionID, FunctionName, ModuleID and SubModuleID. At First, The RoleAccess Table does not contain any records. So I want to display the ModuleName, SubModuleName, FunctionName, CompanyID, RoleID, UnitID, FunctionID, ModuleID, SubModuleID, Create_f, Update_f, Delete_f, Read_f and Approve_f. If the AccessRights is assigned to the Particular RoleID means the flags in the search results will be 1 else it will be 0. I have witten one stored procedure but it displays the records based on the RoleID stored in the RoleAccess table. But I also want to display the Flags as 0 for the Roles not stored in the RoleAccess Table. I want the Stored Procedure for this. Any one please help me.
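
    The usual shape for "every function, with flags defaulting to 0 when no RoleAccess row exists yet" is a LEFT JOIN from Functions to RoleAccess with ISNULL around the flag columns; a T-SQL sketch against the tables described (the join keys are assumptions, and the @-variables stand for the procedure's parameters):

        -- All functions with the selected role's rights; missing RoleAccess rows show as 0 flags
        SELECT  m.ModuleName, sm.SubModuleName, f.FunctionName,
                f.ModuleID, f.SubModuleID, f.FunctionID,
                @CompanyID AS CompanyID, @UnitID AS UnitID, @RoleID AS RoleID,
                ISNULL(ra.Create_f,  0) AS Create_f,
                ISNULL(ra.Update_f,  0) AS Update_f,
                ISNULL(ra.Delete_f,  0) AS Delete_f,
                ISNULL(ra.Read_f,    0) AS Read_f,
                ISNULL(ra.Approve_f, 0) AS Approve_f
        FROM Functions AS f
        JOIN Module    AS m  ON m.ModuleID = f.ModuleID
        JOIN SubModule AS sm ON sm.ModuleID = f.ModuleID AND sm.SubModuleID = f.SubModuleID
        LEFT JOIN RoleAccess AS ra
               ON  ra.FunctionID = f.FunctionID
               AND ra.RoleID     = @RoleID
               AND ra.CompanyID  = @CompanyID
               AND ra.UnitID     = @UnitID;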

    Read the article

  • ASP.Net forms authentication - multiple providers

    - by Chris Klepeis
    I have an ASP.Net 4.0 application, and within it is a folder called "Forum", setup as a sub application in IIS 7. This forum package implements a custom provider for .net membership. The forum is running in .net 3.5. I'd like to setup the main site so that when users login, it logs them into both my site and the forum site. Both the main site and the forum have separate .Net membership tables. How can I specify which provider to use with formsauthentication? right now I have FormsAuthentication.SetAuthCookie(...); this, however, just uses my default provider and does nothing with the provider for the forum I tried setting the forum app and my web app to have the same cookie name, as well as setting the machinekey on each: <machineKey validationKey="AutoGenerate" validation="SHA1" /> no dice. I googled and didnt really come up with any example of how to use multiple providers like I want to. I updated my web.config to have both provideers but this is useless if I cannot specify in my code which one to use.

    Read the article
