Search Results

Search found 9929 results on 398 pages for 'azure tables'.


  • How to group fields in Crystal Reports using VB.NET code?

    - by meenakshi
    I am using VB.NET 2005 and trying to set the report groupings of a Crystal Report at runtime, based on user-defined options. MSDN suggests this:

        Dim FieldDef As FieldDefinition
        FieldDef = Report.Database.Tables.Item(0).Fields.Item(ComboBox1.Text)
        Report.DataDefinition.Groups.Item(0).ConditionField = FieldDef

    but it fails with an "invalid group number" error. How can I solve this?
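
    A likely cause, not stated in the original post, is that the report has no group defined yet, so Groups.Item(0) does not exist. A minimal VB.NET sketch, assuming the CrystalDecisions.CrystalReports.Engine API and a report designed with at least one placeholder group:

        ' Guard against a report with no groups before retargeting the
        ' condition field; the runtime API can only modify existing groups.
        Dim FieldDef As FieldDefinition = _
            Report.Database.Tables.Item(0).Fields.Item(ComboBox1.Text)
        If Report.DataDefinition.Groups.Count > 0 Then
            Report.DataDefinition.Groups.Item(0).ConditionField = FieldDef
        End If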

  • What is the most elegant solution for generating PowerPoint slides online?

    - by Matt
    I would like some advice on the following, please. We have an ASP.NET site where we need to generate PowerPoint slides of the data. The slides will need to include charts and tables. I have come across Aspose.Slides online, which seems a good option. Is this the best solution for this? What are your experiences with Aspose.Slides? Are there any other options we could pursue? Thanks

  • Is there a fast way to change all the collations to utf8_unicode?

    - by Mark
    I have realised, after making about 20 tables, that I need to use utf8_unicode as opposed to utf8_general. What is the fastest way to change it using phpMyAdmin? I had one idea: export the database as SQL, run a find-and-replace on it in Notepad, and then reimport it, but that sounds like a bit of a headache. Is there a better way?
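
    Not from the original thread, but the conversion can also be done per table directly in SQL; a sketch, assuming the tables hold utf8 data (table and database names here are illustrative):

        ALTER TABLE my_table
            CONVERT TO CHARACTER SET utf8
            COLLATE utf8_unicode_ci;

        -- and the database default, so new tables inherit it:
        ALTER DATABASE my_db
            CHARACTER SET utf8
            COLLATE utf8_unicode_ci;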

  • Simple Regex Question

    - by Jim B
    Hey everyone, I need to search a bunch of files for anything that contains either "tblPayment" or "tblInvoice". I also want to match any tables named "tblPaymentMethod", "tblInvoiceItem", or "tblInvoicePayment". Would anybody care to help me out with how to write a regular expression for this? Thanks again!
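
    Not part of the original question, but a pattern along these lines would cover both prefixes and the longer table names:

        tbl(Payment|Invoice)\w*

        # e.g. searching a directory recursively with grep:
        grep -rE 'tbl(Payment|Invoice)\w*' .

    Since the longer names all start with one of the two prefixes, matching the prefix alone is enough to find the lines; the trailing \w* matters only if the full table name should be captured.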

  • Open Source - EER Modeling Tool

    - by Nick Fergis
    Is there a good open-source or reasonably priced EER modeling tool for MySQL besides MySQL Workbench? I find the MySQL Workbench interface to be clunky. I would like to be able to manage my production schema by beginning all design changes in the EER model and propagating those out to my schema as created and altered tables. Does anyone use a tool they love to manage their environments in this way? Thanks. - Nick

  • MySQL query question

    - by brux
    I have 2 tables:

        Customer:
            customerid - int, pri-key, auto
            fname - varchar
            sname - varchar
            housenum - varchar
            street - varchar

        Items:
            itemid - int, pri-key, auto
            type - varchar
            collectiondate - date
            releasedate - date
            customerid - int

    I need a query which will get me all items that have a releasedate within (and including) 3 days prior to the current date. The query should return customerid, fname, sname, street, housenum, type, and releasedate for all such items. Thanks in advance.
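
    Not from the original thread; one way to write this in MySQL, assuming the two tables join on customerid:

        SELECT c.customerid, c.fname, c.sname, c.street, c.housenum,
               i.type, i.releasedate
        FROM Customer AS c
        JOIN Items AS i ON i.customerid = c.customerid
        WHERE i.releasedate BETWEEN CURDATE() - INTERVAL 3 DAY
                                AND CURDATE();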

  • MySQL multi count() in one query

    - by atno
    Hi, I'm trying to count across several joined tables, but without any luck; what I get is the same number in every column (tUsers, tLists, tItems). My query is:

        SELECT COUNT(users.*) AS tUsers,
               COUNT(lists.*) AS tLists,
               COUNT(items.*) AS tItems,
               companyName
        FROM users AS c
        JOIN lists AS l ON c.userID = l.userID
        JOIN items AS i ON c.userID = i.userID
        GROUP BY companyID

    The result I want to get is:

        # | CompanyName | tUsers | tLists | tItems
        1 | RealCoName  |      5 |      2 |     15

    What modifications do I have to make to my query to get those results? Cheers
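
    Not part of the original thread; the joins multiply rows, so each COUNT sees the full cross product. Counting distinct keys avoids that. A sketch, assuming lists and items have listID and itemID primary keys (those names are illustrative):

        SELECT u.companyName,
               COUNT(DISTINCT u.userID) AS tUsers,
               COUNT(DISTINCT l.listID) AS tLists,
               COUNT(DISTINCT i.itemID) AS tItems
        FROM users AS u
        LEFT JOIN lists AS l ON u.userID = l.userID
        LEFT JOIN items AS i ON u.userID = i.userID
        GROUP BY u.companyID, u.companyName;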

  • Fastest way to upload an xls file into a database

    - by shmichael
    I have an xls file with ~60 sheets of data. I would like to move them into a database (Postgres) such that each sheet's data is stored in a different table. What is the fastest way of creating these tables? I don't care about naming or proper typing of the columns; the columns could all be strings, for that matter. I just don't want to run 60 different CSV uploads.
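
    Not from the original thread; a scripted route avoids the 60 manual uploads. A sketch in Python, assuming pandas (with an xls engine such as xlrd) and SQLAlchemy are installed, and with a hypothetical connection string and file name:

        import pandas as pd
        from sqlalchemy import create_engine

        engine = create_engine("postgresql://user:pass@localhost/mydb")

        # sheet_name=None loads every sheet into a dict of DataFrames;
        # dtype=str keeps everything as text, matching "no proper typing"
        sheets = pd.read_excel("workbook.xls", sheet_name=None, dtype=str)

        for name, frame in sheets.items():
            # one table per sheet, replaced on rerun
            frame.to_sql(name.lower(), engine, if_exists="replace", index=False)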

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users.

    For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this:

        CREATE TABLE [dbo].[results](
            [id] [bigint] IDENTITY(1,1) NOT NULL,
            [userid] [int] NULL,
            [variable] [varchar](8) NULL,
            [value] [tinyint] NULL,
            [submitted] [smalldatetime] NULL)

    where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this):

        SELECT t.id, t.variable, t.value
        FROM results t WITH (NOLOCK)
        WHERE t.userid = '2111846'
          AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
          AND t.id IN (SELECT MAX(id) AS id
                       FROM results WITH (NOLOCK)
                       WHERE userid = '2111846'
                         AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete')
                       GROUP BY variable)

    which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846.

    We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly elegant solution in place for sharding the data across multiple tables, which has been working quite well, but I am open to any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides.

    So the question is: how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance.

    NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) to using the NOLOCK directive here.
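
    Not part of the original question; one common rewrite is to resolve the per-variable MAX(id) in a derived table and join back on it, so the user/variable filter is written once rather than repeated. A sketch under the schema above:

        SELECT r.id, r.variable, r.value
        FROM results r
        JOIN (SELECT variable, MAX(id) AS max_id
              FROM results
              WHERE userid = 2111846
                AND variable IN ('internat', 'veteran', 'athlete')
              GROUP BY variable) latest
          ON r.id = latest.max_id;

    With a covering index on (userid, variable, id), the inner aggregate can be satisfied from the index alone, which is usually where the degradation at scale comes from.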

  • Where does Drupal store NODE data?

    - by RD
    This is a follow-up to my previous question: http://stackoverflow.com/questions/1284476/where-does-drupal-store-node-body-content. I tried adding values into node and node-revision, but the node data is still not showing, so obviously more data is stored somewhere else. Basically, I want to know: which tables are affected when you create a new node?

  • SQL query: multi-table selection

    - by nemiss
    I have 3 tables:

        - a Section table that defines some general item sections
        - a Category table, with a "section" column (foreign key)
        - a Product table, with a "category" column (foreign key)

    I want to get all products that belong to section X. How can I do it? A select from a select?
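
    Not from the original thread; no nested select is needed, a join through the category table is enough. A sketch, assuming the primary key columns are named id (X below stands for the target section's id):

        SELECT p.*
        FROM Product p
        JOIN Category c ON p.category = c.id
        WHERE c.section = X;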

  • SQL Server: how to set job priority

    - by Buzz
    Is there any way to set one job's priority higher than another's? In my case, there are two jobs working on the same set of tables: JOB-A runs every 12 hours and JOB-B runs every 10 minutes. I think that at the times when they run simultaneously, JOB-B gets into a deadlock and fails. I googled the topic and found that the SQL Server Resource Governor can be helpful in such cases. Does anyone know how to resolve this?
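
    Not part of the original question; deadlock victim selection can be steered per session with plain T-SQL, which is often simpler than the Resource Governor for this case. A sketch: give the long-running job the lower priority so the frequent one survives:

        -- run at the start of JOB-A's step(s)
        SET DEADLOCK_PRIORITY LOW;

        -- or, equivalently, raise JOB-B's session:
        SET DEADLOCK_PRIORITY HIGH;

    The session with the lower deadlock priority is chosen as the victim, so JOB-B would keep running while JOB-A is rolled back and can be retried.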

  • Database Schema Versioning Strategies

    - by Jack Ryan
    I work on a project that uses a reasonably large database, the live version weighing in at somewhere around 60-80GB. The live database is the only real definitive source of our schema, and because of its size, duplicating this database is too slow to be done often. This means we have ended up developing our database schema in a pretty ad hoc way, using SQL Compare to migrate changes from dev DBs to the live system, and only wiping our dev DBs every month or two. I am hoping to get some pointers on how to improve our database development workflow so that we have a little more control. Some things to think about:

    - Currently nobody is really in charge of the database schema; all developers can change it if they need to, though generally these decisions are talked about before they are made.
    - There are stored procedures, functions, and views in the database. These should probably be dumped to files so they can be reloaded on every build.
    - Schema changes should probably be checked in as scripts. We have started to do this recently. However, all our scripts must then be numbered (because there may be dependencies between them) and must be re-runnable (because our build script currently runs them all in order). This makes them hard to read, because they are full of conditionals that check whether tables or columns already exist (see the sketch after this list). This is a step that is often forgotten by developers.
    - Getting a new database should be quick and easy. This is currently a big problem: it takes several hours to get a copy of last night's backup and restore it onto a dev machine.
    - Some mechanism needs to be in place to allow developers to update static data. We have tables that contain data that is never updated through the application but does potentially need to be changed when we do a new release (often this drives dropdowns).
    - The whole thing needs to be runnable as part of a build script.

    Are there any tools that can be used to help with this? Eventually I would like to be at a point where a new DB can be built from scratch without copying any data from the live system. I don't mind writing some scripts to glue all the steps together, but each part should be easily editable so that we continue to use it rather than make changes directly on DBs.
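
    For illustration only (not from the original question): the guard conditionals mentioned above typically look like this in T-SQL, which is what makes numbered migration scripts re-runnable (table and column names here are hypothetical):

        -- migration 0042: add a status column only if it is not already there
        IF NOT EXISTS (SELECT 1 FROM sys.columns
                       WHERE object_id = OBJECT_ID('dbo.orders')
                         AND name = 'status')
        BEGIN
            ALTER TABLE dbo.orders ADD status tinyint NOT NULL DEFAULT 0;
        END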

  • Solution for updating a table based on data from another table

    - by I__
    I have 2 tables in Access. This is what I need:

    1. If the PK from table1 exists in table2, then delete the entire record with that PK from table2 and add the entire record from table1 into table2.
    2. If the PK does not exist, then add the record.

    I need help with both the SQL statement and the VBA. I guess the VBA should be a loop going through every record in table1; inside the loop I should have the select statement.
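
    Not from the original thread; no row-by-row loop is needed, because both rules collapse into "delete any table2 row whose PK appears in table1, then append all of table1". A sketch in Access VBA, assuming the key column is named ID (adjust to the real name):

        ' delete the rows that are about to be replaced, then append everything
        Dim db As DAO.Database
        Set db = CurrentDb
        db.Execute "DELETE FROM table2 " & _
                   "WHERE ID IN (SELECT ID FROM table1);", dbFailOnError
        db.Execute "INSERT INTO table2 SELECT * FROM table1;", dbFailOnError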

  • MySQL search for user and their roles

    - by Jenkz
    I am rewriting the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a user to a publication. Users can take multiple roles within multiple publications. Example table setup:

        "users"        : user_id, firstname, lastname
        "publications" : publication_id, name
        "link_writers" : user_id, publication_id
        "link_editors" : user_id, publication_id

    Current pseudo-SQL:

        SELECT * FROM (
            (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%')
            UNION
            (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%')
        ) AS dt
        JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id

    At the moment my roles statement is:

        SELECT dt2.user_id, dt2.publication_id, dt2.role
        FROM (
            (SELECT 'writer' AS role, link_writers.user_id,
                    link_writers.publication_id
             FROM link_writers)
            UNION
            (SELECT 'editor' AS role, link_editors.user_id,
                    link_editors.publication_id
             FROM link_editors)
        ) AS dt2

    The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables:

        "link_publishers"       : user_id, publisher_group_id
        "link_publisher_groups" : publisher_group_id, publication_id

    So in that instance, the query forming part of my UNION would be:

        SELECT 'publisher' AS role, lp.user_id, lpg.publication_id
        FROM link_publishers AS lp
        JOIN link_publisher_groups AS lpg
          ON lpg.publisher_group_id = lp.publisher_group_id

    I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. The initial lookup in the users table is fast, but the joining really slows things down. How can I only join on the relevant roles?

    -------------------------- EDIT ----------------------------------

    [EXPLAIN screenshot omitted; its contents are described below.] The bottom bit in red is the WHERE firstname LIKE '%Jenkz%' lookup; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%', hence the large row count, but I think this is unavoidable - unless there is a way to put an index across concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12), which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use dt.user_id as a comparison inside the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams", etc.
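
    Not part of the original question; since MySQL materializes the derived UNION table in full before the outer join can filter it, one workaround is to push the user filter down into each UNION arm, so each link table is narrowed by the name match before it is expanded into roles. A sketch under the example schema:

        SELECT 'writer' AS role, lw.user_id, lw.publication_id
        FROM users u
        JOIN link_writers lw ON lw.user_id = u.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'
        UNION ALL
        SELECT 'editor' AS role, le.user_id, le.publication_id
        FROM users u
        JOIN link_editors le ON le.user_id = u.user_id
        WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%';

    UNION ALL also skips the duplicate-elimination pass that plain UNION pays for, which is safe here because each arm produces a distinct role label.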

  • How to use LINQ for CRUD with a simple SQL table?

    - by Rob Ferno
    Every LINQ blog I have found seems to be around 2 years old. I understand the syntax but need more direction on creating the SQL mapping and context classes. I just need to use LINQ for the 2 SQL tables I have, nothing complicated. Do folks write the SQL mapping classes by hand for such cases, or is there a decent tool for this? Can someone point me in the right direction?
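
    Not from the original thread; for a couple of tables, hand-written LINQ to SQL attribute mappings stay short. A sketch, assuming a hypothetical Customers table with Id and Name columns:

        // minimal hand-rolled LINQ to SQL mapping (table/columns illustrative)
        using System.Data.Linq;
        using System.Data.Linq.Mapping;

        [Table(Name = "Customers")]
        public class Customer
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = true)]
            public int Id { get; set; }

            [Column]
            public string Name { get; set; }
        }

        // usage: query and update through a plain DataContext
        // (connectionString is assumed to point at your database)
        using (var db = new DataContext(connectionString))
        {
            var customers = db.GetTable<Customer>();
            customers.InsertOnSubmit(new Customer { Name = "Acme" });
            db.SubmitChanges();
        }

    Alternatively, the sqlmetal.exe tool that ships with the SDK can generate these classes from an existing database, and the Visual Studio O/R designer (.dbml files) does the same thing graphically.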

  • MySQL doesn't use index in join query

    - by Kocsonya Laci
    I have two tables:

        comments (id (primary key), author, ip (index))
        visitors (id (primary key), date_time, ip (index))

    I want to join them like this:

        SELECT visitors.date_time
        FROM comments
        LEFT JOIN visitors ON (comments.ip = visitors.ip)
        WHERE comments.author = 'author'
        LIMIT 10

    It works, but very slowly. EXPLAIN shows that it doesn't use the index on the visitors table:

        id | select_type | table    | type | possible_keys | key    | key_len | ref   | rows | Extra
        1  | SIMPLE      | comments | ref  | author        | author | 78      | const | 9660 | Using where
        1  | SIMPLE      | visitors | ALL  | NULL          | NULL   | NULL    | NULL  | 8033 |

    Any ideas? Thanks!
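
    Not part of the original question; when MySQL refuses an index on a join column, a usual culprit is that the two ip columns differ in type, character set, or collation, which makes the join condition non-indexable. A quick check:

        -- compare the exact definitions of the two ip columns
        SHOW CREATE TABLE comments;
        SHOW CREATE TABLE visitors;

        -- if they differ, align them; the definition below is illustrative
        ALTER TABLE visitors MODIFY ip VARCHAR(15) CHARACTER SET utf8;

    If the definitions already match, note that 8,033 rows is small enough that the optimizer may simply judge a full scan cheaper than repeated index lookups.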

  • Allocation of memory with LE routines

    - by HaWe
    At /questions/2000777/allocation-of-memory-in-variable-length-tables, NealB mentioned LE routines to allocate/deallocate memory in a non-CICS COBOL program. I would very much like to know how this is done, i.e. how the LE routine is called. (I'm familiar with the LINKAGE SECTION and with SET ADDRESS.) Since I have no access to an IBM mainframe at the moment - meaning no access to the online documentation - some code snippets would enlighten me.
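
    Not from the original thread; the relevant Language Environment callable services are CEEGTST (get heap storage) and CEEFRST (free it). A sketch from memory, so the parameter details should be checked against the LE Programming Reference:

        WORKING-STORAGE SECTION.
        01  HEAP-ID     PIC S9(9) BINARY VALUE 0.      *> 0 = default heap
        01  STG-SIZE    PIC S9(9) BINARY.
        01  STG-PTR     USAGE POINTER.
        01  FC          PIC X(12).                     *> feedback code
        LINKAGE SECTION.
        01  WORK-AREA   PIC X(1000).

        PROCEDURE DIVISION.
            MOVE 1000 TO STG-SIZE
            CALL "CEEGTST" USING HEAP-ID, STG-SIZE, STG-PTR, FC
            SET ADDRESS OF WORK-AREA TO STG-PTR
            ...
            CALL "CEEFRST" USING STG-PTR, FC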

  • Foreign Keys Duplicated in DataGridView

    - by John Doe
    I created a Windows Forms application to which I added a DataGridView and LINQ to SQL classes from one of my databases. I can successfully bind one of my database's tables to my DataGridView:

        var dataSource = from c in _db.NetworkedEquipments
                         select c;
        dataGridView1.DataSource = dataSource;

    However, the foreign keys get duplicated; that is, those columns appear twice. How can I prevent this?
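
    Not part of the original question; the auto-generated grid columns include both the raw foreign-key fields and the LINQ to SQL association properties, which is where the apparent duplication comes from. One sketch is to bind a projection of just the columns to display (the property names below are illustrative):

        // bind an explicit projection so only the chosen columns are generated
        var dataSource = (from c in _db.NetworkedEquipments
                          select new { c.EquipmentId, c.Name, c.LocationId })
                         .ToList();
        dataGridView1.DataSource = dataSource;

    The alternative is to keep binding the entities and hide the unwanted columns by hand via dataGridView1.Columns["..."].Visible = false.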

  • How to update a DataSet from a DataGridView in C#

    - by Paul
    I am a beginner and I have this problem: how can I update a DataSet from a DataGridView? I bind the DataSet to the grid and edit it there; at the end I want the DataSet to reflect the edits made in the DataGridView. Thank you for any advice. Sorry - I use WinForms. Example:

        // I bind the dataset to the datagridview
        dataGridViewCustomers.DataSource = _ds.Tables[0];
        // ...the user edits the datagridview...
        // at this point I want to update the dataset from the datagridview
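
    Not from the original thread; because the grid is bound directly to _ds.Tables[0], edits already land in the DataSet as the user leaves each row. What remains is flushing the edit still in progress (and, if the data should reach a database, pushing it through a data adapter, which is assumed here):

        // commit the cell and row currently being edited into the DataSet
        dataGridViewCustomers.EndEdit();
        BindingContext[_ds.Tables[0]].EndCurrentEdit();

        // the DataSet now matches the grid; to persist to the database,
        // a configured adapter (hypothetical _adapter) would be used:
        // _adapter.Update(_ds.Tables[0]);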
