Search Results

Search found 10101 results on 405 pages for 'temporary tables'.


  • Modifying records in my migration throws an authlogic error

    - by nfm
    I'm adding some columns to one of my database tables, and then populating those columns:

        def self.up
          add_column :contacts, :business_id, :integer
          add_column :contacts, :business_type, :string

          Contact.reset_column_information
          Contact.all.each do |contact|
            contact.update_attributes(:business_id => contact.client_id, :business_type => 'Client')
          end

          remove_column :contacts, :client_id
        end

    The contact.update_attributes line raises the following Authlogic error:

        You must activate the Authlogic::Session::Base.controller with a controller object before creating objects

    I have no idea what is going on here -- I'm not using a controller method to modify each row in the table, nor am I creating new objects. The error doesn't occur if the contacts table is empty. I've had a google, and it seems this error can occur when you run controller tests and is fixed by adding before_filter :activate_authlogic to them, but that doesn't seem relevant in my case. Any ideas? I'm stumped.
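
    One workaround sketch, assuming the error comes from Authlogic's model hooks firing as each record is saved: Contact.update_all issues a single UPDATE in plain SQL, so no ActiveRecord callbacks (Authlogic's included) run during the migration.

        def self.up
          add_column :contacts, :business_id, :integer
          add_column :contacts, :business_type, :string
          Contact.reset_column_information

          # update_all bypasses model callbacks entirely, so Authlogic
          # never goes looking for a controller during the migration.
          Contact.update_all("business_id = client_id, business_type = 'Client'")

          remove_column :contacts, :client_id
        end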

  • Converting DAO to ADO

    - by webworm
    I am working with an Access 2003 database that has a subroutine using DAO code. This code loops through the table definitions and refreshes the ODBC connection string. I would like to convert this to ADO so I do not have to reference the DAO object library. Here is the code:

        Public Sub RefreshODBCLinks(newConnectionString As String)
            Dim db As DAO.Database
            Dim tb As DAO.TableDef

            Set db = CurrentDb
            For Each tb In db.TableDefs
                If Left(tb.Connect, 4) = "ODBC" Then
                    tb.Connect = newConnectionString
                    tb.RefreshLink
                    Debug.Print "Refreshed ODBC table " & tb.Name
                End If
            Next tb
            Set db = Nothing

            MsgBox "New connection string is " & newConnectionString, vbOKOnly, "ODBC Links refreshed"
        End Sub

    The part I am unsure of is how to loop through the tables and get/set their connection strings.
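
    One possible translation, sketched with ADOX (the ADO extensions library) rather than plain ADO. The "PASS-THROUGH" table type and the 'Jet OLEDB:Link Provider String' property name are assumptions about how Jet exposes ODBC links through ADOX, so verify them against your database:

        Public Sub RefreshODBCLinksADO(newConnectionString As String)
            ' Requires references to ADO (ADODB) and ADOX (Microsoft ADO Ext.).
            Dim cat As ADOX.Catalog
            Dim tbl As ADOX.Table

            Set cat = New ADOX.Catalog
            Set cat.ActiveConnection = CurrentProject.Connection

            For Each tbl In cat.Tables
                ' ODBC-linked Jet tables are reported with type "PASS-THROUGH".
                If tbl.Type = "PASS-THROUGH" Then
                    tbl.Properties("Jet OLEDB:Link Provider String") = newConnectionString
                    Debug.Print "Refreshed ODBC table " & tbl.Name
                End If
            Next tbl

            Set cat = Nothing
        End Sub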

  • Interview question - C#

    - by ltech
    I was tasked to conduct my first interview and would like to pose my question to this world, both for feedback on the question and for your solutions.

    Question: I have a legacy system with users and files; the info for all files pertaining to a user is stored in a flat file. I want to upgrade this system by storing all info in a db, design the tables, and create a C# system that will populate the new db as well as FTP the files to a new path. Define the design considerations and develop a prototype.

    Note: We are looking more for what design one would use and why, rather than code that compiles. If it does compile, then kudos to you and we will give it more weight.

    @Tim C, I did show the interviewee the file:

        User1234.txt
        UserID=1234
        ParentPath=\\somewhere\nowehere\everywhere\1234
        FileCount=20
        File0=something0.ext
        ..
        File19=something19.ext

    @Tim C, I have never conducted an interview, and I followed a script given to me by my senior developer, who was absent.
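
    As one starting point for such a prototype, a hedged C# sketch of loading one legacy KEY=VALUE file before inserting it into the new tables -- field names come from the example above, while the path and everything else are illustrative:

        using System;
        using System.IO;
        using System.Linq;

        class LegacyUserFile
        {
            static void Main()
            {
                // Illustrative path; the real share layout is unknown.
                var fields = File.ReadAllLines(@"\\legacy\users\User1234.txt")
                                 .Select(line => line.Split(new[] { '=' }, 2))
                                 .Where(parts => parts.Length == 2)
                                 .ToDictionary(p => p[0], p => p[1]);

                int userId = int.Parse(fields["UserID"]);
                string parentPath = fields["ParentPath"];
                int fileCount = int.Parse(fields["FileCount"]);

                Console.WriteLine("{0}: {1} files under {2}", userId, fileCount, parentPath);
            }
        }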

  • Weighted Average with LINQ

    - by jsmith
    My goal is to get a weighted average from one table, based on another table's primary key. Example data:

        Table1
        Key    WEIGHTED_AVERAGE
        0200   0

        Table2
        ForeignKey   LENGTH   PCR
        0200         105      52
        0200         105      60
        0200         105      54
        0200         105      -1
        0200         47       55

    I need to get a weighted average based on the length of a segment, and I need to ignore values of -1. I know how to do this in SQL, but my goal is to do it in LINQ. It looks something like this in SQL:

        SELECT Sum(t2.PCR * t2.LENGTH) / Sum(t2.LENGTH) AS WEIGHTED_AVERAGE
        FROM Table1 t1, Table2 t2
        WHERE t2.PCR <> -1
          AND t2.ForeignKey = t1.Key;

    I am still pretty new to LINQ and having a hard time figuring out how I would translate this. The resulting weighted average should come out to roughly 55.3. Thank you.
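
    A hedged sketch of the translation, assuming collections table1 and table2 whose property names mirror the columns above:

        var results =
            from t1 in table1
            join t2 in table2 on t1.Key equals t2.ForeignKey
            where t2.PCR != -1
            group t2 by t1.Key into g
            select new
            {
                Key = g.Key,
                WeightedAverage = (double)g.Sum(x => x.PCR * x.Length)
                                        / g.Sum(x => x.Length)
            };

    For the sample rows this works out to (105*52 + 105*60 + 105*54 + 47*55) / (105 + 105 + 105 + 47), which is about 55.3, matching the expected result.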

  • Good piece of software that can manage the creation of complex web forms, including reporting, etc?

    - by Callum
    I have some clients who are asking for some of their reasonably complex paper-based forms to be converted into web forms. There's straight Q&A text input, there are questions based around checkboxes, radio buttons, and select boxes, the occasional attached image, data that has to be entered in a tabular fashion, etc. I am deciding whether I should build a "platform" with properly normalised tables to store all types of form data. But before that, I thought I had better check whether anything like that is already on the market. I am looking for a product that can:

        * Easily create web forms of all types
        * Store all data in a database
        * Provide extensive reporting capability

    I have had a bit of a look around but there's not a whole lot I can see. Does anyone have any suggestions? Thanks.

  • Is there an application I can use to show a client the speed of their site via differing internet connections?

    - by Slimpickens
    I have a new client with an old website that uses massive amounts of tables and inline styles. Some pages weigh over 3MB due to huge backgrounds and other large images. I suggested they at least compress the files to speed up the site, but they're convinced it's fast: since they have a 13Mbit connection at the office, everything seems fast. I'm looking for a tweak, program, or web app capable of loading pages at varying speeds, so they understand how the site might load for slower DSL or (gasp) dialup users. Thanks for any help!

  • Will SQL Server Partitioning increase performance without changing filegroups

    - by Tom
    Scenario: I have a 10 million row table. I partition it into 10 partitions, which results in 1 million rows per partition, but I do not do anything else (like move the partitions to different filegroups or spindles).

    Will I see a performance increase? Is this in effect like creating 10 smaller tables? If I have queries that perform key lookups or scans, will performance increase as if they were operating against a much smaller table?

    I'm trying to understand how partitioning is different from just having a well-indexed table, and where it can be used to improve performance. Would a better scenario be to move the old data (using partition switching) out of the primary table to a read-only archive table? Is having a table with a 1 million row partition and a 9 million row partition analogous (performance-wise) to moving the 9 million rows to another table and leaving only 1 million rows in the original table?
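
    For reference, the setup being described -- every partition on the same filegroup -- looks roughly like this hedged T-SQL sketch (function name, scheme name, and boundary values are illustrative):

        -- Ten partitions that all live on PRIMARY: partitioning
        -- without moving any data to other filegroups or spindles.
        CREATE PARTITION FUNCTION pfTenWay (int)
        AS RANGE LEFT FOR VALUES (1000000, 2000000, 3000000, 4000000, 5000000,
                                  6000000, 7000000, 8000000, 9000000);

        CREATE PARTITION SCHEME psTenWay
        AS PARTITION pfTenWay ALL TO ([PRIMARY]);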

  • Two models, one STI and a Validation

    - by keruilin
    Let's say I have two tables -- products and orders. For the sake of simplicity, assume that only one product can be purchased at a time, so there is no join table like order_items. The relationship is that Product has many orders and Order belongs to product; therefore product_id is a FK in the orders table. The products table uses STI, with the subclasses being A, B, and C. When the user orders a product of subclass C, two special validations must be checked on the Order model fields order_details and order_status. These two fields can be nil for all other Product subclasses (i.e., A and B); in other words, no validation needs to run for these two fields when a user purchases A or B. My question is: how do I write validations (perhaps custom?) in the Order model so that it only runs the validations for order_details and order_status when a product of subclass C is being saved to the orders table?
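
    A minimal sketch of conditional validation, assuming the subclass names from the question; the :if option gates the checks so they only fire when the associated product is a C:

        class Order < ActiveRecord::Base
          belongs_to :product

          # Required only when the ordered product is of subclass C.
          validates_presence_of :order_details, :order_status,
                                :if => :product_requires_details?

          private

          def product_requires_details?
            product.is_a?(C)
          end
        end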

  • Synchronising local and remote DB

    - by nico
    Hi everyone, I have a general question about DB synchronisation. I'm developing a website locally (PHP + MySQL) and I would like to be able to synchronise at least the structure (and maybe the contents) of the two DBs when one of them is changed (normally I would change the local copy). Right now what I'm doing is to use mysqldump to dump the modified tables and then import them into the remote DB, or do it by hand if the changes are minimal. However, I find this tedious and error-prone. For the PHP I'm currently using Quanta+, which has the handy feature of finding files that have changed and uploading just those. Is there something similar for MySQL? Otherwise, how do you keep your DBs synchronised?

    Thanks, nico

    PS: I'm sorry if this was already asked; I saw other questions that deal with similar topics but couldn't really find an answer.
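
    One low-tech stopgap, sketched under the assumption that only structural drift matters (it won't catch data differences): dump schema only from both sides and diff the files.

        # --no-data dumps CREATE statements only; diffing the two dumps
        # shows where local and remote structure have diverged.
        mysqldump --no-data -u user -p local_db > local_schema.sql
        mysqldump --no-data -h remote.example.com -u user -p remote_db > remote_schema.sql
        diff local_schema.sql remote_schema.sql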

  • Nhibernate: one-to-many, based on multiple keys?

    - by e36M3
    Let's assume I have two tables:

        Table tA: ID, ID2, SomeColumns
        Table tB: ID, ID2, SomeOtherColumns

    I am looking to create an object, let's call it ObjectA (based on tA), that will have a one-to-many relationship to ObjectB (based on tB). In my example, however, I need to use the combination of ID and ID2 as the foreign key. If I were writing SQL it would look like this:

        select tB.*
        from tA, tB
        where tA.ID = tB.ID
          and tA.ID2 = tB.ID2;

    I know that for each ID/ID2 combination in tA I should have many rows in tB, therefore I know it's a one-to-many combination. Clearly the set below is not sufficient for such a mapping, as it only takes one key into account:

        <set name="A2" table="A2" generic="true" inverse="true">
          <key column="ID" />
          <one-to-many class="A2" />
        </set>

    Thanks!
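
    A hedged sketch of one direction to try: NHibernate's <key> element accepts multiple <column> children, so a composite foreign key can be listed explicitly. The collection and class names here are illustrative, and the classes involved would need matching composite identifiers:

        <set name="Bs" inverse="true">
          <key>
            <column name="ID" />
            <column name="ID2" />
          </key>
          <one-to-many class="ObjectB" />
        </set>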

  • How to check if a Statistics is auto-created in a SQL Server 2000 DB using T-SQL?

    - by The Shaper
    Hi all. A while back I had to come up with a way to clean up all indexes and user-created statistics from some tables in a SQL Server 2005 database. After a few attempts it worked, but now I've got to have it working in SQL Server 2000 databases as well. For SQL Server 2005, I used

        SELECT Name
        FROM sys.stats
        WHERE object_id = OBJECT_ID(@tableName)
          AND auto_created = 0

    to fetch statistics that were user-created. However, SQL 2000 doesn't have a sys.stats view. I managed to fetch the indexes and statistics in a distinguishable way from the sysindexes table, but I just couldn't figure out what the SQL 2000 equivalent of sys.stats.auto_created is. Any pointers? BTW: T-SQL please.
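
    A hedged T-SQL sketch for SQL 2000 built on INDEXPROPERTY -- verify the property names against your server, as they are taken from the SQL 2000 documentation: 'IsStatistics' flags sysindexes rows that are statistics rather than indexes, and 'IsAutoStatistics' flags the auto-created ones.

        SELECT name
        FROM sysindexes
        WHERE id = OBJECT_ID(@tableName)
          AND INDEXPROPERTY(id, name, 'IsStatistics') = 1
          AND INDEXPROPERTY(id, name, 'IsAutoStatistics') = 0

    As a sanity check, auto-created statistics typically have names starting with _WA_Sys_.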

  • How to set width of textbox to be same as MaxLength in ASP.NET

    - by John Galt
    Is there a way I can limit the width of a textbox to be equal to the MaxLength property? In my case right now, the textbox control is placed in a table cell, as in this snippet:

        <td class="TDSmallerBold" style="width:90%">
            <asp:textbox id="txtTitle" runat="server"
                CausesValidation="true"
                Text="Type a title here..be concise but descriptive. Include price."
                ToolTip="Describe the item with a short pithy title (most important keywords first). Include the price. The title you provide here will be the primary text found by search engines that index Zipeee content."
                MaxLength="60"
                Width="100%">
            </asp:textbox>
        </td>

    (I know I should not use HTML tables to control layout -- another subject for sure.) But is there a way to constrain the actual width to the max number of characters allowed in the typed box?
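
    One hedged approach: the TextBox.Columns property renders as the HTML size attribute, which sizes the box in characters rather than pixels. A sketch (drop the Width="100%" so it doesn't override the character sizing):

        <asp:TextBox id="txtTitle" runat="server"
            MaxLength="60" Columns="60" />

    or, in code-behind:

        txtTitle.Columns = txtTitle.MaxLength;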

  • Using memtables in SQL: when is it reasonable and is it safe?

    - by Spiros
    I was just reading an update from a friend's project mentioning the use of memtables to store data temporarily and then flush it to a table on disk. Up to now I have never faced a situation where I would use a memtable, or where I would think its use would be beneficial, so I wonder: when would someone use memtables? What makes a memtable (apart from access speed) a reasonable choice? And how safe is it, even for temp data? There is always the limitation of available physical memory.
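
    For concreteness, a hedged MySQL illustration (table name and columns invented): the MEMORY engine keeps a table entirely in RAM, so it is fast, but its contents vanish on a server restart -- which is why only disposable, reconstructible data belongs there.

        CREATE TABLE session_scratch (
            session_id INT NOT NULL,
            payload    VARCHAR(255)
        ) ENGINE = MEMORY;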

  • A database of questions with unambiguous numeric answers.

    - by dreeves
    I (and co-hackers) are building a sort of trivia game inspired by this blog post: http://messymatters.com/calibration. The idea is to give confidence intervals and learn how to be calibrated (when you're "90% sure" you should be right 90% of the time). We're thus looking for, ideally, thousands of questions with unambiguous numerical answers. Also, they shouldn't be too boring. There are a lot of random statistics out there -- e.g., enclosed water area in different countries -- that would make the game mind-numbing. Things like release dates of classic movies are more interesting (to most people). Other interesting ones we've found include Olympic records, median incomes for different professions, dates of famous inventions, and celebrity ages. Scraping things like the above, by the way, was my reason for asking this question: http://stackoverflow.com/questions/2611418/scrape-html-tables. So, if you know of other sources of interesting numerical facts (in a parsable form), I'm eager for pointers to them. Thanks!

  • Asus M4A79XTD-EVO / AMD Phenom II X4 965 Crashes / BSOD / Hangs / Restarts

    - by Tiby
    I'll try to be as concise as possible because I have a lot to say about my problem, but I'd rather say it when asked or when I feel it's necessary, just to make this initial reading clearer. For about a year and a half I have had periods when my system shows all the problems in the title (I'll use the word 'crash' for any of them). Some patterns, and what I tried, non-exhaustively:

        - It usually crashes when a CPU-intensive operation is in progress, like a game, video encoding, or HD movie rendering, but sometimes it also crashes when I'm doing nothing.
        - After a first crash the system is very unstable, and sometimes it crashes even during POST, or doesn't boot at all.

    Some months ago I went to a local service (one where you just put your computer on the table and sit there with a guy trying to figure out the problem -- very rare these days) and they used OCCT. It crashed every time he changed some part to test it (PSU, RAM, video card, HDD). The last one was the CPU: they changed the CPU and it didn't crash any more. Then when they put my CPU back, it also didn't crash. We figured the trouble was the thermal paste (probably some 2 years old), because it was the only thing changed while testing. Up until 2 weeks ago, I haven't had any more problems.

    2 weeks ago the problems reappeared. I changed the thermal paste again, put on some Arctic Silver 5, and for about a week everything worked perfectly (tried some games and video encoding -- no more crashes). But then it started crashing again in the same fashion as the first time. This time, though, I noticed some very odd behaviour:

        - When I start some of the apps above, in most cases it crashes.
        - If I start OCCT, turn on the CPU test, and then run any of the programs above, it doesn't crash, even with the CPU at 100% load (and 65-70 degrees Celsius).
        - If I shut down OCCT and continue using the programs, it crashes within a very short period of time (even with the CPU at 5-10% load and 40 degrees).

    There are so many patterns and temporary solutions that I've found over this year and a half that I can't include them all, because I don't know which ones are most relevant, but I'll happily provide any details you ask for. My system is:

        CPU: AMD Phenom II X4 965 (3400 MHz - 125W)
        MB: ASUS M4A79XTD-EVO
        RAM: Corsair Vengeance 8 GB (2 x 4GB) CL8 1600MHz
        Video: HIS Radeon HD5770 1GB
        PSU: Corsair 750W
        HDD: Western Digital 1TB
        OS: Win 7 Enterprise 64-bit (also tried Windows Server 2008 R2 Trial and Win XP)

  • How to force grails GORM to respect DB scheme ?

    - by fabien-barbier
    I have two domains:

        class CodeSet {
            String id
            String owner
            String comments
            String geneRLF
            String systemAPF

            static hasMany = [cartridges: Cartridge]

            static constraints = {
                id(unique: true, blank: false)
            }

            static mapping = {
                table 'code_set'
                version false
                columns {
                    id column: 'code_set_id', generator: 'assigned'
                    owner column: 'owner'
                    comments column: 'comments'
                    geneRLF column: 'gene_rlf'
                    systemAPF column: 'system_apf'
                }
            }
        }

    and:

        class Cartridge {
            String id
            String code_set_id
            Date runDate

            static belongsTo = CodeSet

            static constraints = {
                id(unique: true, blank: false)
            }

            static mapping = {
                table 'cartridge'
                version false
                columns {
                    id column: 'cartridge_id', generator: 'assigned'
                    code_set_id column: 'code_set_id'
                    runDate column: 'run_date'
                }
            }
        }

    With these models I get tables code_set and cartridge, but also a join table code_set_cartridge (with two fields: code_set_cartridges_id and cartridge_id). I would like to avoid the code_set_cartridge table but keep the relationship code_set -- 1:n -- cartridge. In other words, how can I keep the association between code_set and cartridge without an intermediate table, using code_set_id as the primary key in code_set and as a foreign key in cartridge? Can the mapping be done in GORM without an intermediate table?
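
    A hedged sketch of the usual fix, assuming the class names above: GORM generates a join table when belongsTo only names the class. Making the back-reference a typed property -- belongsTo = [codeSet: CodeSet] -- tells GORM to store the foreign key on cartridge instead:

        class Cartridge {
            String id
            Date runDate

            // Typed back-reference: GORM maps this as a code_set_id
            // foreign key column on cartridge, with no join table.
            static belongsTo = [codeSet: CodeSet]

            static constraints = {
                id(unique: true, blank: false)
            }

            static mapping = {
                table 'cartridge'
                version false
                id column: 'cartridge_id', generator: 'assigned'
                codeSet column: 'code_set_id'
                runDate column: 'run_date'
            }
        }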

  • Hash Table: Should I increase the element count on collisions?

    - by Nazgulled
    Hi. Right now my hash tables count every element inserted into the table. I use this count, together with the total table size, to calculate the load factor, and when it reaches about 70% I rehash. I was thinking that maybe I should only count inserted elements that fill an empty slot, instead of all of them, because the collision method I'm using is separate chaining: the load factor keeps increasing, but a few collisions can leave lots of empty slots in the hash table. You are probably thinking that if I have that many collisions, maybe I'm not using the best hashing method. But that's not the point -- I'm using one of the known hashing algorithms out there; I tested 3 of them on my sample data and selected the one that produced the fewest collisions. My question still remains: should I keep counting every element inserted, or just the ones that fill an empty slot in the hash table?
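
    The two candidate metrics, as a hedged C# sketch (all names and numbers invented) -- the first is the conventional load factor for separate chaining, the second only measures slot occupancy:

        class LoadFactorDemo
        {
            static void Main()
            {
                int bucketCount = 128, elementCount = 90, usedBuckets = 60;

                // Every insert counts: tracks average chain length,
                // the usual trigger for rehashing a chained table.
                double loadFactor = (double)elementCount / bucketCount;

                // Only inserts that filled an empty slot: tracks occupancy.
                double occupancy = (double)usedBuckets / bucketCount;

                System.Console.WriteLine("{0:F2} vs {1:F2}", loadFactor, occupancy);
            }
        }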

  • Linq query with subquery as comma-separated values

    - by Keith
    In my application, a company can have many employees, and each employee may have multiple email addresses. The database schema relates the tables like this:

        Company -> CompanyEmployeeXref -> Employee -> EmployeeAddressXref -> Email

    I am using Entity Framework and I want to create a LINQ query that returns the name of each company and a comma-separated list of its employees' email addresses. Here is the query I am attempting:

        from c in Company
        join ex in CompanyEmployeeXref on c.Id equals ex.CompanyId
        join e in Employee on ex.EmployeeId equals e.Id
        join ax in EmployeeAddressXref on e.Id equals ax.EmployeeId
        join a in Address on ax.AddressId equals a.Id
        select new { c.Name, a.Email.Aggregate(x => x + ",") }

    Desired output:

        "Company1", "[email protected],[email protected],[email protected]"
        "Company2", "[email protected],[email protected],[email protected]"
        ...

    I know this code is wrong -- I think I'm missing a group by, but it illustrates the point; I'm not sure of the syntax. Is this even possible? Thanks for any help.
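
    A hedged sketch of one way to finish it, reusing the entity names from the question: group in the store query, then concatenate in memory, since string joining generally can't be translated to SQL by Entity Framework:

        var pairs = (from c in Company
                     join ex in CompanyEmployeeXref on c.Id equals ex.CompanyId
                     join e in Employee on ex.EmployeeId equals e.Id
                     join ax in EmployeeAddressXref on e.Id equals ax.EmployeeId
                     join a in Address on ax.AddressId equals a.Id
                     select new { c.Name, a.Email })
                    .AsEnumerable();  // switch to LINQ to Objects here

        var result = pairs
            .GroupBy(p => p.Name)
            .Select(g => new
            {
                Name = g.Key,
                Emails = string.Join(",", g.Select(p => p.Email).ToArray())
            });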

  • ASP.Net MVC elegant UI and ModelBinder authorization

    - by SDReyes
    We know authorization is a cross-cutting concern, and we do everything we can to avoid mixing business logic into our views. But I still haven't found an elegant way to filter UI components (e.g. widgets, form elements, tables, etc.) by the current user's roles without contaminating the view with business logic. The same applies to model binding. Example:

        Form: Product Creation
        Fields: Name, Price, Discount

        Role: Administrator
            - Is allowed to see and modify the Name field
            - Is allowed to see and modify the Price field
            - Is allowed to see and modify the Discount field

        Role: Administrator assistant
            - Is allowed to see and modify the Name field
            - Is allowed to see and modify the Price field

    The fields shown to each role are different, and model binding needs to ignore the Discount field for the 'Administrator assistant' role. How would you do it?
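
    One hedged sketch of a direction (all names invented): centralize the role-to-field policy in one place, and let both the view and a custom model binder consult it, so neither hard-codes business rules:

        using System.Collections.Generic;
        using System.Linq;
        using System.Security.Principal;

        public static class ProductFieldPolicy
        {
            private static readonly Dictionary<string, string[]> Editable =
                new Dictionary<string, string[]>
                {
                    { "Administrator",          new[] { "Name", "Price", "Discount" } },
                    { "AdministratorAssistant", new[] { "Name", "Price" } },
                };

            public static bool CanEdit(IPrincipal user, string field)
            {
                return Editable.Any(kv => user.IsInRole(kv.Key)
                                          && kv.Value.Contains(field));
            }
        }

    The view asks ProductFieldPolicy.CanEdit before rendering an input, and a custom model binder skips any property the policy denies, keeping the rules out of both.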

  • What 'best practices' exist for handling enum hierarchies?

    - by FerretallicA
    I'm curious about any solutions out there for addressing enum hierarchies. I'm working through some docs on Entity Framework 4 and trying to apply it to a simple inventory tracking program. The possible inventory item types are as follows:

        - Hardware
            - PC
                - Desktop
                - Server
                - Laptop
            - Accessory
                - Input (keyboards, scanners, etc.)
                - Output (monitors, printers, etc.)
                - Storage (USB sticks, tape drives, etc.)
                - Communication (network cards, routers, etc.)
        - Software

    What recommendations are there for handling enums in a situation like this? Are enums even the solution? I don't really want a ridiculously normalised database for such a relatively simple experiment (e.g. tables for InventoryType, InventorySubtype, InventoryTypeToSubtype, etc.). Nor do I want to over-complicate my data model by having each subtype inherited, even though no additional properties or methods are involved (except for PC types, which would ideally have associated accessories and software, but that's probably out of scope here). It feels like there should be a really simple, elegant solution to this, but I can't put my finger on it. Any assistance or input appreciated!
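
    One lightweight option, as a hedged C# sketch (values and helpers invented): encode the hierarchy in numeric ranges of a single enum, so no extra tables or class inheritance are needed:

        public enum InventoryItemType
        {
            // 1xx: Hardware / PC
            Desktop = 101, Server = 102, Laptop = 103,
            // 2xx: Hardware / Accessory
            Input = 201, Output = 202, Storage = 203, Communication = 204,
            // 3xx: Software
            Software = 301,
        }

        public static class InventoryItemTypeExtensions
        {
            public static bool IsHardware(this InventoryItemType t)
            {
                return (int)t < 300;
            }

            public static bool IsAccessory(this InventoryItemType t)
            {
                return (int)t >= 200 && (int)t < 300;
            }
        }

    The trade-off is that the grouping lives in a naming/numbering convention rather than the type system, which is fine for an experiment but brittle at scale.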

  • SQLce Select query problem

    - by DieHard
    I wrote a truck show contest voting app (voting, financials, etc.) using SQLite, then decided to write a backup app for show day using SQL CE 3.5. I created the db, moved it to the data directory, created the tables, and configured the DataGridViews; all is well. I entered some test data, started Management Studio 2008, ran a SELECT query against the table, and got null results. I started the app from Visual Studio and found that the test data was gone. I re-entered the data, ran the query in Management Studio, and the data was gone again. If I use Visual Studio I can start the app and enter data, close and restart, and the data is still there; the data seems to disappear only when an outside tool runs a SELECT query. I don't know CE that well, but this cannot be right. select * from votes = delete * from votes??

  • Optimal order of joins for a left join

    - by Ram
    I have 3 tables: Table1 (1,020,690 records), Table2 (289,425 records), and Table3 (83,692 records). I have something like this:

        SELECT *  /* OK, fine, select * is bad when not all columns are needed; this is just an example */
        FROM Table1 T1
        LEFT JOIN Table2 T2 ON T1.id = T2.id
        LEFT JOIN Table3 T3 ON T1.id = T3.id

    and a query like this:

        SELECT *
        FROM Table1 T1
        LEFT JOIN Table3 T3 ON T1.id = T3.id
        LEFT JOIN Table2 T2 ON T1.id = T2.id

    The query plan shows me that both joins use a Merge Join. For the first query, the first merge is between T1 and T2, then with T3. For the second query, the first merge is between T1 and T3, then with T2. Both queries take about the same time (roughly 40 seconds), though sometimes Query1 takes a couple of seconds longer. So my question is: does the join order matter?

  • NoSql Crash Course/Tutorial

    - by Chris Thompson
    Hi all, I've seen NoSQL pop up quite a bit on SO and I have a solid understanding of why you would use it (from here, Wikipedia, etc). This could be due to the lack of concrete and uniform definition of what it is (more of a paradigm than concrete implementation), but I'm struggling to wrap my head around how I would go about designing a system that would use it or how I would implement it in my system. I'm really stuck in a relational-db mindset thinking of things in terms of tables and joins... At any rate, does anybody know of a crash course/tutorial on a system that would use it (kind of a "hello world" for a NoSQL-based system) or a tutorial that takes an existing "Hello World" app based on SQL and converts it to NoSQL (not necessarily in code, but just a high-level explanation). I see this having one solid answer, but if you guys feel like it should be community wiki, I'll be happy to change it. Thanks! Chris

  • Script or automate feature class creation in ESRI/ArcSDE

    - by Keith G
    I'm looking for info on how to write SQL scripts to automate the creation of a versioned feature class in ArcSDE. I want to be able to automate the process itself as well as put the scripts under version control. Can anyone point me to a resource that explains how to do this? Is it even possible? It seems like there are lots of interrelationships between tables and data when a feature class is added.

    P.S. It doesn't have to be pure SQL, but it should be some kind of scripting, so we can save it to version control and run it outside of the ESRI desktop tools.

  • Query to return internal details about stored function in SQL Server database

    - by Anthony
    I have been given access to a SQL Server database that is currently used by a 3rd-party app. As such, I don't have any documentation on how that application stores or retrieves its data. I can figure a few things out based on the names of various tables and the parameters that the user-defined functions take and return, but I'm still getting errors at every other turn. It would be really helpful if I could see what the stored functions do with the parameters they are given to produce their output. Right now all I've been able to figure out is how to query for the input parameters and the output columns. Is there any built-in INFORMATION_SCHEMA view that will expose what the function is doing between input and output?
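
    A hedged T-SQL sketch (the function name is invented): for non-encrypted user-defined functions, the body itself is exposed through the catalog:

        -- ROUTINE_DEFINITION is truncated to the first 4000 characters:
        SELECT ROUTINE_DEFINITION
        FROM INFORMATION_SCHEMA.ROUTINES
        WHERE ROUTINE_NAME = 'fnSomeFunction';

        -- Untruncated alternatives:
        SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.fnSomeFunction'));
        EXEC sp_helptext 'dbo.fnSomeFunction';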
