Search Results

Search found 42738 results on 1710 pages for 'document database'.


  • Best data structure for this relationship...

    - by Travis
    I have a question about database 'style'. I need a method of storing user accounts. Some users "own" other user accounts (sub-accounts). However, not all user accounts are owned, just some. Is it best to represent this using a table structure like so:

        TABLE accounts (
            ID
            ownerID -> ID
            name
        )

    ...even though there will be some NULL values in the ownerID column for accounts that do not have an owner? Or would it be stylistically preferable to have two tables, like so:

        TABLE accounts (
            ID
            name
        )
        TABLE ownedAccounts (
            accountID -> accounts(ID)
            ownerID -> accounts(ID)
        )

    Thanks for the advice.
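
    A minimal sketch of the two options in SQL (MySQL syntax assumed; column types are illustrative, not from the original question):

        -- Option 1: single table with a nullable, self-referencing owner column
        CREATE TABLE accounts (
            ID      INT PRIMARY KEY AUTO_INCREMENT,
            ownerID INT NULL,                     -- NULL means the account has no owner
            name    VARCHAR(100) NOT NULL,
            FOREIGN KEY (ownerID) REFERENCES accounts (ID)
        );

        -- Option 2: keep accounts free of ownership (drop ownerID above) and
        -- factor ownership into its own table
        CREATE TABLE ownedAccounts (
            accountID INT PRIMARY KEY,            -- each account has at most one owner
            ownerID   INT NOT NULL,
            FOREIGN KEY (accountID) REFERENCES accounts (ID),
            FOREIGN KEY (ownerID)   REFERENCES accounts (ID)
        );

    Either shape can answer "which accounts does user X own?" with a single indexed lookup; the separate table mainly buys the absence of NULLs at the cost of an extra join.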

    Read the article

  • Unique constraint on more than 10 columns

    - by tk
    I have a time-series simulation model which has more than 10 input variables. The number of distinct simulation instances would be more than 1 million, and each simulation instance generates a few output rows every day. To save the simulation results in a relational database, I designed tables like this:

        Table SimulationModel { simul_id : integer (primary key), input0 : string or numeric, input1 : string or numeric, ... }
        Table SimulationOutput { dt : DateTime (primary key), simul_id : integer (primary key), output0 : numeric, ... }

    My question is: is it fine to put a unique constraint on all of the input columns of the SimulationModel table? If it is not a good idea, what other options do I have to make sure each model is unique?
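
    A hedged sketch of the two usual approaches (generic/MySQL-style syntax; column names and types are illustrative): a composite UNIQUE constraint directly over the inputs, or a stored hash of the inputs with a UNIQUE index on the hash, which avoids hitting index key-length limits when there are many or wide columns.

        -- Approach 1: composite unique constraint over all input columns
        CREATE TABLE SimulationModel (
            simul_id INT PRIMARY KEY AUTO_INCREMENT,
            input0   VARCHAR(50)   NOT NULL,
            input1   DECIMAL(18,6) NOT NULL,
            input2   VARCHAR(50)   NOT NULL,
            -- ... remaining inputs ...
            CONSTRAINT uq_inputs UNIQUE (input0, input1, input2 /* , ... */)
        );

        -- Approach 2: unique index over a hash of all inputs, computed by the
        -- application (e.g. SHA-256 of the concatenated, delimited input values)
        CREATE TABLE SimulationModelHashed (
            simul_id   INT PRIMARY KEY AUTO_INCREMENT,
            input0     VARCHAR(50)   NOT NULL,
            input1     DECIMAL(18,6) NOT NULL,
            input_hash CHAR(64)      NOT NULL,
            CONSTRAINT uq_input_hash UNIQUE (input_hash)
        );

    Note that nullable input columns complicate Approach 1, since most engines treat NULLs as distinct for unique-constraint purposes.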

    Read the article

  • Should I Split Tables Relevant to X Module Into Different DB? Mysql

    - by Michael Robinson
    I've inherited a rather large and somewhat messy codebase, and have been tasked with making it faster, less noodly and generally better. Currently we use one big database to hold all data for all aspects of the site. As we need to plan for significant growth in the future, I'm considering splitting tables relevant to specific sections of the site into different databases, so if/when one gets too large for one server I can more easily migrate some user data to different mysql servers while retaining overall integrity. I would still need to use joins on some tables across the new databases. Is this a normal thing to do? Would I incur a performance hit because of this?

    Read the article

  • How should I build a simple database package for my python application?

    - by Carson Myers
    I'm building a database library for my application using sqlite3 as the base. I want to structure it like so:

        db/
            __init__.py
            users.py
            blah.py
            etc.py

    So I would do this in Python:

        import db
        db.users.create('username', 'password')

    I'm suffering analysis paralysis (oh no!) about how to handle the database connection. I don't really want to use classes in these modules; it doesn't really seem appropriate to be able to create a bunch of "users" objects that can all manipulate the same database in the same ways -- so inheriting a connection is a no-go. Should I have one global connection to the database that all the modules use, and then put this in each module:

        # users.py
        from db_stuff import connection

    Or should I create a new connection for each module and keep that alive? Or should I create a new connection for every transaction? How are these database connections supposed to be used? The same goes for cursor objects: do I create a new cursor for each transaction, or just one for each database connection?

    Read the article

  • How to find foreign-key dependencies pointing to one record in Oracle?

    - by daveslab
    Hi folks, I have a very large Oracle database, with many, many tables and millions of rows. I need to delete one of those rows, but I want to make sure that deleting it will not break any other dependent rows that point to it as a foreign key record. Is there a way to get a list of all the other records, or at least the table schemas, that point to this row? I know that I could just try to delete it myself and catch the exception, but I won't be running the script myself and need it to run clean the first time through. I have the tools SQL Developer from Oracle, and PL/SQL Developer from AllRoundAutomations, at my disposal. Thanks in advance!
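
    One way to get the list of referencing tables is to query Oracle's data dictionary views; a sketch (the table name 'MY_TABLE' is a placeholder):

        -- Every foreign key constraint that references a key on MY_TABLE
        SELECT fk.owner,
               fk.table_name      AS referencing_table,
               fk.constraint_name AS fk_constraint
        FROM   all_constraints fk
               JOIN all_constraints pk
                 ON  fk.r_owner           = pk.owner
                 AND fk.r_constraint_name = pk.constraint_name
        WHERE  fk.constraint_type = 'R'
        AND    pk.table_name = 'MY_TABLE';

    From that list you can then check each referencing table for rows whose foreign key column holds the ID of the row you intend to delete.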

    Read the article

  • Options for storing large text blobs in/with an SQL database?

    - by kdt
    Hi, I have some volumes of text (log files) which may be very large (up to gigabytes). They are associated with entities which I'm storing in a database, and I'm trying to figure out whether I should store them within the SQL database or in external files. It seems like in-database storage may be limited to 4GB for LONGTEXT fields in MySQL, and presumably other DBs have similar limits. Also, storing in the database presumably precludes any kind of seeking when viewing this data -- I'd have to load the full length of the data to render any part of it, right? So it seems like I'm leaning towards storing this data out of the DB. Are my misgivings about storing large blobs in the database valid, and if I'm going to store them out of the database, are there any frameworks/libraries to help with that? (I'm working in Python but am interested in technologies for other languages too.)

    Read the article

  • Content disappearing

    - by Koen_vdp
    Several times now, I've had a document with very large parts of the content simply disappearing. I have had Ubuntu (Precise Pangolin) for only a few days now; in all the years with Windows I never encountered this very unpleasant problem. Does anyone know a solution? Type of document: docx; which Ubuntu: Precise Pangolin LTS. In the meantime, I'm discovering more and more documents like that, with very large parts of the content just... vanished :-(( A very 'dangerous' situation :-(

    Read the article

  • Need a field / flag / status number for multiple use?

    - by Jules
    I want to create a field in my database which will be easy to query. I think if I give a bit of background this will make more sense. My table has listings shown on my website. I run a program which looks at the listings and decides whether to hide them from being shown on the site. I also hide listings manually for various reasons. I want to store these reasons in a field, so more than one reason could be recorded for hiding a listing. So I need some form of logic to determine which reasons have been used. Can anyone offer me any guidance on an approach that will be future-proof (i.e. new reasons can be added later) and quick and easy to query?
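
    Two common ways to store "one or more reasons per listing", sketched below with MySQL-style syntax (the listings table and the reason values are illustrative): a bitmask column, or a separate reasons table with one row per (listing, reason). The second is generally the more future-proof and index-friendly of the two.

        -- Assumed existing table (minimal sketch)
        CREATE TABLE listings (
            id           INT PRIMARY KEY AUTO_INCREMENT,
            title        VARCHAR(200) NOT NULL,
            hide_reasons INT UNSIGNED NOT NULL DEFAULT 0   -- Approach 1: bitmask column
        );

        -- Approach 1: each reason is a bit; values are illustrative:
        --   1 = hidden by program, 2 = expired, 4 = duplicate, 8 = hidden manually, ...
        -- Listings hidden because they are expired (bit 2 set):
        SELECT id FROM listings WHERE hide_reasons & 2 <> 0;

        -- Approach 2: one row per (listing, reason); new reasons are just new rows
        CREATE TABLE reasons (
            id   INT PRIMARY KEY,
            name VARCHAR(50) NOT NULL
        );

        CREATE TABLE listing_reasons (
            listing_id INT NOT NULL,
            reason_id  INT NOT NULL,
            PRIMARY KEY (listing_id, reason_id),
            FOREIGN KEY (listing_id) REFERENCES listings (id),
            FOREIGN KEY (reason_id)  REFERENCES reasons (id)
        );

        -- Listings hidden for a given reason:
        SELECT listing_id FROM listing_reasons WHERE reason_id = 2;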

    Read the article

  • How to design a DB like Facebook where users can update their own status and the status of an FB page as admin

    - by Harsha M V
    I am designing a database where users can update their own status messages, and can also create pages/groups (like a Facebook fan page) and post status updates as the admin of the page rather than as a user. My setup so far:

        user(id, name, ...)
        group(id, name, ...)
        group_admin(group_id, user_id)

    Is this the way to do it? How do I post under the group as an admin? Will I need to check, for every user, whether he is the admin or not?
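
    A hedged sketch of one way to extend that setup (MySQL syntax; the posts table and its columns are illustrative additions, not from the question): record each post with both its author and, when it is posted "as the page", the group it is published under, and check group_admin before accepting such a post.

        -- Table from the question, sketched minimally
        CREATE TABLE group_admin (
            group_id INT NOT NULL,
            user_id  INT NOT NULL,
            PRIMARY KEY (group_id, user_id)
        );

        -- Illustrative addition: one posts table for both kinds of status update
        CREATE TABLE posts (
            id         INT PRIMARY KEY AUTO_INCREMENT,
            user_id    INT NOT NULL,        -- who physically wrote the post
            group_id   INT NULL,            -- NULL = personal status; otherwise posted as this page
            body       TEXT NOT NULL,
            created_at DATETIME NOT NULL
        );

        -- Before accepting a post "as the page", check the user is one of its admins:
        SELECT 1
        FROM   group_admin
        WHERE  group_id = 42    -- the page being posted to
        AND    user_id  = 7;    -- the user attempting to post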

    Read the article

  • Three customer addresses in one table or in separate tables?

    - by DR
    In my application I have a Customer class and an Address class. The Customer class has three instances of the Address class: customerAddress, deliveryAddress, invoiceAddress. What's the best way to reflect this structure in a database? The straightforward way would be a customer table and a separate address table. A more denormalized way would be just a customer table with columns for every address (for example, for "street": customer_street, delivery_street, invoice_street). What are your experiences with that? Are there any advantages and disadvantages to these approaches?
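
    A minimal sketch of the normalized option, in two flavours (MySQL syntax assumed; names and types are illustrative): either three foreign keys on the customer row, or one row per (customer, address role).

        -- Shared address table
        CREATE TABLE address (
            id     INT PRIMARY KEY AUTO_INCREMENT,
            street VARCHAR(100),
            city   VARCHAR(100),
            zip    VARCHAR(20)
        );

        -- Option A: three foreign keys on the customer row
        CREATE TABLE customer (
            id                  INT PRIMARY KEY AUTO_INCREMENT,
            name                VARCHAR(100) NOT NULL,
            customer_address_id INT NULL,
            delivery_address_id INT NULL,
            invoice_address_id  INT NULL,
            FOREIGN KEY (customer_address_id) REFERENCES address (id),
            FOREIGN KEY (delivery_address_id) REFERENCES address (id),
            FOREIGN KEY (invoice_address_id)  REFERENCES address (id)
        );

        -- Option B: one row per (customer, address role); new roles need no schema change
        CREATE TABLE customer_address (
            customer_id INT NOT NULL,
            role        VARCHAR(20) NOT NULL,   -- 'customer', 'delivery', 'invoice'
            address_id  INT NOT NULL,
            PRIMARY KEY (customer_id, role),
            FOREIGN KEY (customer_id) REFERENCES customer (id),
            FOREIGN KEY (address_id)  REFERENCES address (id)
        );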

    Read the article

  • Cannot reach jQuery (in parent document) from IFRAME

    - by Michael Joyner
    I have written a backup program for SugarCRM. My program sets an iframe to src=BACKUP.PHP. My backup program sends updates to the parent window with:

        echo "<script type='text/javascript'>parent.document.getElementById('file_size').value='".fileSize2human(filesize($_SESSION['archive_file_name']))."';parent.document.getElementById('file_count').value=".$_SESSION['archive_file_count'].";parent.document.getElementById('description').innerHTML += '".$log_entry."\\r\\n';parent.document.getElementById('description').scrollTop = parent.document.getElementById('description').scrollHeight;</script>";
        echo str_repeat( ' ', 4096);
        flush();
        ob_flush();

    I have added a jQuery UI progress bar and I need to know how to update the progress bar on the parent window. I tried this:

        $percent_complete = $_SESSION['archive_file_count'] / $_SESSION['archive_total_files'];
        echo "<script type='text/javascript'>parent.document.jquery('#progressbar').animate_progressbar($percent_complete); </script>";

    ...and get this error in the browser:

        Uncaught TypeError: Object [object HTMLDocument] has no method 'jquery'

    How can I update the progress bar in the parent document from the iframe?

    Read the article

  • Microsoft Office document is "locked for editing by 'another user'"

    - by Chris
    A few of my users are in and out of various Excel 2007 spreadsheets all day. One of them reports that "50% of the time" when she tries to open a spreadsheet from the file server, an information message comes up stating: foo.xlsx is locked for editing by 'another user'. Open "Read-Only" or click "Notify" to open read-only and receive notification when the document is no longer in use. Nine times out of ten the document is not open by another user. My users immediately try to open the same document again, and it works. I imagine this is caused by Excel leaving owner files on the server, but I do not know why. An added clue: when one of my users selects "Notify," a dialog pops up a moment later informing them the file is available for them to edit. Any guidance on how to solve this issue and make my users' days flow better?

    Read the article

  • How to normalize a word document?

    - by AngryHacker
    I was too cheap to hire someone to retype a really, really long scanned document full of legalese, so I OCRed it using OmniPage. But the OCR output was kind of disappointing: I got a Word doc that has multiple line spacings, and the before and after paragraph heights are different all over the place. This would be easy if the entire document had the same paragraph settings, but it does not. There are probably a half dozen different styles going on. What is the easiest way to normalize the document? For instance, if one paragraph has a line spacing of 20.4 pt and another one has a spacing of 20.9 pt, I'd like to treat them as the same style and set them to a single value. Or really, any suggestion is welcome at this point.

    Read the article

  • A Firefox extension for scan & upload document?

    - by Ivan Petrushev
    Hello, do you know of an extension that provides easy document scanning in Firefox? We are building a web site and we want visitors to be able to upload scanned documents to it. The normal procedure for that is: scan the document via Gimp, Photoshop or some other scanning software; save the file; navigate to the upload web page; find the HTML file input on that page; browse to the saved file; and submit the form. I want an extension or plugin that automates that process and does everything with one click - scans the document with some default settings (for example "grayscale, 300 dpi"), creates a temporary file, fills the page's input field, and deletes the temporary file after upload. I tried lots of googling, but the term "scan" in combination with anything web-related gives zillions of results about viruses, malware and port scanners...

    Read the article

  • Compiling LaTeX document makes Google Drive crash

    - by Sander
    I have the issue that when I compile a LaTeX document located inside my Google Drive, after a few runs it makes the OS X Google Drive application crash. As this is an important document I want to keep it inside the Google Drive location at all times to ensure cloud backup, but that of course is not guaranteed if it makes Google Drive crash all the time. I can't seem to identify what is causing this, and I was hoping that maybe some people here have an idea what might cause it? We're talking about an 8-page document with 3 images, so nothing crazy big or complex.

    Read the article

  • Word document to PDF: open hyperlinks in new window

    - by baens
    I have a Microsoft Word document with hyperlinks in it. When I save the document as a PDF, those hyperlinks no longer open the link in a new window. I have tried all the settings under the "Target Frame..." option, but those don't seem to persist. Are there any settings that would make all hyperlinks in the document open in a new window? I am currently using the Acrobat plugin, but could move to a different plugin if it offers this feature.

    Read the article

  • Selection Issues with a PDF from a Word document

    - by syrion
    I have a long Word document that has a running footer. When I try to copy and paste across pages in the PDF generated from this document, the behavior of this footer is unpredictable--sometimes it is unselected, sometimes it is selected, sometimes the footer on the next page is selected. I would prefer to make this portion of the document unselectable, so that it still shows up but doesn't interfere with copying and pasting. Does anyone have an idea of how to do this? No, changing it to an image isn't possible, because it includes a page number.

    Read the article

  • Can't print graphic from Publisher to pre-printed document

    - by Michael Itzoe
    I have a client who was given a stack of 8.5x11 tri-fold (unfolded) brochures printed on standard printer paper that they need to add their bulk mail permit stamp to. They've created a Publisher document with the stamp. If they print the document to standard paper in the tray, it prints fine. When they load the brochures into the tray, the brochure feeds through the printer but the stamp doesn't print. The stamp document is also 8.5x11. The printer is a Canon MF4300-D44. I can remote in, but obviously have no access to the hardware. Any ideas?

    Read the article

  • Strategy/insights for avoiding document content loss due to encryption

    - by pbernatchez
    I'm about to encourage a group of people to begin using S/MIME and GPG for digital signatures and encryption. I foresee a nightmare of encrypted documents which can no longer be recovered because of lost keys. The thorniest issue is archiving. The natural way to preserve privacy in an archive is to archive the encrypted document. But that opens us up to the risk of a lost key, or a forgotten passphrase, when the time comes to unarchive a document. After all, that will be a long way in the future. This would be equivalent to having destroyed the document. My first thought is archiving keys with the documents, but that still leaves the forgotten passphrase. Archiving the passphrase too would be tantamount to archiving in the clear: no privacy. What approaches do you use? What insights can you offer on the issue?

    Read the article

  • Excel Document Size is at 0KB, can't be opened

    - by Bassam
    After I saved an Excel document, I remembered that I needed to change something in it, so I went back to open it, and it said: Excel cannot open the file, because the file format or the file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file. I know that when I saved it before, around 2 hours ago, it worked just fine. The document size is at 0 KB now. How do I recover this document? It's crucial for my business!

    Read the article

  • Word 2010, how to update protected document

    - by Seth
    The document has one "Section Break" with, e.g., "Text Form Fields" above it. To make these Form Fields work properly I use "Restrict Editing", allow "Filling in Forms", "Select Sections", protect Section 1, and then "Start Enforcing Protection". Now, when the document is protected above the Section Break, you can't use Ctrl-A and F9 to update the fields, etc., of the document below the Section Break. Is there any solution for this problem?

    Read the article

  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts.

    There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as you go along. Many SQL developers I've encountered will store these in a folder, prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and to make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment.

    The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA is tasked to work out what changes have been made, and to hand-craft the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed.

    Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All the above are examples of situations where developer intent is required to supplement the script generation engine.

    SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts.

    In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.
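
    For illustration, a hand-written migration script of the kind described above, covering the column-split case that a diff engine cannot infer on its own (T-SQL; the table and column names are hypothetical, not taken from the article):

        -- Split customer.full_name into first_name / last_name.
        -- A schema comparison alone cannot know how to populate the new columns,
        -- so the developer captures that intent here and commits it with the change.
        ALTER TABLE customer ADD first_name NVARCHAR(50) NULL;
        ALTER TABLE customer ADD last_name  NVARCHAR(50) NULL;
        GO

        UPDATE customer
        SET    first_name = LEFT(full_name, CHARINDEX(' ', full_name + ' ') - 1),
               last_name  = LTRIM(SUBSTRING(full_name, CHARINDEX(' ', full_name + ' ') + 1, LEN(full_name)));

        ALTER TABLE customer DROP COLUMN full_name;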

    Read the article

  • Best pattern for storing (product) attributes in SQL Server

    - by EdH
    We are starting a new project where we need to store products and many product attributes in a database. The technology stack is MS SQL 2008 and Entity Framework 4.0 / LINQ for data access. The products (and the Products table) are pretty straightforward (a SKU, manufacturer, price, etc.). However, there are also many attributes to store with each product (think industrial widgets). These may range from color to certification(s) to pipe size. Every product may have different attributes, and some may have multiples of the same attribute (e.g. certifications). The current proposal is that we will basically have a name/value pair table with a FK back to the product ID in each row. An example of the attributes table may look like this:

        ProdID  AttributeName   AttributeValue
        123     Color           Blue
        123     FittingSize     1.25
        123     Certification   AS1111
        123     Certification   EE2212
        123     Certification   FM.3
        456     Pipe            11
        678     Color           Red
        999     Certification   AE1111
        ...

    Note: the attribute name would likely come from a lookup table or enum. So the main question here is: is this the best pattern for doing something like this? How will the performance be? Queries will be based on a JOIN of the product and attributes tables, and generally need many WHEREs to filter on specific attributes - the most common search will be to find a product based on a set of known/desired attributes. If anyone has any suggestions or a better pattern for this type of data, please let me know. Thanks! -Ed
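
    Since the stack is MS SQL 2008, here is a hedged T-SQL sketch of the proposed name/value design, plus the "set of known attributes" query (table and column names are illustrative):

        CREATE TABLE Product (
            ProdId       INT PRIMARY KEY,
            Sku          NVARCHAR(50) NOT NULL,
            Manufacturer NVARCHAR(100),
            Price        DECIMAL(10,2)
        );

        CREATE TABLE ProductAttribute (
            ProdId         INT NOT NULL REFERENCES Product (ProdId),
            AttributeName  NVARCHAR(50) NOT NULL,   -- ideally an FK to a lookup table
            AttributeValue NVARCHAR(100) NOT NULL
        );

        -- Covering index to support attribute-driven searches
        CREATE INDEX IX_ProductAttribute
            ON ProductAttribute (AttributeName, AttributeValue, ProdId);

        -- Find products that have ALL of a set of required attribute values
        SELECT pa.ProdId
        FROM   ProductAttribute pa
        WHERE  (pa.AttributeName = 'Color'         AND pa.AttributeValue = 'Blue')
            OR (pa.AttributeName = 'Certification' AND pa.AttributeValue = 'AS1111')
        GROUP  BY pa.ProdId
        HAVING COUNT(DISTINCT pa.AttributeName) = 2;   -- all required attributes present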

    Read the article

  • Struggling with a data modeling problem

    - by rpat
    I am struggling with a data model (I use MySQL for the database). I am uneasy about what I have come up with. If someone could suggest a better approach, or point me to some reference material, I would appreciate it. The data would have organizations of many types. I am trying to do a 3-level classification (Class, Category, Type). Say I have 'Italian Restaurant'; it will have the classification Food Services > Restaurants > Italian. However, an organization may belong to multiple groups. A restaurant may also serve Chinese and Italian, so it will fit into 2 classifications: Food Services > Restaurants > Italian and Food Services > Restaurants > Chinese. The classification reference tables would be like the following:

        ORG_CLASS (RowId, ClassCode, ClassName)
            1, FOOD, Food Services

        ORG_CATEGORY (RowId, ClassCode, CategoryCode, CategoryName)
            1, FOOD, REST, Restaurants

        ORG_TYPE (RowId, ClassCode, CategoryCode, TypeCode, TypeName)
            100, FOOD, REST, ITAL, Italian
            101, FOOD, REST, CHIN, Chinese
            102, FOOD, REST, SPAN, Spanish
            103, FOOD, REST, MEXI, Mexican
            104, FOOD, REST, FREN, French
            105, FOOD, REST, MIDL, Middle Eastern

    The actual data tables would be like the following. I will allow an organization a maximum of 3 classifications: I will have 3 GroupIds, each pointing to a row in ORG_TYPE. So I have my ORGANIZATION_TABLE:

        ORGANIZATION_TABLE (OrgGroupId1, OrgGroupId2, OrgGroupId3, OrgName, OrgAddress)
            100, 103, NULL, MyRestaurant1, MyAddr1
            100, 102, NULL, MyRestaurant2, MyAddr2
            100, 104, 105,  MyRestaurant3, MyAddr3

    During data add, a dialog could let the user choose the class, category and type, and the corresponding GroupId could be populated with the RowId from the ORG_TYPE table. During search, if all three classification levels are chosen, it will be more specific. For example, if Food Services > Restaurants > Italian is the criteria, the WHERE clause would be 'where OrgGroupId1 = 100'. If only 2 levels are chosen (Food Services > Restaurants), I have to do 'where OrgGroupId1 in (100, 101, 102, 103, 104, 105, ...)' - there could be a hundred in that list. I will disallow class-level search; that is, I will force selection of a class and category. The Ids would be integers. I am trying to see performance issues and other issues. Overall, would this work? Or do I need to throw this out and start from scratch?
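
    One possible alternative, sketched below in MySQL syntax (table and column names beyond those in the question are illustrative): replace the three fixed GroupId columns with a junction table between organizations and ORG_TYPE. This removes the three-classification limit and lets category-level searches join on ORG_TYPE instead of enumerating type ids in an IN list.

        CREATE TABLE ORG_TYPE (
            RowId        INT PRIMARY KEY,
            ClassCode    VARCHAR(10) NOT NULL,
            CategoryCode VARCHAR(10) NOT NULL,
            TypeCode     VARCHAR(10) NOT NULL,
            TypeName     VARCHAR(50) NOT NULL
        );

        CREATE TABLE ORGANIZATION (
            OrgId      INT PRIMARY KEY AUTO_INCREMENT,
            OrgName    VARCHAR(100) NOT NULL,
            OrgAddress VARCHAR(200)
        );

        -- One row per (organization, classification); no fixed limit of three
        CREATE TABLE ORGANIZATION_TYPE (
            OrgId  INT NOT NULL,
            TypeId INT NOT NULL,                  -- ORG_TYPE.RowId
            PRIMARY KEY (OrgId, TypeId),
            FOREIGN KEY (OrgId)  REFERENCES ORGANIZATION (OrgId),
            FOREIGN KEY (TypeId) REFERENCES ORG_TYPE (RowId)
        );

        -- Category-level search (Food Services > Restaurants) without an IN list
        SELECT DISTINCT o.OrgId, o.OrgName
        FROM   ORGANIZATION o
        JOIN   ORGANIZATION_TYPE ot ON ot.OrgId = o.OrgId
        JOIN   ORG_TYPE t           ON t.RowId  = ot.TypeId
        WHERE  t.ClassCode    = 'FOOD'
        AND    t.CategoryCode = 'REST';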

    Read the article
