Search Results

Search found 42428 results on 1698 pages for 'database query'.

Page 406/1698 | < Previous Page | 402 403 404 405 406 407 408 409 410 411 412 413  | Next Page >

  • Combining multiple rows into one row, Oracle

    - by Torbjørn
    Hi. I'm working with a database which is created in Oracle and used in a GIS software package through SDE. One of my colleagues is going to make some statistics out of this database, and I'm not able to find a reasonable SQL query for getting the data. I have two tables, one with registrations and one with registration details. It's a one-to-many relationship, so a registration can have one or more details connected to it (no maximum number).

    Table: Registration

        RegistrationID  Date        TotLenght
        1               01.01.2010  5
        2               01.02.2010  15
        3               05.02.2009  10

    Table: RegistrationDetail

        DetailID  RegistrationID  Owner  Type  Distance
        1         1               TD     UB    1,5
        2         1               AB     US    2
        3         1               TD     UQ    4
        4         2               AB     UQ    13
        5         2               AB     UR    13,1
        6         3               TD     US    5

    I want the resulting selection to be one row per registration, with that registration's details strung out along the row, something like this:

        RegistrationID  Date        TotLenght  DetailID  RegistrationID  Owner  Type  Distance  DetailID  RegistrationID  Owner  Type  Distance  DetailID  RegistrationID  Owner  Type  Distance
        1               01.01.2010  5          1         1               TD     UB    1,5       2         1               AB     US    2         3         1               TD     UQ    4
        2               01.02.2010  15         4         2               AB     UQ    13        5         2               AB     UR    13,1
        3               05.02.2009  10         6         3               TD     US    5

    With a normal join I get one row per registration/detail pair. Can anyone help me with this? I don't have administrator rights for the database, so I can't create any tables or variables. If it would help, I could copy the tables into Access.
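
    A hedged sketch of one common approach that needs no new tables or variables: aggregate each registration's details into a single string column rather than separate columns. On Oracle 11g+ this is LISTAGG(...) WITHIN GROUP (ORDER BY ...); the demo below uses SQLite's group_concat() only so the query shape is runnable here.

        import sqlite3

        # Demo of the aggregation pattern with SQLite's group_concat(); on Oracle
        # 11g+ the equivalent is LISTAGG(...) WITHIN GROUP (ORDER BY d.DetailID).
        # Result: one row per registration, details concatenated into one column.
        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE Registration (RegistrationID INTEGER, Date TEXT, TotLenght REAL);
            CREATE TABLE RegistrationDetail (
                DetailID INTEGER, RegistrationID INTEGER,
                Owner TEXT, Type TEXT, Distance TEXT);
            INSERT INTO Registration VALUES
                (1, '01.01.2010', 5), (2, '01.02.2010', 15), (3, '05.02.2009', 10);
            INSERT INTO RegistrationDetail VALUES
                (1, 1, 'TD', 'UB', '1,5'), (2, 1, 'AB', 'US', '2'),
                (3, 1, 'TD', 'UQ', '4'), (4, 2, 'AB', 'UQ', '13'),
                (5, 2, 'AB', 'UR', '13,1'), (6, 3, 'TD', 'US', '5');
        """)
        for row in con.execute("""
            SELECT r.RegistrationID, r.Date, r.TotLenght,
                   group_concat(d.DetailID || ' ' || d.Owner || ' ' || d.Type
                                || ' ' || d.Distance, ' | ') AS details
            FROM Registration r
            JOIN RegistrationDetail d ON d.RegistrationID = r.RegistrationID
            GROUP BY r.RegistrationID, r.Date, r.TotLenght
            ORDER BY r.RegistrationID"""):
            print(row)

    On Oracle versions before 11g, a pivot built from ROW_NUMBER() OVER (PARTITION BY RegistrationID ...) plus conditional aggregation would give the literal column-per-detail layout, but only up to a fixed maximum number of details per registration.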

    Read the article

  • Is there any way to carry a value in PHP forward to a second page?

    - by Henry Aspden
    I have created a PHP site. Previously it listed only products with hard-coded values; I have now changed it to include an array of products, for example all products WHERE id = "spotlights", and this works great, since it means I can add new products just to the database. But I still have to add the second page manually, e.g. going from the product div on the main page through to www.example.com/spotlight_1.php. Is there any way in PHP to carry data from my index.php (e.g. the ID) through to the next page, so that I can have a template product.php page and use a database pull to echo the product information required? So on index.php I click on the product with ID="1", and on the product.php page it loads the relevant data for product 1. I can write the PHP SQL/MySQL calls myself; it's just the way to carry a value across from the previous page that I don't understand. P.S. All the IDs are stored in the database already as 1- to 3-digit values, e.g. 3 or 93 or 254. Any advice, as always, is greatly appreciated. Regards, Henry
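
    In PHP the usual pattern is to carry the ID in the link's query string (for example product.php?id=1) and read it on the template page with $_GET['id'] before querying the database. As a hedged illustration, here is the same pattern sketched in Python/Flask, with a hypothetical in-memory product list standing in for the database pull:

        from flask import Flask  # third-party: pip install flask

        app = Flask(__name__)

        # Stand-in for the products table; a real app would query the database.
        PRODUCTS = {1: "Spotlight A", 2: "Spotlight B"}

        @app.route("/product/<int:product_id>")
        def product(product_id):
            # The ID travels in the URL, so one template page serves every product.
            name = PRODUCTS.get(product_id, "unknown product")
            return "<h1>" + name + "</h1>"

        if __name__ == "__main__":
            app.run()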

    Read the article

  • SQLite for personal use

    - by ALife
    What applications for your personal use need a small database like SQLite? I am thinking of trying a few popular databases, and SQLite is surely the first one I plan to try, since I know barely anything about databases except some simple programming years ago. I have learned that SQLite is good for personal use, but embarrassingly I cannot think of any application except maybe managing my list of phone numbers/contact info, which has perhaps a few hundred items. What's your experience? FYI, I use EndNote for my references and soft copies of books, and I feel iTunes' music/media management is fine since I am not a frequent user anyway. And others? I do lots of coding, but I just use some simple etags tools for that. And I pretty much use .txt files (sometimes in asciidoc style) for my notes. I have quite a bunch of notes, but not that many either. So, really, what are your personal applications that need a small database instead of existing tools and plain text files?

    Read the article

  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    We have a .NET two-tier application where a desktop program talks to a database. We support MS SQL Server 2000, 2005 and 2008, and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that installation and configuration be as easy as possible, since installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission-critical by the IT department, so we need to keep their work down to a minimum. Now we are starting to get requests for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. The challenge is how to move data back and forth between the web application and the customer's internal database. As I see it, we have these requirements: (1) we must be ready to handle the situation where the customer's database is not accessible from the DMZ; I guess the easiest solution is for all communication to be initiated from inside the customer's LAN. (2) As little firewall configuration as possible; best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked, and if we do need something changed in the firewall, we must be able to document that the change is secure. (3) It doesn't have to be real time; moving data in batches every ten minutes or so is fine. (4) Data moves both ways, but not in the same tables, so we don't have to support merges. It would be nice if we didn't have to roll our own framework completely. Looking forward to hearing your suggestions.
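
    A hedged sketch of the usual shape for requirement (1): a small agent inside the customer's LAN that batches up changes and pushes them over HTTPS, so every connection is outbound and no inbound firewall rule is needed. The endpoint, credential, and change query below are placeholders, not a real API:

        import time

        import requests  # third-party: pip install requests

        SAAS_URL = "https://saas.example.com/api/sync"  # hypothetical endpoint
        API_KEY = "customer-specific-key"               # hypothetical credential

        def collect_changes():
            # Placeholder: query the customer's SQL Server/Oracle database here
            # for rows changed since the last successful push.
            return [{"table": "orders", "id": 1, "status": "shipped"}]

        while True:
            batch = collect_changes()
            if batch:
                resp = requests.post(SAAS_URL, json=batch, timeout=30,
                                     headers={"Authorization": "Bearer " + API_KEY})
                resp.raise_for_status()  # surface failed pushes for retry logic
            time.sleep(600)  # batch every ten minutes, as the requirements allow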

    Read the article

  • how do simple SQLAlchemy relationships work?

    - by Carson Myers
    I'm no database expert; I just know the basics, really. I've picked up SQLAlchemy for a small project, and I'm using the declarative base configuration rather than the "normal" way, which seems a lot simpler. However, while setting up my database schema, I realized I don't understand some database relationship concepts. If I had a many-to-one relationship, for example articles by authors (where each article could be written by only a single author), I would put an author_id field in my articles table. But SQLAlchemy has this ForeignKey object, and a relationship function with a backref kwarg, and I have no idea what any of it MEANS. I'm scared to find out what a many-to-many relationship with an intermediate table looks like (for when I need additional data about each relationship). Can someone demystify this for me? Right now I'm setting up to allow OpenID auth for my application, so I've got this:

        from __init__ import Base
        from sqlalchemy.schema import Column
        from sqlalchemy.types import Integer, String

        class Users(Base):
            __tablename__ = 'users'
            id = Column(Integer, primary_key=True)
            username = Column(String, unique=True)
            email = Column(String)
            password = Column(String)
            salt = Column(String)

        class OpenID(Base):
            __tablename__ = 'openid'
            url = Column(String, primary_key=True)
            user_id = #?

    I think the ? should be replaced by Column(Integer, ForeignKey('users.id')), but I'm not sure; and do I need to put openids = relationship("OpenID", backref="users") in the Users class? Why? What does it do? What is a backref?
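
    For what it's worth, a sketch of how these pieces usually fit together (imports shown in the SQLAlchemy 1.4+ style; older releases import declarative_base from sqlalchemy.ext.declarative). ForeignKey is the database-level link: the openid table stores a users.id value, exactly like the author_id column described above. relationship() adds the Python-side navigation on top of it, and backref conventionally names the "one" side, so "user" rather than "users":

        from sqlalchemy import Column, ForeignKey, Integer, String
        from sqlalchemy.orm import declarative_base, relationship

        Base = declarative_base()

        class Users(Base):
            __tablename__ = 'users'
            id = Column(Integer, primary_key=True)
            username = Column(String, unique=True)
            # One user -> many OpenID rows. The backref also puts a .user
            # attribute on each OpenID instance, so some_openid.user and
            # some_user.openids both work without further declarations.
            openids = relationship("OpenID", backref="user")

        class OpenID(Base):
            __tablename__ = 'openid'
            url = Column(String, primary_key=True)
            # The column-level link: this column holds a users.id value.
            user_id = Column(Integer, ForeignKey('users.id'))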

    Read the article

  • Is it wise to use temporary tables?

    - by Industrial
    Hi guys, we have a MySQL database table for products. We are using a cache layer to reduce database load, but we think it's a good idea to minimize the actual data that needs to be stored in the cache layer, to speed the application up further. All the products in the database that are visible to visitors have a price attached to them; the prices are stored in a different table, called prices. There are multiple price categories, depending on which discount level each visitor (customer) qualifies for. From time to time there are campaigns, meaning a special price for a product is available; the special prices are stored in a table called specials. Is it bad to make a temp table that binds the tables together? It would hold only the necessary information and would of course be cached:

        productId | hasPrice | hasSpecial
        ----------|----------|-----------
        1         | 1        | 0
        2         | 1        | 1

    By doing so, it would be super easy to know whether a specific product really has a price, without having to iterate through the complete prices or specials table each time a product is listed or presented. Are temp tables like this a common thing for web applications, or is it just bad design?
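
    A hedged alternative sketch: the same flags can be derived on the fly with EXISTS subqueries and the result set cached, which avoids keeping a separate table in sync. SQLite is used below only to make the demo runnable; the query shape is the same in MySQL:

        import sqlite3

        # Derive hasPrice/hasSpecial with EXISTS instead of maintaining a summary
        # table; the result set itself is what goes into the cache layer.
        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE products (productId INTEGER PRIMARY KEY);
            CREATE TABLE prices   (productId INTEGER, category INTEGER, price REAL);
            CREATE TABLE specials (productId INTEGER, price REAL);
            INSERT INTO products VALUES (1), (2);
            INSERT INTO prices VALUES (1, 1, 9.99), (2, 1, 19.99);
            INSERT INTO specials VALUES (2, 14.99);
        """)
        rows = con.execute("""
            SELECT p.productId,
                   EXISTS (SELECT 1 FROM prices   pr WHERE pr.productId = p.productId) AS hasPrice,
                   EXISTS (SELECT 1 FROM specials sp WHERE sp.productId = p.productId) AS hasSpecial
            FROM products p
        """).fetchall()
        print(rows)  # [(1, 1, 0), (2, 1, 1)]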

    Read the article

  • Understanding MongoDB (and NoSQL in general) and how to make the best use of it

    - by Earlz
    Hello, I am beginning to think that the next project I want to do would work better with a NoSQL solution. The project would involve either a ton of two-column tables or a ton of dynamic queries with dynamically generated columns in a traditional SQL database, so I feel a NoSQL database would be much cleaner. I'm looking at MongoDB and it looks pretty promising. Anyway, I'm attempting to make sense of it all; I will be using MongoMapper in Ruby. I'm confused as to how to lay things out in such a free-form database. I've read http://stackoverflow.com/questions/2170152/nosql-best-practices and the answer there says that normalization is usually bad in a NoSQL DB. So what would be the best way of laying out, say, a simple blog with users, posts, and comments? My natural thought was to have three collections, one for each, and to link them by a unique ID. But this is apparently wrong? So, what are some of the ways to lay out such a thing? My concern with the answer given in the other question is: what if the author's name changed? You'd have to go through and update a ton of posts and comments. But is this an okay thing to do with NoSQL?
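
    As a hedged illustration of the "three collections linked by ID" layout (shown with pymongo rather than the question's MongoMapper, and assuming a local mongod is running): storing an author_id reference instead of a copied author name is exactly what avoids the mass update on rename, at the cost of one extra lookup per display.

        from pymongo import MongoClient  # third-party: pip install pymongo

        # Users, posts, and comments as separate collections linked by _id,
        # so renaming an author touches exactly one document.
        db = MongoClient()["blog_demo"]

        user_id = db.users.insert_one({"name": "Earlz"}).inserted_id
        post_id = db.posts.insert_one({"author_id": user_id,
                                       "title": "Hello",
                                       "body": "First post"}).inserted_id
        db.comments.insert_one({"post_id": post_id,
                                "author_id": user_id,
                                "text": "Nice post"})

        # A rename is a single update; posts and comments resolve the name
        # through author_id when they are displayed.
        db.users.update_one({"_id": user_id}, {"$set": {"name": "Earlz2"}})
        post = db.posts.find_one({"_id": post_id})
        print(db.users.find_one({"_id": post["author_id"]})["name"])  # Earlz2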

    Read the article

  • Are document-oriented databases any more suitable than relational ones for persisting objects?

    - by Owen Fraser-Green
    In terms of database usage, the last decade was the age of the ORM, with hundreds competing to persist our object graphs in plain old-fashioned RDBMSs. Now we seem to be witnessing the coming of age of document-oriented databases. These databases are highly optimized for schema-free documents but are also very attractive for their ability to scale out and query a cluster in parallel. Document-oriented databases also hold a couple of advantages over RDBMSs for persisting data models in object-oriented designs. As the tables are schema-free, one can store objects belonging to different classes in an inheritance hierarchy side by side. Also, as the domain model changes, so long as the code can cope with getting back objects from an old version of the domain classes, one can avoid having to migrate the whole database at every change. On the other hand, the performance benefits of document-oriented databases mainly appear when storing deeper documents - in object-oriented terms, classes which are composed of other classes, for example a blog post and its comments. In most of the examples of this I can come up with, though, such as the blog one, the gain in read access would appear to be offset by the penalty of having to write the whole blog post "document" every time a new comment is added. It looks to me as though document-oriented databases can bring significant benefits to object-oriented systems if one takes extreme care to organize the objects in deep graphs optimized for the way the data will be read and written, but this means knowing the use cases up front. In the real world, we often don't know until we actually have a live implementation we can profile. So is the case of relational vs. document-oriented databases one of swings and roundabouts? I'm interested in people's opinions and advice, in particular whether anyone has built any significant applications on a document-oriented database.

    Read the article

  • rename an html page according to an image within it

    - by Jake
    Hi, firstly I'll give some background on the situation. I have a website containing approximately 56k pages; each page contains a mapped sketch of a machine part. The machine part is made up of smaller parts, which are outlined in the image and hold a certain number; when you hover over the numbers, a box with the part's item code shows up. I order parts according to these item codes, but recently a lot of the item codes have changed, so I am looking for a solution. I have a database with data on all 56k parts, and I want to link the relevant web page to each record according to the name of the part (a column in my database). The problem is that the web pages' names have no logical connection to the part names in any way, but the image displayed in each page has the exact name of the part. I want to rename all the HTML files I have according to the images displayed within them. How can I achieve that without renaming all 56k pages manually? Additionally, how can I add the links to all 56k pages automatically to my database after all the above is done? Thank you for your patience, I know it was long.
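
    A hedged sketch of the batch rename, assuming each page contains a single <img> tag whose file name is the exact part name; the regular expression would need adjusting to the real markup, and it is safest run against a copy of the site first. The same loop can also emit part-name/file-name pairs for loading the links into the parts database afterwards.

        import os
        import re

        SITE_DIR = "site"  # hypothetical folder holding the ~56k HTML files
        IMG_RE = re.compile(
            r'<img[^>]+src="(?:[^"]*/)?([^"./]+)\.(?:png|jpe?g|gif)"', re.I)

        for name in os.listdir(SITE_DIR):
            if not name.endswith(".html"):
                continue
            path = os.path.join(SITE_DIR, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                match = IMG_RE.search(f.read())
            if match:
                # Rename the page after the image (i.e. the part) it displays.
                new_path = os.path.join(SITE_DIR, match.group(1) + ".html")
                if not os.path.exists(new_path):  # don't clobber duplicates
                    os.rename(path, new_path)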

    Read the article

  • Populating a foreign key table with variable user input

    - by Vincent
    I'm working on a website that will be based on user-contributed data, submitted using a regular HTML form. To simplify my question, let's say there will be two fields in the form: "User Name" and "Country" (this is just an example, not the actual site). There will be two tables in the database, "countries" and "users", with "users.country_id" being a foreign key to the "countries" table (one-to-many). The initial database will be empty. Users from all over the world will submit their names and the countries they live in, and eventually the "countries" table will fill up with all the country names in the world. Since one country can have several alternative spellings, input like Chile, Chili and Chilli will generate three different records in the countries table, even though in fact there is only one country; when I search for records from Chile, Chili and Chilli will not be included. So my question is: what would be the best way to deal with a situation like this, given that the initial database is empty, no other resources are available, and everything is based on user input? How can I organize it in such a way that Chile, Chili and Chilli are treated as one country, with minimum manual intervention? What are the best practices when it comes to normalizing user-submitted data, and is there a scientific term for this? I'm sure this is a common problem. Again, I used country names just to simplify my question; it can be anything that has possible different spellings.
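
    In the literature this is usually called entity resolution or record linkage (canonicalization or deduplication in engineering terms). A hedged sketch of one common pattern: keep one canonical countries row plus an alias table, resolve every submission through the aliases first, and let a moderator or a fuzzy matcher later merge rows that turn out to be the same country by repointing their aliases:

        import sqlite3

        # Canonical countries plus an alias table: Chile/Chili/Chilli can all
        # be pointed at one country row without touching the users table.
        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE countries (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
            CREATE TABLE country_aliases (alias TEXT PRIMARY KEY, country_id INTEGER);
        """)

        def country_id_for(raw):
            name = raw.strip().lower()
            row = con.execute("SELECT country_id FROM country_aliases WHERE alias = ?",
                              (name,)).fetchone()
            if row:
                return row[0]
            # Unknown spelling: create a new canonical row and remember the alias.
            cur = con.execute("INSERT INTO countries (name) VALUES (?)", (name,))
            con.execute("INSERT INTO country_aliases VALUES (?, ?)",
                        (name, cur.lastrowid))
            return cur.lastrowid

        print(country_id_for("Chile"), country_id_for("chile"))  # same id twice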

    Read the article

  • Making a simple searchable directory of people and their skills in a day - Which technologies?

    - by gav
    Hi all, I am working with a small theatre company. Currently they have a list of people on paper, with notes about each person's skills next to their name. I want to create a database/directory for them so that they can add, delete, update and search for people. It is a very simple and common scenario, I know, but the issue here is that I only have a day to build a working solution, and the search has to be very simple. At first I was thinking LAMP, but I'd rather not have to create it all from scratch and host it myself. That led me to Google Spreadsheets as a database; this has the advantage that they already use Google Docs for everything, and if my front end goes tits up they can still get to the data. Presuming none of you can think of existing software which does exactly what I want, the next step is to make a front end for the database. You can create forms for Google Spreadsheets, but they only let you add new entries, and I can make a Google Gadget, but that will only let me implement the search, as the Google Visualisation API provides read-only access. It's at this point I'm stuck: should I just create a Java servlet front end for the Google Spreadsheet and use the Java API to add, search and update? I know this is a broad question, but I'm just asking 'What would you do?' to implement this system with a day's development time. Gav

    Read the article

  • Alternative or successor to GDBM

    - by Anon Guy
    We have a GDBM key-value database as the backend to a load-balanced, web-facing application implemented in C++. The data served by the application has grown very large, so our admins have moved the GDBM files from "local" storage (on the web servers, or very close by) to a large, shared, remote, NFS-mounted filesystem. This has affected performance. Our performance tests (in a test environment) show page load times jumping from hundreds of milliseconds (for local disk) to several seconds (over NFS, local network), sometimes getting as high as 30 seconds. I believe a large part of the problem is that the application makes lots of random reads from the GDBM files, and that these are slow over NFS; this will be even worse in production (where the front end and back end have even more network hardware between them) and as our database gets even bigger. While this is not a critical application, I would like to improve performance, and I have some resources available, including application developer time and Unix admins. My main constraint is time: I only have the resources for a few weeks. As I see it, my options are: (1) improve NFS performance by tuning parameters; my instinct is we won't get much out of this, but I have been wrong before, and I don't really know very much about NFS tuning. (2) Move to a different key-value database, such as memcachedb or Tokyo Cabinet. (3) Replace NFS with some other protocol (iSCSI has been mentioned, but I am not familiar with it). How should I approach this problem?

    Read the article

  • Moving information between databases

    - by williamjones
    I'm on Postgres and have two databases on the same machine, and I'd like to move some data from database Source to database Dest. In database Source: table User has a primary key; table Comments has a primary key; table UserComments is a join table with two foreign keys, to User and to Comments. Dest looks just like Source in structure, but already has information in its User and Comments tables that needs to be retained. I'm thinking I'll probably have to do this in a few steps. Step 1: dump Source using the Postgres COPY command. Step 2: in Dest, add a temporary second_key column to both User and Comments, and a new SecondUserComments join table. Step 3: import the dumped file into Dest using COPY again, with the old keys going into the second_key columns. Step 4: add rows to UserComments in Dest based on the contents of SecondUserComments, this time using the real primary keys. Could this be done with a SQL command, or would I need a script? Step 5: delete the SecondUserComments table and remove the second_key columns. Does this sound like the best way to do this, or is there a better way I'm overlooking?
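
    A hedged alternative that skips the temporary second_key columns entirely: a small script can hold the old-to-new key mapping in memory, using INSERT ... RETURNING to learn each new primary key as it goes (psycopg2 assumed; the connection strings and column lists are placeholders for the real schema):

        import psycopg2  # third-party: pip install psycopg2-binary

        src = psycopg2.connect("dbname=source")  # placeholder connection strings
        dst = psycopg2.connect("dbname=dest")
        s, d = src.cursor(), dst.cursor()
        user_map, comment_map = {}, {}

        s.execute("SELECT id, name FROM users")  # illustrative column list
        for old_id, name in s.fetchall():
            d.execute("INSERT INTO users (name) VALUES (%s) RETURNING id", (name,))
            user_map[old_id] = d.fetchone()[0]  # old Source key -> new Dest key

        s.execute("SELECT id, body FROM comments")
        for old_id, body in s.fetchall():
            d.execute("INSERT INTO comments (body) VALUES (%s) RETURNING id", (body,))
            comment_map[old_id] = d.fetchone()[0]

        s.execute("SELECT user_id, comment_id FROM user_comments")
        for u, c in s.fetchall():
            d.execute("INSERT INTO user_comments (user_id, comment_id) VALUES (%s, %s)",
                      (user_map[u], comment_map[c]))

        dst.commit()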

    Read the article

  • Validating Login / Changing User Settings / PHP and MySQL

    - by Marcelo
    Hi everyone, my questions are about logging in, and about changing already saved data. (Q1) Until now I've only saved input into the tables of the database (the registration steps); now I need to check whether the input (the login steps) matches what is in the database. In fact I have three types of users, so I'll have to check three different tables, and if the submitted data matches a row in one of those three tables I will redirect the user to his specific area. I'm thinking about taking the submitted data, $login=$_REQUEST['login']; and $password=$_REQUEST['password'];, and comparing it with the login column in the database; then, if the login matches, comparing the submitted password with the one in that row, not in the whole column. But I don't know how to do this search and comparison, nor what to use. If both match I'll redirect the user, else I'll send a login error message (that part I know how to do). (Q2) What if I need to change an already saved user, for example to change an email address? My page for changing a user's data is exactly the same as the registration page. Can I load the already saved options and values from registration (the user table, for example), let the user change whatever he thinks is necessary, and then, when he submits the new information, have it not create a new row in my table but just overwrite the old information? How can I do this? Sorry for any mistakes in English, and thanks for the attention.
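
    A hedged sketch of both answers, in Python/SQLite since the pattern carries over directly to PHP/MySQL: for Q1, fetch the row by login with a parameterized query and compare a password hash (never raw passwords); for Q2, run an UPDATE ... WHERE against the existing row instead of a new INSERT.

        import hashlib
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, login TEXT UNIQUE,"
                    " pw_hash TEXT, email TEXT)")

        def pw_hash(password):
            # Demo only; a real site should use a salted, slow hash such as bcrypt.
            return hashlib.sha256(password.encode()).hexdigest()

        con.execute("INSERT INTO users (login, pw_hash, email) VALUES (?, ?, ?)",
                    ("marcelo", pw_hash("secret"), "old@example.com"))

        # (Q1) Login: parameterized lookup by login, then compare stored hashes.
        row = con.execute("SELECT id, pw_hash FROM users WHERE login = ?",
                          ("marcelo",)).fetchone()
        ok = row is not None and row[1] == pw_hash("secret")
        print("login ok:", ok)  # on success, redirect by user type here

        # (Q2) Editing a saved user: UPDATE overwrites the row, no new row added.
        con.execute("UPDATE users SET email = ? WHERE id = ?",
                    ("new@example.com", row[0]))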

    Read the article

  • MySQL Performance Question - Essentially about normalizing efficiency

    - by freqmode
    Hi there. Just a quick question about database performance; I'll outline my site's purpose below as background. I'm creating a dictionary site that saves the words users define to a database. What I'm wondering is whether to create a words table for each user or to keep one massive words table. This site will be used by entire schools, so a single words table would be massive! The database structure is as follows.

    A user table with:

        User_ID (PRIMARY KEY), Username, First, Last, Password, Email, Country,
        Research, Standings, SendInfo, Donated, JoinedOn, LastLogin, Logins,
        Correct, Attempts, Admin, Active

    And one words table with:

        User_ID (PRIMARY KEY), Word, Vocab, Spell, Defined, DefinedAttempted,
        Spelled, SpelledAttempted, Sentenced, SentencedAttempted

    So what I'm asking is: performance-wise, should I create a new table for each user when they join the site (each user could have hundreds or thousands of words over time), or is it better to have one massive table with thousands and thousands of records and filter by User_ID? I don't think I'll perform many table joins. My gut feeling is to create a new table for each user, but I thought I'd ask for expert advice! Thanks in advance.
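
    For what it's worth, the usual advice is a single shared table: table-per-user schemas are generally considered an anti-pattern, while a composite index keyed by User_ID keeps per-user lookups fast regardless of total row count. A minimal sketch (SQLite syntax shown; MySQL is analogous):

        import sqlite3

        # One shared words table; the composite index makes "WHERE User_ID = ?"
        # cheap no matter how many users' rows the table accumulates.
        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE words (
                id INTEGER PRIMARY KEY,
                User_ID INTEGER NOT NULL,
                Word TEXT NOT NULL,
                Defined INTEGER DEFAULT 0
            );
            CREATE INDEX idx_words_user_word ON words (User_ID, Word);
        """)
        con.execute("INSERT INTO words (User_ID, Word) VALUES (?, ?)",
                    (42, "ephemeral"))
        print(con.execute("SELECT Word FROM words WHERE User_ID = ?",
                          (42,)).fetchall())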

    Read the article

  • C# threading solution for long queries

    - by Eddie
    Scenario: we have an application that records incidents. An external database needs to be queried when an incident is approved by a supervisor, and the queries to this external database sometimes take a while to run. This lag is experienced through the browser. Possible solution: I want to use threading to eliminate the apparent hang in the browser. I have used the Thread class before and heard about ThreadPool, but I just found BackgroundWorker in this post. MSDN states: "The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution." Is BackgroundWorker the way to go when handling long-running queries? What happens when two or more BackgroundWorker processes run simultaneously - are they handled like a pool?

    Read the article

  • How do I execute queries upon DB connection in Rails?

    - by sycobuny
    I have certain initializing functions that I use to set up audit logging on the DB server side (i.e., not in Rails) in PostgreSQL. At least one has to be issued (setting the current user) before inserting data into or updating any of the audited tables, or else the whole query will fail spectacularly. I could easily call these every time before running any save operation in the code, but DRY makes me think the code should be repeated in as few places as possible, particularly since this diverges greatly from the ideal of database agnosticism. Currently I'm attempting to override ActiveRecord::Base.establish_connection in an initializer, so that the queries are run as soon as I connect, but it doesn't behave as I expect. Here is the code in the initializer:

        class ActiveRecord::Base
          # extend the class methods, not the instance methods
          class << self
            alias :old_establish_connection :establish_connection # hide the default

            def establish_connection(*args)
              ret = old_establish_connection(*args) # call the default

              # set up necessary session variables for audit logging; call these
              # after calling the default, to make sure conn is established 1st
              db = self.class.connection
              db.execute("SELECT SV.set('current_user', 'test@localhost')")
              db.execute("SELECT SV.set('audit_notes', NULL)") # end "empty variable" err

              ret # return the default's original value
            end
          end
        end
        puts "Loaded custom establish_connection into ActiveRecord::Base"

        sycobuny:~/rails$ ruby script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        Loaded custom establish_connection into ActiveRecord::Base

    This doesn't give me any errors, and unfortunately I can't check what the method looks like internally (I was using ActiveRecord::Base.method(:establish_connection), but apparently that creates a new Method object each time it's called, which is seemingly worthless because I can't check object_id for any worthwhile information, and I also can't reverse the compilation). However, the code never seems to get called, because any attempt to run a save or an update on a database object fails as I predicted earlier. If this isn't a proper way to execute code immediately on connection to the database, then what is?

    Read the article

  • When to use a foreign key in MySQL

    - by Mel
    Is there official guidance, or a threshold, indicating when it is best practice to use a foreign key in a MySQL database? Suppose you created a table for movies. One way to do it is to put the producer and director data in the same table (movieID, movieName, directorName, producerName). However, suppose most directors and producers have worked on many movies. Would it be better to create two other tables for producers and directors, and use foreign keys in the movie table? When does it become best practice to do this - when many of the directors and producers appear several times in the column, or is it best practice to employ a foreign-key approach from the start? While it seems more efficient to use a foreign key, it also raises the complexity of the database, so when does the trade-off between complexity and normalization become worth it? I'm not sure if there is a threshold or a certain number of cell repetitions that makes it more sensible to use a foreign key. I'm thinking about a database that will be used by hundreds of users, many concurrently. Many thanks!
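
    There is no numeric repetition threshold in the standard guidance; normalization (here, third normal form) says to split directors and producers out as soon as they are independent entities that can relate to many movies, which is the case described above. A hedged sketch of that layout (SQLite syntax shown; MySQL is analogous):

        import sqlite3

        # Normalized layout: people get their own tables and movies reference
        # them by foreign key, so each name is stored (and corrected) once.
        con = sqlite3.connect(":memory:")
        con.execute("PRAGMA foreign_keys = ON")
        con.executescript("""
            CREATE TABLE directors (directorID INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE producers (producerID INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE movies (
                movieID    INTEGER PRIMARY KEY,
                movieName  TEXT,
                directorID INTEGER REFERENCES directors(directorID),
                producerID INTEGER REFERENCES producers(producerID)
            );
        """)
        con.execute("INSERT INTO directors (name) VALUES ('Kathryn Bigelow')")
        con.execute("INSERT INTO movies (movieName, directorID) "
                    "VALUES ('The Hurt Locker', 1)")
        print(con.execute("""
            SELECT m.movieName, d.name
            FROM movies m JOIN directors d ON d.directorID = m.directorID
        """).fetchall())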

    Read the article

  • Print table data with MySQL and PHP

    - by Marcelo
    Hi people, I'm having a problem trying to print some data from a table. I'm new to this PHP/MySQL stuff, but I think my code is right. Here it is:

        <html>
        <body>
        <h1>Lista de usuários</h1>
        <?php
        $host="localhost"; // Host name
        $username="root"; // MySQL username
        $password=""; // MySQL password
        $db_name="sabs"; // Database name
        $tbl_name="doador"; // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        mysql_select_db("$db_name") or die("cannot select DB");

        $sql="SELECT * FROM $tbl_name";
        $result=mysql_query($sql);

        // Note: the loop variable was $rows in the original post while the echo
        // used $row; the names must match for anything to print.
        while($row = mysql_fetch_array($result)){
            echo $row['id'] . " " . $row['nome'] . " " . $row['sobrenome'] . " "
               . $row['email'] . " " . $row['login'] . " " . $row['senha'] . " "
               . $row['idade'] . " " . $row['peso'] . " " . $row['fuma'] . " "
               . $row['sexo'] . " " . $row['doencas'];
            echo "<BR/>";
        }
        mysql_close();
        ?>
        </body>
        </html>

    All the columns in the echo command exist in my table in the database. I don't get why it's not printing those values. Thanks for the attention.

    Read the article

  • Relating text fields to check boxes in Java

    - by Finzz
    This program requires the user to log in and request a database to access. The program then gets a Connection object and searches through the database, storing the column names in a vector for later use. The problem comes with implementing text fields to allow the user to search for specific values within the database. I can get the check boxes and text fields to appear using a GridLayout and add them to a panel, but how do I relate each text field to its appropriate check box? I've tried adding them to a vector, but then they can't also be added to the panel as well. I've searched for a way to name the text fields as the loop cycles through the column names, but it seems impossible to do without declaring them ahead of time, and that can't be done either, as it's impossible to know in advance which attributes the user will request. I just need to be able to know the names of the text fields so I can test whether the user entered information and perform the necessary logic. Let me know if you have to see the rest of the code to give an answer, but hopefully you get the general idea of what I'm trying to accomplish. (A picture of the UI accompanied the original post.)

        try {
            ResultSet r2 = con.getMetaData().getColumns("", "", rb.getText(), "");
            colNames1 = new Vector<String>();
            columns1 = new Vector<JCheckBox>();
            while (r2.next()) {
                colNames1.add(r2.getString(4));
                JCheckBox cb = new JCheckBox(r2.getString(4));
                JTextField tf = new JTextField(10);
                columns1.add(cb);
                p3.add(cb);
                p3.add(tf);
            }
        }

    Read the article

  • Is there a way to sync (two-way) tables between a MySQL server and a local MS Access database?

    - by Kailen
    Help me figure out a solution to a (not so unique) problem. My research group has GPS devices attached to migratory animals. Every once in a while, a research tech will be within range of an animal and will get the chance to download all the logged points. Each individual device spits out a single .dbf, and new locations are just appended to the end (so the file is cumulative). These data need to be shared among the research group. Everyone else (besides me) wants to use Access, so they can make small edits, and they prefer that interface; they do not like using MySQL. The solution I came up with is: (a) the person who downloads the file goes to a web page, enters the animal ID into a form, chooses the .dbf file and uploads it to a MySQL database on the server (I still have to write PHP code to read the .dbf and write SQL INSERT statements from it); (b) everyone syncs from their local Access database to the server (this is natively possible from Access but very clunky). Is there a tool (preferably open source) that can compare an Access table to a MySQL table and sync the two (both ways)? Alternatively, does anyone have a more elegant solution? The ultimate goal is to allow everyone to have access to the most current data on their computers using their preferred database app.

    Read the article

  • How to improve my software project's speed?

    - by Blitzkr1eg
    I'm doing a school software project with my classmates in Java. We store the info in a remote DB. When we start the application we pull all the information from the database and transform it into objects to use in our application (using Java SQL statements). In the application we edit some of these objects, and then when we exit the application we save or update the information in the database using Hibernate. As you can see, we don't use Hibernate for pulling in information; we use it just for saving and updating. We have two very similar problems: the loading of objects (when we start the app) and the saving of objects with Hibernate (when closing the app) both take too much time. And our project is not a huge enterprise application; it's quite a small app - we just manage some students, teachers, homework and tests - so our DB is also very, very small. How could we increase performance? Later edit: with a local database it runs very quickly; it is only slow against remote databases.

    Read the article

  • Installation error on Ubuntu 11.10

    - by Abhishek Chanda
    I upgraded to Ubuntu 11.10 and now, when I try to install or uninstall software, I get this error:

        installArchives() failed:
        (Reading database ... (Reading database ... 5% ... (Reading database ... 100%
        (Reading database ... 158945 files and directories currently installed.)
        Removing aisleriot ...
        Processing triggers for gconf2 ...
        Processing triggers for man-db ...
        Processing triggers for hicolor-icon-theme ...
        Processing triggers for libglib2.0-0 ...
        Processing triggers for gnome-menus ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Setting up flashplugin-downloader (11.0.1.152ubuntu1) ...
        Downloading...
        --2012-05-02 18:47:29-- http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.0.1.152.orig.tar.gz
        Resolving archive.canonical.com... 91.189.92.150, 91.189.92.191
        Connecting to archive.canonical.com|91.189.92.150|:80... connected.
        HTTP request sent, awaiting response... 404 Not Found
        2012-05-02 18:47:29 ERROR 404: Not Found.
        download failed
        The Flash plugin is NOT installed.
        dpkg: error processing flashplugin-downloader (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of flashplugin-installer:
         flashplugin-installer depends on flashplugin-downloader (>= 11.0.1.152ubuntu1); however:
          Package flashplugin-downloader is not configured yet.
        dpkg: error processing flashplugin-installer (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates its a followup error from a previous failure.
        Errors were encountered while processing:
         flashplugin-downloader
         flashplugin-installer
        Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)

    The same "Setting up flashplugin-downloader ... 404 Not Found ... dependency problems - leaving unconfigured" block then repeats with a second timestamp (18:47:33). This seems to be a bug that has been reported. Does anyone know a workaround?

    Read the article

  • Diagnosing Microsoft SQL Server error 9001: The log for the database is not available.

    - by Scott Mitchell
    Over the weekend a website I run stopped functioning, recording the following error in the Event Viewer each time a request is made to the website:

        Event ID: 9001
        The log for database 'database name' is not available. Check the event log for related error messages. Resolve any errors and restart the database.

    The website is hosted on a dedicated server, so I am able to RDP into the server and poke around. The LDF file for the database exists in the C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA folder, but attempting to do any work with the database from Management Studio results in a dialog box reporting the same error: 9001, the log for the database is not available. This is the first time I've received this error, and I've been hosting this site (and others) on this dedicated web server for over two years now. It is my understanding that this error indicates a corrupt log file. I was able to get the website back online by detaching the database and then restoring a backup from a couple of days ago, but my concern is that this error is indicative of a more sinister problem, namely a hard drive failure. I emailed support at the web hosting company and this was their reply: "There doesn't appear to be any other indications of the cause in the Event Log, so it's possible that the log was corrupted. Currently the memory's resources is at 87%, which also may have an impact but is unlikely." Can the log just "become corrupted"? My question: what are the next steps I should take to diagnose this problem? How can I determine if this is, indeed, a hardware problem? And if it is, are there any options beyond replacing the disk? Thanks

    Read the article
