Search Results

Search found 31472 results on 1259 pages for 'offline database'.


  • How to export SQL Server data from corrupted database (with disk write error)

    - by damitamit
    IT realised there was a disk write error on our production SQL Server 2005, which was causing the backups to fail. By the time they noticed, the most recent nightly backup was already old, so we were not able to simply restore it on another server. The database is still running and in constant use, but DBCC CHECKDB fails, as do the SQL Server backup task, Copy Database and the Export Data wizard. It does seem, however, that all the data can still be read from the tables (e.g. using bcp). Another observation I have made is that the transaction log is nearly double the size of the database (does that mean the changes aren't being written to the MDF?). What would be the best plan of attack to get the database back to a state where backups work and the data is safe? Take the database offline and use the MDF/LDF to somehow recreate the database on another SQL Server? Export the data with bcp, create the database on another SQL Server (using the Generate Scripts function on the corrupt database to create the schema on the new one) and use bcp again to import the data? Or some other option that is the right course of action in this situation? The IT manager says the data is safe because, if the server fails, it can be restored from the MDF/LDF. I'm not so sure, so I insisted that we start exporting the data each night as a failsafe (using bcp, for example). IT are also having issues on the hardware side of things: supposedly the disk error is on a virtualised disk and can't be rebuilt like a normal RAID array (or something like that). Please excuse my use of incorrect terminology and incorrect assumptions about how SQL Server operates; I'm the application developer and have been called in to help (as it seems IT know less about SQL Server than I do). Many thanks, Amit
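    If the tables really are still readable, a per-table bcp export is a reasonable nightly failsafe while native backups are broken. Below is a minimal sketch of such a job, written in Python around the bcp command line; the server name, database name, table list and output directory are placeholders, and in practice the table list would be generated from the same scripted schema used to rebuild the database elsewhere.

```python
# Hypothetical nightly export: dump every table with bcp while native backups fail.
# Assumes bcp is on PATH and Windows authentication (-T) is available.
import subprocess
from pathlib import Path

SERVER = "PRODSQL01"                      # placeholder server name
DATABASE = "ProductionDb"                 # placeholder database name
TABLES = ["dbo.Orders", "dbo.Customers"]  # in practice, enumerate from sys.tables
OUT_DIR = Path(r"D:\exports")

OUT_DIR.mkdir(parents=True, exist_ok=True)
for table in TABLES:
    out_file = OUT_DIR / (table.replace(".", "_") + ".bcp")
    # -c = character format, -T = trusted connection, -S = server name
    cmd = ["bcp", f"{DATABASE}.{table}", "out", str(out_file),
           "-S", SERVER, "-T", "-c"]
    subprocess.run(cmd, check=True)
```

    The matching "bcp ... in" commands (or the Import/Export wizard) would then reload the files into a freshly scripted schema on another server.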

    Read the article

  • Database cluster... without Master/Slave?

    - by RadiantHex
    Hi there! I'm wondering whether it is possible to have a set of SQL database servers that all accept writes and replicate to one another without producing conflicting data. I imagine a master/slave structure would normally be mandatory, but I would like to know whether a system in which the servers have no hierarchy can support replication. Currently I'm using MySQL, but I would be happy to move to another database if needed. Any ideas? :)

    Read the article

  • Oracle TimesTen In-Memory Database Performance on SPARC T4-2

    - by Brian
    The Oracle TimesTen In-Memory Database is optimized to run on Oracle's SPARC T4 processor platforms running Oracle Solaris 11, providing unsurpassed scalability, performance, upgradability, protection of investment and return on investment. The following results demonstrate the value of combining Oracle TimesTen In-Memory Database with SPARC T4 servers and Oracle Solaris 11:

    - On a Mobile Call Processing test, the 2-socket SPARC T4-2 server outperforms Oracle's SPARC Enterprise M4000 server (4 x 2.66 GHz SPARC64 VII+) by 34%, and Oracle's SPARC T3-4 (4 x 1.65 GHz SPARC T3) by 2.7x, or 5.4x per processor.
    - Utilizing the TimesTen Performance Throughput Benchmark (TPTBM), the SPARC T4-2 server protects investments with 2.1x the overall performance of a 4-socket SPARC Enterprise M4000 server in read-only mode and 1.5x the performance in update-only testing. This is 4.2x more performance per processor than the SPARC64 VII+ 2.66 GHz based system, 10x more performance per processor than the SPARC T2+ 1.4 GHz server, and 1.6x better performance per processor than the SPARC T3 1.65 GHz based server.
    - In replication testing, the two-socket SPARC T4-2 server is over 3x faster than a four-socket SPARC Enterprise T5440 server, both in an asynchronous replication environment and with highly available 2-Safe replication. This testing emphasizes parallel replication between systems.

    Performance Landscape

    Mobile Call Processing Test Performance
    System       Processor                Sockets/Cores/Threads   Tps
    SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            218,400
    M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32             162,900
    SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512            80,400

    TimesTen Performance Throughput Benchmark (TPTBM), Read-Only
    System       Processor                Sockets/Cores/Threads   Tps
    SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512            7.9M
    SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            6.5M
    M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32             3.1M
    T5440        SPARC T2+, 1.4 GHz       4 / 32 / 256            3.1M

    TimesTen Performance Throughput Benchmark (TPTBM), Update-Only
    System       Processor                Sockets/Cores/Threads   Tps
    SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            547,800
    M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32             363,800
    SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512            240,500

    TimesTen Replication Tests
    System       Processor                Sockets/Cores/Threads   Asynchronous   2-Safe
    SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128            38,024         13,701
    SPARC T5440  SPARC T2+, 1.4 GHz       4 / 32 / 256            11,621         4,615

    Configuration Summary

    Hardware configurations:

    SPARC T4-2 server: 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 1 x 8 Gb/s FC Qlogic HBA; 1 x 6 Gb/s SAS HBA; 4 x 300 GB internal disks; Sun Storage F5100 Flash Array (40 x 24 GB flash modules); 1 x Sun Fire X4275 server configured as COMSTAR head.

    SPARC T3-4 server: 4 x SPARC T3 processors, 1.6 GHz; 512 GB memory; 1 x 8 Gb/s FC Qlogic HBA; 8 x 146 GB internal disks; 1 x Sun Fire X4275 server configured as COMSTAR head.

    SPARC Enterprise M4000 server: 4 x SPARC64 VII+ processors, 2.66 GHz; 128 GB memory; 1 x 8 Gb/s FC Qlogic HBA; 1 x 6 Gb/s SAS HBA; 2 x 146 GB internal disks; Sun Storage F5100 Flash Array (40 x 24 GB flash modules); 1 x Sun Fire X4275 server configured as COMSTAR head.

    Software configuration: Oracle Solaris 11 11/11, Oracle TimesTen 11.2.2.4.

    Benchmark Descriptions

    The TimesTen Performance Throughput Benchmark (TPTBM) is shipped with TimesTen and measures the total throughput of the system. The workload can test read-only, update-only, delete and insert operations as required.

    Mobile Call Processing is a customer-based workload for processing calls made by mobile phone subscribers. The workload has a mixture of read-only, update, and insert-only transactions. The peak throughput is measured from multiple concurrent processes executing the transactions until a peak is reached via saturation of the available resources.

    The Parallel Replication tests use both asynchronous and 2-Safe replication methods. For asynchronous replication, transactions are processed in batches to maximize the throughput capabilities of the replication server and network. In 2-Safe replication, also known as no-data-loss or high availability, transactions are replicated between servers immediately, emphasizing low latency. For both environments, performance is measured by the number of parallel replication servers and the maximum transactions per second across all concurrent processes.

    See Also: SPARC T4-2 Server (oracle.com, OTN); Oracle TimesTen In-Memory Database (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN).

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • How do I rename the Tfs_xxxConfiguration database for TFS 2010?

    - by Jaxidian
    When we installed TFS 2010, we made some poor decisions in naming our databases. I have figured out how to rename everything except for the Tfs_xxxConfiguration database. How can I get this renamed? I can only find a read-only display of the connection string to it - cannot find where to alter that connection string either directly or indirectly. I'd really prefer to not have to rebuild it from scratch... again... Thanks!

    Read the article

  • how many tables can an MS SQL database hold?

    - by Peter Turner
    I've run into this cryptic statement for SQL Server: "Files Per Database: 32,767". What does that mean exactly? Is there a maximum number of tables for a given version of SQL Server? We try to support SQL Server 2005 and later, 32-bit and 64-bit. So if anyone has a handy-dandy table they use to figure out how many tables they can have per DB for Microsoft SQL Server, I'd heartily appreciate seeing it.

    Read the article

  • Online DATAFILE MOVE in Oracle Database 12c

    - by Ulrike Schwinn (DBA Community)
    Some operations in a database environment can be carried out not only offline but also online. An important characteristic of an online operation is that queries and DML operations can continue to run without interruption while the operation (for example a reorganisation) is in progress. Depending on how much maintenance time can be scheduled, it is therefore quite desirable to perform certain operations online. In general, every database release brings some enhancements around online operations, and the current release, 12c, adds a whole range of new operations in this area. During the first 12c events, the "DATAFILE ONLINE MOVE" command in particular attracted a great deal of attention from many customers. For that reason, this tip gives a short introduction to it. You can read more about it here.
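    For reference, the move itself is a single statement in 12c, and queries and DML against the affected tablespace keep running while it executes. A minimal sketch, shown here through the python-oracledb driver with placeholder paths and credentials (a suitably privileged DBA account is assumed):

```python
# Hypothetical example: relocate a datafile while the database stays open.
# Connection details and file paths are placeholders.
import oracledb

conn = oracledb.connect(user="system", password="secret", dsn="dbhost:1521/orclpdb")
cur = conn.cursor()
cur.execute(
    "ALTER DATABASE MOVE DATAFILE '/u01/oradata/users01.dbf' "
    "TO '/u02/oradata/users01.dbf'"
)
conn.close()
```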

    Read the article

  • How can I synchronize one set of data with another?

    - by RenderIn
    I have an old database and a new database. The old records were converted to the new database recently. All our old applications continue to point to the old database, but the new applications point to the new database. Currently the old database is the only one being updated, so throughout the day the new database becomes out of sync. It is acceptable for the new database to be out of sync for a day, so until all our applications are pointed to the new database I just need to write a nightly cron job that will bring it up to date. I do not want to purge the new database and run the complete conversion script each night, as that would reduce uptime and would create a mess in our auditing of that table. I'm thinking about selecting all the data from the old database, converting it to the new database structure in memory, and then checking for the existence of each record before inserting it in the new database. After that's done, I'd select everything from the new database and check if it exists in the old one, and if not delete it. Is this the simplest way to do this?
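    A minimal sketch of that nightly job, assuming both databases are reachable from one script, records share a primary key, and only inserts and deletes need to be propagated (table names, column names and the DB-API paramstyle are illustrative):

```python
# Hypothetical nightly sync: insert rows missing from the new database,
# then delete rows that no longer exist in the old one.
def sync(old_conn, new_conn, convert_row):
    old_cur, new_cur = old_conn.cursor(), new_conn.cursor()

    # Pull everything from the old database and convert it to the new structure,
    # keyed by primary key (assumed to be the first column).
    old_cur.execute("SELECT * FROM customers")
    old_rows = {row[0]: convert_row(row) for row in old_cur.fetchall()}

    # Insert anything the new database does not have yet.
    for pk, converted in old_rows.items():
        new_cur.execute("SELECT 1 FROM customers WHERE id = %s", (pk,))
        if new_cur.fetchone() is None:
            new_cur.execute(
                "INSERT INTO customers (id, name, email) VALUES (%s, %s, %s)",
                converted,
            )

    # Delete anything that has disappeared from the old database.
    new_cur.execute("SELECT id FROM customers")
    for (pk,) in new_cur.fetchall():
        if pk not in old_rows:
            new_cur.execute("DELETE FROM customers WHERE id = %s", (pk,))

    new_conn.commit()
```

    Note that, as described in the question, this skips updates to rows that already exist on both sides; catching those would need a column-by-column comparison or a last-modified timestamp.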

    Read the article

  • Sqlite3 : "Database is locked" error

    - by Miraaj
    Hi all, In my Cocoa application I am maintaining a SQLite database within the resources folder and trying to do some select and delete operations on it, but after some time it starts giving me a 'Database is locked' error. The methods which I am using for the select and delete operations are as follows:

      // method to retrieve data
      if (sqlite3_open([databasePath UTF8String], &database) != SQLITE_OK) {
          sqlite3_close(database);
          NSAssert(0, @"Failed to open database");
      }
      NSLog(@"mailBodyFor:%d andFlag:%d andFlag:%@", UId, x, Ffolder);
      NSMutableArray *recordsToReturn = [[NSMutableArray alloc] initWithCapacity:2];
      NSString *tempMsg;
      const char *sqlStatementNew;
      NSLog(@"before switch");
      switch (x) {
          case 9:
              // tempMsg=[NSString stringWithFormat:@"SELECT * FROM users_messages"];
              tempMsg = [NSString stringWithFormat:@"SELECT message,AttachFileOriName as oriFileName,AttachmentFileName as fileName FROM users_messages WHERE id = (select message_id from users_messages_status where id= '%d')", UId];
              NSLog(@"mail body query - %@", tempMsg);
              break;
          default:
              break;
      }
      sqlStatementNew = [tempMsg cStringUsingEncoding:NSUTF8StringEncoding];
      sqlite3_stmt *compiledStatementNew;
      NSLog(@"before if statement");
      if (sqlite3_prepare_v2(database, sqlStatementNew, -1, &compiledStatementNew, NULL) == SQLITE_OK) {
          NSLog(@"the sql is finalized");
          while (sqlite3_step(compiledStatementNew) == SQLITE_ROW) {
              NSMutableDictionary *recordDict = [[NSMutableDictionary alloc] initWithCapacity:3];
              NSString *message;
              if ((char *)sqlite3_column_text(compiledStatementNew, 0)) {
                  message = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatementNew, 0)];
              } else {
                  message = @"";
              }
              NSLog(@"message - %@", message);
              NSString *oriFileName;
              if ((char *)sqlite3_column_text(compiledStatementNew, 1)) {
                  oriFileName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatementNew, 1)];
              } else {
                  oriFileName = @"";
              }
              NSLog(@"oriFileName - %@", oriFileName);
              NSString *fileName;
              if ((char *)sqlite3_column_text(compiledStatementNew, 2)) {
                  fileName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatementNew, 2)];
              } else {
                  fileName = @"";
              }
              NSLog(@"fileName - %@", fileName);
              [recordDict setObject:message forKey:@"message"];
              [recordDict setObject:oriFileName forKey:@"oriFileName"];
              [recordDict setObject:fileName forKey:@"fileName"];
              [recordsToReturn addObject:recordDict];
              [recordDict release];
          }
          sqlite3_finalize(compiledStatementNew);
          sqlite3_close(database);
          NSLog(@"user messages return -%@", recordsToReturn);
          return recordsToReturn;
      } else {
          NSLog(@"Error while creating retrieving mailBodyFor in messaging '%s'", sqlite3_errmsg(database));
          sqlite3_close(database);
      }

      // method to delete data
      if (sqlite3_open([databasePath UTF8String], &database) != SQLITE_OK) {
          sqlite3_close(database);
          NSAssert(0, @"Failed to open database");
      }
      NSString *deleteQuery = [[NSString alloc] initWithFormat:@"delete from users_messages_status where id IN(%@)", ids];
      NSLog(@"users_messages_status msg deleteQuery - %@", deleteQuery);
      sqlite3_stmt *deleteStmnt;
      const char *sql = [deleteQuery cStringUsingEncoding:NSUTF8StringEncoding];
      if (sqlite3_prepare_v2(database, sql, -1, &deleteStmnt, NULL) != SQLITE_OK) {
          NSLog(@"Error while creating delete statement. '%s'", sqlite3_errmsg(database));
      } else {
          NSLog(@"successful deletion from users_messages");
      }
      if (SQLITE_DONE != sqlite3_step(deleteStmnt)) {
          NSLog(@"Error while deleting. '%s'", sqlite3_errmsg(database));
      }
      sqlite3_close(database);

    Things are going wrong in this sequence: data is retrieved; the 'Database is locked' error arises on performing the delete operation; and when I retry the first step, it now gives the same error. Can anyone suggest whether I am doing anything wrong or missing some check, and whether there is any way to unlock the database when it gives the locked error? Thanks, Miraaj

    Read the article

  • Is XSLT worth investing time in and are there any actual alternatives?

    - by Keeno
    I realize there have been a few other questions on this topic, and people say "use your language of choice to manipulate the XML", etc., but they don't quite fit my question exactly. Firstly, the scope of the project: we want to develop platform-independent e-learning. Currently it's a bunch of HTML pages, but as they grow and develop they become hard to maintain. The idea: generate an XML file + schema, then produce some XSLT files that process the XML into the e-learning modules, i.e. XML to HTML via XSLT. Why: we would like the flexibility to easily reformat the content (I realize CSS is a viable alternative here), and if we decide to alter the pages' layout or functionality in any way, I'm guessing altering the "shared" XSLT files would be easier than updating the HTML files. So far we have about 30 modules, with up to 10-30 pages each, and depending on some "parameters" we could output drastically different page layouts/structures, above and beyond what CSS can do. Now, all this has to be platform independent and able to run "offline", i.e. without a server powering the HTML. Negatives I've read so far for XSLT: overhead (not exactly sure why... is it the compute power needed to convert to HTML?), it is difficult to learn, and there may be better alternatives. Now, what I would like to know exactly is: are there actually any viable alternatives for this "offline" scenario? Am I going about it in the correct manner? Do you have any advice or alternatives? Thanks!
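    One way to keep the "offline" requirement while still using XSLT is to treat the transform as a build step and ship only the generated HTML, so nothing has to run in the learner's browser or on a server. A small sketch of such a step using Python's lxml (file names, the stylesheet and the "layout" parameter are all illustrative):

```python
# Hypothetical build step: turn each module's XML into static HTML via XSLT,
# so the finished course runs straight from the filesystem with no server.
from pathlib import Path
from lxml import etree

Path("output").mkdir(exist_ok=True)
transform = etree.XSLT(etree.parse("elearning.xsl"))  # the shared stylesheet

for xml_file in Path("modules").glob("*.xml"):
    doc = etree.parse(str(xml_file))
    # Stylesheet parameters let one XSLT file produce different layouts.
    result = transform(doc, layout=etree.XSLT.strparam("two-column"))
    out = Path("output", xml_file.stem + ".html")
    out.write_bytes(etree.tostring(result, pretty_print=True, method="html"))
```

    The alternative of transforming client-side (an xml-stylesheet processing instruction pointing at the XSLT) can also work offline in desktop browsers, though support and local-file restrictions vary, so pre-generating sidesteps the compatibility question entirely.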

    Read the article

  • How do I get my ubuntu server to listen for database connections?

    - by Bob Flemming
    I am having problems connecting to my database from outside phpMyAdmin. I'm pretty sure this is because my server isn't listening on port 3306. When I type sudo netstat -ntlp on my OTHER, working server I can see the following line:

      tcp    0    0 0.0.0.0:3306    0.0.0.0:*    LISTEN    20445/mysqld

    However, this line does not appear on the server I am having difficulty with. How do I make my server listen for MySQL connections? Here is my my.cnf file:

      #
      # The MySQL database server configuration file.
      #
      # You can copy this to one of:
      # - "/etc/mysql/my.cnf" to set global options,
      # - "~/.my.cnf" to set user-specific options.
      #
      # One can use all long options that the program supports.
      # Run program with --help to get a list of available options and with
      # --print-defaults to see which it would actually understand and use.
      #
      # For explanations see
      # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

      # This will be passed to all mysql clients
      # It has been reported that passwords should be enclosed with ticks/quotes
      # escpecially if they contain "#" chars...
      # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
      [client]
      port = 3306
      socket = /var/run/mysqld/mysqld.sock

      # Here is entries for some specific programs
      # The following values assume you have at least 32M ram

      # This was formally known as [safe_mysqld]. Both versions are currently parsed.
      [mysqld_safe]
      socket = /var/run/mysqld/mysqld.sock
      nice = 0

      [mysqld]
      #
      # * Basic Settings
      #
      user = mysql
      pid-file = /var/run/mysqld/mysqld.pid
      socket = /var/run/mysqld/mysqld.sock
      port = 3306
      basedir = /usr
      datadir = /var/lib/mysql
      tmpdir = /tmp
      lc-messages-dir = /usr/share/mysql
      #skip-networking=off
      #skip_networking=off
      #skip-external-locking
      #
      # Instead of skip-networking the default is now to listen only on
      # localhost which is more compatible and is not less secure.
      #bind-address = 0.0.0.0
      #
      # * Fine Tuning
      #
      key_buffer = 64M
      max_allowed_packet = 64M
      thread_stack = 650K
      thread_cache_size = 32
      # This replaces the startup script and checks MyISAM tables if needed
      # the first time they are touched
      myisam-recover = BACKUP
      #max_connections = 100
      #table_cache = 64
      #thread_concurrency = 10
      #
      # * Query Cache Configuration
      #
      query_cache_limit = 2M
      query_cache_size = 32M
      #
      # * Logging and Replication
      #
      # Both location gets rotated by the cronjob.
      # Be aware that this log type is a performance killer.
      # As of 5.1 you can enable the log at runtime!
      #general_log_file = /var/log/mysql/mysql.log
      #general_log = 1
      #
      # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
      #
      # Here you can see queries with especially long duration
      #log_slow_queries = /var/log/mysql/mysql-slow.log
      #long_query_time = 2
      #log-queries-not-using-indexes
      #
      # The following can be used as easy to replay backup logs or for replication.
      # note: if you are setting up a replication slave, see README.Debian about
      # other settings you may need to change.
      #server-id = 1
      #log_bin = /var/log/mysql/mysql-bin.log
      expire_logs_days = 10
      max_binlog_size = 100M
      #binlog_do_db = include_database_name
      #binlog_ignore_db = include_database_name
      #
      # * InnoDB
      #
      # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
      # Read the manual for more InnoDB related options. There are many!
      #
      # * Security Features
      #
      # Read the manual, too, if you want chroot!
      # chroot = /var/lib/mysql/
      #
      # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
      #
      # ssl-ca=/etc/mysql/cacert.pem
      # ssl-cert=/etc/mysql/server-cert.pem
      # ssl-key=/etc/mysql/server-key.pem

      [mysqldump]
      quick
      quote-names
      max_allowed_packet = 32M

      [mysql]
      #no-auto-rehash # faster start of mysql but no tab completition

      [isamchk]
      key_buffer = 32M

      #
      # * IMPORTANT: Additional settings that can override those from this file!
      # The files must end with '.cnf', otherwise they'll be ignored.
      #
      !includedir /etc/mysql/conf.d/

    Read the article

  • What is the relation between database books by Ullman et al.?

    - by macias
    A First Course in Database Systems by Jeffrey D. Ullman, Jennifer Widom (Amazon links) Database System Implementation by Hector Garcia-Molina, Jeffrey D. Ullman, Jennifer D. Widom Database Systems: The Complete Book by Hector Garcia-Molina, Jeffrey D. Ullman, Jennifer Widom As far as I know the second one is the second "part" of the first one. But what about the third one -- is it just first+second published in one volume? I would like to buy them, but I don't want to get redundant reading. Thank you in advance for clarification.

    Read the article

  • Map only certain parts of the class to a database using SQLAlchemy?

    - by Az
    When mapping an object using SQLAlchemy, is there a way to only map certain elements of a class to a database, or does it have to be a 1:1 mapping? Example: class User(object): def __init__(self, name, username, password, year_of_birth): self.name = name self.username = username self.password = password self.year_of_birth = year_of_birth Say, for whatever reason, I only wish to map the name, username and password to the database and leave out the year_of_birth. Is that possible and will this create problems? Edit - 25/03/2010 Additionally, say I want to map username and year_of_birth to a separate database. Will this database and the one above still be connected (via username)?
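    Mapping only part of the class works: the ORM persists just the attributes declared as mapped columns, and anything else set in __init__ stays a plain in-memory attribute (so it is neither saved nor restored when the object is loaded back). A minimal sketch in the declarative style of recent SQLAlchemy (table and column definitions are illustrative; in older versions declarative_base is imported from sqlalchemy.ext.declarative):

```python
# Minimal sketch: name, username and password are mapped; year_of_birth is not.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    username = Column(String(100))
    password = Column(String(100))

    def __init__(self, name, username, password, year_of_birth):
        self.name = name
        self.username = username
        self.password = password
        self.year_of_birth = year_of_birth  # unmapped: lives only in memory
```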

    Read the article

  • So what RDF database do I use for a product attribute situation (initially I thought about using EAV)?

    - by keisimone
    Hi, I have a similar issue to the one described in http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters. I am now convinced to use RDF, only because of one of the comments made by Bill Karwin in the answer to the above issue, but I already have a database in MySQL and the code is in PHP. 1) So what RDF database should I use? 2) Do I combine the approaches, meaning I keep class table inheritance in the MySQL database and put just the weird product attributes in RDF? I don't think I should move everything to an RDF database, since it is only the products and their wide array of possible attributes and values that are giving me the problem. 3) What PHP resources and articles should I look at that will help me with building this? 4) Any further articles or resources that help me better understand RDF in the context of the above challenge, of building something that will hold all sorts of products' attributes and values, will be greatly appreciated; I tend to work better when I have a conceptual understanding of what is going on. Do bear in mind I am a complete novice at this and my knowledge of programming and databases is average at best.
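    As an illustration of the hybrid idea in point 2, the fixed product data can stay in MySQL while only the open-ended attributes live as RDF triples keyed to the product's id. The sketch below uses Python's rdflib purely to show the triple pattern (predicate names and URIs are invented); on the PHP side the same idea maps onto RDF libraries such as ARC2 or EasyRdf.

```python
# Hypothetical sketch: the MySQL row stays the product of record; its id becomes
# the subject URI, and the variable attributes are stored as triples.
from rdflib import Graph, Literal, Namespace, URIRef

ATTR = Namespace("http://example.org/attr/")
g = Graph()

product = URIRef("http://example.org/product/42")  # matches MySQL products.id = 42
g.add((product, ATTR.screenSizeInches, Literal(15.6)))
g.add((product, ATTR.batteryLifeHours, Literal(8)))
g.add((product, ATTR.colour, Literal("midnight blue")))

g.serialize(destination="product_attributes.ttl", format="turtle")

# Later: fetch every attribute of product 42, whatever they happen to be.
for _, predicate, value in g.triples((product, None, None)):
    print(predicate, value)
```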

    Read the article

  • How do I make software that preserves database integrity and correctness?

    - by user287745
    I have made an application project in Visual Studio 2008 C#, with a SQL Server database created from Visual Studio 2008. The database has around 20 tables with many fields in each, and I have made an interface for adding, deleting, editing and retrieving data according to the predefined needs of the users. Now I have to turn the project into software that I can deliver to my professor; that is, he can just double-click the icon and the software simply starts, with no Visual Studio 2008 needed to start it under the debugger. The database will be on one powerful computer (dual core, latest everything, Windows XP) and the users will access it from other computers connected over the LAN. I am able to change the connection string to the shared database from Visual Studio 2008 / the debugger whenever the server changes, but how am I supposed to do that once it's standalone software? There will be many clients. Am I supposed to give the same software to everyone so they can all connect to the database? And how will the integrity and correctness of the database be maintained? I mean, the db.mdf file will be in a folder shared with read and write access, so it is not guaranteed that only one user will write at a time. Is there any coding needed for this, or something else?

    Read the article

  • How to use a c# datagridview to update a database file just like Access does?

    - by mackeyka
    I have googled everywhere and I am finally giving up and asking here. I am working in Visual Studio 2010 with C#. I have set up a form with a DataGridView connected to an MSSQL database, and I need to save changes made in the DataGridView back to the physical database. I am having some success, but I think I am going about some of it completely wrong because I cannot get it to save consistently. What I really want is for the updates to work just like they do when working with Access: when I edit a row in the DataGridView and then leave that row, either by selecting another row, by selecting some other control on the form, or even by changing to another form or quitting the application, the row should be automatically updated in the physical database. The first part of this question, I think, is what is the proper event to use to trigger the save; the second is what methods should be used to actually write the data to the database?

    Read the article

  • PHP e-commerce site talking to internal database for stock / ordering?

    - by CitrusTree
    Hi. I'm working on an e-commerce site (either bespoke with PHP, or using Drupal/Ubercart), and I'd like to investigate the site interacting with an internal (FileMaker) database we use to manage stock and orders. Currently we manually transfer orders from the web site to our own database, and the site does not check or record changes in stock. My plan to allow the two to interact is as follows: 1) make the internal database available externally on a machine with a fixed IP; 2) allow external access from the site only; 3) connect to the internal database using ODBC (or similar); 4) use simple queries to check stock, record stock changes and record order details. Am I missing something here, as this sounds quite straightforward? Is there another solution I should be taking a look at? Thanks in advance for any help or comments.
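    Assuming steps 3 and 4 above, the stock check and order recording are ordinary parameterised queries over the ODBC connection. The sketch below uses Python's pyodbc simply to illustrate the queries (the PHP site would use its own ODBC or PDO_ODBC calls); the DSN, table and column names are invented, and FileMaker's ODBC sharing must be enabled for this to work at all.

```python
# Hypothetical stock check / order recording against a FileMaker ODBC DSN.
import pyodbc

conn = pyodbc.connect("DSN=filemaker_stock;UID=webshop;PWD=secret")
cur = conn.cursor()

def in_stock(sku, quantity):
    # Read the current stock level for one product.
    cur.execute("SELECT stock_level FROM Products WHERE sku = ?", sku)
    row = cur.fetchone()
    return row is not None and row.stock_level >= quantity

def record_order(sku, quantity, order_ref):
    # Decrement stock and log the web order together, then commit.
    cur.execute("UPDATE Products SET stock_level = stock_level - ? WHERE sku = ?",
                quantity, sku)
    cur.execute("INSERT INTO WebOrders (order_ref, sku, quantity) VALUES (?, ?, ?)",
                order_ref, sku, quantity)
    conn.commit()
```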

    Read the article

  • In Linux, what's the best way to delegate administration responsibilities, like for Apache, a database, or some other application?

    - by Andrew Banks
    In Linux, what's the best way to delegate administration responsibilities for Apache and other "applications"? File permissions? Sudo? A mix of both? Something else? At work we have two tiers of "administrators" Operating system administrators. These are your run-of-the-mill "server administrators." They are responsible for just the operating system. Application administrators. The people who build the web site. This includes not only writing the SQL, PHP, and HTML, but also setting up and running Apache and PostgreSQL or MySQL. The aforementioned OS admins will install this stuff, but it's mainly up to the app admins to edit all the config files, start and stop processes when needed, and so on. I am one of the app admins. This is different than what I am used to. I used to just write code. The sysadmin took care not only of the OS but also installing, setting up, and keeping up the server software. But he left. Now I'm in charge of setting up Apache and the database. The new sysadmins say they just handle the operating system. It's no problem. I welcome learning new stuff. But there is a learning curve, even for the OS admins. Apache, by default, seems to be set up for administration by root directly. All the config files and scripts are 644 and owned by root:root. I'm not given the root password, naturally, so the OS admins must somehow give my ordinary OS user account all the rights necessary to edit Apache's config files, start and stop it, read its log files, and so on. Right now they're using a mix of: (1) giving me certain sudo rights, (2) adding me to certain groups, and (3) changing the file permissions of various directories, to make them writable by one of the groups I'm in. This never goes smoothly. There's always a back-and-forth between me and the sysadmins. They say it's ready. Then I try certain things, and half of them I still can't do. So they make some more changes. Then finally I seem to be independent and can administer Apache and the database without pestering them anymore. It's the sheer complication and amount of changes that make me uncomfortable. Even though it finally works, more or less, it seems hackneyed. I feel like we're doing it wrong. It seems like the makers of the software would have anticipated this scenario (someone other than root administering it) and have a clean two- or three-step program to delegate responsibility to me. But it feels like we are really chewing up the filesystem and making it far and away from the default set-up. Any suggestions? Are we doing it the recommended way? P.S. For PostgreSQL it seems a little better. Its files are owned by a system user named postgres. So giving me the right to run sudo su - postgres gives me just about everything. I'm just now getting into MySQL, but it seems to be set up similarly. But it seems a little weird doing all my work as another user.

    Read the article

  • What are the best practices for storing PHP session data in a database?

    - by undefined
    I have developed a web application that uses a web server and database hosted by a web host (on the ground) and a server running on Amazon Web Services EC2. Both servers may be used by a user during a session, and both will need to know some session information about that user. I don't want to POST the information that is needed by both servers, because I don't want it to be visible to browsers, Firebug, etc. So I need my session data to persist across servers, and I think this means the best option is to store all or some of the data I need in the database rather than in a session. The easiest thing to do seems to be to keep the sessions but POST the session_id between servers and use it as the key to look up the data I need from a 'user_session_data' table in the database. I have looked at Tony Marston's article "Saving PHP Session Data to a database" - should I use this, or will a table with the session data I need, keyed by session_id, suffice? What would be the downside of creating my own table and set of methods for storing the data I need in the database?

    Read the article
