Search Results

Search found 33242 results on 1330 pages for 'database optimization'.

Page 65/1330 | < Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >

  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a Neural Network to predict things based on user data. The user can select the data to be used in training the network and then use their trained network to predict things. I am using a framework to create, train and query the networks. This uses Java. The framework has persistence for saving a network to an XML file. What is the best way to store these files? I can see several potential ideas, but I need help choosing which is best:
    1. Save each network to a separate XML file with a name that is stored in the database. Load this each time.
    2. Save all the networks to the same XML file, with each network having a different name that is stored in the database.
    3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database. This would need to be returned to the Java code when a prediction needs to be made.
    I am able to do 1 or 2, but I think their performance will be quite limited, and I am on shared hosting at the moment, so I don't know how pleased the host would be with thousands of files. Also, after adding a few thousand records to one XML file, I noticed a massive performance hit on saving to it. If I were able to implement option 3 somehow, I think it would be best: no issues with separate processes accessing the database, performance would probably be better, and there would be no files lying around. However, the part of the neural network framework I am using (Encog) that saves to a file needs access to a Java File object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this would be with temporary files, but I don't know if this is the correct way to do it. I would appreciate any ideas on the best way to implement any of the above 3 ideas or any alternatives. Thanks!
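
    A minimal sketch of the temporary-file bridge for option 3, assuming the framework can only write to and read from a java.io.File; the saveNetworkToFile/loadNetworkFromFile calls below are placeholders for the framework's actual persistence API, not real Encog method names:

    import java.io.File;
    import java.nio.file.Files;   // Java 7+ NIO used for brevity

    public class NetworkBlobBridge {

        // Serialize a trained network to bytes suitable for a database BLOB/TEXT column.
        public static byte[] networkToBytes(Object network) throws Exception {
            File tmp = File.createTempFile("network", ".xml");
            try {
                saveNetworkToFile(network, tmp);           // placeholder for the framework's save call
                return Files.readAllBytes(tmp.toPath());   // read the XML back into memory
            } finally {
                tmp.delete();                              // no files left lying around
            }
        }

        // Rebuild a network from the bytes previously stored in the database.
        public static Object bytesToNetwork(byte[] data) throws Exception {
            File tmp = File.createTempFile("network", ".xml");
            try {
                Files.write(tmp.toPath(), data);
                return loadNetworkFromFile(tmp);           // placeholder for the framework's load call
            } finally {
                tmp.delete();
            }
        }

        // Placeholders: substitute the framework's real persistence calls here.
        private static void saveNetworkToFile(Object network, File f) { /* framework-specific */ }
        private static Object loadNetworkFromFile(File f) { return null; /* framework-specific */ }
    }

    The bytes (or the equivalent string) can then be handed to the Django side for storage and passed back to the Java code when a prediction is needed.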

    Read the article

  • Optimize strategies for xml parsing?

    - by Future2020
    I am looking for general optimization tips and guidelines for XML parsing. One of the optimization strategies is, of course, selecting the "right" parser. A detailed comparison between the available parsers for iOS can be found here: http://www.raywenderlich.com/553/how-to-chose-the-best-xml-parser-for-your-iphone-project. However, I am currently trying to investigate general guidelines and tips on how to optimize my payloads to increase performance as much as possible. This question is similar to a question I have posted in the context of iOS, but I have not received a sufficient answer. So this question is not in the context of any particular programming language.

    Read the article

  • Exam Prep Seminar: “Upgrade to Oracle Database 12c”

    - by hamsun
    By Brandye Barrington, April 29, 2014. A new certification exam preparation seminar, "Upgrade to Oracle Database 12c", is now available, presented by Gwen Lazenby. It is aimed at candidates preparing for exam 1Z0-060 and walks through the upgrade and new-feature material in roughly 43 modules, covering topics such as the multitenant architecture (creating and managing CDBs and PDBs), Unicode and EM Express, RMAN in Oracle Database 12c, ADR, Direct NFS, SQL enhancements, and Oracle Data Pump. The seminar can be purchased on its own (Certification Exam Prep Seminar: Upgrade to Oracle Database 12c) or as a value package bundled with Kaplan SelfTest practice exams (Exam Prep Seminar Value Package: Upgrade to Oracle Database 12c). Related exam: 1Z0-060 Upgrade to 12c - New Features of Oracle Database 12c. Related certification: Oracle Database 12c Administrator Certified Professional.

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I'm working on optimizing some of my queries and I have a query that states: select * from SC where c_id ="+c_id"
    The schema of SC looks like this:
    SC (
        c_id int not null,
        date_start date not null,
        date_stop date not null,
        r_t_id int not null,
        nt int,
        t_p decimal,
        PRIMARY KEY (c_id, r_t_id, date_start, date_stop));
    My immediate thought on how the index should be created is a covering index in this order: INDEX(c_id, date_start, date_stop, nt, r_t_id, t_p)
    The reasons for this order are:
    - The WHERE clause selects on c_id, thus making it the first column in the sort order.
    - Next, date_start and date_stop, to define a sort of "range" in these parameters.
    - Next, nt, because it will be selected.
    - Next, r_t_id, because it is an ID for a specific type in my r_t table.
    - And last, t_p, because it is just information.
    I don't know if it is at all necessary to order it in a specific way when it is a SELECT ALL statement. I should say that SC is not the biggest table. I can't say exactly how many rows it contains, but an estimate would be between <10 and 1000. The next thing to add is that other queries insert data into SC, and I know that indexes on tables which receive insertions can be cost-ineffective, but can I somehow find a golden middle way to keep this performing well? I don't know if it makes a difference, but I'm using an IBM DB2 version 9.7 database. Sincerely, Mestika
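
    A sketch of the covering index being proposed, using the column order from the reasoning above (DDL assumed for DB2 9.7):

    CREATE INDEX idx_sc_covering
        ON SC (c_id, date_start, date_stop, nt, r_t_id, t_p);

    With this in place, the query filtering on c_id can be answered from the index alone, since every selected column appears in it.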

    Read the article

  • Copy Selective Data from Database to Invoice, Based on Certain Criteria

    - by Scott
    For starters, here is an example of a Microsoft Excel database I am working with:
    Month/Address/Name/Description/Amount
    January/123 Street/Fred/Painting/100
    January/456 Avenue/Scott/Flooring/400
    January/789 Road/Scott/Plumbing/100
    February/123 Street/Fred/Flooring/600
    February/246 Lane/Fred/Electrical/300
    March/789 Road/Scott/Drywall/150
    What I want to be able to do is selectively copy info from this database to invoices (also Excel). The invoice has three columns: Address/Description/Amount. I want to be able to automatically fill the invoices in as the database is filled in (either automatically, or if I have to actually manually run the macro to do it, that might be fine). Each name (Scott, Fred, etc.) will have their own set of 12 invoices for the year. So, e.g., I want to be able to produce a January invoice for all work done for Scott in January, showing the address, the description and the amount, line by line. So every time work on Scott's address(es) is done, the database is filled in, and I want it to "send" that information to the invoice on the next available line, filling in only the Address/Description/Amount columns from the database. Fred's invoice should fill in as any work is done on Fred's addresses. And once the month changes, the next invoice should start filling in. So first I need to filter the data by the month and the name (and there is actually one more column to filter by, but let's keep this example simpler). Then I need to list the remaining data on the invoice, but only certain cells from the rows that are now left. Help anyone?

    Read the article

  • Creating an AJAX Searchable Database.

    - by Austin
    Currently I am using MySQLi to parse a CSV file into a database; that step has been accomplished. However, my next step would be to make this database searchable and automatically updated via jQuery.ajax(). Some people suggest that I print out the database in another page and access it externally. I'm quite new to jQuery + AJAX, so if anyone could point me in the right direction that would be greatly appreciated. I understand that the documentation on ajax should be enough to tell me what I'm looking for, but it appears to talk only about retrieving data from an external file; what about from a MySQL database? The code so far stands:

    <head>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js"></script>
    </head>
    <body>
        <input type="text" id="search" name="search" />
        <input type="submit" value="submit">
        <?php
        show_source(__FILE__);
        error_reporting(E_ALL);
        ini_set('display_errors', '1');
        $category = NULL;
        $mc = new Memcache;
        $mc->addServer('localhost','11211');
        $sql = new mysqli('localhost', 'user', 'pword', 'db');
        $cache = $mc->get("updated_DB");
        $query = 'SELECT cat,name,web,kw FROM infoDB WHERE cat LIKE ? OR name LIKE ? OR web LIKE ? OR kw LIKE ?';
        $results = $sql->prepare($query);
        $results->bind_param('ssss', $query, $query, $query, $query);
        $results->execute();
        $results->store_result();
        ?>
    </body>
    </html>
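
    A minimal sketch of the AJAX round trip being asked about, assuming a separate server-side script (called search.php here, a hypothetical name) that runs the prepared query against MySQL with the submitted term and echoes the matching rows as JSON via json_encode():

    $('#search').keyup(function () {
        $.ajax({
            url: 'search.php',              // hypothetical endpoint that runs the prepared query
            data: { q: $(this).val() },     // search term, available to PHP as $_GET['q']
            dataType: 'json',
            success: function (rows) {
                // rows is the decoded JSON array of matches; render it into the page here
                console.log(rows);
            }
        });
    });

    The page itself never talks to MySQL directly; the browser only exchanges the search term and the JSON result with the PHP script.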

    Read the article

  • Database migration pattern for Java?

    - by Eno
    I'm working on some database migration code in Java. I'm also using a factory pattern so I can use different kinds of databases, and each kind of database I'm using implements a common interface. What I would like to do is have a migration check that is internal to the class and runs some database schema update code automatically. The actual update is pretty straightforward (I check the schema version in a table and compare it against a constant in my app to decide whether to migrate or not, and between which versions of the schema). To make this automatic I was thinking the test should live inside (or be called from) the constructor. OK, fair enough, that's simple enough. My problem is that I don't want the test to run every single time I instantiate a database object (it runs a query, so having it run on every construction is not efficient). So maybe this should be a class static method? I guess my question is, what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration test runs only once OR is super-efficient.
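
    One common shape for the "run at most once" requirement is a static guard, so only the first caller in the process pays for the version query; a minimal sketch (class and method names are illustrative, not from the question):

    import java.util.concurrent.atomic.AtomicBoolean;

    public abstract class MigratingDatabase {

        // A constant in the app: the schema version this build expects.
        protected static final int EXPECTED_SCHEMA_VERSION = 4;

        // Shared across all instances, so the check runs at most once per JVM.
        private static final AtomicBoolean migrationChecked = new AtomicBoolean(false);

        // Call this from the factory (or at the end of the concrete constructor)
        // once the object is ready to run queries.
        protected final void ensureMigrated() {
            if (migrationChecked.compareAndSet(false, true)) {
                int current = readSchemaVersion();   // e.g. SELECT version FROM schema_info
                if (current < EXPECTED_SCHEMA_VERSION) {
                    migrate(current, EXPECTED_SCHEMA_VERSION);
                }
            }
        }

        // Each concrete database type supplies its own implementation.
        protected abstract int readSchemaVersion();
        protected abstract void migrate(int from, int to);
    }

    Calling ensureMigrated() from the factory keeps the constructors cheap and avoids running overridable code before the subclass is fully constructed.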

    Read the article

  • Database design for sharing photos site?

    - by javaLearner.java
    I am using PHP and MySQL. If I want to build a photo-sharing website, what's the best database design for uploading and displaying photos? This is what I have in mind:
    domain:
    |_> photos
    |_> user
    A logged-in user will upload photos at [http://www.domain.com/user/upload.php]. The photos are stored in the filesystem, and the path to each photo is stored in the database. So my photos folder would look like: photos/userA/subfolders+photos, photos/userB/subfolders+photos, photos/userC/subfolders+photos, etc. The public/other people may view his photos at [http://www.domain.com/photos/user/?photoid=123], where 123 is the photo id; from there, I will query the database to fetch the path and display the image. My questions:
    - What's the best database design for a photo-sharing website (like Flickr)?
    - Will there be any problems if I keep creating new folders inside the "photos" folder? What if hundreds of thousands of users register? What are the best practices?
    - What size of photos should I keep? Currently I only store a thumbnail (100x100) and (max) 1600x1200 px photos.
    - What other things should I take note of when developing a photo-sharing website?
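
    A minimal sketch of the path-in-database approach described above (table and column names are illustrative, not a recommendation from the question):

    CREATE TABLE photos (
        photo_id     INT UNSIGNED NOT NULL AUTO_INCREMENT,
        user_id      INT UNSIGNED NOT NULL,
        file_path    VARCHAR(255) NOT NULL,   -- e.g. photos/userA/2010/05/img_0123.jpg
        thumb_path   VARCHAR(255) NOT NULL,   -- the 100x100 thumbnail
        title        VARCHAR(100) NULL,
        uploaded_at  DATETIME     NOT NULL,
        PRIMARY KEY (photo_id),
        KEY idx_photos_user (user_id)
    ) ENGINE=InnoDB;

    The page at /photos/user/?photoid=123 then only needs a lookup by photo_id to find file_path and serve or link the image.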

    Read the article

  • How to read a database record with a DataReader and add it to a DataTable

    - by Olga
    Hello, I have some data in an Oracle database table (around 4 million records) which I want to transform and store in an MSSQL database using ADO.NET. So far I have used (for much smaller tables) a DataAdapter to read the data out of the Oracle database and add the DataTable to a DataSet for further processing. When I tried this with my huge table, an OutOfMemory exception was thrown. (I assume this is because I cannot load the whole table into memory.) :) Now I am looking for a good way to perform this extract/transform/load without storing the whole table in memory. I would like to use a DataReader and read the individual data records into a DataTable. If there are about 100k rows in it, I would like to process them and clear the DataTable afterwards (to free memory again). Now I would like to know how to add a single data record as a row to a DataTable with ADO.NET, and how to completely clear the DataTable out of memory. My code so far:

    Dim dt As New DataTable
    Dim count As Int32
    count = 0
    ' reads data records from oracle database table'
    While rdr.Read()
        'read n records and add them to a dataTable'
        While count < 10000
            dt.Rows.Add(????)
            count = count + 1
        End While
        'transform data in the dataTable, and insert it to the destination'
        ' flush the dataTable after insertion'
        count = 0
    End While

    Thank you very much for your response!
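
    A sketch of the two pieces being asked about, assuming dt's columns have already been defined to match the reader (for example with one dt.Columns.Add per field); the column handling here is illustrative:

    ' Add the current reader record as a new row, one value per column.
    Dim row As DataRow = dt.NewRow()
    For i As Integer = 0 To rdr.FieldCount - 1
        row(i) = rdr.GetValue(i)
    Next
    dt.Rows.Add(row)

    ' After the 10k batch has been transformed and inserted, release the rows.
    dt.Clear()

    DataTable.Clear() drops the rows but keeps the column definitions, so the same DataTable can be reused for the next batch.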

    Read the article

  • Best way to enforce inter-table constraints inside database

    - by FerranB
    I'm looking for the best way to check inter-table constraints, a step beyond foreign keys. For instance, to check that a date value in a child record is between a date range held in two columns of the parent row. For instance:

    Parent table
    ID     DATE_MIN    DATE_MAX
    -----  ----------  ----------
    1      01/01/2009  01/03/2009
    ...

    Child table
    PARENT_ID  DATE
    ---------  ----------
    1          01/02/2009
    1          01/12/2009  <--- HAVE TO FAIL!
    ...

    I see two approaches:
    - Create on-commit materialized views, as shown in this article (or an equivalent on other RDBMSs).
    - Use stored procedures and triggers.
    Any other approach? Which is the best option?
    UPDATE: The motivation of this question is not about "putting the constraints in the database or in the application". I think that is a tired question and everyone does it the way they love. And, I'm sorry for the detractors, I'm developing with constraints in the database. From there, the question is "which is the best option to manage inter-table constraints inside the database?". I added "inside database" to the question title.
    UPDATE 2: Someone added the "oracle" tag. Of course materialized views are an Oracle tool, but I'm interested in any option regardless of whether it's on Oracle or other RDBMSs.
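
    A sketch of the trigger approach from the second option, using the example tables above with assumed names (parent_table, child_table, and a child column called child_date, since DATE itself is a reserved word):

    CREATE OR REPLACE TRIGGER trg_child_date_range
    BEFORE INSERT OR UPDATE ON child_table
    FOR EACH ROW
    DECLARE
        v_matches INTEGER;
    BEGIN
        SELECT COUNT(*)
          INTO v_matches
          FROM parent_table p
         WHERE p.id = :NEW.parent_id
           AND :NEW.child_date BETWEEN p.date_min AND p.date_max;

        IF v_matches = 0 THEN
            RAISE_APPLICATION_ERROR(-20001,
                'Child date is outside the parent''s date range');
        END IF;
    END;
    /

    Note that this only guards changes on the child side; narrowing DATE_MIN/DATE_MAX on the parent afterwards would need a matching check there, which is part of why the on-commit materialized view approach exists.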

    Read the article

  • database schema eligible for delta synchronization

    - by WilliamLou
    It's a question for discussion only. Right now, I need to re-design a MySQL database table. Basically, this table contains all the contract records I synchronized from another database. A contract record can be modified or deleted, and users can add new contract records via a GUI interface. At this stage, the table structure is exactly the same as the contract info (columns: serial number, expiry date, etc.). In that case, I can only synchronize the whole table (delete all old records, replace with new ones). If I want to delta-synchronize the table (only synchronize modified, new, and deleted records), how should I change the database schema? Here is the method I came up with, but I need your suggestions because I think it's a common scenario in database applications.
    1) Introduce a sequence number concept/column: for each sequence, mark the newly added records, modified records, and deleted records with this sequence number. By recording the last synchronized sequence number, only pass those records with a higher sequence number.
    2) Because deleted contracts can be added back, and the original table has primary key constraints, should I create another table for those deleted records, or add a flag column to indicate whether the contract has been deleted?
    I hope I explained my question clearly. Anyway, if you know of any articles or have your own suggestions about this, please let me know. Thanks!
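
    A sketch of the two schema changes under discussion, with illustrative column names (this assumes the soft-delete flag rather than a separate table for deleted rows):

    ALTER TABLE contract
        ADD COLUMN sync_seq   BIGINT UNSIGNED NOT NULL DEFAULT 0,  -- bumped whenever a row is added, modified or soft-deleted
        ADD COLUMN is_deleted TINYINT(1) NOT NULL DEFAULT 0,       -- soft-delete flag, so the primary key survives re-adding
        ADD INDEX idx_contract_sync (sync_seq);

    -- A delta pull then becomes: everything changed since the last sequence number the client saw.
    SELECT * FROM contract WHERE sync_seq > 1234;  -- 1234 = last synchronized sequence number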

    Read the article

  • Querying a database using Cocoa.

    - by S1syphus
    Hello everybody. Before I start, I should add a little disclaimer: I am relatively new to Cocoa development and C in general. However, I do have a copy of 'Cocoa Programming for Mac OS X, 3rd edition' by Aaron Hillegass, which I am working through and using as a base; if anybody has a copy, I am using the 'AmaZone' example on page 346 as a template. I'm trying to develop a small client app that takes a search string and then displays results from a database accordingly. The database will contain a list of files, their location, description & creation date, so for the moment the field number and types will remain the same. After looking around on SO, I saw something like:

    NSURL *myURL = [NSURL URLWithString:@"http://www.myserver.com/results.php"];
    NSArray *sqlResults = [[NSArray alloc] initWithContentsOfURL:myURL];

    I've worked with PHP before, so my current thinking after seeing this is to create a PHP script on the server that queries the database and creates XML output, and then just parse the XML response. Would this be OK, or are there any major pitfalls anybody can see that I can't? I know there are some database bundles; I've had a look at BaseTen for Postgres, but being relatively new to this, I didn't want to get in over my head. Or if anybody else has any other suggestions and ideas, they would be greatly appreciated.

    Read the article

  • Update database every 30 minutes once

    - by Ravi
    Hi, I am working on a C#.NET Windows application with SQL Server 2005. In this project I am using ADO.NET Data Services for database maintenance. I am working in the industrial automation domain, where data keeps being read for more than 8 hours. After reading data from the device, I have to periodically update the database based on a trigger. For example: reading starts at 9.00AM and the trigger fires at 9.50AM. Once the trigger fires, the last 30 minutes of data (9.20AM to 9.50AM) is stored in the database. After the trigger fires, data keeps being read from the device and stored in the database. At 10.00AM the trigger turns off, and at that point storing data to the database has to stop. The trigger fires again at 11.00AM; once it fires, the last 30 minutes of data (10.30AM to 11.00AM) is stored in the database, and after that, data read from the device keeps being stored in the database. So after the 10.00AM trigger turns off, data has to keep being kept locally. What I don't know is: until the trigger fires, where and how do I keep the data temporarily while still reading it, and once the trigger fires, how do I bring the last 30 minutes of data and store it in the database? I don't know how to achieve it. It would be great if anyone could suggest any idea. Thanks
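
    One way to keep readings locally until the trigger fires is an in-memory rolling buffer trimmed to the last 30 minutes; a minimal sketch (class and member names are illustrative, not from the question):

    using System;
    using System.Collections.Generic;

    class Reading
    {
        public DateTime Time;
        public double Value;
    }

    class ReadingBuffer
    {
        private readonly Queue<Reading> _buffer = new Queue<Reading>();
        private readonly TimeSpan _window = TimeSpan.FromMinutes(30);

        // Called for every value read from the device.
        public void Add(double value)
        {
            _buffer.Enqueue(new Reading { Time = DateTime.Now, Value = value });
            Trim();
        }

        // Called when the trigger fires: snapshot the last 30 minutes, then keep buffering.
        public List<Reading> TakeWindow()
        {
            Trim();
            return new List<Reading>(_buffer);   // hand this list to the database insert
        }

        private void Trim()
        {
            DateTime cutoff = DateTime.Now - _window;
            while (_buffer.Count > 0 && _buffer.Peek().Time < cutoff)
                _buffer.Dequeue();
        }
    }

    If the data volume or a crash-safety requirement makes pure in-memory buffering risky, the same idea works with a small local store (a file or a local table) keyed by timestamp instead of a Queue.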

    Read the article

  • Kohana 3.2 - Database Session losing data on new Page Request

    - by reado
    I've set up my dev Kohana server to use an encrypted database as the default Session type. I'm also using this in combination with Auth to implement user authentication. Right now my users are able to authenticate correctly and the authentication keys are being stored in the session. I'm also storing additional data like the user's firstname and businessname during the login procedure. When my login function is ready to redirect the user to the user dashboard, I'm able to see all the data correctly when I do Session::instance()->as_array();

    Array ( [auth_user] => NRyk6lA8 [businessname] => Dudetown [firstname] => Matt )

    As soon as I redirect the user to another page, Session::instance()->as_array(); is empty. By dumping out the Session::instance() object, I can see that the session ids are still the same. When I look at my database table though, I don't see any session records being saved and my session table is empty. My bootstrap.php contains:

    Session::$default = 'database';
    Cookie::$salt = 'asdfasdf';
    Cookie::$expiration = 1209600;
    Cookie::$domain = FALSE;

    and my session.php config file looks like:

    return array(
        'database' => array(
            'name'      => 'auth_user',
            'encrypted' => TRUE,
            'lifetime'  => 24 * 3600,
            'group'     => 'default',
            'table'     => 'sessions',
            'columns'   => array(
                'session_id'  => 'session_id',
                'last_active' => 'last_active',
                'contents'    => 'contents'
            ),
            'gc' => 500,
        ),
    );

    I've looked high and low for an answer.. if anyone has any suggestions, I'm all ears! Thanks!

    Read the article

  • .NET 3.5 DataGridView Won't Save To Database

    - by Jimbo
    I have created a test Winforms application in Visual Studio 2008 (SP1) to see just how "RAD" C# and .NET 3.5 can be. So far I have mixed emotions.
    1. Added a service-based database to my application (MyDB.mdf) and added two tables: Contact (id [identity], name [varchar] and number [varchar] columns) and Group (id [identity] and name [varchar] columns).
    2. Added a DataSource, selected "Database" as the source, used the default connection string as the connection (which points to my database), selected "All Tables" to be included in the data source, and saved it as MyDBDataSet.
    3. Expanded the data source showing my two tables, selected the "Group" table, chose to display it as a DataGridView (from the dropdown option on the right of each entity) and dragged it onto Form1, thus creating a groupBindingNavigator, groupBindingSource, groupTableAdapter, tableAdapterManager, myDBDataset and groupDataGridView.
    4. Pressed F5 to test the application, entered the name "Test" under the DataGridView's "name" column and clicked "Save" on the navigator, which has autogenerated code to save the data that looks like this:

    private void groupBindingNavigatorSaveItem_Click(object sender, EventArgs e)
    {
        this.Validate();
        this.groupBindingSource.EndEdit();
        this.tableAdapterManager.UpdateAll(this.myDBDataSet);
    }

    5. Stopped the application and had a look at the database data: you will see no data saved in the "Group" table. I don't know why and can't find out how to fix it! Googled for about 30 minutes with no luck. The code is auto-generated with the controls, so you'd think that it would work too :)

    Read the article

  • Database doesn't update using TransactionScope

    - by Dissonant
    I have a client trying to communicate with a WCF service in a transactional manner. The client passes some data to the service and the service adds the data to its database accordingly. For some reason, the new data the service submits to its database isn't being persisted. When I have a look at the table data in the Server Explorer, no new rows are added... Relevant code snippets are below:

    Client

    static void Main()
    {
        MyServiceClient client = new MyServiceClient();
        Console.WriteLine("Please enter your name:");
        string name = Console.ReadLine();
        Console.WriteLine("Please enter the amount:");
        int amount = int.Parse(Console.ReadLine());
        using (TransactionScope transaction = new TransactionScope(TransactionScopeOption.Required))
        {
            client.SubmitData(amount, name);
            transaction.Complete();
        }
        client.Close();
    }

    Service
    Note: I'm using Entity Framework to persist objects to the database.

    [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
    public void SubmitData(int amount, string name)
    {
        DatabaseEntities db = new DatabaseEntities();
        Payment payment = new Payment();
        payment.Amount = amount;
        payment.Name = name;
        db.AddToPayment(payment); //add to Payment table
        db.SaveChanges();
        db.Dispose();
    }

    I'm guessing it has something to do with the TransactionScope being used in the client. I've tried all combinations of db.SaveChanges() and db.AcceptAllChanges() as well, but the new payment data just doesn't get added to the database!

    Read the article

  • how to minimize application downtime when updating database and application ORM

    - by yamspog
    We currently run an ecommerce solution for a leisure and travel company. Every time we have a release, we must bring the ecommerce site down as we update the database schema and the data access code. We are using a custom-built ORM where each data entity is responsible for its own CRUD operations. This is accomplished by dynamically generating the SQL based on attributes in the data entity. For example, the data entity for an address would be...

    [tableName="address"]
    public class address : dataEntity
    {
        [column="address1"]
        public string address1;
        [column="city"]
        public string city;
    }

    So, if we add a new column to the database, we must update the schema of the database and also update the data entity. As you can expect, the business people are not too happy about this outage as it puts a crimp in their cash flow. The operations people are not happy as they have to deal with a high-pressure time when the database and applications are upgraded. The programmers are upset as they are constantly getting in trouble for the legacy system that they inherited. Do any of you smart people out there have some suggestions?

    Read the article

  • Efficient database access when dealing with multiple abstracted repositories

    - by Nathan Ridley
    I want to know how most people are dealing with the repository pattern when it involves hitting the same database multiple times (sometimes transactionally) and trying to do so efficiently while maintaining database agnosticism and using multiple repositories together. Let's say we have repositories for three different entities: Widget, Thing and Whatsit. Each repository is abstracted via a base interface as per normal decoupling design processes. The base interfaces would then be IWidgetRepository, IThingRepository and IWhatsitRepository. Now we have our business layer or equivalent (whatever you want to call it). In this layer we have classes that access the various repositories. Often the methods in these classes need to do batch/combined operations where multiple repositories are involved. Sometimes one method may make use of another method internally, while that method can still be called independently. What about, in this scenario, when the operation needs to be transactional? Example:

    class Bob
    {
        private IWidgetRepository _widgetRepo;
        private IThingRepository _thingRepo;
        private IWhatsitRepository _whatsitRepo;

        public Bob(IWidgetRepository widgetRepo, IThingRepository thingRepo, IWhatsitRepository whatsitRepo)
        {
            _widgetRepo = widgetRepo;
            _thingRepo = thingRepo;
            _whatsitRepo = whatsitRepo;
        }

        public void DoStuff()
        {
            _widgetRepo.StoreSomeStuff();
            _thingRepo.ReadSomeStuff();
            _whatsitRepo.SaveSomething();
        }

        public void DoOtherThing()
        {
            _widgetRepo.UpdateSomething();
            DoStuff();
        }
    }

    How do I keep my access to the database efficient and not have a constant stream of open-close-open-close on connections and inadvertent invocation of MSDTC and whatnot? If my database is something like SQLite, standard mechanisms like creating nested transactions are going to inherently fail, yet the business layer should not have to concern itself with such things. How do you handle such issues? Does ADO.NET provide simple mechanisms to handle this, or do most people end up wrapping their own custom bits of code around ADO.NET to solve these types of problems?
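
    One common shape for the "custom bits around ADO.NET" option is a small unit-of-work object that owns a single connection and local transaction and is shared by the repositories taking part in an operation; a minimal sketch (names are illustrative, not from the question):

    using System;
    using System.Data;

    public interface IUnitOfWork : IDisposable
    {
        IDbConnection Connection { get; }
        IDbTransaction Transaction { get; }
        void Commit();
    }

    public class AdoUnitOfWork : IUnitOfWork
    {
        private readonly IDbConnection _conn;
        private readonly IDbTransaction _tx;

        public AdoUnitOfWork(IDbConnection conn)
        {
            _conn = conn;
            _conn.Open();                      // one open connection for the whole operation
            _tx = _conn.BeginTransaction();    // one local transaction shared by every repository
        }

        public IDbConnection Connection { get { return _conn; } }
        public IDbTransaction Transaction { get { return _tx; } }

        public void Commit() { _tx.Commit(); }

        public void Dispose()
        {
            _tx.Dispose();
            _conn.Close();
        }
    }

    Bob.DoStuff would then create one unit of work, pass it to each repository call (or to repositories constructed around it), and commit at the end, so nested calls like DoOtherThing reuse the same connection instead of opening new ones and never escalate to a distributed transaction.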

    Read the article

  • How to "upgrade" the database in real world?

    - by Tattat
    My company has developed a web application using PHP + MySQL. The system can display a product's original price and discount price to the user. If you haven't logged in, you get the original price; if you are logged in, you get the discount price. It is pretty easy to understand. But my company wants more features in the system: it wants to display different prices to different users. For example, user A is a golden partner, so he gets 50% off. User B is a silver partner and only gets 30% off. But this logic was not prepared for in the original system, so I need to add some attribute to the database, at least a user type in this example. Is there any recommendation on how to migrate the current database to my new version of the database? Also, all the data should be preserved, and the server should work 24/7 (without stopping the database). Is it possible to do so? Also, any recommendations for future maintenance? Thank you.
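
    A sketch of the kind of additive, online change being described, assuming MySQL and illustrative table/column names:

    -- Add the new attribute with a default so existing rows and the existing code keep working.
    ALTER TABLE user
        ADD COLUMN partner_level VARCHAR(10) NOT NULL DEFAULT 'standard';

    -- Backfill the known partners, then have the new pricing code read partner_level.
    UPDATE user SET partner_level = 'golden' WHERE user_id IN (101, 102);  -- illustrative ids
    UPDATE user SET partner_level = 'silver' WHERE user_id IN (203);       -- illustrative ids

    Because the column is added with a default and nothing is dropped or renamed, the old code keeps running while the new pricing logic is rolled out; note that on a large table a plain ALTER TABLE still rebuilds the table, so it is usually run in a low-traffic window or with an online schema-change tool.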

    Read the article

  • Pitfalls of the Architecture - Database based HTTP Request/Response Parsing

    - by Sam
    We have a current eCommerce site that runs on ASP.NET and we hired a consultant to develop a new site based on SOA. The new site architecture is as follows:
    Web Application: a single-page web application (built on JavaScript/jQuery templates, not using any MVVM framework) with some JavaScript thrown all over the place.
    Service Layer: a very, very light service layer that does not do anything other than call a single stored procedure and pass in the entire HTTP request.
    Database: the entire site content is in the database. The database does the heavy lifting of parsing the request and, based on the HTTP method and some input parameters, calls the appropriate stored procedures or views and renders the result as JSON/XML.
    We have been told by them that this is built on the latest and greatest technologies. I have a lot of concerns; a few of them are:
    - Load on the database
    - SEO concerns for a single-page application, as this is a public-facing website
    - Scalability?
    - Is this SOA?
    - Cross-browser compatibility (the site does not work in < IE9)
    - Realistic implementation of a single-page application
    I know something is not right, but I just need to validate my concerns here. Please help me.

    Read the article

  • Have I pushed the limits of my current VPS or is there room for optimization?

    - by JRameau
    I am currently on a MediaTemple DV server (basic) with 512MB dedicated RAM; this is a CentOS-based VPS with Plesk and Virtuozzo. My experience with it from day 1 has been bad, and I could only soothe my server issues with several caching "Band-aids," but my sites are not as small as they were a year ago either, so the issues have worsened. I have 3 Drupal installs running on separate (Plesk) domains; 1 of those Drupal installs is a multisite that consists of 5-6 sites, 2 of which are bringing in actual traffic. Those caching "Band-aids" I mentioned are APC, which seemed to help a lot initially, and Drupal's Boost, which is considered a poor man's Varnish: it makes all my pages static for anonymous users. Last 30-day combined estimate on Google Analytics: 90k visitors, 260k pageviews.
    Issue: a lot of downtime. I am continually checking if my sites are up, and lately I have been finding them down more than 3 times daily. Restarting Apache will bring them back up, for some time. I have Google-searched every error message and looked up ways to optimize my DV server, and I am beyond stumped as to what my next move is. Is this server bad, have I hit an impossibly low restriction such as the 12MB kernel memory barrier (kmemsize), is it on my end, do I need to optimize some more?
    *I have provided as much information as I can below; any help or suggestions will be appreciated.
    Common error messages I see in the log:
    [error] (12)Cannot allocate memory: fork: Unable to fork new process
    [error] make_obcallback: could not import mod_python.apache.\n Traceback (most recent call last): File "/usr/lib/python2.4/site-packages/mod_python/apache.py", line 21, in ? import traceback File "/usr/lib/python2.4/traceback.py", line 3, in ? import linecache ImportError: No module named linecache
    [error] python_handler: no interpreter callback found.
    [warn-phpd] mmap cache can't open /var/www/vhosts/***/httpdocs/*** - Too many open files in system (pid ***)
    [alert] Child 8125 returned a Fatal error... Apache is exiting!
    [emerg] (43)Identifier removed: couldn't grab the accept mutex
    [emerg] (22)Invalid argument: couldn't release the accept mutex
    cat /proc/user_beancounters:
    Version: 2.5
    uid        resource       held      maxheld   barrier     limit       failcnt
    41548:     kmemsize       4582652   5306699   12288832    13517715    21105036
               lockedpages    0         0         600         600         0
               privvmpages    38151     42676     229036      249036      0
               shmpages       16274     16274     17237       17237       2
               dummy          0         0         0           0           0
               numproc        43        46        300         300         0
               physpages      27260     29528     0           2147483647  0
               vmguarpages    0         0         131072      2147483647  0
               oomguarpages   27270     29538     131072      2147483647  0
               numtcpsock     21        29        300         300         0
               numflock       8         8         480         528         0
               numpty         1         1         30          30          0
               numsiginfo     0         1         1024        1024        0
               tcpsndbuf      648440    675272    2867477     4096277     1711499
               tcprcvbuf      301620    359716    2867477     4096277     0
               othersockbuf   4472      4472      1433738     2662538     0
               dgramrcvbuf    0         0         1433738     1433738     0
               numothersock   12        12        300         300         0
               dcachesize     0         0         2684271     2764800     0
               numfile        3447      3496      6300        6300        3872
               dummy          0         0         0           0           0
               dummy          0         0         0           0           0
               dummy          0         0         0           0           0
               numiptent      14        14        200         200         0
    TOP: (In January the load avg was really high, 3-10; I was able to bring it down to where it currently is by giving APC more memory and playing around with it)
    top - 16:46:07 up 2:13, 1 user, load average: 0.34, 0.20, 0.20
    Tasks: 40 total, 2 running, 37 sleeping, 0 stopped, 1 zombie
    Cpu(s): 0.3% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si
    Mem: 916144k total, 156668k used, 759476k free, 0k buffers
    Swap: 0k total, 0k used, 0k free, 0k cached
    MySQLTuner: (after optimizing every table and repairing any table with overhead I got the fragmented count down to 86)
    [--] Data in MyISAM tables: 285M (Tables: 1105)
    [!!] Total fragmented tables: 86
    [--] Up for: 2h 44m 38s (409K q [41.421 qps], 6K conn, TX: 1B, RX: 174M)
    [--] Reads / Writes: 79% / 21%
    [--] Total buffers: 58.0M global + 2.7M per thread (100 max threads)
    [!!] Query cache prunes per day: 675307
    [!!] Temporary tables created on disk: 35% (7K on disk / 20K total)

    Read the article

  • After restoring a SQL Server database from another server - get login fails

    - by Renso
    Issue: After you have restored a SQL Server database from another server, let's say from production to a Q/A environment, you get the "Login Fails" message for your service account.
    Reason: User logon information is stored in the syslogins table in the master database. By changing servers, or by altering this information by rebuilding or restoring an old version of the master database, the information may be different from when the user database dump was created. If logons do not exist for the users, they will receive an error indicating "Login failed" while attempting to log on to the server. If the user logons do exist, but the SUID values (for 6.x) or SID values (for 7.0) in master..syslogins and the sysusers table in the user database differ, the users may have different permissions than expected in the user database.
    Solution: sp_change_users_login links a user entry in the sys.database_principals system catalog view in the current database to a SQL Server login of the same name. If a login with the same name does not exist, one will be created. Examine the result from the Auto_Fix statement to confirm that the correct link is in fact made. Avoid using Auto_Fix in security-sensitive situations. When you use Auto_Fix, you must specify user and password if the login does not already exist; otherwise you must specify user, but password will be ignored. login must be NULL. user must be a valid user in the current database. The login cannot have another user mapped to it. Execute the stored procedure as follows; in this example the login user name is "MyUser":

    exec sp_change_users_login 'Auto_Fix', 'MyUser'

    NOTE: sp_change_users_login cannot be used with a SQL Server login created from a Windows principal or with a user created by using CREATE USER WITHOUT LOGIN.

    Read the article

  • SQL SERVER – How to Change Compatibility of Database to SQL Server 2014

    - by Pinal Dave
    Yesterday I wrote about how we can install SQL Server 2014. Right after the blog post went live, I received a question from a developer who had installed SQL Server 2014 and attached a database file from a previous version of SQL Server. Right after attaching the database, he was not able to work with the latest features of Cardinality Estimation. As soon as he sent me the email I realized exactly what had happened. When he attached the database, the database compatibility level was still set to that of the earlier version of SQL Server. To use most of the latest features of SQL Server 2014, one has to change the compatibility level of the database to the latest version (i.e. 120). Here are two different ways we can change the compatibility of a database to SQL Server 2014's version.
    1) Using Management Studio: Go to the database and right-click over it, then select Properties. On this screen the user can change the compatibility level to 120.
    2) Using a T-SQL script: You can execute the following script and change the compatibility setting to 120.

    USE [master]
    GO
    ALTER DATABASE [AdventureWorks2012]
    SET COMPATIBILITY_LEVEL = 120
    GO

    Well, it is that easy :-)
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SharePoint 2010 PowerShell Script to Find All SPShellAdmins with Database Name

    - by Brian Jackett
    Problem
    Yesterday on Twitter my friend @cacallahan asked for some help on how she could get all SharePoint 2010 SPShellAdmin users and the associated database name. I spent a few minutes and wrote up a script that gets this information and decided I'd post it here for others to enjoy.
    Background
    The Get-SPShellAdmin commandlet returns a listing of SPShellAdmins for the given database Id you pass in, or the farm configuration database by default. For those unfamiliar, SPShellAdmin access is necessary for non-admin users to run PowerShell commands against a SharePoint 2010 farm (content and configuration databases specifically). Click here to read an excellent guest post article my friend John Ferringer (twitter) wrote on the Hey Scripting Guy! blog regarding granting SPShellAdmin access.
    Solution
    Below is the script I wrote (formatted for space and to include comments) to provide the information needed. Click here to download the script.

    # declare a hashtable to store results
    $results = @{}

    # fetch databases (only configuration and content DBs are needed)
    $databasesToQuery = Get-SPDatabase | Where {$_.Type -eq 'Configuration Database' -or $_.Type -eq 'Content Database'}

    # for each database get spshelladmins and add db name and username to result
    $databasesToQuery | ForEach-Object {$dbName = $_.Name; Get-SPShellAdmin -database $_.id | ForEach-Object {$results.Add($dbName, $_.username)}}

    # sort results by db name and pipe to table with auto sizing of col width
    $results.GetEnumerator() | Sort-Object -Property Name | ft -AutoSize

    Conclusion
    In this post I provided a script that outputs all of the SPShellAdmin users and the associated database names in a SharePoint 2010 farm. Funny enough it actually took me longer to boot up my dev VM and PowerShell (~3 mins) than it did to write the first working draft of the script (~2 mins). Feel free to use this script and modify as needed, just be sure to give credit back to the original author. Let me know if you have any questions or comments. Enjoy!
    -Frog Out
    Links
    PowerShell Hashtables: http://technet.microsoft.com/en-us/library/ee692803.aspx
    SPShellAdmin Access Explained: http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/06/hey-scripting-guy-tell-me-about-permissions-for-using-windows-powershell-2-0-cmdlets-with-sharepoint-2010.aspx

    Read the article
