Search Results

Search found 3710 results on 149 pages for 'all databases'.

Page 88/149

  • How do I know if my disks are being hit with too many I/O reads or writes, or both?

    - by Mark F
    Hi all. I know a bit about disk I/O and the bottlenecks around it, especially where databases are involved. But how do I really know what the maximum I/O numbers for my disks will be? What metric is available for working out, roughly but as a good approximation, how much I/O capacity (if you will) I have left? I've seen it before where things are bubbling along nicely and then all of a sudden everything screeches to a halt, and it turns out to be an I/O-bound problem. Is there a better way to predict when I/O is reaching its limits? This article was interesting but didn't give the answer I'm after: http://serverfault.com/questions/61510/linux-how-can-i-see-whats-waiting-for-disk-io. So is my best bet just watching 'CPU IO WAIT'? There must be a more proactive method than that? Best, M
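
    A rough sketch of one way to watch this directly (assuming a Linux host with the sysstat package installed): iostat's extended view shows per-device utilisation, so a device whose %util sits near 100 under normal load has little I/O headroom left.

        # sample extended per-device stats every 5 seconds, 12 samples
        iostat -x 5 12
        # key columns: r/s and w/s (IOPS), await (average I/O latency, ms),
        # %util (fraction of time the device was busy; ~100% means saturated)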

  • Searching for online database software/CMS

    - by ButterdBread
    I am searching for software or a CMS that manages and displays large online databases, acting as a kind of frontend to MySQL or any other database. It should be accessible through the browser and be as secure as possible (offering a login). The data I'd like to store is personal information such as name, address and birthday - and I'd need to be able to add custom fields as well. Forms and the possibility to download the data as an Excel(-style) table would also be great. phpMyAdmin is not an option; it should be similar to a CRM but more closely adapted to managing database tables, searching for entries and filtering data. It should be possible to have many user accounts with different rights, with each of them being able to access certain parts of the data and enter their own data. Is there something out there that comes close to what I imagine? I appreciate any help!

  • Mimicking Google's Persistent Disks -- Is this a logical FreeBSD disaster recovery strategy?

    - by Casey Jordan
    I am looking into FreeBSD to provide a more comprehensive backup and disaster recovery strategy for database servers. Ideally I want to mimic what Google is doing with "persistent disks": https://developers.google.com/compute/docs/disks#snapshots I am hoping someone who knows more about FreeBSD can validate these ideas/questions: I have read that FreeBSD can take instant disk snapshots, so if our databases trigger a consistent state (block all writes and flush buffers to disk), I assume I could take snapshots every hour without a service interruption of more than a few seconds. Is this true? Is there a way to take snapshots and back them up offsite easily? Can this be done incrementally, to keep the space used down to what has actually changed? If a rollback needed to be done, how long does this typically take? Is a rollback also instantaneous? Thanks!
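
    For what it's worth, on FreeBSD with ZFS the usual shape of this is snapshots plus incremental zfs send; a minimal sketch, with pool, dataset and host names purely illustrative:

        # snapshots are copy-on-write and effectively instant
        zfs snapshot tank/db@0100
        # ship the first snapshot offsite in full
        zfs send tank/db@0100 | ssh backup@offsite zfs receive backuppool/db
        # subsequent snapshots can be sent incrementally (changed blocks only)
        zfs snapshot tank/db@0200
        zfs send -i tank/db@0100 tank/db@0200 | ssh backup@offsite zfs receive backuppool/db
        # rolling back to a local snapshot is also near-instant
        zfs rollback tank/db@0100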

  • Performance Drop Lingers after Load [closed]

    - by Charles
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases I'm noticing a drop in performance after subsequent load tests. Although our CPU and RAM numbers look fine, performance seems to degrade over time as sustained load is applied to the system. If we allow more time between the load tests, performance gets back to about 1,000 ms, but if we apply load every 3 minutes or so, it starts to degrade to the point where a request takes 12,000 ms. None of the application servers are showing lingering Apache processes, and the number of database connections cools down to about 3 (from a sustained 20). Is there anything else I should be looking out for here?
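
    One way to see what is still draining after a run (a sketch, assuming sysstat's sar is available): sample CPU, memory and disk between tests and look for counters that have not returned to their pre-test baseline.

        sar -u 5 36   # CPU, including %iowait, every 5 s for 3 minutes
        sar -r 5 36   # memory and swap pressure over the same window
        sar -b 5 36   # aggregate disk I/O (tps, read/write rates)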

  • Store system passwords with easy and secure access

    - by CodeShining
    I'm having to handle several VPSes/services, and I always set passwords to be different and random. What kind of storage do you suggest for keeping these passwords safe while letting me access them easily? These passwords are used for services like databases, the webserver user and so on that run customers' services, so it's really important that they are strong and kept in a safe place. I'm currently storing them in a Google Drive spreadsheet, describing user, password, role and service. Do you know of better solutions? I'd like to keep them on a remote service so that I don't have to make backup copies (in case my HDD fails somehow). I do work on *nix platforms (so Windows-specific solutions are not a choice here).
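
    A minimal sketch of one approach, assuming GnuPG is available on the *nix machines and with file names purely illustrative: keep the list in a plain text file and store only an encrypted copy on the remote service.

        # encrypt with a symmetric passphrase before uploading anywhere
        gpg --symmetric --cipher-algo AES256 passwords.txt   # writes passwords.txt.gpg
        # decrypt on demand, prompting for the passphrase
        gpg --decrypt passwords.txt.gpg

    Dedicated tools such as pass (which layers GnuPG over a directory that can be synced with git) wrap the same idea with per-entry files and search.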

  • High Availability Clustering and Virtualization

    - by tmcallaghan
    I'm trying to understand how the various virtualization vendors (specifically Amazon EC2, but also VMware and Xen) enable software vendors to provide a real HA solution in an environment where the servers are virtualized. Specifically, if I'm running any HA application (Exchange, databases, etc.) I need to ensure that my redundant virtual "servers" aren't located on the same physical server. Using in-house virtualization solutions (VMware, Xen, etc.) I can provision accordingly, as well as check the virtual-to-physical arrangement. I could, however, accidentally "vMotion" to the same physical hardware. With EC2, I don't even have the ability at provision time to select different physical servers. Since their Cluster Compute Instances are one virtual server per physical server, they seem to be the only way to guarantee I don't have a false sense of redundancy. Any ideas or thoughts would be helpful. What are others doing about this problem? If the vendors provided an API where I could get something as simple as a unique physical system identifier, I could at least know whether I'm going to have an issue. -Tim
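
    For reference, EC2 has since gained an API for exactly this concern: placement groups with a "spread" strategy put each instance on distinct underlying hardware. A sketch with the AWS CLI, in which the group name, AMI and instance type are purely illustrative:

        # create a spread placement group, then launch the redundant pair into it
        aws ec2 create-placement-group --group-name db-ha --strategy spread
        aws ec2 run-instances --image-id ami-12345678 --count 2 \
            --instance-type m5.large --placement GroupName=db-ha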

  • VPC on Windows 7 very slow network

    - by Shigg
    I have a Windows 2003 virtual machine which I use for website testing. I've just installed Windows 7 and am using the new version of VPC (not XP Mode). When I try to copy a file - I need to copy some big databases across - I get a file copy speed of about 20 KB per second. Copying from one PC to another on the real network transfers files at 13 MB per second. Any ideas what may be causing this? I've turned off differential network compression on Win 7. The virtual HD is on a separate physical drive to the OS. I'm running Windows 7 64-bit on a dual Xeon with 16 GB RAM and 10,000 RPM drives. I tried installing VPC 2007, but Windows blocks it from running, saying it's not compatible. Many thanks for any ideas.

  • Which modules can be disabled in Apache 2.4 on Windows

    - by j0h
    I have an Apache 2.4 webserver running on Windows. I am looking into system hardening and the config file httpd.conf. There are numerous LoadModule lines, and I am wondering which modules I can safely disable for performance and/or security improvements. Some examples of things I would think I can disable: LoadModule cgi_module. Others, like LoadModule rewrite_module, LoadModule version_module, LoadModule proxy_module and LoadModule setenvif_module, I am not so sure can be disabled. I am running PHP 5 as a scripting engine, with no databases, and that is it. My loaded modules are: core mod_win32 mpm_winnt http_core mod_so mod_access_compat mod_actions mod_alias mod_allowmethods mod_asis mod_auth_basic mod_authn_core mod_authn_file mod_authz_core mod_authz_groupfile mod_authz_host mod_authz_user mod_autoindex mod_dav_lock mod_dir mod_env mod_headers mod_include mod_info mod_isapi mod_log_config mod_cache_disk mod_mime mod_negotiation mod_proxy mod_proxy_ajp mod_rewrite mod_setenvif mod_socache_shmcb mod_ssl mod_status mod_version mod_php5
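
    A safe way to experiment (a sketch; httpd.exe is assumed to be on the PATH): comment out one LoadModule line at a time in httpd.conf, then let Apache itself verify the config and report what is still loaded before restarting the service.

        httpd.exe -t    # syntax-check httpd.conf after each change
        httpd.exe -M    # list the modules actually loaded (static and shared)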

  • MySQL server simple insert/update/delete queries are taking a long time to execute

    - by ElGabbu
    We have a VPS hosting server with a MySQL server running on it. We host several databases for clients' websites. Recently we have noticed that insert/update and delete queries are taking a long time to execute, sometimes as long as 30 seconds. I use the following command to see these queries being executed: watch -n1 mysqladmin proc stat We have still not been able to track down the root of this problem. I would appreciate it if anyone had any pointers as to what we can check or improve to resolve the issue. Thanks
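
    A few non-intrusive checks that can help localize a stall like this (a sketch; connection credentials omitted): look for lock waits and flushing pauses while one of the slow statements is in flight.

        mysql -e "SHOW FULL PROCESSLIST"         # what each connection is waiting on
        mysql -e "SHOW ENGINE INNODB STATUS\G"   # row-lock waits, pending I/O, buffer pool
        # relative counters every 10 s; watch row-lock waits and slow queries climb
        mysqladmin -r -i 10 extended-status | grep -i "innodb_row_lock\|slow_queries"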

  • SQL 2008 R2 Mirroring Issue

    - by CWL
    Windows 2008 R2 with SQL 2008 R2 - using mirroring of a database across the WAN in an HA setup with one witness. One issue I am having is that during a failure (every so often) the system fails over, or tries to, but leaves both databases in a restoring state. My guess is the failover issue happens when the WAN bounces and the systems get confused. The usual fix is to reboot the SQL servers. Has anyone seen this type of failure? While this does not happen often, it does cause an issue, and a concern that the HA cannot be fully trusted.
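
    When a mirroring session is stuck suspended, it is sometimes possible to resume or force service from T-SQL instead of rebooting; a sketch, with server and database names purely illustrative, to be tested carefully before relying on it:

        rem resume a suspended mirroring session (run on the principal)
        sqlcmd -S SQLNODE1 -Q "ALTER DATABASE [AppDb] SET PARTNER RESUME"
        rem if the principal is unreachable, force service on the mirror (can lose data)
        sqlcmd -S SQLNODE2 -Q "ALTER DATABASE [AppDb] SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS"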

  • How can I enable encryption for SQLite3 in PHP5.3?

    - by meouw
    The PHP manual for SQLite3::open has this information: public bool SQLite3::open ( string $filename [, int $flags = SQLITE3_OPEN_READWRITE | SQLITE3_OPEN_CREATE [, string $encryption_key ]] ) Opens an SQLite 3 database. If the build includes encryption, then it will attempt to use the key. I would like to use encrypted SQLite databases for a project I'm working on, but I can't find any information anywhere on how to build the SQLite module with encryption included. Does anyone know how to do this? Perhaps it's so obvious that no one has published any information, or perhaps only commercial modules are available. I've noticed that the developers of SQLite offer a proprietary encryption extension. Is this the only way to go?

  • Which type of memory is faster than average for server use?

    - by Tony_Henrich
    I am building a server computer which will be used for SQL Server, and I am planning to use 32 GB+ of RAM and put the databases in memory. (I know all about data loss issues when power is gone.) I haven't kept up to date with the new types of memory sticks out there. What kind of memory should I get that is faster than average but not very expensive? I am buying a lot of RAM, so I am looking for memory that's above average but below high end, if high end is very expensive. (I will be using Windows Server 2008 R2 Standard or Windows HPC Server 2008 R2.)

  • Recomposing data structure in Excel

    - by Velletti
    I've got a sheet of 35k rows of the following kind of data that I want to reshape into the table below. So, I want to reshape this data in a way that gets all the people within a specific GroupID into separate columns. I suppose I should add a counter for each row within a specific GroupID? Also, I suppose these kinds of issues are a lot more comfortable to solve in databases? Since I often have this kind of data, I need to be much quicker about solving it than I am now.

  • What steps are required to get DB2 working again after renaming the Windows XP system it was running on?

    - by Suppressingfire
    I think this is a fairly well known problem, but I haven't found a really solid solution to add to my toolbox. Here's the sequence of steps that leads to the problem: (1) install Windows (e.g., XP), naming the system XXX; (2) install DB2 and create some databases; (3) rename the system from XXX to YYY (via the System control panel's Computer Name tab); (4) reboot and find DB2 unable to start. How can I get DB2 up and running again without having to reinstall it and without having to rename the system back to XXX? I did find a blog post that hints at some registry values to tweak, but I'm hoping the SF community can come up with a solution in which I can have more confidence.
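
    The registry variable those posts usually point at is DB2SYSTEM; a heavily hedged sketch of the usual direction, run from a DB2 command window (the node name is illustrative):

        rem point the global registry at the new computer name, then check the catalog
        db2set -g DB2SYSTEM=YYY
        db2 list node directory
        rem re-catalog any node entries that still reference XXX
        db2 uncatalog node OLDNODE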

  • Could 11.5 million 401s be causing bottlenecks?

    - by roviuser
    I'm going to preface this with a warning: my knowledge about servers and networking is VERY limited, and if you provide me with technical answers, I probably won't understand much until I research your answer further. I'm trying to expand my knowledge and learn about it, though. If the information that I am able to provide in this question is insufficient to answer it, I understand, and it can be closed. We have a SharePoint 2007 system that is extremely slow, mostly from huge amounts of use. We've been told that the main speed bottleneck is the access to the SQL databases. However, they do provide a statistics dashboard, so I did some poking around and noticed that we have 11.5 million or more 401 (access denied) errors every month. Could this be causing major speed/performance decreases? Authentication for SharePoint uses Active Directory.

  • How to enable a Web portal-based enterprise platform on different domains and hosts without customization

    - by S.Jalali
    I work at Coscend, a cloud and communications software product company. We have built a Web portal-based collaboration platform that we would like to host on five different Windows- and Linux-based servers in different hosting environments that run Web servers. Each of these Windows and Linux servers has a different host name and domain name (and IP address). Our team would appreciate your guidance on: (1) Is there a way to implement this Web portal-based platform on these Linux servers without customizing the host name, domain name and IP address for each individual instance? (2) Is there a way to create some variables using JavaScript for host name and domain name and call them from the different implementations? (3) Can these JavaScript modules be made into portable, re-usable object modules for different environments and instances? The portal is written in JavaScript that is embedded in HTML5 and styled with CSS3. Other technologies include Flash and Flex. The databases used are PostgreSQL and MySQL.

  • What factors can affect the performance of an HTTP server written in C#? [on hold]

    - by Yousaf
    I am having trouble handling huge databases. I have multiple clients, like 100-300 (the clients are basically servers running, e.g., Windows and SQL Server). Each client may have 38 thousand rows/listings of data, and each row has 10-12 fields. I cannot afford to keep JSON files for each client and then handle them on the main server, because of memory issues. What if I had an HTTP server written in C or C# installed on the clients, and they returned 250 rows in each response to the main server? How can factors like speed, memory or other issues affect us? What exactly am I asking? In short: if a server written in C# sends 250 rows per request, what factors can affect the performance of the server? For example: speed, processing, the operating system, the implementation of the server's algorithm? How can these factors really affect performance at a large scale?

  • How do I deploy Java code on an EC2 instance?

    - by Marianna
    I just started with Amazon Web Services, and I have an EC2 instance. I downloaded the Java SDK and the Eclipse toolbox. I am able to run a sample program locally on my PC and connect to the Amazon databases, etc. My question is, what do I need to do to get this working on my EC2 instance? This may not even be specific to AWS. In Eclipse, I can just "Run as Application" and run any code. On the server side, what do I need to do? Should I FTP over my .java files? Should I export to a jar and upload that? Do I need to install anything special to actually run it?
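
    The usual minimal path is to build a runnable jar locally and copy it up; a sketch in which the key file, user and host are illustrative, assuming a JRE is already installed on the instance:

        # build a runnable jar (or export one from Eclipse)
        mvn package
        # copy it to the instance and run it there
        scp -i mykey.pem target/myapp.jar ec2-user@ec2-1-2-3-4.compute-1.amazonaws.com:
        ssh -i mykey.pem ec2-user@ec2-1-2-3-4.compute-1.amazonaws.com java -jar myapp.jar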

  • What's the best practice for taking MySQL dump, encrypting it and then pushing to s3?

    - by HalogenCreative
    This current project requires that the DB be dumped, encrypted and pushed to S3. I'm wondering what some "best practices" for such a task might be. As of now I'm using a pretty straight-ahead method, but would like to have some better ideas where security is concerned. Here is the start of my script:

        mysqldump -u root --password="lepass" --all-databases --single-transaction > db.backup.sql
        tar -c db.backup.sql | openssl des3 -salt -pass pass:foopass > db.backup.tarfile
        s3put backup/db.backup.tarfile db.backup.tarfile
        # Let's pull it down again and untar it for kicks
        s3get surgeryflow-backup/db/db.backup.tarfile db.backup.tarfile
        cat db.backup.tarfile | openssl des3 -d -salt -pass pass:foopass | tar -x

    Obviously the problem is that this script contains everything an attacker would need to raise hell. Any thoughts, critiques and suggestions for this task will be appreciated.
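
    One common variation that keeps the decryption secret off the backup host entirely (a sketch, assuming GnuPG with an existing keypair; the recipient address is illustrative): encrypt to a public key, so a compromised server can still make backups but cannot read them, and keep the private key somewhere else.

        # credentials can live in ~/.my.cnf instead of the script itself
        mysqldump -u root --all-databases --single-transaction \
            | gzip \
            | gpg --encrypt --recipient backups@example.com \
            > db.backup.sql.gz.gpg
        s3put backup/db.backup.sql.gz.gpg db.backup.sql.gz.gpg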

  • How to use filegroups for DB split?

    - by Robin Jain
    In my project I have one DB used for everything. I want to break it into two databases: static tables holding look-up values would be stored in one DB, and the other DB would hold the tables with dynamic data. My problem is how I would use foreign key constraints between those two DBs. Can someone help me out and suggest a way to proceed? It would be better if an example were provided. I thought of using synonyms for the tables and then putting constraints on the synonyms, but later I came to know that synonyms can't be used for constraints. I need to maintain the relationships among the tables from both DBs. The issue is with updates: with a new release I just want to update the look-up tables, and that is why I want to split my DB. I want to know how filegroups could be used for this.
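
    Since SQL Server does not support foreign key constraints across databases, one reading of the title is to keep a single database but move the static look-up tables onto their own filegroup, which can then be backed up and shipped on its own at release time while the constraints keep working. A sketch with purely illustrative names and paths:

        sqlcmd -Q "ALTER DATABASE AppDb ADD FILEGROUP LookupFG"
        sqlcmd -Q "ALTER DATABASE AppDb ADD FILE (NAME='lookup1', FILENAME='D:\Data\AppDb_lookup1.ndf') TO FILEGROUP LookupFG"
        sqlcmd -Q "CREATE TABLE dbo.StatusLookup (Id int PRIMARY KEY, Name nvarchar(100)) ON LookupFG"
        sqlcmd -Q "BACKUP DATABASE AppDb FILEGROUP='LookupFG' TO DISK='D:\Backup\AppDb_lookup.bak'"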

  • How Do I Migrate 100 DBs From One MS-SQL 2008 Server To Another? (looking for automation)

    - by jc4rp3nt3r
    Let me start by saying that I am not a DBA, but I am in a position where I am responsible for moving just under 100 MS SQL 2008 DBs from our current development server to a new/better/faster development server. As this is just a local dev server, temporary downtime is acceptable, but I am looking for a way to move all of the databases (preferably in bulk). I know that I could take a .bak of each and restore it on the new server, but given the volume of DBs, I am looking for a more efficient way. I am not opposed to learning a new piece of software, writing code or any other requirement, so long as it speeds up the process.
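
    One scripted route (a sketch; the server name and paths are illustrative) is to let sys.databases generate the BACKUP statements, run them, copy the .bak files over, and repeat the same generation trick with RESTORE on the new server:

        rem generate one BACKUP statement per user database, then execute them
        sqlcmd -S OLDSRV -h -1 -W -Q "SET NOCOUNT ON; SELECT 'BACKUP DATABASE [' + name + '] TO DISK = ''D:\Migrate\' + name + '.bak''' FROM sys.databases WHERE database_id > 4" -o backup_all.sql
        sqlcmd -S OLDSRV -i backup_all.sql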

  • Entering the user's name in a URL for Chrome through Group Policy

    - by Automate Everything
    I am managing a Windows Server 2008 R2 server, with several Windows 7 machines, and we have recently deployed Google Chrome using Group Policy. We also have a locally hosted intranet for storing procedures, forms, and so on, as well as reports that pull directly from our databases. I am trying to put the user's name in the startup URL for Chrome, so that when they open Chrome at the beginning of the day, it can pull a list of items from the database that contains their username. The report works, and I have it using a drop down right now, but I would like to be able to put their username in the URL as a GET variable instead. Does anybody know how I would go about doing that for Chrome? I tried putting ${user_name} in the URL, and I tried putting %username% in the URL, but that didn't translate to anything. Is there some way to escape it so that it gets translated by the system into a username? Any help would be greatly appreciated.

  • We have a Solaris 9 server running Oracle 10g and have been getting memory consumption errors for a few weeks now

    - by another_netadmin
    We recently upgraded our enterprise application and everything worked OK until one weekend when we did a server reboot; ever since then we have run into memory errors. The server has 4 GB of physical memory installed, and the kernel parameters are set to the following (/etc/system). I'm not an Oracle guy, so I'm not sure where to start looking, but any information is greatly appreciated. Thanks in advance. There are two databases running on this server: one is a production database and the other is a pre-production database.

        [root@bandb /]# cat /etc/system | grep seminfo
        set semsys:seminfo_semmni=100
        set semsys:seminfo_semmns=2048
        set semsys:seminfo_semmsl=400
        set semsys:seminfo_semopm=100
        set semsys:seminfo_semvmx=32767
        [root@bandb /]# cat /etc/system | grep shminfo
        set shmsys:shminfo_shmmax=4294967295
        set shmsys:shminfo_shmmin=1
        set shmsys:shminfo_shmmni=100
        set shmsys:shminfo_shmseg=10
        [root@bandb /]#

  • Separating paid and free users on SQL Server 2008 R2

    - by Alex
    Right now we have hundreds of "free demo" trial users on the same DB server/database as our paid, mission-critical users. I see this as both a security risk and a load issue. I have also seen cases where demo users run large reports and crash the server. Does it make sense to separate these users into separate databases on SQL Server, rather than just having one DB for all users? My thinking is that then one group of users would have no effect on the other. Could one group still pose a risk if we do this? I plan to have them on separate web servers as well (Windows 2008 R2, IIS 7, .NET 4.0).

  • Move database from SQL 7 to 2005 / 2008

    - by etechpartner
    I have several pretty large databases located on a SQL Server 7 box. What's the best way to get them into SQL Server 2008? As far as I know, there were changes to the underlying file structures, so I am not sure that a simple detach/attach would work. When I tried attaching from 2008, it complained strongly: "version no longer supported", etc. What options do I have? Are there any tools on the market that can connect to both 7 and 2008 and then move the schema and data?
