Search Results

Search found 31891 results on 1276 pages for 'database schema'.

  • Statsd, Graphite and graphs

    - by w00t
    I've set up Graphite and statsd and both are running well. I'm using the example-client.py from graphite/examples to measure load values, and that works fine. I started doing tests with statsd and at first it seemed OK because it generated some graphs, but now it doesn't look right. First, this is my storage-schema.conf: pattern = .* retentions = 10:2160,60:10080,600:262974 I'm using this command to send data to statsd: echo 'ssh.invalid_users:1|c' | nc -w 1 -u localhost 8126 It executes, I click Update Graph in the Graphite web interface, it draws a line, I hit Update again, and the line disappears. If I execute the previous command 5 times, the graph line will reach 2 and it will actually stick. Running the same command two times again, the graph line reaches 2 and disappears. I can't find what I have misconfigured. The intended use is this: tail -n 0 -f /var/log/auth.log | grep --line-buffered "Invalid user" | while read line; do echo "ssh.invalid_users:1|c" | nc -w 1 -u localhost 8126; done
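
    A couple of things are worth checking, assuming a stock Etsy statsd: the default data port is UDP 8125, and 8126 is the management (TCP) port, so it is worth confirming which one your statsd config actually listens on. Beyond that, statsd writes counters under stats.* and stats_counts.*, and sparse test data is easily discarded when Whisper rolls it up from 10-second to 60-second buckets unless aggregation is configured for counters. A sketch of the two Carbon config files (paths assumed):

        # /opt/graphite/conf/storage-schemas.conf
        # the finest retention (10s) should equal statsd's flushInterval (10000 ms default)
        [stats]
        pattern = ^stats.*
        retentions = 10:2160,60:10080,600:262974

        # /opt/graphite/conf/storage-aggregation.conf
        # keep sparse counter points from being discarded at rollup
        [sum_counts]
        pattern = ^stats_counts.*
        xFilesFactor = 0
        aggregationMethod = sum

    Schema changes only apply to Whisper files created afterwards; existing .wsp files need whisper-resize.py (or deletion) to pick them up.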

    Read the article

  • HUGE MAC FILTER and scripting

    - by user195917
    I set up a DHCP server on CentOS and apply a MAC filter for my clients. With a small number of clients (max 10) this is not that hard, but what will I do with 2000 clients? My idea was to create a list (e.g. "macfilter.lst") and have this list updated from a database. I have two questions. First: how do I create a filter in iptables that takes its info from a file (hosted on the server)? Second: any idea how to write a script that updates that file from a database? Thanks so much for your help.
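
    A minimal sketch of both halves, assuming a MySQL backend; the chain name, file path, credentials, and table layout are all placeholders:

        #!/bin/bash
        # 1) Regenerate macfilter.lst from the database (run this from cron).
        mysql -N -u dhcp -p'secret' dhcpdb \
            -e 'SELECT mac FROM clients WHERE allowed = 1' > /etc/macfilter.lst

        # 2) Rebuild a dedicated iptables chain from the file, one MAC per line.
        iptables -N MACFILTER 2>/dev/null    # create the chain if missing
        iptables -F MACFILTER                # then empty it
        while read -r mac; do
            [ -n "$mac" ] && iptables -A MACFILTER -m mac --mac-source "$mac" -j RETURN
        done < /etc/macfilter.lst
        iptables -A MACFILTER -j DROP        # anything not whitelisted is dropped

        # Hook the chain into INPUT once, not on every rebuild:
        #   iptables -I INPUT -i eth0 -j MACFILTER

    The mac match only works where the frame's source address is still known (PREROUTING, INPUT, FORWARD), which is fine for DHCP requests arriving from the LAN. Note that dhcpd itself can also deny unknown clients via host declarations, which may scale better than a packet filter for 2000 entries.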

    Read the article

  • Benefits of a RAID BBU in addition to a double UPS + PS system

    - by Wikser
    Today I roughly measured the benefits of enabling write-back on the RAID controller on a server at work. It has no RAID battery backup unit (BBU), so the write cache is currently disabled. As the server is not used to capacity (by far), the results in most tests were spectacular, e.g.: Database CRUD: before 35s, after 4s. Saving a 1MB Excel file: before 20s (!), after 0.5s. Of course having a BBU is always recommended, but what are the main benefits of installing a BBU in a system which has redundant power supplies and is attached to UPSs? Does this depend on the type of system (database, file, terminal)? What is a realistic failure scenario that a BBU could prevent? Thanks in advance!

    Read the article

  • Could SQL Server 2008 replication be used with NLB to allow unlimited scaling of reporting servers?

    - by John Keranos
    We are currently using transactional replication in SQL Server 2008 to keep a secondary reporting server synchronized with a primary database server. This has been working well and keeps some of the load off the primary server. Would it be possible to scale this solution to multiple reporting servers? We're expecting an increased load of read-only queries and it would be nice to be able to add reporting servers as needed. The general idea was the following: Each reporting server would use a "pull" subscription to get the data from the primary database publication. These reporting databases could be a couple of minutes behind the primary server without it being an issue. The reporting servers would be NLB'd together. All read-only queries would be directed to the NLB, which should spread the load across the servers.

    Read the article

  • Squid closing the connection on long HTTP GET requests

    - by Rhys
    When running a database query on a specific external site we use, Squid seems to cut off the connection after a consistent period of time (just over a minute). The query is submitted through a standard web form that uses GET to query their database. Firefox 3 just displays a blank page. Internet Explorer throws a 'Page Cannot Be Displayed' error (tested in v6 and v8). When we perform the same query on the same machine but bypass the Squid proxy, it works fine. The query takes about two and a half minutes to complete. There are a few timeout settings in Squid, but I honestly don't know which one to be looking at. Any possible solutions would be much appreciated. Cheers
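
    The candidate directives in squid.conf, with their usual defaults; the exact defaults vary by Squid version, so treat the values below as assumptions and raise them one at a time to see which one is cutting the request off:

        # squid.conf timeouts relevant to a slow origin server
        connect_timeout 1 minute     # waiting for the TCP connection to establish
        forward_timeout 4 minutes    # waiting to find a usable forwarding path
        request_timeout 5 minutes    # waiting for the client's complete request
        read_timeout 15 minutes      # maximum gap between reads from the origin

    After editing, squid -k reconfigure applies the change without a full restart.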

    Read the article

  • How do I set up disk quotas over LDAP on CentOS?

    - by Noxshun
    I've been googling for some time and I haven't been able to find any resources or hints on the subject. I am wondering if it is possible to do this, and if so, how? Any nudge in the right direction will be appreciated. I do know that if you download and install "Linux Quota" from source, you get some Perl scripts which are supposed to help with the matter. But there is, as far as I know, absolutely no good documentation to help you along the way. I am also running an NFS server from the same machine. Note: This is for a university assignment, so I might be totally stupid for asking this question. I am trying to explore the options. If there is a better way of solving this, please do tell. Edit: Here is a link to the site of Linux Quota. They do include an LDAP schema, so it should be possible.
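
    Whatever supplies the accounts (LDAP here), the quotas themselves are ordinary disk quotas on the filesystem backing the NFS export, so the local half looks something like this sketch; the mount point and limits are placeholders:

        # /etc/fstab: add quota options to the exported filesystem, e.g.
        #   /dev/vg0/home  /home  ext3  defaults,usrquota,grpquota  1 2
        mount -o remount /home
        quotacheck -cugm /home    # builds aquota.user and aquota.group
        quotaon /home
        # 500MB soft / 550MB hard block limits, no inode limits:
        setquota -u someldapuser 500000 550000 0 0 /home

    Quotas are enforced by UID, so LDAP users are treated exactly like local ones as long as they resolve through nsswitch. On the NFS side, rquotad (started by the nfs init scripts on CentOS) is what reports quota usage to clients. The LDAP schema shipped with Linux Quota is only needed if you want to store the limit values in the directory and have a setquota-driving script read them from there.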

    Read the article

  • Oracle 11g network configuration

    - by Kylo
    Hi, I installed Oracle 11g Enterprise Edition on my Windows 7 Pro machine. My problem is that I cannot log into the database from another host on the local network. When I connect to the database using Oracle SQL Developer everything is OK as long as I specify 'localhost' in the connection configuration. However, when I change it to '192.168.0.190', which is my host's IP address, I get 'The Network Adapter could not establish the connection'. I get the same error when logging in from another host on the local network. What is the problem?
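
    'The Network Adapter could not establish the connection' from a remote host usually means the listener is bound only to localhost (or a firewall is dropping port 1521). A sketch of the fix, assuming a default install where listener.ora lives under %ORACLE_HOME%\network\admin:

        # listener.ora: replace localhost with the machine's hostname or LAN IP
        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.190)(PORT = 1521))
            )
          )

        # then, from a command prompt:
        lsnrctl stop
        lsnrctl start
        lsnrctl status    # the endpoint shown should be the LAN address

    If the listener checks out, make sure Windows Firewall allows inbound TCP 1521.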

    Read the article

  • No remote access to PostgreSQL db

    - by gattol
    I'm stuck connecting to a PostgreSQL database from a remote host. The server is accepting incoming connections on port 5432 and I've configured pg_hba.conf like this: local all all md5 host all all 0.0.0.0/0 md5 and the postgresql.conf like this: listen_addresses = '*' port = 5432 max_connections = 100 I don't have any problem accessing it locally, but when I try to connect via psql with something like this: psql -U myuser -h hostname db_name I get this error: psql: FATAL: no pg_hba.conf entry for host "87.zz.yy.xxx", user "myuser", database "db_name", SSL off I also tried to put the host 87.zz.yy.xxx in the pg_hba.conf file, without success.
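
    Since that host line already matches every address, the usual suspects are editing a different pg_hba.conf than the one the server loads, or not reloading afterwards; the file is only reread on reload or restart. A quick way to check, assuming superuser access:

        # which pg_hba.conf is the running server actually using?
        psql -U postgres -c 'SHOW hba_file;'

        # after editing that file, make the server reread it:
        pg_ctl reload -D /path/to/data    # or: /etc/init.d/postgresql reload

        # the entry that should match the failing connection:
        # TYPE  DATABASE  USER  ADDRESS    METHOD
        # host  all       all   0.0.0.0/0  md5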

    Read the article

  • Windows 2008 Server on VMWare (hardware)

    - by Bill
    I want to set up a single server to run a few virtual servers for our datacenter. I do not have a lot of money to spend, so I am trying to get the most bang for the buck. My budget is around $2,000. So I was thinking about building the following as the VMware physical server: Intel iCore 7 950 (LGA1366, 4 cores, 8 threads), Gigabyte GA-X58-USB3 LGA 1366 X58 ATX Intel Motherboard, 24 GB of Viper II Series, Sector 7 Edition, Extreme Performance DDR3-1600 (PC3-12800) CL9 Triple Channel Memory, VelociRaptor 300GB 10,000 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive. I am planning on running the newest version of VMware ESXi (64-bit). On this I am planning on running a few servers: Windows 2008 Server R2 w/ IIS (several custom-built ASP.NET apps), Windows 2008 Server R2 w/ MS SQL 2008 Database Server, Linux Web Server w/ several WordPress blogs (XAMPP?), Windows 2008 Server R2 w/ IIS (DEV ENVIRONMENT), Windows 2008 Server R2 w/ MS SQL 2008 Database Server (DEV ENVIRONMENT). In your opinion, will this hardware be sufficient to run the above load with room for possibly 2-3 more virtual machines (probably lightweight web servers)?

    Read the article

  • How can I give privileges for a DB to a user [ ERROR 1044 (42000): Access denied for user ''@'localhost' ]

    - by Ahn
    I have created a user in MySQL 5.1 and granted ALL privileges; details given below:

        mysql> show GRANTS FOR test;
        +-------------------------------------------------------------+
        | Grants for test@%                                           |
        +-------------------------------------------------------------+
        | GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION |
        | GRANT ALL PRIVILEGES ON `tt`.* TO 'test'@'%'                |
        +-------------------------------------------------------------+
        2 rows in set (0.00 sec)

    But SHOW DATABASES is not listing the other databases on the server; it only shows the ones below. How can I grant privileges on the other databases and their tables to the user 'test' as well?

        mysql> show databases;
        +--------------------+
        | Database           |
        +--------------------+
        | information_schema |
        | test               |
        +--------------------+

    Error when I try to use the mysql DB as user test:

        mysql> use mysql;
        ERROR 1044 (42000): Access denied for user ''@'localhost' to database 'mysql'
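
    The error line gives the cause away: the session was authenticated as the anonymous account ''@'localhost', not as 'test'. MySQL matches the most specific host first, so for local connections an anonymous 'localhost' row shadows 'test'@'%'. A sketch of the usual fixes, run as root (the password is a placeholder):

        -- Option 1: remove the anonymous account that shadows 'test'@'%'
        DROP USER ''@'localhost';

        -- Option 2: give the user an explicit localhost entry
        CREATE USER 'test'@'localhost' IDENTIFIED BY 'secret';
        GRANT ALL PRIVILEGES ON `tt`.* TO 'test'@'localhost';
        FLUSH PRIVILEGES;

    Also note that SHOW DATABASES only lists databases the connected account has some privilege on, so a short list when logged in as the wrong (anonymous) user is expected behavior.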

    Read the article

  • PostgreSQL 9.0 HA load balancing between servers

    - by Vijay Ramachandran
    Hey folks, I'm banging my head trying to configure load balancing between two database servers, and I have no clue whether I can find any mechanism to implement this. I already tried to implement Heartbeat clustering, but it requires a virtual IP, and I can't create a virtual IP or assign my own IP address in Amazon EC2. Is there a way to configure PostgreSQL database servers with something similar to Amazon's load balancing? If so, please suggest a solution. Thanks in advance.
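
    A common substitute for a floating IP on EC2 is to put a proxy such as pgpool-II in front of a streaming-replication pair and point every client at the proxy instead of a virtual IP. A minimal pgpool.conf sketch; the hostnames are placeholders and parameter names differ a little between pgpool versions:

        listen_addresses = '*'
        port = 5432
        backend_hostname0 = 'pg-primary.internal'    # receives all writes
        backend_port0 = 5432
        backend_weight0 = 1
        backend_hostname1 = 'pg-standby.internal'    # read-only standby
        backend_port1 = 5432
        backend_weight1 = 1
        load_balance_mode = on     # spread SELECTs across both backends
        master_slave_mode = on     # route writes to the primary only

    Amazon's Elastic Load Balancer can also front TCP 5432, but it has no notion of which node accepts writes, so it only fits purely read-only pools.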

    Read the article

  • Terminal runs svn commands very slowly, how can I speed this up?

    - by Paul
    Spending all day in terminal is beginning to get frustrating. We're working with large CakePHP projects, including a ton of schema files and complex controllers. Whenever I go into a project and enter svn up or svn ci, my system chokes. It takes a good 15-30 seconds before it returns what revision number I'm on. I'm running OS X 10.6 on a MacBook Pro. Any reasoning behind this? Any way I could fix this speed issue?
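
    A quick way to see whether the time goes to local disk or to the network, using nothing but stock svn:

        time svn status       # purely local: walks the .svn metadata on disk
        time svn status -u    # the local walk plus one round-trip to the repository
        time svn update       # the full network exchange

    If plain svn status is the slow part, Spotlight indexing the working copy (including the .svn directories) is a frequent culprit on OS X; adding the checkout to System Preferences > Spotlight > Privacy is a cheap experiment. If only the networked commands crawl, look at DNS resolution of the repository host and any proxy settings in ~/.subversion/servers.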

    Read the article

  • Drupal 7 doesn't detect MySQL on CentOS, but WordPress 3 does?

    - by jyaworski
    Hey guys. I'm running CentOS 5.5 here with Apache 2, PHP 5, and MySQL 5. My WordPress install on the same system runs perfectly, but the Drupal 7 install script only detects SQLite. The mysql module is enabled in php.ini, so that isn't the problem. Do you think it could be something with Drupal 7, or my PHP install? I tested it on localhost (I'm essentially running ArchLinux with Apache) and it installs just fine. I don't see a difference between my local php.ini and my server php.ini. I get this when accessing install.php on the server: "SQLite: The type of database your Drupal data will be stored in. Your PHP configuration only supports a single database type, so it has been automatically selected." Edit: The mysql PDO module is installed already.
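
    Drupal 7 builds that list from PHP's PDO drivers, while WordPress can fall back on the older mysql extension, which is why one works and the other doesn't. So it is specifically pdo_mysql that has to be present in the PHP that Apache runs. A sketch of the check and the usual CentOS fix:

        php -m | grep -i pdo              # should list pdo_mysql, not just pdo_sqlite
        yum install php-pdo php-mysql     # on CentOS 5, php-mysql ships the pdo_mysql driver
        service httpd restart

    Since the edit says a PDO module is already installed, also compare php -m against the modules shown by a phpinfo() page served through Apache; mod_php can load a different php.ini (or a different PHP build entirely) than the CLI binary.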

    Read the article

  • Less daunting front end for SQL Server

    - by Martin
    We currently have a few users who have been using Access very successfully to throw around large amounts of data. We've now got to the point where the data is just too large to be held in Access, as well as wanting to hold it in a single place where multiple users can access it. We have therefore moved the data over to SQL Server. I want to provide a general tool that they can use to view the data on the server and do some simple things like run queries and filters and export the data for offline manipulation. I don't want the support headaches that might come with rolling out SQL Management Studio, and neither do I want to have to create an Access database with links for each current database or ones that are created in the future. Can anyone recommend a simple tool that will connect to a server, list all the databases, and allow a user to drill into a table and look at the data? Many thanks.

    Read the article

  • How to store data on a machine whose power gets cut at random

    - by Sevas
    I have a virtual machine (Debian) running on a physical machine host. The virtual machine acts as a buffer for data that it frequently receives over the local network (the period for this data is 0.5s, so a fairly high throughput). Any data received is stored on the virtual machine and repeatedly forwarded to an external server over UDP. Once the external server acknowledges (over UDP) that it has received a data packet, the original data is deleted from the virtual machine and not sent to the external server again. The internet connection that connects the VM and the external server is unreliable, meaning it could be down for days at a time. The physical machine that hosts the VM gets its power cut several times per day at random. There is no way to tell when this is about to happen, and it is not possible to add a UPS, a battery, or a similar solution to the system. Originally, the data was stored in a file-based HSQLDB database on the virtual machine. However, the frequent power cuts eventually cause the database script file to become corrupted (not at the file system level, i.e. it is readable, but HSQLDB can't make sense of it), which leads to my question: How should data be stored in an environment where power cuts can and do happen frequently? One option I can think of is using flat files, saving each packet of data as a file on the file system. This way, if a file is corrupted due to loss of power, it can be ignored and the rest of the data remains intact. This poses a few issues, however, mainly related to the amount of data likely to be stored on the virtual machine. At 0.5s between each piece of data, 1,728,000 files will be generated in 10 days. This at least means using a file system with an increased number of inodes to store this data (the current file system setup ran out of inodes at ~250,000 messages and 30% disk space used). Also, it is hard (though not impossible) to manage. Are there any other options? Are there database engines that run on Debian that would not get corrupted by power cuts? Also, what file system should be used for this? ext3 is what is used at the moment. The software that runs on the virtual machine is written using Java 6, so hopefully the solution would not be incompatible with that.
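
    On the database-engine question, one option designed around sudden power loss is SQLite with write-ahead logging and full synchronous mode, which recovers to the last committed transaction after a crash and has JDBC drivers usable from Java 6. A sketch, with the path and schema as placeholders:

        # create the buffer database; journal_mode=WAL persists in the file
        sqlite3 /data/buffer.db 'PRAGMA journal_mode=WAL;
        CREATE TABLE IF NOT EXISTS packets (
            id          INTEGER PRIMARY KEY,
            received_at INTEGER NOT NULL,
            payload     BLOB NOT NULL
        );'
        # PRAGMA synchronous=FULL is per-connection, so the Java code should
        # issue it each time it opens the database.

    Whatever engine is chosen, the durability guarantee only holds if fsync really reaches the platter, so mount ext3 with barriers enabled (barrier=1) and avoid volatile write caches that the VM layer does not flush.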

    Read the article

  • AWS VPC - why have a private subnet at all?

    - by jkim
    In Amazon VPC, the VPC creation wizard allows one to create a single "public subnet" or have the wizard create a "public subnet" and a "private subnet". Initially, the public and private subnet option seemed good for security reasons, allowing webservers to be put in the public subnet and database servers to go in the private subnet. But I've since learned that EC2 instances in the public subnet are not reachable from the Internet unless you associate an Amazon ElasticIP with the EC2 instance. So it seems with just a single public subnet configuration, one could just opt to not associate an ElasticIP with the database servers and end up with the same sort of security. Can anyone explain the advantages of a public + private subnet configuration? Are the advantages of this config more to do with auto-scaling, or is it actually less secure to have a single public subnet?

    Read the article

  • Apply email retention policy to Inbox but not subfolders?

    - by NaOH
    Our official email policy states that email older than 90 days in the Inbox is moved to Deleted Items, not including subfolders of the Inbox. This wasn't a problem to implement in Exchange 2003. In 2010, however, it appears that Policy Tags applied to the Inbox also apply to its subfolders. How can I prevent this from occurring? EDIT: Here is the output of Get-RetentionPolicy:

        RunspaceId              : b6a05d43-3e56-4348-9d0e-2d2bf7e6c283
        RetentionId             : 56417b54-af3b-4c14-bd3c-9dcf9bdd133e
        RetentionPolicyTagLinks : {Junk E-mail - 7 Days, Deleted Items - 7 Days, Sent Items - 90 Days, Inbox - 90 Days}
        AdminDisplayName        :
        ExchangeVersion         : 1.0 (0.0.0.0)
        Name                    : Default Company Policy
        DistinguishedName       : CN=Default Company Policy,CN=Retention Policies Container,CN=Company,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain,DC=com
        Identity                : Default Company Policy
        Guid                    : 56417b54-af3b-4c14-bd3c-9dcf9bdd133e
        ObjectCategory          : domain.com/Configuration/Schema/ms-Exch-Mailbox-Recipient-Template
        ObjectClass             : {top, msExchRecipientTemplate, msExchMailboxRecipientTemplate}
        WhenChanged             : 2/8/2013 2:18:11 PM
        WhenCreated             : 2/8/2013 2:11:18 PM
        WhenChangedUTC          : 2/8/2013 10:18:11 PM
        WhenCreatedUTC          : 2/8/2013 10:11:18 PM
        OrganizationId          :
        OriginatingServer       : server.domain.com
        IsValid                 : True

    Read the article

  • Clean out a large MediaWiki text table

    - by Bart van Heukelom
    I just discovered that an old MediaWiki of mine was infested with spam, and the database table named "text" (which contains the page content) is 3GB in size. I've deleted all the spam pages manually, but the table is still the same size. I also wonder how it got to 3GB in the first place, since there wasn't that much spam (about a hundred medium-sized pages). How can I get rid of this mess? If you want to inspect the wiki, it's over here. The database is MySQL 5.0.75.
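
    Two things are likely going on. Deleting a page in MediaWiki archives its revisions rather than removing them, so the text rows stay behind; and MySQL does not shrink data files when rows are deleted, so even purged space is not returned to the OS. The standard cleanup, sketched with a placeholder database name (take a backup first; purging archived revisions makes the deleted spam unrecoverable):

        cd /path/to/mediawiki
        php maintenance/deleteArchivedRevisions.php --delete   # drop archived (deleted-page) revisions
        php maintenance/purgeOldText.php --purge               # remove text rows nothing references
        mysql -u root -p wikidb -e 'OPTIMIZE TABLE text;'      # rebuild the table to reclaim space

    OPTIMIZE TABLE returns space to the filesystem for MyISAM; for InnoDB without innodb_file_per_table the file will not shrink, though the freed space is reused internally.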

    Read the article

  • I am trying to set up phpMyAdmin to use with remote MySQL databases on Scientific Linux release 6.2

    - by techsjs2012
    I am trying to set up phpMyAdmin to use with remote MySQL databases on Scientific Linux release 6.2. If I use the mysql command line to connect to the remote database it works great, but if I use phpMyAdmin I get "#2002 Cannot log in to the MySQL server". I have found that if I do a setenforce 0, it works from phpMyAdmin to my remote database, but once I reboot or set setenforce back to 1 it stops working again. I know setenforce 0 is not the right thing to do, but can someone please give me detailed steps on how to get this working the right way? I am new to Scientific Linux and have been having some issues. Thanks.
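
    Since setenforce 0 fixes it, this is an SELinux policy decision rather than a MySQL problem: the default policy stops Apache (and therefore phpMyAdmin) from opening outbound network connections. The persistent fix is a boolean, not disabling enforcement:

        # allow httpd to reach remote database ports, persistently (-P):
        setsebool -P httpd_can_network_connect_db 1
        # some releases need the broader boolean instead:
        # setsebool -P httpd_can_network_connect 1
        getsebool -a | grep httpd_can_network    # verify the new values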

    Read the article

  • Does TFS 2010 lock a project collection when it's being cloned?

    - by Hirvox
    We're planning to migrate a project collection currently hosted on TFS 2010 to TFS 2012. We want to keep the current installation running while resolving any issues that might arise, so we need to copy the current project collection to the new server. However, TFS doesn't allow us to attach a restored database backup directly. The database first must be detached from the original TFS installation. We can get around that limitation by cloning the project collection and detaching the clone, but we're not sure whether that would also impact the original project collection. Does TFS lock the original project collection while it's being cloned?

    Read the article

  • "The requested operation could not be completed due to a file system limitation" 3202

    - by user46529
    I back up a SQL Server database and it fails: BACKUP DATABASE dd TO DISK = '\\backupServer\backups\dd.bak' WITH COMPRESSION, CHECKSUM, NOFORMAT, INIT, BlockSize = 65536, BufferCount = 2200, MaxTransferSize = 4194304 The backup size is 3TB and I have 6TB of free space on the backup server. I am using backup parameters per the SQLCAT whitepaper. Everything works fine when I back up to a local HDD, but it always fails, after about 6 hours, when I back up to the network share. I can't find out why. Thank you. Yes, the backup over the network is fastest and saves me 3TB of local disk space :) Thanks for pointing to the memory issue. I left 4GB to the OS and it worked!
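
    For reference, those SQLCAT-derived parameters imply a very large memory budget: 2200 buffers at a 4 MB MaxTransferSize is roughly 8.6 GB of backup buffer memory on top of everything else, which fits the observation that leaving 4 GB to the OS made it work. A gentler sketch of the same backup:

        -- 64 buffers x 4 MB = 256 MB of backup buffer memory
        BACKUP DATABASE dd
        TO DISK = '\\backupServer\backups\dd.bak'
        WITH COMPRESSION, CHECKSUM, NOFORMAT, INIT,
             BLOCKSIZE = 65536,
             BUFFERCOUNT = 64,
             MAXTRANSFERSIZE = 4194304;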

    Read the article

  • MS Access 2007 end user access

    - by LtDan
    I need some good advice. I have used Access for many years and I use SharePoint, but never the two combined. My newly created Access db needs to be shared with many users across the organization. The back end is SQL, and the old way to distribute the database would be placing the db on a shared drive, connecting the PCs' ODBC connections to the SQL db, and then the users would open the database and have at it. This has become the OLD way. What is the best (and simplest) way to allow the end users to use a front end for data entry/editing, reporting, etc.? Can I create a link through SharePoint so the users just open it from there? Your advice is greatly appreciated.

    Read the article

  • Can you convert an address to a zip code in a spreadsheet?

    - by moe37x3
    Given a column of street addresses with city and state but no zip in a spreadsheet, I'd like to put a formula in a second column that yields the ZIP code. Do you know a way to do this? I'm dealing with US addresses, but answers pertaining to other countries are interesting, too. UPDATE: I guess I'm mostly hoping that there's a way to do this in Google Spreadsheets. I realize that you need to access a vast ZIP code database to do this, but it seems to me that such a database is already inside Google Maps. If I put an address in there without ZIP code, I get back an address with ZIP code. If Maps can do that lookup, maybe there's a way to make it happen in Spreadsheets, too.

    Read the article
