Search Results

Search found 38931 results on 1558 pages for 'database testing'.

Page 563/1558

  • Port forwarding no longer works

    - by Auryn
    Prior to testing an OpenVPN installation, I set up a basic VPN server using the software already built into Windows 7. Port forwarding on the Linksys router worked as normal and I was able to connect remotely. After installing OpenVPN Access Server on a spare box running Ubuntu and adding the new ports to be forwarded, I was unable to access the VPN from an external source: the required ports all indicated that they were closed. (During testing, XRDP and VNCSERVER were also installed to facilitate access to the box.) Checking back on the Windows 7 VPN, there was no access to that VPN setup either. All ports now report as closed despite being previously open, even ports that were being used for other services. Adding and removing port-forwarding rules seems to have no effect. At this point, in order to troubleshoot, both the firewall and the anti-virus software have been disabled on the Windows 7 machine. Could this be just a router issue? Is there any way out of this without having to reset and reconfigure the router?
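
    A quick external check of the forwarded ports can help rule the router in or out; here is a minimal sketch in Python, to be run from a machine outside the network. The host and port numbers are placeholders, not values from the question.

        import socket

        HOST = "203.0.113.10"        # hypothetical public IP of the router
        PORTS = [1723, 1194, 3389]   # hypothetical forwarded ports (PPTP, OpenVPN, RDP)

        for port in PORTS:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(3)
                # connect_ex returns 0 only if the TCP handshake completes
                result = s.connect_ex((HOST, port))
                print("port %d: %s" % (port, "open" if result == 0 else "closed/filtered"))

    If every port shows closed even while a service is confirmed listening internally, the router (or the ISP) is the likely culprit.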

    Read the article

  • InnoDB recovery from .ibd files

    - by mr heLL
    My website crashed a few days ago. The hosting company says an InnoDB database crashed, and they sent over the MySQL data folder. I tried to restore the database, but phpMyAdmin only shows the MyISAM tables. I checked the database with Navicat; when I click an InnoDB table, I get this error: table 'xyz.wp_posts' doesn't exist. Is there any way to fix this on Windows? Feel free to download the db: www.degisimanaliz.com/xyzdb.tar.gz Very old backup: www.degisimanaliz.com/29_Ocak_Yedek_deganaliz.sql.gz
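
    The usual cause is that the .frm files came along but the InnoDB data dictionary (ibdata1) did not, so InnoDB has no record of the tables. If the tables were created with innodb_file_per_table, a per-table rescue along these lines is sometimes possible. This is only a sketch: the paths, credentials, and schema file are hypothetical, and on MySQL 5.0/5.1 the import can still fail if the internal tablespace IDs do not match.

        import shutil, subprocess

        def mysql(sql):
            # Shell out to the mysql client; credentials are placeholders.
            subprocess.run(["mysql", "-uroot", "-psecret", "xyz", "-e", sql], check=True)

        # 1. Recreate the table with the exact original schema (e.g. from the old dump).
        mysql(open("wp_posts_schema.sql").read())
        # 2. Detach the freshly created, empty tablespace.
        mysql("ALTER TABLE wp_posts DISCARD TABLESPACE;")
        # 3. Drop the recovered .ibd file into the database directory (hypothetical paths).
        shutil.copy("recovered/xyz/wp_posts.ibd", "C:/mysql/data/xyz/wp_posts.ibd")
        # 4. Re-attach it.
        mysql("ALTER TABLE wp_posts IMPORT TABLESPACE;")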

    Read the article

  • Development server?

    - by ajsie
    For a project there will be me and one more programmer developing a web service, and I wonder what the development environment should look like. We need central storage (documents, pictures, business materials etc.), file version handling, LAMP (for testing the web service) and so on. I have never set up an environment for this before and would like suggestions from experienced people on which tools to use for effective collaboration. What crossed my mind:

    Separate applications:
    - Google Wave (for communication back and forth, setting up guidelines, other information)
    - TeamViewer (desktop sharing)
    - Skype (calling)

    VPS (Ubuntu server):
    - SVN (version tracking)
    - FTP (central storage)
    - LAMP (testing the web service)
    - SSH (managing the VPS)

    Is this an appropriate programming environment? And regarding the VPS, is it best practice to use ONE VPS for all the tasks listed above? All suggestions and feedback are welcome!

    Read the article

  • Disk space consumed

    - by aravind-zoniac
    I have a very serious problem on one of my client servers. The remote server runs RedHat ES 5.2 with PostgreSQL as the database. I was trying to clone a database. The hard drive had 32 GB of free space before the clone. I started cloning the database, and during the process there was an internet issue and PuTTY disconnected before the clone finished. I opened a fresh session and could see only 2.5 GB of available space, yet the clone does not appear in the psql terminal. Is there any way to get back the 29 GB that was consumed?
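
    One thing worth checking is whether the interrupted CREATE DATABASE left a clone behind. A sketch for listing databases by on-disk size, assuming the psycopg2 driver and placeholder credentials; if the half-finished clone shows up in the list, dropping it should return the space:

        import psycopg2

        conn = psycopg2.connect(host="localhost", dbname="postgres",
                                user="postgres", password="secret")
        conn.autocommit = True
        cur = conn.cursor()
        cur.execute("""SELECT datname, pg_size_pretty(pg_database_size(datname))
                       FROM pg_database ORDER BY pg_database_size(datname) DESC""")
        for name, size in cur.fetchall():
            print(name, size)
        # If the stray clone is listed, reclaim its files (name is hypothetical):
        # cur.execute("DROP DATABASE half_cloned_db")

    If nothing in pg_database accounts for the space, the interrupted copy may have left orphaned files under the cluster's base/ directory, which calls for careful manual comparison against the OIDs in pg_database.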

    Read the article

  • Please explain my fio results - is O_SYNC|O_DIRECT misbehaving on Linux?

    - by Zoltan
    I'm going mad trying to figure out what the problem could be with one of our storage boxes. With a simple fio script I'm testing random writes using bs=1M and direct=1. The SSD is a Samsung 840 Pro attached to an LSI HBA (3 Gbit/s ports). This is the result I'm getting under FreeBSD 9.1, regardless of sync being set to 0 or 1:

        WRITE: io=13169MB, aggrb=224743KB/s, minb=224743KB/s, maxb=224743KB/s, mint=60002msec, maxt=60002msec

    On Linux, this is the result with sync=0:

        WRITE: io=14828MB, aggrb=253060KB/s, minb=253060KB/s, maxb=253060KB/s, mint=60001msec, maxt=60001msec

    and with sync=1:

        WRITE: io=6360.0MB, aggrb=108542KB/s, minb=108542KB/s, maxb=108542KB/s, mint=60001msec, maxt=60001msec

    My understanding is that since I'm operating on the raw block device, O_SYNC should not make any difference - there's no filesystem, no barriers, nothing between the writes and the drive itself, especially with O_DIRECT|O_SYNC set. Any ideas? For reference, here's the fio script I'm testing with:

        [global]
        bs=1M
        ioengine=sync
        iodepth=4
        size=16g
        direct=1
        runtime=60
        filename=/dev/sdh
        sync=1

        [rand-write]
        rw=randwrite
        stonewall
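
    For anyone reproducing the access pattern outside fio, here is a minimal sketch of what direct=1 sync=1 amounts to: a page-aligned 1 MiB write to the raw device with O_DIRECT|O_SYNC. The flags are Linux-specific, and /dev/sdh is the device from the question - point this only at a disposable disk, since it overwrites data.

        import mmap, os

        BS = 1024 * 1024
        fd = os.open("/dev/sdh", os.O_WRONLY | os.O_DIRECT | os.O_SYNC)
        buf = mmap.mmap(-1, BS)      # anonymous mmap is page-aligned, as O_DIRECT requires
        buf.write(b"\xab" * BS)
        os.pwrite(fd, buf, 0)        # with O_SYNC, returns only after the device reports durability
        os.close(fd)

    One plausible explanation for the gap: on Linux, an O_SYNC write to a block device also triggers a flush of the drive's volatile write cache, so it can legitimately cost throughput even with no filesystem in the way.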

    Read the article

  • Ulimit settings in Oracle 11g on Linux 5

    - by Stuart
    Is there an issue with "ulimit -Hn" being set too low (at 1024) when Oracle recommends 65536? This is for 64-bit Oracle 11g on Linux 5. It is one of the settings that appears to be woefully short of its recommendation, but I am also aware that the database server in question is an Oracle Data Guard local standby and should only really have a connection or two from its primary database server (to ship the redo logs across). The local standby database server has 'hung' about three times in as many months and then requires a reboot. I do not have access to this server, so I rely on others to look at logs etc. A sanity check of the kernel parameters uncovered the low value for "ulimit -Hn". Has anyone ever seen that 'low' value cause a hang or crash?
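
    Since limits are per-process, what matters is what the Oracle processes actually inherited, not what a login shell reports. A sketch that whoever has access could run (stdlib only; it assumes a kernel new enough to expose /proc/<pid>/limits, which stock EL5 kernels predate):

        import os

        # Print the effective open-files limit of every running Oracle process.
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open("/proc/%s/cmdline" % pid, "rb") as f:
                    cmd = f.read().replace(b"\0", b" ").decode(errors="replace")
                if "oracle" not in cmd:
                    continue
                with open("/proc/%s/limits" % pid) as f:
                    for line in f:
                        if line.startswith("Max open files"):
                            print(pid, line.strip())
            except OSError:
                continue   # process exited, or not readable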

    Read the article

  • Why the huge difference between etch and lenny MySQL

    - by rmarimon
    I've been working on a program for the last year. The development environment uses a MySQL database running on Debian etch, version "mysql Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (i486) using readline 5.2". The production environment runs on Debian lenny with version "mysql Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu (i486) using readline 5.2". I was just timing some database access, and what takes 150 seconds in the development environment takes 300 in production. I checked the /etc/mysql/my.cnf files on both systems and the only differences are:

        # development
        bind-address = 10.168.1.82
        log_bin = /var/log/mysql/mysql-bin.log

        # production
        bind-address = 127.0.0.1
        myisam-recover = BACKUP
        #log_bin = /var/log/mysql/mysql-bin.log

    I dumped the database from production and loaded it into development, and with the same data the development server still takes half the time! What should I check?
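
    my.cnf is not the whole story - compiled-in defaults and package tuning changed between 5.0.32 and 5.0.51a - so it may help to diff the full runtime configuration of the two servers. A sketch that assumes only the mysql command-line client; hosts and credentials are placeholders:

        import subprocess

        def variables(host, user="root", password="secret"):
            # Fetch SHOW VARIABLES from one server as a {name: value} dict.
            out = subprocess.check_output(
                ["mysql", "-h", host, "-u", user, "-p" + password,
                 "--batch", "--skip-column-names", "-e", "SHOW VARIABLES"])
            return dict(line.split("\t", 1) for line in out.decode().splitlines())

        dev = variables("10.168.1.82")    # etch box
        prod = variables("127.0.0.1")     # lenny box, run locally there
        for name in sorted(set(dev) | set(prod)):
            if dev.get(name) != prod.get(name):
                print("%-40s dev=%s  prod=%s" % (name, dev.get(name, "-"), prod.get(name, "-")))

    Variables like key_buffer_size, query_cache_size, innodb_buffer_pool_size and innodb_flush_log_at_trx_commit are the usual suspects for a clean 2x difference.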

    Read the article

  • Duplicate incoming TCP traffic on Debian Squeeze

    - by Erwan Queffélec
    I have to test a homebrew server that accepts a lot of incoming TCP traffic on a single port; the protocol is homebrew as well. For testing purposes, I'd like to send this traffic both:
    - to the production server (say, listening on port 12345)
    - to the test server (say, listening on port 23456)

    My client apps are "dumb": they never read data back, and the server never replies anyway; my server only accepts connections, does statistical computations, and stores/forwards/services both raw and computed data. Client apps and hardware are so simple that there is no way I can tell the clients to send their stream to both servers, and using "fake" clients is not good enough. What could be the simplest solution? I can of course write an intermediary app that just copies incoming data and sends it on to the testing server, pretending to be the client, as sketched below. I have a single server running Squeeze and have total control over it. Thanks in advance for your replies.
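
    Such an intermediary is small; here is a minimal sketch of a duplicating proxy in Python (threads and sockets only). It assumes the proxy takes over the public port and production is reachable on another address - all addresses below are placeholders. A target that dies or stalls past the timeout is dropped so the production stream keeps flowing.

        import socket, threading

        LISTEN_PORT = 12345                                      # clients connect here
        TARGETS = [("10.0.0.1", 12345), ("10.0.0.2", 23456)]     # production, test

        def handle(client):
            ups = []
            for addr in TARGETS:
                try:
                    ups.append(socket.create_connection(addr, timeout=5))
                except OSError:
                    pass                       # a dead target must not block the stream
            try:
                while True:
                    data = client.recv(65536)
                    if not data:
                        break
                    for u in ups[:]:
                        try:
                            u.sendall(data)    # timeout set at connect time still applies
                        except OSError:
                            ups.remove(u)      # drop a target that fails mid-stream
            finally:
                for u in ups:
                    u.close()
                client.close()

        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", LISTEN_PORT))
        srv.listen(128)
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()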

    Read the article

  • Reset Network Load Balancer Connection Pool

    - by bill_the_loser
    I am currently working on load testing a web application on a virtual machine cluster. I am looking for a way to flush the connection pool / NLB cache so that it is as if each machine connecting to the NLB were connecting for the first time, rather than being directed back to the node it was on last time. This is a Windows 2003 Server cluster behind a Microsoft software-based Network Load Balancer. Additional information: to do the load testing I'm using virtual machines, one for each node on the cluster. Somehow two virtual machines ended up connecting to the same node, and I'm looking for an easier way to reset those connections than going into the NLB Manager and stopping and starting each node on the NLB. Update: we went ahead and changed the affinity on all of the nodes of the cluster to none. Now it's a non-issue.

    Read the article

  • Error installing MediaWiki on Ubuntu, Postgres 8.3

    - by Masi
    How can I resolve the error message on the last line?

        ....
        # Installing MediaWiki with php file extensions
        # Environment checked. You can install MediaWiki.
        # Generating configuration file...
        # Database type: PostgreSQL
        # Loading class: DatabasePostgres
        # Attempting to connect to database "wikidb" as "wikiuser"... error: No database connection
        # Checking the version of Postgres... Warning: pg_version(): supplied argument is not a valid PostgreSQL link resource in /var/www/wiki/includes/db/DatabasePostgres.php on line 1078
        FAILED. Required version is 8.1. You have 7.3 or earlier

    I am using Postgres 8.3, which makes the error message strange. The file "LocalSettings.php" was not created in the config directory, so I cannot continue the installation without solving the problem.
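
    Note the earlier line, "error: No database connection": the version check then runs pg_version() on a dead handle (hence the warning) and falls back to the worst-case "7.3 or earlier" message, so the Postgres version itself is probably fine and the real problem is that the installer cannot connect as wikiuser. A quick way to test the same connection outside MediaWiki - a sketch assuming the psycopg2 driver, with the names from the log and a placeholder password:

        import psycopg2

        try:
            conn = psycopg2.connect(host="localhost", dbname="wikidb",
                                    user="wikiuser", password="secret")
            cur = conn.cursor()
            cur.execute("SELECT version()")
            print(cur.fetchone()[0])     # should report PostgreSQL 8.3.x
        except psycopg2.OperationalError as exc:
            print("connection failed, same as the installer:", exc)

    If this fails too, the usual suspects are pg_hba.conf entries, a missing wikiuser role, or Postgres not listening where the installer expects.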

    Read the article

  • Change GroupWise 7 User Password from NetWare Server Console

    - by Scott Wolf
    I have a GroupWise 7 server in place that we use for testing purposes. The previous administrator didn't bother to make a note of any of the account passwords on the machine. I have access to the server console, but I can't log in via ConsoleOne or anything like that. Is there a command-line utility I can run from the server console to reset a GroupWise user password? I just need to have one account up and running for testing. If there's a CLI utility I can use to create a new account, that would work just as well. Any help would be greatly appreciated; I'm kind of stuck at this point.

    Read the article

  • SQL Server 2008 Failover and Load Balancing

    - by Jedi Master Spooky
    I have a project with a 2 TB database (450,000,000 rows). I need to provide a solution for this project that gives failover and load balancing; what do you recommend? We are going to use a NetApp filer for the data files and for the project's file system. I read that SQL clustering does not provide load balancing. If I cannot have this feature and have to settle for failover only, what server would you recommend (I presume the key feature here is memory)? We are adding 1,000,000 rows a day. Once a row is inserted we do a lot of updates to it for about a week, then the row goes static. Because of this I am thinking about some kind of history table or database or something like that. I am open on the OS implementation; I was thinking of Windows Server 2008 with clustering, but this depends on the database solution.

    Read the article

  • .NET Framework 4.0 installation is very slow

    - by Dimitri C.
    On my Windows Vista machine, it takes a full 12 minutes to install the .NET Framework 4.0. a) Is this normal? b) If not, can something be done about it? The reason I'm concerned about the speed is that it slows down the testing of our product installer considerably. Testing an installer is time-consuming already, but this new .NET Framework installer makes it almost unworkable. Detail: I did the test on a clean Vista inside a VirtualBox virtual machine. This setup does not show any performance issues in other situations. I tried both dotNetFx40_Full_x86_x64.exe and dotNetFx40_Client_x86_x64.exe; they both take approximately the same time to install.

    Read the article

  • Deploy Rails app from Hudson

    - by brad
    I'm using Hudson as my CI server and it works great: builds run their tests, code metrics, all that good stuff. But at the moment that's it - no automated deployment; I have to do that manually afterwards. I haven't found any sort of Capistrano plugin for Hudson, and I can't even see where I could just run my cap deploy after a successful build. Does anyone have any idea what I need in order to automate a deployment to a testing server on a successful build? I'd like each commit to force a build and in turn deploy to testing so I can see everything right away.

    Read the article

  • Cheapest server per gigabit throughput [closed]

    - by nethgirb
    I'm looking for a set of servers for performance testing a network, and secondarily testing some applications on the servers. Their most important task is simply to pump out data: from an application like memcached or just dumped from a large file in memory into a TCP flow (i.e., disk performance doesn't matter). This should happen over one or more 1 gigabit Ethernet ports, and the machines should run Linux (ideally), or perhaps Mac OS X or some other *nix. Other than that, there are few constraints (e.g., even something ARM-based could be fine). So here's the question: What's the cheapest server per gigabit? Price and power are both considerations.
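
    For the raw "pump out data" requirement, the benchmark itself can be a few lines; here is a sketch of a sender that pushes a memory-resident buffer down one TCP flow and reports the achieved rate. Host and port are placeholders, and any sink works on the other end (e.g. a netcat listener redirected to /dev/null):

        import socket, time

        HOST, PORT = "192.168.1.2", 5001   # hypothetical receiver
        CHUNK = b"\x00" * (1 << 20)        # 1 MiB buffer, resent from memory
        SECONDS = 10

        s = socket.create_connection((HOST, PORT))
        sent = 0
        start = time.time()
        while time.time() - start < SECONDS:
            s.sendall(CHUNK)
            sent += len(CHUNK)
        elapsed = time.time() - start
        s.close()
        print("%.2f Gbit/s" % (sent * 8 / elapsed / 1e9))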

    Read the article

  • Can Resource Governor for SQL Server 2008 be scripted?

    - by blueberryfields
    I'm looking for a method to adjust Resource Governor settings automatically, in real time. Here's an example: imagine that I have 10 applications, each hitting a different database on the same database machine. For normal operations they do not hit the database very hard, so I might want each one to have 10% CPU power reserved. Occasionally, though, one or two of them might spike and run an operation which could really use the extra power to run faster. I'd like to be able to adjust to compensate (say, reducing the non-spiking apps to 3% and splitting the difference between the spiking apps). This is a kind of poor man's method of trying to dynamically adjust resource allocation and priorities. Scripts (or something script-like) are preferred, since the requirement is for meta-level adjustments to be possible in real time.
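
    Resource Governor is driven entirely by T-SQL (ALTER RESOURCE POOL followed by ALTER RESOURCE GOVERNOR RECONFIGURE takes effect without a restart), so an external agent can rewrite the limits on the fly. A sketch assuming the pyodbc driver and pre-created pools with hypothetical names; spike detection is left as a stub:

        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=dbhost;Trusted_Connection=yes",
                              autocommit=True)
        cur = conn.cursor()
        POOLS = ["app%d_pool" % i for i in range(1, 11)]   # hypothetical pool names

        def set_cpu(pool, pct):
            cur.execute("ALTER RESOURCE POOL [%s] WITH (MAX_CPU_PERCENT = %d)" % (pool, pct))
            cur.execute("ALTER RESOURCE GOVERNOR RECONFIGURE")

        def rebalance(spiking):
            # The question's policy: quiet pools drop to 3%, spikers split the rest.
            quiet = [p for p in POOLS if p not in spiking]
            for p in quiet:
                set_cpu(p, 3)
            share = (100 - 3 * len(quiet)) // max(len(spiking), 1)
            for p in spiking:
                set_cpu(p, share)

        rebalance(["app4_pool", "app7_pool"])   # call this when monitoring detects a spike

    Worth noting: MAX_CPU_PERCENT is a soft cap that only bites under CPU contention, which works in favor of this scheme - idle reservations aren't wasted.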

    Read the article

  • Is this a bug in Profiler or Entity Framework?

    - by AjarnMark
    Using Entity Framework 4 with stored procedures and SQL Server 2008 SP1... When running SQL Server Profiler (TSQL_SPs template), the lines that show my stored procedure call and its statements say that they executed in DatabaseID = 1 (master), but they actually run in my application database (ID = 8). The procedures execute properly and return the data, and they exist only in my application database, so why does Profiler mark those lines as being in master? Is this a bug in Profiler? Is it a bug in EF4? Note that when running the same code against a SQL 2000 instance, Profiler correctly shows the application's database ID.

    Read the article

  • SQL Server User Mapping - Limit view of databases for a user

    - by Jaime
    Hi there, I am adding a new login with SQL Server authentication. I set its server role as public and then went into User Mapping, selecting the only database this user should have access to. I then changed the default schema to dbo and made this user the db_owner. When I connect to the instance using the new user's credentials, I can see not only the database he should have access to but all the other attached databases as well. How can I limit this user to seeing only the database he has access to? Thanks in advance!
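
    Visibility of other databases is governed by the server-level VIEW ANY DATABASE permission, which public holds by default. One approach worth testing, sketched here via the pyodbc driver with placeholder names: deny that permission, with the caveat that the login then sees only databases it owns (ownership, not db_owner membership, is what counts), so the database's owner may need to be switched to this login.

        import pyodbc

        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=dbhost;Trusted_Connection=yes",
                              autocommit=True)
        cur = conn.cursor()
        cur.execute("DENY VIEW ANY DATABASE TO [appuser]")                    # hide the catalog
        cur.execute("ALTER AUTHORIZATION ON DATABASE::[appdb] TO [appuser]")  # make his db visible again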

    Read the article

  • What to do for a 1 million concurrent web application? [duplicate]

    - by Amit Singh
    This question already has an answer here: "How do you do load testing and capacity planning for web sites?"

    There are a few things that I would like to know here. What server configuration do I need? If I am deploying on EC2, how many VMs do I need and what should their configuration be? What options do I have for load testing with 1 million concurrent users? Any pointers (for PHP) on how to code, or what to keep in mind, for such an application? I admit I don't exactly know what to ask, because this is my first application at this scale. But one thing is clear: this application should pass a load test of 1 million concurrent requests.
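
    For scale: the shape of a concurrency test is simple even though the numbers are not. A stdlib-only sketch that holds N simultaneous HTTP requests against one host (host and count are placeholders). One load generator tops out at tens of thousands of sockets, which is why million-user tests are run from many machines, usually with tools like JMeter or Tsung rather than hand-rolled scripts:

        import asyncio

        HOST, PORT, CONCURRENCY = "test.example.com", 80, 10000   # placeholders

        async def one_request():
            reader, writer = await asyncio.open_connection(HOST, PORT)
            writer.write(b"GET / HTTP/1.1\r\nHost: " + HOST.encode() +
                         b"\r\nConnection: close\r\n\r\n")
            await writer.drain()
            await reader.read()        # consume the full response
            writer.close()

        async def main():
            results = await asyncio.gather(*(one_request() for _ in range(CONCURRENCY)),
                                           return_exceptions=True)
            errors = sum(isinstance(r, Exception) for r in results)
            print("ok:", len(results) - errors, "errors:", errors)

        asyncio.run(main())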

    Read the article

  • Script / command to drop all connections / locks in Sybase SQL Anywhere 9?

    - by nxzr
    I've recently become responsible for administering an application which is essentially a front end to a Sybase SQL Anywhere 9 database, including the database itself. I'd like to use unload table to efficiently export the data for backup and, in the case of a few tables, ETL to get it into a reporting database / small scale data warehouse. The problem is that the client application crashes and leaves dead connections and shared locks on a pretty regular basis, which seems to prevent unload table from getting the (brief) exclusive locks it needs. Currently I use Sybase Central to verify that these connections are in fact zombies and drop them myself at the end of the day / week. Is there a command or script to drop all connections? Being able to drop everything at once after verifying that they're unneeded would be quite helpful but I haven't found a way to do it.
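
    SQL Anywhere can do this from SQL: sa_conn_info() lists the connections and DROP CONNECTION <id> terminates one, so the end-of-day cleanup can be scripted. A sketch over ODBC assuming the pyodbc driver and a hypothetical DSN; it prints what it drops, and the zombie check (e.g. filtering on user or idle time) is deliberately left to you:

        import pyodbc

        conn = pyodbc.connect("DSN=sqlany9;UID=dba;PWD=sql", autocommit=True)
        cur = conn.cursor()

        # Our own connection number, so the script doesn't drop itself.
        me = int(cur.execute("SELECT connection_property('Number')").fetchone()[0])

        for row in cur.execute("SELECT Number, Name, Userid FROM sa_conn_info()").fetchall():
            if row.Number == me:
                continue
            print("dropping connection", row.Number, row.Name, row.Userid)
            cur.execute("DROP CONNECTION %d" % row.Number)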

    Read the article

  • Oracle licensing/pricing?

    - by Quandary
    Question: I'd like to download the Oracle 11g database for evaluation purposes. I found this link for downloads: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html It says one must register 'for accessing premium contents', yet at the same time it looks like one can download the full database for free. Surely Oracle doesn't give it away for free, but the registration makes no mention of any cost/fees or any billing address. Is this registration free, or, as 'premium' suggests, will you get a bill for it (supposing you enter true data)? How does Oracle handle licensing/payment? I cannot see any price tag anywhere, nor any information on it on that registration page.

    Read the article

  • Should my servers boot from VHD?

    - by tony roth
    I've been testing native VHD boot on several servers. It seems to be pretty transparent in terms of deployment, and with my seat-of-the-pants testing I have not noticed any difference in performance. The main reason I want to boot from VHD is the portability of the images between different hardware and Hyper-V servers. The following roles will be installed:
    - DFSR
    - DHCP
    - IIS
    - application server
    - DC (haven't tested this yet, but I see no reason why it won't work)

    With the above low-impact (in terms of performance) roles, do you think booting from VHD is appropriate? Thanks.

    Read the article

  • Automatically reconnect to ODBC sources?

    - by stefan.at.wpf
    I am using Asterisk 1.8.10.1 and a MySQL database connected via ODBC to store CDRs. When my MySQL database isn't available when Asterisk starts, or has an outage while Asterisk is running, I would expect Asterisk to retry the connection to the database, but this doesn't happen! Does anyone know where I can enable some kind of automatic reconnect to databases in Asterisk? My res_odbc.conf looks like this:

        [asterisk]
        enabled => yes
        dsn => asterisk-connector
        username => user
        password => pass
        pre-connect => yes
        pooling => no
        limit => 1
        idlecheck => 1
        negative_connection_cache => 1
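
    If no reconnect knob turns up, one blunt workaround is an external watchdog that reloads res_odbc once MySQL is reachable again, forcing fresh handles. A cron-able sketch; the reload command is standard Asterisk CLI, while the host/port and state file are placeholders:

        import os, socket, subprocess

        STATE = "/var/run/mysql_was_down"   # marker: MySQL was down on the last check

        def mysql_up():
            try:
                socket.create_connection(("127.0.0.1", 3306), timeout=3).close()
                return True
            except OSError:
                return False

        if mysql_up():
            if os.path.exists(STATE):       # just recovered from an outage
                os.remove(STATE)
                subprocess.call(["asterisk", "-rx", "module reload res_odbc.so"])
        else:
            open(STATE, "w").close()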

    Read the article

  • Report Builder 2.0 - Creating DataSet - User Not Authorized

    - by Fahad
    Hello, we are currently using SQL Server Reporting Services and we would like to use Report Builder so our customers can create reports themselves. I have created a User on the server. I have added this user to the SQLServerMSSQLUser and SQLServerReportServerUser groups. I have given this User db_datareader access to the required database and to the Reporting Services database. I've also tried giving the User db_owner access to the Reporting Services db's. And on the Report Manager, this User is a System_User, but has all access (every checkbox is checked). When I connect using Report Builder, I can select the Report Model to create a DataSource, but when I try to create a DataSet, I get the following error: An error occurred while connection to datasource 'DataSource1'. The details are: 'User Not Authorized'. Does anyone know what server permissions I forgot to set? I'm assuming it's a Windows permissions issue because I do not see any database login errors in the event logs.

    Read the article

  • Creating a bootable external drive in OSX

    - by Brian Postow
    I want to do some slightly dangerous testing of an install package I've written, so I've got an external drive that I want to make into a bootable OS X disk; then I can boot from the external, run my install, and if it screws things up it doesn't actually affect the usability of my machine. The problem is that when I insert the disc that came with my computer (actually another computer in the office, but they're both Minis) and try to run the installer, it says "You cannot install OSx 10.6 on this computer". The computer is ALREADY running 10.6, so that's a rather silly error message... It does this when I boot to the DVD-ROM as well. Am I doing something really dumb, or what?

    Read the article
