Search Results

Search found 21336 results on 854 pages for 'db api'.


  • Working with a copy of my Virtual Machine

    - by Gaby Reyna
    Hi there, I'm trying to make a backup/copy of my virtual machine. It runs Windows Server 2000, and I want to make some modifications and run tests without changing the original. The copy is to be used on Windows 7. What I'm trying to do is work on/modify an application that communicates with a DB; both the application and the DB are hosted on the VM. Since I don't want to screw up the stable version, I want to know how to copy the VM to my desktop PC so I can experiment without worries. Now, someone told me I might have problems because the copy will have the same IP as the original, and that if I change it, it won't work properly. Is this true? If it is indeed true, any suggestions?
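
    If the IP clash does turn out to be the problem, one hedged workaround is to give the clone its own address from inside the guest (the adapter name and addresses below are placeholders, not taken from the question):

        rem Run inside the cloned VM; adapter name and addresses are hypothetical
        netsh interface ip set address "Local Area Connection" static 192.168.1.51 255.255.255.0 192.168.1.1

    If the application reaches the DB by a hard-coded IP rather than a hostname, its connection settings would need the same edit.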


  • Using an AWS EC2 server to host a busy website; I need to set up load balancing

    - by Philip Isaacs
    My company has one EC2 server running on AWS, with a MySQL DB and Apache on the same instance. This one instance hosts a website built on the PHP Zend Framework. The site performs badly once it starts getting a lot of traffic, so I'm looking for advice on how to set up something that can handle the load better. My first question is: should I move the MySQL DB onto a separate EC2 instance, or perhaps use AWS's RDS service, which looks like a nice option? I'm sort of new to some of this, but I'm guessing I'll need at least two EC2 instances to serve the website from, and some sort of load-balancing mechanism to distribute traffic. But maybe not; I'm not sure. Also, what are some best practices for replicating the data so that it stays in sync on both instances? Okay, I know these are a lot of questions, but I don't know where to start, so any advice will help.
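
    A minimal sketch of the app-side change if the DB moves off-box (whether to a second EC2 instance or RDS): in Zend Framework 1, the adapter just needs the new endpoint. The host, credentials, and dbname below are placeholders, not taken from the question:

        // Bootstrap (or application.ini equivalent); all values are hypothetical
        $db = Zend_Db::factory('Pdo_Mysql', array(
            'host'     => 'mydb.abc123.us-east-1.rds.amazonaws.com',
            'username' => 'appuser',
            'password' => 'secret',
            'dbname'   => 'production',
        ));
        Zend_Db_Table::setDefaultAdapter($db);

    With the DB separated, the web tier becomes stateless PHP that can sit behind a load balancer and be cloned as traffic grows.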


  • Overriding Apache auth directive

    - by Machine
    Hi! I'm trying to allow public access to a method that generates a WSDL file for our API. The rest of the site is behind basic auth protection. Can you guys take a look at the following virtual-host configuration and see why the override does not take place?

        <VirtualHost *:80>
            ServerName xyz.mydomain.com
            DocumentRoot /var/www/dev/public
            <Directory /var/www/dev/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                SetEnv APPLICATION_ENV testing
            </Directory>
            <Location />
                AuthName "XYZ Development Server"
                AuthType Basic
                AuthUserFile /etc/apache2/xyz.passwd
                Require valid-user
            </Location>
            <Location /api/soap/wsdl>
                Satisfy Any
                allow from all
            </Location>
        </VirtualHost>
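
    Not a confirmed fix, but one hedged guess worth testing: under Apache 2.2, Satisfy Any only bypasses the auth requirement when the host-based rules explicitly allow the request, and an allow without its own Order directive can behave unexpectedly once the Location blocks merge. A minimal variant of the open block:

        <Location /api/soap/wsdl>
            Order allow,deny
            Allow from all
            Satisfy Any
        </Location>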


  • Backup Script - Could Not Open Input File

    - by Iestyn
    This is the backup script that I've got going: http://pastebin.com/4g4E6wUz

    This is the cron info:

        /usr/local/bin/php /home/backups/backup-db.php --filename-dated ALL

    No matter what I do, I keep getting this error:

        Could not open input file: /home/backups/backup-db.php

    That's the correct location of the file. I just don't know what else to try. I feel I've been working on this for so long now that I've explored every avenue; on the other hand, sometimes I think that the time I've spent on it is clouding my thoughts and I'm missing something stupidly obvious. Just wondering if someone can give me a few pointers? Also, on a last note, does anyone know of a way/article to auto-generate a full backup of cPanel every * amount of days and store it in a location that I want? Kind regards.
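
    A few hedged checks that might narrow this down (the paths are from the question; the cron user name is a placeholder):

        # Confirm the file exists and is readable by the user the cron job runs as
        ls -l /home/backups/backup-db.php
        # Run the exact command by hand as that user
        sudo -u cronuser /usr/local/bin/php /home/backups/backup-db.php --filename-dated ALL
        # A stray carriage return or typo in the crontab entry can mangle the path
        crontab -l | cat -A | grep backup-db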


  • Adding SSL to Heroku site post launch

    - by dineth
    I have a Rails API that I want to deploy on Heroku. $20/month for an SSL site on Heroku is a little steep given I am not earning anything from this app yet. I am after advice, and wondering if it is possible to add SSL sometime in the future. This is for an iOS app that I'm writing. Basically, the idea is that I continue to use https://myapp.heroku.com through their piggyback SSL. Once I get some cash in, I want to transition to using https://www.myapp.com. At that point the API would still need to work for app users who haven't upgraded to a new version of the app that points to the new domain. Anyone know if this is possible? Would both URLs continue to work? My gut feeling tells me this is not possible. Any advice would help. Thanks!


  • How to efficiently dump a huge MySQL InnoDB database?

    - by Jagbir
    I've got an Ubuntu 10.04 production MySQL database server where the total size of the database is 260 GB, while the size of the root partition (where the DB is stored) is itself 300 GB; essentially, around 96% of / is full and there's no space left for storing a dump/backup etc. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently, with minimum downtime. I'm thinking along these lines:

    1. Request an extra drive to be attached to the server and take a dump onto that drive.
    2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
    3. When the migration is needed, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't entertain any write requests, and tell the app developers to update their config with the new IP address for the DB.

    What are your suggestions to improve this, or is there any alternate, better approach for this task?
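
    One space-saving variant of step 1, offered as a sketch rather than a tested recipe (hostname and target path are placeholders): stream the dump straight to the new server over SSH so nothing lands on the nearly-full root partition, and record the binlog coordinates needed for step 2's replication:

        # --single-transaction gives a consistent InnoDB dump without long locks;
        # --master-data=2 writes the binlog position into the dump as a comment
        mysqldump --single-transaction --master-data=2 --all-databases -u root -p \
            | gzip \
            | ssh user@new-server 'cat > /data/migration-dump.sql.gz'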


  • Can't find Windows 2000 domain after PDC change

    - by Mark A Kruger
    This is a Windows 2000 domain issue. I had an old Win2000 PDC that was beginning to fail, so, trying to be pre-emptive, I installed a new BDC, then "demoted" the old PDC and took it off the network. Now it appears that no member server can "find" the domain anymore. No logins work (for services, or RDP, or anything). What I've tried (based on googling):

    - Verified sysvol is shared on all servers.
    - Used nslookup to verify that DCs are being found.
    - netdiag /fix
    - Metadata cleanup routines.
    - Verified no firewall issues (port 389 etc.).
    - Seized all roles to the new PDC (I did that as part of the original promotion).
    - LMHOSTS file and NetBIOS settings.

    At the moment it seems like I can get the DCs returned but cannot contact them. I'm at a loss. My latest attempt was to remove a member server from the domain and try to "re-add" it. When I do that I get this message:

        The query was for the SRV record for _ldap._tcp.dc._msdcs.cfwebtools.com
        The following domain controllers were identified by the query:
        db-dev1.cfwebtools.com
        file-prod1.cfwebtools.com
        cfwt-pdc2.cfwebtools.com
        However no domain controllers could be contacted.

    It then goes on to ask if I've checked my A record and made sure they are running. Is there a way to force this domain to be seen? I also shared sysvol (or double-checked it) and restarted the dfsr service.

    More information: I got looking at sysvol and found it was not shared on two of these servers. Only one of them (db-dev1) has a "good", or at least populated, sysvol store. So I tried doing a D2 recovery of my PDC against that good sysvol, but it never syncs - or at least it does not seem to sync. I'm guessing if I could get sysvol and netlogon to kick in and replicate, that would fix my issue. I think these DCs aren't responding because they are waiting for replication, which is broken somehow. Would taking down all the DCs except for db-dev1 fix the issue, at least temporarily? I know I can't just copy the sysvol contents over to the other two, can I?
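
    A couple of hedged checks that sometimes separate "DNS is fine" from "the DC itself isn't answering" (standard Windows tools; the names are from the question):

        rem Confirm the SRV records resolve and list the expected DCs
        nslookup -type=SRV _ldap._tcp.dc._msdcs.cfwebtools.com
        rem Ask the DC locator for a DC from a member server
        nltest /dsgetdc:cfwebtools.com
        rem Verify a candidate DC answers on LDAP at all
        telnet db-dev1.cfwebtools.com 389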


  • How can I reroute a sub-domain to localhost + port number?

    - by urig
    I have several web applications running on my developer machine. They mimic our production web applications, which are hosted on sub-domains. For example:

    - api.myserver.com is mimicked by 127.0.0.1:8000
    - www.myserver.com is mimicked by 127.0.0.1:8008

    and so on. How can I make it so that, on my Windows 7 machine, HTTP calls to "api.myserver.com" (note the lack of port number) are redirected to 127.0.0.1:8000, etc.? Note that this needs to apply both to client-side calls (in the browser) and server-side calls (from IIS to the Python development server and vice versa). Do I need a proxy running locally to achieve this? Can you recommend such a tool?
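
    The hosts file alone can't map ports, so one hedged approach is to point the names at 127.0.0.1 and run a local reverse proxy on port 80 that routes by Host header. A minimal nginx sketch (server names and ports are from the question; the rest is an assumption):

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1  api.myserver.com
        127.0.0.1  www.myserver.com

        # nginx.conf: route each hostname to its local development port
        server {
            listen 80;
            server_name api.myserver.com;
            location / { proxy_pass http://127.0.0.1:8000; proxy_set_header Host $host; }
        }
        server {
            listen 80;
            server_name www.myserver.com;
            location / { proxy_pass http://127.0.0.1:8008; proxy_set_header Host $host; }
        }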


  • Is it bad to have a very full hard drive on a high traffic database server?

    - by MikeN
    Running an Ubuntu server with MySQL as a high-traffic production database server. Nothing else is running on the machine except the MySQL instance. We store daily database backups on the DB server. Is there any performance hit, or any other reason, why we should keep the hard disk relatively empty? If the disk is filled to 86%+ with the database and all of the backups, does it hurt performance at all? In other words, would the DB server running at 86-90%+ full capacity perform any worse than the same server running with only a 10% full disk? The total disk size on the server is over 1 TB, so even 10% of the disk should be enough for basic OS swapping and such.


  • Best design for a data server serving small pictures (~40 KB)

    - by Nicolas Manzini
    I'm designing the server structure for my application in case things go well. I have one DB server connected to multiple servers that process connections, all with lots of RAM and fast processors. (I'm still looking for a way to use multithreading, because right now it's plain Apache + PHP, so lots of RAM is needed.) Upon receiving an answer from those servers, the client can then connect to another server to retrieve pictures, using the address it previously got from the DB. Is it a good idea to have one picture server (say, nginx with an SSD disk) sending all the pictures to everybody? Or should I have multiple servers accessing a shared SSD drive, or multiple disks updating each other? Also, should I put a lot of RAM in the picture server? Probably no picture will be much more popular than another.
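
    A hedged sketch of the single-image-server variant (domain and paths are placeholders): nginx serving the small static files from the SSD with long-lived cache headers, so repeat fetches are absorbed by client caches rather than server RAM:

        server {
            listen 80;
            server_name img.example.com;    # placeholder domain
            root /var/www/pictures;         # SSD-backed directory
            location / {
                expires 30d;                # let clients cache small images
                add_header Cache-Control "public";
                try_files $uri =404;
            }
        }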


  • Configuring IIS site to use HTTPS

    - by James
    I am working on a REST API which I have currently deployed on a Windows XP Professional SP2 development machine running IIS 5.1. The site is currently hosted on port 81 and accessed via HTTP. I would now like to configure the site to stop using HTTP and use HTTPS only. I have generated a self-signed certificate using the SelfSSL.exe tool from the IIS 6.0 Resource Kit Tools, and set the Common Name to the IP of my server (as it's a local development machine, it has no domain name). I have also already configured the site to use SSL, using the "How To Set Up an HTTPS Service in IIS" tutorial as my guide. However, whenever I try to access a resource in the API via HTTPS, I get a 404. Any ideas?


  • Filling a bound form with information from another table while creating a new record

    - by amir shadaab
    I have an Excel sheet with information about each employee, and I get a new, updated spreadsheet every month. I have to create a database managing cases related to the employees. I already have the database, and the bound form created for the cases, which also contains employee-info fields. What I am trying to do is type only the employee ID into the form, have the form look the employee up in the spreadsheet (which can be a table in the cases DB) and populate the other fields, and have that information go into the cases DB. Can this be done?
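
    A hedged sketch of one common Access pattern (the table, field, and control names are hypothetical): in the After Update event of the employee-ID control, pull the matching values from the employee table with DLookup:

        ' AfterUpdate event of the EmpID text box on the bound cases form
        ' tblEmployees and the field names are placeholders
        Private Sub EmpID_AfterUpdate()
            Me.EmpName = DLookup("EmpName", "tblEmployees", "EmpID = " & Me.EmpID)
            Me.Department = DLookup("Department", "tblEmployees", "EmpID = " & Me.EmpID)
        End Sub

    Linking (rather than importing) the monthly spreadsheet as a table would let the lookups track each new file without re-importing.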


  • When using RAID10 + BBWC why is it better to separate PostgreSQL data files from OS and transaction logs than to keep them all on the same array?

    - by Vlad
    I've seen the advice everywhere (including here and here): keep your OS partition, DB data files, and DB transaction logs on separate discs/arrays. The general recommendation is RAID1 for the OS, RAID10 for data (or RAID5 if the load is very read-biased), and RAID1 for transaction logs. However, considering that you will need at least 6 or 8 drives to build this setup, wouldn't a single RAID10 over 6-8 drives with BBWC perform better? What if the drives are SSDs? I'm talking here about internal server drives, not a SAN.


  • MySQL simple replication problem: 'show master status' produces 'Empty set'?

    - by simon
    I've been setting up MySQL master replication (on Debian 6.0.1), following these instructions faithfully: http://www.neocodesoftware.com/replication/ I've got as far as:

        mysql> show master status;

    but this unfortunately produces the following, rather than any useful output:

        Empty set (0.00 sec)

    The error log at /var/log/mysql.err is just an empty file, so that's not giving me any clues. Any ideas? This is what I have put in /etc/mysql/my.cnf on one server (amended appropriately for the other server):

        server-id                = 1
        replicate-same-server-id = 0
        auto-increment-increment = 2
        auto-increment-offset    = 1
        master-host              = 10.0.0.3
        master-user              = <myusername>
        master-password          = <mypass>
        master-connect-retry     = 60
        replicate-do-db          = fruit
        log-bin                  = /var/log/mysql-replication.log
        binlog-do-db             = fruit

    And I have set up users, and can connect from MySQL on server A to the database on server B using the username/password/IP address above.
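
    Two hedged checks that often explain an empty SHOW MASTER STATUS: binary logging may not actually be enabled in the running instance, typically because mysqld was not restarted after log-bin was added:

        -- Inside mysql: confirm binary logging is active on the running server
        SHOW VARIABLES LIKE 'log_bin';
        -- If this reports OFF, restart mysqld after adding log-bin, and check
        -- that the mysql user can write to /var/log/mysql-replication.*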


  • PHP Zend Hash Vulnerability Exploitation Vector [closed]

    - by Resurrected Laplacian
    Possible duplicate: CVE-2007-5416 PHP Zend Hash Vulnerability Exploitation Vector (Drupal)

    According to exploit-db (http://www.exploit-db.com/exploits/4510/), the example is:

        http://www.example.com/drupal/?_menu[callbacks][1][callback]=drupal_eval&_menu[items][][type]=-1&-312030023=1&q=1/

    What are "[callbacks]", "[1]", and the rest of these parameters? What should go into them? Can anyone present a realistic possible example? I wasn't asking for a real website; I was asking for a possible example - that is, what the address would look like and what belongs in each of these parameters, as the question says.


  • Issue with aborted MySQL connections (error code: 4)

    - by arikfr
    Some of the connections between my application server (Ubuntu, Apache, PHP) and my DB server (Ubuntu, MySQL) are failing with error code 4. According to the documentation, error code 4 is:

        OS error code 4: Interrupted system call

    At first I thought that maybe the issue was that the DB server has too many connections and fails because there are too many open files. But that seems not to be the case, because:

    - "Too many open files" has a different error code (24).
    - I've checked, and during peak time the server had 497 files open (checked using the lsof command), while the maximum is 1024.
    - The TCP settings were already checked (see prior question).

    Any ideas what this can be, or what I should check?
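
    For mapping error numbers to messages, the perror utility that ships with MySQL is handy (a small sketch; the inline output shown is approximate):

        # Translate OS/MySQL error codes to their messages
        perror 4     # OS error code   4:  Interrupted system call
        perror 24    # OS error code  24:  Too many open files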


  • Capture the build number for a remote-triggered Hudson job?

    - by EMiller
    I have a very simple in-house web app from which certain Hudson builds (on another server) can be triggered remotely. I have no problem triggering the builds, but I don't know how to capture the associated build number for later reference. I'm using the buildWithParameters trigger, and the actual result of that call is just a mess of HTML; I don't believe it gives me back the build number. I started down the path of pulling the whole build list for the job (via the API) and then attempting to reconcile that list against my records, but that's much more complicated than I'd like it to be. I also considered sleeping for a few seconds after launching the job and then grabbing the latestBuild from the Hudson API, but I'm sure that's going to go wrong at some point (someone will fire off two jobs quickly, and I'll get the association wrong).
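
    One hedged workaround, a common Hudson/Jenkins pattern rather than anything the API guarantees: pass your own unique token as a build parameter, then scan the job's recent builds via the JSON API for the one whose parameters contain that token. The job name and parameter name below are placeholders:

        # Trigger with a unique correlation token
        curl "http://hudson/job/myjob/buildWithParameters?CORRELATION_ID=req-12345"
        # Later, find the build whose parameters include that token
        curl "http://hudson/job/myjob/api/json?tree=builds[number,actions[parameters[name,value]]]"

    Because the token travels with the build, the association survives two jobs being fired off in quick succession.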


  • Server froze and was restarted quickly, so how do I find what went wrong?

    - by Charlie
    I have a SQL Server DB running on Windows Server 2008 (VMware). Yesterday I could not RDP to it, so I ended some RDP sessions which were left logged in. This seemed to solve the problem. However, last night I learned that the DB was inaccessible and unresponsive to customers. My colleague checked the server, but again was unable to create an RDP connection. He then restarted the server, and since then it has been fine. Looking at the CPU readings of the server, it spiked to 100% before the original RDP problem. After I ended the extra sessions it dropped back down to normal levels; however, before the time of the customer complaint it had risen to 100% again, before the server had to be restarted. Is there any way I can investigate which processes may have caused the problem in the first place? Would there be some kind of memory dump from when it was restarted? I would prefer to find out what is wrong now instead of waiting until it happens again.
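
    A couple of hedged starting points for a post-mortem on a Windows Server 2008 box (built-in tools; nothing here is specific to this server):

        rem Pull the most recent System event log entries from around the freeze
        wevtutil qe System /c:50 /rd:true /f:text
        rem For next time: a Performance Monitor data collector set logging
        rem Process(*)\% Processor Time will name the runaway process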


  • Turn off write barriers on ext4 while the FS is mounted

    - by user462982
    I am doing some IO-intensive DB imports that have been running for several days now, and the IO performance has dropped tremendously over time. The DB data files (log files) are on an ext4-formatted logical volume which is mounted with default options (I did not specify anything special in fstab). Since I just learned that ext4 enables write barriers by default:

    Q: Is there some way to disable write barriers online (i.e. while the file system is in use)? I cannot interrupt the import and don't want to restart it. I am aware that write barriers might not be the only thing impeding performance, and that it is a bad idea to have write barriers disabled on journalling file systems if data safety is important (e.g. on a production system).
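
    A hedged sketch of the usual online approach (the mount point is a placeholder, and a remount of a busy volume is not guaranteed to succeed):

        # Remount in place with barriers disabled (ext4 accepts barrier=0/1)
        mount -o remount,barrier=0 /var/lib/dbdata
        # Verify the active mount options
        grep dbdata /proc/mounts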


  • PostgreSQL, update existing rows with pg_restore

    - by woky
    Hello. I need to sync two PostgreSQL databases (some tables from a development DB to a production DB) sometimes, so I came up with this script:

        [...]
        pg_dump -a -F tar -t table1 -t table2 -U user1 dbname1 | \
            pg_restore -a -U user2 -d dbname2
        [...]

    The problem is that this works just for newly added rows. When I edit a non-PK column, I get a constraint error and the row isn't updated. For each dumped row I need to check if it exists in the destination database (by PK) and, if so, delete it before the INSERT/COPY. Thanks for your advice. (Previously posted on stackoverflow.com, but IMHO this is a better place for this question.)
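
    One hedged variant that sidesteps row-by-row conflict handling, safe only if the destination tables can be wholly replaced: empty the targets first, then replay the data-only dump:

        # Clear the destination tables (CASCADE also empties referencing tables),
        # then load the fresh rows
        psql -U user2 -d dbname2 -c 'TRUNCATE table1, table2 CASCADE;'
        pg_dump -a -F tar -t table1 -t table2 -U user1 dbname1 | \
            pg_restore -a -U user2 -d dbname2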


  • Do best-practices say to restrict the usage of /var to sudoers?

    - by NewAlexandria
    I wrote a package and would like to use /var to persist some data. The data I'm storing would perhaps even be thought of as an addition to /var/db. The pattern I observe is that files in /var/db, and its surrounds, are owned by root. The primary (intended) use of the package is filtering cron jobs - meaning you would need permissions to edit the crontab. Should I presume a sudo install of the package? Should I have the package gracefully degrade to a /usr subdir, and if so, which one? If I 'opinionate' that any non-sudo install requires a configrc (with paths), where should the package look (presuming a shared-host environment) for that config file? Incidentally, this package is a Ruby gem, and you can find it here.
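
    A hedged Ruby sketch of the degrade-gracefully idea (the paths and environment variable are illustrative choices, not a convention the gem defines):

        require 'fileutils'

        # Prefer the system location when it is writable (root/sudo install),
        # otherwise fall back to a per-user directory. MYPKG_DATA_DIR is a
        # hypothetical override for shared-host environments.
        def data_dir
          dir = [
            ENV['MYPKG_DATA_DIR'],
            '/var/db/mypkg',
            File.join(Dir.home, '.mypkg')
          ].compact.find { |d| File.writable?(d) || File.writable?(File.dirname(d)) }
          FileUtils.mkdir_p(dir) if dir
          dir
        end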


  • Apache mod_wsgi elegant clustering method

    - by Dr I
    I'm currently trying to build a scalable infrastructure for my Python web servers. Actually, I'm trying to find the most elegant way to build a scalable cluster to host all my Python web services. For now, my setup looks like this:

    - 1 x PuppetMaster to deploy my servers.
    - 2 x Apache reverse-proxy front-end servers.
    - 1 x Apache httpd server which hosts the Python WSGI applications, bound using mod_wsgi.
    - 4 x MongoDB clustered servers.

    Everything is OK concerning the reverse proxies and the DB back end; I'm able to easily add a new reverse proxy and a new DB node. My problem is the Python web server. I thought about just provisioning a new node with exactly the same configuration and an rsync replication between the two nodes, but that's not really useful in terms of deployment for my developers, etc. So if you have a solution which is as efficient and elegant as a Tomcat cluster, I'll be really happy to hear it ;-)
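
    A hedged sketch of one common pattern on the Apache side (node names are placeholders): let the existing reverse proxies balance across interchangeable mod_wsgi nodes via mod_proxy_balancer, so adding capacity is just provisioning another identical node with Puppet:

        # On each reverse proxy
        <Proxy balancer://wsgi-cluster>
            BalancerMember http://wsgi-node1.internal:80
            BalancerMember http://wsgi-node2.internal:80
        </Proxy>
        ProxyPass        / balancer://wsgi-cluster/
        ProxyPassReverse / balancer://wsgi-cluster/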


  • How do quotes/strings work in PowerShell?

    - by Casey
    I have a command line that works in the regular old Windows command shell, but somehow gets misinterpreted in PowerShell (I'm fairly new to PowerShell):

        sqlcmd -S .\SQLEXPRESS -i "f:\SQLBackups\ExpressMaint.sql" -v DB="ksuite" -v OPTYPE="DB" -v BACKUPFOLDER="f:\SQLBackups" -v REPORTFOLDER="f:\SQLBackups\Reports" -v DBRETAINUNIT="days" -v DBRETAINVAL="7"

    PowerShell seems to be stripping the drive letters out of the arguments that require paths. For example, I get the following when I attempt to run the above command in PowerShell:

        Sqlcmd: ':\SQLBackups': Invalid argument. Enter '-?' for help.

    Well, sure it's invalid without the drive letter. I have tried variations on double-quoting it, escaping it, etc., but can't get it to work. What am I missing that PowerShell does differently?
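
    Two hedged ways this is commonly handled (shown with a shortened argument list; the stop-parsing token requires PowerShell 3.0+):

        # Single-quote each name=value pair so PowerShell passes it through verbatim
        sqlcmd -S .\SQLEXPRESS -i "f:\SQLBackups\ExpressMaint.sql" -v 'DB="ksuite"' `
            -v 'BACKUPFOLDER="f:\SQLBackups"' -v 'REPORTFOLDER="f:\SQLBackups\Reports"'

        # Or use the stop-parsing token: everything after --% reaches sqlcmd untouched
        sqlcmd --% -S .\SQLEXPRESS -i "f:\SQLBackups\ExpressMaint.sql" -v DB="ksuite"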


  • ADSL improvement in recent years

    - by cleong
    Currently I have a 2 Mb/s ADSL connection; I signed up for the service more than five years ago. Has the technology improved much during that time, enough to allow for greater speed over the same wires? The building I live in is quite old and the lines aren't very good - they weren't able to support 6 Mb/s service back then. Now I notice that the lowest speed offered by my telco is 10 Mb/s; even that would be a serious improvement over what I have now. Here are the stats from the modem:

        Line Attenuation (Up/Down) [dB]: 10.5 / 15.5
        SN Margin (Up/Down) [dB]: 31.5 / 29.0

