Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.

Page 568/1252 | < Previous Page | 564 565 566 567 568 569 570 571 572 573 574 575  | Next Page >

  • Can you convert an address to a zip code in a spreadsheet?

    - by moe37x3
    Given a column of street addresses with city and state but no zip in a spreadsheet, I'd like to put a formula in a second column that yields the ZIP code. Do you know a way to do this? I'm dealing with US addresses, but answers pertaining to other countries are interesting, too. UPDATE: I guess I'm mostly hoping that there's a way to do this in Google Spreadsheets. I realize that you need to access a vast ZIP code database to do this, but it seems to me that such a database is already inside Google Maps. If I put an address in there without ZIP code, I get back an address with ZIP code. If Maps can do that lookup, maybe there's a way to make it happen in Spreadsheets, too.
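
    The lookup the asker is describing is exposed outside of Spreadsheets by Google's Geocoding API (an API key and its usage quotas apply). A command-line sketch of just the lookup, assuming curl and jq are available and YOUR_API_KEY is a placeholder:

      # Geocode an address and pull out the postal_code component:
      curl -sG "https://maps.googleapis.com/maps/api/geocode/json" \
           --data-urlencode "address=350 5th Ave, New York, NY" \
           --data-urlencode "key=YOUR_API_KEY" \
        | jq -r '.results[0].address_components[]
                 | select(.types[] == "postal_code") | .long_name'

    Doing the same from inside Google Spreadsheets would most likely need a small Apps Script custom function wrapping the same geocoder, since no built-in ZIP-lookup formula appears to exist.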

    Read the article

  • mysql server, open 'dead' connections

    - by Jeff
    My basic question is: what kind of impact does this have on the server? Say, for example, there is an older program in my company that opens connections to a MySQL database server at a high rate (basically everything users do in the application opens a server connection), but the application was not designed to dispose of the connections after they were created. A lot of the time the connections remain open but are never used again; open 'dead' connections, I guess you could say. They just stay connected until the server times them out, or until an admin removes the sleeping connections manually. I'm guessing this could be responsible for the "unable to connect (connection limit reached)" errors that we sometimes receive from other systems that try to access the MySQL database? Could this slow down the server as well? I'm curious what exactly all this could cause. Thanks!
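
    A quick way to see and reap such connections from the command line, sketched with the standard mysql client (MySQL 5.1+; the connection id and timeout values are placeholders to adapt):

      # List connections that have been idle for more than 10 minutes:
      mysql -e "SELECT id, user, host, time FROM information_schema.processlist WHERE command = 'Sleep' AND time > 600;"
      # Kill one by id, or lower the server-wide idle timeout so the
      # server reaps abandoned connections sooner:
      mysql -e "KILL 12345;"
      mysql -e "SET GLOBAL wait_timeout = 600; SET GLOBAL interactive_timeout = 600;"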

    Read the article

  • Connect to my virtualbox mysql server

    - by WebweaverD
    I wonder if someone here could help me; this is my setup: I am on a Windows 7 machine running an Ubuntu VirtualBox VM as my local web server and database server (MySQL). I have just got hold of a copy of Komodo, which I am running on my Windows machine and would like to hook up to my database. The fields it needs are hostname, port, socket, username and password. I know the username/password but am unsure what to put for the other fields. The Ubuntu VM has an IP of 192.168.0.10, which is mapped in my hosts file to swishprint.dev. I hope I have asked this in the right place; any help much appreciated.
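
    A sketch of the usual fix, assuming the VM's MySQL currently listens only on localhost (database, user and password names are placeholders):

      # On the Ubuntu VM: set bind-address = 0.0.0.0 in /etc/mysql/my.cnf
      # so MySQL listens on the VM's address, restart MySQL, then grant
      # the Windows host access:
      mysql -u root -p -e "GRANT ALL ON mydb.* TO 'me'@'192.168.0.%' IDENTIFIED BY 'secret';"

    Komodo's fields would then be hostname 192.168.0.10 (or swishprint.dev), port 3306 (the MySQL default), and socket left blank, since sockets only apply to connections local to the VM.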

    Read the article

  • Ultimate way to use Picasa in a home network

    - by luisfarzati
    I've tried a lot of approaches but still haven't found an effective solution. I have gigs of photos on a network drive (an Iomega Home Media Network Drive plugged into my wifi router). I'd like to do 2 things: Run a Picasa import of all the photos on the drive, having Picasa physically organize the files into a year/month folder structure. Ideally, the import target directory would be the same network drive; otherwise I'd have to move all the imported files from my local computer back to the drive myself. Share the Picasa database over the network by putting it on the network drive, so that I and other members of the family can point our Picasa installations at the shared database, see the photos, and make changes to it (tag faces, create logical albums, etc.). Is there any way to accomplish this? Or should I be looking for another photo management app, and if so, do you know of one? Thank you!

    Read the article

  • type mismatch errors querying data from spreadsheet

    - by user2984933
    In Excel 2010 I am trying to query data in another spreadsheet. The data range in the source sheet/file is named DATABASE. The Date field in the database is formatted as a short date, and when I query the date without criteria I get the results back in a different, European-style format (YYYY-MM-DD) with a time component. When I put a specific date in the criteria grid using the English format MM-DD-YYYY, I get results. But when I set up parameters that point at cells in the destination file for the dates, I get a type mismatch, even though the cells are short-date formatted. This worked perfectly in my Excel 2003. Now I am running Win 7 64-bit and Office 2010 Pro. Why does the query throw a mismatch with cell references as parameters but accept hard-coded dates in any date format? (MSQRY32.EXE)

    Read the article

  • Tunneling through SSH for 1521 port access?

    - by A T
    I am developing locally on my computer, using my own Apache server with PHP configured. My database, however, is located remotely on an Oracle 11g database server. We were also given a separate remote server for hosting our .html and .php files, but only FTP access has been provided there, and development is far too slow waiting for FTP to push every change. So I decided to develop locally but still use the remote DB server. Unfortunately that gives me a connection error, and I am not sure how or where to integrate tunnelling. Do I change the host in the oci_connect() call in my PHP file, or do I encapsulate my whole environment over SSH?
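
    A minimal sketch of the tunnelling approach, assuming SSH access to the database host (hostnames, credentials and service name are placeholders):

      # Forward local port 1521 through SSH to the Oracle listener:
      ssh -N -L 1521:localhost:1521 someuser@dbserver.example.com
      # PHP then connects to the local end of the tunnel, e.g.
      #   oci_connect('scott', 'tiger', 'localhost:1521/ORCL');
      # so only the host part of the connect string changes.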

    Read the article

  • Windows, Apache and MSSQL Authentication

    - by user1114330
    I have a create-database script written in Perl. I remember it working just fine on another machine. A couple of years later, on a Vista machine, I am trying to use it again and it keeps failing. The main difference is that now I am using Apache instead of IIS. In the script the IUSR account is granted permissions, as it needs to write to the database as part of another program. IIS has been uninstalled on this machine but the IUSR account still exists, and NT AUTHORITY\IUSR appears in the logins drop-down in MSSQL (2012). The machine is running Vista Home Edition. However, when running the script I get errors saying that NT AUTHORITY\IUSR cannot be found. I also tried COMPUTERNAME\IUSR just for the heck of it, and of course it was not found. I tried IUSR alone too, and for some reason the user still isn't "found". Any ideas?
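
    One hedged check from the command line: confirm whether a server-level login for the account actually exists, as opposed to merely showing in the Management Studio tree (the instance name is a placeholder):

      # List existing Windows logins, then (re)create one for IUSR if absent:
      sqlcmd -S .\SQLEXPRESS -Q "SELECT name FROM sys.server_principals WHERE type = 'U';"
      sqlcmd -S .\SQLEXPRESS -Q "CREATE LOGIN [NT AUTHORITY\IUSR] FROM WINDOWS;"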

    Read the article

  • Why does yum index get corrupted?

    - by TomOnTime
    Occasionally yum's cache gets corrupted and we see errors like this:

      error: db3 error(-30974) from dbenv->failchk: DB_RUNRECOVERY: Fatal error, run database recovery
      error: cannot open Packages index using db3 - (-30974)
      error: cannot open Packages database in /var/lib/rpm

    The workaround is rm -f /var/lib/rpm/__db*, after which the next yum command regenerates the data. My question is: what is likely to be causing this? Is there some common task that ignores locks, or has some other problem, that causes this? We have hundreds of CentOS machines and there is no pattern to which ones see the problem. It could be a "one in a million" issue, which at large scale is seen often. NOTE: I realize this is a very open-ended question, but if an answer finds the cause, I will go back and turn the question into something more canonical that directly relates to the specific issue.
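
    For reference, a sketch of the full recovery sequence implied by the workaround; rpm --rebuilddb rebuilds the Packages index from the installed package headers:

      rm -f /var/lib/rpm/__db*   # drop the stale Berkeley DB environment files
      rpm --rebuilddb            # rebuild the Packages database
      yum clean all              # clear yum's own caches for good measure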

    Read the article

  • Transaction log is full and does not free up space

    - by titanium
    Hi, I have a database in SQL Server 2005 whose transaction log keeps filling up. It is using snapshot replication. I noticed the transaction log is not freeing up space, so I created an additional transaction log file. Three days have passed and the first transaction log is still full. I performed a full database backup and a transaction log backup, then tried to shrink the transaction log, but the shrink failed. Can anyone advise why shrinking the transaction log is failing? Any other recommendations on how to resolve the problem?
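
    A first diagnostic, sketched with sqlcmd (database and logical file names are placeholders): SQL Server records the reason it cannot reuse each log, and with replication in the picture REPLICATION is a common answer.

      # Ask SQL Server why each log cannot be truncated/reused:
      sqlcmd -Q "SELECT name, log_reuse_wait_desc FROM sys.databases;"
      # Once the wait reason clears (e.g. after a log backup), shrink the
      # log file by its logical name to a target size in MB:
      sqlcmd -Q "USE MyDb; DBCC SHRINKFILE (MyDb_Log, 1024);"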

    Read the article

  • Remove MySQL ibdata1 without dumping and restoring existing proper databases

    - by Halfgaar
    My MySQL server contains two databases of 100+ GB each. One was created with innodb_file_per_table and one wasn't. The one that wasn't has been dumped, ready to be reloaded. However, the ibdata1 file is still huge and I don't have enough free space. The normal advice in this situation is to dump and remove each database, stop MySQL, remove ibdata1 and the transaction logs, and then reload the databases. My specific question is: can I leave the databases that were created with innodb_file_per_table alone? Or will they be destroyed when I remove ibdata1, even though all their files are separate? I can't afford to take this database offline to dump and reload it, and because it's already properly built with separate files per table, doing so would feel pretty pointless.
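
    One quick way to see which tables actually have their own files, with the data directory as a placeholder; tables created under innodb_file_per_table keep their data in a .ibd file beside the table's .frm:

      # Anything without a .ibd file here still lives inside ibdata1:
      ls /var/lib/mysql/mydatabase/*.ibd | head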

    Read the article

  • How can I speed up a MySQL restore from a dump file?

    - by Dave Forgac
    I am restoring a 30GB database from a mysqldump file to an empty database on a new server. When running the SQL from the dump file, the restore starts very quickly and then starts to get slower and slower. Individual inserts are now taking 15+ seconds. The tables are MyISAM. The server has no other active connections. SHOW PROCESSLIST; only shows the insert from the restore (and the show processlist itself). Does anyone have any ideas what could be causing the dramatic slowdown? Are there any MySQL variables that I can change to speed the restore while it is progressing?
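
    With MyISAM tables the usual lever is the index key buffer, since per-row index maintenance is what degrades as the tables grow; a sketch (the 256M figure is an assumption to size against your largest indexes):

      # Enlarge the MyISAM key buffer for the duration of the restore:
      mysql -e "SET GLOBAL key_buffer_size = 268435456;"   # 256M
      # mysqldump run with --opt/--disable-keys wraps each table in
      # ALTER TABLE ... DISABLE KEYS / ENABLE KEYS so indexes are built
      # once per table rather than per row; check the dump for them:
      grep -c 'DISABLE KEYS' dump.sql
      mysql dbname < dump.sql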

    Read the article

  • How to effectively secure a dedicated server for intranet use?

    - by Mark
    I need to secure a dedicated server for intranet use. The server is managed, so it will have software-based security, but what else should be considered for enterprise-level security? The intranet hosts an ECM (Alfresco) managing and storing sensitive documents. As the information is sensitive, we are trying to make it as secure as reasonably possible (a requirement under UK law). We plan to encrypt the data in the database, and connections to it will be encrypted with SSL. Should we also consider a hardware firewall, or a private LAN between the application server and the database server?

    Read the article

  • Modify Oracle SOA Suite 11g repository DB config

    - by Alfabravo
    Hello there! I don't know whether this question goes here or on Super User; anyhow, let's try. I have Oracle SOA Suite installed on one server, and the repository database is installed on another; both are virtual machines. Sadly, we have neither snapshots nor a UPS, and the lights went off yesterday... the repository database is now a bunch of unformed bits and we need to recreate it. Is there any way to reconfigure Oracle SOA Suite to use a brand-new repository, or should I painfully reinstall the whole thing? Thanks in advance.

    Read the article

  • CentOS not allowing remote MySQL connections

    - by nd8ad
    When I assign a user from a remote IP to connect to a database, the connection fails. It also fails with root, so something is wrong. The bind-address setting is off, and I have also tried disabling iptables; still no dice. Port 3306 is forwarded. I'm running CentOS 5.6 and using phpMyAdmin, but I have also tried assigning the user and creating a new database from the command line, and it still doesn't work. I've been googling and troubleshooting for hours now, no dice.
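
    A sketch of the basic checks from the shell (user, password and database names are placeholders):

      # Is mysqld listening on the network, or only on 127.0.0.1?
      netstat -tlnp | grep 3306
      # Does a user exist for the remote client's address (or a wildcard)?
      mysql -u root -p -e "SELECT user, host FROM mysql.user;"
      mysql -u root -p -e "GRANT ALL ON mydb.* TO 'app'@'%' IDENTIFIED BY 'secret';"
      # Then, from the remote machine:
      mysql -h your.server.ip -u app -p mydb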

    Read the article

  • my.ini optimization on Windows 2008 R2 VPS

    - by MKphpDev
    I have a VMware VPS running Windows Server 2008 R2 Enterprise that has performance issues with MySQL: every few minutes MySQL stalls for a few seconds and then responds to queries again. I'm sure my.ini needs to be optimized, but unfortunately I don't have much idea about my.ini configuration. What's running on the server: 2 small WordPress blogs, 1 vBulletin forum (approx. 1.2 GB database, and increasing), and a small database for some plug-ins (no more than 4000 records). Server info: Processor: Intel Xeon X5550 @ 2.67GHz; RAM: 6 GB (memory usage has never exceeded 2 GB); MySQL 5.5, PHP 5.3.10, IIS 7. Current my.ini:

      [mysqld]
      default-storage-engine=INNODB
      sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
      max_connections=250
      myisam_max_sort_file_size=20G
      innodb_additional_mem_pool_size=256M
      innodb_flush_log_at_trx_commit=1
      innodb_log_buffer_size=8M
      innodb_buffer_pool_size=512M
      innodb_log_file_size=128M
      innodb_thread_concurrency=10
      key_buffer_size = 512M
      myisam_sort_buffer_size = 8M
      join_buffer_size = 256K
      read_buffer_size = 256K
      sort_buffer_size = 256K
      table_cache = 4000
      thread_cache_size = 200
      wait_timeout = 30
      connect_timeout = 10
      tmp_table_size = 32M
      max_allowed_packet = 1M
      max_connect_errors = 10000
      query_cache_size = 16M
      query_cache_limit = 2M
      query_cache_type = 1
      query_cache_min_res_unit = 1024
      query_prealloc_size = 16384
      query_alloc_block_size = 16384
      skip-external-locking
      read_rnd_buffer_size=1M
      max_heap_table_size=16M
      thread_concurrency=8

      [mysqld_safe]
      open_files_limit = 8192

      [mysqldump]
      quick
      max_allowed_packet = 16M

      [myisamchk]
      key_buffer_size = 128M
      sort_buffer_size = 128M
      read_buffer = 2M
      write_buffer = 2M

    Any help with that, please?
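
    Two quick checks, sketched with the mysql client, can show whether the stalls line up with an undersized InnoDB buffer pool or with temp tables spilling to disk; with a 1.2 GB and growing vBulletin database and 6 GB of RAM, raising innodb_buffer_pool_size well above 512M would be the usual first candidate:

      # Non-zero and growing means InnoDB is waiting for free pages:
      mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free';"
      # Many disk temp tables relative to the total suggests raising
      # tmp_table_size and max_heap_table_size together:
      mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';"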

    Read the article

  • The simple "cron" that killed the cloud hosting option

    - by ron M.
    My SaaS application requires a nightly cron job that runs, analyzes a database, sends out e-mails and does some database maintenance work. This job cannot be triggered by user action. Almost every 'cloud' hosting solution balks at this, to the point of telling me "we cannot do this". Is this feature so exotic that cloud hosting providers simply don't care about it? Am I using the wrong lingo here? Should I use another concept? Is dedicated hosting, where I have root access, the only solution to this?
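
    A common workaround, sketched below with a placeholder URL and token: expose the nightly job as a token-protected URL in the application, then trigger it from any machine that does have cron. This keeps the job out of user request paths while sidestepping the host's missing cron:

      # crontab entry on any box you control: hit the maintenance URL at 2am
      0 2 * * * curl -s "https://app.example.com/cron/nightly?token=SECRET" > /dev/null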

    Read the article

  • reclaim space after moving indexes to file group

    - by Titan2782
    I have an extremely large database, and most of the space is index size. I moved several indexes to a different filegroup (just to experiment), but no matter what I do I cannot reduce the size of the MDF. I have tried shrinking the database, shrinking the files, and rebuilding the clustered index. What can I do to reclaim that space in the MDF? I've moved 15 GB worth of indexes to a different filegroup; is it even possible to reduce my MDF by that same 15 GB (or close to it)? SQL Server 2008 Enterprise.
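
    A hedged sketch of the shrink itself via sqlcmd (database and logical file names are placeholders); shrinking the data file by logical name with an explicit target forces SQL Server to move pages out of the freed region, which a plain shrink-database sometimes will not:

      # Find the logical file names and current sizes (in 8 KB pages):
      sqlcmd -Q "USE MyDb; SELECT name, size FROM sys.database_files;"
      # Shrink the data file toward a target size in MB:
      sqlcmd -Q "USE MyDb; DBCC SHRINKFILE (MyDb_Data, 50000);"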

    Read the article

  • Advised auditing method for MS SQL to track changes made to a specific table by a specific user?

    - by scape
    What is the best method for tracking changes, or logging the queries run against a specific table by a specific user, when that person is using Management Studio? I'm using 2008 R2 Express Edition and want to track a single user who logs in through Management Studio and runs queries to make changes manually. I want to see what query was run, and thus determine what was changed and how; I am not interested in restoring the information. I considered Change Tracking, but I've read that it is not ideal for auditing, and I am unsure how to read its data. I then considered the bulk-logged recovery option on the database, but then I would have to handle log files that may grow huge, as the database is used constantly by a web app. Is there a more concise method to do what I want?

    Read the article

  • Search desktop files using a list of keywords stored in a text file

    - by Tod1d
    I have a list of 1285 keywords (database object names) that I have compiled into a TXT file, one keyword per line. I would like to search a directory of files (most have a .aspx or .cs extension) using this list of keywords. My main goal is to find out which of the 1285 database objects are being referenced in these files. Can anyone recommend a tool that could accomplish this? Otherwise, I will just create my own. Thanks.
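
    If the directory is reachable from a shell with GNU grep (Cygwin or Git Bash on Windows would do), no custom tool is needed; a sketch with the project path as a placeholder:

      # Which files mention any of the keywords (-F fixed strings,
      # -f patterns-from-file, -l list matching file names only):
      grep -rlFf keywords.txt --include='*.aspx' --include='*.cs' /path/to/project
      # Which of the 1285 objects are referenced at all:
      while read kw; do
        grep -rqF "$kw" /path/to/project && echo "$kw"
      done < keywords.txt

    Windows's native findstr /s /m /l /g:keywords.txt *.aspx *.cs can do something similar, though it reports matching files rather than matching keywords.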

    Read the article

  • OpenVPN log connecting client IPs

    - by TossUser
    I'm looking for the best way to log, to either a text file or a database, the IP of every client that connects to my VPN server. By IP I mean the public WAN IP on the internet that they are connecting from. A hack would be to make the OpenVPN server log to a separate logfile and run logtail periodically to extract the necessary information. The database I want to build would look like:

      Client_Name | Client_IP   | Connection_date
      roadwarr1   | 72.84.99.11 | 03/04/14 - 22:44:00 Sat

    Please don't recommend the commercial OpenVPN Access Server; that's not a real solution here. If the disconnection date could be captured too, that would be even better, so I could see how long a client was connected and from where. Thank you
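
    OpenVPN can run a hook on every connection, which fits this exactly; a sketch with placeholder paths (common_name and trusted_ip are environment variables OpenVPN exports to the hook):

      # Server config additions:
      #   script-security 2
      #   client-connect /etc/openvpn/log-connect.sh
      #
      # /etc/openvpn/log-connect.sh:
      #!/bin/sh
      echo "$common_name | $trusted_ip | $(date '+%d/%m/%y - %H:%M:%S %a')" >> /var/log/openvpn-clients.log

    A matching client-disconnect script additionally sees a time_duration variable, which answers the "how long was the client connected" part.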

    Read the article

  • PHP/Oracle Connectivity randomly "drops out"

    - by user20555
    Hi! Here's the current situation: I have two web servers (call them A and B) and two database servers (C and D). The web servers are quite old and are running an early version of Apache 2 + PHP4, while the DB servers are running Oracle 9i and 10g respectively. We're experiencing a strange problem connecting (via PHP code) to one of the database servers, but only from web server B; web server A has no issues at all. Randomly, web server B will report a "Not connected to Oracle" error (ORA-03114). I can't see a real pattern to it, but refreshing a few times seems to fix the issue. Apparently there are no drop-outs on the network interface, which leads me to believe there's some misconfiguration between PHP/Apache and Oracle (which uses connection pooling). We're running SunOS 5.8... Any ideas?

    Read the article
