Search Results

  • Why does my DSL modem now need a reboot each time for my laptop to connect?

    - by msorens
    I have a rather peculiar home networking issue. For some time my home network was purring along fine: I could turn on either of my laptops and they would quickly find and connect to my DSL modem (and thence the internet). Several days ago I unplugged my DSL modem for the first time in months. Upon turning it back on and waiting for the boot to finish, the lights on the panel indicated the modem was fully operational, just as before. But that's not what happened. Not at all.

    Now when I turn on my Win7 laptop, the network icon in my system tray shows a small starburst; hovering over it, the tooltip states "Not connected; connections are available". Clicking it lists several nearby networks, including my own network showing a strong signal. If I click to connect, it attempts a connection but then I get a dialog stating "Windows was unable to connect to MyNet." Turning wireless off on my laptop and back on makes no difference. Running the network troubleshooter (which includes doing a repair on the network connection) makes no difference either. The only remedy is to reboot the DSL modem (i.e. unplug it, wait a few seconds, then plug it back in). As soon as it goes online, my laptop finds it and connects properly.

    To add one more twist to the story: this happened to me once before, several months ago. After a couple of weeks, the situation resolved itself(!). Everything started working properly again, due to nothing I did. Final note: this problem only affects the wireless connection to the DSL modem. My desktop computer, connected via hardline to the DSL modem, connects fine when I turn it on. Any thoughts on why this is happening or how to fix it?

  • What is the best way to recover from a MySQL replication failure?

    - by Itai Ganot
    Today the replication between our master MySQL DB server and the two replica servers dropped. I have a procedure here that was written a long time ago, and I'm not sure it's the fastest method to recover from this issue. I'd like to share the procedure with you, and I'd appreciate your thoughts on it; maybe you can even tell me how it can be done quicker. At the master:

        RESET MASTER;
        FLUSH TABLES WITH READ LOCK;
        SHOW MASTER STATUS;

    Copy the values from the result of the last command somewhere. Without closing the connection to the client (because that would release the read lock), issue the command to get a dump of the master:

        mysqldump mysq

    Now you can release the lock, even if the dump hasn't ended. To do that, run the following command in the MySQL client:

        UNLOCK TABLES;

    Now copy the dump file to the slave using scp or your preferred tool. At the slave, open a connection to MySQL and type:

        STOP SLAVE;

    Load the master's data dump with this console command:

        mysql -uroot -p < mysqldump.sql

    Sync the slave and master logs:

        RESET SLAVE;
        CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;

    where the values of the above fields are the ones you copied before. Finally type:

        START SLAVE;

    To check that everything is working again, type SHOW SLAVE STATUS; and you should see:

        Slave_IO_Running: Yes
        Slave_SQL_Running: Yes

    That's it! At the moment I'm at the stage of copying the DB from the master to the other two replica servers, and it has taken more than 6 hours to get to this point. Isn't that too slow? The servers are connected through a 1 Gb switch.
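
    One practical note on the locking: FLUSH TABLES WITH READ LOCK only holds while the session that issued it stays open, which makes the procedure above hard to script end-to-end. mysqldump can take care of the locking and record the binlog coordinates itself via --master-data; a hedged sketch of that variant (the --all-databases scope, dump filename, and slave hostname are assumptions, and --single-transaction is only a consistent choice for InnoDB tables):

        # on the master: --master-data=2 writes the CHANGE MASTER coordinates
        # into the top of the dump as a comment
        mysqldump -uroot -p --all-databases --master-data=2 --single-transaction > mysqldump.sql
        scp mysqldump.sql slave1:/tmp/    # slave1 is a placeholder

        # on the slave: load the dump, then use the coordinates recorded in
        # mysqldump.sql for the CHANGE MASTER TO statement
        mysql -uroot -p -e "STOP SLAVE;"
        mysql -uroot -p < /tmp/mysqldump.sql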

  • BIND zones and named files

    - by preethika
    I've installed BIND on my Windows Server 2003 machine. I've configured the named file in C:\named\etc\named.conf as:

        options {
            directory "c:\named\zones";
            allow-transfer { none; };
            recursion no;
        };
        zone "tisdns.com" IN {
            type master;
            file "db.tisdns.com.txt";
            allow-transfer { none; };
        };

    My zone file is configured in C:\named\zones\db.tisdns.com.txt as:

        $TTL 6h
        @    IN SOA ns1.tisdns.com. hostmaster.tisdns.com. (
                 2010010901 10800 3600 604800 86400 )
        @    NS ns1.tisdns.com.
        ns1  IN A 192.168.0.17
        mug  IN A 192.168.0.103

        key "rndc-key" {
            algorithm hmac-md5;
            secret "M0oW24WFQZhMu9wTq8qepw==";
        };
        controls {
            inet 127.0.0.1 port 53 allow { 127.0.0.1; } keys { "rndc-key"; };
        };

    In the above I've given the domain the name "tisdns". I want to create a new domain name in a different zone file. How can I create it?
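
    A sketch of what adding a second domain would look like, following the exact pattern already used above (the domain example.net, its file name, and the addresses are all hypothetical):

        // in named.conf: one more zone stanza alongside tisdns.com
        zone "example.net" IN {
            type master;
            file "db.example.net.txt";
            allow-transfer { none; };
        };

    Then create C:\named\zones\db.example.net.txt with its own SOA/NS/A records:

        $TTL 6h
        @    IN SOA ns1.example.net. hostmaster.example.net. (
                 2010010901 10800 3600 604800 86400 )
        @    NS ns1.example.net.
        ns1  IN A 192.168.0.18     ; hypothetical address
        www  IN A 192.168.0.104    ; hypothetical address

    and reload BIND (rndc reload, or restart the service) so the new zone is served.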

  • Apache runs in console but not as a service?

    - by danspants
    I have an Apache 2.2 server running Django. We have a network drive T: which we need constant access to within our Django app. When running Apache as a service, we cannot access this drive; as far as any Django code is concerned, the drive does not exist. If I add

        <Directory "t:/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>

    to the httpd.conf file, the service no longer runs, but I can start Apache as a console application and it works fine: Django can find the network drive and all is well. Why is there a difference between the console and the service? Should there be a difference? I have the service using my own logon, so in theory it should have the same access as I do. I'm keen to keep it running as a service, as it's far less obtrusive when I'm working on the server (unless there's a way to hide the console?). Any help would be most appreciated.
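
    One detail that may explain the console/service difference: drive letters like T: are mapped per interactive logon session, and a Windows service does not run in one, so mapped letters are generally not visible to it even under the same account. A hedged sketch of the same stanza using a UNC path instead (\\fileserver\share is a hypothetical share name; Apache accepts UNC paths written with forward slashes):

        <Directory "//fileserver/share/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            allow from all
        </Directory>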

  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via the command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following:

    - The official docs say to use the desktop application, which is not applicable in my situation. However, I installed the desktop app on my desktop machine and changed the default folder location, but I can't find where this change is stored in the ~/.dropbox/ directory, so I can't make the same change on the server.
    - This page (and several others) recommends a Python script to do the job. Looking at the script, it opens an SQLite database called ~/.dropbox/dropbox.db, which does not exist in my Dropbox install, leading me to believe the script is out of date.
    - This forum thread suggests manually inserting the required row in the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I'm presuming the information in that thread is also out of date for version 2.0.
    - I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary must detect the absence of a local X11 install: it starts a command-line daemon instead, which provides no means to change the option I need.

    I am currently using a symlink, as suggested in an answer, but this is a kludge; I would like to know the correct way to make the change. How do I change the Dropbox directory on a headless GNU/Linux server?

    Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.
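
    For completeness, the symlink workaround mentioned above as a concrete sketch (the target path /mnt/backup and the daemon's install path are assumptions):

        pkill -f dropbox                       # stop the daemon first
        mv ~/Dropbox /mnt/backup/Dropbox       # move the synced data
        ln -s /mnt/backup/Dropbox ~/Dropbox    # the daemon follows the symlink
        ~/.dropbox-dist/dropboxd &             # restart (install path is an assumption)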

  • Postfix SMTP-relay server against Gmail on CentOS 6.4

    - by Alex
    I'm currently trying to set up an SMTP relay server to Gmail with Postfix on a CentOS 6.4 machine, so I can send e-mails from my PHP scripts. I followed this tutorial, but I get this error output when trying to do a sendmail [email protected]. Output of tail -f /var/log/maillog:

        Apr 16 01:25:54 ext-server-dev01 postfix/cleanup[3646]: 86C2D3C05B0: message-id=<[email protected]>
        Apr 16 01:25:54 ext-server-dev01 postfix/qmgr[3643]: 86C2D3C05B0: from=<[email protected]>, size=297, nrcpt=1 (queue active)
        Apr 16 01:25:56 ext-server-dev01 postfix/smtp[3648]: 86C2D3C05B0: to=<[email protected]>, relay=smtp.gmail.com[173.194.79.108]:587, delay=4.8, delays=3.1/0.04/1.5/0.23, dsn=5.5.1, status=bounced (host smtp.gmail.com[173.194.79.108] said: 530-5.5.1 Authentication Required. Learn more at 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257 qh4sm3305629pac.8 - gsmtp (in reply to MAIL FROM command))

    Here is my main.cf configuration; I tried a number of different options but nothing seems to work:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = ipv4
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = host.local.domain
        myorigin = $myhostname
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relayhost = [smtp.gmail.com]:587
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_sasl_type = cyrus
        smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
        smtp_use_tls = yes
        smtpd_sasl_path = smtpd
        unknown_local_recipient_reject_code = 550

    In the /etc/postfix/sasl_passwd files (sasl_passwd and sasl_passwd.db) I have the following (with the real password replaced by "password"):

        [smtp.google.com]:587 [email protected]:password

    To create the sasl_passwd.db file, I ran this command:

        postmap hash:/etc/postfix/sasl_passwd

    Does anybody have an idea why I can't seem to send an e-mail from the server? Kind regards, Alex
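
    One detail stands out in the configuration as quoted: relayhost is [smtp.gmail.com]:587, but the sasl_passwd entry is keyed on [smtp.google.com]:587. Postfix looks up the credentials under the exact relayhost string, so a mismatched key means no credentials are offered, which matches the 530 Authentication Required bounce. A sketch of the matching entry (the password is a placeholder):

        # /etc/postfix/sasl_passwd -- the key must equal relayhost exactly
        [smtp.gmail.com]:587 [email protected]:password

        # then rebuild the hash map and reload
        postmap hash:/etc/postfix/sasl_passwd
        postfix reload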

  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit.

    I'm trying to connect remotely to a Postgres instance for the purpose of performing a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working. To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgresql is the db user. The file has one line with the following data:

        *:5432:*postgres:[mypassword]

    (I also tried explicit ip/dbname values, all asterisks, and every combination in between; I've also tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively.) From my client machine, I connect using:

        pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile]

    I'm presuming that I need to provide the '-w' switch in order to suppress the password prompt, at which point it should look in the AppData directory on the server. It just comes back with "connection to database failed: fe_sendauth: no password supplied". Any insights are appreciated. As a hacky workaround, if there were a way I could tell the Windows batch file on my client machine to inject the password at the Postgres prompt, that would work as well. Thanks.
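
    For reference, the documented pgpass format is hostname:port:database:username:password, i.e. five colon-separated fields, and on Windows libpq reads it from %APPDATA%\postgresql\pgpass.conf on the client machine (not the server). Note the entry quoted above is missing the colon between the database wildcard and the user name; a sketch of a matching line:

        *:5432:*:postgres:mypassword

    A batch-friendly alternative is setting the PGPASSWORD environment variable before invoking pg_dump.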

  • Slowdown upon router/modem setup change

    - by Ollie Saunders
    I’ve been using a Belkin FSD7632-4 modem router to connect to my TalkTalk-provided ADSL internet connection for some time and have been pretty happy with it. Recently, however, the connection has been failing, so I decided to get an ASUS RT-N16 instead, which is also a much more capable router generally. The ASUS RT-N16 doesn’t come with a built-in modem, so I purchased a Zoom modem as well. I’ve set them both up and am using them to post this message. But I’m a bit miffed to find that I get a significantly and consistently slower downstream rate from the new configuration than with the old Belkin:

        Belkin modem router:       downstream 3.45 Mbps, upstream 0.73 Mbps
        ASUS router + Zoom modem:  downstream 2.71 Mbps, upstream 0.66 Mbps

    Any ideas why this is? The really weird thing is that the Zoom supports ADSL2 and ADSL2+ but I don’t think the old Belkin does. At first I thought it might be due to the Zoom modem being limited to PPPoE instead of PPPoA, which my ISP supports, but then I tried using PPPoE with the Belkin and that still gave a high speed. I’m using VC-Mux encapsulation with both, with a VPI of 0 and a VCI of 38. I pulled this data off the Zoom:

        Mode: ADSL2    Line Coding: Trellis On    Status: No Defect    Link Power State: L0

                                                               Downstream   Upstream
        SNR Margin (dB):                                       12.3         11.8
        Attenuation (dB):                                      43.0         24.9
        Output Power (dBm):                                    12.9         0.0
        Attainable Rate (Kbps):                                3936         844
        Rate (Kbps):                                           3194         840
        MSGc (number of bytes in overhead channel message):    59           10
        B (number of bytes in Mux Data Frame):                 99           14
        M (number of Mux Data Frames in FEC Data Frame):       2            16
        T (Mux Data Frames over sync bytes):                   1            8
        R (number of check bytes in FEC Data Frame):           8            8
        S (ratio of FEC over PMD Data Frame length):           1.9833       9.0594
        L (number of bits in PMD Data Frame):                  839          219
        D (interleaver depth):                                 32           2
        Delay (msec):                                          15           4
        Super Frames:                                          15808        14078
        Super Frame Errors:                                    0            4294967232
        RS Words:                                              513778       111753
        RS Correctable Errors:                                 126          4294967238
        RS Uncorrectable Errors:                               0            N/A
        HEC Errors:                                            0            4294967279
        OCD Errors:                                            0            0
        LCD Errors:                                            0            0
        Total Cells:                                           1920175      237597
        Data Cells:                                            205993       392
        Bit Errors:                                            0            0
        Total ES:                                              0            0
        Total SES:                                             0            0
        Total UAS:                                             34           0

  • New SBS 2011 installation (not migration) in an existing 2008 R2 domain

    - by Tong Wang
    My current network setup has two servers: a Windows 2008 R2 machine with TMG 2010 as the edge firewall (the TMG server), and a second 2008 R2 machine with the DC, DNS and Hyper-V roles (the DCDNS server). I was trying to install SBS 2011 as a child partition on DCDNS. First I installed SBS 2011 in English and completed the migration successfully. However, later on I found that I can't change the display language in SBS 2011 once it's installed (and the clients require a different language), so I had to re-install SBS in a different language. It is during the re-installation that the problem came up: the migration can't be completed, with an error message stating "can't access the source server". I re-ran the migration preparation tool, but it didn't make any difference. I wonder if it's because the source server can only be "migrated" once.

    Since I only need to set up a handful of users and computers, I decided to do a new install of SBS and picked a different domain name. But I can't get the SBS machine to connect to the LAN: it can't ping other servers, and neither can other servers ping the SBS server. I've tried stopping the DC/DNS services on DCDNS and restarting SBS, but with no difference. Does anyone have an idea how to fix this problem?

  • SendMail not working in CentOS 6.4

    - by Kane
    I am trying to send e-mails from my CentOS 6.4 machine, but it does not work. My knowledge about servers is quite limited, so I hope someone can help me. Here is what I did.

    First I tried to send an email using the "mail" command, but it was not installed, so I installed it:

        # yum install mailx

    After that, I tried sending an email using the "mail" command, but it did not send anything. I checked on the internet and realized I needed a mail server like sendmail, so I installed it:

        # yum install sendmail sendmail-cf sendmail-doc sendmail-devel

    After that, I configured it following some tutorials. First, the sendmail.mc file:

        # vi /etc/mail/sendmail.mc

    I commented out the following line:

        BEFORE:  DAEMON_OPTIONS('Port=smtp, Name=MTA') dnl
        AFTER:   dnl DAEMON_OPTIONS('Port=smtp, Name=MTA') dnl

    and checked that these lines are correct:

        FEATURE('virtusertable', 'hash -o /etc/mail/virtusertable.db')dnl
        ...
        FEATURE(use_cw_file)dnl
        ...
        FEATURE('access_db', 'hash -T<TMPF> -o /etc/mail/access.db')dnl

    Then I regenerated sendmail.cf:

        # m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf

    opened port 25 by adding the proper line to the iptables file:

        # vi /etc/sysconfig/iptables
        -A INPUT -m state --state NEW -m tcp --dport 25 -j ACCEPT

    and restarted iptables and sendmail:

        # service iptables restart
        # service sendmail restart

    So I thought that would be OK, but when I tried:

        # mail '[email protected]'
        # Subject: test subject
        # test content
        # .

    I checked the mail log:

        # vi /var/log/maillog

    and this is what I found:

        Aug 14 17:36:24 dev-admin-test sendmail[20682]: r7D8RItS019578: to=<[email protected]>, ctladdr=<[email protected]> (0/0), delay=1+00:09:06, xdelay=00:00:00, mailer=esmtp, pri=2460500, relay=alt4.gmail-smtp-in.l.google.com., dsn=4.0.0, stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com.

    I do not understand why there is a connection timeout. Am I missing something? Can anyone help me, please? Thank you.
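
    A note on the timeout itself: the iptables rule above only opens incoming port 25, while delivery to alt4.gmail-smtp-in.l.google.com is an outgoing connection on port 25, which many hosting and residential providers block. One way to test outbound reachability directly (a diagnostic sketch, not a fix):

        # if outbound port 25 is open, this should print a "220 ... ESMTP" banner;
        # a hang followed by a timeout points at the provider blocking the port
        telnet alt4.gmail-smtp-in.l.google.com 25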

  • Best ASP.NET Hosting

    - by dotnetguts
    There are many ASP.NET web hosting companies that spend a lot on advertising and offer very cheap rates, as low as $5, but when it comes to support they are simply hopeless. Can everyone please share their experience with past hosting companies and suggest a good ASP.NET hosting company? Please consider the following requirements:

    1) ASP.NET 3.5 or 4.0 supported
    2) URL rewriter support
    3) GZip support (dynamic, through code)
    4) Initial setup support (if required)
    5) SQL Server 2005 or 2008
    6) Access to the SQL Server DB using SQL Management Studio
    7) An environment supporting backup and restore of the DB on my own, without involving the tech support team
    8) Full-text search support
    9) FTP support
    10) The ability to send at least 500 emails daily
    11) 99.9% uptime (no matter that all web hosts claim 99.9% uptime, it's not true)
    12) An alert email sent when they do any maintenance or during downtime
    13) A reasonable price

    In case you feel I am missing something, please add to the list. Can anyone suggest a good web hosting company based on the above factors?

  • CMD/ADB - Autorun script to search, copy, and paste a file from android system to flash drive

    - by Outride
    I've looked around and can't find anything that answers my question. This is my first question, so any tips or thoughts are welcome, as well as an answer :p

    As explained in the title, I want to create a script that launches, finds a file on an Android phone, copies it, and pastes it to a flash drive. As of right now it's a mix of multiple tutorials and trial and error, and I'm at the point of giving up. I have a flash drive loaded with three scripts (bold = name of file):

    file.bat

        @echo off
        :: variables
        /min
        SET odrive=%odrive:~0,2%
        set backupcmd=xcopy /s /c /d /e /h /i /r /y
        echo off
        %backupcmd% "C:\Users\Outride\Desktop\kikDatabase.db" "%drive%\all"
        @echo off
        cls

    invisible.vbs

        CreateObject("Wscript.Shell").Run """" & WScript.Arguments(0) & """", 0, False

    launch.bat

        wscript.exe \invisible.vbs file.bat

    So far I had to use Android Commander, manually go through the directory, find /data/data/kik.android/databases and copy kikDatabase.db to my desktop, then run this script. Yes, I'm trying to pull the database to copy all my email contacts. I use launch.bat, which then makes file.bat invisible thanks to the invisible.vbs script. What would I need to do now to have the file searched for and copied to the flash drive? Thanks in advance; I'll be glad to answer any questions if there are any :p Just remember that I'm not exactly a tech expert haha.

    EDIT: Cleared the junk from prior edits. New: I now have a .bat script to recognize which drive the USB is on and launch py_cmd (adb shell). This is the current script:

    pull.bat

        @echo off
        :: variables
        SET odrive=%odrive:~0,2%
        set launching=start "%drive%\Minimal ADB and Fastboot\py_cmd"
        echo off
        %launching%

    So how could I make the .bat, or a new script, type the following into the adb terminal: "adb pull /data/data/kik.android/databases/ %drive%\All\Database"? Please help! I've been racking my brain over this all night :3
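
    One simplification worth noting: adb pull is an ordinary command-line program, so rather than typing the command into an interactive adb shell window, the batch file can invoke adb.exe directly. A hedged sketch (the paths are taken from the question; pulling from /data/data typically requires a rooted phone, and the destination folder must already exist):

        @echo off
        :: %~d0 expands to the drive letter this .bat is running from (the flash drive)
        set drive=%~d0
        :: run adb.exe straight from the batch file instead of opening a shell
        "%drive%\Minimal ADB and Fastboot\adb.exe" pull /data/data/kik.android/databases/kikDatabase.db "%drive%\All\Database"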

  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Hi, recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts. I suspect that it is in fact only 32-bit SQL, and I'd like to verify this. Here are some details:

    - The OS is definitely 64-bit.
    - xp_msver shows Platform as NT INTEL X86.
    - SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)...
    - However, sqlservr.exe is not shown with '*32' in Task Manager. Does anyone know why this is the case, if it is in fact 32-bit as claimed? Despite this, it does seem to be running out of the x86 Program Files folder.

    If I do the same checks on a confirmed 64-bit installation, it gives back the expected 64-bit readings, which can only prove that the server in question is running 32-bit.

    That being the case, the question arises of how much memory this 32-bit install can use. Task Manager reports about 3.5GB memory usage for sqlservr.exe (the server has 16GB physical). I suspect that AWE has not been configured at all, and that the server will therefore be significantly under-utilised (remembering that the OS is 64-bit) if SQL is simply using a 32-bit address space. Is this assumption correct?

    I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform, but it is currently heavily in production, so this would be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague; I'm no SQL expert, just trying to get a handle on what's going on here.
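
    For reference, a sketch of the documented way to let 32-bit SQL Server 2005 address more memory via AWE (the 12288 MB cap is an illustrative value, and the service account also needs the "Lock pages in memory" Windows privilege; a service restart is required for AWE to take effect):

        -- enable AWE and cap the buffer pool; values here are illustrative
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'awe enabled', 1;
        EXEC sp_configure 'max server memory', 12288;  -- in MB
        RECONFIGURE;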

  • Routing for remote gateway over VPN in Vista/7 broken?

    - by Raymond
    Hi, the situation is as follows. A home computer running Windows 7 sets up a VPN connection (L2TP + IPsec, "use remote gateway" disabled) to the office; the home subnet is 192.168.64.x. The office has a Draytek Vigor 2920 router; its subnet is 192.168.32.x. What happens:

    - The VPN connection itself works fine.
    - I can ping any machine on the remote network.
    - When trying to open a webpage from a host on the remote network, the remote server logs the incoming request, but the browser hangs on "waiting for..." and eventually times out.

    I have observed this problem on Windows Vista and Windows 7; on Windows XP, however, there is no such problem. The only clue I have is that there is a difference in the routing between XP and Vista/7. In the output of "route print" on Windows XP (see www.latunyi.com/routing_xp.png), the gateway for the 192.168.32.x subnet is the IP address that the local computer has in the remote network. In the output of "route print" on Windows 7 and Vista (see www.latunyi.com/routing_win7.png), the gateway for the 192.168.32.x subnet is instead the IP address of the VPN router (.32.1). I don't know if that causes this trouble, but it seems a bit strange. Enabling "use default gateway on remote network" doesn't make a difference, and using the new "Disable class based route addition" option in Windows 7 only makes the route to the VPN router disappear. I am really puzzled here. I assume the VPN routing can't be broken in both Vista and Windows 7, and this should just work without manually adding routes. I hope someone has a solution for this problem :-). Thanks!
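
    As a diagnostic, the XP-style route can be reproduced by hand to see whether web traffic then flows; a hedged sketch from an elevated prompt on the Windows 7 machine (192.168.32.200 stands in for whatever address the VPN adapter was assigned on the remote network, visible in ipconfig):

        :: replace the class-based route with one via the local VPN adapter address
        route delete 192.168.32.0
        route add 192.168.32.0 MASK 255.255.255.0 192.168.32.200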

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano, and I'm getting relentless permission-denied errors when I attempt to run cap deploy:update. At least some of these errors seem to be due to missing directories that trigger a "Permission denied" error. (I'm doing setup as root just temporarily.)

        set :user, 'root'
        set :domain, 'domainname.com'
        set :application, 'appname'

        # adjust if you are using RVM, remove if you are not
        $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require "rvm/capistrano"
        set :rvm_ruby_string, '1.9.2'

        # file paths
        set :repository, "ssh://[email protected]/~/git/appname.git"
        set :deploy_to, "/var/rails/appname"

        # distribute your applications across servers (the instructions below put them
        # all on the same server, defined above as 'domain', adjust as necessary)
        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

        set :deploy_via, :remote_cache
        set :scm, 'git'
        set :branch, 'master'
        set :scm_verbose, true
        set :use_sudo, false
        set :rails_env, :production

        namespace :deploy do
          desc "cause Passenger to initiate a restart"
          task :restart do
            run "touch #{current_path}/tmp/restart.txt"
          end
          desc "reload the database with seed data"
          task :seed do
            run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}"
          end
        end

        after "deploy:update_code", :bundle_install
        desc "install the necessary prerequisites"
        task :bundle_install, :roles => :app do
          run "cd #{release_path} && bundle install"
        end

    Here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly

    I'm able to ssh without a password, so I'm not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result:

        ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'...
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied, please try again.
        ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password).
        ** [domainname.com :: err] fatal: The remote end hung up unexpectedly
        command finished

    Thanks a lot for your help with this.
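
    Reading the log, the denial happens while the deploy host itself clones the git repository over SSH, i.e. it is the server's key, not the workstation's, being rejected by the git host. One commonly used remedy in Capistrano 2 is SSH agent forwarding, so the clone on the deploy host reuses the workstation's key; a hedged sketch of the extra line (this assumes the workstation's key is authorized on the git host):

        # forward the local ssh agent to the deploy host so the git clone there
        # can authenticate with the workstation's key
        set :ssh_options, { :forward_agent => true }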

  • How to know whether MongoDB is running in 64-bit or 32-bit mode

    - by Jim Thio
    My programmer installed MongoDB, and somehow it doesn't work. When I run mongod I get:

        C:\mongod\bin>mongod
        mongod --help for help and startup options
        Sat Aug 11 22:57:50
        Sat Aug 11 22:57:50 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
        Sat Aug 11 22:57:50
        Sat Aug 11 22:57:50 [initandlisten] MongoDB starting : pid=3800 port=27017 dbpath=/data/db 32-bit host=haryantoi5
        Sat Aug 11 22:57:50 [initandlisten]
        Sat Aug 11 22:57:50 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        Sat Aug 11 22:57:50 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
        Sat Aug 11 22:57:50 [initandlisten] **       with --journal, the limit is lower
        Sat Aug 11 22:57:50 [initandlisten]
        Sat Aug 11 22:57:50 [initandlisten] db version v2.0.7-rc1, pdfile version 4.5
        Sat Aug 11 22:57:50 [initandlisten] git version: 9efe4cce272373b52b96de1309c1fbf0c984305f
        Sat Aug 11 22:57:50 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_42
        Sat Aug 11 22:57:50 [initandlisten] options: {}
        **************
        Unclean shutdown detected.
        Please visit http://dochub.mongodb.org/core/repair for recovery instructions.
        *************
        Sat Aug 11 22:57:50 [initandlisten] exception in initAndListen: 12596 old lock file, terminating
        Sat Aug 11 22:57:50 dbexit:
        Sat Aug 11 22:57:50 [initandlisten] shutdown: going to close listening sockets...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: going to flush diaglog...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: going to close sockets...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: waiting for fs preallocator...
        Sat Aug 11 22:57:50 [initandlisten] shutdown: closing all files...
        Sat Aug 11 22:57:50 [initandlisten] closeAllFiles() finished
        Sat Aug 11 22:57:50 dbexit: really exiting now

    It seems that mongod is running in 32-bit mode. I have a 64-bit computer and I want to run MongoDB in a 64-bit environment. How do I do so?
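
    The startup banner above already answers the detection half of the question ("32-bit host=..."). A running instance can also be asked directly from the mongo shell; a minimal sketch:

        // in the mongo shell: "bits" is 32 or 64 for the running server
        db.serverBuildInfo().bits

    Moving to 64-bit is then a matter of downloading and installing the separate 64-bit Windows build of MongoDB rather than the 32-bit one; the binaries are distinct downloads on mongodb.org.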

  • Server 2003 and SSL Certificates

    - by Keith Stokes
    I have a Windows 2000 domain with dozens of Windows 2000 servers and a few 2003 servers. Each server runs a custom app that talks to a third party using self-signed certificates. To help troubleshooting, we've created a custom test app. The 2000 servers are able to talk within seconds. The 2003 servers take anywhere from 10-30 seconds using a domain account, and much less, usually under 5 seconds, using a local account. The only exception to the local-account performance is a new account, which is slow initially and then faster. If you leave the test app open and reconnect repeatedly, it talks in seconds. If you leave it open for somewhere between 1 and 2 hours, it reverts back to the previous 10 seconds, so obviously something is caching. Installing the destination certificates in the local 2003 server store makes no difference. I've installed the certificates in AD, and that apparently makes domain accounts work in 9-12 seconds, versus the 30 seconds that was usual before. Manually clearing the certificate store on the 2003 server makes no difference. I'm at a loss as to where the certs might be cached and whether I'm using some sort of domain certificate store that's hiding from me.

  • What is the correct authentication mechanism when there are users inside and outside the domain?

    - by Gary Barrett
    We have a Windows 7 enterprise desktop data-entry app for mobile (laptop) users, with a local SQL Server 2008 R2 Express db that syncs data with a central SQL Server 2008 R2 db. Authentication is required before syncing the data. The existing group of users is part of the organisation's domain, so that's the normal scenario: they connect to the SQL Server directly. But there are plans for a second group of app users who belong to various partner organisations, so they are outside our domain and have their own separate domains and accounts. The aim is to deploy the desktop app to them so that they can periodically sync data to our SQL Server. What I am uncertain of:

    - Is it possible to authenticate users from another domain?
    - Can permissions be managed via Active Directory etc.?
    - Which authentication protocol should be used in this scenario: Windows, Forms, SQL, etc.?

    The IT people are requesting that users of the system be managed via Active Directory. Is it possible to manage the external domain users' access via Active Directory?

  • use ssh tunnel with phpmyadmin

    - by JohnMerlino
    I've been using an SSH tunnel to bypass the firewall of a remote MySQL server. On my Ubuntu 12.04 installation it works via the terminal, and it works when using a program called MySQL Workbench. However, that program freezes often, and I want to try phpMyAdmin as an alternative. The problem: I cannot connect to the remote server through the SSH tunnel with phpMyAdmin, though I can connect locally. These are the steps I've tried:

    1) Open a tunnel, listening on localhost:3307 and forwarding everything to xxx.xxx.xxx.xxx:3306 (I used 3307 because MySQL on my local machine uses the default port 3306):

        ssh -L 3307:localhost:3306 [email protected]

    So now I have the tunnel port open alongside my local MySQL installation's default port:

        $ netstat -tln
        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address           Foreign Address         State
        tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN
        tcp        0      0 127.0.0.1:3307          0.0.0.0:*               LISTEN
        tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN
        ...

    2) Now I can easily connect to the remote server via localhost using the terminal:

        $ mysql -u user.name -p -h 127.0.0.1 -P 3307

    Notice that I explicitly specify 3307 as the port, so traffic is forwarded to the remote server, and hence it logs me in to the remote server. Unfortunately, the localhost/phpmyadmin login interface doesn't allow you to specify a port. So I modified the config-db.php file and changed the $dbport variable to 3307, under the impression that the phpMyAdmin interface would then use port 3307:

        $ sudo vim /etc/phpmyadmin/config-db.php
        $dbport='3307';

    Then I restarted the MySQL server. Unfortunately it didn't work: when I use the remote credentials to log in, it gives me this error:

        #1045 Cannot log in to the MySQL server
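
    A note on the file being edited: on Debian/Ubuntu, /etc/phpmyadmin/config-db.php only describes phpMyAdmin's own control database, while the server the login screen connects to is defined in config.inc.php. A hedged sketch of a server stanza pointing at the tunnel (the verbose label is arbitrary, and the exact file layout may differ by phpMyAdmin version):

        /* in /etc/phpmyadmin/config.inc.php: add the tunnel as an extra server */
        $i++;
        $cfg['Servers'][$i]['verbose']   = 'remote via ssh tunnel';
        $cfg['Servers'][$i]['host']      = '127.0.0.1';
        $cfg['Servers'][$i]['port']      = '3307';
        $cfg['Servers'][$i]['auth_type'] = 'cookie';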

  • Can't get php+sqlite working

    - by facha
    Hi everyone, I've been struggling all morning to make PHP work with an SQLite database. Here is the PHP script I'm trying to execute:

        # less /var/www/html/test.php
        <?php
        $db = new PDO("sqlite:/var/www/test.sql");
        $sql = "insert into test (login,pass) values ('login','pass');";
        $db->exec($sql);
        ?>

    Here is how I've set up the test database:

        # sqlite3 /var/www/test.sql
        sqlite> create table test (login varchar,pass varchar);

        # chown apache:apache /var/www/test.sql
        # chmod 644 /var/www/test.sql

    Here is the part that drives me mad: when I execute the script from the command line (php test.php), everything goes well. The SQL is executed, and I can see a new row appear in the database. When I execute the same script from a browser, the SQL is not executed and I don't get a new row in the database. There are no errors in the Apache log file. Please help.
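
    One likely culprit, sketched under the assumption of a stock RHEL/CentOS-style Apache: SQLite needs write access to the directory containing the database file (it creates a journal file next to it when writing), and /var/www is normally root-owned, which fails in exactly this silent way when the writer is the apache user rather than root at the command line:

        # give the database its own apache-writable directory
        mkdir /var/www/db
        mv /var/www/test.sql /var/www/db/
        chown -R apache:apache /var/www/db
        # then point the PHP DSN at sqlite:/var/www/db/test.sql

        # on SELinux-enforcing systems the context may also need adjusting, e.g.:
        # chcon -R -t httpd_sys_rw_content_t /var/www/db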

  • Size of modules within initrd

    - by LiKao
    I am currently trying to manually replace the kernel within ubuntu-core on an embedded device with a custom kernel. However, when I update the initrd, it becomes much bigger. Here is what I did:

    1. Extract the initrd that was shipped with Ubuntu.
    2. Make a list of all modules within the old initrd.
    3. Get the same modules from the new module directory at /lib/modules/new_kernel_version.
    4. Add these modules to the initrd and remove the old ones.

    If I do this, my initrd becomes quite a bit bigger than the original one, so I checked the individual modules. I picked the btrfs.ko filesystem driver as an example. Comparing the two modules, I noticed the one I just put into the initrd was much bigger, which accounts for the difference in size:

        -rw-r--r-- 1 root root 999K Nov 14 15:06 btrfs.ko    (btrfs.ko within the shipped initrd)
        -rw-r--r-- 1 root root 7.2M Nov 14 15:08 btrfs.ko    (the new btrfs.ko)

    What is different between these two modules? Could this be caused by some faulty setting for the new kernel? When producing the kernel, I copied /proc/config.gz and used make oldconfig to update it, so all optimisations should be the same for both kernels. Or is there something else that is done to the modules before they are put into the initrd? Maybe there is even some better way to build a new initrd for the new kernel in Ubuntu altogether.

    Update: I also tested with an initrd that I created from scratch using the mkinitramfs command within Ubuntu, and it has the same size difference that I found in the initrd I manually updated.
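
    A size jump of this shape (roughly 1M to 7M for the same driver) is typically debug symbols: distribution kernels strip their modules before packaging, while a plain make modules_install does not. A hedged sketch of both ways to strip (the module path follows the question's /lib/modules/new_kernel_version layout):

        # check whether the big module still carries debug info ("not stripped")
        file /lib/modules/new_kernel_version/kernel/fs/btrfs/btrfs.ko

        # strip an individual module in place
        strip --strip-debug /lib/modules/new_kernel_version/kernel/fs/btrfs/btrfs.ko

        # or strip everything at install time when building the kernel
        make INSTALL_MOD_STRIP=1 modules_install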

  • Why is my new PC so slow at startup?

    - by rumtscho
    Bought a new PC this weekend, and it works really well. Only I have one big problem: startup time. Its BIOS needs 62 seconds to load, then from the Grub start to the password-entry screen it's another 26 seconds. I think this is a lot, because my old PC needs 34 seconds for the BIOS and another 8 seconds to the password screen. After I enter the password, the desktop is usable with practically no delay on both.

    The new PC is a Core i7-930, running a Lucid Lynx 64-bit from an Intel Postville SSD (no internal HDs). The old PC is a Pentium 4 Celeron (I forgot the clock speed) running a Lucid Lynx 32-bit from an ATA-100 hard drive. Neither PC is overclocked. The new one has boot sequence 1. DVD-ROM, 2. SSD (connected over SATA in AHCI mode), 3. removable drive. The old one boots from 1. DVD-ROM, 2. HDD, 3. floppy. Neither has a second OS installed. The new one has less software installed than the old one (I think), but the boot-time difference was noticeable even before I installed anything.

    As far as I know, the SSD alone should be enough to make a noticeable difference in boot time, and I thought that having a good mainboard in the new PC, as opposed to the basic office model in the old one, would also mean a faster-loading BIOS. If these assumptions are right, I guess I must have misconfigured something in the BIOS of the new PC. How should I configure it for a fast boot? It has an ASUS P6X58D board with an AMI BIOS; if you need the BIOS revision number, I could post that too.

  • DNS caching server config problem

    - by Alex
    I have a BIND caching-only DNS server set up and working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD-related. So my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me to think that my caching server is not forwarding properly.

    For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none of the objects in that domain). Any suggestions? Here is my named.conf file:

        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };
        zone "." in {
            type hint;
            file "db.cache";
        };
        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };
        //forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
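
    One hedged adjustment often suggested for zones like this: make the forward zone explicit about never falling back to iterative resolution (the global "forward first" allows fallback, and .local names cannot be resolved from the root servers), and confirm the caching box can actually reach the DC; note it sits on 192.168.1.x while the caching server listens on 192.168.0.14:

        zone "mydomain.local" {
            type forward;
            forward only;              // never fall back to the roots for .local
            forwarders { 192.168.1.21; };
        };

        // quick test from a client: dig @192.168.0.14 dc1.mydomain.local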

  • Compare-Object gives false differences

    - by Andy
    I have a problem with Compare-Object. My task is to get the difference between two directory snapshots made at different times. The first snapshot is taken like this:

        ls -recurse d:\dir | export-clixml dir-20100129.xml

    Later I take a second snapshot and load both of them:

        $b = (import-clixml dir-20100130.xml)
        $a = (import-clixml dir-20100129.xml)

    Next, I try to compare them with Compare-Object:

        diff $a $b

    What I get is, in some places, files that were added to $b since $a, but in other places files that were in both snapshots; and some files that were added to $b are not shown in the Compare-Object output at all. Puzzlingly, $b.count - $a.count is EXACTLY the same as (diff $a $b).count. Why is that?

    OK, Compare-Object has a -property parameter, so I tried using that:

        diff -property fullname $a $b

    And I get a whole mess of differences: it shows me ALL the files. For example, say $a contains:

        A\1.txt
        A\2.txt
        A\3.txt

    and $b contains:

        X\2.mp3
        X\3.mp3
        X\4.mp3
        A\1.txt
        A\2.txt
        A\3.txt

    The diff output is then something like:

        X\2.mp3  =>
        A\1.txt  <=
        X\3.mp3  =>
        A\2.txt  <=
        X\4.mp3  =>
        A\3.txt  <=
        A\1.txt  =>
        A\2.txt  =>
        A\3.txt  =>

    Weird. I think I don't understand something crucial about Compare-Object usage, and the manuals are scarce... Please help me to get the DIFFERENCE between two directory snapshots. Thanks in advance.

    UPDATE: I've saved the data as plain strings, like this:

        > import-clixml dir-20100129.xml | % { $_.fullname } | out-file -enc utf8 a.txt

    and the results are the same. Here are excerpts of both snapshots (the top 100-odd lines, a.txt and b.txt), the output of Compare-Object, and the output of UNIX diff (unified). All files are UTF-8: http://dl.dropbox.com/u/2873752/compare-object-problem.zip
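
    For the narrow task of "what was added between the snapshots", a plain set difference over the full paths sidesteps Compare-Object's ordering and sync-window behaviour entirely; a minimal sketch using the file names from the question:

        # paths present in the new snapshot but not the old one
        $old = Import-Clixml dir-20100129.xml | ForEach-Object { $_.FullName }
        $new = Import-Clixml dir-20100130.xml | ForEach-Object { $_.FullName }
        $new | Where-Object { $old -notcontains $_ }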
