Search Results

Search found 5011 results on 201 pages for 'grand master t'.


  • Automated git push attempt does not work - authentication issue

    - by at least three characters
    I'm trying to automate a very periodic git add/commit/push cycle using a shell script and cron under OS X 10.8.5. The script is as basic as one would expect it to be:

        cd /my/directory
        git add .
        git commit -m "a commit message with the date"
        git push -u origin master

    I've tried running it both as root and as a non-root user. When I do this manually, I get a dialog box from OS X requesting that I authenticate the operation. Running the script (either via cron or just with sh) ends up sending a message (via mail) to whichever user's cron executed the script, saying that it was unable to write a file in the .git directory because of a permissions issue (which is most likely why manual execution requires authentication). Is there any way to circumvent this issue, or to give the script permission to perform this operation without having me intervene each time? (One possible approach is sketched below.)

    Read the article
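
    One common way around the keychain prompt is to push over SSH with a passphrase-free key owned by the same user the cron job runs under; the permissions error also suggests the script should run as the repository's owner. A minimal sketch, where the repository path, key handling, and remote URL are placeholders:

        #!/bin/sh
        # One-time setup, done manually as the cron user:
        #   ssh-keygen -t rsa -N ""                             # passphrase-free key in ~/.ssh/id_rsa
        #   cd /my/directory
        #   git remote set-url origin git@example.com:user/repo.git
        # then add ~/.ssh/id_rsa.pub to the remote host or hosting service.

        cd /my/directory || exit 1
        git add .
        # Exit quietly when there is nothing to commit, so cron does not mail an error.
        git commit -m "automated commit $(date '+%Y-%m-%d %H:%M')" || exit 0
        git push origin master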

  • HDD is not recognized/initialized via USB, only via SATA - is a reformat through USB a bad idea?

    - by Wuschelbeutel Kartoffelhuhn
    I have a 4 TB Hitachi HDD that I purchased in Europe (I use it as a backup disk); I use Windows 7. When I connect it to a SATA port, it is recognized in Windows Explorer and causes no problems, even after transferring 3 TB at a time or after being on for days. When I connect it via a SATA-to-USB 2.0 adapter, it is also recognized, but when I transfer a large amount of data, it will intermittently stop being recognized by Windows Explorer and the transfer is cancelled. When I connect it via an external enclosure (which is technically a SATA-to-USB 3.0 adapter), it does not show up in Windows Explorer at all, but Disk Management shows the drive, albeit uninitialized (it prompts to format). I only got the external enclosure because I want to back up my files more conveniently (instead of having to open the computer case each time). Do you advise against reformatting/initializing the drive via the external enclosure? Can it screw things up in an irrevocable way (Master Boot Record, etc.)?

    Read the article

  • Cloning a Win 7 installation from an MBR to a GPT drive and making it bootable

    - by Nelluk
    I've seen threads on similar topics - such as this one - but the answers never seem to explain how to make the clone bootable. I have Win 7 64-bit installed on a PC on a 2 TB MBR volume. The motherboard is UEFI compatible. I just installed a secondary internal 3 TB drive, which will be partitioned as GPT. Is there a relatively easy way to clone my installation over to the new drive and have that drive be bootable? I have used EaseUS Partition Master to clone the C volume to the D volume, but the clone would not boot, and I assume the issue is that one disk is MBR and the other is GPT. Is there a process to do this?

    Read the article

  • Postfix dynamic smtp_helo_name

    - by William
    I have a mail server that relays e-mail for two different domains. I want the smtp_helo_name to be different based on the domain. I'm assuming there is no way to do this by checking the mail headers, so I was wondering if there was a way to do it by sending mail for one domain to one IP, and mail for the other to another. I tried modifying master.cf to do this:

        localhost:smtp inet n - n - - smtpd
        ip1:smtp inet n - n - - smtpd
        ip2:smtp inet n - n - - smtpd
          -o myhostnamee=example2.com

    and setting smtp_helo_name to $myhostname in main.cf. I also tried using -o smtp_helo_name instead; neither works. Any suggestions would be great. Thanks. (One possible approach is sketched below.)

    Read the article
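
    A hedged alternative to binding smtpd to separate IPs: smtp_helo_name belongs to the outgoing (smtp client) side, so a per-domain HELO name can be had by defining one smtp transport per domain and selecting it with sender_dependent_default_transport_maps (available in Postfix 2.7 and newer). A sketch with made-up domain and transport names:

        # master.cf: outbound transports, each with its own HELO name
        smtp-example1 unix - - n - - smtp
          -o smtp_helo_name=mail.example1.com
        smtp-example2 unix - - n - - smtp
          -o smtp_helo_name=mail.example2.com

        # main.cf: route outgoing mail by sender domain
        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

        # /etc/postfix/sender_transport
        @example1.com   smtp-example1:
        @example2.com   smtp-example2:

    followed by postmap /etc/postfix/sender_transport and postfix reload.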

  • What database is easy to maintain and manage in a cluster?

    - by Sanoj
    I'm looking for a database (DBMS) that is easy to scale out. I would like to have high availability, so I need a multi-master cluster where the data is replicated to two or more physical computers. I would also like to be able to start with one node (no replication) and then scale out to more nodes as needed, without a reinstallation or downtime. I would like a DBMS that is easy to maintain and manage: it should be easy to add nodes, remove nodes, take live backups and monitor the use of resources. It doesn't have to be a relational database system, so a NoSQL system is okay. And I would like a free version so I can test it at small scale and compare it with alternatives. What alternatives do I have?

    Read the article

  • Help replacing old Windows 2003 SBS DC with a Win2008 Standard Edition DC

    - by Chris
    Objective: trying to replace a Windows 2003 SBS domain controller with a Windows Server 2008 Standard Edition domain controller. What I did: I ran ADPREP, and all user accounts and OUs were successfully replicated to the 2008 server. I have also managed to transfer all the FSMO roles (operations masters: schema master, PDC emulator, etc.) to the Server 2008 machine. I have also run NETDOM QUERY FSMO, and it showed that all the roles had transferred to the 2008 server. Problem: when I try to demote the Windows 2003 SBS server using DCPROMO, the message is "No other Active Directory for this domain can be contacted". I also tried shutting down the 2003 server; users can log in to the domain, but they have trouble finding shared folders. Can someone help me find out what I did wrong? I need a little push in the right direction here. Thank you very much.

    Read the article

  • Help setting up NSD daemon (DNS server)

    - by Catalin
    While searching for a secure DNS server I came across the NSD project. I was really impressed by what seemed to me the best open-source option out there. One problem, though: their tutorial is really not beginner-friendly. I have basic DNS knowledge, but what's in there is out of my league. I need to host multiple sites on this CentOS server I've recently got my hands on, and they also need to receive email. Details: I have a master host and would love to set things up as described below:

        masterhost.com -> ns1.masterhost.com  mail.masterhost.com  www.masterhost.com
        addonhost.com  -> ns1.masterhost.com  mail.masterhost.com  www.addonhost.com

    and so on. Any help in setting up this DNS server, please? All answers and suggestions are welcome. Thank you in advance. (A sketch of a starting point follows below.)

    Read the article
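
    As a starting point, a hedged sketch of an authoritative zone for the master host (NSD only serves DNS; receiving mail still needs an MTA such as Postfix behind mail.masterhost.com). The addresses, TTLs, and serial below are placeholders, and an addon domain gets its own zone file and zone: block of the same shape:

        ; /etc/nsd/masterhost.com.zone
        $ORIGIN masterhost.com.
        $TTL 86400
        @       IN SOA  ns1.masterhost.com. hostmaster.masterhost.com. (
                        2024010101 ; serial
                        7200       ; refresh
                        3600       ; retry
                        1209600    ; expire
                        3600 )     ; negative TTL
                IN NS   ns1.masterhost.com.
                IN MX   10 mail.masterhost.com.
        ns1     IN A    192.0.2.1
        mail    IN A    192.0.2.1
        www     IN A    192.0.2.1

        # /etc/nsd/nsd.conf (zone declaration)
        zone:
            name: "masterhost.com"
            zonefile: "masterhost.com.zone"

    After editing, rebuild/reload NSD (nsdc rebuild && nsdc restart on NSD 3, nsd-control reconfig on NSD 4).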

  • Granting rights to the sa account using osql

    - by Jan Jongboom
    I'm installing SQL Server instances through a script, and after creating a certain instance, I cannot get the sa account enabled through osql. What I've tried:

        1. osql -S .\INSTANCENAME -E
           use master
           ALTER LOGIN sa ENABLE
           GO
        2. Using SSMS to enable the account (by logging in using Windows Auth., 'New query', and exactly the same query as in 1.)
        3. Suggestions in this issue

    No. 2 is actually working, and the account is enabled instantly. No. 1 is not working, not even with the suggestions provided in 3. I have restarted the SQL services after executing the commands in osql. Additional info: Windows Server 2003, Microsoft SQL Server 2005 Enterprise, no password policies apply to the account. (One thing to check is sketched below.)

    Read the article
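
    A hedged sketch of the same steps run non-interactively with -Q, which removes any doubt about batch termination. Note that an enabled sa login is only usable when the instance allows SQL Server (mixed mode) authentication and sa has a password; the password below is a placeholder, and mixed mode can be switched in SSMS server properties (or the instance's LoginMode registry value) followed by a service restart:

        osql -S .\INSTANCENAME -E -Q "ALTER LOGIN sa ENABLE"
        osql -S .\INSTANCENAME -E -Q "ALTER LOGIN sa WITH PASSWORD = 'PlaceholderP@ss1'"
        osql -S .\INSTANCENAME -E -Q "SELECT name, is_disabled FROM sys.server_principals WHERE name = 'sa'"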

  • Windows 7 EFI Partition Deleted

    - by Sam Clark
    I converted my Windows 7 GPT disk to dynamic, which resulted in it being unable to boot. So I used EaseUS Partition Master on my second Win 7 installation to convert the disk back to basic. Now I have one partition, which is my C: drive, and I can't figure out how to recover my EFI partition. I no longer see the "Windows" entry in my EFI boot menu, and the Windows 7 repair on the install DVD doesn't see my installation. Please let me know how to restore the EFI partition.

    Read the article

  • So how does one use rockmongo to connect to a mongo sharded setup with replicasets?

    - by Tom
    I'm trying to use rockmongo to connect to our cluster. Our setup is two shards, each consisting of a replica set. I connect to the mongos instance, and while rockmongo connects, I get an error when trying to list the databases:

        Execute failed: not master
        function (){ return db.getCollectionNames(); }

    This has something to do with the replica sets, and everybody points to:

        $MONGO["servers"][$i] = array("replicaSet" => "xxxxx");

    This is all fine, but I have two replica sets, and as far as I understand I should connect to the mongos instance and not directly to the members of a set. So how does one use rockmongo to connect to a mongo sharded setup with replica sets? (A check worth running is sketched below.)

    Read the article
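
    When the target really is a mongos, no replicaSet option should be needed in rockmongo's config; a "not master" error usually means the client ended up talking to a secondary mongod instead. A hedged check from the shell, with placeholder host and port, to confirm what the configured endpoint actually is:

        # Should print "mongos"; a plain shard member reports "mongod".
        mongo --host mongos.example.com --port 27017 --eval 'printjson(db.serverStatus().process)'

        # Listing databases through the router should succeed without "not master".
        mongo --host mongos.example.com --port 27017 --eval 'printjson(db.adminCommand({ listDatabases: 1 }))'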

  • Problems enabling BitLocker on Windows 7 enterprise

    - by ericl42
    I had BitLocker turned on originally when I loaded the computer, but had to turn it off to do some testing. I recently tried to turn it back on and I keep getting the following error:

        A required TPM measurement is missing. If there is a bootable CD or DVD in your computer, remove it, restart the computer, and turn on BitLocker again. If the problem persists, ensure the master boot record is up to date.

    I have verified that there is nothing in the DVD tray and that the laptop is not docked. I have also verified that TPM is running, and I have no problems enabling BitLocker on a flash drive. I think it's a problem with my MBR, since I am dual booting into Fedora as well, but I am not sure how to fix it (even though it did work a few months ago while I was also dual booting). Thank you for the help.

    Read the article

  • How to lower CPU usage for Ableton Live 8 in Mac OS X 10.6.8

    - by Travis Dtfsu Crum
    While running Ableton Live 8, after a project gets to a certain size the audio starts to get a bit choppy and distorted. My levels are fine and are not the cause of the lag and choppiness. I have the sample buffer set to 1024 and the High Quality button selected, which I need to hear the sound. I always lower these when I'm recording audio with my mic, but I need to have them turned up to be able to mix and master the song. Is there anything I can do to lower the CPU usage? It sucks because I can't even use my iZotope Ozone plug-ins because of their CPU usage, and they are AMAZING plug-ins that I would love to use.

    Read the article

  • Any good PostgreSQL client for Linux?

    - by senotrusov
    Stack Overflow pointed me to "belongs-on-serverfault" for this, so I'm crossposting. I am frustrated at not having a good Linux GUI administration and development tool for PostgreSQL. pgAdmin III is a buggy and unusable piece of... hmm, software, compared to the Windows-only PostgreSQL Maestro and EMS PostgreSQL Manager. phpPgAdmin does not look promising. EMS PostgreSQL Manager can work under Wine, but such a setup has a number of issues. Requirements are:

        - Table data editing and browsing for large tables (1M+ rows), being able to jump by FK or do some master-slave editing, GUI filtering, and so on.
        - ER diagrams with in-place schema editing.
        - Schema editing and browsing with all useful GUI support.
        - A log of schema changes to put into DB versioning (migration scripts).
        - A tabbed interface, to be able to work with a number of tables and SQL queries at once.

    And so on. Any ideas?

    Read the article

  • Serve up PC hard drive as USB mass storage

    - by sheepsimulator
    Is there a software package available that can serve up a hard drive internal to a PC and make it available over USB to other USB master (host) nodes as mass storage? Ex: take your C: or /dev/hda drive on a PC (let's call the computer PC-A), and run a driver program which makes your C: or /dev/hda drive available to external devices as USB mass storage. When you'd hook up another PC (PC-B) to PC-A via USB, it would detect a USB mass-storage device, which is C: or /dev/hda on PC-A. Is this even possible? EDIT: I know that there are other ways of making data on a drive available between two different computers (e.g. putting PC-A's HDD in a USB drive enclosure, or having PC-A make the HDD available via a network share), but I'd like to know if the method I describe above is even technically possible. (A sketch of the one setup where this works is below.)

    Read the article
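
    For completeness, a hedged sketch of the one setup where this works directly: ordinary PC USB ports are host-only, so PC-A would need a USB controller capable of device/OTG (gadget) mode, which is common on ARM boards but rare on desktop PCs. On a Linux machine with such a controller, the mass-storage gadget module can expose a block device or an image file (device names are placeholders):

        # On the machine exposing the disk (requires a device/OTG-capable USB controller).
        sudo modprobe g_mass_storage file=/dev/sdb removable=1    # or file=/path/to/disk.img
        # PC-B, plugged into the gadget port, then sees an ordinary USB mass-storage disk.

        # Stop exposing it:
        sudo modprobe -r g_mass_storage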

  • Buffalo wireless router

    - by Walter White
    Hi all, I have a Buffalo wireless router and it appears to work fine, for the most part. However, while printing out some debug information, I noticed that the Mode is reported as unknown/bug. In the DD-WRT configuration I have selected AP, which should correspond to Master. I was having a problem earlier with my ESSID being a bunch of weird characters; I re-entered it and restarted the router, and now the ESSID is correct. I tried re-entering the router's configuration, and it is still reported as Unknown/bug by iwlist wlan0 scanning. Any ideas? Walter

    Read the article

  • "Unable to create direct context rendering" error when running an OpenGL application

    - by Rodnower
    Hello, I am trying to run the Mesa gears example and I get the following error:

        freeglut (./gears): Unable to create direct context rendering for window 'Gears'
        This may hurt performance.

    The application runs successfully, but I guess that in the future I will have performance problems. I am running CentOS 5 Linux in VMware 7. Mesa's version is 6.5. The relevant output of lspci -v is:

        00:0f.0 VGA compatible controller: VMware SVGA II Adapter (prog-if 00 [VGA controller])
            Subsystem: VMware SVGA II Adapter
            Flags: bus master, medium devsel, latency 64, IRQ 9
            I/O ports at 10d0 [size=16]
            Memory at d0000000 (32-bit, non-prefetchable) [size=128M]
            Memory at d8000000 (32-bit, non-prefetchable) [size=8M]
            [virtual] Expansion ROM at 30000000 [disabled] [size=32K]
            Capabilities: [40] Vendor Specific Information

    Does anyone have an idea? Is there a VMware driver for CentOS? Thank you in advance. (Some checks are sketched below.)

    Read the article
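
    A hedged sketch of the usual checks: confirm what the GL stack actually reports, and install the VMware guest bits (VMware Tools supplies the guest drivers, and 3D acceleration must also be enabled in the VM's settings). Package names are typical for CentOS 5 and may differ:

        # What does the current stack say?
        glxinfo | grep -i "direct rendering"     # "Yes" means a direct context is available
        glxinfo | grep -i "renderer string"

        # Typical CentOS 5 packages: the VMware X driver and the Mesa GLX utilities.
        yum install xorg-x11-drv-vmware glx-utils

        # VMware Tools is installed from the Workstation menu (VM -> Install VMware Tools).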

  • Proper management of PGPool II

    - by Cathy
    Currently I have a site with one Postgres database server. It is just for a small number of users (fewer than ten), but it needs the maximum uptime possible. I would like some kind of automatic failover for the database. So I was thinking of something like: one server running PGPool II, one running Postgres as master, one running Postgres as slave. But then, if the machine running PGPool suddenly loses power (or dies, or whatever), that is a single point of failure and the whole thing goes down. Is there a solution, assuming that outsourcing this to someone else isn't possible? (One option is sketched below.)

    Read the article
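
    One hedged option, if a recent pgpool-II (3.2 or later) is available, is to run two pgpool nodes with the built-in watchdog and a shared virtual IP, so the surviving node takes over the address if the other dies. Hostnames and addresses below are placeholders, and a real watchdog setup needs more parameters (heartbeat, escalation) than this excerpt shows:

        # pgpool.conf excerpt on node A; node B mirrors it with the hostnames swapped.
        use_watchdog = on
        wd_hostname = 'pgpool-a.example.com'
        wd_port = 9000
        delegate_IP = '192.0.2.100'              # virtual IP that clients connect to
        other_pgpool_hostname0 = 'pgpool-b.example.com'
        other_pgpool_port0 = 9999
        other_wd_port0 = 9000

    A simpler variant of the same idea is two pgpool nodes with a virtual IP managed by keepalived or heartbeat.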

  • Groups issue on Ubuntu

    - by grobarTN
    Hello, I am a member of a couple of groups, let's say Master, Student, and Web. The problem is that, by default, whatever I create is created under the Student group. I need it to be created with the Web group. The www/ folder where I need to write files is already mode 770, but because my files pick up the Student group, I am not allowed to write to that folder. Is there any way to change the group that I create files under? If I execute groups it lists all my groups, so I am a member of the correct group; I just can't write to the folder. Anyone? (The usual fixes are sketched below.)

    Read the article
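
    A hedged sketch of the usual fixes: switch the active group for a session, change the account's primary group, or (often the cleanest for a shared web directory) set the setgid bit on the folder so new files inherit its group. The path and username are placeholders:

        # Work with Web as the active group for one shell or one command:
        newgrp Web
        sg Web -c "touch /var/www/site/newfile.html"

        # Make Web the primary group for all future logins:
        sudo usermod -g Web username

        # Or let everything created inside the directory inherit the Web group:
        sudo chgrp -R Web /var/www/site
        sudo chmod g+s /var/www/site    # setgid bit on the directory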

  • Security and Windows Login

    - by Mimisbrunnr
    I'm not entirely sure this is the right place for this question, but I cannot think of another, so here goes. In order to log in to the Windows machines at my office, one must press the almighty CTRL-ALT-DELETE combo first. Finding this very frustrating, I decided to look into why, and found claims from both my sysadmin and Microsoft stating that it's a security feature: because only Windows can read CTRL-ALT-DELETE, it helps to ensure that an automated program cannot log in. Now I'm not a master of the Windows operating system (as I generally use *nix), but I cannot believe that "only Windows can send that signal" bull. It just doesn't sit right. Is there a good reason for the CTRL-ALT-DELETE-to-login thing? Is it something I'm missing? Or is it another example of antiquated legacy security measures?

    Read the article

  • Dovecot Virtual Users and Users Domain Mapping

    - by Stojko
    I have successfully compiled, configured and run Dovecot with the virtual users feature. Here's part of my /etc/dovecot.conf configuration file:

        mail_location = maildir:/home/%d/%n/Maildir
        auth default {
          mechanisms = plain login
          userdb passwd-file {
            args = /home/%d/etc/passwd
          }
          passdb passwd-file {
            args = /home/%d/etc/shadow
          }
          socket listen {
            master {
              path = /var/run/dovecot/auth-worker
              mode = 0600
            }
          }
        }

    I faced one issue I can't resolve myself: is there any way to create a users-to-domains mapping and provide the username in mail_location? Examples:

        1. currently I have:   /home/domain.com/user/Maildir
        2. I'd like to have:   /home/USER/domain.com/user/Maildir

    Can I achieve this somehow? (One possible approach is sketched below.) Greets, Stojko

    Read the article
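
    A hedged sketch of one way to do it: have the passwd-file userdb return a per-user home directory (the sixth field of the passwd-file line) and build mail_location from %h instead of a fixed /home/%d/%n path. The username, password scheme, and paths below are made up:

        # /home/example.com/etc/passwd (fields: user:password:uid:gid:gecos:home:shell)
        # The 6th field becomes %h for that user:
        info:{PLAIN}secret:5000:5000::/home/sysuser/example.com/info::

        # /etc/dovecot.conf
        mail_location = maildir:%h/Maildir

    followed by a Dovecot reload/restart.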

  • Considering building a new system, but undecided about the chassis

    - by J.C. Bengtson
    I'm considering a new system build, and was thinking that this particular motherboard has the features I need and like: http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=667651&pagenumber=1&RSort=1&csid=ITD&recordsPerPage=5&body=REVIEWS#CustomerReviewsBlock. But I am unsure which model of chassis to pair it with. I'd strongly prefer something from Cooler Master, as I'm a fan of their products, but am having a hard time deciding, and also don't want to get into some odd situation where the board doesn't properly fit. I plan on having two optical drives (5.25"), two internal HDs (3.5"), and will likely go with an SLI setup of 2 or possibly even 3 cards, so I'd need a chassis that is roomy enough to accommodate all of that, as well as the motherboard itself. Based on the stock available at that same site, do you all have any suggestions? The larger, the better, as I hate having components crammed together. Your help is most appreciated!

    Read the article

  • Serving static assets via HTTP is really slow compared to sshfs (Apache 2/nginx)

    - by s1lv3r
    After migrating to a new VPS I had some users complaining about slow loading images on their sites. After creating some test files with dd, I realized that I can download all files via sshfs at full speed, while downloads over HTTP are painfully slow. The larger the file is and the longer the transfer takes, the slower the transfer speed gets. I thought I had a problem with Apache and just spent the whole evening replacing Apache 2 with nginx for static file serving - with no effect at all. There are no I/O wait states in top, tons of RAM free, no high CPU utilization, and hdparm shows decent I/O performance at all times. I just have no idea anymore what's happening on this server. This is a link to a demo file: http://master.dealux.de/file.tgz Does anybody have an idea what I can check? (A first measurement step is sketched below.)

    Read the article
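
    A hedged first step is to measure where the time goes from a client and compare the same file fetched over HTTP and over plain SSH; the URL is the demo file above and the scp path is a placeholder:

        # Per-phase timing and throughput of the HTTP download.
        curl -o /dev/null -s -w 'dns:%{time_namelookup}s connect:%{time_connect}s firstbyte:%{time_starttransfer}s total:%{time_total}s speed:%{speed_download} bytes/s\n' \
            http://master.dealux.de/file.tgz

        # Rule out TCP-level throttling that is independent of the web server.
        scp user@master.dealux.de:/var/www/file.tgz /tmp/

    If scp is consistently fast and HTTP consistently degrades as the transfer goes on, traffic shaping on port 80 at the VPS host is worth asking about.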

  • How to use the Binary Log file for Auditing and Replication in MySQL?

    - by Pranav
    How do I use the binary log file for auditing in MySQL? I want to track the changes in a DB using the binary log so that I can replicate these changes to another DB. Please do not give me hyperlinks to the MySQL website; please direct me to the solution. EDIT: I have looked at auditing options and created a script using triggers for that, but due to the Joomla DB structure it didn't work for me, so I have to move on to the binary log approach. Now I am stuck at the start, as I don't get the concept of making a server a master/slave. Can anybody guide me on how to actually initiate it via PHP? (The basic building blocks are sketched below.)

    Read the article
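
    A hedged sketch of the two building blocks: turn the binary log on, then use mysqlbinlog either to inspect the logged changes (auditing) or to replay them against another server (ad-hoc replication). Paths, dates, server IDs, and credentials are placeholders:

        # my.cnf on the server whose changes are captured (restart mysqld afterwards):
        #   [mysqld]
        #   log-bin   = /var/log/mysql/mysql-bin
        #   server-id = 1

        # Auditing: dump the statements logged in a time window.
        mysqlbinlog --start-datetime="2013-01-01 00:00:00" \
                    --stop-datetime="2013-01-02 00:00:00" \
                    /var/log/mysql/mysql-bin.000001 > audit.sql

        # Ad-hoc replication: replay the logged statements on another server.
        mysqlbinlog /var/log/mysql/mysql-bin.000001 | mysql -h other-db.example.com -u root -p

    Continuous replication uses the same log through the standard CHANGE MASTER TO / START SLAVE setup on the second server; a PHP script would typically just shell out to mysqlbinlog rather than parse the log itself.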

  • How to install PHP, Pear, PECL, and APC with Homebrew on Mac OS X?

    - by Andrew
    I'm trying to install APC for PHP 5.3 in the easiest way possible. I love Homebrew, so I started down that route. I was able to install PHP 5.3.6 with this command:

        brew install https://github.com/adamv/homebrew-alt/raw/master/duplicates/php.rb --with-mysql

    I think this is supposed to install PHP, PEAR, and PECL, and it seems to install these just fine. Now when I try to install APC:

        $ pecl install apc
        downloading APC-3.1.9.tgz ...
        Starting to download APC-3.1.9.tgz (155,540 bytes)
        .................................done: 155,540 bytes
        Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in PackageFile.php on line 305
        Warning: require_once(Archive/Tar.php): failed to open stream: No such file or directory in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305
        Fatal error: require_once(): Failed opening required 'Archive/Tar.php' (include_path='/usr/local/Cellar/php/5.3.6/lib/php') in /usr/local/Cellar/php/5.3.6/lib/php/PEAR/PackageFile.php on line 305

    How can I fix this? (A possible workaround is sketched below.)

    Read the article
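
    The missing file is PEAR's own Archive/Tar.php, so pecl's bundled PEAR code cannot unpack the downloaded package. A hedged sketch of the usual workaround; the paths are taken from the error message and may differ on another install:

        # Make sure PEAR's channel data and the Archive_Tar package are present.
        pear channel-update pear.php.net
        pear install --force Archive_Tar

        # Check where PEAR puts its classes and whether that directory is on the include_path.
        pear config-get php_dir
        php -i | grep include_path

        # If php_dir is missing from include_path, add it to php.ini, e.g.
        #   include_path = ".:/usr/local/Cellar/php/5.3.6/lib/php"
        # then retry:
        pecl install apc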

  • How to have supervisord follow the new unicorn process after a USR2 rolling restart?

    - by ybart
    I have configured supervisord to track my unicorn server process. When I send USR2 to the process, this performs a rolling restart. After this operation the old unicorn master has restarted and changed PID, which causes supervisord to lose track of the unicorn process and consider it EXITED. How can I get supervisord to follow the new unicorn process after this operation? Unicorn has a PID file available, but I have not found an option for this in the supervisord configuration. Another option would be to have supervisord send the USR2 signal itself, but I don't know how to do that, or whether it would prevent my problem from occurring. (One common compromise is sketched below.)

    Read the article
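
    supervisord can only track the PID it spawned, so once the unicorn master re-execs on USR2 the new master is invisible to it. A hedged sketch of the common compromise: run unicorn in the foreground under supervisord and use supervisorctl restarts instead of USR2 rolling restarts (program name, paths, and app details are placeholders):

        ; /etc/supervisor/conf.d/unicorn.conf
        [program:unicorn]
        directory=/srv/myapp/current
        command=bundle exec unicorn -c config/unicorn.rb -E production
        ; no -D flag: unicorn stays in the foreground so supervisord keeps the real PID
        autostart=true
        autorestart=true
        stopsignal=QUIT
        stopwaitsecs=60

    then supervisorctl reread && supervisorctl update, and restart with supervisorctl restart unicorn. The trade-off is losing zero-downtime USR2 restarts; keeping them generally means managing unicorn outside supervisord, e.g. via its own init script and PID file.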
