Search Results

Search found 4636 results on 186 pages for 'jedi master spooky'.

Page 50 of 186

  • DNS and name server on CentOS 6.3 64-bit cannot be pinged from outside

    - by user135855
    I have a problem with CentOS 6.3 64-bit. I want to set up my nameserver with BIND here. I am listing all my configuration: [root@izyon92 ~]# cat /etc/hosts -------------- 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 182.19.26.92 izyon92.zyonize1.com izyon92 [root@izyon92 ~]# cat /etc/sysconfig/network --------------------------------------------- NETWORKING=yes HOSTNAME=izyon92.zyonize1.com GATEWAY=182.19.26.89 [root@izyon92 ~]# cat /etc/resolv.conf -------------------------------------------- # Generated by NetworkManager search zyonize1.com nameserver 182.19.26.92 [root@izyon92 ~]# cat /etc/named.conf -------------------------------------------- // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // options { #listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { none; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { 182.19.26.92; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; [root@izyon92 ~]# cat /etc/named.rfc1912.zones -------------------------------------------------- // named.rfc1912.zones: // // Provided by Red Hat caching-nameserver package // // ISC BIND named zone configuration for zones recommended by // RFC 1912 section 4.1 : localhost TLDs and address zones // and http://www.ietf.org/internet-drafts/draft-ietf-dnsop-default-local-zones-02.txt // (c)2007 R W Franks // // See /usr/share/doc/bind*/sample/ for example named configuration files. // zone "localhost.localdomain" IN { type master; file "named.localhost"; allow-update { none; }; }; zone "localhost" IN { type master; file "named.localhost"; allow-update { none; }; }; zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" IN { type master; file "named.loopback"; allow-update { none; }; }; zone "1.0.0.127.in-addr.arpa" IN { type master; file "named.loopback"; allow-update { none; }; }; zone "0.in-addr.arpa" IN { type master; file "named.empty"; allow-update { none; }; }; zone "zyonize1.com" { type master; file "/var/named/zyonize.com.hosts"; }; [root@izyon92 ~]# cat /var/named/zyonize.com.hosts --------------------------------------------------------- $ttl 38400 zyonize1.com. IN SOA 182.19.26.92. dev\.izyon.gmail.com. ( 1347436958 10800 3600 604800 38400 ) zyonize1.com. IN NS 182.19.26.92. zyonize1.com. IN A 182.19.26.92 www.zyonize1.com. IN A 182.19.26.92 izyon92.zyonize1.com. IN A 182.19.26.92 I have disabled SELinux and stopped iptables.
    dig and nslookup are working fine on the same machine: [root@izyon92 ~]# dig zyonize1.com ---------------------------------------- ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.2 <<>> zyonize1.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55751 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0 ;; QUESTION SECTION: ;zyonize1.com. IN A ;; ANSWER SECTION: zyonize1.com. 38400 IN A 182.19.26.92 ;; AUTHORITY SECTION: zyonize1.com. 38400 IN NS 182.19.26.92. ;; Query time: 0 msec ;; SERVER: 182.19.26.92#53(182.19.26.92) ;; WHEN: Fri Sep 14 00:09:19 2012 ;; MSG SIZE rcvd: 72 [root@izyon92 ~]# nslookup zyonize1.com ---------------------------------------------- Server: 182.19.26.92 Address: 182.19.26.92#53 Name: zyonize1.com Address: 182.19.26.92 But here is the problem I am facing: I have a Windows machine, and to test this DNS nameserver I set its first IPv4 DNS server to 182.19.26.92. Here are the details: Connection-specific DNS Suffix: Description: Realtek PCIe GBE Family Controller Physical Address: 14-FE-B5-9F-3A-A8 DHCP Enabled: No IPv4 Address: 192.168.2.50 IPv4 Subnet Mask: 255.255.255.0 IPv4 Default Gateway: 192.168.2.1 IPv4 DNS Servers: 182.19.26.92, 182.19.95.66 IPv4 WINS Server: NetBIOS over Tcpip Enabled: Yes Link-local IPv6 Address: fe80::45cc:2ada:c13:ca42%16 IPv6 Default Gateway: IPv6 DNS Server: When I am pinging from this machine it does not find the server, whereas from another server with another live IP running Fedora, ping works fine.
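
    A hedged guess rather than a confirmed fix: in the named.conf above, allow-query { 182.19.26.92; } only answers queries that arrive from that single address, so any other client (including the Windows box) is refused, and the NS record should point at a hostname rather than a bare IP. Something along these lines could be tried:

        # proposed changes (assumptions based on the config quoted above)
        #
        # /etc/named.conf, options block:
        #   allow-query     { any; };        # let other hosts query the zone
        #   allow-recursion { localhost; };  # but keep recursion private
        #
        # /var/named/zyonize.com.hosts:
        #   zyonize1.com.          IN NS  izyon92.zyonize1.com.
        #   izyon92.zyonize1.com.  IN A   182.19.26.92
        service named restart
        dig @182.19.26.92 zyonize1.com    # run from a second machine to verify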

    Read the article

  • PPTX to PDF -> Hyperlinks failed

    - by Joe
    Hi, I have a PowerPoint 2007 document with hyperlinks to other slides and external docs, both on the master layout and on several slides. When I try to convert it to PDF, it loses all the hyperlinks. I have the following questions: which PDF converter does not kill the hyperlinks made in the PowerPoint layout master? I tried Adobe and doPDF. And how do I have to set up the program/converter? Thanks in advance!

    Read the article

  • Keyboard repeat on Input Director slave machine.

    - by ProfKaos
    I'm using Input Director as a software KVM to control my laptop from my desktop, and all is almost OK with the setup. However, key-presses on the master keyboard seem to repeat very easily on the slave, and it is close to impossible to type a word on the slave without getting repeated characters. I typed the word 'repeat' on the master keyboard and my editor on the slave captured the characters 'repeeaatt'. Both machines are Windows 7.

    Read the article

  • How to set up a Git repo on a server that needs a working dir (non-bare)

    - by OrangeTux
    I want to configure a Git repo for a website. Multiple users will have a clone of the repo on their local machine, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir/non-bare repository. The idea is that the working dir of the repository will be the root folder for the website. At the end of each day all changes will be visible directly. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client is trying to push some files: git push origin master [email protected]'s password: Counting objects: 3, done. Writing objects: 100% (3/3), 227 bytes, done. Total 3 (delta 0), reused 0 (delta 0) remote: error: refusing to update checked out branch: refs/heads/master remote: error: By default, updating the current branch in a non-bare repository remote: error: is denied, because it will make the index and work tree inconsistent remote: error: with what you pushed, and will require 'git reset --hard' to match remote: error: the work tree to HEAD. remote: error: remote: error: You can set 'receive.denyCurrentBranch' configuration variable to remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into remote: error: its current branch; however, this is not recommended unless you remote: error: arranged to update its work tree to match what you pushed in some remote: error: other way. remote: error: remote: error: To squelch this message and still keep the default behaviour, set remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'. To ssh://[email protected]/home/orangetux/www/ ! [remote rejected] master -> master (branch is currently checked out) error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/' So I'm wondering: is this the right way to set up a Git repo for a website? If so, how do I have to do this? If not, what is a better way to set up a Git repo for the development of a website? EDIT: "you can't push to a non-bare repository" OK, clear. But what's the way to solve my problem? Create a bare repository on the server and have a clone of this repo on the same server in the htdocs folder? This looks a bit clumsy to me. To see the result of a commit I'd have to clone the repository each time.
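
    One pattern that may fit, sketched under the assumption that the web root is /home/orangetux/www (the bare-repo path below is illustrative, not from the question): push to a bare repository on the server and let a post-receive hook check the pushed branch out into the web root, so the site updates on every push without a second clone.

        # on the server: a bare repo alongside the web root
        git init --bare /home/orangetux/site.git
        # create hooks/post-receive inside that repo containing:
        #   #!/bin/sh
        #   GIT_WORK_TREE=/home/orangetux/www git checkout -f master
        chmod +x /home/orangetux/site.git/hooks/post-receive
        # on each client: point origin at the bare repo and push as usual
        git remote set-url origin ssh://[email protected]/home/orangetux/site.git
        git push origin master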

    Read the article

  • Diff 2 files while ignoring parts of lines

    - by Millianz
    I would like to diff a file system. Currently my bash script prints out the file system recursively into a file (ls -l -R) and diffs it with an expected output. An example for a line in this file would be: drw---- 100000f3 00000400 0 ./foo/ My current diff command is diff "$TEMP_LOG" "$DIFF_FILE_OUT" --strip-trailing-cr --changed-group-format='%' --unchanged-group-format='' "$SubLog" As you can see I ignore additional lines in the current output file; I only care about lines that match the master output. I now have the problem, though, that some files may differ in size, or a folder might even have a different name, but due to its location I know what access rights it should have. For example: Output: ------- 00000000 00000000 528 ./foo/bar.txt Master: ------- 00000000 00000000 200 ./foo/bar.txt Only the size differs here, and it doesn't matter; I would like to just ignore certain parts of the diff, kind of like an ANSI C comment. Master: ------- 00000000 00000000 /*200*/ ./foo/bar.txt -- OR -- Master: d------ 00000000 00000000 /*10*/ ./foo//*123123*///*76456546*//bar.txt Output: d------ 00000000 00000000 0 ./foo/asd/sdf/bar.txt And still have it diff correctly. Is this even possible with diff, or will I have to write a custom script for it? Since I'm fairly new to Cygwin I might be using the completely wrong tool altogether; I'm happy for any suggestions. Update: Taking a step back, here is the general task at hand that I want to achieve. I want to write a script that checks the file system to see if the read/write permissions are set up correctly. The structure of the file system is under my control, so I don't have to worry about it changing too much. Sometimes folders/files might not be present, but if they are, their permissions must be checked. For example, assume that the following is a snapshot of the current file system structure drw ./foo drw ./foo/bar -rw ./foo/bar/bar.txt drw ./foo/baz -rw ./foo/baz/baz.txt And this is what the file system structure might dictate, i.e. if these folders / files are present, the permissions must match. drw ./foo drw ./foo/bar -rw ./foo/bar/bar.txt --- ./foo/bar/foobar.txt drw ./foo/baz -rw ./foo/baz/foobaz.txt In this case the file system checked out OK, since all files present match their expected values. The situation becomes more complicated as soon as certain folders might have any arbitrary name; only due to their location do I know what their permissions should be. Assume that the directory ./foo/bar in the above example might be such a case, i.e. instead of bar the folder could have any name, but still match the -rw permissions. This seems like a very complicated situation, and I'm not even sure if I can solve it with bash scripting alone. I might have to write an actual application.
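
    One direction worth exploring, given only as a rough sketch (the field positions assume the ls-style listing quoted above): normalize both files first, keeping only the columns that should be compared, and diff the normalized versions.

        # keep only the permission column and the path; drop the hex fields and the size
        normalize() { awk '{ print $1, $NF }' "$1"; }
        diff --strip-trailing-cr --changed-group-format='%<' --unchanged-group-format='' \
             <(normalize "$TEMP_LOG") <(normalize "$DIFF_FILE_OUT")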

    Read the article

  • Postfix fatal error: table lookup problem

    - by samer na
    Here is the mail.log: server localhost: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) Mar 23 23:07:19 ubuntu postfix/trivial-rewrite[6417]: fatal: mysql:/etc/postfix/mysql_virtual_alias_maps.cf(0,lock|fold_fix): table lookup problem Mar 23 23:07:20 ubuntu postfix/smtpd[6401]: warning: problem talking to service rewrite: Success Mar 23 23:07:20 ubuntu postfix/cleanup[6296]: warning: problem talking to service rewrite: Connection reset by peer Mar 23 23:07:20 ubuntu postfix/master[6291]: warning: process /usr/lib/postfix/trivial-rewrite pid 6417 exit status 1 Mar 23 23:07:20 ubuntu postfix/master[6291]: warning: /usr/lib/postfix/trivial-rewrite: bad command startup -- throttling
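
    The log points at MySQL being unreachable rather than at Postfix itself. A quick checklist, offered as a sketch (the test address below is hypothetical):

        service mysql status || service mysql start
        ls -l /var/run/mysqld/mysqld.sock     # does the socket exist where the error says?
        grep -i hosts /etc/postfix/mysql_virtual_alias_maps.cf
        # verify the map by hand before restarting Postfix
        postmap -q user@example.com mysql:/etc/postfix/mysql_virtual_alias_maps.cf
        service postfix restart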

    Read the article

  • Can I set up an "involuntary" conference call with Freeswitch?

    - by Atilla Filiz
    I am trying to set up a SIP/RTP public announcement infrastructure. Basically there are several slave user agents that are configured to answer automatically, and a master UA which should be able to call all of them and make announcements. One workaround seems to be creating a conference and making all UAs join via some RPC mechanism, but I don't want to go that direction unless I have to. The slave UAs are Linphone and I haven't decided on the master agent yet.
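
    If FreeSWITCH sits in the middle, one way that may fit, sketched with made-up extension numbers: have the server originate a call to each slave UA (they auto-answer) and drop each leg straight into a named conference, then call the announcer into the same conference.

        for ext in 1001 1002 1003; do               # hypothetical slave extensions
          fs_cli -x "originate user/${ext} &conference(announcements)"
        done
        fs_cli -x "originate user/1000 &conference(announcements)"   # the master/announcer UA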

    Read the article

  • Adding a second Samba server to a Windows domain

    - by Eric
    Hi, I'm trying to add a second Samba server (standalone) to our Windows domain, which is managed by a Samba server, but we've had some problems: we see the server and the shares, but cannot access the shares. We decided to start with a minimal configuration. [global] netbios name = GINGER wins server = 192.168.0.2 workgroup = DOMAIN1 os level = 20 security = share passdb backend = tdbsam preferred master = no domain master = no [data] comment = Data path = /home/data guest only = Yes Trying to access the share gives a permissions error. Thanks,
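
    A hedged guess at a starting point, not a verified fix: with "security = share" and "guest only = Yes" every access arrives as the guest account, so both the share definition and the Unix permissions on /home/data have to allow it. Something like the following could be tried:

        # smb.conf changes to try (assumptions, shown as comments):
        #   [global]
        #   security = user
        #   map to guest = Bad User
        #   [data]
        #   path = /home/data
        #   guest ok = yes
        #   read only = no
        testparm -s                      # sanity-check smb.conf
        chmod -R a+rX /home/data         # Unix permissions must allow the guest account
        smbclient -L //GINGER -N         # anonymous listing to verify
        service smbd restart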

    Read the article

  • Slave DNS server, resolv.conf configuration

    - by Tim the Enchanter
    I have two servers (a master and a slave) running DNS (BIND). What should the /etc/resolv.conf file look like for the master and the slave? For example, should the servers running DNS have only: nameserver 127.0.0.1 or should they refer to the IP addresses of each server, as the other servers on the network do: search <mydomain>.co.uk nameserver 192.168.1.52 nameserver 192.168.1.57
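
    A common convention, offered as a sketch rather than the one right answer: each DNS box resolves against itself first and its partner second, so lookups keep working locally even if named dies on one of them. On the master (swap the partner address on the slave):

        # /etc/resolv.conf on the master
        #   search <mydomain>.co.uk
        #   nameserver 127.0.0.1
        #   nameserver 192.168.1.57       # the partner, as a fallback
        dig @127.0.0.1 localhost          # quick check that the local resolver answers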

    Read the article

  • What DNS server to use for dynamic load-balancing of website?

    - by Marki555
    I will have 2 servers in different datacenters (different countries) and I want to use DNS load-balancing mainly for high availability of the website hosted on those 2 servers. It is just an ad tracking site, which records the hit in a local database and returns a few lines of HTML code. I want to return 2 A records each time because of DNS pinning in browsers (if one server fails, the browser will try the second A record which it has already cached). Both servers will also be acting as DNS servers for redundancy. Now comes my proposed solution: I will use BIND and have both servers as a master for that zone. On each server there will be a script running, which will periodically test availability (HTTP) of both servers and remove the IP from DNS in case of failure. Now the questions :) 1) Is BIND suitable for this solution? I think BIND performance is good and it is easy to manipulate the zone file via script. And as I will modify the zone only in case of failure/maintenance, the modifications (and thus bind reloads) won't be frequent. 2) I plan to use a TTL of 5 minutes. The website will have about 1000-3000 req/s but from distinct clients (each IP only 1-3 requests), so I think the DNS load won't be too much. I suppose their ISPs will cache the responses for those 5 mins. Is there any reason to lower the TTL even more? 3) Is my master-master approach good? Or should I make one of the servers master and the other one slave? Right now each server can monitor both itself and the other one. If only the webservice fails, both DNS nodes will notice it. If the whole server fails, then the remaining DNS node will notice it and the failed node will not answer DNS queries anyway. 4) Is it a big issue when one NS server does not respond to queries? If yes, I can make a third DNS, so at any time at least 2 of them would accept queries... 5) Should I rewrite the zone file via script, or just use dynamic DNS updates (for example via the nsupdate utility)?
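
    On question 5, a rough sketch of the dynamic-update route (the domain, key path and peer address below are placeholders, not taken from the question): probe the peer over HTTP and delete or re-add its A record with nsupdate, which avoids rewriting the zone file and reloading BIND.

        PEER_IP=203.0.113.2     # hypothetical address of the other server
        if ! curl -fsS --max-time 5 "http://${PEER_IP}/" >/dev/null; then
          # remove the failed peer's A record via dynamic update
          printf 'server 127.0.0.1\nzone example.com\nupdate delete www.example.com. A %s\nsend\n' \
                 "${PEER_IP}" | nsupdate -k /etc/bind/ddns.key
        fi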

    Read the article

  • Successfully concatenating multiple videos

    - by wiseguydigital
    My mission is to create videos out of old web slideshows. To start with I have JPEGs and audio files that worked as Flash slideshows in an old system, structured like this: Audio structure my_audio_1.mp3 (this file is a 3 second mp3 of silence) my_audio_2.mp3 my_audio_3.mp3 my_audio_4 etc... roughly 30 mp3s per slideshow Image structure my_image_1.jpg (this acts as the opening slide) my_image_2.jpg my_image_3.jpg my_image_4 etc... roughly 30 images per slideshow. As there are almost 100 slideshows that must be converted to video, I have created a web-based interface using PHP to automate the process; it sits on a local system and attempts to combine the files using shell_exec(). The process uses the following workflow: Loop through each slide and make an avi or mpeg. So for instance my_mini_video_2.avi would be a video that consists of my_image_2.jpg and has a soundtrack of my_audio_2.mp3. This slide would last the length of my_audio_2.mp3. Join / stitch / concat all of the mini videos to create the final video (using a combination of cat and either mencoder or ffmpeg; I have also tried avimerge but to no avail). Transcode the new 'master' video to various formats such as flv etc. I thought this would be simple and have been close on many occasions but it still won't work. I can't get past stage 2 as I can't get a perfect 'master' video. I have now experimented with MEncoder and FFmpeg and seem to have been through every combination I can think of. The problem is that the audio and visuals never sync, no matter what I try. Also, I have even tried creating audio-less mini videos, joining the MP3s into one long MP3 using both cat and mp3wrap and then assigning the new long MP3 as the audio track, but this always produces either a very short file or a badly slowed down file and makes the female voiceover sound like a male boxer!!! There appear to be no problems at all with the original files. Does anybody have any experience in producing a video successfully from the same kind of starting point? Or any ideas on what I may be doing wrong? As an example: if I create silent mini-videos and stitch them together into 'temp-master.mpg', and then join the MP3s together into a single MP3 called 'temp-master-audio.mp3', the audio file's duration is 09:10 and the video file's duration is 08:35. They should be the same, and the audio will seem sloooow. I haven't posted code as I have written lots and lots of combinations.
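
    A sketch of one way this is often done with a reasonably recent ffmpeg (codec choices and frame rate are assumptions): encode every slide to identical video/audio parameters first, then join the clips with the concat demuxer, which keeps audio and video in sync far better than cat-ing streams together.

        for i in $(seq 1 30); do
          ffmpeg -loop 1 -i "my_image_${i}.jpg" -i "my_audio_${i}.mp3" \
                 -c:v libx264 -tune stillimage -r 25 -pix_fmt yuv420p \
                 -c:a aac -ar 44100 -shortest "slide_${i}.mp4"
        done
        for i in $(seq 1 30); do echo "file 'slide_${i}.mp4'"; done > list.txt
        ffmpeg -f concat -safe 0 -i list.txt -c copy master.mp4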

    Read the article

  • Puppet, secret facts

    - by black_rez
    I manage servers with a Puppet master and I use Foreman for visualisation. Because of specific regulations, the only access I have is the Puppet agent for configuration; some information can't be visualized by Foreman, and the master can't store this information. For example, the Puppet agent needs to get a secret variable (a password stored in a file). How can I get it without knowing this variable? Also, I need to keep reports because I want to know what happens on the server.

    Read the article

  • How can I roll back 1 commit?

    - by n179911
    I have 2 commits that I did not push: $ git status # On branch master # Your branch is ahead of 'faves/master' by 2 commits. How can I roll back my first one (the oldest one), but keep the second one? $ git log commit 3368e1c5b8a47135a34169c885e8dd5ba01af5bb ... commit baf8d5e7da9e41fcd37d63ae9483ee0b10bfac8e ... From here: http://friendfeed.com/harijay/742631ff/git-question-how-do-i-rollback-commit-just-want Do I just need to do: git reset --hard baf8d5e7da9e41fcd37d63ae9483ee0b10bfac8e and that's it?
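
    For what it's worth, a sketch of the distinction (using the hashes from the question): git reset --hard baf8d5e7... keeps the older commit and throws the newer one away. To drop the older commit while keeping the newer one, the usual routes are an interactive rebase or a reset plus cherry-pick:

        git rebase -i HEAD~2     # change the older commit's line to "drop" (or delete it) in the editor
        # or, without the editor:
        git reset --hard HEAD~2
        git cherry-pick 3368e1c5b8a47135a34169c885e8dd5ba01af5bb   # re-apply only the newer commit
        # (the cherry-pick may conflict if the newer commit builds on the older one)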

    Read the article

  • Dual boot with Windows and Linux

    - by nuttynibbles
    Hi, I have Windows XP installed. Recently I installed SUSE Linux, after which SUSE Linux became the main OS to be booted. However, SUSE provides an option to boot "windows 1" (Windows XP) during boot-up. If I uninstall SUSE Linux (I've tried), Windows won't be able to boot, as the GRUB master boot record will be corrupted. My question is: is there a software tool which can be booted from CD to modify the master boot record, so as to reduce the effort?
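
    Two commonly suggested routes, given only as a sketch (the device name is an assumption): boot the Windows XP CD's Recovery Console and run fixmbr, or boot a Linux live CD that ships ms-sys and write a Windows-style MBR before removing SUSE:

        ms-sys -m /dev/sda     # writes a Windows XP-compatible MBR to the first disk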

    Read the article

  • LDAP export and import

    - by Jure1873
    Is it possible to export all the data inside OpenLDAP, for example using ldapsearch or some other tool, to an (LDIF?) file, then import everything on another server, and put this in a script that would be run every day? That way I could use the other one as a backup when the first/master server is not available. I have full access to the first/master server, but I can't modify its configuration, so I think I can't set up replication.
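
    A sketch of the usual dump-and-reload cycle (the base DN, paths and service names are placeholders): slapcat on the master if you have shell access, or ldapsearch over the network, then slapadd on the backup box with slapd stopped.

        # on the master, e.g. from a daily cron job
        slapcat -l /backup/master.ldif
        # or, over the network only:
        # ldapsearch -x -H ldap://master.example.com -D "cn=admin,dc=example,dc=com" -W \
        #            -b "dc=example,dc=com" '(objectClass=*)' > /backup/master.ldif

        # on the backup server
        service slapd stop
        slapadd -l /backup/master.ldif
        chown -R openldap:openldap /var/lib/ldap
        service slapd start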

    Read the article

  • Rescuing files and commits from "no branch" in git

    - by Xeoncross
    I started working on some files I had in a git submodule under another project. However, since it was a git submodule, it never checked out "master"; it just checked out the HEAD commit and placed all the files in the folder on "no branch". Now that I've accidentally made some changes to these files, I've just realized that I was working on "no branch" in a submodule of my project. How do I get those files into a branch (like master) so I can rescue them?
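
    A minimal sketch (the submodule path is hypothetical): commits made on a detached HEAD are not lost, they just have no branch name pointing at them yet, so label the current commit with a branch and merge it wherever it belongs.

        cd path/to/submodule        # hypothetical path
        git branch rescue           # name the commit(s) you are sitting on
        git checkout master
        git merge rescue
        # if the changes are still uncommitted, `git checkout -b rescue` and commit them there first;
        # if HEAD has already moved on, `git reflog` will still list the lost commits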

    Read the article

  • 2 servers, high availability and faster response

    - by user17886
    I recently bought a second webserver because I worry about hardware failure of my old server. Now that I have that second server I wish to do a little more than just have one server on standby, replicating all day. As long as it's there I might as well get some advantage out of it! I have a website powered by Ubuntu 12.04, nginx, php-fpm, APC, MySQL (5.5) and CouchDB. I'm currently testing configurations where I can achieve failover AND make good use of the extra hardware for faster responses / distributed load. The setup I am testing now involves heartbeat for IP failover and two identical servers. Of the two servers only one has a public IP address. If one server crashes the other server takes over the public IP address. On an incoming request nginx forwards the request to php-fpm on either server a or server b (50/50 if both servers are alive). Once the request has been sent to php-fpm, both servers look at localhost for the MySQL server. I use master-master MySQL replication for this. The file system is synced with lsyncd. This works pretty well but I'm reading it's discouraged by the (MySQL) community. Another option I could think of is to use one server as a MySQL master and one server as a web/php server. The servers would still sync their filesystem and would still run the same duplicate software (nginx, MySQL), but master-slave MySQL replication could be used. As long as both servers are alive I could just prefer nginx to listen on IP a and MySQL on IP b. If one server is down, the other server could take over the task of the other server, simply by IP switching. But I'm completely new at this so I would greatly value your expert advice. Is either of the two setups any good? If you have any thoughts on this please let me know! PS: virtualisation, hosting in different locations or active/passive setups are not solutions I'm looking for. I find virtual servers either too slow or too expensive. I already have a passive failover in another location. But in case of a crash I found the site was still unreachable for too long due to DNS caching.

    Read the article

  • How to get a SafeControl entry into manifest.xml with a WSPBuilder project

    - by andrew
    Upon taking the default SharePoint master page for MySite, making some changes, and making a WSP out of it with WSPBuilder, I come to these errors in my logs: http://spoint/MySite/%5Fcatalogs/masterpage/MySite.master - An unexpected error has been encountered in this Web Part. Error: The control with virtual path '_controltemplates/Welcome.ascx' is not in the safe controls list for web at URL 'http://spoint/MySite'., Source: [UnsafeControlException: The control with virtual path '_controltemplates/Welcome.ascx' is not in the safe controls list for web at URL 'http://spoint/MySite' (stack trace omitted) http://spoint/MySite/%5Fcatalogs/masterpage/MySite.master - An unexpected error has been encountered in this Web Part. Error: The control with virtual path '_controltemplates/DesignModeConsole.ascx' is not in the safe controls list for web at URL 'http://spoint/MySite'., Source: [UnsafeControlException: The control with virtual path '_controltemplates/DesignModeConsole.ascx' is not in the safe controls list for web at URL 'http://spoint/MySite' (stack trace omitted) So this master page does in fact use these OOTB controls, and I guess I need to get them registered as safe controls. And I guess I want to do this via the manifest.xml. But I do not see how to make WSPBuilder do this.

    Read the article

  • How to generate a simple packing list with MySQL?

    - by Stephen
    Hi, I need help on how to create a packing list for a shipment with MySQL. Let's say I have 32 boxes of keyboards ready to ship, and a master carton can contain 12 boxes. I only have the values 32 (boxes) and 12 (boxes per master carton). The other values in the result below are generated by the SQL command, not coming from records. So it is easy to calculate that there would be 3 master cartons, with one holding a non-standard quantity. How do I perform a query for this? I would like this result: +----------+---------------+-------------------+--------+------------+---------+ | Quantity | Standard_Qty | Non_Standard_Qty | Box_N | Box_Total | RowType | +----------+---------------+-------------------+--------+------------+---------+ | 12 | 1 | 0 | 1 | 3 | Detail | | 12 | 1 | 0 | 2 | 3 | Detail | | 8 | 0 | 1 | 3 | 3 | Detail | | 32 | 2 | 1 | | | Summary | +----------+---------------+-------------------+--------+------------+---------+ It looks like it needs two queries, which I know, and probably the use of the FLOOR function, which I was taught here. How do I produce this result? Thanks in advance. Stephen

    Read the article

  • Is there any replication standard or concept for application server data replication?

    - by Naga
    Hi friends, say I have a server that handles file-based mass data and can process thousands of read requests and hundreds of provisioning requests (add, modify, delete) per second. This is not an SQL-based database. Now I plan to implement replication. There should be master-master replication, master-slave replication, partial replication (entries matching a criterion) and fractional replication (part of an entry). Is there any standard for implementing such replication mechanisms? Can you please suggest a solution overview? Thanks, Naga

    Read the article

  • C# - Launch Invisible Process (CreateNoWindow & WindowStyle not working?)

    - by Chad
    I have 2 programs (.exe) which I've created in .NET. We'll call them the Master and the Worker. The Master starts 1 or more Workers. The Worker will not be interacted with by the user, but it is a WinForms app that receives commands and runs WinForms components based on the commands it receives from the Master. I want the Worker app to run completely hidden (except showing up in the Task Manager of course). I thought that I could accomplish this with the StartInfo.CreateNoWindow and StartInfo.WindowStyle properties, but I still see the Client.exe window and components in the form. However, it doesn't show up in the taskbar. Process process = new Process { EnableRaisingEvents = true, StartInfo = { CreateNoWindow = true, WindowStyle = ProcessWindowStyle.Hidden, FileName = "Client.exe", UseShellExecute = false, ErrorDialog = false, } }; What do I need to do to let Client.exe run, but not show up?

    Read the article

  • Maintain set of local commits working with git-svn

    - by benizi
    I am using git to develop against a project hosted in Subversion, using git-svn: git svn clone svn://project/ My general workflow has been to repeatedly edit-and-commit on the master branch, then commit to the svn repository via: git stash git svn dcommit git stash apply One of the local modifications that the 'stash' command is preserving, which I don't want to commit to the svn repository, is a changed database connection string. What's the most convenient way to keep this local change without the extra 'stash' steps? I suspect that something like 'stash' or 'quilt' is what I'm looking for, but I'm still new enough to git that I think I'm missing some terminology that would lead to the exact incantation. Update: The only solution I found that seems to avoid the git stash + git-svn action + git stash apply series was to update the git-svn ref manually: (check in the local-only change to 'master', then...) $ cat .git/refs/heads/master > .git/refs/remotes/git-svn $ git svn fetch (with at least one new SVN revision) And that leaves the local-only commit as a weird (probably unsafe) commit between two SVN revisions.
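
    One lighter-weight option, sketched with a hypothetical file name (not taken from the post): mark the file holding the local-only connection string as skip-worktree, so git stops seeing the local edit and the stash dance around dcommit becomes unnecessary.

        git update-index --skip-worktree app/db.config      # hypothetical path
        # git status and git svn dcommit now ignore local edits to that file;
        # undo later with:
        git update-index --no-skip-worktree app/db.config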

    Read the article

  • How should I load a xib into a detail view for a split view iPad Objective-C app?

    - by editor
    I've got a split view iPad app with a table view in the master pane. After drilling down several levels, each time loading a new .xib, I'd like one of the cells to trigger a web page load in the detail pane. Right now I can only get the web page .xib to load in the master pane side -- which is a master pain in my side. The basic load call, where "URLWindow" is a class loaded with initWithNibName:, is: [[self navigationController] pushViewController:URLWindow animated:YES]; I want to do this, but it doesn't seem to work: @interface @property (nonatomic, retain) IBOutlet DetailViewController *detailViewController; ... @implementation [[self detailViewController] pushViewController:URLWindow animated:YES]; How should I be loading the URLWindow .xib into a detail view for a split view detail pane?

    Read the article

  • Using jQuery to load Telerik MVC scripts.

    - by ProfK
    I'm busy building my first MVC app that uses the Telerik MVC components. Their docs specify that the ScriptRegistrar helper be called right at the bottom of a view, e.g. "at the end of the master page.". I assume this renders a script block that must only run when the page has loaded. I normally prefer to achieve this using jQuery, and keep all my script related stuff at the top of my master page, preferably in the <head> tag. Is there anything I can do to achieve this with the Telerik components and do away with the lone and forgotten helper call at the bottom of my master page?

    Read the article
