Search Results

Search found 3431 results on 138 pages for 'happy clicker'.


  • Pulling application updates from closest server?

    - by Mike Morris
    Setup: six major sites with Server 2003/2008 DCs doing DHCP and AD-integrated DNS, each on their own subnet. All connect back to the datacenter through a 3 Mbps WAN. The ERP server runs in the datacenter and is accessed by clients at all sites.

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients: it determines what subnet the PC is on and pulls the software from that DC. It's messy, but it works. The client has an autoupdate feature, but it will only pull from the application server (which is housed in the datacenter, over the 3 Mbps link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server; unfortunately, it is the same server for all clients.

    Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites get? Example: client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client:

    Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A). DNS at site A returns 192.1.1.5. Client 1 pulls the update from 192.1.1.5.
    Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B). DNS at site B returns 192.1.2.5. Client 2 pulls the update from 192.1.2.5.

    Excuse the poor explanation; I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
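
    One approach that might fit here (not from the post itself, just a sketch): since the zone is AD-integrated and replicated to every site's DC, you could register one ERPUPDATE A record per site and rely on the DNS server's netmask ordering to hand each client the record closest to its own subnet. The zone name below is a placeholder.

        rem one A record per site's local server IP; records replicate with the AD-integrated zone (zone name is hypothetical)
        dnscmd /RecordAdd corp.example.com ERPUPDATE A 192.1.1.5
        dnscmd /RecordAdd corp.example.com ERPUPDATE A 192.1.2.5

    Whether this behaves as intended depends on netmask ordering being enabled on the DNS servers and on the subnets lining up with its matching rules, so treat it as something to test rather than a guaranteed fix.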

    Read the article

  • How to restore Active Desktop if Themes is locked via GPO?

    - by pepoluan
    My company forces Active Desktop upon everybody so that it can display a (monthly-rotated) corporate wallpaper.jpg. Problem is, some computers (including my laptop) somehow hit errors that cause the dreaded "Active Desktop Recovery" screen to show up... and clicking the "Restore My Active Desktop" button always results in an "Internet Explorer Script Error". The various workarounds I found on the Internet either do not work or require me to change the theme to something else first... and the latter I can't do because the Desktop Settings screen is locked via GPO. As it happens, due to the nature of the programs I use, I'm granted Administrator-level access on my computer. The question is: how do I fix my situation?

    Note: I don't need to put on my own wallpaper, but watching the "Active Desktop Recovery" screen (with its BLANK WHITE!!1! OH MY EYES!1!!one!eleven!! background) gets tiresome. I'm quite happy with the corporate wallpaper; I just need to somehow 'recover' my Active Desktop.

    More information: OS: Windows XP Professional SP3 (yeah, the company's too afraid to even experiment with Windows 7). Antivirus: Symantec Endpoint Protection. If you need any additional information, feel free to ask.
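
    A commonly cited workaround for this particular symptom (not taken from the post, and worth verifying before use) is to reset the DeskHtmlVersion value that Active Desktop checks when it decides it is in "recovery" mode, then restart explorer.exe or log off and back on. Since the poster has local Administrator rights, this only touches the current user's registry hive:

        reg add "HKCU\Software\Microsoft\Internet Explorer\Desktop\SafeMode\Components" /v DeskHtmlVersion /t REG_DWORD /d 0 /f

    Whether the GPO re-breaks it at the next policy refresh is another matter, so this is a sketch of the usual fix rather than a guaranteed solution.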

    Read the article

  • Every month, scheduled task fails and password must be reset - why?

    - by Ducain
    [NOTE: I posted this originally at StackOverflow but it got no traction there - reposting here.] We have a bit of software installed at a few client locations that runs (via Windows task scheduler) a few times each day. In ONLY ONE of the client locations, we have a unique problem: each month, the task will stop working, after running every day for weeks. Twice now it's failed on the 2nd of the month. When I walk the client through troubleshooting it, we've found that it can't start - access denied. To fix it, we simply re-enter the same exact password, and then off it goes happy as a clam. I've never heard of this issue, and their IT people say they don't have anything running once a month that might cause that. I'm at a complete loss here. Any ideas as to why this might be happening? Further details: Windows XP pro machine. Task is being fired with credentials from a local admin account. Computer is always on, and connected to the net.

    Read the article

  • Is the Windows VPN secure?

    - by Tor Haugen
    I have used a few VPN solutions over the years. Most are hard to set up, slow to connect and/or rather ill-behaved (replacing system drivers, disrupting each other etc). One solution I have never used earlier is the one built into Windows. This is mostly because the infrastructure guys always refuse to use it because they claim it's 'not secure'. Now I have finally had the chance to use it (on Windows 7), and wow, it's a breeze! Easy to set up, well-behaved, it connects almost instantly, automatically authenticates with my logged-in credentials, and integrates excellently with the UI. I have to say, unless it really isn't secure, I'll be happy if I never have to use another VPN product ever again. I gather the Windows VPN used to rely on PPTP, which is not considered secure. But in Windows 7/2008, it supports L2TP/IPSec, SSTP and IKEv2, and authenticates with EAP or CHAP/CHAPv2. That seems pretty up-to-date to me. But I'm just a lowly developer. Can someone in the know give me the lowdown on this?

    Read the article

  • Basics about file/folder permissions on Win 7

    - by Altar
    Hi. Under Win XP I never touched the permissions of a file or folder; I was happy with the way it worked. But recently I installed Windows 7 on a drive that previously hosted Windows XP. Now some programs do not have 'read' and/or 'write' access to their own folders - and I am not talking about system folders like 'Program Files' but normal folders like 'C:\my data\my own folder\program folder'. I see that folders created under Win XP have some user groups that do not exist on 'normal' folders (folders created by me recently under Windows 7). For example, for the Win XP folder I have:

    Creator Owner
    System
    Account Unknown (S-1-5-21 blablabla...
    Admins
    Users

    For Win7 folders I have:

    Authenticated Users
    System
    Admins
    Users

    How should I proceed? Should I give the "Users" account the right to write to the XP folders? Should I make the old folders (the XP folders) have the same groups of users as the normal (Win7) ones by adding the "Authenticated Users" account to those folders? Should I delete the "Account Unknown" account from my system? (In that case, how?) Many thanks.
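
    As an illustration only (the path is the poster's own example; the exact rights wanted may differ), granting the built-in Users group modify rights on one of the old XP-era folders could look like this from an elevated command prompt, and icacls can also strip the orphaned SID:

        icacls "C:\my data\my own folder\program folder" /grant Users:(OI)(CI)M /T
        icacls "C:\my data\my own folder\program folder" /remove *S-1-5-21-XXXXXXXXXX /T

    The (OI)(CI) flags make the grant inherit to files and subfolders, /T applies it to the existing tree, and the SID in the /remove line is a placeholder for the "Account Unknown" entry.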

    Read the article

  • Can next hop address be same as destination address?

    - by Raj
    Say the host address is 100.0.0.1, the next-hop address is 100.0.0.2, and the destination IP address is also 100.0.0.2. Is this a valid use case? Any real-life usage?

                 <dest ip>                     <next hop>
        ip route 100.0.0.2 255.255.255.255 100.0.0.2 weight 1 next-hop-vrf GlobalRouter

    The above is the command on a router inside a VRF. 100.0.0.2 is pingable from the host. 100.0.0.1 and 100.0.0.2 are IP addresses assigned to a VLAN on the host and the destination respectively. On a Linux box, such a configuration is valid:

        [root]# netstat -r -n
        Kernel IP routing table
        Destination      Gateway          Genmask          Flags  MSS  Window  irtt  Iface
        55.55.55.55      55.55.55.55      255.255.255.255  UGH    0    0       0     eth0

        [root]# ip route show
        55.55.55.55 via 55.55.55.55 dev eth0

    As per my understanding, if a destination IP is reachable (i.e. in the same subnet as the host IP) we don't need a next hop. I came across one application of using a next hop for a destination IP in the same subnet (i.e. for VPN); see this: Will packets sent to the same subnet go through routers? If next hop != destination IP, with both in the same subnet as the host, is a valid scenario for VPN, then I am wondering: what are the applications of next_hop == dest_ip with the subnet the same as the host's? This is my first post on Super User. Extremely happy with the quick and warm response.
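
    For what it's worth, the Linux entry quoted above can be reproduced with a single host route whose gateway equals the destination; this is only a sketch using the post's example addresses, and it assumes 55.55.55.55 is already reachable on-link via eth0:

        # add a /32 route whose next hop is the destination itself
        ip route add 55.55.55.55/32 via 55.55.55.55 dev eth0

    The kernel accepts it as long as the gateway resolves through a connected route on eth0; functionally it behaves much like sending to the destination directly.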

    Read the article

  • VirtualName-based local development host behind corporate proxy (MAMP)

    - by geerlingguy
    I am behind a corporate proxy server/firewall, and this firewall seems to not be too happy with my idea of local development. On my home computer (Mac/Leopard), I have MAMP running, with a rule in /etc/hosts that directs dev.example.com to 127.0.0.1, and I have a virtualhost set up in the httpd.conf file, which works great for me. However, at work, I set up the exact same configuration but am not able to access dev.example.com, likely due to some address/DNS translation going on via the proxy server. Here are the relevant details from Terminal:

        $ ping dev.example.com
        PING dev.example.com (127.0.0.1): 56 data bytes
        64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.025 ms

        $ host dev.example.com
        Host dev.example.com not found: 3(NXDOMAIN)

    I've tried adding dev.example.com to the list of bypass addresses in System Preferences (the 'Bypass proxy settings for these Hosts & Domains' list), but that had no effect. Is there any way I can develop locally using name-based hosts at work? I can access localhost, but can't get to dev.example.com (or any other custom virtualhosts) here at work, which complicates other matters related to the sites on which I'm working...
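
    For reference, the working home setup described above would look something like the following (paths and port are the usual MAMP/Apache 2.2 pattern, not copied from the post's actual files):

        # /etc/hosts
        127.0.0.1    dev.example.com

        # httpd.conf (MAMP)
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName   dev.example.com
            DocumentRoot "/Applications/MAMP/htdocs/example"
        </VirtualHost>

    Note that the host command queries DNS directly and ignores /etc/hosts, so its NXDOMAIN answer is expected either way; the ping output shows the hosts-file mapping itself is being picked up on the work machine.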

    Read the article

  • Why is piping dd through gzip so much faster than a direct copy?

    - by Foo Bar
    I wanted to back up a path from a computer in my network to another computer in the same network over a 100 MBit/s line. For this I did

        dd if=/local/path of=/remote/path/in/local/network/backup.img

    which gave me a very low network transfer speed of about 50 to 100 kB/s, which would have taken forever. So I stopped it and decided to try gzipping it on the fly to make it much smaller, so that the amount to transfer is less. So I did

        dd if=/local/folder | gzip > /remote/path/in/local/network/backup.img.gz

    But now I get something like 1 MB/s network transfer speed, so a factor of 10 to 20 faster. After noticing this, I tested it on several paths and files and it was always the same. Why does piping dd through gzip also increase the transfer rate by a large factor, instead of only reducing the byte length of the stream by a large factor? I'd have expected even a small decrease in transfer rate instead, due to the higher CPU consumption while compressing, but now I get a double plus. Not that I'm not happy, just wondering. ;)
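
    One variable worth isolating here (my suggestion, not part of the original post) is dd's block size: with no bs= argument dd reads and writes in 512-byte chunks, which can be punishing over a network filesystem, while the gzip pipe effectively re-buffers the stream. A quick comparison might look like:

        # same copy as above, but with 1 MiB blocks and no compression
        dd if=/local/path bs=1M of=/remote/path/in/local/network/backup.img

    If this alone gets close to the gzip-pipe speed, the difference is mostly buffering rather than compression.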

    Read the article

  • Free blog sites where the blogs can (if the template does) validate as XHTML Strict 1.0?

    - by Deleted
    I'm looking for a site where I may register a free blog and have the blog validate as XHTML 1.0 Strict. On the surface this may seem like a trivial problem related only to the theme/template in use, but that's unfortunately not the case. One example of a provider which can't fulfill this requirement is Blogger. Although the blog pages there present themselves as XHTML 1.0 Strict, it is impossible to actually comply with the requirements inherited from that markup type (the XHTML generated by Blogger makes the page as a whole invalid). I've sent a mail to Tumblr to see if it is possible with them, but so far my reply consists of them having forwarded my mail with a "suggestion" to the development department. I don't know if we had a communication error or if I'm actually going to receive a proper answer later; time will tell. I haven't had time to investigate Tumblr myself, so they may very well be the solution to this problem.

    To sum things up, I'm looking for any provider:

    Of a free blog.
    The blog must have the capability to validate as XHTML 1.0 Strict. By capability I mean that the system shouldn't get in the way of creating/using a theme which complies with XHTML 1.0 Strict.
    Preferably one that is large, or at least likely to stay around for a couple of years to come. But I'm willing to take my chances if none of the established providers are up to the task.

    Thank you for reading! I hope you know of a provider which would be suitable, preferably with proof by linking to a blog there which validates. I'm not looking for suggestions to look into, as there are far too many to investigate and far too little time. If you know of something for sure, I'd be very happy to know about it.

    Read the article

  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with flash games. To say the least, this isn't working for them.

    Now - I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the Domain Controller for the future AD to also run a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy, though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN.

    I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but every computer won't necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear.

    Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules when it comes to their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.

    Read the article

  • Laptop seemingly randomly "freezes" to the point of no longer executing applications

    - by Aierou
    After upgrading to Windows 8 Pro on my Samsung Series 7 Chronos NP700Z5C-S04US (may be relevant, I'm not sure), my computer began to stop allowing the execution of any service or application, as well as discontinuing the update of the clock, until a hard shutdown was performed. This seems to occur randomly after periods of inactivity and I've no idea of the cause. These are measures I have already taken in order to attempt to stop this:

    - Obviously Googling potential answers to this problem
    - Updating all drivers
    - Researching all events that have occurred around the time of the failure to respond (with no results)
    - Applying "bcdedit /set disabledynamictick no", which was a hotfix for what seemed to be the same error, but was not

    Here is some more, potentially related, information about the error:

    - No BSOD (actually, I haven't experienced a BSOD with Windows 8 at all)
    - The computer seems to have a problem shutting down/restarting most of the time (hangs at the point where it should completely turn off)
    - New sound instances are not able to play, but previously loaded containers function properly
    - As mentioned before, the clock freezes at the time of the error
    - USB devices function properly
    - Servers that I was running fail to respond on my end, but stay online

    If you require more information, please request it specifically and I will be happy to oblige. Thanks.

    Read the article

  • How to resolve 'No internet connectivity' issues with a Virtualised 2008 R2 Server using Forefront UAG

    - by user684589
    I have spent some considerable time reading as many blogs and articles as I can to help me work out why my DirectAccess VM (running on Hyper-V) has suddenly stopped being able to access the internet. The VM setup shares the same internet connection on which I have written and submitted this question, so I know that the underlying internet connection is fully functional. Prior to last week DirectAccess was fully functional and had no issues. This recent problem was preceded by a number of consistent crashes on the DA machine when access was attempted; upon reboot all seemed well, until recently.

    I am not certain whether it is relevant, but before this I had a number of power issues where the entire VM host shut down unexpectedly, leaving around 8 VMs in a bad way. Upon restart, the UAG DirectAccess machine was unable to access its configuration service (although the service was started), but this seemed to relate to the Lightweight Active Directory Service (AD LDS), which had a corrupted database. Having repaired this database, I restarted the service and could subsequently reconnect to the configuration service again. For good measure I re-bound the network adapters (virtualised through Hyper-V) and DirectAccess claimed to be all happy again.

    However, as it stands my machine is still unable to access the internet, showing the "No internet connectivity" exclamation mark for the external-facing NIC. I have also tried removing the adapters and disabling and re-enabling them, and the problem persists. The intranet part of the VM (CorpNet) seems to be fully functional as before, and I'm running out of ideas. Any input would be greatly appreciated. I am not an advanced Domain Administrator, so please be gentle.

    Read the article

  • Cheap gigabit switch for small business

    - by neoice
    My friend's business is currently borrowing my Adtran 1224R and is very happy with it. It's configured with a few VLANs to segment customers, internal traffic and public wifi. Port 1 is a "trunk" port to the router, a chunky Linux box with iptables+NAT. They push a lot of traffic over the LAN (data backups) and really need gigabit. Besides, I'd like my Adtran back :P My goal is to find a cheap(ish) switch that can function as a drop-in replacement. It looks like VLAN trunking is actually part of the 802.1Q spec, so anything with VLAN support should cover the current trunk-to-router setup. It's nice to have both a web interface and SSH, but I can configure it either way if needed. Things like the Netgear GS724T have caught my eye, but it seems like none of the hardware in the $300-500 range has really solid reviews. I'm concerned that "cheaper" hardware might not work for a network full of power users. Does anyone have a recommendation for the Netgear GS724T, or for a switch that will meet my needs?

    Read the article

  • Best solution to keep data secure

    - by mrwooster
    What is the simplest and most elegant way of storing a small amount of data in a reasonably secure way? I am not looking for ridiculous levels of advanced encryption (AES-256 is more than enough) and I am only looking to encrypt a small number of files. The files I wish to encrypt are mostly comprised of password lists and SSH keys for servers. Unfortunately it is impossible to keep track of the ever-changing passwords for my servers (and SSH keys), and so I need to keep a list of the passwords. Obviously this list needs to be secure, and also portable (I work from multiple locations).

    At the moment, I use a 10MB encrypted disk image on my Mac (standard .dmg, AES-256) and just mount it whenever I need access to the data. To my knowledge this is very secure and I am very happy using it. However, the data is not very portable. I would like to be able to access my data from other machines (especially ones running Linux), and I am aware that there are quite a few issues trying to mount an encrypted .dmg on Linux. An alternative I have considered is to create a tar archive containing the files and use gpg --symmetric to encrypt it, but this is not a very elegant solution as it requires gpg to be installed on every system.

    So, what other solutions exist, and which ones would you consider to be the most elegant? Ty
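
    For concreteness, the tar-plus-gpg option mentioned above might look like the sketch below (directory names are placeholders; AES-256 is forced explicitly since that is the level the poster asks for):

        # pack and symmetrically encrypt
        tar czf - passwords/ ssh-keys/ | gpg --symmetric --cipher-algo AES256 -o secrets.tar.gz.gpg

        # decrypt and unpack on any machine that has gpg
        gpg --decrypt secrets.tar.gz.gpg | tar xzf -

    The trade-off the poster already identifies still applies: every machine involved needs gpg installed, and the whole archive has to be re-encrypted after each change.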

    Read the article

  • Computer freezes after watching Youtube videos

    - by Roberts
    I had Windows 7 installed all September, but I reinstalled Windows XP Professional because my computer couldn't handle the new OS. After the first boot I tried to install the newest Flash Player (from the Adobe website), but it failed. I had my old setup on a USB drive and that worked; I don't know whether that is important or not. I watch YouTube videos in my free time (almost every hour). After a few days the computer started to freeze when I open a page with a video or close a page with a video, but not while I watch a video. No BSODs. Nothing in Event Viewer. I use Firefox only. When the computer freezes, the sound doesn't: if iTunes is playing a radio station, or another video is playing in the background, the sound won't freeze. In the last few days the mouse won't freeze either, which is a strange symptom; if I click a few times then the cursor will actually freeze. I just want to know where this problem comes from (hardware - the graphics card, the old motherboard - or just some glitch in the setups). If it's not the graphics card then I will be happy. The graphics card is an ATI Radeon HD 4650 - brand new, with Catalyst 11.8 installed.

    Things I have tried:

    - Installed the newest Flash Player after a week (the setup didn't fail this time)
    - Installed the latest video drivers
    - Deleting cookies
    - Defragmenting the hard drive
    - Using TuneUp Utilities for computer cleanup
    - Installed the latest Mozilla Firefox
    - Changed the CPU fan speed almost to max (just to be sure)
    - Cleaned the PC

    Things I haven't tried yet:

    - Playing videos in other browsers

    What can I do now?

    Read the article

  • Searching SharePoint site with Windows Explorer

    - by alexsome
    Every week, I manually back up recent versions of the files on my group's SharePoint site. I open the library in Windows Explorer, search for all files modified in the past week, then copy and paste them to a network location. We need this process because our SharePoint site has a quota that we would easily hit if we kept unlimited versions, so we keep a history of older versions on the network. Recently I got an upgrade to my work computer and I am now unable to search the site using Windows Explorer. When I run the search for files modified in the last week, no results are returned. If I run a search with no criteria on the file library, all the files are found, but the "modified on" field is blank; the search results only have the file and type fields. The new computer has Windows XP, just like the old one did. I hope this makes sense. Does anyone have any clue what the problem could be? I'd be happy to provide more info if necessary. It's bugging me to no end and I'm not even sure where to begin looking - it's either a trivial issue or a very obscure one. Thanks a lot.

    Read the article

  • Performance-optimizing Oracle 10g on a server that is also a Tomcat JSP app server?

    - by PKHunter
    I have inherited a simple RedHat 5 64-bit platform. It has SCSI disks in RAID1, 16GB of RAM and a dual-core CPU, running Oracle 10g Release 2. This would perhaps be a decent platform for running the DB only, but the same server (in a very simple "A-A mode" cluster) also runs Tomcat, and there are several Java servlets running on it. Sadly there is no caching platform; we only use an external CDN for some HTML caching. I am personally more familiar with web environments on the LAMPP platform (Apache, PHP, MySQL, PostgreSQL).

    PROBLEM: Because the server has both Tomcat JSP/Java and Oracle 10g running on it, with no caching, I have issues with the server going down. Often, sadly.

    QUESTION: What are my options in terms of improving the performance of all these different apps?

    - Connection pooling (see the sketch after this list)? For example, in the PostgreSQL world we have PgBouncer, which really helps things. Does Oracle have something similar? Or is there a well-known Java-based external pooler that people use in production environments? (I'm not familiar with Java.)
    - Any "SQL cache" as in the MySQL and PostgreSQL world?
    - Any other kind of application cache, like "APC" or "eAccelerator" in the PHP world? The "OSCache" stuff from the Java world (a JSP thingie I found on Google: http://onjava.com/pub/a/onjava/2005/01/05/jspcache.html?page=2)...
    - What else?

    Sorry if this is a noob question. I have googled and googled, but the problem is I don't know what to google for, other than the broad general concepts above. So if not full answers, I would even appreciate basic pointers, and I am happy to JFGI myself. Thanks!
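
    Regarding the connection-pooling question: Tomcat itself can pool JDBC connections to Oracle via a JNDI DataSource, so a separate external pooler is not strictly required. This is only an illustrative snippet (resource name, credentials and pool sizes are placeholders; the attribute names match the commons-dbcp pool used by Tomcat 5.5/6):

        <!-- conf/context.xml -->
        <Resource name="jdbc/erp"
                  auth="Container"
                  type="javax.sql.DataSource"
                  driverClassName="oracle.jdbc.OracleDriver"
                  url="jdbc:oracle:thin:@localhost:1521:ORCL"
                  username="app_user"
                  password="changeme"
                  maxActive="20"
                  maxIdle="5"
                  maxWait="10000"/>

    Servlets would then look the pool up under java:comp/env/jdbc/erp instead of opening their own connections, which keeps the number of Oracle sessions bounded.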

    Read the article

  • How should I configure backup of my server?

    - by ed209
    I have just rented a dedicated server. If it helps, this is the config I have:

        CPU1: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (Cores 8)
        RAM: 15975 MB
        Disk /dev/sda doesn't contain a valid partition table
        Disk /dev/sdc doesn't contain a valid partition table
        Disk /dev/sdb doesn't contain a valid partition table
        Disk /dev/sda: 120.0 GB (=> 114 GiB)
        Disk /dev/sdc: 3000.6 GB (=> 2861 GiB)
        Disk /dev/sdb: 3000.6 GB (=> 2861 GiB)

    /dev/sda is a 120GB SSD. This is where I have Ubuntu/LAMP installed; it's the drive that will run my site. With the account I got two other drives of 3000GB each, which I really don't need, but they came with the account. I figured I could use these to back up my main 120GB drive. So a couple of things I wondered were:

    - Should I use these for backups?
    - How should I back up?

    The data I want to back up is a user uploads directory full of images, and the database. Everything else is either in a code repo or backed up some other way. For example, it would be nice to know there is a disk image of the 120GB drive somewhere that I can copy over should there be any problems, but equally I don't mind doing a fresh install of all the software and copying over just the images and database dump. Thanks for your advice! (Also, I'm happy not to use the two other drives and back up elsewhere if that's more sensible.)
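
    As one possible shape for this (my sketch, not from the post: mount point, database name and paths are assumptions), one of the large drives could be mounted for backups and a nightly cron job could dump the database and mirror the uploads directory:

        #!/bin/sh
        # /etc/cron.daily/site-backup  -  sketch only
        # dump the database compressed, keeping one file per day
        mysqldump --single-transaction mydb | gzip > /mnt/backup/db/mydb-$(date +%F).sql.gz
        # mirror the user uploads directory onto the backup drive
        rsync -a --delete /var/www/uploads/ /mnt/backup/uploads/

    A full disk image of the 120GB SSD is a separate decision; the snapshot above only covers the two data sets the poster says actually need protecting.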

    Read the article

  • Apache LocationMatch does not work for group

    - by dma_k
    I would like to configure Apache to proxy mldonkey running at localhost. Initially I used the following configuration:

        <IfModule mod_proxy.c>
            <LocationMatch /(mldonkey|bittorrent)/>
                ProxyPass http://localhost:4080/
                ProxyPassReverse http://localhost:4080/
            </LocationMatch>
        </IfModule>

    and it didn't work! error.log reads

        [error] [client 192.168.1.1] File does not exist: /var/www/mldonkey

    which means that Apache does not intercept the URL. However, when I change the regexp to the following:

        <LocationMatch /mldonkey/>

    it starts to work (i.e. mod_proxy functions OK). I have tried the following alternatives:

        <LocationMatch ^/(mldonkey|bittorrent)/>
        <LocationMatch ^/(mldonkey|bittorrent)/.*>
        <LocationMatch ^/(mldonkey|bittorrent)>
        <LocationMatch /(mldonkey|bittorrent)>
        <LocationMatch "^/(mldonkey|bittorrent)/">
        <LocationMatch "/(mldonkey|bittorrent)">
        <LocationMatch "/(mldonkey)">
        <LocationMatch "/(mldonkey)/">

    with no positive result. I am stuck. Please give me a hint where to look.

    P.S. Apache Server 2.2.19.
    P.P.S. I would be happy if <LocationMatch> would work, without using the heavy artillery of mod_rewrite.
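
    Not from the post, but as a point of comparison: on Apache 2.2 the same regex-based proxying is often written with ProxyPassMatch at server or vhost level, which side-steps the LocationMatch block entirely. A rough sketch (the port comes from the post's config, the rest is assumption):

        <IfModule mod_proxy.c>
            ProxyPassMatch ^/(mldonkey|bittorrent)/(.*)$ http://localhost:4080/$2
            ProxyPassReverse /mldonkey/ http://localhost:4080/
            ProxyPassReverse /bittorrent/ http://localhost:4080/
        </IfModule>

    Whether this matches the poster's intended URL mapping (both prefixes collapsing onto the same backend root) would need testing, so treat it as an illustration of the directive rather than a verified fix.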

    Read the article

  • Creating a private wiki

    - by Hand-E-Food
    I want to create a simple, private wiki, but am really struggling to find what I need. I require the following features:

    - Private wiki. Only I will read or write it.
    - Some formatting capability: headings, bold, italic, bullets, block quotes.
    - Wiki viewer for Windows 7. If it comes with an editor, I need to be able to hide it.
    - Page editor for Windows 7.
    - Page editor for iPhone.
    - Synchronized via the cloud, but available offline on Windows.

    So far, my research has led me to the Markdown language. I can easily edit this as plain text using Notepad++ for Windows and Elements for iPhone, and I can sync these files through Dropbox and have them available offline. What I can't find is a suitable viewer for Windows. I'd prefer to steer away from using HTML due to its verbose formatting codes. Can anyone recommend a solution for me? If need be, I'll be happy to make a small one-off payment for software.
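
    For reference, the formatting requirements listed above map directly onto plain Markdown, which is part of why it came up in the research; a note edited in Notepad++ or Elements might simply look like this (contents are an invented placeholder):

        # Project notes

        ## Servers

        **Important:** the *staging* box is rebuilt weekly.

        - backups run nightly
        - check disk space on Mondays

        > Block quotes work too.

    Any Markdown-aware viewer on Windows would then only need to render files like this one read-only.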

    Read the article

  • How to set up multi users on dev server with git and github

    - by Derek Organ
    I'm working on a LAMP application. We have two servers (Debian), Live and Dev. I constantly work on Dev to add new features and fix bugs, and when I'm happy that all works well I scp the relevant code to the Live system. The database (MySQL) is local to each machine. This is a pretty basic setup really, and I want to improve the workflow a bit.

    I use git and GitHub for version control. Admittedly I've only really used one branch. There can be three different developers who work on the code at different times. We all use the same Linux username to connect to the dev server and edit the code directly when needed. I usually then commit and push the code to GitHub at the end of the day. One thing to bear in mind is that it isn't easy to run this code on a local machine, as there are many Apache and subdomain configurations that wouldn't work locally, so it is important to work on the dev server and not locally.

    I need to create a new process, because we now need a main trunk plus a branch with a big code rewrite. What is the best way to do this? Should I create different Unix logins for each developer and set up different working areas on the dev server for their changes? For example:

        /var/www/mysite_derek
        /var/www/mysite_paul
        /var/www/mysite_mike

    My thinking is that they can do a pull from the main branch, then create their own branch and merge it back in. I'm not sure how this will work, though, with git locally and with GitHub. Will I need to create different GitHub user accounts as well? I'd like to do this the 'right' way and future-proof it for having lots of potential developers, but I also don't want to over-complicate it. A simple and elegant solution is preferred. Any recommendations or suggestions?
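
    A minimal sketch of the per-developer flow being considered (repository URL and branch names are placeholders; it assumes each developer gets their own Unix login and working copy on the dev server):

        # one-time, per developer: clone the shared GitHub repo into a personal area
        git clone git@github.com:example/mysite.git /var/www/mysite_derek
        cd /var/www/mysite_derek

        # start work on the big rewrite in its own branch
        git checkout -b rewrite
        # ...edit, test against the dev server's Apache config...
        git commit -am "Rework feature X"
        git push -u origin rewrite

        # later, merge the rewrite back into the main branch
        git checkout master
        git pull origin master
        git merge rewrite
        git push origin master

    GitHub access can be handled either with one account per developer added as collaborators, or (less cleanly) a shared account; the git commands themselves do not change.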

    Read the article

  • Compiz & Linux compositing: how does it fit into the X architecture?

    - by Latanius
    Not a really "how to solve stuff" question, but... I was wondering how the modern X architecture works, with compiz & all. What I know about it: in the beginning, there was the X server, clients connected (presumably on TCP), and then sent messages to the server to instruct it to show windows etc. because this didn't work (at all? or just fast enough?) for OpenGL & 3D acceleration, additional APIs were created for direct rendering (DRI? and, in addition to the X server, what things did the X clients talk to to render stuff and through what interfaces?) and, finally, enter Compiz: X clients end up (somehow) rendering to OpenGL textures, which is then put together to form a fancy-looking screen with translucent windows, and rendered to the screen. What I'm especially interested in is what components does the system have and how do they connect to each other? Like... if there is a box labelled "compiz" in the system... is it inside the X server? If it's not, how do the rendered images from the apps end up in it? And where does it render to? Is that another X server? Or DRI? Of course, I'd be equally happy if pointed to some docs capable of clearing up the confusion described above (conditional on they being significantly shorter than book-sized entities).

    Read the article

  • Working with an external button box

    - by Scott
    I tried this question on Stack Overflow, but I was pointed here, so here goes. For a new personal project, I am looking for a way to (for example) open a pop-up window on my laptop by pressing a button on an external device (to be built by myself, or at least bought) connected via USB. Basically I would be looking at something like an Arduino or Raspberry Pi (IF I am looking in the right direction) with buttons on it, and as soon as I hit a button on the external box, a command activates on my laptop and, for example, opens a pop-up window in which I can input text. Does anyone know:

    1) If it is possible to do this at all.
    2) What equipment is needed for the external box, and what programming is needed. I prefer .NET, but maybe it can only be done with software on the external box.

    If anyone can point me in the right direction, like a make/model of the external box or websites, I would be very happy. I have knowledge of Visual Studio/.NET but I am willing to learn other languages if .NET is not an option for this project. Thanks in advance, Scott.

    PS: If anyone knows of some better tags, or at least knows what I mean and needs me to edit the question, please do tell me... I am new on Stack Overflow/Super User.

    Read the article

  • Convert Public Folder to Shared Mailbox

    - by Lilienthal
    Due to a change in company policy, all existing Public Folders (PF) have to be phased out in favour of shared mailboxes. Unfortunately, they don't seem to have any procedures or guidelines for this migration and I can't find much online either. I've already migrated one of our public folders so far as a sort of test case. Because we still use Exchange 2003, we can't create real shared mailboxes as we would in 2007 or 2010 (With New-Mailbox -Shared ... in the Exchange Shell). Instead, I simply created a new account on the AD and assigned it a mailbox. I then set the PF's permissions to read-only to keep it in a consistent state and copied the entire folder to a local PST in Outlook 2010, from which the folder was in turn copied to the new mailbox. Permissions and Folder Visible were set for all users and the migration was successful. While this works, the whole procedure feels very hackish to me and not at all efficient. I'd welcome some input on automating or at least streamlining the process. Additionally, we are unsure of what to do with our mail-enabled Public Folders. Several of these are nested under other PFs, some of which are also mail-enabled. Preserving folder structure is a key requirement and this seems impossible at first glance. I've considered creating dummy accounts for all the email addresses from our mail-enabled PFs and then setting up automated rules to forward messages to a subfolder of the new shared mailboxes, but I am not familiar enough with Exchange to know if this is even possible. Further points of concern are the Calendars and Contact lists in our public folders. I suppose I'll be forced to create new mailboxes for every one of these we have as well, then set up share permissions for their Calendar and Contact items, but would be happy to be proven wrong.

    Read the article
