Search Results

Search found 24037 results on 962 pages for 'every'.

  • Bash Script to Compress / Transfer / Remove Log Files

    - by Jason
    I am currently using chronolog to set log file names for Apache with the date. They are in the following format:

        /WEB/LOGS/APACHE_ACCESS_YYYY-MM-DD.log
        /WEB/LOGS/APACHE_ERROR_YYYY-MM-DD.log

    I would like to have a script that runs on the first of every month and compresses the log files from the previous month, transfers them to another host (via SCP) and then deletes the compressed file.

        find . -name '*.log' -mtime +1 -type f

    I've found several examples like the one above that allow you to select files x days old, but I need all files from the previous month. I am the first to admit my bash scripting skills are weak, so I would really appreciate any help and guidance.
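
    A rough sketch of what such a monthly job could look like, run from cron on the 1st. The remote host, destination path and log directory are placeholders, and it assumes GNU date (for "last month") plus key-based SCP, so adjust before trusting it with real logs:

        #!/bin/bash
        # Hypothetical /usr/local/bin/rotate-apache-logs.sh, run by cron on the 1st of each month
        LOGDIR=/WEB/LOGS
        REMOTE=user@archive-host:/ARCHIVE/APACHE      # placeholder destination
        PREV=$(date -d "last month" +%Y-%m)           # e.g. 2012-05 (GNU date)

        cd "$LOGDIR" || exit 1
        ARCHIVE="apache_logs_${PREV}.tar.gz"

        # Collect every access/error log whose date stamp falls in the previous month
        tar -czf "$ARCHIVE" APACHE_ACCESS_${PREV}-*.log APACHE_ERROR_${PREV}-*.log || exit 1

        # Copy the archive off-box, then remove the local copy of the archive
        scp "$ARCHIVE" "$REMOTE" || exit 1
        rm -f "$ARCHIVE"
        # Optionally also remove the raw logs once the transfer has succeeded:
        # rm -f APACHE_ACCESS_${PREV}-*.log APACHE_ERROR_${PREV}-*.log

    A crontab entry along the lines of "0 2 1 * * /usr/local/bin/rotate-apache-logs.sh" would fire it at 02:00 on the first of each month.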

  • Updating Cisco VPN config to add vpnc support

    - by Igor Kuzmitshov
    I have a Cisco 1841 configured for VPN connections of two types:

    1. Peer-to-peer for partners' routers (IPsec): using a different crypto isakmp key and a crypto map entry with set peer, set transform-set and match address for every peer (same map name, different priorities). That crypto map name is applied to the WAN interface.
    2. Client access (PPTP): using a vpdn-group with accept-dialin protocol pptp.

    Now, a new partner wants to connect using the vpnc client. The latter needs an IPSec ID (group name) and an IPSec secret in addition to username and password. I guess that the IPSec secret is the pre-shared key that can be specified in crypto isakmp key on the Cisco, but I could not find any VPN tutorials involving groups. Hence, my questions:

    1. How do I add an IPSec ID (group name) and IPSec secret on the Cisco router for vpnc connections?
    2. Should I add a new crypto map matching all addresses as well?
    3. Is it possible to add this configuration without breaking the existing setup?

    Thank you.
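
    For reference, the usual way to hand a group name and group secret to vpnc-style clients on IOS is the "Easy VPN server" pattern: an isakmp client configuration group plus a dynamic entry appended to the existing crypto map under a high sequence number, so the static peer entries keep working. This is only a rough sketch; every name below (group, pool, AAA lists, transform set) is a placeholder, EXISTING-MAP stands for the crypto map already applied to the WAN interface, and the exact commands depend on the IOS feature set, so verify against the 1841's documentation:

        aaa new-model
        aaa authentication login VPN-XAUTH local
        aaa authorization network VPN-GROUPS local
        username partneruser secret PartnerXauthPassword
        ip local pool VPNC-POOL 192.168.100.1 192.168.100.10
        !
        ! The group name is the "IPSec ID" and the key is the "IPSec secret" vpnc asks for
        crypto isakmp client configuration group PARTNER-GROUP
         key PartnerIpsecSecret
         pool VPNC-POOL
        !
        crypto ipsec transform-set VPNC-TS esp-3des esp-sha-hmac
        crypto dynamic-map VPNC-DYN 10
         set transform-set VPNC-TS
         reverse-route
        !
        crypto map EXISTING-MAP client authentication list VPN-XAUTH
        crypto map EXISTING-MAP isakmp authorization list VPN-GROUPS
        crypto map EXISTING-MAP client configuration address respond
        crypto map EXISTING-MAP 65535 ipsec-isakmp dynamic VPNC-DYN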

  • Does Exchange 2010 lift the restriction that DL addresses must be in Active Directory?

    - by Justin Grant
    We'd like to enable end-users to create and maintain their own email distribution lists in Exchange 2010, where those lists may include users inside the company but also customers, partners, etc. who are outside the company. One of the limitations in Exchange 2007 (see this question) was that any member of a DL had to have an entry in Active Directory. You couldn't just take a group of email addresses (both inside and outside the company) and create an Exchange DL with those addresses without involving Active Directory admins to create entries for each external user. For a company creating hundreds of small mailing lists every month, this was an unacceptable IT expense, so we had to use a separate mailing list solution (GNU Mailman) for DLs which included external users. Is this limitation relaxed in Exchange 2010, so we can throw away GNU Mailman and use Exchange instead?

  • How can I run this script on startup, restart, and shutdown?

    - by Exeleration-G
    I'm using Ubuntu 11.10. I've written a script that synchronises a directory in ~ with a directory on /dev/sda4, using Unison. Before, I had this script running every five minutes with no problems, using crontab. Right now, I want to execute this script at startup, restart and shutdown only. This is what the script looks like:

        #!/bin/bash
        unison -perms 0 -batch "/mnt/Data/Syncfolder/" "/home/myname/Syncfolder/"

    My crontab configuration was as follows:

        # m h dom mon dow command
        0,5,10,15,20,25,30,35,40,45,50,55 * * * * sh /usr/local/bin/s4lj.bash

    Note that I copied the script from ~ to /usr/local/bin/ first, to avoid root problems. I've read "How to execute script on shutdown?" and "How to write an init script that will execute an existing start script?". After doing that, I've done this:

    1. I've made s4lj.bash executable, and then copied it to /etc/init.d/.
    2. For startup, I've made a symlink in /etc/rc2.d/ to /etc/init.d/s4lj.bash, and renamed it to S70s4lj.bash.
    3. For restart, I've made a symlink in /etc/rc6.d/ to /etc/init.d/s4lj.bash, and renamed it to K70s4lj.bash.
    4. For shutdown, I've made a symlink in /etc/rc0.d/ to /etc/init.d/s4lj.bash, and renamed it to K70s4lj.bash.

    Still, the script won't be run in any of these situations. How can I make the script get executed? I'd be happiest with a proper *.conf file in /etc/init. Thanks in advance.
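
    Since the question asks for an /etc/init job: on Ubuntu 11.10 an Upstart job file can run a command both when a runlevel is entered (boot) and when it is left (reboot/shutdown). The following is only a sketch; the file name is arbitrary, and whether /mnt/Data is still mounted by the time the pre-stop part runs during shutdown depends on ordering, so test before relying on it:

        # /etc/init/s4lj.conf  (hypothetical Upstart job)
        description "Unison sync of ~/Syncfolder at boot and at shutdown/reboot"

        start on runlevel [2345]
        stop on runlevel [016]

        # No main process: the job just marks a state, runs the sync when the
        # state is entered, and runs it again when the state is left.
        pre-start exec /usr/local/bin/s4lj.bash
        pre-stop exec /usr/local/bin/s4lj.bash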

  • What is a widely accepted term for a string variable that would probably contain a file path and file name?

    - by Peter Turner
    For functions that need to index files in a directory and rename them FileName0001, FileName0002, etc., I often need to write a function that splits the file name from the file path and renames the file. When I put the file name and file path back together, I don't have a very good name for the variable that contains both of them, and I usually just wind up concatenating them every time I want to use them (usually passing them as parameters to functions labeled either filename or filepath), so I never really know what I'm doing until I notice a lot of files being written in the same directory as my binaries. Anyway, what do I call a file name and a file path together? I don't want to call it File, because that usually means the binary information behind the file. I don't want to call it URI, because that usually means I've got some sort of protocol, which I don't. I just want a good way to denote "c:\somedir\somedir\somedir\somefile.txt" so as to deconfuse this mess I've just realized I'm in. Please don't just list your personal preference. I think an excellent answer should "cite its sources" (as in, provide a link to a repository with a good example of the code being used as I described).

  • Memory concerns while plotting escape from DLL Hell in Delphi

    - by Peter Turner
    I work on a program with about 50 DLLs that are loaded from one executable. It's an old, organically grown program where the only rationale for creating a new DLL is that one previously didn't exist to fill a given need (and namespaces didn't exist in Delphi, so it never crossed our minds to make dll1.main.pas, dll2.main.pas or something even more unique). What we want to do is consolidate all these DLLs into one executable; since none of them are used outside the program, there shouldn't be much of a problem. The concern my boss has is that if we did this, the memory overhead for terminal server clients would go through the roof. I've stepped through enough initialization code to know that lots of stuff is done every time a DLL is loaded into memory. But say I've got a project with about 4000 files and 50 DLLs, 10 of which are probably utilized by any one user in any one session of the program. The 50 DLLs are about two-thirds form files, if not more, but beyond that there's not a lot of other resources being loaded (only a few embedded pictures, icons, cursors, etc.). If I loaded all these files into memory, how much memory is used per unit? How much is used per class? How do I keep the overhead down? And what is the biggest project one can reasonably expect to build with Delphi? This tidbit won't help with answering, but I think it might clarify what my boss is worried about: we currently start our program at about 18 MB, normal working conditions are usually less than 40 MB, and he thinks it could climb as high as 120 MB.

  • Python velocity control of the player, why doesn't this work?

    - by Dominic Grenier
    I have the following code inside a while True loop:

        if abs(playerx) < MAXSPEED:
            if moveLeft:
                playerx -= 1
            if moveRight:
                playerx += 1
        if abs(playery) < MAXSPEED:
            if moveDown:
                playery += 1
            if moveUp:
                playery -= 1
        if moveLeft == False and abs(playerx) > 0:
            playerx += 1
        if moveRight == False and abs(playerx) > 0:
            playerx -= 1
        if moveUp == False and abs(playery) > 0:
            playery += 1
        if moveDown == False and abs(playery) > 0:
            playery -= 1
        player.x += playerx
        player.y += playery
        if player.left < 0 or player.right > 1000:
            player.x -= playerx
        if player.top < 0 or player.bottom > 600:
            player.y -= playery

    The intended result is that while an arrow key is pressed, playerx or playery increments by one on every loop until it reaches MAXSPEED and stays at MAXSPEED, and that when the player stops pressing that arrow key, his speed decreases until it reaches 0. To me, this code explicitly says that... But what actually happens is that playerx or playery keeps incrementing regardless of MAXSPEED and continues moving even after the player stops pressing the arrow key. I keep rereading but I'm completely baffled by this weird behavior. Any insights? Thanks.

  • My NetGear router suddenly started showing limited access for all wifi connected laptops

    - by Yasser
    I have a Netgear N300 router which I installed about 6 months back. Here is how the setup is: I have a local internet provider by the name of "Hathway"; their modem is connected to the router, and a wire from the router is connected to my desktop. (The setup is as shown in the pic below, except that I use a desktop rather than a laptop; the rest of the connections are the same.) With this connection and the configuration below, everything worked fine. The desktop would work, and all my laptops and mobile devices would connect and be able to access the internet without any problem. Now suddenly, since yesterday (with no changes made whatsoever to this config), all my laptops have started showing the "limited access" message and cannot connect to the internet. However, the desktop which is connected by wire can still access the internet. Can someone please guide me on this?

  • Outputting SVN hook messages

    - by Luke Segars
    Hi all, I have a Subversion repository on my Linux machine that is set up to export a new build of a project every time a new commit occurs, using a post-commit hook. I would really like to be able to provide an output message to the committer containing some status information once the hook completes. Is it possible to redirect the output of the hook so that it comes after the standard commit messages? For example:

        owner@dev-machine:/working/dir$ svn commit
        Sending        FILE1
        Sending        FILE2
        Transmitting file data ..
        Committed revision 13.
        Exporting project...
        Successfully exported to mysite.com

    The addition of the last two lines is the functionality I'm looking for.
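
    For what it's worth, a post-commit hook receives the repository path and the new revision number as its two arguments, and text the hook writes to stderr can be relayed back to the committing client; exactly when the client displays it depends on the Subversion version and the hook's exit code, so treat this as a sketch. The export path and site name are placeholders:

        #!/bin/sh
        # Hypothetical REPOS/hooks/post-commit sketch
        REPOS="$1"
        REV="$2"

        echo "Exporting project..." >&2

        # Export the newly committed tree to the web root
        svn export --force -q -r "$REV" "file://$REPOS" /var/www/mysite

        # Many clients show post-commit stderr after the usual
        # "Committed revision N." output.
        echo "Successfully exported revision $REV to mysite.com" >&2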

  • Is there a way to disable the hardware on/off switch for the Wireless interface?

    - by avee
    I have an HP 520 running the latest Ubuntu 11.10. The hardware works fine with Ubuntu, with one exception: the device has a hardware switch for turning the wifi on and off. Every time the wifi is disabled through the hardware switch, I am unable to bring it on again; the message on the networking popup is "device not ready". What I am looking for is a way to disable the hardware switch altogether, so that when users accidentally press the button the wifi is not disabled. There is no setting to disable the switch in the BIOS.

    Hardware info from lspci -nn:

        00:00.0 Host bridge [0600]: Intel Corporation Mobile 945GME Express Memory Controller Hub [8086:27ac] (rev 03)
        00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile 945GME Express Integrated Graphics Controller [8086:27ae] (rev 03)
        00:02.1 Display controller [0380]: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller [8086:27a6] (rev 03)
        00:1b.0 Audio device [0403]: Intel Corporation N10/ICH 7 Family High Definition Audio Controller [8086:27d8] (rev 01)
        00:1c.0 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 1 [8086:27d0] (rev 01)
        00:1c.1 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 2 [8086:27d2] (rev 01)
        00:1d.0 USB Controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 [8086:27c8] (rev 01)
        00:1d.7 USB Controller [0c03]: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller [8086:27cc] (rev 01)
        00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev e1)
        00:1f.0 ISA bridge [0601]: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge [8086:27b9] (rev 01)
        00:1f.2 IDE interface [0101]: Intel Corporation 82801GBM/GHM (ICH7 Family) SATA IDE Controller [8086:27c4] (rev 01)
        02:06.0 CardBus bridge [0607]: ENE Technology Inc CB1410 Cardbus Controller [1524:1410] (rev 01)
        02:08.0 Ethernet controller [0200]: Intel Corporation 82562ET/EZ/GT/GZ - PRO/100 VE (LOM) Ethernet Controller Mobile [8086:1068] (rev 01)
        10:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection [8086:4222] (rev 02)

    The output from lsmod | grep iwl:

        iwl3945                73329  0
        iwl_legacy             71499  1  iwl3945
        mac80211              272785  2  iwl3945,iwl_legacy
        cfg80211              172392  3  iwl3945,iwl_legacy,mac80211
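
    One thing worth checking from a terminal is whether the switch produces a "hard" or a "soft" block: rfkill can clear a soft block, but a hard block (the physical switch itself) cannot be overridden from software, which is why a BIOS or firmware-level option is usually needed. A quick sketch, assuming the rfkill utility is installed:

        rfkill list                # shows each radio and whether it is soft- or hard-blocked
        sudo rfkill unblock wifi   # clears a soft block; has no effect on a hard block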

  • Is this method pure?

    - by Thomas Levesque
    I have the following extension method:

        public static IEnumerable<T> Apply<T>(
            [NotNull] this IEnumerable<T> source,
            [NotNull] Action<T> action)
            where T : class
        {
            source.CheckArgumentNull("source");
            action.CheckArgumentNull("action");
            return source.ApplyIterator(action);
        }

        private static IEnumerable<T> ApplyIterator<T>(this IEnumerable<T> source, Action<T> action)
            where T : class
        {
            foreach (var item in source)
            {
                action(item);
                yield return item;
            }
        }

    It just applies an action to each item of the sequence before returning it. I was wondering if I should apply the Pure attribute (from the ReSharper annotations) to this method, and I can see arguments for and against it.

    Pros:

    - Strictly speaking, it is pure; just calling it on a sequence doesn't alter the sequence (it returns a new sequence) or make any observable state change.
    - Calling it without using the result is clearly a mistake, since it has no effect unless the sequence is enumerated, so I'd like ReSharper to warn me if I do that.

    Cons:

    - Even though the Apply method itself is pure, enumerating the resulting sequence will make observable state changes (which is the point of the method). For instance, items.Apply(i => i.Count++) will change the values of the items every time it's enumerated. So applying the Pure attribute is probably misleading...

    What do you think? Should I apply the attribute or not?

  • GParted in UBUNTU shows entire disk as UNALLOCATED SPACE

    - by msPeachy
    Good day to everyone. I hope someone can help me with my problem. I have a dual-boot Windows and Ubuntu system. I recently encountered an "hd0 out of disk" error and wasn't able to boot Ubuntu. So I booted into Windows; after 2 or 3 rounds of booting and rebooting Windows, I tried booting Ubuntu but still got the "hd0 out of disk" error. I decided to run Ubuntu from a live USB to try to fix my Ubuntu partition using GParted, but when I run GParted, it shows my entire disk as UNALLOCATED SPACE! The strange thing is that Nautilus still shows and mounts my partitions. Also, every time I boot into Windows, my partitions exist and I am able to read and write to them. I have no idea what is wrong. Please help! I can't stand using Windows, since most of the tools I use are in Ubuntu. I don't mind reinstalling Ubuntu. In fact, I already tried reinstalling from the live USB, but I wasn't able to, since GParted and the Ubuntu installer itself do not recognize my partitions and show the entire disk as unallocated space. I am currently running Ubuntu from the live USB. Here's the output of sudo fdisk -l:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb30ab30a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   104869887    52433920   83  Linux
        /dev/sda2       104869888   105074687      102400    7  HPFS/NTFS/exFAT
        /dev/sda3       105074688   156149759    25537536    7  HPFS/NTFS/exFAT
        /dev/sda4       156151800   625153409   234500805    f  W95 Ext'd (LBA)
        /dev/sda5       156151808   169156591     6502392   82  Linux swap / Solaris
        /dev/sda6       169158656   294991871    62916608    7  HPFS/NTFS/exFAT
        /dev/sda7       294993920   471037944    88022012+   7  HPFS/NTFS/exFAT
        /dev/sda8       471041928   625121152    77039612+   7  HPFS/NTFS/exFAT

    When I run sudo parted -l, I get this error message:

        ubuntu@ubuntu:~$ sudo parted -l
        Error: Can't have a partition outside the disk!

  • Wubi shows error

    - by Quirk
    I tried installing Ubuntu 12.04 using Wubi and it just doesn't work, every time, without fail. I had the following scenarios:

    1. I downloaded only wubi.exe and ran it. The Wubi installer started downloading the amd64 ISO using a torrent, but when there were just about 40 seconds left to download, it showed "Error 404: File not found".
    2. I downloaded the ISO file separately and put it in the same folder as wubi.exe. Now there are two cases:
       a. Offline: Wubi says it could not download the metalink file and hence cannot download the ISO. So I downloaded the meta files separately and placed them in the same directory; Wubi shows the same error again.
       b. Online: Wubi works the same way as in case 1, and the same problem occurs as in case 1.
       In short, Wubi doesn't recognize the already-downloaded ISO in the directory at all.
    3. I burned the ISO to a CD and ran it. The same thing occurs as in case 2.

    Just in case it's relevant: I installed SP3 for Windows XP just before using Wubi. Windows is running alright, but is it possible that it's causing conflicts for Wubi?

  • Take ownership in Windows 7/8 does not work

    - by John
    I have taken ownership of my external HDD and now I should be able to access every folder, but I can't. There are numerous folders that I don't have ownership of. I tried taking ownership of those, but after I close and reopen an Explorer window the ownership resets itself to what it was before. For example, I have a folder whose owner now shows as S-1-5-21-95661877-3860777391-1413521220-1000, and a subfolder of it whose owner shows as "Unable to display current owner". If I take ownership of the subfolder, it shows me as the owner, but I still can't access it; I get "You don't currently have permission ...".
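
    If the GUI keeps losing the change, taking ownership and re-granting permissions recursively from an elevated command prompt sometimes sticks better. This is only a sketch; the drive letter, folder and account name are placeholders, and the /r and /t switches make both commands recurse into every subfolder:

        takeown /f "X:\SomeFolder" /r /d y
        icacls "X:\SomeFolder" /grant "YourUserName":(OI)(CI)F /t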

  • Cannot find Ruby on Rails installed

    - by James
    I've managed to install Ruby and the gem system (via RVM?), but now I'm stuck actually installing Ruby on Rails. Every time I execute

        gem install rails

    the terminal says that it's fetching each file and that it installed successfully:

        1 gem installed

    However, when I then run the rails command, I'm told that it's not installed and to run the gem install rails command again. I've attempted to install with sudo, but the same thing happens. I've restarted after an install and that hasn't worked either. Ideas?
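
    A few quick checks that often explain this when RVM is involved (an assumption here): gems installed under one Ruby are not visible to another, and installing with sudo typically puts them under the system Ruby rather than the RVM one. A sketch, with the Ruby version as a placeholder:

        which ruby && which rails      # do both point inside ~/.rvm, or does rails come up empty?
        ruby -v
        gem env gemdir                 # where "gem install rails" actually put the gem
        rvm list                       # which Rubies RVM knows about
        rvm use 1.9.3 --default        # select an RVM Ruby (version is a placeholder)
        gem list rails                 # is rails listed for the currently selected Ruby?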

  • fail2ban and denyhosts constantly ban me on Ubuntu

    - by Trey Parkman
    I just got an Ubuntu instance on Linode. To secure the SSH on it, I installed fail2ban (using apt-get), but then had a problem: fail2ban kept banning my IP (for limited durations, thankfully) even though I was entering the correct password. So I removed fail2ban and installed denyhosts instead. Same problem, but more severe: It seems like every time I SSH in, my IP gets banned. I remove it from /etc/hosts.deny, restart denyhosts and log in again, and my IP gets banned again. The only explanation I can think of is that I've been SSH-ing in as root (yes, yes, I know); maybe something is set somewhere that blocks anyone who SSH-es in as root, even if they log in successfully? This seems bizarre to me. Any ideas? (Whitelisting my IP is a temporary fix. I don't want to only be able to log on from one IP.)

  • Webcast Tomorrow: Securing the Cloud for Public Sector

    - by Darin Pendergraft
    Securing the Cloud for Public Sector

    Click here to register for the live webcast.

    Cloud computing offers government organizations tremendous potential to enhance public value by helping organizations increase operational efficiency and improve service delivery. However, as organizations pursue cloud adoption to achieve the anticipated benefits, a common set of questions has surfaced: "Is the cloud secure? Are all clouds equal with respect to security and compliance? Is our data safe in the cloud?" Join us December 12th for a webcast, part of the "Secure Government Training Series", to get answers to your pressing cloud security questions and learn how to best secure your cloud environments. You will learn about a comprehensive set of security tools designed to protect every layer of an organization's cloud architecture, from application to disk, while ensuring high levels of compliance, risk avoidance, and lower costs. Discover how to control and monitor access, secure sensitive data, and address regulatory compliance across cloud environments by:

    - providing strong authentication, data encryption, and (privileged) user access control to ensure that information is only accessible to those who need it
    - mitigating threats across your databases and applications
    - protecting applications and information, no matter where it is, at rest, in use and in transit

    For more information, access the Secure Government Resource Center or, to speak with an Oracle representative, please call 1.800.ORACLE1.

    LIVE Webcast: Securing the Cloud for Public Sector
    Date: Wednesday, December 12, 2012
    Time: 2:00 p.m. ET

    Visit the Secure Government Resource Center: click here for information on enterprise security solutions that help government safeguard information, resources and networks. ACCESS NOW

    Copyright © 2012, Oracle. All rights reserved. Contact Us | Legal Notices | Privacy Statement

  • SQLAuthority News – Microsoft Whitepaper – AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas

    - by pinaldave
    SQL Server 2012 has many interesting features, but the most talked-about feature is AlwaysOn. Performance tuning is always a hot topic; I see a lot of need for it and a lot of business around it. However, many times when people talk about performance tuning they think of it as either query tuning, performance tuning, or server tuning. All are valid points, but a performance tuning expert usually understands the business workload and business logic before making suggestions. For example, if a performance tuning expert analyzes the workload and realizes that there are plenty of reports as well as read-only queries on the server, they can certainly consider alternate options. If read-only data is not required in real time, or slightly delayed data is acceptable, it makes sense to divide the workload. A secondary replica of the original data, which can serve all the read-only queries and reports, is a good idea in most cases where there is plenty of workload that does not depend on real-time data. SQL Server 2012 has introduced the AlwaysOn feature, which fits this scenario very well and provides a solution for read-only workloads. Microsoft has recently announced a white paper on exactly this subject. I recommend it to every SQL enthusiast who is going to implement a solution to offload read-only workloads to secondary replicas. Download the white paper: AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: AlwaysOn

  • What type of career path / jobs for a developer to have best work life balance?

    - by programmx10
    I know some people may look down on a question like this, but I've been thinking a lot lately about what direction I can take my career in to have a good work-life balance. I have been working for a startup where the hours tend to drag on, and I find it often drains the life out of me. I have been going to interviews, and some of the other companies are also startups or new companies that seem to have a similar attitude about working long hours. Maybe it's the technologies I use, or the type of development, I don't know, but I'm curious whether anyone can offer advice on a path that lets me work as a programmer or developer for a company that respects a regular work week and only rarely needs to go beyond it. I realize this won't lead to being the highest paid in my field, but I'm OK with that and feel the trade-off would be worth it, as it would also give me time for my own projects. I know some people may say this is too general, but I believe it is a programmer-specific question: there tends to be a higher-than-average rate of overtime and of people working in "startup" venture situations compared with many other fields, and there is definitely a mindset among a lot of people in this field of working long hours that doesn't exist in every industry.

  • What is a 'best practice' backup plan for a website?

    - by HollerTrain
    I have a website which is very large and has a large user base. I am trying to think of a best-practice way to create a backup or mirror website, so that if something happens on domain.com, I can quickly point the site to backup.domain.com via a 302 redirect. This would give me time to troubleshoot domain.com while everyone is viewing backup.domain.com without knowing the difference. Is my method the ideal method, or have you enacted better methods for creating a backup site? I don't want to have the site go down and then get yelled at every minute while I'm trying to fix it. Ideally I would just flip the switch and it would redirect the user to a backup. Any insight would be greatly appreciated.

  • "Failed to create swap space" error during installation

    - by Welsh Heron
    I've been trying to install Ubuntu for the past two days or so, but I've been running into a problem: every time I run the installation program on the live CD, I always get the same (or a very similar) error:

        Failed to create swap space
        The creation of swap space in partition #3 of SCSI5 (0,0,0) (sda) failed.

    So far, I've run DBAN (Darik's Boot and Nuke) on my HDD once, to make absolutely sure that everything on it had been erased. Then, I simply put in the live CD and let it run the automated install. I get the above error directly after I tell it to automatically partition the HDD (it will work for a second or so, then this pops up), forcing me back to the screen that lets me choose whether I want to automatically or manually partition the HDD. After failing to install that way, I did a little research, learned enough about partitioning Linux to use the manual partitioning option, and partitioned the HDD as follows (it's a 1 TB drive):

        /home    - (the rest) - ext2
        /        - 20 GB      - ext2
        /boot    - 100 MB     - ext2
        /swap    - 8 GB
        /EFIboot - 40 MB

    The only difference when I tried this method was that I got THIS message:

        Failed to create swap space
        The creation of swap space in partition #2 of SCSI5 (0,0,0) (sda) failed.

    Basically, the only difference was that there was now a '2' instead of a '3'. If I may ask, what exactly am I doing wrong? I've tried looking around the internet (that's basically all I've done for the last two days), but no one seems to have the same problem that I have, and I've tried most of the solutions for similar problems (DBAN, formatting partitions in ext2 format, etc.). The only thing I haven't tried is using the terminal to manually partition the HDD... and I actually DID try to do this, but I wasn't able to get past su's password prompt, so I wasn't able to use the terminal. Thank you for your help in advance. ~Welsh
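
    On the terminal point: the Ubuntu live session has no root password set, so su will always refuse; the usual route there is sudo. A sketch of preparing the swap partition by hand from the live CD follows. The partition number is a placeholder, so check it against the fdisk output first, because mkswap destroys whatever is on the partition it is pointed at:

        sudo fdisk -l                # identify the partition intended for swap, e.g. /dev/sda2
        sudo mkswap /dev/sda2        # write a swap signature onto it (destructive!)
        sudo swapon /dev/sda2        # enable it, then re-run the installer and reuse it as swap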

  • What does this IIS memory dump mean? (reserved memory)

    - by Jesse
    My w3wp processes are recycling every 60 seconds after using too much virtual memory. I ran the IIS Debug Diagnostic Tool to capture a memory dump before the worker process recycled; the most interesting part seems to be this:

        Virtual Allocation Summary
        Reserved memory          4.88 GBytes
        Committed memory       328.27 MBytes
        Mapped memory           17.36 MBytes
        Reserved block count       524 blocks
        Committed block count     1082 blocks
        Mapped block count          43 blocks

    So that 4.88 GBytes of reserved memory seems really big. But neither the DotNetMemoryAnalysis nor the regular Memory Pressure Analyzer seems to tell me where that 4.88 GB went. How can I find out?
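
    One way to dig further is to open the same dump in WinDbg, which can break the reserved address space down by region type (images, mapped files, thread stacks, GC heap reservations and so on). A sketch of the usual commands; which SOS module to load depends on the CLR version captured in the dump:

        !address -summary        $$ reserved/committed totals grouped by region type
        lm                       $$ loaded modules (many loaded images reserve a lot of space)
        .loadby sos clr          $$ SOS for .NET 4 (use ".loadby sos mscorwks" for 2.0/3.5 dumps)
        !eeheap -gc              $$ how much of the space is reserved by the managed GC heap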

  • Freeware Local Proxy for Proxy Chaining with HTTPAUTH

    - by pepoluan
    I am looking for a freeware local proxy to perform proxy chaining with HTTPAUTH. To explain my situation: in my workplace I am forced to keep switching between several internet-connected apps, and every time I have to type in the credentials (or, at least, click on 'OK' to send my previously saved credentials). To make matters more annoying, the proxy login times out every 30 minutes, requiring me to lather-rinse-repeat the whole annoyance. I'd like to just point them all to a locally installed proxy which will, on its own, perform the required HTTPAUTH against the corporate proxy. I've tried Cntlm, but it always fails to authenticate (and according to this thread, that is due to the proxy using HTTPAUTH, which is not supported by Cntlm). Any suggestions? ETA: I found Polipo, but it's kinda wonky on Windows. Especially if I visit a new URL and the DNS server is a bit slow, Polipo will simply drop/refuse the connection. And I have to put my password in plaintext. If there's a better suggestion, I'm all ears.
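
    For anyone trying the Polipo route anyway, the parent-proxy credentials go into its config file rather than into each application. This is only a sketch: the host, port and credentials are placeholders, the option names should be checked against the current Polipo manual, and the password does still sit in the file in plaintext:

        # hypothetical polipo.config
        proxyAddress = "127.0.0.1"
        proxyPort = 8123
        parentProxy = "corporate-proxy.example.com:8080"
        parentAuthCredentials = "username:password"
        # May help with the slow-DNS drops: let Polipo use the system resolver
        # (an assumption; verify against the Polipo documentation).
        dnsUseGethostbyname = true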

  • What is the best way to code the XNA Game Server for FPS game?

    - by AgentFire
    I'm writing an FPS game in XNA. It's going to be multiplayer, so I came up with the following: I'm making two different assemblies, one for the game logic and the second for drawing it and the game-irrelevant stuff (like rocket trails). The connection type is client-server (not peer-to-peer), so every client first connects to the server and then the game begins. I have definitely decided to use the XNA.Framework.Game class for the clients, to run their game in a window (or fullscreen), and the GameComponent/DrawableGameComponent classes to store the game objects and update and draw them on each frame. Next, I want an answer to the question: what should I do on the server side? I have a few options:

    1. Create my own Game class on the server, which will process all the game logic (only, no graphics). The reason why I am not using the standard Game class is that when I call Game.Run() a white window appears, and I can't figure out how to get rid of it.
    2. Somehow use the original XNA Game class, which already has the GameComponent collection and an Update event (60 times per second, just what I need).

    UPDATE: I have more questions. First, what socket mode should I use: TCP or UDP? And how do I actually let the client know that this packet is meant to be processed after that one? Second, if I am going to use exactly the GameComponent class for the game objects that are stored and processed on the server, how do I make them get drawn on the client? Inherit from them (while they are combined into one assembly)? Something else?

  • Super slow website - show me what's been downloaded so far.

    - by Mick
    Every now and then a website becomes super slow (but not broken) because too many people are looking at it at the same time. When I try to view such a site, say with Firefox, I can see that it is downloading all sorts of components of the site, because of the progress information printed at the bottom of the window, and I sit there thinking: "If only the browser would show me what it's got so far. I don't care if it's a jumbled mess, I just want to see what you've got." Does any browser offer such an option?
