Search Results

Search found 2950 results on 118 pages for 'andrew martin'.

  • Ethernet 802.1x client -> WiFi AP on a Raspberry Pi?

    - by Martin Janiczek
    I have an Ethernet connection that requires 802.1x authentication (TTLS, MSCHAPv2, name+password). My goal is to connect that to something that would then act as a WiFi AP, so I can use the connection on more devices (iPhone, notebook, etc.). Would it be possible/a good idea to use a Raspberry Pi for this purpose? Or are there better-suited devices to do this? EDIT: found some alternatives, but because of low rep I can't post more than two links...

        OpenWRT + wpa_supplicant guide
        Carambola - works with OpenWRT (but probably not standalone?)
        Hornet-UB - works with OpenWRT
        Asus RT-N10+ + OpenWRT how-to
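
    A minimal sketch of what the wired half could look like, assuming a Raspbian-like system with wpa_supplicant and hostapd available (interface names, file paths, and credentials are placeholders):

        # /etc/wpa_supplicant/wired.conf -- authenticate the Ethernet side
        ap_scan=0
        network={
            key_mgmt=IEEE8021X
            eap=TTLS
            phase2="auth=MSCHAPV2"
            identity="your-username"
            password="your-password"
        }

        # run it against the wired driver
        wpa_supplicant -B -i eth0 -D wired -c /etc/wpa_supplicant/wired.conf

        # AP side: hostapd serves wlan0, then NAT the WiFi clients out through eth0
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE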

    Read the article

  • Apache rewrite rule to remove index.php and direct certain areas to https

    - by Stephen Martin
    I have a CodeIgniter application running on Apache2. I have managed to remove the index.php from the URLs with this .htaccess:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]

    Now I want to make certain parts of the site redirect to https. I tried this:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]
        RewriteRule ^/?cpanel/(.*) https://%{SERVER_NAME}/cpanel/$1 [R,L]
        RewriteRule ^/?login/(.*) https://%{SERVER_NAME}/cpanel/$1 [R,L]

    But it doesn't work. I have to say that when it comes to Apache rewrites I'm a noob. I can't find any tutorials on how to remove index.php and rewrite/redirect certain parts of the site to https. Any ideas? Thanks.
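
    One likely culprit: the catch-all index.php rule carries [L], so requests never reach the https rules below it. A sketch with the redirects moved first, assuming the rules live in a .htaccess (where the matched path has no leading slash); the login target is changed to /login/ on the assumption that the original cpanel target was a copy-paste slip:

        RewriteEngine on

        # redirect the protected areas to https before the front controller fires
        RewriteCond %{HTTPS} off
        RewriteRule ^(cpanel|login)(/.*)?$ https://%{SERVER_NAME}/$1$2 [R=301,L]

        # CodeIgniter front controller
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]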

    Read the article

  • How to set up a PC which can be booted from Linux AND Windows?

    - by Martin
    Our PC was running Windows XP up to now. It has become incredibly slow and I'm considering switching to Linux (Ubuntu?!) as a fresh OS. However, there are some applications we rarely use which run only on Windows, and I also want the possibility to easily go back to the old system if I should find, while testing Linux, that anything is missing or not available. So the idea is to install Linux on a new (second) hard drive and use the existing Windows XP from a virtual machine (converted by Paragon Drive Backup) during the transition time. We have a lot of data on the PC, tens of GBs of photos (managed by Picasa), ... My questions: What would be the best way to set up the new hard drive (partitions)? I assume that I cannot access the Linux data from Windows, but I could access (read/write) Windows drives from Linux? Does anyone know good tutorials for this use case? What other things might I have to consider for a Windows-to-Linux transition?
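
    One possible layout for the new drive, sketched with placeholder sizes and a hypothetical device name; Linux reads and writes NTFS via ntfs-3g, while Windows cannot read ext4 without third-party drivers, so keeping the shared data on NTFS is the usual compromise:

        /dev/sdb1  ext4   30 GB  /      (Ubuntu system)
        /dev/sdb2  swap    4 GB
        /dev/sdb3  ntfs   rest   shared data (photos), readable from both systems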

    Read the article

  • Local and public IPs on the same switch?

    - by Andrew
    It's all pretty much in the title. Is it possible to assign both local and public IPs to different nodes connected to the same switch? I have 4 servers with 2 gigabit ethernet ports each. I want one of each to have a public IP, and the remaining ports to have local IPs for server-to-server traffic. Sorry if it's a dumb question. I didn't see anything about it on here already.
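
    Conceptually this works: a plain layer-2 switch forwards by MAC address and does not care which IP subnets ride on it, though putting the two networks in separate VLANs is cleaner if the switch supports it. A sketch of one server's setup, Debian-style, using documentation and RFC 1918 placeholder addresses:

        # public-facing port
        auto eth0
        iface eth0 inet static
            address 203.0.113.10
            netmask 255.255.255.0
            gateway 203.0.113.1

        # private port for server-to-server traffic
        auto eth1
        iface eth1 inet static
            address 192.168.10.1
            netmask 255.255.255.0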

    Read the article

  • restore -A usage

    - by Martin v. Löwis
    I have created a number of dump files using Linux dump(8), using the -A option to get a table of contents on disk (the backups are on tape). Now I'm trying to look into these archive files, using

        restore -i -A <archive>

    However, this insists on asking what tape to use, and complains if I say none. What am I doing incorrectly? I was hoping that I could use these archive index files without having to insert the tape.
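
    If memory of the restore(8) man page serves, -A is documented for use with -t or -x rather than -i: it lets you check what is on the media without mounting it. Listing the index alone should then look like

        restore -t -A <archive>

    while interactive browsing (-i) and actual extraction still want the tape itself.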

    Read the article

  • Tools to manage large network of heterogeneous web applications?

    - by Andrew
    I recently started a new job where I've been tasked with managing a global network of heterogeneous web applications. There's very little documentation. My first order of business is to create an inventory of all of the web applications. Are there any tools out there to manage a large group of web apps? I'd like to collect a large dataset for each website, including:

        logins for web-based control panels
        logins to FTP/ssh accounts
        Google Analytics tracking code for each site
        3rd-party libraries used
        SSL certs, issuers, and expiration dates
        etc.

    I know I could keep the information in Excel or build a custom database, but I'm hoping there's already a tool out there to help me with this.

    Read the article

  • How do I make the first row of an Excel chart be treated as a heading when it's a number?

    - by Andrew Grimm
    Given a data sample like

        Prisoner  24601  0.50
        Day 1     80     90
        Day 2     81     89
        Day 3     82     90
        Day 4     81     91

    what's the easiest way to tell Excel that 24601 and 0.50 are data series names rather than Y-axis values when creating a line chart? Approaches I'm aware of:

        Turn prisoner numbers into text by having ="24601" and ="0.50"
        Only select rows 2 onwards as data, and then add in the labels once the graph has been created

    Approaches that don't appear to work:

        Ask Excel to format the first row's numbers as text.

    Read the article

  • Multi-account Google Apps Calendar Free-busy status?

    - by Andrew Bolster
    I have 3 Google Apps domain accounts and a personal Google account, and until recently there was little need for the Google Apps calendars to see any real use. Now, however, the powers that be have finally discovered the smart re-scheduler and the other Google tools for managing meetings and schedules; unfortunately I'm now in a position where I've got event notifications all over the place, and because the calendars do not know about each other, I'm losing all of the advantages of the rescheduler / free-busy status. TL;DR: 4 calendars, unified free/busy status without having to manually copy every event, please.

    Read the article

  • if a usb pen drive has 64349 cylinders, is it damaged?

    - by Andrew S
    I have a 4 GB USB pen drive that I'm trying to format with FAT32. When I run fdisk, it gives me this message:

        The number of cylinders for this disk is set to 64349.
        There is nothing wrong with that, but this is larger than 1024,
        and could in certain setups cause problems with:
        1) software that runs at boot time (e.g., old versions of LILO)
        2) booting and partitioning software from other OSs
           (e.g., DOS FDISK, OS/2 FDISK)

    I've tried deleting the partition table, creating new partitions, etc., but they never work. Sometimes I can write to the drive the first time I use it after formatting it, but then it becomes read-only in both Windows and Linux. I've tried this on multiple computers. Am I doing something wrong, or is the drive reporting an incorrect number of cylinders? Is the drive itself likely to be corrupted, and is there any way to fix this under Windows Vista or Linux? Thanks
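
    The cylinder warning itself is cosmetic: CHS geometry is synthesized for USB flash and only ever mattered to ancient boot loaders. The flipping-to-read-only behaviour is the telling symptom; it is what many flash controllers do when their cells wear out. A quick check under Linux, assuming the stick appears as /dev/sdb (hypothetical device name, verify with dmesg first):

        dmesg | tail              # look for I/O errors or 'write protect is on'
        badblocks -sv /dev/sdb    # read-only surface scan of the whole stick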

    Read the article

  • logrotate: neither rotate nor compress empty files

    - by Andrew Tobey
    I have just set up an (r)syslog server to receive the logs of various clients, which works fine. Only logrotate is still not behaving as intended. I want logrotate to create a new logfile for each day, but only to keep and store, i.e. compress, non-empty files. My logrotate config currently looks like this:

        # sample configuration for logrotate being a remote server for multiple clients
        /var/log/syslog {
            rotate 3
            daily
            missingok
            notifempty
            delaycompress
            compress
            dateext
            nomail
            postrotate
                reload rsyslog >/dev/null 2>&1 || true
            endscript
        }

        # local, i.e. the system's very own logs: keep logs for a whole month
        /var/log/kern.log /var/log/kernel-info
        /var/log/auth.log /var/log/auth-info
        /var/log/cron.log /var/log/cron-info
        /var/log/daemon.log /var/log/daemon-info
        /var/log/mail.log
        /var/log/rsyslog /var/log/rsyslog-info {
            rotate 31
            daily
            missingok
            notifempty
            delaycompress
            compress
            dateext
            nomail
            sharedscripts
            postrotate
                reload rsyslog >/dev/null 2>&1 || true
            endscript
        }

        # received, i.e. logs from the clients
        /var/log/path-to-logs/*/* {
            rotate 31
            daily
            missingok
            notifempty
            delaycompress
            compress
            dateext
            nomail
        }

    What I end up with are some sort of "summarized" files such as filename-datestampDay-Day and corresponding .gz files. What I do have are empty files, which are eventually zipped. So is the notifempty directive in fact responsible for these DayX-DayY files, for days on which really nothing happened? And what would be an efficient way to drop both the empty log files and their .gz files, so that I eventually only keep logs/compressed files that truly contain data?
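
    One hedged way to sweep out the empties after the fact, assuming GNU find: an empty file gzips to a few dozen bytes, so a small size threshold catches the useless .gz files (both the 50-byte cutoff and the '*-20*' dateext name pattern below are guesses that may need tuning for your setup). Run daily from cron:

        # remove rotated logs that are empty, and .gz files too small to hold any data
        find /var/log/path-to-logs -name '*-20*' -size 0 -delete
        find /var/log/path-to-logs -name '*.gz' -size -50c -delete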

    Read the article

  • Configuring sendmail to use one outbound MTA exclusively

    - by Charlie Martin
    I have a sendmail problem, and I'm anything but a sendmail guru -- I could use some help. My problem is that I have a system intended to be more or less an "appliance" -- it's not intended to have an admin. Because of this, it needs to be able to "call home" by sending email. As we have configured it, this works fine -- using sendmail, it finds the appropriate relay by looking up an MX record, and everything works. Now, however, because of security concerns, we want to limit it to using exactly one relay, so for example relay.corp.example.com. Should the user configure it to use, say, fubar.example.com, the mail sending should fail or be deferred. I thought that by configuring sendmail with an /etc/mail/server.switch file containing "hosts files" (without dns), I'd get that effect. This doesn't work -- instead, if it gets mail addressed to [email protected], it tries to talk directly to example.com and ignores the configured server. Any ideas?
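
    A sketch of the usual way to pin all outbound mail to one relay, assuming an m4-built configuration (the relay name is the example from the question; incidentally, the switch file sendmail conventionally looks for is service.switch, not server.switch, which may be why it was ignored). With a smart host defined, sendmail stops doing its own MX lookups for nonlocal mail and hands everything to that host, so it fails or defers rather than falling back to direct delivery:

        dnl # in sendmail.mc
        define(`SMART_HOST', `relay.corp.example.com')dnl

        # then rebuild and restart, e.g.
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        service sendmail restart

    If the box only ever sends and never receives, FEATURE(`nullclient', `relay.corp.example.com') is an even more restrictive variant.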

    Read the article

  • How do I extract files from one tarball to another tarball in one step?

    - by Martin
    I have some fairly large tarball archives, from which I need to extract some files. I will later repack those files to transfer them to another server. Currently that is a two (multi) step process for me:

        mkdir ttmp
        tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> <folder-to-be-extracted>

    or alternatively with wildcards:

        mkdir ttmp
        tar -vxzf large.tgz -C ttmp/ --strip-components=<INT> \
            --wildcards --no-anchored '*pattern*'

    Then I go ahead and recompress the created folder:

        tar -vczf small.tgz ttmp/*
        rm -rf ttmp

    How can I combine these two commands into one? Like this:

        tar -x large.tgz > tar -c small.tgz

    Just to show what I have already tried: whenever I search for the term "extract" I end up here or here or even here. When I use the term "split" I end up here, and that is definitely not what I intend to do. When I use "repack" I end up in strange places.
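
    GNU tar has no built-in mode for streaming selected members from one archive straight into another, so a scratch directory is hard to avoid; a minimal sketch that at least chains everything into one command line (placeholders as in the question):

        tmp=$(mktemp -d) \
          && tar -xzf large.tgz -C "$tmp" --strip-components=<INT> \
                 --wildcards --no-anchored '*pattern*' \
          && tar -czf small.tgz -C "$tmp" . \
          && rm -rf "$tmp"

    Repacking with -C "$tmp" . also keeps the temporary directory name itself out of the new archive.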

    Read the article

  • Scheduled Tasks and Environment Variables

    - by Andrew J. Brehm
    I have a scheduled task, a batch file, that uses an environment variable which is set system-wide. On server 1, the scheduled task runs under a domain account and the environment variable works. The environment variable also exists in my session and when I use runas as the service account. On server 2, the scheduled task runs under a different domain account and the environment variable DOES NOT work. However, the environment variable does exist in my session and when I use runas as the service account. On both servers the environment variable was originally set system-wide by the same script. The script runs again every now and then, and as far as I can see no one has tampered with the environment variable. The scheduled tasks are set up identically on the two servers (using the same XML file) and the two service accounts are identically configured (as far as I know). What am I doing wrong?
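
    One thing worth checking, offered as a guess: services, including the Task Scheduler, capture the system environment when they start, so a variable added or changed after boot may not be visible to tasks until the service (or the server) is restarted. Comparing what is actually stored system-wide on both machines rules out a stale value (the variable name below is a placeholder):

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v MYVAR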

    Read the article

  • smartOS HPC config suggestion

    - by Andrew B.
    I'm configuring a brand new HPC server and am interested in using SmartOS because of its virtualization control and ZFS features. Does this configuration make sense for a SmartOS HPC, or would you recommend an alternative? System specs:

        2x 8-core Xeon
        384 GB RAM
        30 TB HDs with 2x 512 GB SSDs

    Uses:

        ZFS for serving data to different VMs and over the network; 1 SSD for L2ARC and 1 for ZIL
        typically 1-2 Ubuntu instances running R and custom C/C++ code

    My biggest concerns as a newbie to SmartOS and ZFS are: (1) will I get near-metal performance from Ubuntu running on SmartOS if it is the only active VM? (2) how do I serve data from the global ZFS pool to the containers and other network devices?
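
    For question (2), a sketch under the assumption of the default SmartOS pool name "zones": create a dataset in the global zone and loan it to an OS zone with a lofs mount declared in the zone's vmadm payload (the dataset and mount-point names here are made up):

        # in the global zone
        zfs create zones/data

        # fragment of the zone's vmadm JSON, mounting it read-write at /data
        "filesystems": [
          { "source": "/zones/data", "target": "/data", "type": "lofs" }
        ]

    Serving the same dataset to other machines on the network is then typically done by an NFS or SMB server running inside a zone, since the SmartOS global zone is deliberately kept minimal.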

    Read the article

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the URL http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: now there are two possible URLs to the same resource:

        http://www.example.com/foo/bar/ (slash URL)
        http://www.example.com/foo/bar/index.html (index.html URL)

    By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this:

        location ~ /index\.html$ {
            return 404;
        }

    But it seems that Nginx does some internal rewrite of the slash URL to the index.html URL and then matches this location, so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just a redirect to the slash URL.
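
    One way to sidestep the internal rewrite, based on the fact that $request_uri holds the URI exactly as the client sent it, before the index module rewrote it (a sketch; a bare return inside if is one of the uses the nginx "if is evil" guidance considers safe):

        location / {
            if ($request_uri ~ "/index\.html$") {
                return 404;
            }
            index index.html;
        }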

    Read the article

  • Discover the public ip of a network without being connected

    - by Martin Trigaux
    Let's say I'm next to a network and can see the traffic (with airodump or a similar tool) but cannot decipher it (because I am not connected to the network). Is it possible to discover the public IP address of the network? I know the MAC addresses of the users connected to the network, but do I know the router's? If yes, maybe there is a way to do the matching. I know IP addresses are not forever, but some addresses are static and never change. Maybe there is a database of MAC addresses that has recorded this. Google has a database that matches MAC addresses to geographical coordinates, so why not IP addresses? Other idea: if I know where I am, I can maybe guess the IP range used in the city by the ISP (is that findable?) and then try to "ping" each IP in the range (if it is a /24 that's possible, maybe even a /16). Will I get some information like the MAC of the box, or see some traffic on the network? These are two ideas I had. I don't know if they are doable, and they are certainly not perfect. Can you think of others? By trying several methods, maybe I can get a guess with a bit of luck. Thank you

    Read the article

  • Remote desktop fails with user denied until client reboot

    - by Andrew J. Brehm
    Sometimes (but often enough to annoy) Remote Desktop Connection cannot connect to a server (2008 R2, but maybe also 2003) and claims that "The connection was denied because the user account is not authorized for remote login." The user is always authorized for remote login, and the connection works from other clients. (Although this is the very same message that appears when a user really isn't authorized for remote login.) The problem always goes away after a client restart. The client is always Windows 7, but I have no (other) reason to assume that it only affects Windows 7 clients. Any idea what causes this?

    Read the article

  • In Linux, what's the best way to delegate administration responsibilities, like for Apache, a database, or some other application?

    - by Andrew Banks
    In Linux, what's the best way to delegate administration responsibilities for Apache and other "applications"? File permissions? Sudo? A mix of both? Something else? At work we have two tiers of "administrators":

        Operating system administrators. These are your run-of-the-mill "server administrators." They are responsible for just the operating system.
        Application administrators. The people who build the web site. This includes not only writing the SQL, PHP, and HTML, but also setting up and running Apache and PostgreSQL or MySQL. The aforementioned OS admins will install this stuff, but it's mainly up to the app admins to edit all the config files, start and stop processes when needed, and so on.

    I am one of the app admins. This is different from what I am used to. I used to just write code. The sysadmin took care not only of the OS but also of installing, setting up, and keeping up the server software. But he left. Now I'm in charge of setting up Apache and the database. The new sysadmins say they just handle the operating system. It's no problem; I welcome learning new stuff. But there is a learning curve, even for the OS admins. Apache, by default, seems to be set up for administration by root directly. All the config files and scripts are 644 and owned by root:root. I'm not given the root password, naturally, so the OS admins must somehow give my ordinary OS user account all the rights necessary to edit Apache's config files, start and stop it, read its log files, and so on. Right now they're using a mix of: (1) giving me certain sudo rights, (2) adding me to certain groups, and (3) changing the file permissions of various directories to make them writable by one of the groups I'm in. This never goes smoothly. There's always a back-and-forth between me and the sysadmins. They say it's ready. Then I try certain things, and half of them I still can't do. So they make some more changes. Then finally I seem to be independent and can administer Apache and the database without pestering them anymore. It's the sheer complication and number of changes that make me uncomfortable. Even though it finally works, more or less, it seems hackneyed. I feel like we're doing it wrong. It seems like the makers of the software would have anticipated this scenario (someone other than root administering it) and would have a clean two- or three-step program to delegate responsibility to me. But it feels like we are really chewing up the filesystem and moving far away from the default setup. Any suggestions? Are we doing it the recommended way? P.S. For PostgreSQL it seems a little better. Its files are owned by a system user named postgres, so giving me the right to run sudo su - postgres gives me just about everything. I'm just now getting into MySQL, but it seems to be set up similarly. It still seems a little weird doing all my work as another user, though.
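
    A common pattern, sketched under the assumption of a Red Hat-style layout and a hypothetical webadm group for the app admins: a small sudoers fragment for process control, plus group write access to the config tree, covers most day-to-day needs without scattering one-off permissions around.

        # /etc/sudoers.d/webadm  (edit with: visudo -f /etc/sudoers.d/webadm)
        %webadm ALL = NOPASSWD: /usr/sbin/apachectl, /sbin/service httpd *, /sbin/service postgresql *

        # shell commands run once by the OS admins
        chgrp -R webadm /etc/httpd /var/log/httpd
        chmod -R g+rwX /etc/httpd
        chmod -R g+rX /var/log/httpd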

    Read the article

  • Is the cooling fan on a 2004 graphics card really necessary?

    - by Andrew
    I have an old GeForce 6600 card in my computer, circa 2004. Recently the fan has started playing up and making loud, irritating noises. I've tried oiling it with no luck. This is the second fan I've put on the card; the stock one broke ages ago. Is a card this old really likely to need a cooling fan, or can I remove it altogether? It has a decent heatsink on the chip, but there's not a lot of airflow in that part of the box. Edit: I should add that I seem to remember most mid-range graphics cards at the time I bought this one didn't have fans (pretty sure they had heatsinks only), which is why I'm wondering.

    Read the article

  • Changing the Start Menu Power Button - Setting does not work: only (Shut down) is available

    - by Martin
    This is the second time I've tried to change this setting on a Vista-based OS, and I can't get it to work. OS: Windows Server 2008 SP2 (not R2) = Microsoft Windows [Version 6.0.6002] = the Vista codebase. When I go to: Power Options - Change Plan Settings - Change advanced power settings - Power buttons and lid - Start menu power button - Setting: the available combo box will only show the option Shut down. No other options are available. This server is part of a domain and has not been set up by me. I have not yet talked with the domain admin, but as far as I could tell from googling, only Win7 has group policy options for the start menu. (And yes, of course I will talk to the domain admin to see if he has any clue - which I doubt.) (Edit: I have now talked to our domain admin, and he's got no clue either.) I'm responsible for this server and a local administrator, but not a domain administrator. I switched off User Account Control (UAC) yesterday without problems. Since I always log into this machine via RDP, and this being a server, the natural choice would be the option (Log out) and not (Shut down). What can I do to fix it, or to find out why it cannot be changed? Thanks!
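
    A guess worth checking, offered tentatively: on the Vista codebase the chosen action is stored per user in the registry, and a policy or profile can pin it. Inspecting the raw value (value name as on Vista; it may simply be absent) at least shows whether the GUI is reflecting a stored setting:

        reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v Start_PowerButtonAction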

    Read the article

  • Configure iptables with a bridge and static IPs

    - by Andrew Koester
    I have my server set up with several public IP addresses, with a network configuration as follows (with example IPs):

        eth0
         \- br0 - 1.1.1.2
            |- [VM 1's eth0]
            |   |- 1.1.1.3
            |   \- 1.1.1.4
            \- [VM 2's eth0]
                \- 1.1.1.5

    My question is, how do I set up iptables with different rules for the actual physical server as well as the VMs? I don't mind having the VMs do their own iptables, but I'd like br0 to have a different set of rules. Right now I can only let everything through, which is not the desired behavior (as br0 is exposed). Thanks!
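
    A sketch assuming the bridge-netfilter hooks are available: traffic addressed to br0's own IP traverses the INPUT chain, while frames bridged through to the VMs traverse FORWARD, so the two can get independent rules (the port numbers below are illustrative):

        # let iptables see bridged traffic at all
        sysctl -w net.bridge.bridge-nf-call-iptables=1

        # rules for the host itself (br0's address)
        iptables -A INPUT -i br0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i br0 -p tcp --dport 22 -j ACCEPT
        iptables -A INPUT -i br0 -j DROP

        # leave bridged VM traffic alone; the VMs filter for themselves
        iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT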

    Read the article

  • Wrong chrome full screen resolution with multiple displays

    - by Martin
    I am using Google Chrome on Windows 7 64-bit with two displays. The displays have different resolutions and aspect ratios. My problem is most visible when playing back full-screen videos from a Flash player inside Chrome (but it is not Flash-related). The maximum size is calculated from the primary display's resolution, even when the relevant window is maximised on the second screen. The problem is also visible on websites that set the width to 100%: the 100% then applies to the primary display, even when the window is opened on the secondary display. Are there any known solutions to this problem? I have been observing it for many Chrome versions, and I do not know if it has ever been correct.

    Read the article
