Search Results

Search found 10098 results on 404 pages for 'per pixel'.


  • Loading the preview function of AUCTeX 11.86 on macports Emacs-app 23.2.1 port.

    - by Sarah
    I've installed Emacs-app 23.2.1 via MacPorts and I'm trying to install AUCTeX 11.86 so that it will work on this installation. I've run the following configure line for AUCTeX and that seems to work:

        ./configure --with-emacs=/Applications/MacPorts/Emacs.app/Contents/MacOS/Emacs --with-lispdir=/Applications/MacPorts/Emacs.app/Contents/Resources/site-lisp/ --with-texmf-dir=/usr/local/texlive/2010basic/texmf-local/

    make and make install seem to work, and I've added the following line to my init.el as per the installation instructions:

        (require 'tex-site)

    However, when I open a TeX file, the Preview menu does not show up (although the LaTeX menu does). The following are some of my tests: M-x load-library RET preview-latex RET doesn't seem to do anything. M-x load-library RET preview RET brings up the Preview menu. Is it safe to somehow add the load-library preview to my init.el? Or do I risk mucking up something? I'm new to Emacs and primarily trying to learn it because of the AUCTeX preview features, but I don't feel very safe in this environment yet.
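
    Since loading the preview library by hand brings up the menu, one low-risk thing to try (a sketch, not an official AUCTeX recipe; preview-latex is the library name the AUCTeX install notes mention, and how it behaves with this MacPorts layout is an assumption) is to load it conditionally from init.el so a missing library can't break Emacs startup:

        ;; init.el -- hedged sketch: load AUCTeX, then its preview support if installed
        (require 'tex-site)                        ; standard AUCTeX entry point
        (when (locate-library "preview-latex")
          (load "preview-latex.el" nil t t))       ; pull in preview-latex only if it is found
        ;; if only preview.el is on the load-path, (require 'preview) here does the same job

    Because the load is wrapped in locate-library, removing AUCTeX later leaves init.el harmless, which is about the worst "mucking up" this can do.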

  • How to calculate unweighted averages in Excel PivotTable?

    - by yonatron
    I often make PivotTables in which each row contains a number of per-person average measures. I then want to look at the unweighted column average for each measure, and usually make some kind of chart from these. Because my individual cells are often averaged from different numbers of data points, the Grand Total row ends up being a weighted average, which I’m not interested in. So I usually make my own average row a few rows above the table to use for my charts. That’s not too much work, but there’s another problem. I often add a few more people’s worth of data to the PivotTables’ source, then refresh the tables. This means my average row needs to be updated to encompass more rows from the PivotTable. Not a huge deal with one table, but when I have lots of them across lots of sheets, I have to do find/replace on a whole bunch of formulas. So: is there a way to automatically get unweighted column averages in a PivotTable, such that when the table is refreshed, the averages don’t change locations and do encompass the newly added (or removed) data? Thanks
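
    One workaround (a sketch, assuming the pivot's row labels are in column A, the measure is in column B, and there are no subtotal rows) is to compute the unweighted average outside the pivot with whole-column references, so the formula never needs adjusting after a refresh:

        =AVERAGEIF(A:A, "<>Grand Total", B:B)

    AVERAGEIF ignores text and empty cells in the average range, so the header row and the blank rows below the table drop out on their own; only the Grand Total row has to be excluded explicitly. The column letters here are assumptions about your layout.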

  • What to do with old laptop screens?

    - by Lord Torgamus
    This question is inspired by another SU question I came across earlier today: What to do with old hard drives? It made me think about two long-dead laptops I have with perfectly good screens still inside. One is a Dell Inspiron 5100 and the other is an Averatec E1200, but responses need not be geared towards those particular models' screens. Rules, based heavily on the original question's. Objectives and suggestions to keep in mind when you post an answer:

    - Should showcase your geekiness, be plain ol' fun, serve a social purpose or benefit the community.
    - Your answer need not be limited to only one screen. For a really good answer, I'll go out and buy additional leftover screens.
    - Your answer need not be limited to one project per screen.
    - If additional accessories need be purchased, make sure they are common. Don't tell me to get a moon rock or something.
    - The projects you suggest should serve a useful purpose; art is nice, but functional art is way better.

    Thanks in advance, folks.

    EDIT: Found another related question: Fun projects to do with an old 17" LCD monitor

    EDIT 2: I, for one, am enjoying the new outpouring of creativity here. Best fifty bucks... I mean, rep points... I ever spent.

    EDIT 3: That does it. At the end of the week, there was a tie for most votes between the accepted answer and the game platform answer. The game platform answer was cooler, but less reasonable as a project to actually do; in other words, it was more moon rocky. Unfortunately, I think fencepost had the best comment on the topic, which is that displays on their own have no good interface. Thanks for playing, everyone!

  • Exchange 2003 Internet Mail Size Limits

    - by scampbell
    I have unsuccessfully tried to increase per-user incoming mail size settings by editing their user account settings on our Exchange server, but large incoming mail from external domains is still blocked using the default global settings. After reading here: http://support.microsoft.com/default.aspx?scid=kb;en-us;322679 I see that:

        All Internet e-mail messages use the global setting for limits on sending and on receiving. The message categorizer evaluates the sender's sending limit and the recipient's receiving limit. In example 2 earlier, a user with a user mailbox limit of 3 MB could receive messages from another user with a 3-MB sending limit. Because Internet users use the global setting, they can send only a 2-MB message.

    Which to me is madness! Surely if I want to allow a user to receive mail up to a certain size then I should be able to set it as such? Is there a specific way of getting round this? Would setting the global defaults high and setting a lower, say 10MB, limit on the SMTP connector do the trick? Thanks.

  • Pulling application updates from closest server?

    - by Mike Morris
    Setup:

    - 6 major sites with Server 2003/2008 DCs doing DHCP/AD-integrated DNS, each on their own subnet.
    - All connect back to the datacenter through a 3 Mbps WAN.
    - ERP server running in the datacenter, accessed by clients at all sites.

    Currently, when we update the software, I manually push a copy of the updated client/config files down to each DC. I have a script that we run on each PC to update the clients. It determines what subnet the PC is on, and pulls the software from that DC. It's messy, but it works. The client has an autoupdate feature, but it'll only pull from the application server (which is housed in the datacenter, over the 3 meg link). It takes forever, since the updates are not "patches" but a full version of the client, even for minor upgrades (bad design). After the most recent patch, you can configure the clients to pull from a different server. Unfortunately, it is the same for all clients. Is there some kind of DNS magic I can use to pull from the local server? For instance, if I tell the clients their update server is ERPUPDATE, can I have their local DNS server return a different IP for ERPUPDATE than the other sites? Example: Client 1 is at site A, client 2 is at site B. They each run the software and a version change is detected. As per the config files, the clients look to ERPUPDATE for their updated client.

    - Client 1 queries DNS for the IP of ERPUPDATE at its current location (site A)
    - DNS at site A returns 192.1.1.5
    - Client 1 pulls update from 192.1.1.5
    - Client 2 queries DNS for the IP of ERPUPDATE at its current location (site B)
    - DNS at site B returns 192.1.2.5
    - Client 2 pulls update from 192.1.2.5

    Excuse the poor explanation, I worked 61 hours over the weekend and haven't completely rebounded. I'll be happy to clarify if needed!
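
    Because the main zone is AD-integrated (so an ordinary A record for ERPUPDATE would replicate to every site), one way to get per-site answers is a small standard primary zone named after the host on each site's DC, with that site's IP at the zone root. This is a sketch of the idea only; the zone name, server names and IPs are taken from the example or otherwise assumed:

        :: on the site A DC -- local-only zone whose root answers for ERPUPDATE
        dnscmd SiteA-DC /ZoneAdd erpupdate.domain.local /Primary /file erpupdate.domain.local.dns
        dnscmd SiteA-DC /RecordAdd erpupdate.domain.local @ A 192.1.1.5

        :: on the site B DC -- same zone name, different answer
        dnscmd SiteB-DC /ZoneAdd erpupdate.domain.local /Primary /file erpupdate.domain.local.dns
        dnscmd SiteB-DC /RecordAdd erpupdate.domain.local @ A 192.1.2.5

    Clients then reference the update host by that FQDN and each site's DNS server answers with its own IP. The trade-off is one extra zone per site to remember if the update host ever moves.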

  • De-duplicating backup tool on a block basis? [closed]

    - by SST
    I am looking for an (ideally free as in speech or beer) backup tool for Unix-like OSes which can store deduplicated backups, i.e. only non-redundant content takes up additional space. I already looked at dirvish (my first candidate) and rsnapshot, which use hardlinks to achieve deduplication on a per-file level. However, as I want to back up large files (Thunderbird mailboxes 3GB, VMware images 10GB), such files are stored again in their entirety even if just a few bytes change. Then there are rsync-based tools like rdiff-backup which only store deltas and a current mirror. However, as the deltas are generated against each previous mirror, it is difficult to fine-tune the retention granularity (only keep one backup after a week, etc.) because the deltas would have to be re-evaluated. Another approach is to partition content into blocks and store each block only if it is not stored yet, otherwise just linking it to the first occurrence. The only tool I know of that does this by now is obnam (http://liw.fi/obnam), and it even supports zlib compression and gpg encryption -- nice! But it is very slow, AFAICT. Does anyone know of any other solid backup software which supports deduplication on a sub-file level, ideally with at least some management options (show/select/delete generations...)?
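
    For reference, the obnam workflow that gives the block-level behaviour described above looks roughly like this (a sketch from memory of obnam 1.x; check obnam --help for the exact flag spellings before relying on them):

        # back up a home directory into a local repository; unchanged blocks are not stored again
        obnam backup --repository /mnt/backup/repo $HOME

        # list the generations (snapshots) kept in the repository
        obnam generations --repository /mnt/backup/repo

        # thin out old generations with a retention policy, e.g. 7 dailies and 8 weeklies
        obnam forget --repository /mnt/backup/repo --keep 7d,8w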

  • Unable to remove invalid(orphaned?) SPNs

    - by Brent
    tl;dr version: Renamed domain from internal.domain.com to domain.com, have 4 SPNs that I am unable to remove from the DC.

    So my domain was internal.domain-name.com and I renamed it to domain-name.com and I thought everything was good. Several days later, I start setting up my RD Gateway and am noticing issues surrounding group policy. I run dcdiag and the SystemLog part fails:

        Starting test: SystemLog
        A warning event occurred.  EventID: 0x00001796
          Time Generated: 08/25/2014   02:48:30
          Event String: Microsoft Windows Server has detected that NTLM authentication is
          presently being used between clients and this server. This event occurs once per
          boot of the server on the first time a client uses NTLM with this server.
        An error event occurred.  EventID: 0xC0001B70
          Time Generated: 08/25/2014   02:49:18
          Event String: The SQL Server (MSSQLSERVER) service terminated with the following service-specific error:
        An error event occurred.  EventID: 0xC0001B70
          Time Generated: 08/25/2014   02:49:48
          Event String: The SQL Server (MSSQLSERVER) service terminated with the following service-specific error:
        An error event occurred.  EventID: 0xC0001B70
          Time Generated: 08/25/2014   02:52:47
          Event String: The SQL Server (MSSQLSERVER) service terminated with the following service-specific error:

    This made me check my AD for possible connections to the .internal domain. I found four, which I removed by running:

        setspn -D E3514235-4B06-11D1-AB04-00C04FC2DCD2/d79fa59c-74ad-4610-a5e6-b71866c7a157/internal.domain-name.com ServerName
        setspn -D HOST/ServerName.domain-name.com/internal.domain-name.com ServerName
        setspn -D GC/ServerName.domain-name.com/internal.domain-name.com ServerName
        setspn -D ldap/ServerName.domain-name.com/internal.domain-name.com ServerName

    Also, checking my DNS records, there's an internal subdomain that I can delete but it comes back as well. I've tried removing the SPNs to no avail. Is there something I'm missing?
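
    To confirm whether anything in the forest still holds SPNs with the old suffix (a sketch; ServerName is the placeholder name used above), the query/list forms of setspn are handy:

        :: search the forest for any SPN ending in the old DNS suffix
        setspn -Q */internal.domain-name.com

        :: list everything currently registered on the DC's computer account
        setspn -L ServerName

    If -Q keeps finding the four entries after the -D commands above, they may be registered on a different account than ServerName, or be regenerated from leftover rename attributes such as msDS-AdditionalDnsHostName, which would also explain the DNS records reappearing.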

  • Ram configuration 4x4 or 2x8?

    - by Carl B
    I am looking to upgrade my RAM to 16 GB and I am wondering if there is any distinct advantage to the way I do it, that being a 4x4 or 2x8 setup. In all my searching there have been a number of pros for each profile, but I can find no benchmark results comparing the two setups. So, if there are 2 profiles of the same speed, same voltage, same timings and same CAS, which would perform better or have a better overall benefit? A few examples from my search:

    - A 4x4 set would lend a benefit in that if one stick failed, you only lose 25% of your RAM vs 50% in a 2x8 setup.
    - A 2x8 setup would put less strain on the memory controller and motherboard.
    - A 2x8 setup would generate less heat.
    - A 2x8 setup is easier to overclock (not part of my need, but a lot of the comparisons circled around the overclockability of the 2-stick setup).

    There is one outstanding benefit that I have found, at least at the target company I have looked at, and that is price: the 2x8 is nearly half the cost. My motherboard supports a max of 16 GB and I have a 64-bit OS. Has anyone seen any performance comparisons, or is 16 GB just 16 GB no matter how you slice it? And is there any merit to the above pros?

    Edit: as per the mobo specs - Main Memory • Supports four unbuffered DIMMs of 1.5 Volt DDR3 800/1066/1333/1600*/1800*/2133* (OC) DRAM, 16GB Max

  • nginx with fail2ban and mod_security

    - by Mahesh
    I forgot to update my fail2ban config for nginx. I just moved to nginx from apache. Today, I got a lot of calls from a single IP:

    - The IP tried to access login pages with POST and GET methods
    - The IP tried to use nginx as a proxy (GET http://...)
    - The IP searched images, js, css folders
    - The IP tried to inject -d url_allow_fopen =1 and something similar.

    Most of the calls ended with 404. I got approximately 50 requests from that IP in a second, so I updated my nginx config like this:

        http {
            limit_req_zone $binary_remote_addr zone=app:10m rate=5r/s;
            ...
            server {
                ...
                location / {
                    limit_req zone=app burst=50;
                }

    Will it avoid too many connections per second now? I have updated my fail2ban jail.local to support nginx. I am confused with the nginx-noscript.conf:

        [Definition]
        failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

    I am serving PHP with nginx. I checked apache's noscript.conf, which has the .php extension in it too. I tested the above settings before restarting fail2ban and got thousands of IPs matched. I removed php and nothing matched. Do I need .php| in nginx-noscript.conf? Does using mod_security and fail2ban together bring any problems? When I was searching today, I came to know mod_security is available for nginx too, so I am planning to use it as well.
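
    Since PHP is legitimately served by this nginx, dropping \.php from the noscript filter is the usual adjustment; the filter is meant to catch probes for script types you never serve. A sketch of how that could be wired up (file paths and the jail name are assumptions; the regex is simply the one above minus \.php):

        # /etc/fail2ban/filter.d/nginx-noscript.conf -- hedged sketch
        [Definition]
        failregex = ^<HOST> -.*GET.*(\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

        # /etc/fail2ban/jail.local -- hedged sketch
        [nginx-noscript]
        enabled  = true
        port     = http,https
        filter   = nginx-noscript
        logpath  = /var/log/nginx/access.log
        maxretry = 6

    The limit_req zone, fail2ban and mod_security address different layers (throttling, banning, request inspection), so they can coexist; the main cost is tuning each one's thresholds so they do not mask one another while you are testing.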

  • Exchange migration: ExchangeTransport warnings after uninstalling source server

    - by carlpett
    After disabling/uninstalling Exchange from our source SBS 2003 server, I'm getting these warnings:

    - Event 5020: "The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003 sourceserver.domain.local in Routing Group [...]"
    - Event 5006: "Cannot find route to Mailbox Server CN=SOURCESERVER [...] for store CN=[...]", for Public folder, First storage group and Recovery storage group.

    I followed the technet article here: http://technet.microsoft.com/en-us/library/bb288905.aspx (linked from the SBS 2003 - 2011 migration guide). When uninstalling Exchange, I got a warning about NNTP not being found in the registry, but that didn't seem relevant, and the uninstall continued. The server was subsequently removed from the domain and shut down, as per the instructions. If I open the Public Folder Management console on the Exchange 2010 server, the public folders \NON_IPM_SUBTREE\EFORMS_REGISTRY and \Archived mails give an error on "Update content". I haven't found anything else which indicates something is wrong. We never really used the public folders on the old server, so there isn't really anything lost. Can I just remove these folders and let them be created anew?
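
    If the folders really hold nothing you need, a hedged sketch of how that looks in the Exchange 2010 Management Shell (the folder name is taken from the question; EFORMS_REGISTRY is a system folder under NON_IPM_SUBTREE, so I would leave that one alone):

        # see what actually exists and where
        Get-PublicFolder -Identity "\" -Recurse | Format-Table Name, ParentPath

        # remove the unused folder and anything under it -- only after confirming it is not needed
        Remove-PublicFolder -Identity "\Archived mails" -Recurse

    The 5020/5006 events themselves tend to persist as long as something in the configuration still references the old server's routing group or public folder store, so they are worth re-checking after the cleanup rather than treating the folder removal alone as the fix.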

  • Rendering a frame is producing noise from speakers in Windows and Linux

    - by Robber
    When any hardware accelerated application is rendering a frame (or many of them) a very short noise comes from my speakers. This can be a game, a WebGL application or XBMC. When the application/game is rendering many frames per second (like most of them do) the noise is a continuous buzzing that gets higher pitched with higher framerates. This applies to Linux and Windows, so I'd assume it's a hardware problem. The current hardware in the PC is:

    - CPU: Core2Quad Q9550
    - GPU: Radeon HD 5770
    - RAM: 2x2GB DDR2
    - Motherboard: Asus P5QLD PRO
    - PSU: be quiet! Pure Power 530W
    - Screen and speakers: old 720p LCD TV connected via VGA and aux cable

    Muting the TV stops the noise, muting Windows doesn't. I tried replacing the PSU first (used a Tagan 700W PSU before) because I thought it was a power problem. It wasn't. I tried replacing the motherboard (used a ASUS P5B SE before) next because I thought it was a sound card problem. It wasn't. I tried the GPU in a different PC because I thought it was a broken graphics card. It worked perfectly fine in the other PC. I thought it might be interference, but moving the audio cable around changes absolutely nothing. I tried using an HDMI cable instead and that did work, but is not an option since my TV has only one HDMI input and I need that for my PS3.

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from. But I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem... First off I just tried opening it from my own PC. Windows prompted to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: in the listed drives it comes up as

        Disk /dev/sdc - 6144 B - USB Flash Drive

    That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway... the first analysis comes up with:

        Partition sector doesn't have the endmark 0xAA55

    TestDisk's Quick Search gave no results, so I moved on to Deeper Search:

        No partition found or selected for recovery

    This left me stumped. I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive. But it returned the message:

        Windows was unable to complete the format.

    From googling that, the suggestion was to delete the partition. But there is no partition to delete in this case. Most recently I've tried formatting from cmd, and got this result:

        Format D: /FS:FAT32
        The type of the file system is RAW
        The new file system is FAT32
        Verifying 0M
        11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned
        The volume is too small for FAT32

    Anyone got any suggestions?

    UPDATE: As per the suggestion from @Karen, I tried running a CLEAN from DISKPART, with results as follows:

        DiskPart has encountered an error: The request could not be performed because of an I/O device error.

  • MySQL Memory Limit Windows Server 2003

    - by Matt
    I am running MySQL 5.0.51a on Windows Server 2003 Standard Edition on an HP DL580 G4 with 3GB installed. One of my database tables has grown to 5.3 GB with an index file of 2.5 GB, which I believe is causing MySQL to be slow due to having to constantly load and unload the index file when updates are made to the table. The server itself seems to be performing OK because MySQL is only using about 500MB of memory (there are other apps running on the system, but MySQL uses the most memory). The table is fairly active, with new records being added all day but no deletes, ever. The MySQL server has up to 600 connections allowed, but only a small number (10 or 20) would actually be writing to this table. I increased the memory limits in MySQL, but since the max connections is so high I don't think I can give each connection 1GB without risking a problem. Is there some tuning that would let just certain connections get a lot of memory? So I have started to look for alternatives to avert the crisis I know is coming soon. Some of the options I have:

    - Upgrade to Server 2003 Enterprise to install 64GB of memory. Question: would 32-bit MySQL be able to access more than 2GB? Would that be 2GB per thread? That would still be smaller than the index table size, so it might not solve the problem completely, but it would be better than now.
    - Upgrade to Server 200x 64-bit and MySQL 64-bit.
    - Switch to a *nix 64-bit server.

    If anybody has suggestions for things to do in the meantime, opinions on which way to go, or other things that I have overlooked, I would appreciate the help. Thanks
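
    One distinction that may help before any hardware change (a sketch, assuming the big table is MyISAM, which the separate 2.5 GB index file suggests): the MyISAM index cache (key_buffer_size) is a single server-wide buffer shared by every connection, so raising it does not multiply by the 600 allowed connections the way the per-thread buffers do. And on 32-bit Windows the mysqld process is limited to roughly 2 GB of address space regardless of the server edition, so it is 64-bit MySQL that removes that ceiling.

        # my.ini -- hedged sketch of the server-wide vs per-connection distinction
        [mysqld]
        # global MyISAM index cache: one allocation, shared by all connections
        key_buffer_size = 1024M
        # per-thread buffers: keep these modest, they scale with concurrently active connections
        sort_buffer_size = 2M
        read_buffer_size = 1M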

  • Can SQL Server (2008) transaction logs handle the database being dropped and re-created?

    - by Ben
    We're trying to restore a database (created programmatically by running a hand-crafted SQL script we have). Our backup routine is to create a full backup of every database on the SQL Server 2008 instance on a Saturday, then automatic transaction logs (I assume these are created automatically anyway - we appear to have lots of log files, possibly one per transaction after the full backup was taken?). On Tuesday this week the database in question was dropped and another one with the exact same name and schema was created. SQL Server has continued to create transaction log files, but it hasn't had a chance to create a new full backup (that won't happen until next Saturday). Now as it turns out we need to restore the database to how it was on Thursday. This is after the "drop and re-create". My question is, is this possible? If it isn't, what exactly does SQL Server think that it's writing to those transaction logs created since the drop and re-create? (I understood they were kind of files containing a binary delta, which makes me think maybe we can restore from them?) I'm no DBA, but then neither is our IT department, so I'm doing the best I can to resolve this. Any advice much appreciated!
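
    For reference, point-in-time recovery in SQL Server works by restoring the last full backup of the same database and then replaying its unbroken chain of log backups up to a timestamp. A sketch of the syntax only (names, paths and the timestamp are placeholders):

        -- hedged sketch: standard point-in-time restore, which requires a full backup
        -- of the *same* database as the starting point of the log chain
        RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_full.bak'
            WITH NORECOVERY;
        RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_log1.trn'
            WITH NORECOVERY, STOPAT = '2010-01-14 12:00:00';
        RESTORE DATABASE MyDb WITH RECOVERY;

    Dropping and re-creating the database breaks that chain: any log backups taken since then belong to the new database, and without a full backup of it there is no base to replay them against.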

  • Slower/cached Linux file system required

    - by Chopper3
    I know it sounds odd but I need a slower or cached filesystem. I have a lot of firewalls that are syslog'ing their data to a pair of Linux VMs which write these files to their 'local' (actually FC SAN attached) ext3-formatted disks and also forward the messages to our Splunk servers. The problem is that the syslog server is writing these syslog messages as hundreds, sometimes thousands, of tiny ~4k writes per second back to our FC SAN - which can handle this workload right now but our FW traffic's going to be growing by at least a factor of 5000% (really) in coming months and that'll be a pain for the SAN, I want to fix the root cause before it's a problem. So I need some help figuring out a way of getting these writes cached or held-off in some way from the 'physical' disks so that the VMs fire off larger, but less frequent, writes - there's no way of avoiding these writes but there's no need for it to do so many tiny ones. I've looked at the various ext3 options, setting noatime and nodiratime but that's not made much of a dent in the problem. Obviously I'm investigating other file systems but thought I'd throw this out in case others have the same problem in the future. Oh and I can't just forward these messages to Splunk, our firewall team insist they're in their original format for diag purposes.
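
    One set of knobs worth knowing about (a sketch, assuming ext3 and that losing up to a minute of syslog on a crash is acceptable): ext3's commit interval and the kernel's dirty-page writeback settings control how long small writes sit in the page cache before being flushed, so raising them lets the VM batch those tiny 4k writes into larger, less frequent IOs to the SAN.

        # /etc/fstab -- hedged sketch: flush the journal every 60s instead of the 5s default
        /dev/sdb1  /var/log/remote  ext3  defaults,noatime,nodiratime,commit=60  0 2

        # /etc/sysctl.conf -- hedged sketch: let dirty pages age longer and grow larger before writeback
        vm.dirty_expire_centisecs = 6000
        vm.dirty_writeback_centisecs = 1500
        vm.dirty_background_ratio = 10
        vm.dirty_ratio = 40

    The trade-off is losing buffered messages on a crash or power failure, which for a syslog relay that also forwards everything to Splunk may be tolerable.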

  • What does the 'Burst Rate' stat mean in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as:

        ...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system.

    Which begs some questions... e.g. if this is the highest, then how did the benchmarking tool record the 103MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, Yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it and imported the SQL file and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I’ve done it before without issue, however this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as default for everything.) For example with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and use urls;. I can even show tables; to see the expected table, but then when I try to describe urls; or select * from urls;, it complains that the table does not exist (even though it just listed it). (The MySQL Administrator lists the databases, but does not even list the tables, it indicates that the dbs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it even after scouring my hard-drive). So I am trying to figure out a way to repair these databases/tables. I can’t use the table repair function since it complains that the table does not exist, and I can’t dump them because again, it complains that the tables don’t exist. Like I’ve said, the data itself is still present in the .ibd files and the table names are present. I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex-editor). Any idea how I can repair this type of corruption? I don’t mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.
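
    For what it's worth, the usual way to re-attach an orphaned .ibd file is the discard/import tablespace dance. This is a sketch only: on 5.0/5.1 the import can fail with a tablespace-id mismatch (the id stored in the .ibd no longer matches the new data dictionary), in which case Percona's InnoDB recovery tools are the fallback. Work on copies of the .ibd files, not the originals.

        -- if the old .frm/.ibd pair is still in the database directory, move it aside first,
        -- then recreate the table with the exact original schema (columns recovered from the
        -- ibdata1 hex dump or from the application)
        CREATE TABLE urls ( ... ) ENGINE=InnoDB;
        -- throw away the empty .ibd this just created
        ALTER TABLE urls DISCARD TABLESPACE;
        -- copy the preserved urls.ibd back into the database directory, then attach it:
        ALTER TABLE urls IMPORT TABLESPACE;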

  • Windows 8 install from USB freezes

    - by Rafael Almeida
    I'm trying to install Windows 8 from an 8GB Kingston Data Traveler. I'm currently using the Windows 7 USB DVD Download Tool to put the ISO onto the flash drive. It copies the files, but in the end it says it 'had a problem with bootsect' and could not make the flash drive bootable. This seems to be because my current system is Windows 7 32-bit, and the bootsect.exe in the ISO is a 64-bit executable. So I downloaded the 32-bit bootsect.exe and made the drive bootable by running:

        bootsect /nt60 E: /mbr

    Then I restarted and managed to boot via the flash drive, but now everything is very slow. It takes about two minutes for the initial black screen with the Windows logo and the spinner to go away, then it goes to a purple-ish blank screen that stays on for about five more minutes, and then it finally shows a dialog asking for the installation, date/time and keyboard languages. I input them, click "Install Now" and it takes about three more minutes with a "Setup is starting" screen. After that, the PC apparently reboots, the CPU fan speeds up considerably, and there's no video and nothing more happens even after more than ten minutes. What is happening? I already tried using another USB port and even installing from a Samsung G3 Station 2TB external hard disk, but the same thing happens. The file transfer speed to the Kingston drive was only about 3 megabytes per second.

  • Wordpress on Apache is redirecting all https to http

    - by Krist van Besien
    I have a problem with a WordPress site on a server I admin. I don't know anything about WordPress, however. My problem is that we want the site to be accessed over https, but somehow all requests to https:// URLs are answered by the server with a 302, redirecting to http. The WordPress site itself is configured to use https, and we see that in the pages that are generated the links are all https links. In the Apache config there are no rewrite rules and no redirects. However, any request to an https:// URL is answered with a redirect to the equivalent http URL. And I really would like to know where these redirects are coming from, what is generating these redirects. I've increased the log level on the webserver to DEBUG, but did not get any info there. I tried to enable debug logging in WordPress per the recipe I found here: http://codex.wordpress.org/Debugging_in_WordPress But did not get a debug.log file in the directory where one should appear. I'm really at a loss here, and need to fix this urgently. Any hints as to where to start looking? Apache is 2.2.14 on Ubuntu. There are several other virtual hosts on this server, using PHP and https without any problem...

    Edit: I created a small info.php script and dropped it in the webserver's root. Calling this yields the output of the script; no redirect is generated. This suggests that it's not the webserver, but WordPress that is doing it. A second thing I noticed is that the redirect comes with several cookies, one of which has "httponly" set. Could that be it?
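
    Two things worth checking (a sketch; the constant names are the standard WordPress debug constants from the Codex page above, and the wp_ table prefix is an assumption): the debug.log file only appears in wp-content/ if the constants are defined in wp-config.php above the "stop editing" line, and a 302 generated by WordPress itself is often its canonical redirect sending visitors back to whatever scheme is stored in the siteurl/home options.

        // wp-config.php -- hedged sketch of the Codex debug recipe
        define('WP_DEBUG', true);
        define('WP_DEBUG_LOG', true);       // writes to wp-content/debug.log
        define('WP_DEBUG_DISPLAY', false);  // keep notices out of the rendered pages

    To see what WordPress thinks its own URL is, check Settings -> General in the admin, or query the options table directly (again assuming the default prefix): SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl', 'home'); -- if those are stored as http://..., that would go a long way towards explaining https requests being bounced back.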

  • Buying an old laser printer -- what will need to be replaced?

    - by marienbad
    Hi all -- as you can see I'm new. I do IT and wiring for a small local shop but I never deal with printers. I do a LOT of printing, and I'd like to stop spending as much money on it. On my local CL, there is an HP 8100DN (duplex network) printer for a very good price (and the toner is a quarter-cent per page). It has printed 200,000 pages and I don't yet know anything else about it. The model was released in 1999. So my questions:

    - What are the parts that tend to need service on laser printers? On eBay, I see fusers, rollers, DC power boards, and motors. What would you expect to replace soon at 200,000 pages?
    - Are there any good "tests" to find out if certain parts are near failure?
    - Do you have anything to say about the HP 8100 specifically?

    The bottom line for me is that if there's any chance of repairs costing more than $100, it's not worth it for me.

  • Can next hop address be same as destination address?

    - by Raj
    Like if the host address is 100.0.0.1 and the next hop address is 100.0.0.2 and the destination IP address is also 100.0.0.2 - is this a valid use case? Any real life usage?

        <dest ip>                          <next hop>
        ip route 100.0.0.2 255.255.255.255 100.0.0.2 weight 1 next-hop-vrf GlobalRouter

    The above is the command on a router inside a VRF. 100.0.0.2 is pingable from the host. 100.0.0.1 and 100.0.0.2 are IP addresses assigned to a VLAN on the host and destination respectively. On a Linux box, such a configuration is valid:

        [root]# netstat -r -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        55.55.55.55     55.55.55.55     255.255.255.255 UGH       0 0          0 eth0

        [root]# ip route show
        55.55.55.55 via 55.55.55.55 dev eth0

    As per my understanding, if a destination IP is reachable (i.e. in the same subnet as the host IP) we don't need a next hop. I came across one application for using a next hop for a destination IP in the same subnet (i.e. for VPN). See this: Will packets send to the same subnet go through routers? If next hop != destination IP but they are in the same subnet as the host, that is a valid scenario for VPN; so I am wondering what the applications are of next_hop == dest_ip with the subnet the same as the host's. This is my first post on Super User. Extremely happy with the quick and warm response.
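
    As a side note, Linux will accept such a route from the command line too, as long as the gateway address is already reachable on-link (a sketch using the addresses from the question; it assumes the host already has the connected 100.0.0.0/24 route on eth0):

        # host route whose gateway is the destination itself
        ip route add 100.0.0.2/32 via 100.0.0.2 dev eth0

    Functionally it behaves much like a directly connected /32: the kernel still ARPs for 100.0.0.2 on eth0, it just resolves the next hop explicitly instead of treating the destination as on-link.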

  • How can I automate or script daily downloads for any new anti- virus databases, and then have the program scan my drive?

    - by Macgrimm
    Howdy all Super Users. I humbly ask if any Super User can direct this long-time, gray-haired Apple tech in the right direction on this issue. I believe there are probably many ways to skin this cat, but I am looking for simply the best, most unattended way to get it done. Any help will be greatly appreciated. Also, I know there are much better softwares out there for the Mac, so please don't go there! The politics of this company dictate which anti-virus we have to use. Anyway, without any further wait: basically I am trying to automate 2 very important functions of McAfee Anti-Virus for Mac. First I want to automate the process of retrieving new virus definition files, and second I want to automate the process of scanning for viruses. It turns out that with McAfee Anti-Virus for the Mac, both of these are manual functions, and they are left up to the user (per user account) to perform. Depending on all of about 150 Mac users to perform these 2 tasks themselves gets around 65% compliance. My question then is: I can use the command line such as (open /Applications/McAfee\ Security.app), which will open up the Security Console. But how can I make McAfee go out and grab the definition files and scan the computer from the command line? I have to admit I am at a crossroads and MacAltimers has set in. I would really appreciate it if any of you "Super ~ Users" could help me out of this MacAltimers loss of what to do. Thanks to all up front - Macgrimm

  • cPanel FTP account access to sym links from parent directory

    - by totbar
    I would like to give a potential developer temporary access to some of my projects. I have almost everything in its own subdomain, and each directory is a sibling to my public_html directory. It looks something like this ("developer" is the cPanel account name):

        developer/        *This is the top level directory for the cPanel account, "/home/developer"
          site1/          *site1.mysite.com
          site2/          *site2.mysite.com
          site3/          *site3.mysite.com
          public_html/    *www.mysite.com
          ... etc

    I created a directory inside public_html called tempdev and I added symbolic links to each of the sibling directories listed above. My understanding of cPanel is that I can only assign one user with "Special FTP Access" per domain, and I really don't want to give a complete stranger my login creds (it's just a development environment, but still). So I used the cPanel FTP account creator UI. It will not allow me to assign the user access to directories outside of public_html; I can't even give access to public_html itself. So I made the tempdev directory in www and created the symlinks. Using the new account, I can see the symlinks, but I can't go into them. Is there a better way to accomplish what I am attempting?
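
    FTP daemons typically refuse to follow symlinks that point outside the account's own tree, which is probably why the links show up but can't be entered. One workaround (a sketch run as root over SSH; paths assume the layout described above) is to bind-mount the project directories instead of symlinking them, since the FTP server then sees them as ordinary directories:

        # replace the symlinks with bind mounts
        mkdir -p /home/developer/public_html/tempdev/site1
        mount --bind /home/developer/site1 /home/developer/public_html/tempdev/site1

        # make it survive reboots
        echo "/home/developer/site1 /home/developer/public_html/tempdev/site1 none bind 0 0" >> /etc/fstab

    The same pair of lines repeats for site2 and site3; umount removes the access again when the engagement ends.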

  • Giving the root user priority to maintain Debian (while server collapsing under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize any or specific root activity above everything else? For instance, several times per year something goes wrong (usually a human's fault, by overstressing apache/mysql) and the system becomes unresponsive under a heavy load like 200 (8-core CPU). I know there are limits for PHP scripts to run and then be killed, but that's not the way, because this limit has to be at least 45 minutes long. The problem is, by the time I'm able to log in via SSH and restart apache/mysql under this server stress, it nearly hits those 45 minutes anyway. Also, a hardware restart usually causes fsck to run at boot time on all hard drives, since it's usually been a pretty long time since the box was last restarted. I was told it's really not a good idea to disable fsck, but then again, it takes more than an hour to complete. What is the fastest way to restart apache/mysql? Is there any way to give SSH users or the root user higher priority, so that logging in and completing these restarts (or rather stops) wouldn't take so long? One idea comes to mind: use nice for apache/mysql, but no way - I can't risk limiting those two vital apps 24/7, or could I? I'm a little bit scared that any other system process would then slow the pages down too much: any backup process, swap (if any), etc. There is a pretty heavy PHP framework with 20k visits a day, so it needs every hw/sw resource available. I can't throttle it the whole time, just at certain points when the system gets unresponsive, so I could maintain it.
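
    Rather than running apache/mysql niced 24/7, one middle ground (a sketch; process names assume Debian's apache2/mysqld, and the load threshold is arbitrary) is to demote them only when the load spikes, either by hand or from a small cron watchdog, so the SSH session and the stop/restart commands stay responsive:

        #!/bin/sh
        # hedged sketch: when the 1-minute load gets out of hand, push apache/mysql to
        # low CPU and idle I/O priority so root's shell can still get work done
        LOAD=$(cut -d' ' -f1 /proc/loadavg | cut -d. -f1)
        if [ "$LOAD" -ge 50 ]; then
            renice +15 -p $(pgrep -d' ' apache2) $(pgrep -d' ' mysqld) 2>/dev/null
            ionice -c3 -p $(pgrep apache2) 2>/dev/null
        fi

    Run from cron every minute, this only throttles the two services during an incident, and renicing them back to 0 afterwards undoes it. It doesn't make the restarts themselves faster, but it keeps the box manageable enough to issue them.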

  • How to flash Dell Precision 390 from linux (debian)

    - by malat
    I am trying to update my BIOS:

        $ sudo dmidecode -s bios-version
        2.1.2

    with a newer one: 2.6.0. I went to this page: Dell Precision System BIOS, 2.6.0. After downloading the file WS390-020600.BIN, here is what it states:

        $ ./WS390-020600.BIN --help
        Usage: WS390-020600.BIN [options]
        Options:
            --help      Print this text.
            --version   Print package versions.
        If no options, update the BIOS.

    and

        $ ./WS390-020600.BIN --version
        Dell BIOS Update Installer 1.2
        Copyright 2006 Dell Inc. All Rights Reserved.
        ./WS390-020600.BIN: 60: ./WS390-020600.BIN: ./flash: not found

    Does anyone know where this flash command can be found?

    Update: it looks like this is a self-extracting archive (it needs bash, per a comment in the header).

        $ head -30 WS390-020600.BIN
        [...]
        Extract()
        {
            tail -n +`awk '/^__ARC__/ { print NR + 1; exit 0; }' $0` $0 | gzip -cd >$_PRG

    So the flash command should have been auto-generated; however, the above command does not appear to be running as the original author intended. I do not see anything wrong with the command though.
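
    Two things worth trying (a sketch; nothing here is Dell-specific advice): run the installer explicitly under bash rather than whatever /bin/sh points to (on many Debian installs /bin/sh is dash, which can trip up bash-isms in such wrappers), and if it still can't find its extracted flash helper, pull the payload out by hand using the same marker the script's own Extract() uses and see what it actually contains.

        # run under bash explicitly
        sudo bash ./WS390-020600.BIN

        # or extract the embedded payload manually and inspect it
        offset=$(awk '/^__ARC__/ { print NR + 1; exit 0; }' WS390-020600.BIN)
        tail -n +"$offset" WS390-020600.BIN | gzip -cd > ws390-payload
        file ws390-payload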
