Search Results

Search found 6743 results on 270 pages for 'regular joe'.


  • Using off-the-shelf hardware for brand-name servers; Possible? Good idea?

    - by threecheeseopera
    Is it possible or advisable to use 'regular', not-sanctioned-by-the-server-manufacturer hardware in high-end servers? Often these manufacturer-supplied parts carry a very high price markup, and I wonder if it's always necessary (understanding that they probably apply more rigorous requirements to this hardware). For example, Dell sells 300GB 15,000rpm serial-attached SCSI drives for a certain server family for almost $600 each, while Newegg sells a drive with the same specs for almost half the price: http://www.newegg.com/Product/Product.aspx?Item=N82E16822116059. Do we really need to pay these high markups, especially for disks that are likely RAID-ed and so guarded against catastrophic failure?

    Read the article

  • rkhunter warns of inode change but no file modification date changes

    - by Nicholas Tolley Cottrell
    I have several systems running CentOS 6 with rkhunter installed. I have a daily cron running rkhunter and reporting back via email. I very often get reports like: ---------------------- Start Rootkit Hunter Scan ---------------------- Warning: The file properties have changed: File: /sbin/fsck Current inode: 6029384 Stored inode: 6029326 Warning: The file properties have changed: File: /sbin/ip Current inode: 6029506 Stored inode: 6029343 Warning: The file properties have changed: File: /sbin/nologin Current inode: 6029443 Stored inode: 6029531 Warning: The file properties have changed: File: /bin/dmesg Current inode: 13369362 Stored inode: 13369366 From what I understand, rkhunter will usually report a changed hash and/or modification date on the scanned files too, so this leads me to think that there is no real change. My question: is there some other activity on the machine that could make the inode change (running ext4), or is this really yum making regular (~ once a week) changes to these files as part of normal security updates?
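    A quick way to tell whether these are just package updates is to ask RPM about the flagged binaries and, if everything checks out, refresh rkhunter's baseline. A minimal sketch, assuming a stock CentOS 6 install (adapt the file list as needed):

      # which packages own the flagged files, and do the files still match the RPM database?
      rpm -qf /sbin/fsck /sbin/ip /sbin/nologin /bin/dmesg
      rpm -Vf /sbin/fsck        # no output (or only benign flags) means the file matches its package
      # were those packages touched by a recent yum transaction?
      yum history list all | head -n 20
      # if the changes turn out to be legitimate updates, refresh rkhunter's stored file properties
      rkhunter --propupd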

    Read the article

  • Using wget to recursively download whole FTP directories

    - by user9406
    I want to copy all of the files and folders from one host to another. The files on the old host sit at /var/www/html and I only have FTP access to that server, and I can't TAR all the files. Regular connection to the old host through FTP brings me to the /home/admin folder. I tried running the following command from my new server: wget -r ftp://username:[email protected] But all I get is a made-up index.html file. What's the right syntax for using wget recursively over FTP?
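    For reference, a hedged wget sketch (the host and credentials below are placeholders, and the flags should be adapted): FTP paths are relative to the login directory (/home/admin here), and a leading %2F is the usual way to request an absolute path, so something along these lines may work:

      wget -r -np -nH --cut-dirs=2 "ftp://username:PASSWORD@oldhost.example.com/%2Fvar/www/html/"
      # -r            recurse into subdirectories
      # -np           do not ascend to the parent directory
      # -nH           do not create a host-named directory locally
      # --cut-dirs=2  drop the leading var/www components from the local paths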

    Read the article

  • Write to windows share mounted in Ubuntu

    - by aidan
    I used to mount a Windows share in Ubuntu server, with an entry in fstab: //data/SharedFolder /media/SharedFolder/ smbfs user,defaults,credentials=/root/.creds,uid=root,gid=root 0 0 /root/.creds is a text file with three lines: my username, password and domain. Users on the Ubuntu server could write to this mount, but then I upgraded to 10.04 and now only root can write. Regular users can still read, though. mount currently tells me: //data/SharedFolder on /media/SharedFolder type cifs (rw,mand,noexec,nosuid,nodev) How do I make it world-writable again? Thanks
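    Since mount already reports the share as type cifs, the cifs permission options apply; a hedged fstab sketch (the 0775 modes and the gid are examples, not values from the original setup):

      //data/SharedFolder /media/SharedFolder cifs user,credentials=/root/.creds,uid=root,gid=users,file_mode=0775,dir_mode=0775 0 0

    Alternatively, the cifs noperm option skips client-side permission checks entirely, which also restores world-writability at the cost of any local access control.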

    Read the article

  • Completely clean previous Radeon drivers on Windows 7 64bit

    - by tomo
    Recently I replaced my old Radeon HD 2600XT with a Radeon HD 6770 from MSI. I have a strange problem: after installing the newest Radeon drivers, they exist only until the first reboot. After a reboot my new card is recognized as the old 2600. I tried to uninstall the ATI/AMD software completely from Programs/Features, then reboot, then uninstall the driver from Device Manager, then reboot; then the system showed the display driver as regular VGA (and old-school 640x480 resolutions). Then, to be doubly sure, I ran DriverCleaner3 and Driver Sweeper. After the restart I installed the newest drivers from the AMD site, but after restarting, the system recognizes the card as a 2600. I'm completely lost. Perhaps Win7 64-bit caches drivers somewhere? Are there any issues regarding driver shadowing or 32/64 mirroring? Reinstalling the system is not an option.

    Read the article

  • Windows 7 unresponsive after hibernate.

    - by Simon Verbeke
    For the last few days, when I bring Windows out of hibernation, Windows itself becomes unresponsive. I can use any of the programs that were already open before hibernating, but the window functions (closing, resizing, etc.) don't respond. I can't open any new programs, except for the Task Manager. Also, I don't get an Internet connection. When I use Ctrl + Alt + Del, I get either a black screen or the regular background. Then it hangs, and after a while it shows me a message that Windows failed to start up in that "thing" (I don't know what it's called in English; something with safety, etc.). I have Windows 7 Ultimate installed.

    Read the article

  • How can I bulk rename files in a RAR or ZIP archive on the mac?

    - by Chris R
    I have a set of archive files -- both zip and rar formats -- inside of which I need to rename some files. Specifically, I want to do something like this: for each archive file in a directory for each file in the archive if the file name matches the regular expression /(.* - [0-9]{2})([0-9]{2} - .)*/ rename the file as \1-\2 The trick isn't so much in the generation of the new name; I can do that with either bash or sed or anything else. It's the set of commands to manipulate the files in the archives using rar/unrar or unzip/zip (If it makes a difference, I'm re-formatting some CBR/CBZ files to get the double-page spreads to come up in the right order in SimpleComic -- it interprets page 0203 as page 203, which makes the story a bit hard to follow)
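    For the zip half, a rough bash sketch of the extract-rename-repack loop (untested; it assumes the stock zip/unzip tools, that pages sit at the top level of each archive, and the regex is adapted slightly from the one in the question; the rar side would need an analogous loop with unrar/rar):

      re='^(.* - [0-9]{2})([0-9]{2} - .*)$'
      for archive in *.zip; do
          tmp=$(mktemp -d)
          unzip -q "$archive" -d "$tmp"
          for f in "$tmp"/*; do
              base=$(basename "$f")
              # rename "Title - 0203 - rest" to "Title - 02-03 - rest", i.e. \1-\2 as in the question
              [[ $base =~ $re ]] && mv "$f" "$tmp/${BASH_REMATCH[1]}-${BASH_REMATCH[2]}"
          done
          rm "$archive"
          (cd "$tmp" && zip -q -r "$OLDPWD/$archive" .)
          rm -rf "$tmp"
      done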

    Read the article

  • Objective-C protocol vs inheritance vs extending?

    - by ryanjm.mp
    I have a couple of classes that have nearly identical code. Only a string or two is different between them. What I would like to do is to make them "x" from another class that defines those functions and then uses constants or something else to define those strings that are different. I'm not sure if "x" is inheritance or extending or what. That is what I need help with. For example: objectA.m: -(void)helloWorld { NSLog(@"Hello %@", child.name); } objectBob.m: #define name @"Bob" objectJoe.m #define name @"Joe" (I'm not sure if it's legal to define strings, but this gets the point across) It would be ideal if objectBob.m and objectJoe.m didn't even have to define the methods, just their relationship to objectA.m. Is there any way to do something like this? It is kind of like a protocol, except in reverse: I want the "protocol" to actually define the functions. If all else fails I'll just make objectA.m: -(void)helloWorld:(NSString *)name { NSLog(@"Hello %@", name); } And have the other files call that function (and just #import objectA.m).

    Read the article

  • FCoE, on any Ethernet switch?

    - by javano
    I understand the concept of FCoE. I have looked at the Wikipedia page, and looking at the layer 2 frame diagram, it looks like FCoE really should "just work" on any Ethernet switch, but is this really the case? If so, what do switches like Cisco's Nexus 5k or 6120P offer that normal switches don't (in specific relation to FCoE)? I am just using those two switches as examples. The Nexus 5548UP page, for example, says the following: Unified ports that support traditional Ethernet, Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE) Well, if FCoE runs over regular Ethernet, then why does it call out support for "Ethernet and Fibre Channel over Ethernet" separately? This is why I am curious as to whether FCoE will run on any Ethernet switch and these switches just add "bonus" features, or whether you do indeed require a specialist switch. Thank you.

    Read the article

  • Issue with https:// url going to an unknown location

    - by Brandon
    We have a website (ASP.NET/Plesk 9.5.5) that can be accessed just fine through the regular URL (http://example.com). However when accessing the site through https://example.com the site displays the invalid security certificate warning, which is fine since we don't have an SSL certificate. If I add an exception, I'm sent to a completely separate site that is apparently hosting a malware script (I'm still on https://example.com though). Because of this Google has flagged the site as dangerous. I can't find anything in the Plesk panel that would help fix this, and as far as I can tell those files don't exist on our server. How do I tell where the https:// link is sending me? I'm not that familiar with DNS, but is that what is causing this behavior?

    Read the article

  • Cron Script to Delete Folder Contents Every 5 Minutes on Media Temple

    - by Brian Iannone
    I'm not familiar with server-side scripting, but I'm currently using a PHP application on Media Temple to cache JPEGs from four webcams hosted on a server located in the middle of the Indian Ocean. (Hence my reason for caching them in the US.) The webcams are updated every five minutes. The PHP application stores the cached images in http://static.rigic.co/cache/. I would like to create a cron script to automatically delete the contents of "cache" (not the folder itself; just the files inside) at a regular interval.
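    Assuming the cache URL maps to a directory inside the site root (the path below is a placeholder to adjust for the actual Media Temple account layout), a crontab entry along these lines clears the files every five minutes while leaving the folder itself in place:

      */5 * * * * find /path/to/httpdocs/cache -maxdepth 1 -type f -delete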

    Read the article

  • Command+Tab not working on MacOSX Client with Synergy

    - by dragonmantank
    I have an Ubuntu 11.10 laptop paired with an Apple Wireless Keyboard. I'm using the answer on this question to swap the Command and Option keys so that they are mapped like a regular keyboard. This works fine in Ubuntu. Now I have a Macbook Pro running as a Synergy Client. This mapping kind of works on the Macbook, but it only allows me to Command+Tab between two programs, not all of them. Copy, Paste, and other Command keys seem to work fine, but not Command+Tab. It's annoying just because I use the keyboard a lot. And yes, I could switch to syncing the keyboard and the Macbook, but I use the Ubuntu laptop more.

    Read the article

  • Throttling apache downloads selectively

    - by Synchro
    I have a Linux box running Debian Sarge (old, I know) and Apache 2.0.54. It serves two kinds of files - regular web pages and small images, and a lot of large podcast mp3s. The podcast downloads swamp the connection and make the rest of the site unresponsive, so I'm looking to throttle the data transfer rate (not the request rate) of just the podcasts. I've set up haproxy using this technique, which does what it says it will but solves a different problem - even only 5 simultaneous podcast downloads are enough to saturate the link. In a perfect world, haproxy would support per-connection throttling, but it doesn't. So far I've looked at mod_bw (won't compile for me, seems unsupported), mod_cband (unsupported, widely reported as problematic) and iptables using tc. The iptables approach would allow me to throttle things, but would not be at all selective, slowing down everything on the server, not just the podcasts, so it would just move the bottleneck without changing overall behaviour. Ideas?
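    One hedged workaround sketch (the port numbers and rates are invented, and it assumes the podcasts can be moved to their own Apache listener or vhost): serve the mp3 directory on a second port, then let tc shape only that port's outbound traffic, leaving the regular site alone:

      tc qdisc add dev eth0 root handle 1: htb default 10
      tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit              # everything else
      tc class add dev eth0 parent 1: classid 1:20 htb rate 2mbit ceil 2mbit     # podcast listener
      tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 8080 0xffff flowid 1:20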

    Read the article

  • Vetting Github Pull requests with Hudson

    - by cdecker
    I've been using Gerrit and Hudson very successfully to test and automatically vote on new checkins in the past, and now I'm wondering whether it is possible to set up Hudson so that it checks GitHub at regular intervals and looks for new pull requests. If so, it should apply the patch and run the unit tests against it, adding a comment to the pull request if no failure is detected. It would certainly reduce the amount of work going into vetting patches/pull requests. Is that possible at all, or should I stick with my Gerrit setup?

    Read the article

  • AdBlock Plus Advanced Element Hiding?

    - by funkafied
    I'm trying to block a certain element on a site using AdBlock Plus's element hiding feature. However the problem is that there are two elements with the same exact name and type that I'm trying to hide so there's no way to tell the filter which one to keep and which one not to keep. So I figure there might be a way to hide only the second element by telling it to only hide the second occurrence of an element that matches the filter. Like skip the first one and hide the second occurrence. Or alternatively maybe hide the one that also has a certain other element in front of it. Is there any way to do this? Like regular expressions or something?
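    Element hiding rules are just CSS selectors handed to the browser, so if the browser supports :nth-of-type, the second occurrence can be targeted directly; a hedged example with an invented domain and class name:

      example.com##div.promo-box:nth-of-type(2)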

    Read the article

  • Ubuntu: move logs from /dev/tty8 to different terminal /dev/tty12 or get rid of it.

    - by Casual Coder
    I want to know how to move or get rid of the /dev/tty8 log output in Ubuntu 9.10. /dev/tty7 is my regular X session. When I switch user to a test account where I can try and test setups and configs, I am at the next available console, i.e. /dev/tty9, because /dev/tty8 is taken by log output. Where can I configure this? All I've found related to /dev/tty8 are commented lines in /etc/rsyslog.d/50-default.conf. I changed it like this: daemon,mail.*;\ news.=crit;news.=err;news.=notice;\ *.=debug;*.=info;\ *.=notice;*.=warn /dev/tty12 And I get nice log output on /dev/tty12, but where is the configuration for the log output on /dev/tty8? How can I change it?

    Read the article

  • How can I use encryption with Gmail?

    - by Torben Gundtofte-Bruun
    I'm currently reading Cory Doctorow's novel Little Brother which includes a part about encrypted messaging, and even wrapping messages first in my private key and then your public key. I'd like to play around with that but from what I've googled so far it seems to be a rather convoluted process, requiring installing several program components, and creating an encrypted message requires doing some manual file manipulation. I'm surprised that I can't find something like a Firefox plugin that integrates encryption into Gmail. I've seen that there is a Thunderbird PGP plugin, but I don't use T-bird. I also saw a blog post that Google apparently toyed with PGP support in 2009, but nothing has appeared in the meantime. Question: To use encryption with Gmail, is there a simpler method than creating a file locally, then encrypting that file, and finally attaching it to a regular Gmail message?
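    For what it's worth, the manual workflow the book alludes to boils down to a couple of gpg commands; with --armor the ciphertext comes out as plain text that can be pasted straight into the Gmail compose window instead of being attached (the address and filename here are placeholders):

      gpg --armor --sign --encrypt -r friend@example.com message.txt   # writes message.txt.asc
      gpg --decrypt message.txt.asc                                    # what the recipient runs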

    Read the article

  • BASH echo write mysql input

    - by jmituzas
    I have a bash menu where variables are written to a file for MySQL input. Here's what I have: echo "CREATE DATABASE '$mysqldbn'; #GRANT ALL PRIVILEGES ON *.* TO '$mysqlu'@'$myhost' IDENTIFIED BY '$mysqlup' WITH GRANT OPTION; GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$myhost' IDENTIFIED BY '$mysqlup'; GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'$myip' IDENTIFIED BY '$mysqlup'; GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON '$mysqldbn'.* TO '$mysqlu'@'localhost' IDENTIFIED BY '$mysqlup'; GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES< LOCK TABLES on '$mysqldbn'.* TO '$mysqlu'@'$rip' IDENTIFIED BY '$mysqlup';" > nmysql.db mysql -u root -p$mypass < nmysql.db The problem is that to get the variables to show I had to put them in single quotes; the single quotes show up as I want for instances like '$mysqlu'@'localhost', but how can I remove the quotes and still get to use the variable in an instance like CREATE DATABASE '$mysqldbn'? Double quotes won't work either; I am at a loss. Thanks in advance, Joe
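    A hedged tweak of the same approach (a sketch, not a full rewrite of the script above): identifiers such as the database name can go in escaped backticks, which the shell passes through literally inside double quotes while still expanding the $variables, and the account and password strings keep their single quotes because those are SQL string literals:

      echo "CREATE DATABASE \`$mysqldbn\`;
      GRANT SELECT, INSERT, UPDATE, DELETE ON \`$mysqldbn\`.* TO '$mysqlu'@'localhost' IDENTIFIED BY '$mysqlup';" > nmysql.db
      mysql -u root -p"$mypass" < nmysql.db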

    Read the article

  • Viewpoint gem and Exchange resource account

    - by scott.simpson
    Hi- I'm trying my hand at using the Viewpoint gem (by zenchild @ github) as the base for a meeting scheduling system. It's great at reading calendar information from regular Exchange 2007 accounts, but I got stuck trying to change the SOAP request header to allow me to read resource accounts as a delegate. I came across http://blogs.msdn.com/b/mstehle/archive/2009/06/16/exchange-api-team-blog-exchange-impersonation-vs-delegate-access.aspx and it seems to be what I need, and I have the feeling I'm on the edge of getting it working, but I'm just not quite there yet as a ruby programmer. Any help would be appreciated... Thanks!

    Read the article

  • Storage setup for large files

    - by Mecca
    I need to store over 200TB of data (all types, the biggest being video files) and be able to access it over a local network. The files will be accessed for editing or searches. I don't need versioning, but a setup that would keep me safe from hard drive failures would be nice. Right now the content is on different hard drives, some external drives, some regular. I don't exclude the possibility of buying new/extra drives if necessary. If they will ever be exposed to the web, it won't be to the public, just to a couple of people. I have no idea what to buy to make this happen. I see some NAS solutions on the internet, like this http://www.bestbuy.com/site/a/2266043.p?id=1218317764591&skuId=2266043 but the storage is not enough, plus it doesn't seem to be scalable. What do you recommend? Thanks

    Read the article

  • Commercial web application--scalable database design

    - by Rob Campbell
    I'm designing a set of web apps to track scientific laboratory data. Each laboratory has several members, each of whom will access both their own data and that of their laboratory as a whole. Many typical queries will thus be expected to return records of multiple members (e.g. my mouse, Joe's mouse and Sally's mouse). I think I have the database fairly well normalized. I'm now wondering how to ensure that users can efficiently access both their own data and their lab's data set when it is mixed among (hopefully) a whole ton of records from other labs. What I've come up with so far is that most tables will end with two fields: user_id and labgroup_id. The WHERE clause of any SELECT statement will include the appropriate reference to one of the id fields ("...WHERE labgroup_id=n..." or "...WHERE user_id=n..."). My questions are: Is this an approach that will scale to 10^6 or more records? If so, what's the best way to use these fields in a query so that it most efficiently searches the relevant subset of the database? e.g. Should the first step in querying be to create a temporary table containing just the labgroup's data? Or will indexing using some combination of the id, user_id, and labgroup_id fields be sufficient at that scale? I thank any responders very much in advance.
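    On the indexing question, a hedged illustration (table and column names are invented): a composite index that leads with labgroup_id serves both the lab-wide query and the member-within-a-lab query from one structure, and at 10^6 rows this is normally preferable to materializing a temporary table first; a separate index on user_id alone covers queries that filter only by user:

      CREATE INDEX idx_records_lab_user ON lab_records (labgroup_id, user_id);
      CREATE INDEX idx_records_user ON lab_records (user_id);

      -- lab-wide query uses the prefix of the composite index
      SELECT * FROM lab_records WHERE labgroup_id = 42;
      -- member-within-lab query uses both columns
      SELECT * FROM lab_records WHERE labgroup_id = 42 AND user_id = 7;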

    Read the article

  • ntbackup workalike for ad hoc full backups in Windows 7 that's free and preferably open source

    - by Justin Dearing
    On Windows 2000 and XP machines I used to be able to do the following: ntbackup backup systemstate c: /f e:\backups\machineName\machineName-full+systemstate_200101206.bkf This gave me a full backup of the system that I could use to do a system restore after doing a bare-bones OS install. Windows 7 has a great utility for regular backups with alerting and all that stuff. It does not seem to have command line support. I'd like a backup solution for my Windows 7 systems that has the following features: Is free Is open source (preferably) Works while the system is booted and leaves the system functional (Clonezilla is great for offline backups, and I use that too) Gives me a backup that is suited for a full system restore or a partial system restore (ruling out most imaging software, even if it could work while the system is booted via some sort of shadow copy voodoo) Can work via the command line Compression would be nice; the ability to pipe output would be better.
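    For comparison only (built in and free, but not open source, and command-line backup support on Windows 7 client editions is something to verify rather than rely on): the closest ntbackup-style CLI that ships with the OS is wbadmin, roughly:

      wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet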

    Read the article

  • How to have 2 windows machines on the same network with the same IP address

    - by Stu
    I have a custom-made ADC device that is spitting out data via addressed UDP packets. I have that device plugged into a 4-port switch. I have one Windows Embedded Standard 7 machine which is the normal recipient of that data. To be able to receive the data (using LabVIEW), the Windows network adapter's IPv4 settings must have a static IP address that corresponds to the UDP packet destination. I would like to add a second Windows machine (this one is just regular Win 7 Pro) to simultaneously catch the data; however, with all devices connected to the switch, the Win 7 Pro machine recognizes an IP address conflict and will not take the setting for the required static IP address. (The network adapter settings show that the correct value has been entered, but ipconfig shows that it is not actually set.) Neither Windows machine needs to transmit network data; they only need to be able to receive the UDP data from the ADC device. Is there any way to disable this IP address conflict detection 'feature' of Windows networking?
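    One registry tweak that is sometimes suggested for exactly this situation (treat it as an assumption to verify: it works by suppressing the gratuitous ARPs used for duplicate-address detection, machine-wide): set ArpRetryCount to 0 on the Win 7 Pro box and reboot:

      reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v ArpRetryCount /t REG_DWORD /d 0 /f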

    Read the article

  • trie reg exp parse step over char and continue

    - by forest.peterson
    Setup: 1) a string trie database formed from linked nodes and a vector array linking to the next node, terminating in a leaf; 2) a recursive regular expression function where A) char '*' continues down all paths until the string length limit is reached, then continues down the remaining string paths if valid, and B) char '?' continues down all paths for 1 char and then continues down the remaining string paths if valid; 3) after the reg expression, the candidate strings are measured for edit distance against the 'try' string.
    Problem: the reg expression works fine for adding chars or swapping ? for a char, but if the remaining string has an error then there is no valid path to a terminating leaf, making the matching function redundant. I tried adding a 'step-over' ? char if the end of the node vector was reached and then followed every path of that node - allowing this step-over only once; this resulted in a memory exception. I cannot see logically why it is accessing the vector out of range - backtracking?
    Questions: 1) how can the regular expression step over an invalid char and continue with the path? 2) why is swapping the 'sticking' char for '?' resulting in an overflow?
    Function:

      void Ontology::matchRegExpHelper(nodeT *w, string inWild, Set<string> &matchSet, string out, int level, int pos, int stepover) {
          if (inWild == "") {
              matchSet.add(out);
          } else {
              if (w->alpha.size() == pos) {
                  int testLength = out.length() + inWild.length();
                  if (stepover == 0 && matchSet.size() == 0 && out.length() > 8 && testLength == tokenLength) { // candidate generator
                      inWild[0] = '?';
                      matchRegExpHelper(w, inWild, matchSet, out, level, 0, stepover+1);
                  } else
                      return; // give up on this path
              }
              if (inWild[0] == '?' || (inWild[0] == '*' && (out.length() + inWild.length()) == level)) { // wild
                  matchRegExpHelper(w->alpha[pos].next, inWild.substr(1), matchSet, out+w->alpha[pos].letter, level, 0, stepover); // follow path -> if ontology is full, treat '*' like a '?'
              } else if (inWild[0] == '*')
                  matchRegExpHelper(w->alpha[pos].next, '*'+inWild.substr(1), matchSet, out+w->alpha[pos].letter, level, 0, stepover); // keep adding chars
              if (inWild[0] == w->alpha[pos].letter) // follow self
                  matchRegExpHelper(w->alpha[pos].next, inWild.substr(1), matchSet, out+w->alpha[pos].letter, level, 0, stepover); // follow char
              matchRegExpHelper(w, inWild, matchSet, out, level, pos+1, stepover); // check next path
          }
      }

    Error Message:

      +str "Attempt to access index 1 in a vector of size 1." std::basic_string<char,std::char_traits<char>,std::allocator<char> >
      +err {msg="Attempt to access index 1 in a vector of size 1." } ErrorException

    Note: this function works fine for hundreds of test strings with '*' wilds if the extra stepover gate is not used.
    Semi-Solved: I placed a pos < w->alpha.size() condition on each path that calls w->alpha[pos]... - this prevented the backtrack calls from attempting to access the vector with an out-of-bounds index value. I still have other issues to work out - it loops infinitely, adding the ? and backtracking to remove it, then repeating. But, moving forward now.
    Revised question: why, during backtracking, is the position index accumulating and/or not decrementing - so that at some point it calls w->alpha[pos]... with an invalid position that is either remaining from the next node or somehow incremented by pos+1 when passing upward?

    Read the article

  • Can Windows XP remember file icon position?

    - by g .
    Windows allows you to arrange icons however you like in a window. However, after some time, when I go back to the window, it has forgotten the icon positions and has completely rearranged the icons. Is there a way to preserve the icon positions? In the previous version of Windows I used, this would happen on rare occasions (about every 6 months) and it might only move a couple of icons. Now it is happening practically every day, and it is every single icon. Attempts to rearrange them manually are futile, and it is increasingly maddening. Searching Google, I have found a number of programs that allow you to save and restore icon positions on the desktop. But this is just a regular folder window, so I am not sure if any of those would even work. Some details: Windows XP SP3 running in a Parallels 5 virtual machine; Auto Arrange and Align to Grid are NOT turned on; fresh OS install, and only setting icon positions in one window

    Read the article
