Search Results

Search found 2417 results on 97 pages for 'states'.


  • C++ Virtual Constructor, without clone()

    - by Julien L.
    I want to perform "deep copies" of an STL container of pointers to polymorphic classes. I know about the Prototype design pattern, implemented by means of the Virtual Ctor Idiom, as explained in the C++ FAQ Lite, Item 20.8. It is simple and straightforward:

        struct ABC // Abstract Base Class
        {
            virtual ~ABC() {}
            virtual ABC * clone() = 0;
        };

        struct D1 : public ABC
        {
            virtual D1 * clone() { return new D1( *this ); } // Covariant Return Type
        };

    A deep copy is then:

        for( i = 0; i < oldVector.size(); ++i )
            newVector.push_back( oldVector[i]->clone() );

    Drawbacks: as Andrei Alexandrescu states, the clone() implementation must follow the same pattern in all derived classes; in spite of its repetitive structure, there is no reasonable way to automate defining the clone() member function (beyond macros, that is). Moreover, clients of ABC can possibly do something bad. (Nothing prevents clients from doing something bad, so it will happen.)

    Better design? My question is: is there another way to make an abstract base class clonable without requiring derived classes to write clone-related code? (Helper class? Templates?)

    Following is my context; hopefully it will help in understanding my question. I am designing a class hierarchy to perform operations on a class Image:

        struct ImgOp
        {
            virtual ~ImgOp() {}
            virtual bool run( Image & ) = 0; // 'virtual' added; the original snippet omitted it
        };

    Image operations are user-defined: clients of the class hierarchy will implement their own classes derived from ImgOp:

        struct CheckImageSize : public ImgOp
        {
            std::size_t w, h;
            bool run( Image &i ) { return w==i.width() && h==i.height(); }
        };

        struct CheckImageResolution;
        struct RotateImage;
        ...

    Multiple operations can be performed sequentially on an image:

        bool do_operations( std::vector< ImgOp* > v, Image &i )
        {
            std::for_each( v.begin(), v.end(),
                /* bind2nd(mem_fun(&ImgOp::run), i ...) don't remember syntax */ );
        }

        int main( ... )
        {
            std::vector< ImgOp* > v;
            v.push_back( new CheckImageSize );
            v.push_back( new CheckImageResolution );
            v.push_back( new RotateImage );
            Image i;
            do_operations( v, i );
        }

    If there are multiple images, the set can be split and shared over several threads. To ensure thread safety, each thread must have its own copy of all operation objects contained in v -- v becomes a prototype to be deep-copied in each thread.
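    One frequently suggested answer -- offered here as a hedged sketch, not as the FAQ's own solution -- is a CRTP helper that writes clone() once for all derived classes:

        // Sketch of a CRTP ("Curiously Recurring Template Pattern") helper.
        // Derived classes inherit a correct clone() instead of writing one.
        struct ABC
        {
            virtual ~ABC() {}
            virtual ABC * clone() = 0;
        };

        template <typename Derived>
        struct CloneableABC : ABC
        {
            virtual ABC * clone()
            {
                // Relies on Derived having an accessible copy constructor.
                return new Derived( static_cast<Derived &>( *this ) );
            }
        };

        struct RotateImage : CloneableABC<RotateImage>
        {
            // no clone() needed here
        };

    A known limitation of this sketch: it covers only one level of derivation, so a class derived from RotateImage would again need its own helper layer.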


  • New to asp.net. Need help debugging this email form.

    - by Roeland
    Hey guys, first of all, I am a PHP developer and most of .NET is alien to me, which is why I am posting here! I just migrated a site from one web host to another. The whole site is written in .NET. None of it is database driven, so most of it works -- except for the contact form. The output on the site simply states: "There has been an error - please try to submit the contact form again, if you continue to experience problems, please notify our webmaster." This is just the message it emits when it reaches the "catch" part of the email function. I went into web.config and changed the parameters:

        <emailaddresses>
          <add name="System" value="[email protected]"/>
          <add name="Contact" value="[email protected]"/>
          <add name="Info" value="[email protected]"/>
        </emailaddresses>
        <general>
          <add name="WebSiteDomain" value="hoyespharmacy.com"/>
        </general>

    Then the .cs file for the contact page contains the mail function EmailFormData():

        private void EmailFormData()
        {
            try
            {
                StringBuilder body = new StringBuilder();
                body.Append("Name" + ": " + txtName.Text + "\r\n");   // was "\n\r" in the original
                body.Append("Phone" + ": " + txtPhone.Text + "\r\n");
                body.Append("Email" + ": " + txtEmail.Text + "\r\n");
                body.Append("Fax" + ": " + txtEmail.Text + "\r\n");   // note: reads txtEmail, not a fax field
                body.Append("Subject" + ": " + ddlSubject.SelectedValue + "\r\n");
                body.Append("Message" + ": " + txtMessage.Text);

                MailMessage mail = new MailMessage();
                mail.IsBodyHtml = false;
                mail.To.Add(new MailAddress(Settings.GetEmailAddress("System")));
                mail.Subject = "Contact Us Form Submission";
                mail.From = new MailAddress(Settings.GetEmailAddress("System"), Settings.WebSiteDomain);
                mail.Body = body.ToString();

                SmtpClient smtpcl = new SmtpClient();
                smtpcl.Send(mail);
            }
            catch
            {
                Utilities.RedirectPermanently(Request.Url.AbsolutePath + "?messageSent=false");
            }
        }

    How do I see what the actual error is? I figure I can do something with the "catch" part of the function. Any pointers? Thanks!
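    A minimal way to surface the real exception while debugging -- a sketch, assuming the SmtpClient call is the failing step:

        catch (Exception ex)
        {
            // Log the full exception; SmtpException usually carries the
            // useful detail (e.g. a relay or authentication failure).
            System.Diagnostics.Trace.WriteLine(ex.ToString());

            Utilities.RedirectPermanently(Request.Url.AbsolutePath + "?messageSent=false");
        }

    Temporarily removing the redirect (or rethrowing with "throw;") lets the ASP.NET error page show the stack trace directly. After a host migration, the SMTP settings in web.config (system.net/mailSettings) are a likely culprit.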


  • MATLAB image corner coordinates & referencing to cell arrays

    - by James
    Hi, I am having some problems comparing the elements in different cell arrays. The context is that I am using the bwboundaries function in MATLAB to trace the outline of an image. The image is of a structural cross-section, and I am trying to find whether there is continuity throughout the section (i.e. there is only one outline produced by the bwboundaries command). Having done this and found where there is more than one section traced (i.e. it is not continuous), I have used the cornermetric command to find the corners of each section. The code I have is:

        %% Define the structural section as a binary matrix
        %% (the image is an I-section with the web broken)
        bw(20:40,50:150) = 1;
        bw(160:180,50:150) = 1;
        bw(20:60,95:105) = 1;
        bw(140:180,95:105) = 1;
        Trace = bw;
        [B] = bwboundaries(Trace,'noholes'); % Traces the outer boundary of each section
        L = length(B); % Finds number of boundaries
        if L > 1
            disp('Multiple boundaries') % States whether more than one boundary was found
        end

        %% Obtain perimeter coordinates
        for k=1:length(B) % For all the boundaries
            perim = B{k}; % Obtains perimeter coordinates (as a 2D matrix) from the cell array
        end

        %% Find the corner positions
        C = cornermetric(bw);
        Areacorners = find(C == max(max(C))) % Finds the corner coordinates of each boundary
        % Convert corner coordinate indexes into subscripts, to give x & y
        % coordinates (i.e. the same format as B gives).
        % Note: the original used size(Newgeometry) here, but Newgeometry is
        % never defined in this snippet; size(C) is presumably what was meant.
        [rowindexcorners,colindexcorners] = ind2sub(size(C),Areacorners)

        %% Put these corner coordinates into a cell array
        Cornerscellarray = cell(length(rowindexcorners),1); % Initialises the cell array
        for i =1:numel(rowindexcorners)
            % Assigns the corner indices into the cell array
            % (done so the cell arrays can be compared)
            Cornerscellarray(i) = {[rowindexcorners(i) colindexcorners(i)]};
        end

        for k=1:length(B) % For all the boundaries found
            perim = B{k}; % Obtains coordinates for each perimeter
            Z = perim; % Initialise the matrix containing the perimeter corners
            Sectioncellmatrix = cell(length(rowindexcorners),1);
            for i =1:length(perim)
                Sectioncellmatrix(i) = {[perim(i,1) perim(i,2)]};
            end
            for i = 1:length(perim)
                if Sectioncellmatrix(i) ~= Cornerscellarray
                    % Gets rid of the elements that are not corners, but keeps
                    % them associated with the relevant section
                    Sectioncellmatrix(i) = [];
                end
            end
        end

    This creates an error in the last for loop. Is there a way I can check whether each cell of the array (containing an x and y coordinate) is equal to any pair of coordinates in Cornerscellarray? I know it is possible with matrices to compare whether a certain element matches any of the elements in another matrix. I want to be able to do the same here, but for the pair of coordinates within the cell array. The reason I don't just use the Cornerscellarray cell array itself is that it lists all the corner coordinates and does not associate them with a specific traced boundary.
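    Since the coordinates are really just N-by-2 matrices, one possible approach (a sketch, not tested against the data above) is to skip the cell arrays entirely and use ismember with the 'rows' option:

        % Compare coordinate pairs as matrix rows instead of cell elements.
        corners = [rowindexcorners colindexcorners]; % all corners, N-by-2
        sectioncorners = cell(length(B),1);          % corners kept per boundary
        for k = 1:length(B)
            perim = B{k};                            % boundary k, M-by-2
            keep = ismember(perim, corners, 'rows'); % logical index of corner rows
            sectioncorners{k} = perim(keep,:);       % corners belonging to boundary k
        end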


  • div "top" bug IE and everything else. Big problem

    - by Victor
    Hi everyone. I am new to CSS, so please help me with this problem; I hope to describe it right. I am making a div named "content" where my site content goes. I gave it z-index:-1; so that an image can sit over this div. But in Chrome, Firefox and Safari the content became inactive: I can't select text, click on links or type in the forms. So I tried positive values for the z-index, but IE doesn't behave with those. So I decided to make a conditional div. Here is the CSS:

        .content {
            background:#FFF;
            width:990px;
            position:relative;
            float:left;
            top:50px;
        }
        .content_IE {
            background:#FFF;
            width:990px;
            position:relative;
            float:left;
            top: 50px;
            z-index:-1;
        }

    and here is the HTML:

        <!--[if IE 7]>
        <div class="content_IE" style="height:750px;">
        <![endif]-->
        <div class="content" style="height:550px;">

    Everything is fine with the z-index, but the problem is: if there is no top in the .content class, everything looks fine in IE but there is no space in the other browsers. If I put back the top:50px;, there is another 50px, like padding, in the .content_IE class. I mean that the page looks as if I had set both top:50px; and padding-top:50px;. I've tried everything like margin-top:-50px;, padding-top:-50px; and so on, but I am still going in circles. It looks fine only if there is no top property in the .content class. Please help.
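    One direction worth trying -- a hedged sketch, assuming the overlapping image can be given its own class (here called .overlay, a name not in the original markup):

        /* Keep the content at the default stacking level so it stays
           clickable in every browser, and raise the image instead. */
        .content {
            background: #FFF;
            width: 990px;
            position: relative;
            float: left;
            margin-top: 50px; /* margin instead of top: a relative 'top'
                                 shifts the box visually while its original
                                 space stays reserved, which can read as
                                 doubled spacing */
        }
        .overlay {
            position: relative;
            z-index: 1;
        }

    This avoids negative z-index entirely (the behaviour that differs between IE7 and the other browsers) and removes the need for the conditional-comment duplicate div.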


  • WPF ClickOnce Bootstrap Detection Failure on One Machine

    - by Dexter Morgan
    Hello friends, I've decided to use ClickOnce technology to deploy my new WPF application. By and large, ClickOnce works as advertised, but I've hit a minor glitch regarding bootstrapping and framework detection. Some background:

    - I'm using the standard Visual Studio-generated publish.htm page as my launch page.
    - The only prerequisite is the .NET Framework 4.0 Client Profile.
    - All clients use IE 8.
    - All clients already have the .NET 4.0 Client Profile installed.

    ClickOnce works as advertised on the vast majority of machines. The VS-generated JScript correctly detects that the framework is installed and presents the user with a Run button. The app launches just fine. I'm getting odd results on one of the machines, however. On the offending machine, the VS-generated JScript tells the user that the prerequisites may not be installed -- or rather, it FAILS to detect that the framework is already installed. The "launch" link successfully launches the application, but the Run link points to the bootstrapper setup.exe. Why is it failing to detect the framework on this one machine?

    It occurred to me that framework detection is largely a matter of examining the user-agent string submitted by the browser. So, what you see below are two user-agent strings. The first is from a machine where things are working properly; the second is from the offending machine.

    THIS ONE WORKS:

        2011-01-11 15:14:14 W3SVC1 192.168.0.36 GET /publish.htm - 80 - 72.130.187.100 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.0;+Trident/4.0;+SLCC1;+.NET+CLR+2.0.50727;+Media+Center+PC+5.0;+.NET+CLR+3.5.21022;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+.NET4.0C) 304 0 0

    THIS ONE DOESN'T:

        2011-01-11 18:49:12 W3SVC1 192.168.0.36 GET /publish.htm - 80 - 76.212.204.169 Mozilla/4.0+(compatible;+MSIE+8.0;+Windows+NT+6.1;+WOW64;+Trident/4.0;+GTB6.6;+SLCC2;+.NET+CLR+2.0.50727;+.NET+CLR+3.5.30729;+.NET+CLR+3.0.30729;+Media+Center+PC+6.0;+.NET4.0C) 200 0 0

    The user-agent string of both machines clearly states "hey, the .NET 4.0 Client Profile is installed here" -- yet the second machine seems unable to detect it. I don't know enough about user-agent strings to understand why the former works and the latter fails. The only difference, as far as I can tell, is that the offending machine is running 64-bit. But that shouldn't make a difference. Should it? Any ideas? Dexter Morgan
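    A quick way to narrow this down -- a sketch only, since the actual VS-generated detection script isn't reproduced here -- is to test in the failing browser which token the string actually carries:

        // The 4.0 Client Profile advertises itself as ".NET4.0C" (the full
        // framework as ".NET4.0E"); both log lines above contain .NET4.0C.
        var ua = navigator.userAgent;
        var hasClient40 = ua.indexOf(".NET4.0C") >= 0 || ua.indexOf(".NET4.0E") >= 0;
        alert(hasClient40 ? "4.0 token present" : "4.0 token missing");

    If the token is present but detection still fails, the generated script's parsing (for instance, how it handles the WOW64 and GTB6.6 tokens that only the failing string contains) is the next thing to inspect.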


  • Settings designer file complains when protecting configuration for connectionStrings in App.Config in Visual Studio 2010

    - by Joe
    Hi, I am trying to encrypt configuration information using Protected Configuration in Visual Studio 2010. I have the following info specified in the App.config file:

        <connectionStrings configProtectionProvider="TheProviderName">
          <EncryptedData>
            <CipherData>
              <CipherValue>VALUE GOES HERE</CipherValue>
            </CipherData>
          </EncryptedData>
        </connectionStrings>
        <appSettings configProtectionProvider="TheProviderName">
          <EncryptedData>
            <CipherData>
              <CipherValue>VALUE GOES HERE</CipherValue>
            </CipherData>
          </EncryptedData>
        </appSettings>

    However, when I then go to the Settings area of the project's properties to view the settings in the designer, I get prompted with the following error: "An error occurred while reading the App.config file. The file might be corrupted or contain invalid XML." I understand that my changes are causing the error; however, is there any way I can stop the information from being read at design time? (Of course, the best way would be to make the tags recognized by the designer -- is there any way to do that?) I tried adding

        <connectionStrings configProtectionProvider="TheProviderName"
            xmlns="http://schemas.microsoft.com/.NetConfiguration/v2.0">

    to connectionStrings as well as to appSettings, but with no luck: the IntelliSense errors go away in the config file, but the designer still complains. I would be satisfied if the designer would just stop complaining about this "error", which is not actually an error, because Microsoft states here that it should work:

        ASP.NET 2.0 provides a new feature, called protected configuration,
        that enables you to encrypt sensitive information in a configuration
        file. Although primarily designed for ASP.NET, protected configuration
        can also be used to encrypt configuration file sections in Windows
        applications. For a detailed description of the new protected
        configuration capabilities, see Encrypting Configuration Information
        Using Protected Configuration.

    And yes, it does work to encrypt, decrypt and use the values -- it is just very annoying and frustrating that the designer complains about it. Does anyone know which XSD file is used (if any) to validate the contents of the App.config file in the design view? Any help appreciated.
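    For comparison, a common way to produce such sections is to let the runtime do the encryption instead of pasting EncryptedData by hand -- a sketch using the standard System.Configuration API with the DPAPI provider:

        using System.Configuration;

        var config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
        ConfigurationSection section = config.GetSection("connectionStrings");
        if (!section.SectionInformation.IsProtected)
        {
            // Encrypts the section in the deployed .exe.config at first run;
            // the design-time App.config in the project stays readable.
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
            config.Save(ConfigurationSaveMode.Modified);
        }

    Encrypting at first run (or at install time) sidesteps the designer problem entirely, because the App.config that the Settings designer reads never contains the EncryptedData markup.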


  • JSP to Bean to Java class Validation

    - by littlevahn
    I have a rather simple form in JSP that looks like this:

        <form action="response.jsp" method="POST">
          <label>First Name:</label><input type="text" name="firstName" /><br>
          <label>Last Name:</label><input type="text" name="lastName" /><br>
          <label>Email:</label><input type="text" name="email" /><br>
          <label>Re-enter Email:</label><input type="text" name="emailRe" /><br>
          <label>Address:</label><input type="text" name="address" /><br>
          <label>Address 2:</label><input type="text" name="address2" /><br>
          <label>City:</label><input type="text" name="city" /><br>
          <label>Country:</label>
          <select name="country">
            <option value="0">--Country--</option>
            <option value="1">United States</option>
            <option value="2">Canada</option>
            <option value="3">Mexico</option>
          </select><br>
          <label>Phone:</label><input type="text" name="phone" /><br>
          <label>Alt Phone:</label><input type="text" name="phoneAlt" /><br>
          <input type="submit" value="submit" />
        </form>

    But when I try to access the value of the select box in my Java class, I get null. I've tried reading it in as a String and as an array of Strings; neither seems to grab the right value. The response.jsp looks like this:

        <%@ page language="java" %>
        <%@ page import="java.util.*" %>
        <%@ page contentType="text/html" pageEncoding="UTF-8" %>
        <jsp:useBean id="formHandler" class="validation.RegHandler" scope="request">
          <jsp:setProperty name="formHandler" property="*" />
        </jsp:useBean>
        <% if (formHandler.validate()) { %>
          <jsp:forward page="success.jsp"/>
        <% } else { %>
          <jsp:forward page="retryReg.jsp"/>
        <% } %>

    I already have JavaScript validation in place, but I wanted to make sure I covered server-side validation for non-JS users. The RegHandler just uses the name field to refer to the value in the form. Any idea how I could access the select box's value?
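    For jsp:setProperty property="*" to pick the field up, the bean needs a property named exactly like the form field. A sketch of what RegHandler would need (names are guesses based on the form above):

        public class RegHandler {
            private String country;

            // Matches <select name="country">; without this getter/setter
            // pair, property="*" silently skips the parameter and the
            // value stays null.
            public String getCountry() { return country; }
            public void setCountry(String country) { this.country = country; }

            public boolean validate() {
                return country != null && !country.equals("0");
            }
        }

    Alternatively, reading request.getParameter("country") directly in response.jsp is a quick way to confirm the value is actually arriving in the request.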


  • .NET threading: how can I capture an abort on an unstarted thread?

    - by Groxx
    I have a set of threads I wish to run in order, on an ASP site running .NET 2.0 with Visual Studio 2008 (no idea how much all that matters, but there it is), and they may have abort-time clean-up code which should run regardless of how far through their task they are. So I make a thread like this:

        Thread t = new Thread(delegate()
        {
            try
            {
                /* do things */
                System.Diagnostics.Debug.WriteLine("try");
            }
            catch (ThreadAbortException)
            {
                /* cleanup */
                System.Diagnostics.Debug.WriteLine("catch");
            }
        });

    Now, if I wish to abort the set of threads part way through, the cleanup may still be desirable later on down the line. The MSDN documentation implies you can .Abort() a thread that has not started, and then .Start() it, at which point it will receive the exception and behave normally. Or you can .Join() the aborted thread to wait for it to finish aborting. Presumably you can combine them. From http://msdn.microsoft.com/en-us/library/ty8d3wta(v=VS.80).aspx:

        To wait until a thread has aborted, you can call the Join method on
        the thread after calling the Abort method, but there is no guarantee
        the wait will end. If Abort is called on a thread that has not been
        started, the thread will abort when Start is called. If Abort is
        called on a thread that is blocked or is sleeping, the thread is
        interrupted and then aborted.

    Now, when I debug and step through this code:

        t.Abort(); // ThreadState == Unstarted | AbortRequested
        t.Start(); // throws ThreadStartException: "Thread failed to start."
                   // so I comment it out, and
        t.Join();  // throws ThreadStateException: "Thread has not been started."

    At no point do I see any output, nor are any breakpoints in either the try or catch block reached. Oddly, ThreadStartException is not listed as a possible exception of .Start() here: http://msdn.microsoft.com/en-us/library/a9fyxz7d(v=VS.80).aspx (or any other version).

    I understand this could be avoided by having a start parameter which states whether the thread should jump straight to the cleanup code, forgoing the Abort call (which is probably what I'll do). And I could .Start() the thread and then .Abort() it. But as an indeterminate amount of time may pass between .Start and .Abort, I consider that unreliable -- and the documentation seems to say my original method should work. Am I missing something? Is the documentation wrong?

    Edit: ow. And you can't call .Start(param) on a non-parameterized Thread(Start). Is there a way to find out whether a thread is parameterized or not, aside from trial and error? I see a private m_Delegate, but nothing public...
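    The flag-based alternative mentioned above can be sketched like this (illustrative only -- a shared field the delegate checks before doing any work):

        // Decided before Start(); 'volatile' so the worker sees the write.
        private volatile bool cleanupOnly = false;

        Thread t = new Thread(delegate()
        {
            if (cleanupOnly)
            {
                /* cleanup */
                return;
            }
            try { /* do things */ }
            catch (ThreadAbortException) { /* cleanup */ }
        });

        cleanupOnly = true; // instead of t.Abort() on an unstarted thread
        t.Start();
        t.Join();

    This keeps the cleanup path deterministic and avoids the Unstarted | AbortRequested state that Start() refuses to run.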


  • Cannot extend a class located in another file, PHP

    - by NightMICU
    I am trying to set up a class with commonly used tasks, such as preparing strings for input into a database and creating a PDO object. I would like to include this file in other class files and extend those classes so they can use the common class's code. However, when I place the common class in its own file and include it in the class it will be used in, I receive an error stating that the second class cannot be found. For example, if the class name is foo and it extends bar (the common class, located elsewhere), the error says that foo cannot be found. But if I place the code for class bar in the same file as foo, it works. Here are the classes in question.

    Common class:

        abstract class coreFunctions
        {
            protected $contentDB;

            public function __construct()
            {
                $this->contentDB = new PDO('mysql:host=localhost;dbname=db', 'username', 'password');
            }

            public function cleanStr($string)
            {
                $cleansed = trim($string);
                $cleansed = stripslashes($cleansed);
                $cleansed = strip_tags($cleansed);
                return $cleansed;
            }
        }

    Code from the individual class:

        include $_SERVER['DOCUMENT_ROOT'] . '/includes/class.core-functions.php';

        $mode = $_POST['mode'];

        if (isset($mode)) {
            $gallery = new gallery;
            switch ($mode) {
                case 'addAlbum':
                    $gallery->addAlbum($_POST['hash'], $_POST['title'], $_POST['description']);
            }
        }

        class gallery extends coreFunctions
        {
            private function directoryPath($string)
            {
                $path = trim($string);
                $path = strtolower($path);
                $path = preg_replace('/[^ \pL \pN]/', '', $path);
                $path = preg_replace('[\s+]', '', $path);
                $path = substr($path, 0, 18);
                return $path;
            }

            public function addAlbum($hash, $title, $description)
            {
                $title = $this->cleanStr($title);
                $description = $this->cleanStr($description);
                $path = $this->directoryPath($title);

                if ($title && $description && $hash) {
                    $addAlbum = $this->contentDB->prepare("INSERT INTO gallery_albums
                        (albumHash, albumTitle, albumDescription, albumPath)
                        VALUES (:hash, :title, :description, :path)");
                    $addAlbum->execute(array('hash' => $hash, 'title' => $title,
                        'description' => $description, 'path' => $path));
                }
            }
        }

    The error when I try it this way is:

        Fatal error: Class 'gallery' not found in /home/opheliad/public_html/admin/photo-gallery/includes/class.admin_photo-gallery.php on line 10
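    A likely explanation, offered as a sketch: a class that extends a parent defined in a runtime include is itself only registered when execution reaches its definition -- and here "new gallery" runs before the "class gallery" line. Declaring the class before using it usually resolves this:

        <?php
        require_once $_SERVER['DOCUMENT_ROOT'] . '/includes/class.core-functions.php';

        // Definition first, so the class exists when it is instantiated below.
        class gallery extends coreFunctions
        {
            // ... methods as in the post ...
        }

        if (isset($_POST['mode'])) {
            $gallery = new gallery();
            switch ($_POST['mode']) {
                case 'addAlbum':
                    $gallery->addAlbum($_POST['hash'], $_POST['title'], $_POST['description']);
            }
        }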


  • jQuery autocomplete after space press

    - by Limpep
    I am having an issue with my auto-complete feature: when a user presses the space bar, the auto-complete doesn't show up again. Here is my code:

        <script type="text/javascript">
        function lookup(inputString) {
            if (inputString.length == 0) {
                // Hide the suggestion box.
                $('#suggestions').hide();
            } else {
                $.post("autocomplete.php", { queryString: "" + inputString + "" }, function(data) {
                    if (data.length > 0) {
                        $('#suggestions').show();
                        $('#autoSuggestionsList').html(data);
                    }
                });
            }
        } // lookup

        function fill(thisValue) {
            $('#tag').val(thisValue);
            setTimeout("$('#suggestions').hide();", 200);
        }
        </script>

    And here is my PHP code:

        <?php
        require_once('config.php');
        $db = new mysqli(DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE);
        if (!$db) {
            // Show error if we cannot connect.
            echo 'ERROR: Could not connect to the database.';
        } else {
            // Is there a posted query string?
            if (isset($_POST['queryString'])) {
                $queryString = $db->real_escape_string($_POST['queryString']);
                // Is the string length greater than 0?
                if (strlen($queryString) > 0) {
                    // Run the query: we use LIKE '$queryString%'.
                    // The percentage sign is a wild-card; with countries it works like this:
                    // $queryString = 'Uni';
                    // Returned data = 'United States, United Kingdom';
                    $query = $db->query("SELECT name FROM tag WHERE name LIKE '$queryString%' ORDER BY name LIMIT 10");
                    if ($query) {
                        // While there are results, loop through them, fetching an object.
                        while ($result = $query->fetch_object()) {
                            // Format the results; I'm using <li> for the list.
                            // The onClick function fills the textbox with the result.
                            echo '<li onClick="fill(\'' . $result->name . '\');">' . $result->name . '</li>';
                        }
                    } else {
                        echo 'ERROR: There was a problem with the query.';
                    }
                } else {
                    // Don't do anything.
                }
                // There is a queryString.
            } else {
                echo 'There should be no direct access to this script!';
            }
        }
        ?>

    Any help would be great, thanks.
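    One guess at the cause, sketched below: a trailing space reaches the SQL as LIKE 'word %', which matches nothing, so the suggestion list simply never updates. Trimming the input before posting keeps the lookup alive across space presses:

        function lookup(inputString) {
            var query = $.trim(inputString); // ignore leading/trailing spaces
            if (query.length == 0) {
                $('#suggestions').hide();
            } else {
                $.post("autocomplete.php", { queryString: query }, function(data) {
                    if (data.length > 0) {
                        $('#suggestions').show();
                        $('#autoSuggestionsList').html(data);
                    }
                });
            }
        }

    If multi-word tags should match on any word, changing the SQL pattern to LIKE '%$queryString%' is the other common adjustment.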


  • Are comonads a good fit for modeling the Wumpus world?

    - by Tim Stewart
    I'm trying to find some practical applications of a comonad, and I thought I'd try to see if I could represent the classical Wumpus world as a comonad. I'd like to use this code to allow the Wumpus to move left and right through the world, clean up dirty tiles and avoid pits. It seems that the only comonad function that's useful is extract (to get the current tile), and that moving around and cleaning tiles would not be able to make use of extend or duplicate. I'm not sure comonads are a good fit, but I've seen a talk (Dominic Orchard: A Notation for Comonads) where comonads were used to model a cursor in a two-dimensional matrix. If a comonad is a good way of representing the Wumpus world, could you please show where my code is wrong? If it's wrong, could you please suggest a simple application of comonads?

        module Wumpus where

        -- Incomplete model of a world inhabited by a Wumpus who likes a nice
        -- tidy world but does not like falling in pits.

        import Control.Comonad

        -- The Wumpus world is made up of tiles that can be in one of three
        -- states.
        data Tile = Clean | Dirty | Pit deriving (Show, Eq)

        -- The Wumpus world is a one-dimensional array partitioned into three
        -- values: the tiles to the left of the Wumpus, the tile occupied by
        -- the Wumpus, and the tiles to the right of the Wumpus.
        data World a = World [a] a [a] deriving (Show, Eq)

        -- Applies a function to every tile in the world
        instance Functor World where
          fmap f (World as b cs) = World (fmap f as) (f b) (fmap f cs)

        -- The Wumpus world is a Comonad
        instance Comonad World where
          -- get the part of the world the Wumpus currently occupies
          extract (World _ b _) = b

          -- not sure what this means in the Wumpus world. This type checks
          -- but does not make sense to me.
          extend f w@(World as b cs) = World (map world as) (f w) (map world cs)
            where world v = f (World [] v [])

        -- returns a world in which the Wumpus has either 1) moved one tile to
        -- the left or 2) stayed in the same place if the Wumpus could not move
        -- to the left.
        moveLeft :: World a -> World a
        moveLeft w@(World [] _ _) = w
        moveLeft (World as b cs)  = World (init as) (last as) (b:cs)

        -- returns a world in which the Wumpus has either 1) moved one tile to
        -- the right or 2) stayed in the same place if the Wumpus could not
        -- move to the right.
        moveRight :: World a -> World a
        moveRight w@(World _ _ []) = w
        moveRight (World as b cs)  = World (as ++ [b]) (head cs) (tail cs)

        initWorld = World [Dirty, Clean, Dirty] Dirty [Clean, Dirty, Pit]

        -- cleans the current tile
        cleanTile :: Tile -> Tile
        cleanTile Dirty = Clean
        cleanTile t     = t

    Thanks!
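    For comparison, the usual list-zipper instance defines duplicate so that every position holds the whole world as seen from that position -- a sketch replacing the instance above (using the same stepping logic, not the original extend):

        -- All worlds reachable by repeatedly stepping in one direction.
        stepsWith :: (World a -> Maybe (World a)) -> World a -> [World a]
        stepsWith f w = case f w of
                          Nothing -> []
                          Just w' -> w' : stepsWith f w'

        leftMaybe, rightMaybe :: World a -> Maybe (World a)
        leftMaybe  (World [] _ _)  = Nothing
        leftMaybe  (World as b cs) = Just (World (init as) (last as) (b:cs))
        rightMaybe (World _ _ [])  = Nothing
        rightMaybe (World as b cs) = Just (World (as ++ [b]) (head cs) (tail cs))

        instance Comonad World where
          extract (World _ b _) = b
          -- 'reverse' keeps the left-hand worlds in the same order as the tiles,
          -- so that fmap extract . duplicate == id holds.
          duplicate w = World (reverse (stepsWith leftMaybe w)) w (stepsWith rightMaybe w)

    With that in place, extend f really does mean "apply f at every position with its full neighbourhood in view" -- e.g. a rule like "clean the current tile unless a neighbouring tile is a Pit" becomes extend rule initWorld.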


  • Router Alert options on IGMPv2 packets

    - by Scakko
    I'm trying to forge an IGMPv2 Membership Request packet and send it on a RAW socket. RFC 3376 states:

        IGMP messages are encapsulated in IPv4 datagrams, with an IP protocol
        number of 2. Every IGMP message described in this document is sent
        with an IP Time-to-Live of 1, IP Precedence of Internetwork Control
        (e.g., Type of Service 0xc0), and carries an IP Router Alert option
        [RFC-2113] in its IP header.

    So the IP_ROUTER_ALERT flag must be set. I'm trying to forge only the strict necessities of the packet (i.e. only the IGMP header & payload), so I'm using setsockopt to edit the IP options. Some useful variables:

        #define C_IP_MULTICAST_TTL 1
        #define C_IP_ROUTER_ALERT  1

        int sockfd = 0;
        int ecsockopt = 0;
        int bytes_num = 0;
        int ip_multicast_ttl = C_IP_MULTICAST_TTL;
        int ip_router_alert  = C_IP_ROUTER_ALERT;

    Here's how I open the RAW socket:

        sock_domain = AF_INET;
        sock_type   = SOCK_RAW;
        sock_proto  = IPPROTO_IGMP;
        if ((ecsockopt = socket(sock_domain, sock_type, sock_proto)) < 0) {
            printf("Error %d: Can't open socket.\n", errno);
            return 1;
        } else {
            printf("** Socket opened.\n");
        }
        sockfd = ecsockopt;

    Then I set the TTL and Router Alert option:

        // Set the sent packets' TTL
        if ((ecsockopt = setsockopt(sockfd, IPPROTO_IP, IP_MULTICAST_TTL,
                                    &ip_multicast_ttl, sizeof(ip_multicast_ttl))) < 0) {
            printf("Error %d: Can't set TTL.\n", ecsockopt);
            return 1;
        } else {
            printf("** TTL set.\n");
        }

        // Set the Router Alert
        if ((ecsockopt = setsockopt(sockfd, IPPROTO_IP, IP_ROUTER_ALERT,
                                    &ip_router_alert, sizeof(ip_router_alert))) < 0) {
            printf("Error %d: Can't set Router Alert.\n", ecsockopt);
            return 1;
        } else {
            printf("** Router Alert set.\n");
        }

    The setsockopt for IP_ROUTER_ALERT returns 0. After forging the packet, I send it with sendto in this way:

        // Send the packet
        if ((bytes_num = sendto(sockfd, packet, packet_size, 0,
                                (struct sockaddr *) &mgroup1_addr,
                                sizeof(mgroup1_addr))) < 0) {
            printf("Error %d: Can't send Membership report message.\n", bytes_num);
            return 1;
        } else {
            printf("** Membership report message sent. (bytes=%d)\n", bytes_num);
        }

    The packet is sent, but the IP_ROUTER_ALERT option (checked with Wireshark) is missing. Am I doing something wrong? Is there some other method to set the IP_ROUTER_ALERT option? Thanks in advance.
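    One alternative worth trying -- a hedged sketch, since on Linux IP_ROUTER_ALERT on raw sockets is documented mainly as a receive-side option -- is to write the option bytes directly with IP_OPTIONS. The Router Alert option from RFC 2113 is the four bytes 0x94 0x04 0x00 0x00 (type, length, value 0):

        /* Attach the RFC 2113 Router Alert option to outgoing packets. */
        unsigned char ra_opt[4] = { 0x94, 0x04, 0x00, 0x00 };
        if (setsockopt(sockfd, IPPROTO_IP, IP_OPTIONS, ra_opt, sizeof(ra_opt)) < 0) {
            perror("setsockopt(IP_OPTIONS)");
        }

    The other route is IP_HDRINCL, building the entire IP header (options included) by hand.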


  • Why does obj.getBounds().height give a larger height than obj.height?

    - by TC
    I'm new to Flash and ActionScript, but managing quite nicely. One thing that continuously gets in my way is the width and height properties of DisplayObject(Container)s. I'm finally starting to get my head around them, and learned, for example, that the width and height of a Sprite are determined solely by its contents. I do not understand the following, though: I've got a Sprite that I add a bunch of Buttons to. The buttons all have a height of 30 and a y of 0. As such, I'd expect the height of the containing Sprite to be 30. Surprisingly, the height is 100. The Adobe documentation for the height property of a DisplayObject states:

        Indicates the height of the display object, in pixels. The height is
        calculated based on the bounds of the content of the display object.

    Apparently, the 'bounds' of the object are important. So I went ahead and wrote this little test in the Sprite that contains the Buttons:

        for (var i:int = 0; i < numChildren; ++i)
        {
            trace("Y: " + getChildAt(i).y + " H: " + getChildAt(i).height);
            trace("BOUNDS H: " + getChildAt(i).getBounds(this).height);
        }
        trace("SCALEY: " + scaleY + " TOTAL HEIGHT: " + height);

    This code iterates through all the objects added to its display list and shows their y, height and getBounds().height values. Surprisingly, the output is:

        Y: 0 H: 30 BOUNDS H: 100 ... (5x)
        SCALEY: 1 TOTAL HEIGHT: 100

    This shows that the bounds of the buttons are actually larger than their height (and the height they appear to be, visually). I have no clue why this is the case, however. So my questions are:

    1. Why are the bounds of my buttons larger than their height?
    2. How can I set the bounds of my buttons so that my Sprite isn't larger than I'd expect it to be, based on the position and size of the objects it contains?

    By the way, the buttons are created as follows:

        var control:Button = new Button();
        control.setSize(90, 30);
        addChild(control);
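    A guess worth testing, sketched below: fl.controls components defer drawing their skins to the next frame, so immediately after setSize() the bounds may still reflect the component's default (larger) skin assets. Forcing the component to commit its size before the container measures it often brings height and getBounds() back into agreement:

        var control:Button = new Button();
        control.setSize(90, 30);
        control.validateNow(); // draw the skin at 90x30 immediately,
                               // instead of waiting for the next frame
        addChild(control);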


  • How can I switch an existing set of Subversion repositories to use ActiveDirectory?

    - by jpierson
    I have a set of private Subversion repositories on a Windows Server 2003 box which developers access via SVNServe over the svn:// protocol. Currently we have been using the authz and passwd files for each repository to control access; however, with the growing number of repositories and developers, I'm considering switching to using their credentials from Active Directory. We run an all-Microsoft shop and use IIS instead of Apache on all of our web servers, so I would prefer to continue to use SVNServe if possible. Besides whether it is possible at all, I'm also concerned about how to migrate our repositories so that the history for the existing users maps to the correct Active Directory accounts. Keep in mind also that I'm not the network administrator and I'm not terribly familiar with Active Directory, so I'll probably have to go through other people to get any changes made in Active Directory. What are my options?

    UPDATE 1: It appears from the SVN documentation that by using SASL I should be able to get SVNServe to authenticate using Active Directory. To clarify, the answer I'm looking for is how to configure SVNServe (if possible) to use Active Directory for authentication, and then how to modify an existing repository to remap existing svn users to their Active Directory domain login accounts.

    UPDATE 2: It appears that the SASL support in SVNServe works off a plugin model, and the documentation shows only a single example. Looking at the Cyrus SASL library, it looks like a number of authentication "mechanisms" are supported, but I'm not sure which one is to be used for Active Directory support, nor can I find any documentation about such matters.

    UPDATE 3: OK, well it looks like in order to communicate with Active Directory I want to use saslauthd instead of sasldb for the *auxprop_plugin* property. Unfortunately, it appears that according to some posts (possibly outdated and inaccurate) saslauthd does not build on Windows, and such endeavors are considered a work in progress.

    UPDATE 4: The latest post I've found on this topic makes it sound as though the proper binaries are available through the MIT Kerberos library, but it sounds like the author of that post on Nabble.com is still having issues getting things working.

    UPDATE 5: It looks like, from the TortoiseSVN discussions and also a post on svn.haxx.se, that even if saslgssapi.dll or whatever necessary binaries are available and configured on the Windows server, the clients will also need the same customization in order to work with these repositories. If this is true, we will only get Active Directory support from a Windows client if changes are made in clients such as TortoiseSVN and the CollabNet build of the client binaries to support such authentication schemes. Although that is what these posts suggest, it contradicts what I originally assumed from other reading: that being SASL-compatible should require no changes on the client, only that the server be set up to handle the authentication mechanism. Reading the document about Cyrus SASL in Subversion more carefully, section 5 states: "1.5+ clients with Cyrus SASL support will be able to authenticate against 1.5+ servers with SASL enabled, provided at least one of the mechanisms supported by the server is also supported by the client." So clearly GSSAPI support (which I understand is required for Active Directory) must be available in both the client and the server.

    I have to say, I'm learning far more about the internals of how Subversion handles authentication than I ever wanted to; I simply want an answer about whether I can have Active Directory authentication support when using SVNServe on a Windows server, accessed from Windows clients. According to the official documentation this seems possible, but as you can see, the configuration is anything but trivial -- if it works at all.
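    For reference, the server-side half (enabling SASL in svnserve) is small -- a sketch of the relevant conf/svnserve.conf section, with illustrative values:

        [general]
        realm = MyRealm

        [sasl]
        # Requires an svnserve built against Cyrus SASL.
        use-sasl = true
        min-encryption = 0
        max-encryption = 256

    The hard part, as the updates above describe, is obtaining a GSSAPI mechanism that both this server and the Windows clients actually have installed.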


  • Netgear VPN endpoint drops connectivity to single IP address

    - by Justin Bowers
    I'm having a strange issue with one of the networks I manage. We have about 14 different networks connected together through a Netgear hardware VPN. Everything has been running fine (other than standard connectivity problems) for a few years now, but I've hit a wall with a problem that's just cropped up at one of the VPN endpoint locations.

    Our primary VPN network is on the 192.168.1.0/24 subnet and our other 13 networks are on the 192.168.2.0/24 - 192.168.14.0/24 subnets. We run a terminal server on the 192.168.1.0/24 network with IP address 192.168.1.100. Starting Thursday of last week, we had a problem with connectivity from the 192.168.2.0/24 network to 192.168.1.100. When troubleshooting, I found that Network 2 (192.168.2.0/24) still had connectivity to the Internet as well as VPN connectivity to Network 1 (192.168.1.0/24). We could ping and connect to any other device -- just not the server with IP address 192.168.1.100. Also, none of our other networks had any issue accessing 192.168.1.100.

    I ran a scan on Network 2 after assigning a static IP address to one of the workstations, but received no response from 192.168.1.100 (I was looking for a new device that someone had plugged into Network 2 with a duplicate IP address of the server). Asking the staff, no one reported connecting a new device to Network 2 either. I then assigned a secondary IP address of 192.168.1.88 to the server and could ping and connect to that secondary address from Network 2, but still couldn't reach it via 192.168.1.100. I then rebooted the Netgear VPN firewall (an FVS318v3), and after it came back up, connectivity to 192.168.1.100 was restored. Beforehand, while checking for devices with a possible duplicate IP address, I also scanned for available wireless access points and stations and found none (our wireless is secured via MAC-address access control through a WG102 device).

    I thought it may have been a fluke, since everything came back up after a power cycle of the VPN firewall. Things ran fine for a few days until this afternoon, when the problem happened again. One of our users reported connectivity problems to the server, and after connecting to their computer I found that I couldn't ping the server address anymore. I could still ping the alternate IP address of the server, though, so I rebooted the VPN firewall again and connectivity was restored.

    Unfortunately, I can't find anything in the security or VPN logs of the firewall that points me in the right direction, so I thought I would ask whether anyone else has any insight into why we've started having this problem. I am aware that it could still be a device with a duplicate IP address of the server on Network 2, but every employee states that no new device has been brought onto the network. I know this is a long read, but any help is appreciated! Thanks, Justin
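    One low-effort check for the duplicate-IP theory, sketched here for a Windows machine on the server's own subnet (run it while the problem is happening):

        :: Flush the cached entry, provoke a fresh ARP resolution, then
        :: compare the answering MAC address against the real server's NIC.
        arp -d 192.168.1.100
        ping -n 2 192.168.1.100
        arp -a | find "192.168.1.100"

    A MAC that doesn't match the server's NIC (or one that changes between incidents) would point at a rogue or misconfigured device rather than at the firewall itself.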


  • Bridged virtual interface is not available or visible to ifconfig.

    - by Omniwombat
    Hello all. I'm running Ubuntu 9.04, kernel 2.6.28-18, and vmware-server 2.0.1. I'm attempting to set up a virtual Linux machine to use a bridged interface rather than NAT or host-only. Both NAT and host-only work just fine. When running vmware-config.pl, I set /dev/vmnet0 to bridge eth0, /dev/vmnet1 to host-only, and /dev/vmnet8 to NAT.

    When I run ifconfig -a I see the physical interface (eth0), plus vmnet1 and vmnet8, both of which are up and have IP addresses assigned to them. I also see various other interfaces that are not relevant here. In the web console, when I ask that the guest machine's network card be bridged, it states that a bridged setup is "Not available" and shows the disabled-device icon. Inside the guest machine, I do have an eth0 interface which I can set to anything I like; however, it can't see my external network or the host.

    I do see errors in my vmware/hostd.log which state: "The network bridge on device vmnet0 is not running. The virtual machine will not be able to communicate with the host or with other machines on your network", which confirms the problem. vmnet-bridge is running, and I see the following in my process table:

        /usr/bin/vmnet-bridge -d /var/run/vmnet-bridge-0.pid -n 0 -i eth0

    I confirm that the /var/run/vmnet-bridge-0.pid file is there and that it points to the correct process.

    I saw a question relating to Ubuntu 9.04 and bridged interfaces, in which the poster determined that the vsock library was not getting built due to a flaw in the vmware-config.pl script. I applied the patch, reran the script, and confirm that vsock.ko and vsock.o are in my /lib directory structure. vsock does show up in lsmod.

    My /etc/vmware directory has /vmnet1 and /vmnet8 subdirectories. They contain configuration utilities for running DHCP- and NAT-type services, as expected. There is no vmnet0 subdirectory. My /etc/vmware/netmap.conf file DOES show entries for vmnet0 -- both the name and the device -- as I configured it from the script. My /dev directory contains devices vmnet0 through vmnet9; they have major device number 119 and minor device numbers 0 through 9. /proc/net/dev shows statistics for vmnet1 and vmnet8, but not vmnet0. I have a /proc/vmnet directory, but it's empty.

    When I start or stop the vmware service with /etc/init.d/vmware start, I see the following:

        Starting VMware services:
           Virtual machine monitor                                 done
           Virtual machine communication interface                 done
           VM communication interface socket family:               done
           Virtual ethernet                                        done
           Bridged networking on /dev/vmnet0                       done
           Host-only networking on /dev/vmnet1 (background)        done
           DHCP server on /dev/vmnet1                              done
           Host-only networking on /dev/vmnet8 (background)        done
           DHCP server on /dev/vmnet8                              done
           NAT service on /dev/vmnet8                              done
           VMware Server Authentication Daemon (background)        done
           Shared Memory Available                                 done
        Starting VMware management services:
           VMware Server Host Agent (background)                   done
           VMware Virtual Infrastructure Web Access
        Starting VMware autostart virtual machines:
           Virtual machines                                        done

    Nothing appears to be wrong there. What n00b thing am I doing such that vmnet0 -- and only vmnet0 -- does not show up in the interface list?


  • Wordpress Permissions OS X & MAMP

    - by Matt2020
    I have installed several local versions of WordPress for development purposes. After the install I can create posts and pages and edit admin options. However, as soon as I try to upload images, which would be saved in wp-content/uploads, I get an error:

        Upload Error: Unable to create directory ...../blog/wp-content/uploads/2011/05. Is its parent directory writable by the server?

    Looks like the MAMP server runs as user _www. The blog directory is owned by user User1 and group User1; _www is not in the User1 group -- should it be? I do not want to chmod 777 or 765 on the directories just to get it going. I googled up a couple of references. http://codex.wordpress.org/Changing_File_Permissions, in "Permission Scheme for WordPress", says:

        All files should be owned by your user (ftp) account on your web
        server, and should be writable by that account. On shared hosts, files
        should never be owned by the webserver process itself (sometimes this
        is www, or apache, or nobody user). Any file that needs write access
        from WordPress should be owned or group-owned by the user account used
        by WordPress (which may be different than the server account). For
        example, you may have a user account that lets you FTP files back and
        forth to your server, but your server itself may run using a separate
        user, in a separate usergroup, such as dhapache or nobody. If
        WordPress is running as the FTP account, that account needs to have
        write access, i.e., be the owner of the files, or belong to a group
        that has write access. In the latter case, that would mean permissions
        are set more permissively than default (for example, 775 rather than
        755 for folders, and 664 instead of 644).

    User and group are User1 (which is the admin). Running "ps aux | grep httpd" shows Apache running as _www, so I think this means WordPress is running as user _www. The advice seems contradictory: "files should never be owned by the webserver process" (i.e. _www), but then later it says "Any file that needs write access from WordPress should be owned or group-owned by the user account used by WordPress" -- isn't this _www again?

    Another search found http://dancingengineer.com/computing/2009/07/how-to-install-wordpress-on-mac-os-x-leopard, which says:

        My preferred way to do this is to change the group of the wordpress
        directory and its contents to _www and give write permissions to the
        group. Keep the owner as your "username".

        $ cd /Users/"username"/Sites
        $ sudo chown -R username:_www wordpress_directory
        $ sudo chmod -R g+w wordpress_directory

    However, when I tried this, it did not work for automatic upgrades to newer versions of WordPress, although it worked for automatically updating the .htaccess file for pretty permalinks. It is not entirely clear to me what should be done. This last suggestion seems to say: change the group from User1 to _www and give the group write access, but then WordPress upgrades won't work. Is this the right solution? I would have thought there would be a clear way to set this up on OS X 10.6. It would be great if there were a plugin that could run a script for each of the main OSes that WordPress runs on.
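    A combination that often gets both uploads and automatic upgrades working on a local box -- a hedged sketch, paths illustrative -- is the group-write setup above plus telling WordPress to write files directly instead of falling back to FTP:

        # group-writable only where WordPress actually writes
        sudo chown -R username:_www wordpress_directory
        sudo chmod -R g+w wordpress_directory/wp-content

    and in wp-config.php:

        /* Skip the FTP-credentials path during core/plugin upgrades. */
        define('FS_METHOD', 'direct');

    FS_METHOD 'direct' is reasonable for a local development machine; on shared hosting the codex's warnings about webserver-owned files apply with more force.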


  • LDAP object class violation: attribute ou not allowed in suffix?

    - by Paramaeleon
    I am about to set up an LDAP directory. It is used as a tool to communicate user permissions from a web application to WebDAV file-system access; e.g. adding a user to the web platform shall allow login to the file system with the same credentials. There are no other usages intended. Following a German tutorial which encourages the use of the attributes c, o, ou etc. over dc, I configured the following suffix and root:

        suffix "ou=webtool,o=myOrg,c=de"
        rootdn "cn=ldapadmin,ou=webtool,o=myOrg,c=de"

    The server starts and I can connect to it with LDAP Admin, which reports "LDAP error: Object lacks". Well, there aren't any objects yet. I now want to create the root and admin elements from the shell, so I created an init.ldif file:

        dn: ou=webtool,o=myOrg,c=de
        objectclass: dcObject
        objectclass: organization
        dc: webtool
        o: webtool

        dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de
        objectclass: organizationalRole
        cn: ldapadmin

    Trying to load the file runs into an error telling me that ou is not allowed:

        server:~ # ldapadd -x -D "cn=ldapadmin,ou=webtool,o=myOrg,c=de" -W -f init.ldif
        Enter LDAP Password:
        adding new entry "ou=webtool,o=myOrg,c=de"
        ldap_add: Object class violation (65)
                additional info: attribute 'ou' not allowed

    I am not using ou anywhere except in the suffix, so the question: isn't it allowed here? What is allowed here?

    Here is my answer. (I am not allowed to post it as an answer for 8 hours, so it is part of the question for now; I will move it out later if I don't forget.)

    There are numerous dependencies for the creation of elements, and the error messages are rather confusing if you don't know the concept. The object class isn't necessarily dcObject for the database's root node, as several tutorials might lead you to guess. Instead, it must correspond to the entry's type: here, for a name starting with ou=, it must be organizationalUnit. I found this in the object-class and attribute tables of the zytrax LDAP book (link at the end; as a new user I could not post more links inline).

    Further, the object class dictates which attributes the record must and may have. Here, an organizationalUnit must have an ou: entry and must have neither dc: nor o:. The healthy init.ldif looks like this:

        dn: ou=webtool,o=myOrg,c=de
        objectclass: organizationalUnit
        ou: LDAP server for my webtool

        dn: cn=ldapadmin,ou=webtool,o=myOrg,c=de
        objectclass: organizationalRole
        cn: ldapadmin

    Note: the page also states: "While many objectClasses show no MUST attributes you must (ouch) follow any hierarchy [...] to determine if this is really the case." I thought that would mean my root record would have to provide the MUST fields for c= and o= (c: and o:, respectively), but this isn't the case.

    The link referred to above is: http://www.zytrax.com/books/ldap/ape/ "Appendix E: LDAP - Object Classes and Attributes"
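    A quick verification step after loading the corrected LDIF -- a sketch using the same DNs as above:

        # Load the entries, then read them back to confirm the object class
        # violation is gone (bind as the rootdn, or search anonymously).
        ldapadd -x -D "cn=ldapadmin,ou=webtool,o=myOrg,c=de" -W -f init.ldif
        ldapsearch -x -b "ou=webtool,o=myOrg,c=de" "(objectClass=*)"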


  • iptables - quick safety eval & limit max conns over time

    - by Peter Hanneman
    Working on locking down a *nix server box with some fancy iptables (v1.4.4) rules. I'm approaching the matter with a "paranoid, everyone's out to get me" style, not necessarily because I expect the box to be a hacker magnet, but rather for the sake of learning iptables and *nix security more thoroughly. Everything is well commented -- so if anyone sees something I missed, please let me know! The *nat table's "--to-ports" point to the only ports with actively listening services (aside from pings). Layer 2 apps listen exclusively on chmod'ed sockets bridged by one of the layer 1 daemons; layers 3+ inherit from layer 2 in a similar fashion. The two lines giving me grief are commented out at the very bottom of the *filter rules. The first line runs fine, but it's all or nothing. :) Many thanks, Peter H.

        *nat
        # Flush previous rules, chains and counters for the 'nat' table
        -F
        -X
        -Z

        # Redirect traffic to alternate internal ports
        -I PREROUTING --src 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 8080
        -I PREROUTING --src 0/0 -p tcp --dport 443 -j REDIRECT --to-ports 8443
        -I PREROUTING --src 0/0 -p udp --dport 53 -j REDIRECT --to-ports 8053
        -I PREROUTING --src 0/0 -p tcp --dport 9022 -j REDIRECT --to-ports 8022
        COMMIT

        *filter
        # Flush previous settings, chains and counters for the 'filter' table
        -F
        -X
        -Z

        # Set default behavior for all connections and protocols
        -P INPUT DROP
        -P OUTPUT DROP
        -A FORWARD -j DROP

        # Only accept loopback traffic originating from the local NIC
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j DROP

        # Accept all outgoing non-fragmented traffic having a valid state
        -A OUTPUT ! -f -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT

        # Drop fragmented incoming packets (not always malicious - acceptable for use now)
        -A INPUT -f -j DROP

        # Allow ping requests rate limited to one per second
        # (burst ensures reliable results for high latency connections)
        -A INPUT -p icmp --icmp-type 8 -m limit --limit 1/sec --limit-burst 2 -j ACCEPT

        # Declaration of custom chains
        -N INSPECT_TCP_FLAGS
        -N INSPECT_STATE
        -N INSPECT

        # Drop incoming tcp connections with invalid tcp-flags
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL ALL -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL NONE -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,FIN FIN -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,PSH PSH -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,URG URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP

        # Accept incoming traffic having either an established or related state
        -A INSPECT_STATE -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Drop new incoming tcp connections if they aren't SYN packets
        -A INSPECT_STATE -m state --state NEW -p tcp ! --syn -j DROP

        # Drop incoming traffic with invalid states
        -A INSPECT_STATE -m state --state INVALID -j DROP

        # INSPECT chain definition
        -A INSPECT -p tcp -j INSPECT_TCP_FLAGS
        -A INSPECT -j INSPECT_STATE

        # Route incoming traffic through the INSPECT chain
        -A INPUT -j INSPECT

        # Accept redirected HTTP traffic via HA reverse proxy
        -A INPUT -p tcp --dport 8080 -j ACCEPT

        # Accept redirected HTTPS traffic via STUNNEL SSH gateway
        # (as well as tunneled HTTPS traffic destined for other services)
        -A INPUT -p tcp --dport 8443 -j ACCEPT

        # Accept redirected DNS traffic for NSD authoritative nameserver
        -A INPUT -p udp --dport 8053 -j ACCEPT

        # Accept redirected SSH traffic for OpenSSH server
        # Temp solution:
        -A INPUT -p tcp --dport 8022 -j ACCEPT
        # Ideal solution:
        # Limit new ssh connections to max 10 per 10 minutes while allowing an
        # "unlimited" (or better, reasonably limited?) number of established
        # connections.
        #-A INPUT -p tcp --dport 8022 --state NEW,ESTABLISHED -m recent --set -j ACCEPT
        #-A INPUT -p tcp --dport 8022 --state NEW -m recent --update --seconds 600 --hitcount 11 -j DROP
        COMMIT

        *mangle
        # Flush previous rules, chains and counters in the 'mangle' table
        -F
        -X
        -Z
        COMMIT
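    On the two commented-out "ideal solution" lines: as written they would be rejected, because --state is an option of the state match and needs "-m state" in front of it. A possible corrected form of the rate limit (a sketch, untested):

        # Established SSH sessions already pass via the earlier
        # ESTABLISHED,RELATED accept; track only NEW connections and drop
        # the 11th within 600 seconds.
        -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --name SSH --update --seconds 600 --hitcount 11 -j DROP
        -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --name SSH --set -j ACCEPT

    Note the order: the --update/--hitcount check must precede the --set accept; otherwise every new connection is accepted before the limit is tested.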


  • nginx server over https using up all available file handles (upd: infinite loop?)

    - by mmr
    Hi all. I have an nginx server working over HTTPS with Sinatra. When I try to download a .jnlp file in a configuration that works fine over Mongrel and http (no s), the nginx server fails to serve the file with a 504 error. Checking the logs shows this error is due to overflowing the available number of file handles, i.e. "24: too many open files". Running sudo lsof -p <nginx worker pid> gets me a huge list of entries, all looking like:

        nginx 1771 nobody 11u IPv4 10867997 0t0 TCP localhost:44704->localhost:https (ESTABLISHED)
        nginx 1771 nobody 12u IPv4 10868113 0t0 TCP localhost:https->localhost:44704 (ESTABLISHED)
        nginx 1771 nobody 13u IPv4 10868114 0t0 TCP localhost:44705->localhost:https (ESTABLISHED)
        nginx 1771 nobody 14u IPv4 10868191 0t0 TCP localhost:https->localhost:44705 (ESTABLISHED)
        nginx 1771 nobody 15u IPv4 10868192 0t0 TCP localhost:44706->localhost:https (ESTABLISHED)
        nginx 1771 nobody 16u IPv4 10868255 0t0 TCP localhost:https->localhost:44706 (ESTABLISHED)
        ...

    Increasing the number of files that can be opened is no help, because nginx just blows right past that limit too. And no wonder -- it looks like it's in some kind of loop pulling in all available files. Any idea what's going on, and how to fix it?

    EDIT: nginx 0.7.63, Ubuntu Linux, Sinatra 1.0.

    EDIT 2: Here's the offending code. It's Sinatra serving the jnlp, which I finally figured out:

        get '/uploader' do
          # read in the launch.jnlp file
          theJNLP = ""
          File.open("/launch.jnlp", "r+") do |file|
            while theTemp = file.gets
              theJNLP = theJNLP + theTemp
            end
          end
          content_type :jnlp
          theJNLP
        end

    If I serve this with Sinatra via Mongrel and http, everything works fine. If I serve this with Sinatra and nginx via https, I get the above error. All other parts of the website appear to behave identically.

    EDIT: I have since upgraded to Passenger 2.2.14, Ruby 1.9.1, nginx 0.8.40, OpenSSL 1.0.0a -- and no change.

    EDIT: The culprit appears to be infinite redirects due to using SSL. I don't know how to fix this, other than hosting the jnlp file in the root directory of the server (which I'd rather not do, since it limits me to one jnlp-based app at a time). The relevant lines from nginx.conf:

        # HTTPS server
        #
        server {
            listen       443;
            server_name  MyServer.org;
            root         /My/Root/Dir;
            passenger_enabled on;
            expires 1d;
            proxy_set_header X-FORWARDED_PROTO https;
            proxy_set_header X_FORWARDED_PROTO https; # the almighty google is not clear on which to use

            location /upload {
                proxy_pass https://127.0.0.1:443;
            }
        }

    The funny thing about this is, first, I was putting the jnlp into a directory called 'uploader', not 'upload', but that still appeared to trigger the problem, since that proxy_pass directive appeared in the logs. Second, again, moving the jnlp into root avoided the problem, because there wasn't any of this proxying due to SSL. So, how can I avoid the infinite proxy_pass loop in nginx?
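    Given the diagnosis above, one plausible fix -- a sketch, not a verified configuration -- is simply to stop the server block from proxying requests back to itself on the same port, and let Passenger handle /upload like every other Sinatra route:

        server {
            listen       443;
            server_name  MyServer.org;
            root         /My/Root/Dir;
            passenger_enabled on;
            expires 1d;

            # The old "location /upload { proxy_pass https://127.0.0.1:443; }"
            # pointed this block at itself: each request opened two more local
            # sockets (client + upstream), which matches the paired lsof lines.
        }

    If an internal proxy is genuinely needed, pointing proxy_pass at a different backend port (one not served by this block) breaks the loop just as well.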


  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to be stored in one contiguous block, I expected the drive to be fragmented no more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0). Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive reported as ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd-party defrag utilities disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

        In Windows XP, any file that is split into more than one piece is
        considered fragmented. Not so in Windows Vista if the fragments are
        large enough - the defragmentation algorithm was changed (from Windows
        XP) to ignore pieces of a file that are larger than 64MB. As a result,
        defrag in XP and defrag in Vista will report different amounts of
        fragmentation on a volume. ...

    [Please read the entire post so the quote is not taken out of context.] Could it simply be that the 3rd-party defrag utilities ignore this post-XP change and continue to use analysis algorithms similar to those XP used?

    2) Assuming the 3rd-party utilities aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have got fragmented so badly, given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algorithms explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, the 1% reported by Windows' own defragger clearly implies there is no cause for concern and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid, and should I just let it be -- or will there be noticeable performance gains after running one of the 3rd-party utilities for 10 hours straight?

    4) I see that out of the box, Windows 7's defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold, and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post-defragging with 3rd-party utilities vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).
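    A small cross-check worth running before committing 10 hours -- a sketch; the drive letter is illustrative -- is the console analyzer, which reports the same figure the Windows 7 GUI uses and prints the underlying statistics:

        :: Analysis only, no defragmentation; run from an elevated prompt.
        defrag D: /A /V

    The /V (verbose) output lists total and fragmented file counts, which makes it easier to judge whether the 64MB-piece rule alone accounts for the 1% vs. 85% gap.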

    Read the article

  • Amazon EC2 Instance - m1.medium Ubuntu 12.04 - Started to crash three days ago

    - by Joy
    The environment: Amazon EC2 instance (m1.medium), Ubuntu 12.04, Apache 2.2.22, running a Drupal site with a MySQL DB server.

    RAM info: ~$ free -gt total used free shared buffers cached Mem: 3 1 2 0 0 0 -/+ buffers/cache: 0 2 Swap: 0 0 0 Total: 3 1 2

    Hard drive info: Filesystem Size Used Avail Use% Mounted on /dev/xvda1 7.9G 4.7G 2.9G 62% / udev 1.9G 8.0K 1.9G 1% /dev tmpfs 751M 180K 750M 1% /run none 5.0M 0 5.0M 0% /run/lock none 1.9G 0 1.9G 0% /run/shm /dev/xvdb 394G 199M 374G 1% /mnt

    The problem: About two days ago the site started failing because the MySQL server was killed by the kernel's OOM killer (presumably under memory pressure from Apache) with the following message: kernel: [2963685.664359] [31716] 106 31716 226946 22748 0 0 0 mysqld kernel: [2963685.664730] Out of memory: Kill process 31716 (mysqld) score 23 or sacrifice child kernel: [2963685.664764] Killed process 31716 (mysqld) total-vm:907784kB, anon-rss:90992kB, file-rss:0kB kernel: [2963686.153608] init: mysql main process (31716) killed by KILL signal kernel: [2963686.169294] init: mysql main process ended, respawning

    That says mysqld was occupying about 0.9GB of virtual memory, yet my RAM had 2GB free, so roughly 1GB should still have been available. I understand that on Linux applications can allocate more memory than is physically available; I don't know if that is the problem here, and this is the first time it has happened. MySQL then tries to restart, but apparently there is no memory left for it, so it fails. Here is its error log: Plugin 'FEDERATED' is disabled. The InnoDB memory heap is disabled Mutexes and rw_locks use GCC atomic builtins Compressed tables use zlib 1.2.3.4 Initializing buffer pool, size = 128.0M InnoDB: mmap(137363456 bytes) failed; errno 12 Completed initialization of buffer pool Fatal error: cannot allocate memory for the buffer pool Plugin 'InnoDB' init function returned error. Plugin 'InnoDB' registration as a STORAGE ENGINE failed. Unknown/unsupported storage engine: InnoDB [ERROR] Aborting [Note] /usr/sbin/mysqld: Shutdown complete

    I simply restarted the MySQL service. About two hours later it happened again, and I restarted it. Then it happened again 9 hours later. At that point I thought of the MaxClients parameter in apache.conf, so I went to check it. It was set to 150; I dropped it to 60, as so: <IfModule mpm_prefork_module> ... MaxClients 60 </IfModule> <IfModule mpm_worker_module> ... MaxClients 60 </IfModule> <IfModule mpm_event_module> ... MaxClients 60 </IfModule>

    Once I did that, I restarted the apache2 service and everything ran smoothly for about three quarters of a day. Then, at night, the MySQL service shut down once again, but this time the log shows mysqld itself invoking the OOM killer: kernel: [3104680.005312] mysqld invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0 kernel: [3104680.005351] [<ffffffff81119795>] oom_kill_process+0x85/0xb0 kernel: [3104680.548860] init: mysql main process (30821) killed by KILL signal

    Now I'm out of ideas. Some articles state that the ideal fix is to change the kernel's behaviour by adding the following to /etc/sysctl.conf: vm.overcommit_memory = 2 vm.overcommit_ratio = 80 so that no overcommit takes place. I'm wondering if this is the way to go? Keep in mind I'm no server administrator and have only basic knowledge. Thanks a bunch in advance.
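
    As a rough sketch (the two values come from the question itself; the surrounding commands are standard sysctl usage, not something the poster supplied), this is how the settings could be applied and verified on a stock Ubuntu 12.04 system:

        # Append the overcommit settings quoted above to /etc/sysctl.conf.
        echo "vm.overcommit_memory = 2" | sudo tee -a /etc/sysctl.conf
        echo "vm.overcommit_ratio = 80" | sudo tee -a /etc/sysctl.conf

        # Load the new values without a reboot and confirm they took effect.
        sudo sysctl -p
        cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio

    One caveat: with vm.overcommit_memory=2 the commit limit is swap plus overcommit_ratio percent of RAM, and this instance has no swap at all, so a ratio of 80 caps every allocation at roughly 80% of physical memory. That can make mmap() failures like the InnoDB one above more likely rather than less; shrinking innodb_buffer_pool_size in my.cnf or adding a swap file would be the more conventional first steps.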

    Read the article

  • Problem installing build-essential and upgrading g++ on Ubuntu 8.04

    - by ehsanul
    I'm having some trouble with dependencies, it seems, but I don't really know how to resolve the issue myself. Here's the output:

    ~:) sudo apt-get install build-essential Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: build-essential: Depends: g++ (>= 4:4.3.1) but 4:4.2.3-1ubuntu6 is to be installed E: Broken packages

    ~:) sudo apt-get install g++ Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. Since you only requested a single operation it is extremely likely that the package is simply not installable and a bug report against that package should be filed. The following information may help to resolve the situation: The following packages have unmet dependencies: g++: Depends: cpp (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is to be installed Depends: gcc (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is to be installed Depends: g++-4.3 (>= 4.3.1-1) but it is not going to be installed Depends: gcc-4.3 (>= 4.3.1-1) but it is not installable E: Broken packages ~:)

    Edit: I just tried aptitude instead of apt-get, as suggested. That doesn't work either; it ran into other problems:

    ~:) sudo aptitude install build-essential [sudo] password for ehsanul: Reading package lists... Done Building dependency tree Reading state information... Done Reading extended state information Initializing package states... Done Building tag database... Done The following packages are BROKEN: g++ g++-4.3 libstdc++6-4.3-dev The following packages have been automatically kept back: dpkg-dev fakeroot libdns35 libisc35 linux-libc-dev patch The following NEW packages will be automatically installed: libgmp3c2 libmpfr1ldbl The following packages have been kept back: adobe-flashplugin bind9-host dnsutils gvfs gvfs-backends gvfs-fuse libatm1 libbind9-30 libgvfscommon0 libisccc30 libisccfg30 liblwres30 libnautilus-extension1 linux-headers-2.6.24-24 linux-headers-2.6.24-24-generic linux-image-2.6.24-24-generic nautilus nautilus-data The following NEW packages will be installed: libgmp3c2 libmpfr1ldbl The following packages will be upgraded: build-essential The following partially installed packages will be configured: timidity 2 packages upgraded, 4 newly installed, 0 to remove and 24 not upgraded. Need to get 775kB/6265kB of archives. After unpacking 20.3MB will be used. The following packages have unmet dependencies: libstdc++6-4.3-dev: Depends: gcc-4.3-base (= 4.3.2-1ubuntu11) which is a virtual package. Depends: libstdc++6 (>= 4.3.2-1ubuntu11) but 4.2.4-1ubuntu4 is installed. g++-4.3: Depends: gcc-4.3-base (= 4.3.2-1ubuntu11) which is a virtual package. Depends: gcc-4.3 (= 4.3.2-1ubuntu11) which is a virtual package. Depends: libc6 (>= 2.8~20080505) but 2.7-10ubuntu4 is installed. g++: Depends: cpp (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is installed. Depends: gcc (>= 4:4.3.1-1ubuntu2) but 4:4.2.3-1ubuntu6 is installed. Depends: gcc-4.3 (>= 4.3.1-1) which is a virtual package. Resolving dependencies... The following actions will resolve these dependencies: Keep the following packages at their current version: build-essential [11.3ubuntu1 (hardy, now)] g++ [4:4.2.3-1ubuntu6 (hardy-updates, now)] g++-4.3 [Not Installed] libstdc++6-4.3-dev [Not Installed] Score is -9852 Accept this solution? [Y/n/q/?]
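
    A note on diagnosis (none of these commands appear in the question; they are standard apt tooling): the versions being demanded (4.3.2-1ubuntu11, libc6 2.8) belong to a newer release than Hardy's 4.2.x/2.7 packages, which usually means the apt sources mix more than one release line. A sketch of how one might check:

        # Refresh the package index in case it is stale.
        sudo apt-get update

        # Show which versions of the broken packages exist and which
        # repositories offer them; mixed releases usually show up here.
        apt-cache policy build-essential g++ gcc-4.3 cpp

        # Look for source entries referencing more than one release
        # (e.g. hardy mixed with intrepid).
        grep -v '^#' /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null

        # If the sources turn out to be consistent, a full upgrade may
        # bring the toolchain packages up to matching versions.
        sudo apt-get dist-upgrade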

    Read the article

  • Installing Lubuntu 14.04.1 with forcepae fails

    - by Rantanplan
    I tried to install Lubuntu 14.04.1 from a CD. First, I chose Try Lubuntu without installing, which gave: ERROR: PAE is disabled on this Pentium M (PAE can potentially be enabled with kernel parameter "forcepae" ... Following the description on https://help.ubuntu.com/community/PAE, I used forcepae and tried Try Lubuntu without installing again. That worked fine. dmesg | grep -i pae showed: [ 0.000000] Kernel command line: file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- forcepae [ 0.008118] PAE forced!

    On the live-CD session, I tried installing Lubuntu by double-clicking the install button on the desktop. The CD starts spinning, but then it stops and nothing happens. Next, I rebooted and tried installing Lubuntu directly from the boot menu screen, again using forcepae. After a while, I received the following error message: The installer encountered an unrecoverable error. A desktop session will now be run so that you may investigate the problem or try installing again. Hitting Enter brings me to the desktop. Which errors should I search for, and how?

    Finally, I rebooted once more and tried Check disc for defects with the forcepae option; no errors were found. Now I am wondering how to find the error, or whether it would be better to follow advice c on https://help.ubuntu.com/community/PAE: "Move the hard disk to a computer on which the processor has PAE capability and PAE flag (that is, almost everything else than a Banias). Install the system as usual but don't add restricted drivers. After the install move the disk back." Thanks for some hints!

    Perhaps some of the following can help:

    On Lubuntu 12.04:

    cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB fdiv_bug : no hlt_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr mce cx8 mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe up bts est tm2 bogomips : 1284.76 clflush size : 64 cache_alignment : 64 address sizes : 32 bits physical, 32 bits virtual power management:

    uname -a Linux humboldt 3.2.0-67-generic #101-Ubuntu SMP Tue Jul 15 17:45:51 UTC 2014 i686 i686 i386 GNU/Linux

    lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 12.04.5 LTS Release: 12.04 Codename: precise

    cpuid eax in eax ebx ecx edx 00000000 00000002 756e6547 6c65746e 49656e69 00000001 000006d6 00000816 00000180 afe9f9bf 00000002 02b3b001 000000f0 00000000 2c04307d 80000000 80000004 00000000 00000000 00000000 80000001 00000000 00000000 00000000 00000000 80000002 20202020 20202020 65746e49 2952286c 80000003 6e655020 6d756974 20295228 7270204d 80000004 7365636f 20726f73 30352e31 007a4847 Vendor ID: "GenuineIntel"; CPUID level 2 Intel-specific functions: Version 000006d6: Type 0 - Original OEM Family 6 - Pentium Pro Model 13 - Stepping 6 Reserved 0 Brand index: 22 [not in table] Extended brand string: " Intel(R) Pentium(R) M processor 1.50GHz" CLFLUSH instruction cache line size: 8 Feature flags afe9f9bf: FPU Floating Point Unit VME Virtual 8086 Mode Enhancements DE Debugging Extensions PSE Page Size Extensions TSC Time Stamp Counter MSR Model Specific Registers MCE Machine Check Exception CX8 COMPXCHG8B Instruction SEP Fast System Call MTRR Memory Type Range Registers PGE PTE Global Flag MCA Machine Check Architecture CMOV Conditional Move and Compare Instructions FGPAT Page Attribute Table CLFSH CFLUSH instruction DS Debug store ACPI Thermal Monitor and Clock Ctrl MMX MMX instruction set FXSR Fast FP/MMX Streaming SIMD Extensions save/restore SSE Streaming SIMD Extensions instruction set SSE2 SSE2 extensions SS Self Snoop TM Thermal monitor 31 reserved TLB and cache info: b0: unknown TLB/cache descriptor b3: unknown TLB/cache descriptor 02: Instruction TLB: 4MB pages, 4-way set assoc, 2 entries f0: unknown TLB/cache descriptor 7d: unknown TLB/cache descriptor 30: unknown TLB/cache descriptor 04: Data TLB: 4MB pages, 4-way set assoc, 8 entries 2c: unknown TLB/cache descriptor

    On Lubuntu 14.04.1 live-CD with forcepae:

    cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 13 model name : Intel(R) Pentium(R) M processor 1.50GHz stepping : 6 microcode : 0x17 cpu MHz : 600.000 cache size : 2048 KB physical id : 0 siblings : 1 core id : 0 cpu cores : 1 apicid : 0 initial apicid : 0 fdiv_bug : no f00f_bug : no coma_bug : no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov clflush dts acpi mmx fxsr sse sse2 ss tm pbe bts est tm2 bogomips : 1284.68 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 32 bits virtual power management:

    uname -a Linux lubuntu 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:12 UTC 2014 i686 i686 i686 GNU/Linux

    lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04.1 LTS Release: 14.04 Codename: trusty

    cpuid CPU 0: vendor_id = "GenuineIntel" version information (1/eax): processor type = primary processor (0) family = Intel Pentium Pro/II/III/Celeron/Core/Core 2/Atom, AMD Athlon/Duron, Cyrix M2, VIA C3 (6) model = 0xd (13) stepping id = 0x6 (6) extended family = 0x0 (0) extended model = 0x0 (0) (simple synth) = Intel Pentium M (Dothan B1) / Celeron M (Dothan B1), 90nm miscellaneous (1/ebx): process local APIC physical ID = 0x0 (0) cpu count = 0x0 (0) CLFLUSH line size = 0x8 (8) brand index = 0x16 (22) brand id = 0x16 (22): Intel Pentium M, .13um feature information (1/edx): x87 FPU on chip = true virtual-8086 mode enhancement = true debugging extensions = true page size extensions = true time stamp counter = true RDMSR and WRMSR support = true physical address extensions = false machine check exception = true CMPXCHG8B inst. = true APIC on chip = false SYSENTER and SYSEXIT = true memory type range registers = true PTE global bit = true machine check architecture = true conditional move/compare instruction = true page attribute table = true page size extension = false processor serial number = false CLFLUSH instruction = true debug store = true thermal monitor and clock ctrl = true MMX Technology = true FXSAVE/FXRSTOR = true SSE extensions = true SSE2 extensions = true self snoop = true hyper-threading / multi-core supported = false therm. monitor = true IA64 = false pending break event = true feature information (1/ecx): PNI/SSE3: Prescott New Instructions = false PCLMULDQ instruction = false 64-bit debug store = false MONITOR/MWAIT = false CPL-qualified debug store = false VMX: virtual machine extensions = false SMX: safer mode extensions = false Enhanced Intel SpeedStep Technology = true thermal monitor 2 = true SSSE3 extensions = false context ID: adaptive or shared L1 data = false FMA instruction = false CMPXCHG16B instruction = false xTPR disable = false perfmon and debug = false process context identifiers = false direct cache access = false SSE4.1 extensions = false SSE4.2 extensions = false extended xAPIC support = false MOVBE instruction = false POPCNT instruction = false time stamp counter deadline = false AES instruction = false XSAVE/XSTOR states = false OS-enabled XSAVE/XSTOR = false AVX: advanced vector extensions = false F16C half-precision convert instruction = false RDRAND instruction = false hypervisor guest status = false cache and TLB information (2): 0xb0: instruction TLB: 4K, 4-way, 128 entries 0xb3: data TLB: 4K, 4-way, 128 entries 0x02: instruction TLB: 4M pages, 4-way, 2 entries 0xf0: 64 byte prefetching 0x7d: L2 cache: 2M, 8-way, sectored, 64 byte lines 0x30: L1 cache: 32K, 8-way, 64 byte lines 0x04: data TLB: 4M pages, 4-way, 8 entries 0x2c: L1 data cache: 32K, 8-way, 64 byte lines extended feature flags (0x80000001/edx): SYSCALL and SYSRET instructions = false execution disable = false 1-GB large page support = false RDTSCP = false 64-bit extensions technology available = false Intel feature flags (0x80000001/ecx): LAHF/SAHF supported in 64-bit mode = false LZCNT advanced bit manipulation = false 3DNow! PREFETCH/PREFETCHW instructions = false brand = " Intel(R) Pentium(R) M processor 1.50GHz" (multi-processing synth): none (multi-processing method): Intel leaf 1 (synth) = Intel Pentium M (Dothan B1), 90nm
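
    For reference, if the installer does eventually succeed, the installed system will need the same kernel parameter on every boot. A minimal sketch of making forcepae permanent via GRUB, assuming the stock /etc/default/grub layout (the sed one-liner is an illustration, not taken from the help page):

        # Check how the current session was booted and whether PAE is active.
        cat /proc/cmdline
        grep -ow pae /proc/cpuinfo | head -1

        # Prepend forcepae to the default kernel command line
        # (a /etc/default/grub.bak backup is kept).
        sudo sed -i.bak 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&forcepae /' /etc/default/grub

        # Regenerate grub.cfg so the change applies on the next boot.
        sudo update-grub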

    Read the article

  • Week in Geek: 4chan Falls Victim to DDoS Attack Edition

    - by Asian Angel
    This week we learned how to tweak the low battery action on a Windows 7 laptop, access an eBook collection anywhere in the world, “extend iPad battery life, batch resize photos, & sync massive music collections”, went on a reign of destruction with Snow Crusher, and had fun decorating our desktops with abstract icon collections. Photo by pasukaru76.

    Random Geek Links: We have included extra news article goodness to help you catch up on any developments that you may have missed during the holiday break this past week. Note: The three 27C3 articles listed here represent three different presentations at the 27th Chaos Communication Congress hacker conference.

    - 4chan victim of DDoS as FBI investigates role in PayPal attack: Users of 4chan may have gotten a taste of their own medicine after the site was knocked offline by a DDoS attack from an unknown origin early Thursday morning.
    - Report: FBI seizes server in probe of WikiLeaks attacks: The FBI has seized a server in Texas as part of its hunt for the groups behind the pro-WikiLeaks denial-of-service attacks launched in December against PayPal, Visa, MasterCard, and others.
    - Mozilla exposes older user-account database: Mozilla has disabled 44,000 older user accounts for its Firefox add-ons site after a security researcher found part of a database of the account information on a publicly available server.
    - Data breach affects 4.9 million Honda customers: Japanese automaker Honda has put some 2.2 million customers in the United States on a security breach alert after a database containing information on the owners and their cars was hacked.
    - Chinese Trojan discovered in Android games: An Android-based Trojan called “Geinimi” has been discovered in the wild and the Trojan is capable of sending personal information to remote servers and exhibits botnet-like behavior.
    - 27C3 presentation claims many mobiles vulnerable to SMS attacks: According to security experts, an ‘SMS of death’ threatens to disable many current Sony Ericsson, Samsung, Motorola, Micromax and LG mobiles.
    - 27C3: GSM cell phones even easier to tap: Security researchers have demonstrated how open source software on a number of revamped, entry-level cell phones can decrypt and record mobile phone calls in the GSM network.
    - 27C3: danger lurks in PDF documents: Security researcher Julia Wolf has pointed out numerous, previously hardly known, security problems in connection with Adobe’s PDF standard.
    - Critical update for WordPress: A critical update has been made available for WordPress in the form of version 3.0.4. The update fixes a security bug in WordPress’s KSES library.
    - McAfee Labs Predicts Geolocation, Mobile Devices and Apple Will Top the List of Targets for Emerging Threats in 2011: The list comprises 2010’s most buzzed about platforms and services, including Google’s Android, Apple’s iPhone, foursquare, Google TV and the Mac OS X platform, which are all expected to become major targets for cybercriminals. McAfee Labs also predicts that politically motivated attacks will be on the rise.
    - Windows Phone 7 piracy materializes with FreeMarketplace: A proof-of-concept application, FreeMarketplace, that allows any Windows Phone 7 application to be downloaded and installed free of charge has been developed.
    - Empty email accounts, and some bad buzz for Hotmail: In the past few days, a number of Hotmail users have been complaining about a rather disconcerting issue: their Hotmail accounts, some up to 10 years old, appear completely empty. No emails, no folders, nothing, just what appears to be a new account.
    - Reports: Nintendo warns of 3DS risk for kids: Nintendo has reportedly issued a warning that the 3DS, its eagerly awaited glasses-free 3D portable gaming device, should not be used by children under 6 when the gadget is in 3D-viewing mode.
    - Google eyes ‘cloaking’ as next antispam target: Google plans to take a closer look at the practice of “cloaking,” or presenting one look to a Googlebot crawling one’s site while presenting another look to users.
    - Facebook, Twitter stock trading drawing SEC eye? The high degree of investor interest in shares of hot Silicon Valley companies that aren’t yet publicly traded–like Facebook, Twitter, LinkedIn, and Zynga–may be leading to scrutiny from the U.S. Securities and Exchange Commission (SEC).

    Random TinyHacker Links (Photo by jcraveiro):

    - Exciting Software Set for Release in 2011: A few bloggers from great websites such as How-To Geek, Guiding Tech and 7 Tutorials took the time to sit down and talk about their software wishes for 2011. Take the time to read it and share…
    - Wikileaks Infopr0n: An infographic detailing the quest to plug WikiLeaks.
    - The New York Times Guide to Mobile Apps: A growing collection of all mobile app coverage by the New York Times as well as lists of favorite apps from Times writers.
    - 7,000,000,000 (Video): A fascinating look at the world’s population via National Geographic Magazine.

    Super User Questions: Check out the great answers to these hot questions from Super User.

    - How to use a Personal computer as a Linux web server for development purposes?
    - How to link processing power of old computers together?
    - Free virtualization tool for testing suspicious files?
    - Why do some actions not work with Remote Desktop?
    - What is the simplest way to send a large batch of pictures to a distant friend or colleague?

    How-To Geek Weekly Article Recap: Had a busy week and need to get caught up on your HTG reading? Then sit back and relax while enjoying these hot posts full of how-to roundup goodness.

    - The 50 Best How-To Geek Windows Articles of 2010
    - The 20 Best How-To Geek Explainer Topics for 2010
    - The 20 Best How-To Geek Linux Articles of 2010
    - How to Search Just the Site You’re Viewing Using Google Search
    - Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    One Year Ago on How-To Geek: Need more how-to geekiness for your weekend? Then look through this great batch of articles from one year ago that focus on dual-booting and O.S. installation goodness.

    - Dual Boot Your Pre-Installed Windows 7 Computer with Vista
    - Dual Boot Your Pre-Installed Windows 7 Computer with XP
    - How To Setup a USB Flash Drive to Install Windows 7
    - Dual Boot Your Pre-Installed Windows 7 Computer with Ubuntu
    - Easily Install Ubuntu Linux with Windows Using the Wubi Installer

    The Geek Note: We hope that you and your families have had a terrific holiday break as everyone prepares to return to work and school this week. Remember to keep those great tips coming in to us at [email protected]! Photo by pjbeardsley.

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >