Search Results

Search found 341 results on 14 pages for 'overwriting'.

Page 10/14 | < Previous Page | 6 7 8 9 10 11 12 13 14  | Next Page >

  • Announcing MySQL Enterprise Backup 3.7.1

    - by Hema Sridharan
    The MySQL Enterprise Backup (MEB) Team is pleased to announce the release of MEB 3.7.1, a maintenance release that includes bug fixes and enhancements to some of the existing features. The most important feature introduced in this release is Automatic Incremental Backup. A new argument syntax for the --incremental-base option makes it simpler to perform automatic incremental backups. When the options --incremental and --incremental-base=history:last_backup are combined, the mysqlbackup command uses the metadata in the mysql.backup_history table to determine the LSN to use as the lower limit of the incremental backup. You no longer need to keep track of the actual LSN (as with the option --start-lsn=LSN) or even the location of the previous backup (as with the option --incremental-base=dir:directory_path).

    This release also includes various bug fixes related to some of the options used in MEB. A few of the most important ones are listed below:

    1. The option --force now allows overwriting InnoDB data and log files in combination with the apply-log and apply-incremental-backup options, and replacing the image file in combination with the backup-to-image and backup-dir-to-image options.
    2. Resolved a bug that prevented MEB from interfacing with third-party storage managers to execute backup and restore jobs in combination with the SBT interface and the associated --sbt* options for mysqlbackup.
    3. When MEB is run with the copy-back option, it now displays warnings as existing files are overwritten.

    For more information about other bug fixes, please refer to the change log at http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/meb-news.html. The complete MEB documentation is located at http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/index.html. You will find the binaries for the new release in My Oracle Support, https://support.oracle.com. Choose the "Patches & Updates" tab, and then use the "Product or Family (Advanced Search)" feature. If you haven't looked at MEB 3.7.1 recently, please do so now and let us know how MEB works for you. Send your feedback to [email protected].
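
    As an illustration of the new syntax, a minimal command-line sketch of an automatic incremental backup. The --incremental and --incremental-base options come from the announcement above; the connection options and the --incremental-backup-dir value are placeholders, and the exact set of required options should be checked against the MEB 3.7 manual linked above.

        # Take an incremental backup whose starting LSN is read from the
        # mysql.backup_history table instead of being passed by hand
        mysqlbackup --user=admin --password \
            --incremental \
            --incremental-base=history:last_backup \
            --incremental-backup-dir=/backups/incr \
            backup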

    Read the article

  • What are the alternatives to "overriding a method" when using composition instead of inheritance?

    - by Sebastien Diot
    If we should favor composition over inheritance, the data part of it is clear, at least for me. What I don't have a clear solution for is how overriding methods, or simply implementing them if they are defined in a pure virtual form, should be handled. An obvious way is to wrap the instance representing the base class inside the instance representing the sub-class. But this has major downsides: if you have, say, 10 methods and you want to override a single one, you still have to delegate every other method anyway. And if there are several layers of inheritance, you now have several layers of wrapping, which becomes less and less efficient. Also, this only solves the problem for the object's "client": when another object calls the top wrapper, things happen like in inheritance. But when a method of the deepest instance, the base class, calls its own methods that have been wrapped and modified, the wrapping has no effect: the call is handled by its own method instead of by the highest wrapper. One extreme alternative that would solve those problems would be to have one instance per method. You only wrap the methods that you want to override, so there is no pointless delegation. But now you end up with an incredible number of classes and object instances, which will have a negative effect on memory usage, and it requires a lot more coding too. So, are there alternatives (preferably alternatives that can be used in Java) that:

    - do not result in many levels of pointless delegation without any changes;
    - make sure that not only the client of an object, but also all the code of the object itself, is aware of which implementation of a method should be called;
    - do not result in an explosion of classes and instances;
    - ideally put the extra memory overhead that is required at the "class"/"particular composition" level (static, if you will), rather than having every object pay the memory overhead of composition?

    My feeling tells me that the instance representing the base class should be at the "top" of the stack/layers, so it receives calls directly and can process them directly too if they are not overridden. But I don't know how to do it that way.

    Read the article

  • Playing a video logs me out

    - by Kartick Vaddadi
    When I try to play a video in vlc, totem or banshee, it immediately logs me out. Sometimes this happens when I try to full-screen the video. This seems to happen only after upgrading to Ubuntu 11, and happens for multiple kinds of files, like avi and m4v. The motherboard is an Asus a8v-mx. Please help me fix my Ubuntu installation. Thanks. Here are the relevant entries from syslog:

        21:12:27 enlightenment kernel: [ 488.157457] powernow-k8: Hardware error - pending bit very stuck - no further pstate changes possible
        May 1 21:12:27 enlightenment kernel: [ 488.158634] powernow-k8: transition frequency failed
        May 1 21:12:27 enlightenment kernel: [ 488.264015] powernow-k8: failing targ, change pending bit set
        May 1 21:12:27 enlightenment kernel: [ 488.306466] agpgart-amd64 0000:00:00.0: AGP 3.0 bridge
        May 1 21:12:27 enlightenment kernel: [ 488.306489] agpgart-amd64 0000:00:00.0: putting AGP V3 device into 8x mode
        May 1 21:12:27 enlightenment kernel: [ 488.306562] pci 0000:01:00.0: putting AGP V3 device into 8x mode
        May 1 21:12:27 enlightenment kernel: [ 488.372044] powernow-k8: error - out of sync, fix 0x2 0xa, vid 0x4 0x4
        May 1 21:12:27 enlightenment kernel: [ 488.372055] powernow-k8: ph2 null fid transition 0xa
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Successfully made thread 1987 of process 1987 (n/a) owned by '105' high priority at nice level -11.
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Supervising 1 threads of 1 processes of 1 users.
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Successfully made thread 1988 of process 1987 (n/a) owned by '105' RT at priority 5.
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Supervising 2 threads of 1 processes of 1 users.
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Successfully made thread 1989 of process 1987 (n/a) owned by '105' RT at priority 5.
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Supervising 3 threads of 1 processes of 1 users.
        May 1 21:12:32 enlightenment gdm-simple-greeter[1975]: Gtk-WARNING: /build/buildd/gtk+2.0-2.24.4/gtk/gtkwidget.c:5687: widget not within a GtkWindow
        May 1 21:12:32 enlightenment gdm-simple-greeter[1975]: WARNING: Unable to load CK history: no seat-id found
        May 1 21:12:34 enlightenment gdm-session-worker[1978]: GLib-GObject-CRITICAL: g_value_get_boolean: assertion `G_VALUE_HOLDS_BOOLEAN (value)' failed
        May 1 21:12:38 enlightenment gdm-session-worker[1978]: pam_sm_authenticate: Called
        May 1 21:12:38 enlightenment gdm-session-worker[1978]: pam_sm_authenticate: username = [rama]
        May 1 21:12:39 enlightenment rtkit-daemon[1304]: Successfully made thread 2108 of process 2108 (n/a) owned by '1000' high priority at nice level -11.
        May 1 21:12:39 enlightenment rtkit-daemon[1304]: Supervising 4 threads of 2 processes of 2 users.
        May 1 21:12:39 enlightenment pulseaudio[2108]: pid.c: Stale PID file, overwriting.
        May 1 21:12:39 enlightenment rtkit-daemon[1304]: Successfully made thread 2111 of process 2108 (n/a) owned by '1000' RT at priority 5.
        May 1 21:12:39 enlightenment rtkit-daemon[1304]: Supervising 5 threads of 2 processes of 2 users.
        May 1 21:12:39 enlightenment rtkit-daemon[1304]: Successfully made thread 2112 of process 2

    Read the article

  • Lesser Known NHibernate Session Methods

    - by Ricardo Peres
    The NHibernate ISession, the core of NHibernate usage, has some methods which are quite misunderstood and underused, to name a few: Merge, Persist, Replicate and SaveOrUpdateCopy. Their purposes are:

    - Merge: copies properties from a transient entity to an eventually loaded entity with the same id in the first level cache; if there is no loaded entity with the same id, one will be loaded and placed in the first level cache first; if using version, the transient entity must have the same version as in the database.
    - Persist: similar to Save or SaveOrUpdate, attaches a maybe-new entity to the session, but does not generate an INSERT or UPDATE immediately, and thus the entity does not get a database-generated id; it will only get it at flush time.
    - Replicate: copies an instance from one session to another session, perhaps from a different session factory.
    - SaveOrUpdateCopy: attaches a transient entity to the session and tries to save it.

    Here are some samples of their use.

        ISession session = ...;
        AuthorDetails existingDetails = session.Get<AuthorDetails>(1); //loads an entity and places it in the first level cache
        AuthorDetails detachedDetails = new AuthorDetails { ID = existingDetails.ID, Name = "Changed Name" }; //a detached entity with the same ID as the existing one
        Object mergedDetails = session.Merge(detachedDetails); //merges the Name property from the detached entity into the existing one; the detached entity does not get attached
        session.Flush(); //saves the existingDetails entity, since it is now dirty, due to the change in the Name property

        AuthorDetails details = ...;
        ISession session = ...;
        session.Persist(details); //details.ID is still 0
        session.Flush(); //saves the details entity now and fetches its id

        ISessionFactory factory1 = ...;
        ISessionFactory factory2 = ...;
        ISession session1 = factory1.OpenSession();
        ISession session2 = factory2.OpenSession();
        AuthorDetails existingDetails = session1.Get<AuthorDetails>(1); //loads an entity
        session2.Replicate(existingDetails, ReplicationMode.Overwrite); //saves it into another session, overwriting any possibly existing one with the same id; other options are Ignore, where any existing record with the same id is left untouched, Exception, where an exception is thrown if there is a record with the same id, and LatestVersion, where the latest version wins

    Read the article

  • How do I get long command lines to wrap to the next line?

    - by BrianH
    Edit: It was my .bashrc file. I've copied the same profile from machine to machine, and I used special characters in my $PS1 that are somehow throwing it off. I'm now sticking with the standard bash variables for my $PS1. Thanks to @ændrük for the tip on the .bashrc! ...End Edit...

    Something I have noticed in Ubuntu for a long time that has been frustrating to me is that when I am typing a command at the command line that gets longer (wider) than the terminal width, instead of wrapping to a new line, it goes back to column 1 on the same line and starts over-writing the beginning of my command line. (It doesn't actually overwrite the actual command, but visually, it is overwriting the text that was displayed.) It's hard to explain without seeing it, but let's say my terminal was 20 characters wide (mine is more like 120 characters, but for the sake of an example), and I want to echo the English alphabet. What I type is this:

        echo abcdefghijklmnopqrstuvwxyz

    But what my terminal looks like before I hit the Enter key is:

        pqrstuvwxyzghijklmno

    When I hit Enter, it echoes abcdefghijklmnopqrstuvwxyz, so I know the command was received properly. It just wrapped my typing after the "o" and started over on the same line. What I would expect to happen, if I typed this command in on a terminal that was only 20 characters wide, would be this:

        echo abcdefghijklmno
        pqrstuvwxyz

    Background: I am using bash as my shell, and I have this line in my ~/.bashrc:

        set -o vi

    to be able to navigate the command line with vi commands. I am currently using Ubuntu 10.10 server and connecting to the server with PuTTY. In any other environment I have worked in, if I type a long command line, it will add a new line underneath the line I am working on when my command gets longer than the terminal width, and when I keep typing I can see my command on 2 different lines. But for as long as I can remember using Ubuntu, my long commands only occupy 1 line.

    This also happens when I am going back to previous commands in the history (I hit Esc, then 'K' to go back to previous commands) - when I get to a previous command that was longer than the terminal width, the command line gets mangled and I cannot tell where I am at in the command. The only work-around I have found to see the entire long command is to hit "Esc-V", which opens up the current command in a VI editor.

    I don't think I have anything odd in my .bashrc file. I commented out the "set -o vi" line, and I still had the problem. I downloaded a fresh copy of PuTTY and didn't make any changes to the configuration - I just typed in my host name to connect, and I still have the problem, so I don't think it's anything with PuTTY (unless I need to make some config changes). Has anyone else had this problem, and can anyone think of how to fix it? Thanks in advance! Brian
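
    The edit at the top points at special characters in $PS1, and that is the usual culprit: bash needs every non-printing escape sequence in the prompt wrapped in \[ ... \] so it can compute the prompt's printable width; without the wrapping it miscounts and wraps long commands back onto column 1, exactly as described. A minimal sketch, assuming a color prompt (the prompt contents below are only an example):

        # Problematic: the color escapes are counted as printable characters,
        # so bash's idea of the line width is wrong
        PS1="\e[32m\u@\h:\w\$ \e[0m"

        # Fixed: non-printing sequences wrapped in \[ ... \]
        PS1="\[\e[32m\]\u@\h:\w\$ \[\e[0m\]"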

    Read the article

  • Storing game objects with generic object information

    - by Mick
    In a simple game object class, you might have something like this:

        public abstract class GameObject {
            protected String name;
            // other properties
            protected double x, y;

            public GameObject(String name, double x, double y) {
                // etc
            }

            // setters, getters
        }

    I was thinking, since a lot of game objects (e.g. generic monsters) will share the same name, movement speed, attack power, etc., it would be better to have all that information shared between all monsters of the same type. So I decided to have an abstract class "ObjectData" to hold all this shared information. So whenever I create a generic monster, I would use the same pre-created "ObjectData" for it. Now the above class becomes more like this:

        public abstract class GameObject {
            protected ObjectData data;
            protected double x, y;

            public GameObject(ObjectData data, double x, double y) {
                // etc
            }

            // setters, getters

            public String getName() { return data.getName(); }
        }

    So to tailor this specifically for a Monster (it could be done in a very similar way for Npcs, etc.), I would add 2 classes: Monster, which extends GameObject, and MonsterData, which extends ObjectData. Now I'll have something like this:

        public class Monster extends GameObject {
            public Monster(MonsterData data, double x, double y) {
                super(data, x, y);
            }
        }

    This is where my design question comes in. Since MonsterData would hold data specific to a generic monster (and would vary from what, say, NpcData holds), what would be the best way to access this extra information in a system like this? At the moment, since the data variable is of type ObjectData, I'll have to cast data to MonsterData whenever I use it inside the Monster class. One solution I thought of is this, but it might be bad practice:

        public class Monster extends GameObject {
            private MonsterData data; // <- this part here

            public Monster(MonsterData data, double x, double y) {
                super(data, x, y);
                this.data = data; // <- this part here
            }
        }

    I've read that, for one, I should generally avoid shadowing the superclass's fields. What do you guys think of this solution? Is it bad practice? Do you have any better solutions? Is the design in general bad? How should I redesign this if it is? Thanks in advance for any replies, and sorry about the long question. Hopefully it all makes sense!

    Read the article

  • Exchange 2010, Exchange 2003 Mail Flow issue

    - by Ryan Roussel
    While performing the initial Exchange 2010 deployment for a customer migrating from Exchange 2003, I ran into an issue with mail flow between the two environments. The Exchange 2003 mailboxes could send to Exchange 2010, as well as to and from the internet. Exchange 2010 mailboxes could send and receive to the internet; however, they could not send to Exchange 2003 mailboxes.

    After scouring the internet for a solution, it seemed quite a few people were experiencing this issue with no resolution to be found, or at least not easily. After many attempts at manually deleting and recreating the routing group connectors, I finally lucked onto the answer in an obscure comment left to another blogger: if inheritable permissions are not allowed on the Exchange 2003 object in the Active Directory schema, Exchange Server authentication cannot be achieved between the servers.

    It seems that when BlackBerry Enterprise Server gets added to 2003 environments, a lot of admins get tricky and add the BES Admin user explicitly to the server object to allow inheritance down from there to all mailboxes. The problem is they also coincidentally turn off inheritance to the server object itself from its parent containers. You can re-establish inheritance without overwriting the existing ACL, however, so the BES Admin can remain in the server object ACL. By re-establishing inheritance to the 2003 server object, mail flow was instantly restored between the servers.

    To re-establish inheritance:

    1. Open ADSIEdit by adding the snap-in to an MMC (it should be included on your 2008 server where Exchange 2010 is installed).
    2. Navigate to Configuration > Services > Microsoft Exchange > Exchange Organization > Administrative Groups > First Administrative Group > Servers.
    3. In the right pane, right-click on the CN=Server Name of your Exchange 2003 server and select Properties.
    4. Navigate to the Security tab and hit Advanced toward the bottom.
    5. Check the checkbox that reads "include inheritable permissions" toward the bottom of the dialogue box.

    Read the article

  • How do I serve multiple domains from the same directory and codebase without my configuration breaking when apache.conf is overwritten?

    - by neokio
    I have 20 domains on a VPS running cPanel. One public_html is filled with code; the remaining 19 are symbolic links to that one. (For example, assets is a directory within public_html ... for the 19 others, there's a symbolic link to that directory in each account's public_html dir.) It's all PHP / MySQL database driven, with content changing depending on the domain. It works like a charm, assuming cPanel has suExec enabled correctly, and assuming apache.conf does NOT have SymLinksIfOwnerMatch enabled. However, every few weeks, my apache.conf is mysteriously overwritten, re-enabling SymLinksIfOwnerMatch and disabling all 19 linked sites for as long as it takes for me to notice. Here's the offending line in apache.conf:

        <Directory "/">
            AllowOverride All
            Options ExecCGI FollowSymLinks IncludesNOEXEC Indexes SymLinksIfOwnerMatch
        </Directory>

    The addition of SymLinksIfOwnerMatch disables the sites in a strange way ... the html is generated correctly, but all css/js/images in the html fail to load. Clicking any link redirects to /. And I have no idea why. I do have a few things in my .htaccess, which work fine when SymLinksIfOwnerMatch is not present:

        <IfModule mod_rewrite.c>
            # www.example.com -> example.com
            RewriteCond %{HTTPS} !=on
            RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
            RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]

            # Remove query strings from static resources
            RewriteRule ^assets/js/(.*)_v(.*)\.js /assets/js/$1.js [L]
            RewriteRule ^assets/css/(.*)_v(.*)\.css /assets/css/$1.css [L]
            RewriteRule ^assets/sites/(.*)/(.*)_v(.*)\.css /assets/sites/$1/$2.css [L]

            # Block access to hidden files and directories
            RewriteCond %{SCRIPT_FILENAME} -d [OR]
            RewriteCond %{SCRIPT_FILENAME} -f
            RewriteRule "(^|/)\." - [F]

            # SLIR ... reroute images to image processor
            RewriteCond %{REQUEST_URI} ^/images/.*$
            RewriteRule ^.*$ - [L]

            # ignore rules if URL is a file
            RewriteCond %{REQUEST_FILENAME} !-f
            # ignore rules if URL is not php
            #RewriteCond %{REQUEST_URI} !\.php$
            # catch-all for routing
            RewriteRule . index.php [L]
        </IfModule>

    I also use most of the 5G Blacklist 2013 for protection against exploits and other depravities. Again, all of this works great, except when SymLinksIfOwnerMatch gets added back into apache.conf. Since I've failed to find the cause of whatever cPanel/security update is overwriting apache.conf, I thought there might be a more correct way to accomplish my goal using group permissions. I've created a 'www' group, added all accounts to the group, and chmod -R'd the code source to use that group. Everything is 644 or 755. But that doesn't seem to be enough. My unix isn't that strong. Do you need to restart something for group changes to take effect? Probably not. Anyways, I'm entering unknown territory. Can anyone recommend the right way to configure a website for multiple sites using one codebase that doesn't rely on apache.conf?
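
    On the group-permissions idea at the end: a hedged sketch, assuming a shared group called www, a site account named site1user, and the canonical code under /home/main/public_html (all three names are placeholders). Group membership only takes effect at the user's next login (running services may need a restart), and the setgid bit keeps newly created files in the shared group; whether this alone replaces the symlink setup also depends on how suExec resolves the symlinked docroots.

        # Add a site account to the shared group (effective at next login)
        usermod -a -G www site1user

        # Give the group read access to the shared code and make new files
        # inherit the group via the setgid bit on directories
        chgrp -R www /home/main/public_html
        chmod -R g+rX /home/main/public_html
        find /home/main/public_html -type d -exec chmod g+s {} +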

    Read the article

  • Recovery of Windows DFS partition with shadow copy versioned files when overwritten with older modified versions

    - by patjbs
    I've noticed the following "bug" on a DFS volume with shadow copies: Pretend you have the following folders/files under shadow copy versioning, going back two weeks. MyDirectory+ MyFile - Modified Date 8/1/2009 The current date: 8/30/2009 You have another version of MyFile stored elsewhere, with a modified date of 7/1/2009. Copy your other version of MyFile into MyDirectory, overwriting the newest version. I expected that you could roll back to the version that was there when it last imaged, say on the prior day and recover your 8/1 version. Not the case. Now, when you go to look at previous versions for the past two weeks, the versioning of that file will be entirely lost, and you'll be stuck with your older 7/1 version. Suckage. Questions: (1) Is this intentional, and if so, what's the rationale? I assume that DFS picks up on the versioning based on the current file, and that's what's wiping out prior versions, but it seems like a fairly stupid/naive way of handling versioning to me. (2) Is there a way to backtrack out of this, without resorting to restoration from other backup mediums? Thanks!

    Read the article

  • Moving automatically spam messages to a folder in Postfix

    - by cad
    Hi. My problem is that I want to automatically move spam messages to a folder and I'm not sure how. I have a Linux box providing email access. The MTA is Postfix and the IMAP server is Courier. As the webmail client I use SquirrelMail. To filter spam I use SpamAssassin, and it is working OK. SpamAssassin is overwriting subjects with [--- SPAM 14.3 ---] Viagra... It is also adding headers:

        X-Spam-Flag: YES
        X-Spam-Checker-Version: SpamAssassin 3.2.5 (2008-06-10) on xxxx
        X-Spam-Level: **************
        X-Spam-Status: Yes, score=14.3 required=2.0 tests=BAYES_99,
            DATE_IN_FUTURE_24_48,HTML_MESSAGE,MIME_HTML_ONLY,RCVD_IN_PBL,
            RCVD_IN_SORBS_WEB,RCVD_IN_XBL,RDNS_NONE,URIBL_RED,URIBL_SBL autolearn=no version=3.2.5
        X-Spam-Report:
            * 0.0 URIBL_RED Contains an URL listed in the URIBL redlist [URIs: myimg.de]
            * 3.5 BAYES_99 BODY: Bayesian spam probability is 99 to 100% [score: 1.0000]
            * 0.9 RCVD_IN_PBL RBL: Received via a relay in Spamhaus PBL [113.170.131.234 listed in zen.spamhaus.org]
            * 3.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
            * 0.6 RCVD_IN_SORBS_WEB RBL: SORBS: sender is a abuseable web server [113.170.131.234 listed in dnsbl.sorbs.net]
            * 3.2 DATE_IN_FUTURE_24_48 Date: is 24 to 48 hours after Received: date
            * 0.0 HTML_MESSAGE BODY: HTML included in message
            * 1.5 MIME_HTML_ONLY BODY: Message only has text/html MIME parts
            * 1.5 URIBL_SBL Contains an URL listed in the SBL blocklist [URIs: myimg.de]
            * 0.1 RDNS_NONE Delivered to trusted network by a host with no rDNS

    I want to automatically move spam messages to a folder. Ideally (not sure if it is possible) I would only move messages with a score of 5.0 or more to the folder; spam scoring between 2.0 and 5.0 I want to be kept in the Inbox. (I plan to switch autolearn on later.) After reading a lot on the procmail, Postfix and SpamAssassin sites and googling a lot (lots of outdated howtos), I found two solutions, but I am not sure which is the best or if there is another one:

    - Put a rule in SquirrelMail (a dirty solution?)
    - Use procmail

    Which is the best option? Do you have any updated howto about it? Thanks

    Read the article

  • ext4 filesystem corruption -- maybe hardware error?

    - by pts
    I'm getting these errors in dmesg about half an hour after I turn on the computer:

        [ 1355.677957] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318420: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251700offset=0(0), inode=1802725748, rec_len=179136, name_len=32
        [ 1355.677973] Aborting journal on device sda2-8.
        [ 1355.678101] EXT4-fs (sda2): Remounting filesystem read-only
        [ 1355.690144] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1318416: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251699offset=0(0), inode=2194783952, rec_len=53280, name_len=152
        [ 1356.864720] EXT4-fs error (device sda2): htree_dirblock_to_tree: inode #1312795: (comm updatedb.mlocat) bad entry in directory: directory entry across blocks - block=5251176offset=1460(13748), inode=1432317541, rec_len=208208, name_len=119

    /dev/sda is an SSD, and it's using the noop scheduler. /etc/fstab entry:

        UUID=acb4eefa-48ff-4ee1-bb5f-2dccce7d011f / ext4 errors=remount-ro,noatime,discard,user_xattr 0 1

    System information:

        $ cat /proc/mounts | grep /dev/sd
        /dev/sda1 /boot ext2 rw,noatime,errors=continue 0 0
        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=10.04
        DISTRIB_CODENAME=lucid
        DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"
        $ uname -a
        Linux leetpad 2.6.35-30-generic-pae #61~lucid1-Ubuntu SMP Thu Oct 13 21:14:29 UTC 2011 i686 GNU/Linux

    I've run memtest for 7 hours; it didn't find any memory errors. Any obvious ideas about what can go wrong in this case? The most reasonable thing I can imagine is that the SSD is silently dropping some write requests, which eventually leads to an EXT4 filesystem inconsistency (but no disk I/O errors). How can this happen? Is there a relevant configuration option I should ensure is set correctly? What tools should I use to diagnose the hardware failures? Would it be possible to diagnose the SSD failure without overwriting data?
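
    For the last two questions, a few read-only checks that do not write to the disk; a sketch assuming the device names from the logs above (smartctl comes from the smartmontools package):

        # SMART health, attributes and error log for the SSD (read-only)
        smartctl -a /dev/sda

        # Read-only surface scan; without -w, badblocks never writes
        badblocks -sv /dev/sda

        # Filesystem check that makes no changes (-n answers "no" to everything);
        # run it with the partition unmounted or mounted read-only
        fsck.ext4 -fn /dev/sda2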

    Read the article

  • Intel Rapid Storage Technology (pre-OS) driver installation

    - by Nero theZero
    My desktop machine is built on a Gigabyte GA-Z87-UD3H board, and Gigabyte provides the latest driver for Intel Rapid Storage Technology (IRST), which I installed after installing the OS. The same goes for my Lenovo ThinkPad T420. For both machines, checking the controller device under the IDE ATA/ATAPI Controllers section in Device Manager, I see the driver has been updated to the latest version.

    - I set the SATA controller to AHCI in the BIOS.
    - On the desktop machine I have one WD 2TB Black and one WD 3TB Green.
    - I don't use RAID, and have no plans to use it in the near future, but according to Intel, IRST improves performance in the single-disk scenario too.

    Now I have the following questions:

    1. What is the actual purpose of the IRST (pre-OS install) driver that isn't served by the post-OS driver that I installed? There must be some difference, otherwise there wouldn't be a pre-OS version of the driver, right?
    2. If I use the pre-OS procedure (loading the drivers at OS-installation time), do I still need the post-OS driver after successfully completing the OS installation? After installing from that one I got a quick-launch icon that runs the IRST configuration application. Where do I get that after installing the pre-OS driver?
    3. As it is "pre-OS", when I load it at OS-installation time, does it update anything at the BIOS level, or anywhere other than the HDD? I ask because I'm going to dual-boot Windows 7 with Windows 8.1, and after installing Windows 7, when I install Windows 8.1 and load the IRST driver for that, is there any chance of any "overwriting" or OS incompatibility? In short, is there anything specific to follow while installing the second OS?

    Read the article

  • Break all hardlinks within a folder

    - by Georges Dupéron
    I have a folder which contains a certain number of files which have hard links (in the same folder or somewhere else), and I want to de-hardlink these files, so they become independent and changes to their contents won't affect any other file (their link count becomes 1). Below, I give a solution which basically copies each hard link to another location, then moves it back in place. However, this method seems rather crude and error-prone, so I'd like to know if there is some command which will de-hardlink a file for me.

    Crude answer: find files which have hard links (Edit: to also find sockets, etc. that have hardlinks, use find -not -type d -links +1):

        find -type f -links +1

    A crude method to de-hardlink a file (copy it to another location, and move it back). Edit: As Celada said, it's best to do a cp -p below, to avoid losing timestamps and permissions. Edit: Create a temporary directory and copy to a file under it, instead of overwriting a temp file; this minimizes the risk of overwriting some data, though the mv command is still risky (thanks @Tobu).

        # This is unhardlink.sh
        set -e
        for i in "$@"; do
          temp="$(mktemp -d ./hardlnk-XXXXXXXX)"
          [ -e "$temp" ] && cp -ip "$i" "$temp/tempcopy" && mv "$temp/tempcopy" "$i" && rmdir "$temp"
        done

    So, to un-hardlink all hard links (Edit: changed -type f to -not -type d, see above):

        find -not -type d -links +1 -print0 | xargs -0 unhardlink.sh

    Read the article

  • Windows Server 2008 (Web Server) Replication

    - by justjoshingyou
    We have a load balanced environment with Windows Server 2008. What are some best practices to setting up replication across the web servers? Do I only want to replicate the web folders? How about replicating IIS changes - or do I need to make IIS changes on every server? I've never, ever set up replication, but I have worked with a web farm that used it before. Basically, I only know the basics about how it works, and am looking for any advice, guides, warnings, etc on setting this up. If you'd like to offer any advice, I'll let you know how our environment is for now. We have 1 prod server up and the second is nearly ready to go. We are using a cloud system and all machines are VM's. I am in the process of setting up the domain controller now (as I need to have one for DFS). Any ideas on the best way to go about setting up replication? Should we just stick the prod server in from the start or set up using a test VM and our second server and then switch it up later? I do not want to risk overwriting our prod server. Thanks!

    Read the article

  • HTTPS/HTTP redirects via .htaccess

    - by Winston
    I have a somewhat complicated problem I am trying to solve. I've used the following .htaccess directive to enable some sort of pretty URLs, and that worked fine. For example, http://myurl.com/shop would be redirected to http://myurl.com/index.php/shop, and that was working well (note that stuff such as myurl.com/css/mycss.css does not get redirected):

        RewriteEngine on
        RewriteCond ${REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    But now, as I have introduced SSL to my webpage, I want the following behaviour: I basically want the above behaviour for all pages except admin.php and login.php. Requests to those two pages should be redirected to the HTTPS part, whereas all other requests should be processed as specified above. I have come up with the following .htaccess, but it does not work. https://myurl.com/shop does not get redirected to http://myurl.com/index.php/shop, and http://myurl.com/admin.php does not get redirected to https://myurl.com/admin.php.

        RewriteEngine on
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ http://%{HTTP_HOST}/${REQUEST_URI} [R=301,L]

        RewriteCond %{HTTPS} off
        RewriteCond %{REQUEST_URI} ^(admin\.php$|login\.php$)
        RewriteRule ^(.*)$ https://myurl.com/%{REQUEST_URI} [R=301,L]

        RewriteCond %{REQUEST_URI} !^(index\.php$)
        RewriteCond %{SCRIPT_FILENAME} !-f
        RewriteCond %{SCRIPT_FILENAME} !-d
        RewriteRule ^/?(.*)$ index.php/$1 [L]

    I know it has something to do with rules overwriting each other, but I am not sure, since my knowledge of Apache is quite limited. How could I fix this apparently not-that-difficult problem, and how could I make my .htaccess more compact and elegant? Help is very much appreciated, thank you!

    Read the article

  • Accidentally dd'ed an image to the wrong drive / overwrote partition table + NTFS partition start

    - by Kento Locatelli
    I screwed up and set the wrong output for dd when trying to copy a FreeNAS ISO, overwriting the wrong external hard drive. Ironically, I was trying to set up a FreeNAS server for data backup...

    - The external drive is only used for data storage; the system is entirely intact.
    - The drive had a single NTFS partition filling the entire device (2TB WD Elements).
    - The drive originally had an MBR partition table. It now shows as having a GPT, presumably from the FreeNAS image.
    - The drive was mounted at the time, with maybe a couple of kB of data written/read after running dd.
    - The drive is just a few months old and healthy (regular SMART / fs checks).
    - I have not rebooted the OS (CrunchBang). /proc/partitions still holds the correct information (and has been stored).
    - I have dd's output (records in / out / bytes).
    - testdisk did not find any partitions on a quick or deep search.
    - I am running photorec to recover the more important data (a couple of recent plaintext files that hadn't been backed up yet). The vast majority of the disk content (80%) is unnecessary media files.

    My current plan is to let photorec do its thing, then recreate the MBR with gparted and use cfdisk to create another NTFS partition using the sector information from /sys/block/.../. Is that a good course of action (that is, is there a chance of success)? Or is there anything else I should try first? Possibly relevant information:

        dd if=FreeNAS-8.0.4-RELEASE-p3-x86.iso of=/dev/sdc:
        194568+0 records in
        194568+0 records out
        99618816 bytes (100 MB) copied

        grep . /sys/block/sdc/sdc*/{start,size}:
        /sys/block/sdc/sdc1/start:2048
        /sys/block/sdc/sdc1/size:3907022848

        cat /proc/partitions:
        major minor  #blocks  name
        ** Snipped **
        8       32 1953512448 sdc
        8       33 1953511424 sdc1

        current fdisk -l output:
        WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
        Disk /dev/sdc: 2000.4 GB, 2000396746752 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sdc doesn't contain a valid partition table
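
    As an illustration of the "recreate the partition entry from the saved sector numbers" step in the plan above, a hedged sketch using sfdisk instead of gparted/cfdisk. This is not a tested recovery procedure: run it only after photorec has finished (ideally while working from a full dd image of the disk), double-check the device name, and note that it only rewrites the MBR partition table in sector 0. The NTFS boot sector at the start of the partition is still overwritten and would then need to be restored from the NTFS backup boot sector, for example with testdisk's backup boot sector option.

        # Start sector and length (in 512-byte sectors) taken from
        # /sys/block/sdc/sdc1 above; partition type 7 = NTFS.
        # Older sfdisk may insist on --force because the layout is not
        # cylinder-aligned; it writes only the partition table, not the data.
        echo '2048,3907022848,7' | sfdisk -uS /dev/sdc

        # Verify what was written
        sfdisk -l /dev/sdc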

    Read the article

  • How does Firefox sync really work (when adding new devices)?

    - by tim11g
    I'm adding some less frequently used computers to my Firefox sync account. These computers were previously synced using Foxmarks BYOS. When I started using Firefox Sync, I deleted some old bookmarks. Later, as I added some other machines, old bookmarks (that still existed on the other machines) were synced back to my main machine. To prevent that from happening, I wonder if I perhaps need to delete all the bookmarks from new machines before adding them to the Sync account. But then I worry that it might sync the deletion of all the bookmarks and delete them all from the server and my other machines. Is there any documentation on the exact syncing behavior in the case of adding new devices? Is there any way to monitor progress and sync status? Is there any way to cause a "one way" sync for first time connection (sync server to browser only, overwriting everything in the browser? Is there any way to see a list of devices that are associated, and the last time they have synced? Thanks!

    Read the article

  • Restoring MBR, partition table, and boot sector of memory card without data loss ("USBC")

    - by Synetech
    Abstract I have a FAT32 memory card that when inserted into a computer causes Windows to prompt to format it. The card is definitely not supposed to be blank and has a bunch of files on it. Symptoms Using a hex-editor/disk-viewer, I examined the card and found that several sectors/clusters have been overwritten with something that has a signature of USBC at the start of the sector. Specifically, the master boot record (and partition table) is gone (hence Windows thinking the card is blank and needing to be formatted), as are the boot sectors (they have the USBC signature and a volume label of NO NAME and partition type of FAT32). Fortunately, it looks like both copies of the FAT are almost entirely intact (a few FAT entries at the start of a cluster here and there seem to be overwritten by USBC). The root directory is also nearly intact—I can see the volume label entry and subdirectory listings, but one sector is overwritten. (There are no more instances of USBC after the last one in the FAT2.) Hypothesis These observations seem to indicate some sort of virus that erases a few key filesystem structures, and then overwrites a few extra sectors here and there. Googling it seems to corroborate the idea of a virus, except that others report a file called USBC which does not apply here, and in fact, could not be possible since there is no filesystem to even see files. I cannot find any information about a virus with these symptoms, nor a removal tool. (I can't help but wonder if it is actually due to an autorun virus prevention tool.) Question I can likely fix the FAT corruption since they are mostly contiguous chains and maybe even the lost sector of the root directory, but does anyone know of a convenient way to restore or (re)create the MBR/partition table and boot sectors (without formatting or overwriting the data)?

    Read the article

  • Grub rescue, unknown file system. Can't boot into Windows 7

    - by Sam J
    So, I'm confused, so I'm also going to use this question to get clarification and fix my computer. Some background: I had Windows 7 on a 1 TB HDD and decided to partition my hard drive into two ~500 GB partitions, one for Windows 7 and one for Ubuntu or whatever flavour I desired (like a sandbox partition...). I installed Ubuntu, but the installation had issues, so I decided to uninstall. Note that before uninstallation I had to press F12 when I turned the machine on to boot from my primary HDD, then choose which OS I wanted to use. Undesirable, but it worked. Anyway, after I decided to uninstall Ubuntu, I went into Windows 7 (Start > Computer > Manage) and deleted the EXT4 filesystem (the Ubuntu partition), giving me 4xx GB of free space. However, when I restarted Windows 7, I was unable to boot Windows. When I DON'T hit F12, I see a blank screen with a flashing underscore. When I DO hit F12, I choose my primary HDD, and then I get a GRUB error:

        Unknown filesystem: grub rescue _

    Something I'm unclear about: GRUB boots Linux partitions, right? What boots Windows? Is GRUB "overwriting" the Windows bootloader? How can I completely get Windows back to normal? (I.e., it boots automatically without hitting F12.) Thanks for any help, I'm on a live CD version of Ubuntu right now until I can get back on Windows.

    Read the article

  • Looking for advice on using dd to back up a dual-boot laptop.

    - by AvatarOfChronos
    My question boils down to this: if I do "dd if=/dev/sda of=usbdrive", can anybody confirm that this will get everything, including the MBR, partition information and all four partitions, and create a drive that I can swap with the failing internal drive without losing anything? If this is done while the computer is running, will it still copy everything? At this point I'm afraid to shut down the computer for fear of it never starting again.

    Secondly, how tolerant is dd of failing drives? Has anybody used it to recover a half-dead drive before who can share any potential pitfalls? Did it get the data OK, or is this going to be a hope-for-the-best kind of situation?

    And lastly, if the USB drive is larger than the failing internal drive, will I still be able to expand the partitions later so I'm not losing space? This last part seems silly to ask, but with my current streak of bad luck I'll end up overwriting some magic bit and forever turning a 640gb hdd into a 500gb hdd.

    Also, if anybody has a better solution to create a complete clone that gets everything, I'm all for hearing about it.

    PostScript: I had been making periodic backups, however whatever miasma killed the laptop also got the NAS :( Post PostScript: both devices were on a UPS system.
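
    On the tolerance-to-failing-drives question: plain dd aborts on the first read error (or, with conv=noerror, silently skips data), so a common alternative for a half-dead disk is GNU ddrescue, which retries bad areas and keeps a map of what it could not read. A sketch, assuming the failing disk is /dev/sda and the USB disk is /dev/sdb (device names are placeholders - double-check them, since the direction matters); it copies the whole disk, MBR and partition table included, and is best run from a live CD rather than against a mounted, in-use system, which can produce an inconsistent copy.

        # First pass: grab everything that reads cleanly, skip bad areas quickly
        ddrescue -f -n /dev/sda /dev/sdb rescue.map

        # Second pass: go back and retry the bad areas a few times
        ddrescue -f -r3 /dev/sda /dev/sdb rescue.map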

    Read the article

  • Wipe free space on LVM-LUKS (dm-crypt) Volume

    - by peter4887
    My three system partitions are created with LVM on a LUKS partition (dm-crypt). These are /home, / and swap. The filesystem is ext4. They are encrypted because they are on my laptop and I don't want laptop thieves to get my data. But I often share my laptop with other people, so they can access my encrypted partitions. I don't want these people to be able to recover my cache and all the data I deleted. So I'm now trying to wipe all the free space on /home to prevent recovery with tools like photorec. (One overwrite should do; the need for multiple overwrites is just a rumor.) But still I haven't found any solution to wipe this free space successfully. I tried

        dd if=/dev/zero of=/home/fillitup bs=512 count=[count of free sectors]

    so my partition was completely full of data. df /dev/mapper/home said 100% is used and there are 0 sectors available. But I could still recover gigs of data with photorec, although I selected to recover just from the free space. photorec displays: /dev/mapper/home - 340 GB / 317 GiB (RO), but df displays that the size of /home is just 313G. Why are there these differences, and what does the 340 GB mean? It looks like there is a place on my /dev/mapper/home partition that I can't access to overwrite, but can access to recover. I also checked for corrupted sectors, but there aren't any. Maybe this is the space between my existing files? Does anyone know why I can't wipe my free space with dd, and how I can find the location of the loads of recoverable files, to securely delete them?
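
    A few notes and a sketch, not a definitive answer. photorec's "340 GB / 317 GiB" is the size of the whole logical volume in decimal vs. binary units, while df reports the ext4 filesystem's usable size, which is smaller because of filesystem metadata such as inode tables and the journal. Two places a plain user-level fill cannot reach are the blocks reserved for root (about 5% by default) and the slack at the tail of existing files, and photorec will happily pull remnants out of both. Assuming the volume name from the question:

        # How many blocks are reserved for root (unreachable by a non-root fill)
        tune2fs -l /dev/mapper/home | grep -i 'reserved block count'

        # Fill the free space as root so the reserved blocks are overwritten too,
        # force it out to disk, then remove the fill file
        dd if=/dev/zero of=/home/fillitup bs=1M
        sync
        rm /home/fillitup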

    Read the article

  • Arduino IDE "launch 4j" error

    - by John
    I have a computer running Windows XP. I am trying to run the Arduino IDE 0022. I double-click on arduino.exe, it waits about 30 seconds on the load up title screen, and then it gives me this error: Launch 4j: an error occurred while starting the application My only choice is to click "OK"; the error goes away, and the Arduino IDE closes. If I try to delete the Arduino files (to try overwriting with some different files), I get an error that doesn't allow me to do so: Cannot delete awt.dll: Access denied Make sure the disk is not full or write protected and that the file is not currently in use. The only way to delete the file is by restarting the computer. So something must still be trying to run after that first error. I have noticed in Task Manager that some Java programs are still running: javaw.exe (3 processes) I think this is a problem with Java, but I checked and updated all of my Java software and it is all up to date. I have looked on other forums for this issue and none of them seemed to help. From the forums I have tried: Different Arduino IDE versions Updating Java Opening arduino.exe as Administrator Nothing has worked. Anyone have any suggestions?

    Read the article

  • nginx won't serve an error_page in a subdirectory of the document root

    - by Brandan
    (Cross-posted from Stack Overflow; could possibly be migrated from there.) Here's a snippet of my nginx configuration:

        server {
            error_page 500 /errors/500.html;
        }

    When I cause a 500 in my application, Chrome just shows its default 500 page (Firefox and Safari show a blank page) rather than my custom error page. I know the file exists because I can visit http://server/errors/500.html and I see the page. I can also move the file to the document root and change the configuration to this:

        server {
            error_page 500 /500.html;
        }

    and nginx serves the page correctly, so it doesn't seem like something else is misconfigured on the server. I've also tried:

        server {
            error_page 500 $document_root/errors/500.html;
        }

    and:

        server {
            error_page 500 http://$http_host/errors/500.html;
        }

    and:

        server {
            error_page 500 /500.html;
            location = /500.html {
                root /path/to/errors/;
            }
        }

    with no luck. Is this expected behavior? Do error pages have to exist at the document root, or am I missing something obvious?

    Update 1: This also fails:

        server {
            error_page 500 /foo.html;
        }

    when foo.html does indeed exist in the document root. It almost seems like something else is overwriting my configuration, but this block is the only place anywhere in /etc/nginx/* that references the error_page directive. Is there any other place that could set nginx configuration?

    Read the article

  • Iptables and system-config-firewall

    - by nivde92
    I had a set of netfilter rules set with iptables, but someone else told me to use system-config-firewall to add a rule for sharing files with Windows (Samba). This rewrote the iptables rules file and I lost my own custom rules. I have a backup copy, but am having trouble restoring them.

    Edit: The server is CentOS. I already tried to restore the rules with

        iptables-restore < /root/working.iptables.rules

    but for some reason the rules don't change.

    What are you trying to do? Trying to restore the iptables rules that I have in a backup file.
    What have you tried in order to make it happen? I've tried to modify the iptables file with vim, since the command iptables-restore was no help.
    What results did you expect? To get the old rules back.
    What actually happened? Nothing; when I run the command or edit the file by hand, the file doesn't change at all. Maybe something else is overwriting it.
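
    For the CentOS case above, a sketch of the usual flow, assuming the backup file is valid iptables-save output: the running rules live in the kernel, the iptables init script loads /etc/sysconfig/iptables at boot, and system-config-firewall rewrites that same file, so a restore has to both load the rules and write them back there, or they will be clobbered again.

        # Load the backed-up rules into the kernel
        iptables-restore < /root/working.iptables.rules

        # Verify what is actually loaded now
        iptables -L -n -v

        # Persist them so the iptables init script loads them at boot
        # (this rewrites /etc/sysconfig/iptables)
        service iptables save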

    Read the article
