Search Results

Search found 22515 results on 901 pages for 'created'.

  • Recover RAID 5 data after creating a new array instead of re-using the old one

    - by Brigadieren
    Folks, please help: I'm a newbie with a major headache on my hands (a perfect-storm situation). I have three 1 TB HDDs on my Ubuntu 11.04 box configured as software RAID 5. The data had been copied weekly onto a separate external hard drive until that drive failed completely and was thrown away. A few days back we had a power outage, and after rebooting my box wouldn't mount the RAID. In my infinite wisdom, I entered the mdadm --create -f... command instead of mdadm --assemble, and didn't notice the travesty I had committed until afterwards. The command started the array degraded and proceeded with building and syncing it, which took ~10 hours. After I was back, I saw that the array is successfully up and running, but the RAID contents are not: the individual drives are partitioned (partition type fd), but the md0 device has no recognizable filesystem. Realizing in horror what I had done, I am trying to find some solutions. I just pray that --create didn't overwrite the entire content of the drives. Could someone PLEASE help me out; the data on the array is very important and unique: ~10 years of photos, docs, etc. Is it possible that specifying the participating hard drives in the wrong order made mdadm overwrite them?

    When I run mdadm --examine --scan, I get something like:

        ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0

    Interestingly enough, the name used to be 'raid' and not the hostname with :0 appended. Here are the 'sanitized' config entries:

        DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1
        CREATE owner=root group=disk mode=0660 auto=yes
        HOMEHOST <system>
        MAILADDR root
        ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b

    Here is the output from mdstat:

        cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid5 sdd1[0] sdf1[3] sde1[1]
              1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
        unused devices: <none>

    fdisk shows the following:

        fdisk -l

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000bf62e

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *           1        9443    75846656   83  Linux
        /dev/sda2            9443        9730     2301953    5  Extended
        /dev/sda5            9443        9730     2301952   82  Linux swap / Solaris

        Disk /dev/sdb: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000de8dd

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1       91201   732572001   8e  Linux LVM

        Disk /dev/sdc: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00056a17

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1       60801   488384001   8e  Linux LVM

        Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000ca948

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes
        255 heads, 63 sectors/track, 152001 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/dm-0 doesn't contain a valid partition table

        Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x93a66687

           Device Boot      Start         End      Blocks   Id  System
        /dev/sde1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xe6edc059

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdf1               1      121601   976760001   fd  Linux raid autodetect

        Disk /dev/md0: 2000.4 GB, 2000401989632 bytes
        2 heads, 4 sectors/track, 488379392 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Per suggestions, I cleaned up the superblocks and re-created the array with the --assume-clean option, but with no luck at all. Is there any tool that will help me revive at least some of the data? Can someone tell me what mdadm --create does during the sync that destroys the data, so I can write a tool to undo whatever was done? After re-creating the RAID, I ran fsck.ext4 /dev/md0; here is the output:

        root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext4: Superblock invalid, trying backup blocks...
        fsck.ext4: Bad magic number in super-block while trying to open /dev/md0

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193

    Per Shane's suggestion, I tried:

        root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0
        mke2fs 1.41.14 (22-Dec-2010)
        Filesystem label=
        OS type: Linux
        Block size=4096 (log=2)
        Fragment size=4096 (log=2)
        Stride=128 blocks, Stripe width=256 blocks
        122101760 inodes, 488379392 blocks
        24418969 blocks (5.00%) reserved for the super user
        First data block=0
        Maximum filesystem blocks=0
        14905 block groups
        32768 blocks per group, 32768 fragments per group
        8192 inodes per group
        Superblock backups stored on blocks:
                32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
                4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
                102400000, 214990848

    and ran fsck.ext4 with every backup block, but all of them returned the following:

        root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0
        e2fsck 1.41.14 (22-Dec-2010)
        fsck.ext4: Invalid argument while trying to open /dev/md0

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>

    Any suggestions? Regards!
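
    For reference, the standard recovery attempt in this situation is to re-create the array with --assume-clean, which writes fresh superblocks but skips the destructive resync, and to test every member order until a read-only filesystem check recognizes the result. A minimal sketch, assuming the chunk size (512k) and metadata version (1.2) shown above match the original geometry; if at all possible, run it against dd images of the three partitions rather than the disks themselves:

        # --assume-clean avoids a resync and fsck.ext4 -n only reads, so the
        # data area stays untouched; note --create still rewrites superblocks,
        # and --run skips the "appears to be part of an array" confirmation.
        mdadm --stop /dev/md0
        for order in "sdd1 sde1 sdf1" "sdd1 sdf1 sde1" "sde1 sdd1 sdf1" \
                     "sde1 sdf1 sdd1" "sdf1 sdd1 sde1" "sdf1 sde1 sdd1"; do
            set -- $order
            mdadm --create /dev/md0 --assume-clean --run --level=5 --chunk=512 \
                  --metadata=1.2 --raid-devices=3 /dev/$1 /dev/$2 /dev/$3
            if fsck.ext4 -n /dev/md0 > /dev/null 2>&1; then
                echo "candidate member order: $order"
                break
            fi
            mdadm --stop /dev/md0
        done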

  • Cocoa Core Data: cannot save created items in NSTableView

    - by Paul Rostorp
    Hello, I'm a beginner in Mac OS X development and am trying to get started with all this. Here is my problem: I've created a non-document-based Cocoa app using Core Data as storage. I've added an entity and attributes to the xcdatamodel. In Interface Builder I created an NSArrayController and linked it properly, and created an NSTableView bound to that NSArrayController. Next I added a button linked to the NSArrayController's "add:" method. When I try it out, I can add and edit items in the table. Here comes the problem: Core Data is supposed to save everything automatically, but to make sure, I linked the "Save" menu item to the app delegate, to the File's Owner, the first responder, the application... everything possible (with both the "save:" and "saveAction:" methods). And still it doesn't save: when I restart, the rows I created (and renamed) are gone. Also, I haven't even edited the source code yet; for simple tasks like this, Core Data is supposed to need only Interface Builder. Please help me with this; I haven't found any threads resolving the problem. Thank you in advance.

  • jQuery autocomplete for dynamically created inputs

    - by Jamatu
    Hi all! I'm having an issue using jQuery autocomplete with dynamically created inputs (again, created with jQuery): I can't get autocomplete to bind to the new inputs.

    The autocomplete setup:

        $("#description").autocomplete({
            source: function(request, response) {
                $.ajax({
                    url: "../../works_search",
                    dataType: "json",
                    type: "post",
                    data: { maxRows: 15, term: request.term },
                    success: function(data) {
                        response($.map(data.works, function(item) {
                            return { label: item.description, value: item.description }
                        }))
                    }
                })
            },
            minLength: 2
        });

    Adding a new table row with inputs:

        var i = 1;
        var $table = $("#works");
        var $tableBody = $("tbody", $table);
        $('a#add').click(function() {
            var newtr = $('<tr class="jobs"><td><input type="text" name="item[' + i + '][quantity]" /></td><td><input type="text" id="description" name="item[' + i + '][works_description]" /></td></tr>');
            $tableBody.append(newtr);
            i++;
        });

    I'm aware that the problem is due to the content being created after the page has loaded, but I can't figure out how to get around it. I've read several related questions and come across the jQuery live method, but I'm still in a jam! Any advice?

  • Newly created document library and columns (via web services) are not visible in SharePoint

    - by Royson
    Hi, to create the columns I worked on this code, and for creating the document library:

        Lists listService = new Lists();
        listService.PreAuthenticate = true;
        listService.Credentials = new NetworkCredential(username, password, domain);
        String url = "http://YourServer/SiteName/";
        listService.Url = url + "_vti_bin/lists.asmx";
        XmlNode ndList = listService.AddList(NewListName, "Description", 101);

    Both work successfully. The problem I am facing is that the new columns and the document library are not visible. I tried comparing the Field values of visible and non-visible types. The difference I found is that the visible ones (created manually) don't contain a Version value, whereas the ones I create do. Can you help me out with this?

    EDIT: I checked the contents of the ndList node; the list is created, and it is visible in my UI. But in SharePoint it should be listed under the 'Documents' tab, where the default 'Shared Documents' library is shown. If I click on 'Documents', I can see all the libraries created by this code. By 'visible' I mean the library is displayed under the 'Documents' tab.

  • DataGridView not displaying a row after it is created

    - by joslinm
    Hi, I'm using Visual Studio 2010, and I just created a database using SQL Server CE. Within it I made a table, CSLDataTable, and that automatically created a CSLDataSet and a CSLDataTableTableAdapter. Three variables were automatically created in my MainWindow.cs class:

        cSLDataSet
        cSLDataTableTableAdapter
        cSLDataTableBindingSource

    I have added a DataGridView called dataGridView to my form, with cSLDataTableBindingSource as its data source. In MainWindow() I tried adding a row as a test:

        public MainWindow()
        {
            InitializeComponent();
            CSLDataSet.CSLDataTableRow row = cSLDataSet.CSLDataTable.NewCSLDataTableRow();
            row.File_ = "file";
            row.Artist = "artist11";
            row.Album = "album";
            row.Save_Structure = "save";
            row.Sent = false;
            row.Error = true;
            row.Release_Format = "release";
            row.Bit_Rate = "bitrate..";
            row.Year = "year";
            row.Physical_Format = "format";
            row.Bit_Format = "bitformat";
            row.File_Path = "File!!path";
            row.Site_Origin = "what";
            cSLDataSet.CSLDataTable.Rows.Add(row);
            cSLDataSet.AcceptChanges();
            cSLDataTableTableAdapter.Fill(cSLDataSet.CSLDataTable);
            cSLDataTableTableAdapter.Update(cSLDataSet);
            dataGridView.Refresh();
            dataGridView.Update();
        }

    With the DataSet methods I call, I had been trying to find the "correct" way to interact with the adapter, dataset, and data table to successfully show the row, but to no avail. I'm rather new to using a SQL Server CE database; I've read a lot of the MSDN pages and thought I was on the right track, but I've had no luck. The DataGridView shows the headers correctly, but the new row does not show up.

  • What happens when value types are created?

    - by Bob
    I'm developing a game using XNA and C# and was attempting to avoid calling new struct() type code each frame, as I thought it would freak the GC out. "But wait," I said to myself, "struct is a value type. The GC shouldn't get called then, right?" Well, that's why I'm asking here. I have only a very vague idea of what happens to value types. If I create a new struct within a function call, is the struct created on the stack? Will it simply get pushed and popped, with performance not taking a hit? Further, would there be memory limits or performance implications if, say, I need to create many instances in a single call? Take, for instance, this code:

        spriteBatch.Draw(tex, new Rectangle(x, y, width, height), Color.White);

    Rectangle in this case is a struct. What happens when that new Rectangle is created? What are the implications of having to repeat that line many times (say, thousands of times)? Is this Rectangle created, a copy sent to the Draw method, and then discarded (meaning no memory gets eaten up the more Draw is called in that manner in the same function)? P.S. I know this may be premature optimization, but I'm mostly curious and wish to have a better understanding of what is happening.

  • Table not created by Hibernate

    - by User1
    I annotated a bunch of POJOs so JPA can use them to create tables in Hibernate. It appears that all of the tables are created except one very central table called "Revision". The Revision class has an @Entity(name="RevisionT") annotation so it will be renamed to RevisionT and not conflict with any reserved words in MySQL (the target database). I delete the entire database, recreate it, and basically open and close a JPA session. All the other tables get recreated without a problem. Why would a single table be missing from the created schema? What instrumentation can be used to see what Hibernate is producing and which errors it hits? Thanks.

    UPDATE: I tried creating the schema in a Derby DB and it was successful. However, one of the fields ends up with the name "index". I use @org.hibernate.annotations.IndexColumn to specify a name that isn't a reserved word, but the column is always called "index" when it is created. Here's a sample of the suspect annotations:

        @ManyToOne
        @JoinColumn(name="MasterTopID")
        @IndexColumn(name="Cx3tHApe")
        protected MasterTop masterTop;

    Instead of creating MasterTop.Cx3tHApe as a field, it creates MasterTop.Index. Why is the name ignored?

  • How to access widgets created within a function from later function calls in Qt

    - by Inanepenguin
    Currently I have C++ code that creates a few QLabels, a QLineEdit, and a QCheckBox when a selection is made from a QComboBox. However, I would like to be able to access the widgets I have created in a later function, to destroy them if a new selection is made from the combo box. I can access objects created in the Designer by doing ui->Object, but I am not able to do that with objects created in my own code. In short, I would like to dynamically create and destroy QWidgets based on selections made by the user. Is there a reference or documentation I should know of to do this? Or am I just going about this completely the wrong way? Here is the code I presently have for creating the objects:

        if (eventType == QString::fromStdString("Birthday")) {
            QLabel *label1 = new QLabel("Celebrant: ");
            QLabel *label2 = new QLabel("Surprise: ");
            QLineEdit *lineEdit = new QLineEdit;
            QCheckBox *box = new QCheckBox;
            ui->gridLayout->addWidget(label1, 3, 0, 1, 1, 0);
            ui->gridLayout->addWidget(label2, 4, 0, 1, 1, 0);
            ui->gridLayout->addWidget(lineEdit, 3, 1, 1, 1, 0);
            ui->gridLayout->addWidget(box, 4, 1, 1, 2, 0);
        }

  • Unresolved compilation problems -- can't use .jar files that I have created

    - by Mike
    I created a few .jar files and am trying to access them in another application. I have tried both Eclipse and IntelliJ and get the same issue:

        java.lang.Error: Unresolved compilation problems:
            The import com.XXXX.XXXXXXXXX.project2 cannot be resolved
            The import com.XXXX.XXXXXXXXX.project2 cannot be resolved
            BeanFactory cannot be resolved to a type
            Author cannot be resolved to a type
            AuthorFactoryImpl cannot be resolved to a type
            Author cannot be resolved to a type
            Author cannot be resolved to a type

    I have been using Maven during this process, and the jars compile fine. I have included them on the class path both via the Maven .pom file and by assigning them directly. I have also unassigned the direct file path and left only the Maven reference, and vice versa; it makes no difference. The .jar file's class info is below.

    File structure:

        Author.java
        BeanWithIdentityInterface
        Books
        Subject

    The interface:

        package com.XXXX.training;

        /**
         * Created with IntelliJ IDEA.
         * User: kBPersonal
         * Date: 11/5/12
         * Time: 3:16 PM
         */
        public interface BeanWithIdentityInterface<I> {
            I getId();
        }

    Author.java:

        package com.XXXX.training;

        /**
         * Created with IntelliJ IDEA.
         * User: kBPersonal
         * Date: 10/25/12
         * Time: 12:03 PM
         */
        public class Author implements BeanWithIdentityInterface<Integer> {
            private Integer id = null;
            private String name = null;
            private String picture = null;
            private String bio = null;

            public Author(Integer id, String bio, String name, String picture) {
                this.id = id;
                this.bio = bio;
                this.name = name;
                this.picture = picture;
            }

            public Author() {}

            @Override
            public Integer getId() { return id; }
            public void setId(Integer id) { this.id = id; }
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getPicture() { return picture; }
            public void setPicture(String picture) { this.picture = picture; }
            public String getBio() { return bio; }
            public void setBio(String bio) { this.bio = bio; }

            @Override
            public String toString() {
                return "\n\tAuthor Id: " + this.getId() + " | Bio:" + this.getBio()
                        + " | Name:" + this.getName() + " | Picture: " + this.getPicture();
            }
        }

    The implementing servlet (truncated as posted):

        package com.acentia.training.project3.controller;

        import com.acentia.training.*;
        import com.acentia.training.project2.AuthorFactoryImpl;
        import com.acentia.training.project2.BeanFactory;

        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import java.io.IOException;
        import java.io.PrintWriter;

        /**
         * Created with IntelliJ IDEA.
         * User: kBPersonal
         * Date: 11/11/12
         * Time: 6:34 PM
         */
        public class ListAuthorServlet extends AbstractBaseServlet {

            private static final long serialVersionUID = -6934109551750492182L;

            public void doProcess(final HttpServletRequest request,
                                  final HttpServletResponse response) throws IOException {

                final BeanFactory<Author, Integer> authorFactory = new AuthorFactoryImpl();
                Author author = null;
                if (authorFactory != null) {
                    author = (Author) authorFactory.getMember(5);
                }

    I can't pull in the Author class. Any help would be greatly appreciated.

  • Restore a .bak file created on Windows XP Pro in Windows XP Pro x64

    - by Kobojunkie
    I have a situation that I need help with. I backed up my files on Windows XP using the system backup utility/wizard, and then installed a new operating system on the machine. Now I want to restore my old files via the .bak file, but it is not being recognized at all. Did I do this wrong, or is there a way to still get my old files back on my new OS? Thanks in advance!

  • Cannot log in with a newly created user in MySQL

    - by Brian G
    Using this command:

        GRANT ALL PRIVILEGES ON *.* TO brian@'%' IDENTIFIED BY 'password';

    I try to log in with:

        mysql -u brian -ppassword

    The error is:

        ERROR 1045 (28000): Access denied for user 'brian'@'localhost' (using password: YES)

    I am doing this as root, and I did flush privileges. I have tried this with countless users, but it does not seem to work. I can create a user with no password, and then login works, from both the command line and phpMyAdmin.
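
    One common culprit (an assumption here, since the mysql.user table isn't shown): MySQL matches the most specific host entry first, and default installs ship an anonymous ''@'localhost' account that takes precedence over 'brian'@'%' for local connections, so the login is checked against the wrong row. A minimal sketch of the usual fix:

        # check first with: SELECT user, host FROM mysql.user;
        # as root, drop the anonymous account that shadows 'brian'@'%' locally,
        # and add an explicit localhost entry for good measure
        mysql -u root -p <<'SQL'
        DROP USER ''@'localhost';
        CREATE USER 'brian'@'localhost' IDENTIFIED BY 'password';
        GRANT ALL PRIVILEGES ON *.* TO 'brian'@'localhost';
        FLUSH PRIVILEGES;
        SQL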

  • Why do weekly tasks created via PowerShell using a different user fail with error 0x41306

    - by Danny Tuppeny
    We have some scripts that create scheduled jobs using PowerShell as part of our application. When testing them recently, I noticed that some of them always failed immediately, and no output was ever produced (they don't even appear in the Get-Job list). After many days of tweaking, we've managed to isolate it to any jobs that are set to run weekly. Below is a script that creates two jobs that do exactly the same thing. When we run this on our domain and provide credentials of a domain user, then force both jobs to run in the Task Scheduler GUI (right-click, then Run), the daily one runs fine (0x0 result) and the weekly one fails (0x41306).

    Note: if I don't provide the -Credential parameter, both jobs work fine. The jobs only fail if the task is both weekly and running as this domain user. I can't find information on why this is happening, nor think of any reason it would behave differently for weekly jobs. The "History" tab in the Task Scheduler has almost no useful information, just "Task stopping due to user request" and "Task terminated", neither of which carries useful detail:

        Task Scheduler terminated "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" instance of the
        "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" task.

        Task Scheduler stopped instance "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" of task
        "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" as request by user "MyDomain\SomeUser".

    What's up with this? Why do weekly tasks run differently, and how can I diagnose this issue? This is PowerShell v3 on Windows Server 2008 R2. I've been unable to reproduce this locally, but I don't have a user set up the same way as the one in our production domain (I'm working on this, but I wanted to post ASAP in the hope someone knows what's happening!).

        Import-Module PSScheduledJob
        $Action = { "Executing job!" }
        $cred = Get-Credential "MyDomain\SomeUser"

        # Remove previous versions (to allow re-running this script)
        Get-ScheduledJob Test1 | Unregister-ScheduledJob
        Get-ScheduledJob Test2 | Unregister-ScheduledJob

        # Create two identical jobs, with different triggers
        Register-ScheduledJob "Test1" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Weekly -At 1:25am -DaysOfWeek Sunday)
        Register-ScheduledJob "Test2" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Daily -At 1:25am)

  • Newly created Windows 7 Libraries keep disappearing

    - by Sean
    I've just got a new laptop that came pre-installed with Windows 7 Professional. One of the new features of Windows 7 is Libraries. I'm familiar with how they work and am trying to create my own library called 'Work' to include all the work folders on my computer. However, every time I create a new custom library, it disappears from my Libraries menu after I rename it. Each time I click Libraries in Explorer, I keep seeing the same four default libraries: Documents, Pictures, Music, and Videos. So when I try to create a new library called 'Work' again, I get a pop-up message asking "Do you want to rename New Library to Work (2).library-ms?", which means my original Work library still exists, but for some reason I can't see it. Can someone please help me figure out why this is happening?

  • Created new torrent but client unable to find seeds

    - by ehfeng
    I tried creating a new torrent using uTorrent, following the directions from TorrentFreak (http://torrentfreak.com/how-to-create-a-torrent/). I used a bunch of trackers (just in case any individual one was having trouble), and uTorrent shows the torrent as "seeding" with 100% of the file. It shows seeds 0(1), peers 0(0). The trackers:

        http://exodus.desync.com/announce
        http://eztv.tracker.prq.to/announce
        http://open.tracker.thepiratebay.org/announce
        http://www.torrent-downloads.to:2710/announce
        http://denis.stalker.h3q.com:6969/announce
        udp://denis.stalker.h3q.com:6969/announce
        http://www.sumotracker.com/announce

    I've also attempted to remove the torrent and then re-add it, pointing uTorrent to the already existing files, forcing it to recheck the files and then begin seeding. Yet when I share the .torrent file with my other computer and attempt to download, it is unable to find any peers or seeds. To confirm that both clients are working, I downloaded a torrent found on isoHunt on both computers, using their respective clients. This tells me that there must be something wrong with how I'm creating my torrents. Any help is appreciated.

  • Should Windows services be created with custom users, or should I use one of LocalSystem/LocalService/NetworkService?

    - by Justin Dearing
    I'm asking the question in general for the average custom-developed NT service or Unix OSS daemon ported to Windows with SCM support. However, at the moment my immediate concern is MongoDB. From my experience with Unix, I like all my services to run as different unprivileged users. The way this has translated to Windows is as follows:

    1. Create a local (or domain, if it has to talk to SQL Server) Windows user with a long random password (lately an ASCII85-encoded GUID generated on a different machine).
    2. Set the password to never expire and forbid the user from changing it.
    3. Remove that user from the "Users" group.
    4. Grant that user the "Log on as a service" right.
    5. Give it read permission to the folder where the app resides, and write permission to the logs and data files the application uses.
    6. Assign the user to the service.
    7. Troubleshoot until the service starts.

    My feeling is that these unprivileged users are less powerful than the three special service users. I also feel that by isolating which users run which services, I would limit collateral damage if a way to compromise one service were found.

  • Trying to understand why VLANs need to be created on intermediate switches

    - by Jon Reeves
    I'm currently studying for the Cisco switching exam and having trouble understanding exactly how 802.1Q tagging works. Given three daisy-chained switches (A, B, and C) with trunk ports between them and VLAN 101 defined on both end switches (A and C), I'm not sure why the VLAN also needs to be defined on the middle one (B). Note that I am not disputing that it does need to be configured; I'm just trying to understand why. As I understand it, traffic from VLAN 101 on switch A will be tagged as it goes through the trunk to switch B. According to the documentation I have read, trunks pass all VLANs by default, and the 802.1Q tag is only removed when the frame leaves through an access port on the relevant VLAN. From this I would expect switch B to simply forward the tagged frame unchanged through the trunk to switch C. Can anyone shed some light on how switch B processes this frame and why it does not get forwarded through the other trunk?

  • How to restore an OS from an image created by Macrium Reflect

    - by user23950
    Can you recommend other OS imaging software that you use, if you haven't used Macrium Reflect? And how do I restore the OS from such an image? Also, which is faster: reinstalling the OS and then installing the applications you need, or using the imaging software to back up and restore the installation along with the applications?

  • What's created this key combo?

    - by user73784
    I've recently upgraded my iMac to OS X Mavericks. I'm finding that when I press Control-Shift-N, something immediately locks my screen and makes it dark. I can still hear my streaming audio playing, so I guess it's not logging me out. I've looked carefully through the list of keyboard shortcuts in System Preferences, and that key combo isn't mentioned anywhere. Is there any place I can get a list of all active keyboard shortcuts? Is there any terminal command I can run to see which application has taken over this keypress combination? It's really annoying, because I habitually use that combo in PHPStorm! (And yes, I have checked the keymaps there too.)

  • Temp files created in every folder in Windows Server 2003

    - by i.h4d35
    We have some folders which are shared over the AD domain (Windows Server 2003). It was just noticed that in two of those folders (which contain only Excel and Word files), whenever a file is opened and closed, the temporary file created alongside it remains behind. Apparently this has been going on for the past couple of years, which has led to an insane number of temp files in each folder/subfolder under those shares. These shared folders are on the D: drive, not the C: drive. Only one group (containing two users) accesses the folders in question. I cannot work out whether this has to do with the settings/permissions of the user, the group, or the individual client machines. For now, I have manually deleted all the temp files from each folder and subfolder. While this is not critical at the moment, I'd still like to clear it up. Also, it takes an additional fraction of a second to open folders that contain more than 10,000 temp files. Thanks in advance.

  • A "region code" restriction for a custom created video dvd file

    - by user180820
    I want to create a video DVD (no menus, just "plug and play") from a few video files. I'm doing it like this:

        ffmpeg -i sample-media/hellboy-2.wmv -y -target ntsc-dvd sample-media-to-mpeg/hellboy-2.vob
        dvdauthor -o sample-dvd -x dvdauthor-settings.xml
        mkisofs -dvd-video -o hellboy-2-trailers.iso sample-dvd/

    where "dvdauthor-settings.xml" is: link. But when I try to play the ISO file in Windows, it says:

        Windows Media Player cannot play the DVD because the disc prohibits playback in
        your region of the world. You must obtain a disc that is intended for your
        geographic region.

    When I open the *.IFO file with IfoEdit, it says that all world regions are enabled. Can someone tell me why this is happening? (Maybe the whole process of creating the *.iso file is wrong?)
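
    One place worth looking (this offset is an assumption based on common DVD-structure references, so verify it against what IfoEdit displays before writing anything): the region mask is a byte in the VMG category field near the start of VIDEO_TS.IFO, where a set bit prohibits a region and 0x00 permits them all. A sketch for inspecting and, if needed, clearing it on the authored tree before re-running mkisofs:

        # show the category/region bytes (the region mask is commonly byte 0x23)
        xxd -s 0x22 -l 2 sample-dvd/VIDEO_TS/VIDEO_TS.IFO

        # clear the region mask (offset 0x23 = 35); conv=notrunc patches in place
        printf '\x00' | dd of=sample-dvd/VIDEO_TS/VIDEO_TS.IFO \
            bs=1 seek=35 count=1 conv=notrunc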

  • Newly created MegaCli virtual drive doesn't appear under /dev/sdX

    - by Henry-Nicolas Tourneur
    After having successfully added two new disks in a new RAID virtual drive (background initialization done), I would have expected it to appear as /dev/sdh, but it's not there (so it's unusable). The system is running CentOS 5.2 64-bit, the HAL and udev daemons are running, and there is no record of any sdh appearing in the message log or in dmesg. Only MegaCli sees the virtual drive. Any idea? Some data:

        [root@server ~]# ./MegaCli -LDInfo -LALL -a0

        Adapter 0 -- Virtual Drive Information:
        Virtual Disk: 0 (target id: 0)
        Name:
        RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
        Size:139392MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:2
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default

        Virtual Disk: 1 (target id: 1)
        Name:
        RAID Level: Primary-1, Secondary-0, RAID Level Qualifier-0
        Size:285568MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:2
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default

        [root@server ~]# ls -l /dev/disk/by-id/scsi-360*
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09 -> ../../sda
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part1 -> ../../sda1
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36001ec90f82fe100108ca0a704098d09-part2 -> ../../sda2
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee -> ../../sdf
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe07e78f94940c0000a0ee-part1 -> ../../sdf1
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f -> ../../sdb
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fe972a3f91240a0000005f-part1 -> ../../sdb1
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec -> ../../sde
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fea7e18f94640c000020ec-part1 -> ../../sde1
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d -> ../../sdd
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0feb7da8f94340c0000203d-part1 -> ../../sdd1
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7 -> ../../sdc
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a028e0fed7d78f94040c000080b7-part1 -> ../../sdc1
        lrwxrwxrwx 1 root root  9 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1 -> ../../sdg
        lrwxrwxrwx 1 root root 10 Nov 17 2010 /dev/disk/by-id/scsi-36090a05830145e58e0b9c479000010a1-part1 -> ../../sdg1
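
    In case it helps others: MegaCli only creates the virtual drive on the controller; the kernel still has to discover it, and on CentOS 5 a drive created after boot usually needs a manual SCSI rescan before a /dev/sdX node appears. A minimal sketch (it loops over all hosts because which scsi_host belongs to the controller varies; check ls /sys/class/scsi_host):

        # ask each SCSI host to rescan its bus for new targets
        for host in /sys/class/scsi_host/host*; do
            echo "- - -" > "$host/scan"
        done

        # the new virtual drive should be logged and assigned a node (e.g. sdh)
        dmesg | tail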

  • Incorrect durations in mp4 files created by ffmpeg (avconv)

    - by Ruslan Sharipov
    Example usage:

        avconv -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af -vcodec copy -an -sn -map 0 -f segment -segment_format mp4 -segment_time 60 -y %05d.mp4

        avconv version 0.8.3-6:0.8.3-1+b1, Copyright (c) 2000-2012 the Libav developers
          built on Jun 15 2012 13:54:35 with gcc 4.7.0
        HandShake: client signature does not match!
        Metadata:
          height        480.00
          remote_addr:  sdp_session {sdp_session,0, {sdp_o,"-","1289703354974145","1289703354974145",inet4, "10.1.12.99"}, "Media Presentation", {inet4,"0.0.0.0"}, {0,0}, [{"control","*"},{"range","npt=0.0
          start         30400239.52
          timeshift_duration 319250.58
          timeshift_size 120000.00
          width         640.00
        [flv @ 0x1d36a40] Estimating duration from bitrate, this may be inaccurate
        Input #0, flv, from 'rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af':
          Duration: N/A, start: 0.000000, bitrate: N/A
            Stream #0.0: Video: h264 (Baseline), yuvj420p, 640x480 [PAR 1:1 DAR 4:3], 1k tbr, 1k tbn, 2k tbc
        Output #0, segment, to '%05d.mp4':
          Metadata:
            encoder : Lavf53.21.0
            Stream #0.0: Video: libx264, yuvj420p, 640x480 [PAR 1:1 DAR 4:3], q=2-31, 1k tbn, 1k tbc
        Stream mapping:
          Stream #0:0 -> #0:0 (copy)
        Press ctrl-c to stop encoding
        ^Cframe= 9566 fps= 36 q=-1.0 Lsize= -0kB time=318.25 bitrate= -0.0kbits/s
        video:30348kB audio:0kB global headers:0kB muxing overhead -100.000071%
        Received signal 2: terminating.

    Result:

        serafim@yard:~/video2$ ls
        00000.mp4  00001.mp4  00002.mp4  00003.mp4  00004.mp4  00005.mp4

    Now try to play the files in a player such as VLC. Here's what we get: the first fragment (00000.mp4) plays well, no problems, but from the second one (00001.mp4 and beyond) the bug manifests itself: the first 60 seconds of 00001.mp4 are a black screen, and the video only starts playing at second 61. Attachments: https://dl.dropbox.com/u/760901/rtmp_and_mp4.zip How do I get rid of the delay with the black screen at the beginning of the segments? Can ffmpeg be passed extra parameters for this, or is there third-party software able to correct the segmented mp4 files?
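
    For what it's worth, the symptom matches a timestamp issue rather than bad encoding: with -vcodec copy, every segment after the first keeps the absolute timestamps of the live source, so a player sits on a black screen until it reaches the segment's real start offset (60 s for 00001.mp4, and so on). Newer builds of the segment muxer can zero the timestamps per segment; whether this avconv 0.8 build has that option is an assumption to check (ffmpeg -h muxer=segment on a recent ffmpeg). A minimal sketch:

        # -reset_timestamps 1 makes every segment start at t=0 instead of
        # carrying the running timestamp of the live source
        ffmpeg -i rtmp://maps.lo.ufanet.ru/live/10e227922b473e91f37474fa084107af \
               -c copy -an -sn -map 0 \
               -f segment -segment_format mp4 -segment_time 60 \
               -reset_timestamps 1 -y %05d.mp4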

  • Can't access newly created Subversion repos

    - by Jean-François G. B.
    Sorry in advance, I'm pretty new to server configuration. I followed this tutorial to install Subversion on my CentOS server. I'm at the part where I should test the URL to make sure I can access it and that it's password protected, but it's not working: I can't access it at all. What is wrong? Is there some config missing? I don't know what further details to give, but if you need some, please ask! :) Thanks in advance.
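
    Hard to diagnose without the config, but a few checks usually narrow it down. A sketch assuming the layout most CentOS tutorials use (a repository at /var/www/svn/repos served by Apache's mod_dav_svn; both paths are assumptions to adjust):

        # is the repository itself intact?
        svnadmin verify /var/www/svn/repos

        # Apache must own the repository files it serves
        chown -R apache:apache /var/www/svn/repos

        # the actual failure (bad AuthUserFile path, missing module, SELinux
        # denial) is normally spelled out in the error log
        tail -n 20 /var/log/httpd/error_log

        # test from the server itself to rule the firewall out
        curl -u youruser http://localhost/svn/repos/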

  • Access to a self-created torrent on a public tracker

    - by Nick
    Not sure if this is the right site to ask this, but here goes: let's say I'd like to share a couple of private files with a few friends. They are quite large, so I've figured the best way to distribute them is via torrent. So, on my home PC I create a torrent, start seeding, and announce to public trackers like OpenBitTorrent and PublicBT. Now, both of those are public trackers, but they don't seem to have any way of searching through what is actually being tracked. If I'm only passing the .torrent file around to a few friends, what are the chances that someone else will randomly come across the torrent via the public tracker and start leeching?

  • Folder default ACLs not inherited when new file is created

    - by Flavien
    I'm a bit of a beginner with Unix systems, but I'm running Cygwin on my Windows server, and I am trying to figure out something related to extended ACLs. I have a directory on which I set the following ACLs:

        Administrator@MyServer ~
        $ setfacl -m d:u:Someuser:r-- somedir

        Administrator@MyServer ~
        $ getfacl somedir/
        # file: somedir/
        # owner: Administrator
        # group: None
        user::rwx
        group::r-x
        mask:rwx
        other:r-x
        default:user::rwx
        default:user:Someuser:r--
        default:group::r-x
        default:mask:rwx
        default:other:r-x

    As you can see, most of the default ACLs have the x bit. But when I create a file in it, the file doesn't inherit the ACLs it is supposed to:

        Administrator@MyServer ~
        $ touch somedir/somefile

        Administrator@MyServer ~
        $ getfacl somedir/somefile
        # file: somedir/somefile
        # owner: Administrator
        # group: None
        user::rw-
        user:Someuser:r--
        group::r--
        mask:rwx
        other:r--

    It's basically missing the x bit everywhere. Any idea why?
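
    A plausible explanation, stated as an assumption since Cygwin adds its own wrinkles: the named default entry is in fact inherited (note the user:Someuser:r-- line in the second getfacl output), and what strips the x bits is the mode the creating program requests. touch(1) asks for mode 0666, and a new file's access ACL is filtered through that requested mode, so every execute bit gets masked off. A directory created in the same place shows the difference, because mkdir requests 0777:

        # files: touch requests 0666, so the inherited x bits get masked off
        touch somedir/somefile
        getfacl somedir/somefile   # user:Someuser:r-- present, no x anywhere

        # directories: mkdir requests 0777, so the x bits survive intact
        mkdir somedir/subdir
        getfacl somedir/subdir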
