Search Results

Search found 215 results on 9 pages for 'dfs r'.

Page 6/9 | < Previous Page | 2 3 4 5 6 7 8 9  | Next Page >

  • Constructing a tree using Python

    - by stealthspy
    I am trying to implement unranked boolean retrieval. For this, I need to construct a tree and perform a DFS over it to retrieve documents. I have the leaf nodes, but I am having difficulty constructing the tree. E.g., query = OR(AND(maria sharapova) tennis). Result:

              OR
             /  \
           AND   tennis
          /   \
       maria   sharapova

    I traverse the tree using DFS and evaluate the boolean combination of the document ids to identify the required documents from the corpus. Can someone help me with the design of this in Python? I have parsed the query and retrieved the leaf nodes for now.
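
    For illustration, here is a minimal Python sketch of one way to structure this (the node layout and the toy posting sets are assumptions, not from the original post): each leaf holds a term, each internal node an operator, and a post-order DFS evaluates the tree with set operations over document ids.

        class Node:
            """A query-tree node: an operator (with children) or a leaf term."""
            def __init__(self, op=None, term=None, children=None):
                self.op = op                    # "AND" / "OR" for internal nodes
                self.term = term                # query term for leaf nodes
                self.children = children or []

        def evaluate(node, postings):
            """Post-order DFS: combine the children's posting sets."""
            if node.op is None:                 # leaf: look up the term's doc ids
                return postings.get(node.term, set())
            sets = [evaluate(child, postings) for child in node.children]
            return set.intersection(*sets) if node.op == "AND" else set.union(*sets)

        # OR(AND(maria sharapova) tennis) against a toy inverted index:
        postings = {"maria": {1, 2}, "sharapova": {2, 3}, "tennis": {4}}
        tree = Node(op="OR", children=[
            Node(op="AND", children=[Node(term="maria"), Node(term="sharapova")]),
            Node(term="tennis"),
        ])
        print(evaluate(tree, postings))         # {2, 4}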

    Read the article

  • Troubleshooting extremely slow opening times in Win7 for documents on Win2k8 server

    - by Mazupan
    Hello. It's hard even to describe my problem. There only seems to be a problem with extremely slow openings (up to 10 minutes) on Windows 7 (on XP things work fine) for files that are stored on Windows Server 2008. Here is what I have discovered so far. If I open (some files, not all, not always) .doc and .xls files by double-clicking, it takes up to 10 minutes to finally open the file. During that time, the file seems to be locked for all other users. If I cancel opening, the file remains locked for some time. The owner of those files is the one who last wrote changes to them. If I change the owner to a larger group which I am a member of, the file gets opened super fast. Once opened, the file can be saved normally and quickly, and it reopens fast. One other user reports that there is only a problem when opening files for the first time in a day: once he opens the first file, he has no problems with any other files (or so he says). He also states that when accessing files from home via VPN he has no such problems with files. And now: does anybody have a clue where to start looking? I suspect it is a misconfiguration problem. But where? File system? Permissions? DFS? VMware network config? My setup is as follows: Physical server: HP ProLiant ML350 G6. Virtual host: VMware ESXi 4. Guest: Windows Server 2008 Standard. Files are accessed via DFS shares. Please help me. Thanks. Mazupan

    Read the article

  • Users are getting a temporary profile

    - by Serhiy
    A bit about the current setup: we have Windows 2008 R2 AD servers (all of them are 2008 R2) and a couple of locations configured as Sites. Each location has DFS on its AD server. Roaming profiles are not used or configured. Users have their home folder configured as a mapped S: drive pointing to a DFS shared folder. For example, in the profile tab a user has: Home Folder - connect - S: to \\domain.com\dc\users\%username%. We have also redirected the Desktop, Documents and Downloads folders to \\domain.com\dc\users. Everything was fine. Suddenly (today), users in most locations lost their local profiles (on both XP and W7 desktops) and got temporary profiles. It also looks like the local profile was created today (judging from the folder properties). I checked events on a couple of machines and there are no errors related to profiles or the logon process, and I do not see issues in the event logs on the servers either. Basically, I have run out of ideas about what is wrong and why the machines lost their local profiles. PS: Laptop users do not have their folders redirected, but they lost their profiles as well.

    Read the article

  • Alias for Drupal "Sites" folder with Apache on Windows Server 2008

    - by sgtbeano
    I'm having to move a number of sites from a LAMP stack to a WAMP one, provided by Zend, and I've hit a problem. Our architecture is a number of load-balanced web servers which have their own local webapp drives kept in sync, with one server acting as the master copy. There is then a separate DFS share provided to all web servers from our Pillar SAN. Under our LAMP cluster, a Drupal install would normally have the main Drupal web app in a local HTDOCS mount on each server, and the SITES directory within Drupal would then be symlinked out to the DFS or NFS share so that there is a common FILES and TMP directory. The problem I'm having is that there seems to be no equivalent of symlinks on Windows Server 2008; shortcuts have a .lnk extension, making Apache see them as distinct files. So I've tried using an Alias call in the vhost file, like this:

        <Location /drupal-626/sites>
            Order deny,allow
            Allow from all
        </Location>
        Alias /drupal-626/sites "Z:\Path to alternate sites directory"

    The root for this test is http://main-domain-url/drupal-626/. Unfortunately this isn't working, so I'm wondering if any of you have a solution which would work? Many thanks for taking the time to read this.

    Read the article

  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties.

    What we need:
    - HA
    - a scalable storage backend
    - offline/disconnected operation on the client, to account for network outages
    - cross-platform access
    - client-side access from Windows certainly (probably XP upwards), possibly Linux
    - a backend that integrates with AD/LDAP (permission management: user/group management, ...)
    - should work reasonably well over slow WAN links

    Another problem is that we don't really know all possible use cases here, i.e. whether people need concurrent access to shared files or will only be accessing their own files, so a possible solution needs to account for concurrent access and for what conflict management would look like from a user's point of view. A two-year-old blog post sums up the impression I have been getting during the last couple of days of research: there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions, but none that supports disconnected operation nicely and natively. I am hoping that we have missed an obvious solution.

    What we have tried:

    OpenAFS. We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week, but there are several problems with it: it's a real pain to set up; there are no official RHEL/CentOS packages; the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, which is an absolute no-go; and Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8; the current client for the 1.7 branch does, but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7. Suffice to say, we couldn't get it working, and the whole setup has been so unstable and complicated to set up that it's just not an option for production.

    Samba + Unison. Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation, we went for a simpler idea that would sync files against a Samba server using Unison. Samba integrates with AD (it's a pain, but it can be done) and solves the problem of remotely accessing the storage from Windows, but it introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need an HA Samba setup on top of it to maintain HA, which probably adds a lot of additional complexity; I vaguely remember trying to implement redundancy with Samba before, and I could not silently fail over between servers. Moreover, even when online you are working with local files, which will result in more conflicts than would be necessary if a local cache were only touched when disconnected. And it's not automatic: we cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way. On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well, or even at all.

    Samba + "Offline Files". After that we became a little desperate and gave Windows "Offline Files" a chance. We figured that having something built into the OS would reduce administrative effort, would help with blaming someone else when it's not working properly, and should just work, since people have been using it for years. Right? Wrong. We really wanted it to work, but it just doesn't. Thirty minutes of copying files around and unplugging network cables/disabling network interfaces left us with undeletable files on the server (!) and conflicts that should not even be conflicts, and silently so: there is only a tiny notification in the Windows Explorer status bar, which doesn't even open Sync Center if you click on it! In the end we had one successful sync of a tiny text file; everything else just exploded horribly. Beyond that, there are other problems: Microsoft admits that "Offline Files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all, which would mean those files become unavailable if the connection drops; and in Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions.

    Summary: unless there is another fault-tolerant DFS that supports Windows natively, I assume that stacking an HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?

    Read the article

  • GRAPH PROBLEM: find an algorithm to determine the shortest path from one point to another in a rectangular maze

    - by newba
    I'm getting such a headache trying to come up with an appropriate algorithm to go from a START position to an EXIT position in a maze. For what it's worth, the maze is rectangular, max size 500x500, and, in theory, is solvable by DFS with some branch-and-bound techniques...

        10 3 4 7 6 3 3 1 2 2 1 0 2 2 2 4 2 2 5 2 2 1 3 0 2 2 2 2 1 3 3 4 2 3 4 4 3 1 1 3 1 2 2 4 2 2 1

        Output: 5 1 4 2

    Explanation: our agent loses energy with every step he takes, and he can only move UP, DOWN, LEFT and RIGHT. Also, if the agent arrives with a remaining energy of zero or less, he dies, so we print something like "Impossible". So, in the input, 10 is the agent's initial energy, 3 4 is the START position (i.e. column 3, line 4), and we have a 7x6 maze. Think of this as a kind of labyrinth in which I want to find the exit that leaves the agent with the best remaining energy (shortest path). In case there are paths which lead to the same remaining energy, we choose the one with the smaller number of steps, of course. I need to know whether a DFS on a 500x500 maze is feasible in the worst case with these limitations, and how to do it while storing the remaining energy at each step and the number of steps taken so far. The output means the agent arrived at exit position 1 4 with remaining energy 5 in 2 steps. If we look carefully, in this maze it's also possible to exit at position 3 1 (column 3, row 1) with the same energy but in 3 steps, so we choose the better one. With this in mind, can someone help me with some code or pseudo-code? I'm having trouble working this out with a 2D array and how to store the remaining energy and the path (or the number of steps taken)...
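
    As a starting point, here is a minimal Python sketch of the usual technique for this kind of search: a Dijkstra-style best-first search over grid cells ordered by (energy spent, steps), which finds the exit with the most remaining energy and, on ties, the fewest steps. The cost model (each move costs the value of the cell stepped onto) and the exit test (any border cell) are assumptions, since the original post doesn't spell them out; adjust both to the real rules.

        import heapq

        def best_exit(energy, start, grid):
            """Return (remaining_energy, exit_position, steps), or None if impossible."""
            rows, cols = len(grid), len(grid[0])
            heap = [(0, 0, start)]            # (energy_spent, steps, (row, col))
            seen = set()
            while heap:
                spent, steps, (r, c) = heapq.heappop(heap)
                if (r, c) in seen:
                    continue
                seen.add((r, c))
                if r in (0, rows - 1) or c in (0, cols - 1):
                    return energy - spent, (r, c), steps   # first exit popped is best
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                        nspent = spent + grid[nr][nc]
                        if nspent < energy:   # the agent must arrive with energy left
                            heapq.heappush(heap, (nspent, steps + 1, (nr, nc)))
            return None                       # print "Impossible"

    On a 500x500 grid this visits each cell at most once, so it is feasible where a plain DFS over all paths would not be.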

    Read the article

  • FreeNAS and AD authentication on Windows 2008 R2

    - by FrancisV
    Has anyone successfully used AD authentication with the latest version of FreeNAS against Windows 2008 R2 domain controllers? I wanted to use FreeNAS to host files and share them via CIFS, but I couldn't make FreeNAS authenticate against a Windows 2008 R2 domain controller. Ultimately, the new CIFS shares will be referenced in the DFS namespace that we already have running on Windows 2008 R2 servers. Any tips you can share with me?

    Read the article

  • Solaris 10 x86 mirror: making the second disk bootable on failure

    - by Kani
    Did a mirror (RAID1) with Solaris 10 on x86. Everything OK. Now I'm trying to make the second disk bootable, that is: from GRUB, or in case disk1 fails. I edited /boot/grub/menu.lst:

        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris 10 9/10 s10x_u9wos_14a X86
        findroot (rootfs1,0,a)
        kernel /platform/i86pc/multiboot
        module /platform/i86pc/boot_archive
        #---------------------END BOOTADM--------------------
        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris failsafe
        findroot (rootfs1,0,a)
        kernel /boot/multiboot -s
        module /boot/amd64/x86.miniroot-safe
        #---------------------END BOOTADM--------------------
        #---------- ADDED BY BOOTADM - DO NOT EDIT ----------
        title Solaris failsafe
        findroot (rootfs1,0,a)
        kernel /boot/multiboot kernel/unix -s
        module /boot/x86.miniroot-safe
        #---------------------END BOOTADM--------------------
        #Make second disk booteable!!!!!!!
        title alternate boot
        findroot (rootfs1,1,a)
        kernel /boot/multiboot kernel/unix -s
        module /boot/x86.miniroot-safe

    But it is not working: in the BIOS, when I select "alternate boot", I get "Error 15: File not found". Also, how do I configure GRUB to boot from disk2 in case of an error on disk1? Additionally, I did this (though it is not related to GRUB):

        eeprom altbootpath=/devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a

    Here is the output of some commands that may help you:

        /sbin/biosdev
        0x80 /pci@0,0/pci108e,5352@1f,2/disk@0,0
        0x81 /pci@0,0/pci108e,5352@1f,2/disk@1,0

        ls -l /dev/dsk/c1t?d0s0
        lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t0d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@0,0:a
        lrwxrwxrwx 1 root root 50 Jul 7 12:01 /dev/dsk/c1t1d0s0 -> ../../devices/pci@0,0/pci108e,5352@1f,2/disk@1,0:a

        more /boot/solaris/bootenv.rc
        setprop ata-dma-enabled '1'
        setprop atapi-cd-dma-enabled '0'
        setprop ttyb-rts-dtr-off 'false'
        setprop ttyb-ignore-cd 'true'
        setprop ttya-rts-dtr-off 'false'
        setprop ttya-ignore-cd 'true'
        setprop ttyb-mode '9600,8,n,1,-'
        setprop ttya-mode '9600,8,n,1,-'
        setprop lba-access-ok '1'
        setprop prealloc-chunk-size '0x2000'
        setprop bootpath '/pci@0,0/pci108e,5352@1f,2/disk@0,0:a'
        setprop keyboard-layout 'US-English'
        setprop console 'text'
        setprop altbootpath '/pci@0,0/pci108e,5352@1f,2/disk@1,0:a'

        cat /etc/vfstab
        #device             device              mount               FS      fsck  mount    mount
        #to mount           to fsck             point               type    pass  at boot  options
        #
        fd                  -                   /dev/fd             fd      -     no       -
        /proc               -                   /proc               proc    -     no       -
        #/dev/dsk/c1t0d0s1  -                   -                   swap    -     no       -
        /dev/md/dsk/d1      -                   -                   swap    -     no       -
        /dev/md/dsk/d0      /dev/md/rdsk/d0     /                   ufs     1     no       -
        /devices            -                   /devices            devfs   -     no       -
        sharefs             -                   /etc/dfs/sharetab   sharefs -     no       -
        ctfs                -                   /system/contract    ctfs    -     no       -
        objfs               -                   /system/object      objfs   -     no       -
        swap                -                   /tmp                tmpfs   -     yes      -

        df -h
        Filesystem                      size  used  avail  capacity  Mounted on
        /dev/md/dsk/d0                  909G  11G   889G   2%        /
        /devices                        0K    0K    0K     0%        /devices
        ctfs                            0K    0K    0K     0%        /system/contract
        proc                            0K    0K    0K     0%        /proc
        mnttab                          0K    0K    0K     0%        /etc/mnttab
        swap                            14G   972K  14G    1%        /etc/svc/volatile
        objfs                           0K    0K    0K     0%        /system/object
        sharefs                         0K    0K    0K     0%        /etc/dfs/sharetab
        /usr/lib/libc/libc_hwcap1.so.1  909G  11G   889G   2%        /lib/libc.so.1
        fd                              0K    0K    0K     0%        /dev/fd
        swap                            14G   40K   14G    1%        /tmp
        swap                            14G   28K   14G    1%        /var/run

    Read the article

  • MSMQ Resilience

    - by Paddy Carroll
    I have a requirement for a resilient MSMQ setup on VMware ESXi 5. I am aware that we cannot allow the queue storage to be shared, as it must be installed on a physical disk mount; e.g., it can't be a CIFS or DFS share. The following constraints apply: we don't use Windows clustering, and we don't rely on hot standbys. Is there a way I can replicate the queue storage to another platform so that it can assume MSMQ duties on failure of the primary platform, using any method including queue forwarding?

    Read the article

  • Is real-time or synchronous replication possible over WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the exact same SAN setup. The idea is to have those two SANs contain exactly the same data at all times, which will allow us to work with the same data pool, and which will also provide us with an offsite backup solution should a failure occur on either end. We're running Server 2008. The objective is to enable users in the east coast office to work on files and have those changes instantly updated on the Colorado SAN as well. We also need file locking so that there will be no conflicts or overwritten changes if users attempt to work on the same file. Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull this off? As I understand it, DFS-R does not provide file locking, so if we used that, we would need to go with a third-party product like PeerLock. But I don't even know if DFS-R is an option: can it replicate quickly enough over a WAN link? Can any product? It seems that if we were to use synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at? There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light. If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software we should check out.

    Read the article

  • How to set WINS through GPO?

    - by Robert
    I'm looking to set our WINS servers via GPO; is that possible? We have 2 SSIDs our clients can connect to (one connects to a DHCP server we control, the other to a different DHCP server we don't, each with its own WINS). We do a lot of mapping via DFS, so that's why we need our WINS servers. Help?

    Read the article

  • Forcedeth - too many iterations (6) in nv_nic_irq

    - by RyanC
    Hey, I'm having trouble with an onboard NVIDIA gigabit NIC. Under times of heavy load on the network, I'm seeing this error logged: "too many iterations (6) in nv_nic_irq". I'm running Hadoop DFS over these NICs, and I see checksum errors build up until the whole thing just fails. I'm running the 2.6.26-2-amd64 kernel, and my initial research seems to imply it's a problem with the forcedeth driver. Has anyone run into this problem before? Thanks in advance if anyone can help! Ryan

    Read the article

  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars the LLBLGen Pro v3 designer is built on. The library contains many data structures and algorithms, and the source code is well documented and commented, often with links to official descriptions and papers of the algorithms and data structures implemented. The source code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon.

    One of the main design goals of Algorithmia was to create a library containing implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and data structures are present in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature.

    The following functionality is available in Algorithmia:

    - Command, Command management. This system is usable to build a fully undo/redo-aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping, so you can use it to make a class undo/redo-aware, set its properties and use its contents without using commands at all. The Commands namespace is the place to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples.
    - Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented on top of it: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class and which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature-rich depth-first search (DFS) crawler is available, so DFS-based algorithms can be implemented quickly. All graph classes are undo/redo-aware, as they can be set to be 'commandified'. When a graph is 'commandified' it does its housekeeping through commands, which makes it fully undo/redo-aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically, without any extra code. If you define the properties of the class you use as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality, without any extra code.
    - Heaps. Heaps are data structures which always have the largest or smallest item stored in them as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap; all heaps in Algorithmia can do both). Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap data structures known today, especially when you want to merge different instances into one.
    - Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them.
    - Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data structure it is already stored in.
    - PropertyBag. Re-implements Tony Allowatt's original idea in .NET 3.5-specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data/settings stored in XML or another format and want to create an editable form of it without creating many editors.
    - IEditableObject/IDataErrorInfo implementations. Contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo-aware code can use them out of the box.
    - EventThrottler. An event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event source. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data structures observed by your UI or a middle tier, you can use this class to filter out duplicates, to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events.
    - Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key instead of one (as the default Dictionary does) and is also merge-aware, so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer-based system, and some utility extension methods for the defined data structures.

    We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!
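
    For readers curious about the O(1) concat remark above, here is a minimal Python sketch of the idea (not Algorithmia's actual implementation): when nodes don't hold a reference back to their containing list, concatenation is just splicing two end pointers, with no per-node fix-ups.

        class _Node:
            __slots__ = ("value", "next")
            def __init__(self, value):
                self.value, self.next = value, None

        class ConcatList:
            """Singly linked list supporting O(1) append and O(1) concat."""
            def __init__(self):
                self.head = self.tail = None

            def append(self, value):
                node = _Node(value)
                if self.tail is None:
                    self.head = self.tail = node
                else:
                    self.tail.next = node
                    self.tail = node

            def concat(self, other):
                """Splice `other` onto the end in O(1); `other` is emptied."""
                if other.head is None:
                    return
                if self.tail is None:
                    self.head = other.head
                else:
                    self.tail.next = other.head
                self.tail = other.tail
                other.head = other.tail = None

            def __iter__(self):
                node = self.head
                while node is not None:
                    yield node.value
                    node = node.next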

    Read the article

  • Graph representation benchmarking

    - by Carlucho
    I am currently developing a program that solves (if possible) any given labyrinth of dimensions from 3x4 to 26x30. I represent the graph using both an adjacency matrix (sparse) and an adjacency list. I would like to know how to output the total time taken by the DFS to find the solution using first one method and then the other. Programmatically, how could I produce such a benchmark?
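
    One straightforward pattern (a Python sketch, since the poster's language isn't stated): hide the representation behind a neighbors function, run the identical DFS against both, and time each run with a monotonic clock, keeping the best of several repeats to smooth out noise.

        import time

        def dfs(start, neighbors):
            """Iterative DFS; `neighbors(v)` hides which representation is used."""
            seen, stack = {start}, [start]
            while stack:
                v = stack.pop()
                for w in neighbors(v):
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            return seen

        def benchmark(label, start, neighbors, repeats=10):
            """Time the DFS; report the best of `repeats` runs."""
            best = float("inf")
            for _ in range(repeats):
                t0 = time.perf_counter()
                dfs(start, neighbors)
                best = min(best, time.perf_counter() - t0)
            print(f"{label}: {best * 1000:.3f} ms")

        # The same toy graph in both representations:
        adj_list = {0: [1, 2], 1: [3], 2: [3], 3: []}
        adj_matrix = [[0, 1, 1, 0],
                      [0, 0, 0, 1],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]]

        benchmark("adjacency list", 0, lambda v: adj_list[v])
        benchmark("adjacency matrix", 0,
                  lambda v: [w for w, e in enumerate(adj_matrix[v]) if e])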

    Read the article

  • Exception in thread "main" java.lang.OutOfMemoryError: how to find and fix it?

    - by or.nomore
    Hey, I'm trying to program a crossword creator, using a given dictionary txt file and a given pattern txt file. The basic idea is to use a DFS algorithm. The problem begins when the dictionary file is v-e-r-y big (about 50000 words); then I receive: Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded. I know that there is a part in my program that wastes memory, but I don't know where it is, how to find it, or how to fix it.

    Read the article

  • All possible paths in a cyclic undirected graph

    - by Elias
    I'm trying to develop an algorithm that identifies all possible paths between two nodes in a graph, as in this example. In fact, I just need to know which nodes appear in all existing paths. On the web I've only found references to DFS, A* or Dijkstra, but I don't think they work in this case. Does anyone know how to solve it?
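
    Note that "all possible paths" is only a finite set if it is restricted to simple (cycle-free) paths. With that restriction, a DFS with backtracking enumerates them, and intersecting their node sets gives the nodes that appear in every path. A minimal Python sketch (the adjacency-set format is illustrative):

        def all_simple_paths(graph, start, goal, path=()):
            """Yield every simple path from start to goal via backtracking DFS."""
            path = path + (start,)
            if start == goal:
                yield path
                return
            for neighbor in graph[start]:
                if neighbor not in path:       # skip nodes already on this path
                    yield from all_simple_paths(graph, neighbor, goal, path)

        # An undirected graph with a cycle, as adjacency sets:
        graph = {
            "a": {"b", "c"},
            "b": {"a", "c", "d"},
            "c": {"a", "b", "d"},
            "d": {"b", "c"},
        }
        paths = list(all_simple_paths(graph, "a", "d"))
        print(set.intersection(*(set(p) for p in paths)))   # {'a', 'd'}

    Enumeration is exponential in the worst case, which is unavoidable when listing paths; if only the common nodes are needed, they are exactly the start, the goal, and the vertices whose removal disconnects the two.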

    Read the article

  • Nutch search always returns 0 results

    - by darbour
    I have set up Nutch 1.0 on a cluster. It has been set up and has successfully crawled. I copied the crawl directory using dfs -copyToLocal and set the value of searcher.dir in the nutch-site.xml file located in the Tomcat directory to point to that directory. Still, when I try to search, I receive 0 results. Any help would be greatly appreciated.

    Read the article

  • System.InvalidOperationException in Output Window

    - by user318068
    I constantly get the following message in my output/debug window. The app doesn't crash, but I was wondering what the deal with it is:

        A first chance exception of type 'System.InvalidOperationException' occurred in System.dll

    My code (Sol.cs):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace ConsoleApplication1 {
            class Sol {
                public LinkedList<int> tower1 = new LinkedList<int>();
                public LinkedList<int> tower2 = new LinkedList<int>();
                public LinkedList<int> tower3 = new LinkedList<int>();
                public static LinkedList<string> BFS = new LinkedList<string>();
                public static LinkedList<string> DFS = new LinkedList<string>();
                public static LinkedList<string> IDS = new LinkedList<string>();
                public int depth;
                public LinkedList<Sol> neighbors;

                public Sol() { }

                public Sol(LinkedList<int> tower1, LinkedList<int> tower2, LinkedList<int> tower3) {
                    this.tower1 = tower1;
                    this.tower2 = tower2;
                    this.tower3 = tower3;
                    neighbors = new LinkedList<Sol>();
                }

                public virtual void getneighbors() {
                    Sol temp = this.copy();
                    Sol neighbor1 = this.copy();
                    Sol neighbor2 = this.copy();
                    Sol neighbor3 = this.copy();
                    Sol neighbor4 = this.copy();
                    Sol neighbor5 = this.copy();
                    Sol neighbor6 = this.copy();

                    if (temp.tower1.Count != 0) {
                        if (neighbor1.tower2.Count != 0) {
                            if (neighbor1.tower1.First.Value < neighbor1.tower2.First.Value) {
                                neighbor1.tower2.AddFirst(neighbor1.tower1.First);
                                neighbor1.tower1.RemoveFirst();
                                neighbors.AddLast(neighbor1);
                            }
                        } else {
                            neighbor1.tower2.AddFirst(neighbor1.tower1.First);
                            neighbor1.tower1.RemoveFirst();
                            neighbors.AddLast(neighbor1);
                        }
                        if (neighbor2.tower3.Count != 0) {
                            if (neighbor2.tower1.First.Value < neighbor2.tower3.First.Value) {
                                neighbor2.tower3.AddFirst(neighbor2.tower1.First);
                                neighbor2.tower1.RemoveFirst();
                                neighbors.AddLast(neighbor2);
                            }
                        } else {
                            neighbor2.tower3.AddFirst(neighbor2.tower1.First);
                            neighbor2.tower1.RemoveFirst();
                            neighbors.AddLast(neighbor2);
                        }
                    }
                    //-------------
                    if (temp.tower2.Count != 0) {
                        if (neighbor3.tower1.Count != 0) {
                            if (neighbor3.tower2.First.Value < neighbor3.tower1.First.Value) {
                                neighbor3.tower1.AddFirst(neighbor3.tower2.First);
                                neighbor3.tower2.RemoveFirst();
                                neighbors.AddLast(neighbor3);
                            }
                        } else {
                            neighbor3.tower1.AddFirst(neighbor3.tower2.First);
                            neighbor3.tower2.RemoveFirst();
                            neighbors.AddLast(neighbor3);
                        }
                        if (neighbor4.tower3.Count != 0) {
                            if (neighbor4.tower2.First.Value < neighbor4.tower3.First.Value) {
                                neighbor4.tower3.AddFirst(neighbor4.tower2.First);
                                neighbor4.tower2.RemoveFirst();
                                neighbors.AddLast(neighbor4);
                            }
                        } else {
                            neighbor4.tower3.AddFirst(neighbor4.tower2.First);
                            neighbor4.tower2.RemoveFirst();
                            neighbors.AddLast(neighbor4);
                        }
                    }
                    //------------------------
                    if (temp.tower3.Count() != 0) {
                        if (neighbor5.tower1.Count() != 0) {
                            if (neighbor5.tower3.ElementAtOrDefault(0) < neighbor5.tower1.ElementAtOrDefault(0)) {
                                neighbor5.tower1.AddFirst(neighbor5.tower3.First);
                                neighbor5.tower3.RemoveFirst();
                                neighbors.AddLast(neighbor5);
                            }
                        } else {
                            neighbor5.tower1.AddFirst(neighbor5.tower3.First);
                            neighbor5.tower3.RemoveFirst();
                            neighbors.AddLast(neighbor5);
                        }
                        if (neighbor6.tower2.Count() != 0) {
                            if (neighbor6.tower3.ElementAtOrDefault(0) < neighbor6.tower2.ElementAtOrDefault(0)) {
                                neighbor6.tower2.AddFirst(neighbor6.tower3.First);
                                neighbor6.tower3.RemoveFirst();
                                neighbors.AddLast(neighbor6);
                            }
                        } else {
                            neighbor6.tower2.AddFirst(neighbor6.tower3.First);
                            neighbor6.tower3.RemoveFirst();
                            neighbors.AddLast(neighbor6);
                        }
                    }
                }

                public override string ToString() {
                    string str;
                    str = "tower1" + tower1.ToString() + " tower2" + tower2.ToString() + " tower3" + tower3.ToString();
                    return str;
                }

                public Sol copy() {
                    Sol So;
                    LinkedList<int> l1 = new LinkedList<int>();
                    LinkedList<int> l2 = new LinkedList<int>();
                    LinkedList<int> l3 = new LinkedList<int>();
                    for (int i = 0; i <= this.tower1.Count() - 1; i++) { l1.AddLast(tower1.ElementAt(i)); }
                    for (int i = 0; i <= this.tower2.Count - 1; i++) { l2.AddLast(tower2.ElementAt(i)); }
                    for (int i = 0; i <= this.tower3.Count - 1; i++) { l3.AddLast(tower3.ElementAt(i)); }
                    So = new Sol(l1, l2, l3);
                    return So;
                }

                public bool Equals(Sol sol) {
                    if (this.tower1.Equals(sol.tower1) & this.tower2.Equals(sol.tower2) & this.tower3.Equals(sol.tower3))
                        return true;
                    return false;
                }

                public virtual bool containedin(Stack<Sol> vec) {
                    bool found = false;
                    for (int i = 0; i <= vec.Count - 1; i++) {
                        if (vec.ElementAt(i).tower1.Equals(this.tower1) &&
                            vec.ElementAt(i).tower2.Equals(this.tower2) &&
                            vec.ElementAt(i).tower3.Equals(this.tower3)) {
                            found = true;
                            break;
                        }
                    }
                    return found;
                }

                public virtual bool breadthFirst(Sol start, Sol goal) {
                    Stack<Sol> nextStack = new Stack<Sol>();
                    Stack<Sol> traversed = new Stack<Sol>();
                    bool found = false;
                    start.depth = 0;
                    nextStack.Push(start);
                    while (nextStack.Count != 0) {
                        Sol sol = nextStack.Pop();
                        BFS.AddFirst("poped State:" + sol.ToString() + "level " + sol.depth);
                        traversed.Push(sol);
                        if (sol.Equals(goal)) {
                            found = true;
                            BFS.AddFirst("Goal:" + sol.ToString());
                            break;
                        } else {
                            sol.getneighbors();
                            foreach (Sol neighbor in sol.neighbors) {
                                if (!neighbor.containedin(traversed) && !neighbor.containedin(nextStack)) {
                                    neighbor.depth = (sol.depth + 1);
                                    nextStack.Push(neighbor);
                                }
                            }
                        }
                    }
                    return found;
                }

                public virtual bool depthFirst(Sol start, Sol goal) {
                    Stack<Sol> nextStack = new Stack<Sol>();
                    Stack<Sol> traversed = new Stack<Sol>();
                    bool found = false;
                    start.depth = 0;
                    nextStack.Push(start);
                    while (nextStack.Count != 0) {
                        // Dequeue next State for comparison
                        // And add it 2 list of traversed States
                        Sol sol = nextStack.Pop();
                        DFS.AddFirst("poped State:" + sol.ToString() + "level " + sol.depth);
                        traversed.Push(sol);
                        if (sol.Equals(goal)) {
                            found = true;
                            DFS.AddFirst("Goal:" + sol.ToString());
                            break;
                        } else {
                            sol.getneighbors();
                            foreach (Sol neighbor in sol.neighbors) {
                                if (!neighbor.containedin(traversed) && !neighbor.containedin(nextStack)) {
                                    neighbor.depth = sol.depth + 1;
                                    nextStack.Push(neighbor);
                                }
                            }
                        }
                    }
                    return found;
                }

                public virtual bool iterativedeepening(Sol start, Sol goal) {
                    bool found = false;
                    for (int level = 0; ; level++) {
                        Stack<Sol> nextStack = new Stack<Sol>();
                        Stack<Sol> traversed = new Stack<Sol>();
                        start.depth = 0;
                        nextStack.Push(start);
                        while (nextStack.Count != 0) {
                            Sol sol = nextStack.Pop();
                            IDS.AddFirst("poped State:" + sol.ToString() + "Level" + sol.depth);
                            traversed.Push(sol);
                            if (sol.Equals(goal)) {
                                found = true;
                                IDS.AddFirst("Goal:" + sol.ToString());
                                break;
                            } else if (sol.depth < level) {
                                sol.getneighbors();
                                foreach (Sol neighbor in sol.neighbors) {
                                    if (!neighbor.containedin(traversed) && !neighbor.containedin(nextStack)) {
                                        neighbor.depth = sol.depth + 1;
                                        nextStack.Push(neighbor);
                                    } //end if
                                } //end for each
                            } //end else if
                        } // end while
                        if (found == true)
                            break;
                    } // end for
                    return found;
                }
            }
        }

    Just wondering if I may be doing something wrong somewhere or something.

    Read the article

  • Combining HBase and HDFS results in Exception in makeDirOnFileSystem

    - by utrecht
    Introduction

    An attempt to combine HBase and HDFS results in the following:

        2014-06-09 00:15:14,777 WARN org.apache.hadoop.hbase.HBaseFileSystem: Create Directory, retries exhausted
        2014-06-09 00:15:14,780 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
        java.io.IOException: Exception in makeDirOnFileSystem
            at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:136)
            at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:428)
            at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:148)
            at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:133)
            at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:572)
            at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:432)
            at java.lang.Thread.run(Thread.java:744)
        Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=hbase, access=WRITE, inode="/":vagrant:supergroup:drwxr-xr-x
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:224)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:204)
            at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:149)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4891)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4873)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4847)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3192)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3156)
            at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3137)
            at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:669)
            at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:419)
            at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44970)
            at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
            at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)
            at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:422)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
            at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1746)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
            at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
            at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
            at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2153)
            at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2122)
            at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:545)
            at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1915)
            at org.apache.hadoop.hbase.HBaseFileSystem.makeDirOnFileSystem(HBaseFileSystem.java:129)
            ... 6 more

    while the configuration and system settings are as follows:

        [vagrant@localhost hadoop-hdfs]$ hadoop fs -ls hdfs://localhost/
        Found 1 items
        -rw-r--r--   3 vagrant supergroup 1010827264 2014-06-08 19:01 hdfs://localhost/ubuntu-14.04-desktop-amd64.iso

    /etc/hadoop/conf/core-site.xml:

        <configuration>
          <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:8020</value>
          </property>
        </configuration>

    /etc/hbase/conf/hbase-site.xml:

        <configuration>
          <property>
            <name>hbase.rootdir</name>
            <value>hdfs://localhost:8020/hbase</value>
          </property>
          <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
          </property>
        </configuration>

    /etc/hadoop/conf/hdfs-site.xml:

        <configuration>
          <property>
            <name>dfs.name.dir</name>
            <value>/var/lib/hadoop-hdfs/cache</value>
          </property>
          <property>
            <name>dfs.data.dir</name>
            <value>/tmp/hellodatanode</value>
          </property>
        </configuration>

    NameNode directory permissions:

        [vagrant@localhost hadoop-hdfs]$ ls -ltr /var/lib/hadoop-hdfs/cache
        total 8
        -rwxrwxrwx. 1 hbase hdfs   15 Jun  8 23:43 in_use.lock
        drwxrwxrwx. 2 hbase hdfs 4096 Jun  8 23:43 current

    HMaster is able to start if the fs.defaultFS property has been commented out in core-site.xml. The NameNode is listening:

        [vagrant@localhost hadoop-hdfs]$ netstat -nato | grep 50070
        tcp        0      0 0.0.0.0:50070      0.0.0.0:*         LISTEN      off (0.00/0/0)
        tcp        0      0 33.33.33.33:50070  33.33.33.1:57493  ESTABLISHED off (0.00/0/0)

    and is accessible by navigating to http://33.33.33.33:50070/dfshealth.jsp.

    Question: how to solve the makeDirOnFileSystem exception and let HBase connect to HDFS?

    Read the article

  • single point of failure in IIS Web Farm Framework setting?

    - by aamir sajjad
    ASP.NET Web API; Windows Server 2008 R2 / IIS 7.5 / Web Farm Framework 2.5. I am planning to deploy an application across 4 web servers. Should I use shared content/configuration via DFS among the web servers for the web farm scenario? The second option is to use the Web Farm Framework for deployment. Furthermore, is there a chance of a single point of failure in WFF? For example, what if the primary server goes down? Which option would be better? Pros and cons of each of the above. I appreciate your response.

    Read the article

  • Windows Server 2008 repair logonui.exe

    - by Josh R
    I'm fairly certain some data on my server got corrupted, and from a dialog box that popped up I think one of the corrupted files is logonui.exe. I think this because, while the server gets through the BIOS and the Windows loading splash screen, it then goes to a black screen with just a mouse cursor that I can move around. Most of the services come back up, though: for example, DNS, DHCP, DFS and the print service all load while it's sitting on the black screen. Is there anything I can do to repair the logon screen? Maybe copy a file off another copy of Server 2008, get something off the original media (which I have), or bring the system back to a previous restore point? Thanks! I'm in the weeds right now...

    Read the article

  • hadoop install cluster

    - by chnet
    Is it possible to run Hadoop on multiple nodes when those nodes have Hadoop installed at different paths? I noticed that tutorials, such as Michael Noll's (http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/), ask for Hadoop to be installed at the same installation path on all nodes. I tried to run Hadoop on multiple nodes, each with Hadoop installed at a different path, but without success. I noticed that Hadoop assumes by default that the installation paths are the same. The problem is that when I execute start-dfs.sh, it tries to start the Hadoop daemons on the slave nodes from the same path as on the master, whereas the installation path is different. Should the relevant files be modified to reflect the installed locations on each node? Which file should I modify to reflect the installed location on each node?

    Read the article

  • How to copy files from shadow copy with long source path

    - by Jake
    The files and folders in my shared network drive (set up with DFS) were mass-deleted. Currently I am trying to recover the files from the shadow copy "Previous Versions". The problem is that thousands of files are deeply nested with long paths, making the file paths too long. When copying, the dialog "Source Path Too Long" is shown. My guess is that the file paths just barely hit the limit when saved to the network drive, but the shadow copy service appends the date and time to the folders, so the path character limit is exceeded. How else can I copy the files from the shadow copy?

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9  | Next Page >