Search Results

Search found 2628 results on 106 pages for 'martin pain'.


  • Dojo: Setting a CheckBox label programmatically

    - by Mitchell Flaherty
    Let me preface by saying that I saw this other question on the subject of CheckBox labels that was asked and answered well over a year ago. I was confused by the answers and am hoping that someone can clarify, or that there has been new dojo functionality introduced since then that allows me to do this without resorting to HTML. So without further ado, I would like to know how to programmatically create labels for check boxes. I have a check box like so:

        this.pubBoxId = new dijit.form.CheckBox({
            label: "IdChannel",
            checked: false,
            channel: that.idChannel
        }, that.name + "_PBI");

    As you can see I've tried to edit the "label" field, but the label never actually shows up on the page. I have multiple CheckBoxes that I am adding to a ContentPane and simply want a label to the left or right of each check box. Is there any way I can do this without having to write separate HTML? Also, making a separate ContentPane for each individual label would be a big pain because of how many CheckBoxes I plan to have. Thank you for reading, and let me know if further clarification is needed!
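
    One possible workaround (a sketch of my own, not from the original question): dijit.form.CheckBox renders as a bare input element and ignores the label property, so in Dojo 1.x you can create a plain <label> element next to the widget's DOM node with dojo.create. The label text and placement here are illustrative:

        var box = new dijit.form.CheckBox({
            checked: false,
            channel: that.idChannel
        }, that.name + "_PBI");

        // dojo.create(tag, attrs, refNode, position) - Dojo 1.3+
        // Clicking the label toggles the box because of the "for" binding.
        dojo.create("label", {
            "for": box.id,
            innerHTML: "IdChannel"
        }, box.domNode, "after");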

  • C# XML Documentation Compiler Warning

    - by ImperialLion
    I am curious as to why I get a compiler warning in the following situation.

        /// <summary>This is class A
        /// </summary>
        public class A
        {
            /// <summary>This is the documentation for Method A
            /// </summary>
            public void MethodA()
            {
                // Do something
            }
        }

        /// <summary>This is class B
        /// </summary>
        public class B : A
        {
            /// <summary>This does something that I want to
            /// reference <see cref="MethodA"/>
            /// </summary>
            public void MethodB()
            {
                // Do something
            }
        }

    The warning states that "XML comment on 'B.MethodB()' has cref attribute 'MethodA' that could not be resolved." If B inherits from A, shouldn't the compiler be able to see that method when generating the documentation without me specifying the parent class in the cref? If I change the cref to cref="A.MethodA()" it works fine, but that seems unnecessary and is a pain to do, especially if I have to go up more than one level. As a note to anyone testing this: be sure to have "XML documentation file" checked under Properties - Build in order to see the warning.

  • linux bash script: set date/time variable to auto-update (for inclusion in file names)

    - by user1859492
    Essentially, I have a standard format for file naming conventions. It breaks down to this: target_dateUTC_timeUTC_tool. So, for instance, if I run tcpdump on a target of 'foo', then the file would be foo_dateUTC_timeUTC_tcpdump. Simple enough, but a pain for everyone to constantly (and consistently) enter... so I've tried to create a bash script which sets variables like so:

        FILENAME=$TARGET\_$UTCTIME\_$TOOL

    Then, I can just call the variable at runtime, like so:

        tcpdump -w $FILENAME.lpc

    All of this works like a champ. I've got a menu-driven .sh which gives the user the options of viewing the current variables as well as setting them... file generation is a breeze. Unfortunately, by setting the date/time variable, it is locked to the value at the time of creation (naturally). I set the variable like so:

        UTCTIME=$(/bin/date --utc +"%Y%m%d_%H%M%Z")

    What I really need is either a way to create a variable which updates at runtime, or (more likely) another way to skin this cat. While scouring for solutions, I came across similar issues, but to be honest, I'm stumped on how to marry those approaches and create a simple, distributable solution. I can post the entire .sh if anyone cares to review it (about 120 lines).
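
    A minimal sketch of one common fix (my illustration - the function name is made up): compute the timestamp inside a shell function, so it is evaluated every time a filename is generated rather than once when the script starts:

        # Evaluated at call time, not at definition time
        make_filename() {
            echo "${TARGET}_$(/bin/date --utc +"%Y%m%d_%H%M%Z")_${TOOL}"
        }

        # Usage:
        tcpdump -w "$(make_filename).lpc"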

  • Plugin architecture in C using libdl

    - by LukeN
    I've been toying around, writing a small IRC framework in C that I'm now going to expand with some core functionality - but beyond that, I'd like it to be extensible with plugins! Up until now, whenever I wrote something IRC related (and I wrote a lot, in about 6 different languages now... I'm on fire!) and actually went ahead to implement a plugin architecture, it was inside an interpreted language that had facilities for doing (read: abusing) so, like jamming a whole script file through eval in Ruby (bad!). Now I want to abuse something in C! Basically there are three things I could do:

        1. define a simple script language inside of my program
        2. use an existing one, embedding an interpreter
        3. use libdl to load *.so modules at runtime

    I'm fond of the third one and would rather avoid the other two options if possible. Maybe I'm a masochist of some sort, but I think it could be both fun and useful for learning purposes. Logically thinking, the obvious "pain-chain" would be (lowest to highest) 2 - 1 - 3, for the simple reason that libdl is dealing with raw code that can (and will) explode in my face more often than not. So this question goes out to you, fellow users of stackoverflow: do you think libdl is up to this task, or is this even a realistic thought?
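
    To make the third option concrete, here is a minimal sketch of the libdl approach (my illustration - the plugin path and the plugin_init entry point are made-up conventions, not from the question):

        /* main.c - build with: cc main.c -ldl */
        #include <stdio.h>
        #include <dlfcn.h>

        int main(void)
        {
            /* Load a plugin shared object at runtime */
            void *handle = dlopen("./plugins/hello.so", RTLD_NOW);
            if (!handle) {
                fprintf(stderr, "dlopen: %s\n", dlerror());
                return 1;
            }

            /* Look up an agreed-upon entry point and call it */
            void (*plugin_init)(void) = (void (*)(void))dlsym(handle, "plugin_init");
            if (plugin_init)
                plugin_init();

            dlclose(handle);
            return 0;
        }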

  • A question of long-running and disruptive branches

    - by Matt Enright
    We are about to begin prototyping a new application that will share some existing infrastructure assemblies with an existing application, and also involve a significant subset of the existing domain model. Parts of the domain model will likely undergo some serious changes for this new application, and the endgame for all of this, once the new application has been fully specified and is launch-ready, is that we would like to re-unify the models of the two applications (as well as share a database, link functionality, etc.). For the duration of development and prototyping, though, we will be using a separate database so that we can change things without worrying about impact to development or use of the existing application. Since it is a prototype, there will be a pretty long window during which serious changes or rearchitecting can occur as product management experiments with different workflows, different customer bases are surveyed, and we try to keep up.

    We have already made a Subversion branch, so as to not impact concurrent development on the mature application, and are toying with two potential ways of moving forward:

        1. Use the svn branch as the sole mechanism of separation. Make our changes to the existing domain models, and evaluate their impact on the existing application (and make the requisite changes to ProjectA) once we have established that our long-running side branch is stable enough for re-entry to trunk.

        2. "Fork" the shared code (temporarily): copy ProjectA.Entities to NewProject.Entities, and treat all of the NewProject code as self-contained. When all of the perturbations around the model have died down and we feel satisfied, manually re-integrate the changes (as granular or sweeping as warranted) back into ProjectA.Entities, updating ProjectA to use the improved models at each step (this can take place either before or after the subversion merge has occurred). The subversion merge will then not handle recombination of any of the heavy changes here.

    Note: the "fork" method only applies to the code we see significant changes in store for, and whose modification will break ProjectA - shared infrastructure stuff, for example, we would just modify in place (on our branch) and let the merge sort out. Development is hard, go shopping. Naturally, after not coming to an agreement, we're turning it over to the oracle of power that is SO. Any experience with any of these methods, pain points to watch out for, or something new entirely?

  • Optimizing landing pages

    - by Oleg Shaldybin
    In my current project (Rails 2.3) we have a collection of 1.2 million keywords, and each of them is associated with a landing page, which is effectively a search results page for the given keyword. Each of those pages is pretty complicated, so it can take a long time to generate (up to 2 seconds with a moderate load, even longer during traffic spikes, with current hardware). The problem is that 99.9% of visits to those pages are new visits (via search engines), so caching on the first visit doesn't help much: that visit will still be slow, and the next visit could be several weeks later. I'd really like to make those pages faster, but I don't have too many ideas on how to do it. A couple of things that come to mind:

        - build a cache for all keywords beforehand (with a very long TTL, a month or so). However, building and maintaining this cache can be a real pain, and the search results on the page might be outdated or even no longer accessible;
        - given the volatile nature of this data, don't try to cache anything at all, and just try to scale out to keep up with traffic.

    I'd really appreciate any feedback on this problem.

  • Creating a Simple C# Wrapper to clean up code

    - by Tangopop
    I have this code:

        public void Contacts(string domainToBeTested, string[] browserList, string timeOut, int numberOfBrowsers)
        {
            verificationErrors = new StringBuilder();
            for (int i = 0; i < numberOfBrowsers; i++)
            {
                ISelenium selenium = new DefaultSelenium("LMTS10", 4444, browserList[i], domainToBeTested);
                try
                {
                    selenium.Start();
                    selenium.Open(domainToBeTested);
                    selenium.Click("link=Email");
                    Assert.IsTrue(selenium.IsElementPresent("//div[@id='tabs-2']/p/a/strong"));
                    selenium.Click("link=Address");
                    Assert.IsTrue(selenium.IsElementPresent("//div[@id='tabs-3']/p/strong"));
                    selenium.Click("link=Telephone");
                    Assert.IsTrue(selenium.IsElementPresent("//div[@id='tabs-1']/ul/li/strong"));
                }
                catch (AssertionException e)
                {
                    verificationErrors.AppendLine(browserList[i] + " :: " + e.Message);
                }
                finally
                {
                    selenium.Stop();
                }
            }
            Assert.AreEqual("", verificationErrors.ToString(), verificationErrors.ToString());
        }

    My problem is that I would like to reuse the code surrounding the 'try' many, many times in the rest of the code. I think it has something to do with wrappers, but I can't get a simple answer for this from the web. In simple terms, the only piece of this code which changes is the bit inside the try {} - the rest is standard code that I have currently duplicated over 100 times, and it is turning out to be a pain to maintain. Hope this is clear, many thanks.
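
    A sketch of one way to factor this out (my illustration, with a made-up method name): pass the variable part in as an Action<ISelenium> delegate, so the browser loop, error collection, and cleanup live in one place:

        private void RunInEachBrowser(string domainToBeTested, string[] browserList,
                                      int numberOfBrowsers, Action<ISelenium> test)
        {
            verificationErrors = new StringBuilder();
            for (int i = 0; i < numberOfBrowsers; i++)
            {
                ISelenium selenium = new DefaultSelenium("LMTS10", 4444, browserList[i], domainToBeTested);
                try
                {
                    selenium.Start();
                    selenium.Open(domainToBeTested);
                    test(selenium);            // the only part that varies
                }
                catch (AssertionException e)
                {
                    verificationErrors.AppendLine(browserList[i] + " :: " + e.Message);
                }
                finally
                {
                    selenium.Stop();
                }
            }
            Assert.AreEqual("", verificationErrors.ToString(), verificationErrors.ToString());
        }

        // Usage:
        RunInEachBrowser(domain, browsers, browsers.Length, s =>
        {
            s.Click("link=Email");
            Assert.IsTrue(s.IsElementPresent("//div[@id='tabs-2']/p/a/strong"));
        });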

  • Developers: How does BitLocker affect performance?

    - by Chris
    I'm an ASP.NET / C# developer. I use VS2010 all the time. I am thinking of enabling BitLocker on my laptop to protect the contents, but I am concerned about performance degradation. Developers who use IDEs like Visual Studio are working on lots and lots of files at once. More than the usual office worker, I would think. So I was curious if there are other developers out there who develop with BitLocker enabled. How has the performance been? Is it noticeable? If so, is it bad? My laptop is a 2.53GHz Core 2 Duo with 4GB RAM and an Intel X25-M G2 SSD. It's pretty snappy but I want it to stay that way. If I hear some bad stories about BitLocker, I'll keep doing what I am doing now, which is keeping stuff RAR'ed with a password when I am not actively working on it, and then SDeleting it when I am done (but it's such a pain).

  • java.awt.Desktop.open doesn’t work with PDF files?

    - by Jason S
    It looks like I cannot use Desktop.open() on PDF files, regardless of location. Here's a small test program:

        package com.example.bugs;

        import java.awt.Desktop;
        import java.io.File;
        import java.io.IOException;

        public class DesktopOpenBug {
            static public void main(String[] args) {
                try {
                    Desktop desktop = null;
                    // Before more Desktop API is used, first check
                    // whether the API is supported by this particular
                    // virtual machine (VM) on this particular host.
                    if (Desktop.isDesktopSupported()) {
                        desktop = Desktop.getDesktop();
                        for (String path : args) {
                            File file = new File(path);
                            System.out.println("Opening " + file);
                            desktop.open(file);
                        }
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    If I run DesktopOpenBug with arguments c:\tmp\zz1.txt c:\tmp\zz.xml c:\tmp\ss.pdf (3 files I happen to have lying around), the .txt and .xml files open up fine but the PDF fails:

        Opening c:\tmp\zz1.txt
        Opening c:\tmp\zz.xml
        Opening c:\tmp\ss.pdf
        java.io.IOException: Failed to open file:/c:/tmp/ss.pdf. Error message: The parameter is incorrect.
            at sun.awt.windows.WDesktopPeer.ShellExecute(Unknown Source)
            at sun.awt.windows.WDesktopPeer.open(Unknown Source)
            at java.awt.Desktop.open(Unknown Source)
            at com.example.bugs.DesktopOpenBug.main(DesktopOpenBug.java:21)

    What the heck is going on? I'm running WinXP; I can type "c:\tmp\ss.pdf" at the command prompt and it opens up just fine.

    edit: if this is an example of Sun Java bug #6764271, please help by voting for it. What a pain. :(
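
    One workaround that is often suggested for this situation (my sketch, not from the original post - Windows-specific, and it bypasses Desktop entirely) is to hand the file to the shell via rundll32:

        // Inside the same try/catch, since exec() also throws IOException
        Runtime.getRuntime().exec(new String[] {
            "rundll32", "url.dll,FileProtocolHandler", file.getAbsolutePath()
        });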

  • Need help making a div appear on the bottom of the screen while the rest of the divs scroll

    - by user1896600
    It's hard to describe my specific problem without just showing you the HTML code. The HTML source can be seen easily from clicking "View Source" while seeing the page http://techdot.us/projects/jeopardy/view.php. The CSS is located here: http://techdot.us/projects/jeopardy/style/gameStyle.css. My main goal is to have the main content table rows/columns appear on the majority of the screen (everything except 69px, to be exact). So, the bottom 69px contains an informational panel that is supposed to stay on the bottom of the screen, even when the user scrolls down the page. Scrolling is supposed to, in theory, trigger the majority of the content to go down the page normally, except the bottom bar which stays static. I have achieved this effect on the website. However, there is a big problem. On smaller screens (as you can simulate by resizing the window), some of the main table gets cut off. It seems that my CSS solution is a botch, and, in effect, does not accomplish my main goal. The bottom bar should not cut off part of the table from the main content div (gameTable), but the main content div should display all of its content in a scrollable fashion. My CSS at the moment works as long as the viewer's screen is a certain pixel height. This is definitely not permanent. Thank you SO much for the help. I really appreciate it and totally understand that I'm being a total pain by just throwing down tons of CSS and HTML code to edit.
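
    For reference, a sketch of the usual CSS pattern for this layout (an assumption on my part - #infoPanel is a made-up selector, though #gameTable is the div named above): pin the bottom panel with position: fixed, and give the scrolling content enough bottom padding that its last rows can't hide behind the panel on any screen height:

        /* Bottom info panel stays put while the page scrolls */
        #infoPanel {
            position: fixed;
            bottom: 0;
            left: 0;
            width: 100%;
            height: 69px;
        }

        /* Main content scrolls normally; the padding keeps the
           final rows visible above the fixed panel */
        #gameTable {
            padding-bottom: 69px;
        }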

  • How to store arbitrary data for some HTML tags

    - by nickf
    I'm making a page which has some interaction provided by javascript. Just as an example: links which send an AJAX request to get the content of articles and then display that data in a div. Obviously in this example, I need each link to store an extra bit of information: the id of the article. The way I've been handling it in this case was to put that information in the href, like this:

        <a class="article" href="#5">

    I then use jQuery to find the a.article elements and attach the appropriate event handler. (Don't get too hung up on the usability or semantics here, it's just an example.) Anyway, this method works, but it smells a bit, and isn't extensible at all (what happens if the click function has more than one parameter? what if some of those parameters are optional?). The immediately obvious answer was to use attributes on the element. I mean, that's what they're for, right? (Kind of.)

        <a articleid="5" href="link/for/non-js-users.html">

    In my recent question I asked if this method was valid, and it turns out that short of defining my own DTD (I don't), then no, it's not valid or reliable. A common response was to put the data into the class attribute (though that might have been because of my poorly-chosen example), but to me, this smells even more. Yes it's technically valid, but it's not a great solution. Another method I'd used in the past was to actually generate some JS and insert it into the page in a <script> tag, creating a struct which would associate with the object:

        var myData = {
            link0 : {
                articleId : 5,
                target : '#showMessage'
                // etc...
            },
            link1 : {
                articleId : 13
            }
        };

        <a href="..." id="link0">

    But this can be a real pain in the butt to maintain and is generally just very messy. So, to get to the question: how do you store arbitrary pieces of information for HTML tags?
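
    One widely used answer (an illustration of mine, not from the original post): HTML5 data-* attributes, which are valid HTML5 and, with jQuery 1.6+, are mapped automatically so that data-article-id is readable as .data('articleId'):

        <a class="article" href="link/for/non-js-users.html" data-article-id="5">

        <script>
        // jQuery 1.6+ imports data-* attributes into .data()
        $('a.article').click(function (e) {
            e.preventDefault();
            var articleId = $(this).data('articleId'); // 5
            // ...fetch and display the article...
        });
        </script>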

  • Displaying Query Results Horizontally

    - by AndyD273
    I am wondering if it is possible to take the results of a query and return them as a CSV string instead of as a column of cells. Basically, we have a table called Customers, and a table called CustomerTypeLines, and each Customer can have multiple CustomerTypeLines. When I run a query against it, I run into problems when I want to check multiple types. For instance:

        Select * from Customers a
        Inner Join CustomerTypeLines b on a.CustomerID = b.CustomerID
        where b.CustomerTypeID = 14 and b.CustomerTypeID = 66

    ...returns nothing, because a customer can't have both on the same line, obviously. In order to make it work, I had to add a field to Customers called CustomerTypes that looks like ,14,66,67, so I can do:

        Where a.CustomerTypes like '%,14,%' and a.CustomerTypes like '%,66,%'

    which returns 85 rows. Of course this is a pain, because my program has to rebuild this field for a Customer each time the CustomerTypeLines table changes. It would be nice if I could do a subquery in my where clause that would do the work for me, so instead of returning the results like:

        14
        66
        67

    it would return them like ,14,66,67, - is this possible?
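
    A sketch of one common technique, assuming SQL Server (the question doesn't name the engine; on MySQL, GROUP_CONCAT does the same job directly): FOR XML PATH('') can collapse the type IDs for each customer into one delimited string:

        -- Hypothetical, SQL Server syntax
        SELECT a.CustomerID,
               (SELECT ',' + CAST(b.CustomerTypeID AS varchar(10))
                FROM CustomerTypeLines b
                WHERE b.CustomerID = a.CustomerID
                FOR XML PATH('')) + ',' AS CustomerTypes   -- yields ,14,66,67,
        FROM Customers a

        -- MySQL equivalent: GROUP_CONCAT(b.CustomerTypeID)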

  • How to change the default branch to push in mercurial?

    - by timmfin
    I like creating named branches in Mercurial to deal with features that might take a while to code, so when I push I do hg push -r default to ensure I'm only pushing changes on the default branch. However, it is a pain to have to remember -r default every single time I do a push or outgoing command. So I tried to fix this by adding this config to my ~/.hgrc:

        [defaults]
        push = push -r default
        outgoing = outgoing -r default

    The problem is, those config lines are not really defaults, they are aliases. They work as intended until I try to do hg push -r <some revision>, at which point the "default" I've set up just obliterates the revision I passed in. (I see that defaults are deprecated, but aliases have the same problem.) I tried looking around, but I can't find anything that will allow me to set a default branch to push AND allow me to override it when necessary. Does anyone know of something else I could do?

    PS: I do realize that I could have separate clones for each branch, but I would rather not do that. It's annoying to have to switch directories, particularly when you have shared configuration or editor workspaces.
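
    For what it's worth, later Mercurial releases (3.8+, so this may postdate the setup in the question - an assumption on my part) added a path sub-option for exactly this; a sketch:

        [paths]
        default = https://example.com/repo    # hypothetical URL
        # Push only the default branch unless -r is given explicitly:
        default:pushrev = default

    An explicit hg push -r <rev> still overrides the configured revset, which is the behavior being asked for.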

  • Multiple locking task (threading)

    - by Archeg
    I need to implement a class that should perform a locking mechanism in our framework. We have several threads and they are numbered 0,1,2,3.... We have a static class called ResourceHandler that should lock these threads on given objects. The requirement is that n Lock() invokes should be released by m Release() invokes, where n = [0..] and m = [0..]. So no matter how many locks were performed on a single object, only one Release call is enough to unlock all. Even further, if an object is not locked, a Release call should do nothing. We also need to know which objects are locked on which threads. I have this implementation:

        public class ResourceHandler
        {
            private readonly Dictionary<int, List<object>> _locks = new Dictionary<int, List<object>>();

            public static ResourceHandler Instance { /* Singleton */ }

            public virtual void Lock(int threadNumber, object obj)
            {
                Monitor.Enter(obj);
                if (!_locks.ContainsKey(threadNumber)) { _locks.Add(threadNumber, new List<object>()); }
                _locks[threadNumber].Add(obj);
            }

            public virtual void Release(int threadNumber, object obj)
            {
                // Check whether we have threadN in _locks and skip if not
                var count = _locks[threadNumber].Count(x => x == obj);
                _locks[threadNumber].RemoveAll(x => x == obj);
                for (int i = 0; i < count; i++)
                {
                    Monitor.Exit(obj);
                }
            }

            // .....
        }

    Actually what I am worried about here is thread-safety. I'm not sure whether it is thread-safe or not, and it's a real pain to fix that. Am I doing the task correctly, and how can I ensure that this is thread-safe?
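
    As a starting point on the thread-safety worry, a minimal sketch (my illustration, not a complete fix): the Dictionary itself is shared mutable state, so every read and write of _locks needs its own synchronization, separate from the Monitor held on the resource object - for example a private guard object:

        private static readonly object Sync = new object();

        public virtual void Lock(int threadNumber, object obj)
        {
            Monitor.Enter(obj);
            lock (Sync)   // guards the bookkeeping, not the resource
            {
                if (!_locks.ContainsKey(threadNumber))
                {
                    _locks.Add(threadNumber, new List<object>());
                }
                _locks[threadNumber].Add(obj);
            }
        }

    Release would need the same guard around its dictionary lookup and RemoveAll.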

  • More efficient way to update multiple elements in javascript and/or jquery?

    - by Seiverence
    Say I have 6 divs with IDs "#first", "#second" ... "#sixth", and I want to execute functions on each of those divs. I would set up an array containing the name of each div I want to update as a string element, e.g. ["first", "second", "third"]. To apply a function - say, changing the background color to red - I set up a for loop that iterates through each element in the array:

        function updateAllDivsInTheList() {
            for (var i = 0; i < array.length; i++) {
                $("#" + array[i]).changeCssFunction();
            }
        }

    Whenever I create a new div, I add it to the array. The issue is, if I have a large number of divs that need to get updated, say 1000 out of 1200 divs, it may be a pain/performance tank to sequentially iterate through every single element in the array. Is there some more efficient way of updating multiple divs without having to sequentially iterate through every element in an array, maybe with some other, more efficient data structure besides an array? Or is what I am doing already the most efficient way? If you can provide an example, that would be great.
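
    A sketch of the usual alternative (the class name here is made up): tag the divs that belong to the batch with a shared class, and let a single selector handle them. The iteration still happens, but inside jQuery, without you building and walking your own array of IDs:

        <!-- each div that should be updated carries the class -->
        <div id="first" class="updatable">...</div>
        <div id="second" class="updatable">...</div>

        <script>
        // One call updates every matching element
        $('.updatable').css('background-color', 'red');
        </script>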

  • Keeping a web app project organized?

    - by user246114
    Hi, I'm writing a web app, using jsp to create the page content, and I need a pretty good amount of javascript to make the app work. Does anyone have any recommendations on how to structure my project so that it doesn't become a mess? This is a broad question, but the basic problem is that I'm inserting javascript code directly into my jsp content, plus I have some external js files, and ids and such are strewn across multiple files. I'm not really sure what best practice is for keeping this type of project organized. Do you always keep your javascript in separate files? There have to be a few hooks in the jsp pages for them, right? I tried using GWT because I'm really a C/Java developer, and I was hoping it would help keep my project more organized (it definitely helps) - but GWT is a pain to use with jsp: it really wants you to do all UI generation client side after the page is done loading, which doesn't work for what I need to do. Again, broad question, any tips would be great. Thanks

  • Mercurial on shared network drive?

    - by user1164199
    Right now I have my repo on my local drive. In order to back it up, I have to copy .hg to a Windows network drive. At "Is it a good idea to put Mercurial Repository in shared Network drive?", Lasse Karlsen said the repo shouldn't be on a shared folder on a network server because "mercurial cannot reliably hold locks in all situations". Would this still be an issue when the repository is only updated by a single user? If so, can someone explain to me why the corruption happens? A while back our IT had problems setting up a mercurial server. I am very fond of mercurial (it has a great interface and is very easy to work with), but if it's going to be such a pain in the neck to set up for multiple users, I am willing to look for something else. Does anyone have any suggestions (with reasons)? I am looking for a revision control program that has the following attributes:

        1. Good interface (allows you to easily see revisions and changes to the code over multiple revisions).
        2. Works as a local repo or a network repo.
        3. IT will feel comfortable installing it on their network.

    Thanks, Stephen

  • find the difference between two very large list

    - by user157195
    I have two large lists (each could have a hundred million items); the source of each list can be either a database table or a flat file. Both lists are of comparable size, and both are unsorted. I need to find the difference between them. So I have 3 scenarios:

        1. List1 is a database table (assume each row simply has one item (key) that is a string), List2 is a large file.
        2. Both lists are from 2 db tables.
        3. Both lists are from two files.

    In case 2, I plan to use:

        select a.item from MyTable a where a.item not in (select b.item from MyTable b)

    This is clearly inefficient; is there a better way? Another approach is: I plan to sort each list, and then walk down both of them to find the diff. If a list is from a file, I have to read it into a db table first, then use db sorting to output the list. Is the run time complexity still O(n log n) in db sorting? Either approach is a pain and seems like it would be very slow when the lists involved have hundreds of millions of items. Any suggestions?
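
    For the two-table case, a common alternative to NOT IN is an anti-join, which most databases optimize well; a sketch with hypothetical table names:

        -- items in ListA that have no match in ListB
        SELECT a.item
        FROM ListA a
        LEFT JOIN ListB b ON a.item = b.item
        WHERE b.item IS NULL;

    One caveat worth knowing: NOT IN returns no rows at all if the subquery produces even a single NULL, which the anti-join form avoids.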

  • What are some fast methods for navigating to frequently used folders in Windows 7?

    - by fostandy
    (This is a follow-up to my previous question.) In Windows XP I used to be able to quickly navigate to frequently used folders by making use of the 'Favorites' menu item and the hotkey behaviour. In certain conditions it could be set up so that getting to a particular folder was as easy as alt-a x (and without a file explorer window open it was as fast as win-e alt-a x). I am struggling to get anywhere near this speed in Windows 7 and would like to solicit advice from others regarding fast folder navigation, to see if I am missing any methods. My current way to navigate is basically:

        1. move hand to mouse
        2. move cursor to the navigation pane
        3. scroll all the way to the top (because normally the pane is focused on whatever deep directory structure I am already in)
        4. sift through my 50+ favorites to get the one I want, or click a link to a folder that contains further links in some sort of 'pseudo-tree' functionality
        5. select it

    This is slower than my previous method by upwards of an order of magnitude. There are a couple of things I've contemplated:

        - add expandable folders, not just direct links, to the favorites menu
        - add expandable folders, not just direct links, to the start menu
        - add links to my favorite folders in a submenu of the start menu so that they come up when I search for them (they do, but this is still rather cumbersome)
        - start using 7stacks (http://www.alastria.com/index.php?p=software-7s) - this is about the closest I've gotten to some sort of compact, customizeable, easy to access, tree-based navigation structure

    How do you power users quickly navigate to your favorite folders? Are there keyboard shortcuts I am missing? Can someone recommend other apps, addons or extensions that can achieve this sort of functionality?

    The current solution (thanks to the answers below) I am going to use is a combination of AutoHotkey and 7stacks - AutoHotkey to launch 7stacks, and 7stacks with the 'menu' stack type for fast, key-enabled navigation to folders organised in a tree structure. This solves about 90% of the issue; the only remaining niggles (note that these are really minor, I am splitting hairs more than anything here) are:

        - can't use this for existing folder navigation (i.e. with an explorer window already open, wanting to go to another directory)
        - a bit more cumbersome to add/remove entries compared to XP favorites
        - a little slower than XP favorites

    Whatever. I'm happy. Thanks guys. I think the answer is a split between John T and Kelbizzle - I've elected to give the answer to John T and +1 to Kelbizzle, as I had already mentioned 7stacks.

  • 403 forbidden while submitting a POST request with image data via iPhone application

    - by binnyb
    I am creating an iOS application which allows users to send image/text data to my webserver via a POST request. I can successfully send POSTs to the server when image data is not included in the request; any time I POST with image data, the server spits back a 403 Forbidden. I have tried adding the following to the .htaccess file in the directory of the script, with no luck:

        Options +Indexes FollowSymLinks +ExecCGI
        Order allow,deny
        Allow from all

    Web browsers and Android devices can successfully POST with image data to the script; the only device which cannot is the iPhone. POSTing with data to other hosting providers works as expected - it is just this host (ipowerweb.com). I noticed that when I try to POST to ANY script on this server with data, it returns a 403 Forbidden. Another note: I can successfully POST to another server that is hosted by ipowerweb, but mine can't seem to handle it. My host has tried to resolve the issue but cannot, and they have marked it on their end as "resolved", so no more help from them. I wish to keep this host, as moving would be a pain - I will change hosts only as a last resort, so please help me! Why am I getting this 403 Forbidden error only when I submit data via my iPhone application, and how can I resolve it so I can successfully POST data? Any advice would be greatly appreciated.

    edit: as requested, here are the response headers:

        {
            Connection = close;
            "Content-Length" = 217;
            "Content-Type" = "text/html; charset=iso-8859-1";
            Date = "Wed, 12 Jan 2011 19:11:19 GMT";
            Server = "Apache/2";
        }

    edit: as requested, here are the request headers:

        {
            "Accept-Encoding" = gzip;
            "Content-Length" = 5781;
            "Content-Type" = "multipart/form-data; charset=utf-8; boundary=0xKhTmLbOuNdArY";
            "User-Agent" = "YeahIAteThat 1.0 (iPhone; iPhone OS 4.2.1; en_US)";
        }
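
    One frequent culprit for 403s on multipart POSTs at shared hosts is mod_security rejecting the request (an unusual User-Agent string and multipart boundary, as above, are exactly the kind of thing its rules match). This is an assumption about the host's setup, but if they run mod_security 1.x and allow per-directory overrides, a sketch of an .htaccess test:

        <IfModule mod_security.c>
            SecFilterEngine Off
            SecFilterScanPOST Off
        </IfModule>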

  • How can Windows XP/7 users cleanly connect to Mac OS X Server 10.9.4 Mavericks with Active Directory integration?

    - by JakeGould
    I’m a Linux/Unix systems admin who also manages a Macintosh server infrastructure & there is a lone Mac Mini in the mix running 10.9.4 that I would like Windows XP & Windows 7 users to connect to with little or no hassle. The problem? Windows users can’t seem to even get to the point of a password prompt yet connect. Mind you this server replaced a Mac OS X 10.6.8 server that had issues, but never had issues with Windows users connected. The gist of this post is: The tons of different messages out there about Mac OS X 10.9.4 Samba support are mind-numbingly confusing. Can anyone share some solid specifics here? I’ve read pieces like this one here that suggest turning off file sharing & then adding a share with AFP/SMB enabled would work. But the suggestion seems to apply to 10.8. And from what I know a lot has changed in Samba support in 10.9 let alone the iterations to 10.9.4. Then I found this great tutorial here that explains things step-by-step. Which seems like it should work, but the problem is the example given applies to a local user created on the Mac when I would like users in an Active Directory group—which the Mac is bound to—access the Mac Mini shares. There are also tons of great tips here on MacWindows.com but nothing seems solid to the issue I am facing. So from what I am reading these are my options: Local User Versus Active Directory: Setup a common local user on the Mac OS X 10.9.4 server to be used for Samba sharing since Active Directory won’t work. Is this really the case? Because loss of AD integration is a major pain. Do Extended File Attributes Get Retained from Windows Users: If this were to work, how do extended attributes come into play? Loss of metadata & related info is not an option. How Fragile is Any of this to Updates: How does any of this shake out with Mac OS X updates as well as Windows updates? Installing Official, Open Source Samba: Would upgrading the Samba install on the server to the official open source Samba via a package like SMBUp or via the Hombrew method described here help or make the issue worse? I fully understand there have historically been issues in mixed environments, but nowadays Windows users connecting to a Mac seem to have a truly hellish road ahead of them. Unless I am missing something?

  • Setting up a home server - what to use? (ZFS vs btrfs, BSD vs Linux, misc other requirements)

    - by monch1962
    I need to get all our home content off individual machines and onto a central server. What I'd like to have is the metaphorical "server under the stairs".

    Stuff we need:

        - expandable storage. I want to be able to add extra discs as we go along, with minimal maintenance required. Currently we've got about 3Tb of files we need to host, and that's likely to grow by another Tb every 6-12 months based on recent history, so I need to be able to add additional discs with minimal pain
        - needs to store all the media (i.e. photos, video, music) we have, and run services to serve the various devices we have in the house for playback (e.g. DAAP so we can play stuff through iTunes, ccxstream so we can play stuff over XBMC). DAAP and ccxstream are needed now, but we also need to support new standards as they emerge (so a closed-box solution isn't going to work)
        - RAID 5, or something broadly equivalent (e.g. RAID-Z)
        - BitTorrent client
        - ssh, NFS, Samba access
        - snapshot capability (as in ZFS), so we can snapshot individual file systems regularly and roll back when my kids delete their school assignments the day before they're due...
        - ability to recover quickly from power outages (it's not unusual for us to have power outages that last longer than our UPS' batteries)
        - FOSS software
        - a modern distributed version control system running on the box, such as Mercurial

    Stuff I'd like to have on the server, but can live without:

        - PVR capability, so I could record TV to the box
        - Web server. We currently run a small Web server on a very old box, and I'd ideally like to turn the old box off and move the content to the new server, just to save some electricity
        - Nagios + mrtg

    I've been looking at using an EEE Box as the server, primarily because I can get them cheap and they don't consume much power. The choice of OS and file system is more difficult, from what I've found:

        - I've got most experience with various Linux distros, but am happy to use another Unix
        - FreeBSD and OpenSolaris seem to be the best choices for hosting ZFS
        - OpenSolaris' hardware support is nowhere near as good as e.g. Ubuntu's
        - btrfs, while looking very good, doesn't seem ready for prime time yet
        - ZFS doesn't let you (easily?) add new discs to a RAID5 or RAID-Z
        - reading around, it seems that ZFS is a bit short of tools for recovering lost data

    At the moment, I'm leaning towards running FreeNAS+ZFS, but I'm concerned about the requirement to be able to add new discs on a fairly regular basis to an existing RAID-Z. Can anyone provide some recommendations, or share experiences? Thanks in advance
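
    On the RAID-Z expansion concern: an existing raidz vdev can't be grown one disc at a time, but a pool can be expanded by adding a whole new vdev alongside it. A sketch of what that looks like (hypothetical pool and device names):

        # existing pool: one 3-disc raidz vdev
        zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

        # later: grow capacity by adding a second raidz vdev to the pool
        zpool add tank raidz c1t3d0 c1t4d0 c1t5d0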

  • iptables management tools for large scale environment

    - by womble
    The environment I'm operating in is a large-scale web hosting operation (several hundred servers under management, almost-all-public addressing, etc - so anything that talks about managing ADSL links is unlikely to work well), and we're looking for something that will be comfortable managing both the core ruleset (around 12,000 entries in iptables at current count) plus the host-based rulesets we manage for customers. Our core router ruleset changes a few times a day, and the host-based rulesets change maybe 50 times a month (across all the servers, so maybe one change per five servers per month). We're currently using filtergen (which is balls in general, and super-balls at our scale of operation), and I've used shorewall in the past at other jobs (which would be preferable to filtergen, but I figure there's got to be something out there that's better than that).

    The "musts" we've come up with for any replacement system are:

        - Must generate a ruleset fairly quickly (a filtergen run on our ruleset takes 15-20 minutes; this is just insane) - this is related to the next point
        - Must generate an iptables-restore style file and load that in one hit, not call iptables for every rule insert
        - Must not take down the firewall for an extended period while the ruleset reloads (again, this is a consequence of the above point)
        - Must support IPv6 (we aren't deploying anything new that isn't IPv6 compatible)
        - Must be DFSG-free
        - Must use plain-text configuration files (as we run everything through revision control, and using standard Unix text-manipulation tools is our SOP)
        - Must support both RedHat and Debian (packaged preferred, but at the very least mustn't be overtly hostile to either distro's standards)
        - Must support the ability to run arbitrary iptables commands to support features that aren't part of the system's "native language"

    Anything that doesn't meet all these criteria will not be considered. The following are our "nice to haves":

        - Should support config file "fragments" (that is, you can drop a pile of files in a directory and say to the firewall "include everything in this directory in the ruleset"; we use configuration management extensively and would like to use this feature to provide service-specific rules automatically)
        - Should support raw tables
        - Should allow you to specify particular ICMP types in both incoming packets and REJECT rules
        - Should gracefully support hostnames that resolve to more than one IP address (we've been caught by this one a few times with filtergen; it's a rather royal pain in the butt)

    The more optional/weird iptables features that the tool supports (either natively or via existing or easily-writable plugins), the better. We use strange features of iptables now and then, and the more of those that "just work", the better for everyone.
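
    To ground the second and third "musts": iptables-restore loads an entire ruleset in one atomic operation, so a generated file applies in seconds with no window where the firewall is half-loaded. A minimal sketch of the file format (the path and rules here are illustrative):

        # /etc/firewall/rules.v4 - loaded with: iptables-restore < rules.v4
        # (ip6tables-restore handles the IPv6 equivalent)
        *filter
        :INPUT DROP [0:0]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p tcp --dport 22 -j ACCEPT
        COMMIT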
