Search Results

Search found 7324 results on 293 pages for 'operations research'.

Page 123/293

  • What's the simplest configuration of SVN on a Windows Server to avoid plain text password storage?

    - by detly
    I have an SVN 1.6 server running on a Windows Server 2003 machine, served via CollabNet's svnserve running as a service (using the svn protocol). I would like to avoid storing passwords in plain text on the server. Unfortunately, the default configuration and SASL with DIGEST-MD5 both require plain-text password storage. What is the simplest possible way to avoid storing passwords in plain text? My constraints are:

    - Path-based access control to the SVN repository needs to be possible (currently I use an authz file). As far as I know, this is more or less independent of the authentication method.
    - Active Directory is available, but it's not just domain-joined Windows machines that need to authenticate: workgroup PCs, Linux PCs, and software that uses PySVN to perform SVN operations all need to be able to access the repositories (see the sketch below).
    - Upgrading the SVN server is feasible, as is installing additional software.
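    On the PySVN constraint: PySVN clients authenticate through a login callback regardless of which scheme the server ends up using, so that part should adapt to whatever is chosen. A minimal sketch (repository URL and credentials are hypothetical):

        import pysvn

        def get_login(realm, username, may_save):
            # pysvn asks for credentials on demand; return
            # (retcode, username, password, save_to_disk)
            return True, 'builduser', 'secret', False

        client = pysvn.Client()
        client.callback_get_login = get_login
        print(client.ls('svn://server/repo/trunk'))  # hypothetical repo URL

    Returning False for save_to_disk also keeps the client side from caching the password in plain text.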

    Read the article

  • App pool gets stuck on reset and takes .net pages out

    - by delenda
    Several times after our app pool has been told to reset, it gets stuck, the .NET pages go down, and the following error appears in the application event log:

        Failed to execute request because the App-Domain could not be created.
        Error: 0x80070057 The parameter is incorrect.

    Our app pool is scheduled to automatically reset at 4am, so the errors stay up until we manually restart the app pool. Has anyone else encountered this error, or does anyone know of any solutions? Research has suggested it's a permissions issue, but the permissions don't change and the error happens infrequently. The site has no other permission-based problems and the app pool identity has permission where needed.

    Read the article

  • Implications and benefits of removing NT AUTHORITY\SYSTEM from sysadmin role?

    - by Cade Roux
    Disclaimer: I am not a DBA, I am a database developer. A DBA just sent a report to our data stewards and is planning to remove the NT AUTHORITY\SYSTEM account from the sysadmin role on a bunch of servers. (This probably violates some audit requirement they received.) I see an MSKB article that says not to do this. From what I can tell from reading a variety of disparate information on the web, a number of special services/operations (Volume Shadow Copy, Full-Text Indexing, MOM, Windows Update) use this account even when the SQL Server and Agent services are all running under dedicated accounts.

    Read the article

  • Export SharePoint Wiki to PDF from the Command Line

    - by Wyatt Barnett
    We use a SharePoint wiki* at the office as a knowledgebase for our IT operations. Recently we went through a disaster recovery exercise where we realized we had a key hole in our plans: how do you restore the services if your instruction manual is down because some services are offline? We did realize that the wiki angle was definitely something we wanted to keep, but also that we should explore a way to create offline backups of the wiki which could be easily read using common software, without needing the wiki itself. So, does anyone know of a good utility that can take a SharePoint wiki and dump it to PDF/Word/RTF/[INSERT HUMAN-FRIENDLY FORMAT] easily from the command line? (A rough sketch of one possible approach follows below.)

    *Yes, there are better solutions out there. But this was easy, used existing infrastructure, and generally does what we need it to do.
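    One rough approach, sketched in Python, is to drive a command-line HTML-to-PDF renderer such as wkhtmltopdf over the wiki's page URLs. The page names and URL layout below are assumptions, and authentication is left out entirely:

        import subprocess

        # hypothetical page names in a SharePoint wiki page library
        pages = ['Home', 'DisasterRecovery', 'Backups']

        for name in pages:
            # hypothetical URL layout for the wiki's pages
            url = 'http://intranet/ITWiki/Pages/%s.aspx' % name
            # wkhtmltopdf renders a URL straight to a PDF file
            subprocess.check_call(['wkhtmltopdf', url, '%s.pdf' % name])

    Scheduled from cron or Task Scheduler, something along these lines would keep an offline PDF snapshot of each page.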

    Read the article

  • Errors related to python version added to error log when I start apache2

    - by Jean-Nicolas Boulay Desjardins
    When I start Apache I am getting these errors:

        [Tue Jun 14 02:28:58 2011] [error] python_init: Python version mismatch, expected '2.6.5', found '2.6.6'.
        [Tue Jun 14 02:28:58 2011] [error] python_init: Python executable found '/usr/bin/python'.
        [Tue Jun 14 02:28:58 2011] [error] python_init: Python path being used '/usr/lib/python2.6/:/usr/lib/python2.6/plat-linux2:/usr/lib/python2.6/lib-tk:/usr/lib/python2.6/lib-old:/usr/lib/python2.6/lib-dynload'.
        [Tue Jun 14 02:28:58 2011] [notice] mod_python: Creating 8 session mutexes based on 150 max processes and 0 max threads.
        [Tue Jun 14 02:28:58 2011] [notice] mod_python: using mutex_directory /tmp
        [Tue Jun 14 02:28:58 2011] [notice] Apache/2.2.16 (Ubuntu) PHP/5.3.3-1ubuntu9.5 with Suhosin-Patch mod_python/3.3.1 Python/2.6.6 configured -- resuming normal operations

    I am using Ubuntu Server... Thanks in advance for any help.
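    For what it's worth, the mismatch warning means mod_python was compiled against Python 2.6.5 but is loading the 2.6.6 interpreter at runtime. A quick way to see what the system interpreter reports:

        import sys

        print(sys.version)     # runtime version string, e.g. 2.6.6
        print(sys.executable)  # which binary is in use, e.g. /usr/bin/python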

    Read the article

  • Using UUIDs for cheap equals() and hashCode()

    - by Tom McIntyre
    I have an immutable class, TokenList, which consists of a list of Token objects, which are also immutable:

        @Immutable
        public final class TokenList {
            private final List<Token> tokens;

            public TokenList(List<Token> tokens) {
                this.tokens = Collections.unmodifiableList(new ArrayList<Token>(tokens));
            }

            public List<Token> getTokens() {
                return tokens;
            }
        }

    I perform several operations on these TokenLists that take multiple TokenLists as inputs and return a single TokenList as the output. There can be arbitrarily many TokenLists going in, and each can have arbitrarily many Tokens. These operations are expensive, and there is a good chance that the same operation (i.e. the same inputs) will be performed multiple times, so I would like to cache the outputs. However, performance is critical, and I am worried about the expense of performing hashCode() and equals() on these objects that may contain arbitrarily many elements (as they are immutable, hashCode could be cached, but equals will still be expensive). This led me to wonder whether I could use a UUID to provide equals() and hashCode() simply and cheaply, by making the following updates to TokenList:

        @Immutable
        public final class TokenList {
            private final List<Token> tokens;
            private final UUID uuid;

            public TokenList(List<Token> tokens) {
                this.tokens = Collections.unmodifiableList(new ArrayList<Token>(tokens));
                this.uuid = UUID.randomUUID();
            }

            public List<Token> getTokens() {
                return tokens;
            }

            public UUID getUuid() {
                return uuid;
            }
        }

    And something like this to act as a cache key:

        @Immutable
        public final class TokenListCacheKey {
            private final UUID[] uuids;

            public TokenListCacheKey(TokenList... tokenLists) {
                uuids = new UUID[tokenLists.length];
                for (int i = 0; i < uuids.length; i++) {
                    uuids[i] = tokenLists[i].getUuid();
                }
            }

            @Override
            public int hashCode() {
                return Arrays.hashCode(uuids);
            }

            @Override
            public boolean equals(Object other) {
                if (other == this) return true;
                if (other instanceof TokenListCacheKey)
                    return Arrays.equals(uuids, ((TokenListCacheKey) other).uuids);
                return false;
            }
        }

    I figure that there are 2^122 distinct random (version 4) UUIDs and I will probably have at most around 1,000,000 TokenList objects active in the application at any time. Given this, and the fact that the UUIDs are used combinatorially in cache keys, it seems that the chances of this producing the wrong result are vanishingly small. Nevertheless, I feel uneasy about going ahead with it, as it just feels 'dirty'. Are there any reasons I should not use this system? Will the performance cost of the SecureRandom used by UUID.randomUUID() outweigh the gains (especially since I expect multiple threads to be doing this at the same time)? Are collisions going to be more likely than I think? Basically, is there anything wrong with doing it this way? Thanks.

    Read the article

  • Microsoft Excel 2007 constantly calculating sheets

    - by acseven
    I believe this has been happening for two weeks now: Excel 2007 (on Windows XP) is acting funny on my computer; any medium-sized sheet with some formulas in it takes a significant amount of time to recalculate. I can see this because the "Calculating: 2 processors xx%" message was almost never seen before, and now it appears on most operations, like calculating a formula (in one cell), saving, previewing, etc. If the sheet is complex (lots of formulas) I have to disable automatic calculation because Excel becomes unusable: it hangs for a really long time, measurable in minutes. Any idea what may be causing this? PS: this is a Core 2 Duo computer with 2 GB of RAM.

    Read the article

  • Redirecting wildcard emails to one email with postfix

    - by Burning the Codeigniter
    I'm creating a bounce-email system where emails can reply to messages on my site. When an email containing the previous message is sent to the user, the Reply-To field contains an address something like [email protected] (which contains the message ID at the end). If the user replies, the reply will be sent back to [email protected], which of course doesn't have its own mailbox; only [email protected] does. How would I redirect all incoming messages matching the wildcard notification-message-*@mysite.com to [email protected]? I did some research, but nothing worked reliably, including luser_relay = [email protected] and putting notification-message-* in the Postfix aliases table. The notification@ account has a Maildir, so the emails would go into it. I am using Ubuntu 11.04.
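    Whatever mechanism performs the redirect, the bounce processor behind the catch-all mailbox still has to recover the message ID from the address the user replied to. A small Python sketch of that step, using the notification-message-*@mysite.com pattern from the question:

        import re

        ADDR_RE = re.compile(r'^notification-message-(\d+)@mysite\.com$')

        def message_id(address):
            """Return the trailing ID if this is a notification reply address."""
            m = ADDR_RE.match(address)
            return int(m.group(1)) if m else None

        print(message_id('notification-message-1234@mysite.com'))  # 1234

    The (\d+) group assumes the IDs are numeric; widen it if they are not.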

    Read the article

  • python: what are efficient techniques to deal with deeply nested data in a flexible manner?

    - by AlexandreS
    My question is not about a specific code snippet but more general, so please bear with me: how should I organize the data I'm analyzing, and which tools should I use to manage it?

    I'm using Python and NumPy to analyze data. Because the Python documentation indicates that dictionaries are very optimized, and also because the data itself is very structured, I stored it in a deeply nested dictionary. Here is a skeleton: the position in the hierarchy defines the nature of the element, and each indented level shows example keys at that level:

        [individual]                         e.g. [AS091209M02], [AS091209M01], [AS090901M06], ...
          [imaging session (date string)]    e.g. [100113], [100211], [100128], [100121]
            [region imaged]                  e.g. [R16], [R17], [R03], [R15], [R05], [R04], [R07], ...
              [timestamp of file]            e.g. [1263399103], ...
                [properties of file]         e.g. [ImageSize], [FilePath], [Trials], [Depth], [Frames], [Responses], ...
                  [regions of interest]      e.g. [N01], [N04], ...
                    [format of data]         e.g. [Sequential], [Randomized]
                      [channel of acquisition]   e.g. [Ch1], [Ch2]  (this key indexes an array of values)

    The type of operations I perform is, for instance, to compute properties of the arrays (listed under Ch1, Ch2), pick out arrays to make a new collection, analyze the responses of N01 from region R16 of a given individual at different time points, and so on. This structure works well for me and is very fast, as promised. I can analyze the full data set pretty quickly (and the dictionary is far too small to fill up my computer's RAM: half a gig).

    My problem comes from the cumbersome manner in which I need to program the operations on the dictionary. I often have stretches of code that go like this:

        for mk in dic.keys():
            for rgk in dic[mk].keys():
                for nk in dic[mk][rgk].keys():
                    for ik in dic[mk][rgk][nk].keys():
                        for ek in dic[mk][rgk][nk][ik].keys():
                            # do something

    which is ugly, cumbersome, non-reusable, and brittle (it needs recoding for any variant of the dictionary). I tried using recursive functions, but apart from the simplest applications I ran into some very nasty bugs and bizarre behaviors that caused a big waste of time (it does not help that I can't manage to debug with pdb in IPython when dealing with deeply nested recursive functions). In the end, the only recursive function I use regularly is the following:

        def dicExplorer(dic, depth=-1, stp=0):
            '''Prints the hierarchy of a dictionary. If depth is not
            specified, explores the whole dictionary.'''
            if depth - stp == 0:
                return
            try:
                list_keys = dic.keys()
            except AttributeError:
                return
            stp += 1
            for key in list_keys:
                print '+%s> [\'%s\']' % (stp * '---', key)
                dicExplorer(dic[key], depth, stp)

    I know I'm doing this wrong, because my code is long, noodly and non-reusable. I need either to use better techniques to flexibly manipulate the dictionaries, or to put the data in some database format (SQLite?). My problem is that since I'm (badly) self-taught in programming, I lack the practical experience and background knowledge to appreciate the options available. I'm ready to learn new tools (SQL, object-oriented programming), whatever it takes to get the job done, but I am reluctant to invest my time and effort into something that will be a dead end for my needs. So what are your suggestions for tackling this issue, and coding my tools in a more brief, flexible and reusable manner?
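    One general-purpose pattern that replaces the stacked loops is a recursive generator that flattens the tree into (key-path, value) pairs; a minimal sketch, assuming leaves are simply "anything that is not a dict":

        def walk(node, path=()):
            """Recursively yield (key_path, leaf_value) pairs from a nested dict."""
            if isinstance(node, dict):
                for key, child in node.items():
                    for pair in walk(child, path + (key,)):
                        yield pair
            else:
                yield path, node

        # e.g. every Ch1 array for one individual, across all sessions/regions/files:
        # ch1 = [leaf for path, leaf in walk(dic['AS091209M02']) if path[-1] == 'Ch1']

    Because the traversal is written once, the "do something" part becomes an ordinary filter over the yielded paths instead of a new five-deep loop for each variant of the dictionary.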

    Read the article

  • Changing memory allocator to Jemalloc Centos 6

    - by Brian Lovett
    After reading this blog post about the impact of memory allocators like jemalloc on highly threaded applications, I wanted to test things on a larger scale on some of our cluster of servers. We run Sphinx and Apache (threaded) on 24-core machines. Installing jemalloc was simple enough; we are running CentOS 6, so yum install jemalloc jemalloc-devel did the trick. My question is: how do we change everything on the system over to using jemalloc instead of the default malloc built into CentOS? Research pointed me at this as a potential option:

        LD_PRELOAD=$LD_PRELOAD:/usr/lib64/libjemalloc.so.1

    Would this be sufficient to get everything using jemalloc?
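    Whichever preload mechanism ends up being used, it is easy to verify per process whether jemalloc was actually mapped in, since loaded shared objects show up in /proc/<pid>/maps. A small Python sketch:

        import os

        def uses_jemalloc(pid):
            """True if the process has a libjemalloc shared object mapped."""
            with open('/proc/%d/maps' % pid) as maps:
                return any('jemalloc' in line for line in maps)

        # check this very process, or substitute an Apache/Sphinx worker PID:
        print(uses_jemalloc(os.getpid()))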

    Read the article

  • sudo midnight commander

    - by mit
    I sometimes start Midnight Commander as superuser with the command sudo mc to do some operations on the current working directory as superuser. But this results in ~/.mc having the wrong permissions, which I need to fix manually. Any solution?

    Edit: I accepted an answer. I want to further add that .mc is a directory, so my solution goes like this:

        $ cd ~
        ~$ sudo chown -R mit.mit .mc
        ~$ chmod 775 .mc
        ~$ cd .mc
        ~$ chmod -R 664 .mc
        ~/.mc$ chmod 775 cedit

    It seems it is not a good idea to use sudo on mc's first start after installing it.

    Read the article

  • How to Shrink large Hyper-V VM

    - by autrevo
    Using the Disk2VHD utility I converted my bare-metal OS into a Hyper-V VHD (http://technet.microsoft.com/en-us/sysinternals/ee656415.aspx) and ended up with a huge 190 GB VHD file. Apart from performance issues, this VHD worked fine as a guest when hosted on Windows Server 2008 R2 Hyper-V. Having realized the need to keep only system files and application installations on the VHD, I deleted most of the junk data from it, and now it contains only 20-25 GB. But I am not able to shrink the VHD. Having done some research, I came to understand this is a limitation of .VHD files. Subsequently I followed these two steps using the Edit Virtual Hard Disk Wizard on a Windows 2012 box:

    - Convert from VHD to VHDX (took close to 3 hrs.)
    - Compact (another 4 hrs.)

    This did not shrink the VHDX either. Does Hyper-V not provide proper support for handling large VHDs or VHDXs in the 200 GB range?

    Read the article

  • Smartcards for storing gpg/ssh keys (Linux) - what do I need?

    - by Ninefingers
    Hi all, I'm interested in storing my SSH and GPG keys on a smartcard for added security. However, I'm a bit uncertain on a few points, which are as follows:

    - How many keys can I get on a card? I assume both SSH and GPG can store keys on the card.
    - Is there a limit to key size? I see a lot of cards saying they support 2048-bit keys; what about larger sizes?
    - Hardware: can anyone recommend a card/reader combination that works well? I've done a fair amount of research and it seems PC/SC readers can be a bit iffy; is this your experience?
    - Have I missed anything I should be asking? Are there any other hurdles?

    I'm aware FSF Europe gives away cards with membership. I'm not sure I want to join, but... are these cards any good?

    Read the article

  • best practice with memcache/php - multi memcache nodes

    - by user62835
    So I am working on a web app that has to be built for scalability. It caches frequent MySQL query results. I have pretty much everything built and ready to go, but I am unsure about best practices for deciding where to cache the data. I've talked to a few people, and one of them suggested splitting each key/value across all the memcache nodes: meaning if I store 'somekey' with the value 'this is the value', it would be split across, say, 3 memcache servers. Is that a better way, or is memcache built on a one-to-one relationship between a key and a node? For example: store the value on server A until it faults out, then go to server B and store it there. That is my current understanding from the research I have done and past experience working with memcache. Could someone please point me in the right direction here and let me know which way is best, or whether I have this completely mixed up? Thanks
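    For background, typical memcache clients do the opposite of splitting: each key lives on exactly one node, chosen by hashing the key against the server list. A schematic sketch of that mapping (server addresses hypothetical):

        import hashlib

        servers = ['10.0.0.1:11211', '10.0.0.2:11211', '10.0.0.3:11211']

        def node_for(key):
            """Map a key to exactly one node by hashing it."""
            digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
            return servers[digest % len(servers)]

        print(node_for('somekey'))  # same node every time, for a fixed server list

    Production clients usually use consistent hashing (e.g. ketama) instead of a plain modulus, so that adding or removing a node remaps only a fraction of the keys.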

    Read the article

  • .vob to h.264 MP4 Files - Worth The Effort?

    - by harper89
    When I was converting my collection to a digital format a while back, I chose .VOB because there is no quality loss. However, I have recently been informed about the h.264 compression method. Time is not an issue here; I don't mind waiting for conversions. I also understand that any sort of compression will reduce quality. As a test I converted a 4 GB .VOB to .mp4 using h.264 in HandBrake, and the quality loss was very hard to notice. From what I have understood through research:

        Space    = .mp4 (h.264)
        Quality  = .VOB
        Playback = both equally supported?

    But these concerns have yet to be answered:

    - My comparison was done on a computer monitor; would the quality loss be substantially more noticeable if I purchased a 50-inch TV in the future?
    - Is this type of file widely supported? (I don't want to run into incompatible players.)
    - What other issues could a conversion like this cause in the future?

    Read the article

  • How does fail2ban 0.9 database storage actually work?

    - by Arantir
    Fail2ban 0.9 introduces database storage to persist bans across restarts, but I can't work out how it actually behaves. There is a dbpurgeage parameter which controls the lifetime of old bans and defaults to 24 hours. As far as I can tell from reading the code, fail2ban saves a ban to the database with timeofban equal to the moment the ban is saved. Then, every dbpurgeage period, it removes all bans with timeofban < MyTime.time() - self._purgeAge; in other words, it removes all bans that were recorded more than 24 hours ago. But what if an IP was banned for a month? Does all this mean that, with dbpurgeage = 86400, after a restart I will have lost all bans longer than 24 hours? I just want all my permanent bans to be preserved in any case.
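    A schematic of the purge behaviour as described above, in Python (this is an illustration of the logic only, not fail2ban's actual schema or code):

        import sqlite3
        import time

        PURGE_AGE = 86400  # dbpurgeage

        def purge(db_path):
            """Drop rows recorded more than PURGE_AGE seconds ago,
            regardless of how long the ban itself was meant to last."""
            db = sqlite3.connect(db_path)
            db.execute("DELETE FROM bans WHERE timeofban < ?",
                       (time.time() - PURGE_AGE,))
            db.commit()

    If the purge really is keyed only on timeofban, a ban's intended duration plays no part in it, which is exactly the concern raised here.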

    Read the article

  • Can Resource Governor for SQL Server 2008 be scripted?

    - by blueberryfields
    I'm looking for a method to adjust Resource Governor settings automatically, in real time. Here's an example: imagine that I have 10 applications, each hitting a different database on the same database machine. For normal operations they do not hit the database very hard, so I might want each one to have 10% CPU power reserved. Occasionally, though, one or two of them might spike and run an operation which could really use the extra power to run faster. I'd like to be able to adjust to compensate (say, reducing the non-spiking apps to 3% and splitting the difference between the spiking apps). This is a poor man's method of trying to dynamically adjust resource allocation and priorities. Scripts (or something script-like) are preferred, since the requirement is for meta-level adjustments to be possible in real time.
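    Resource Governor is exposed through T-SQL (ALTER RESOURCE POOL and ALTER RESOURCE GOVERNOR RECONFIGURE), so it can be driven from any scripting language with a SQL Server connection. A rough Python sketch using pyodbc (pool name, DSN and percentages are hypothetical):

        import pyodbc

        def set_pool_cpu(cursor, pool, percent):
            # sketch only: validate/sanitise inputs before building DDL strings
            cursor.execute("ALTER RESOURCE POOL [%s] WITH (MAX_CPU_PERCENT = %d)"
                           % (pool, percent))
            cursor.execute("ALTER RESOURCE GOVERNOR RECONFIGURE")

        conn = pyodbc.connect('DSN=mydb;Trusted_Connection=yes', autocommit=True)
        set_pool_cpu(conn.cursor(), 'SpikingAppPool', 50)

    A monitoring loop that watches per-pool load and calls something like this would give the dynamic reallocation described above.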

    Read the article

  • Lazy umount or Unmounting a busy disk in Linux

    - by deed02392
    I have read that it is possible to umount a disk that is otherwise busy by using the 'lazy' option. The manpage has this to say about it:

        umount - unmount file systems
        -l     Lazy unmount. Detach the filesystem from the filesystem hierarchy
               now, and cleanup all references to the filesystem as soon as it is
               not busy anymore. This option allows a "busy" filesystem to be
               unmounted. (Requires kernel 2.4.11 or later.)

    But what would be the point of that? I considered why we unmount partitions at all:

    - To remove the hardware
    - To perform operations on the filesystem that would be unsafe to do while mounted

    In either of these cases, all a 'lazy' unmount achieves, IMHO, is to make it more difficult to determine whether the disk really is unmounted so that you can actually proceed with these actions. The only application for umount -l seems to be for inexperienced users to 'feel' like they've achieved something they haven't. Why would you use a lazy unmount?

    Read the article

  • Considerations for a business looking to transition from PSTN to IP Telephony

    - by Bryce Thomas
    Full disclosure - This is related to a homework assignment question. I am not asking you to do my work for me, I am merely looking for some pointers and considerations to direct me in my further research. I have an assignment I'm working on where I've been given a scenario where a business wants to look into transitioning to using "Internet Telephone" as opposed to a traditional PSTN/PBX system and I need to write a report on it. I'm after some high level pointers from people, especially anyone that has been involved in a real life transition of this nature, on what some of the most important considerations are. These can be financial considerations, initial setup considerations, ongoing administrative considerations, quality of service considerations or anything else that is pertinent to performing such a transition.

    Read the article

  • How do I know if my SSD Drive supports TRIM?

    - by Omar Shahine
    Windows 7 has support for the TRIM command, which should help ensure that the performance of an SSD remains good throughout its life. How can you tell if a given SSD supports TRIM? See here for a description of TRIM. Also, the following is from a Microsoft presentation:

        Microsoft's implementation of the "trim" feature is supported in Windows 7.
        NTFS will send down a delete notification to a device supporting "trim" for:
        - File system operations: Format, Delete, Truncate, Compression
        - OS internal processes: e.g., Snapshot, Volume Manager
        Three optimization opportunities for the device:
        - Enhancing device wear leveling by eliminating the merge operation for all deleted data blocks
        - Making early garbage collection possible for fast write
        - Keeping the device's unused storage area as large as possible; more room for device wear leveling
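    On the OS side, Windows 7 exposes a check for whether delete notifications (TRIM) are enabled. A small Python wrapper around the documented fsutil query (the output parsing is a simple heuristic):

        import subprocess

        def os_trim_enabled():
            """DisableDeleteNotify = 0 means Windows will issue TRIM."""
            out = subprocess.check_output(
                ['fsutil', 'behavior', 'query', 'DisableDeleteNotify'])
            return b'= 0' in out

        print(os_trim_enabled())

    Note this only says whether the OS sends TRIM; whether the drive itself honours it is a firmware question, usually answered by the vendor's own tool.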

    Read the article

  • Multi-processor workstation as a workstation/server

    - by posdef
    I work at a research institute, and a number of the programs we use are computationally intensive (I actually wrote one of them). Right now we have one computer that is dedicated to one of these programs (with local accounts only, i.e. users physically sitting in front of that PC), and the other programs are run on individual workstations assigned to people. I have been looking at common brands such as Dell and HP for some sort of small/medium-scale server which could be used as a workhorse by sending tasks to it remotely. There appears to be nothing in between workstations with one 6-core processor and a bunch of extras (like fancy graphics, etc.) and rack-mount servers with ridiculous amounts of RAM and HDD expansion capability but still a relatively small number of processors/cores. I wonder if what I am looking for is such a small niche product? Are there other solutions I might not be aware of? Does anyone know of a multi-processor, multi-core workstation/server that is still within a reasonable price range?

    Read the article

  • Lost HDD partition to clean command, testdisk unresponsive

    - by Sujay Anjankar
    I accidentally cleaned my external HDD with the clean command in DiskPart, and had a full-sized heart attack right after. I did some research and have already tried a number of tools, to name a few:

    - TestDisk
    - Recuva
    - Partition Find and Mount
    - Eassos Recovery

    I have even tried some other dumb-sounding tools, but none of them could find the lost partition; they just show "no partition found". TestDisk reports "Partition sector doesn't have the endmark 0xAA55". The file-recovery programs list the files, but none of them seem able to restore the lost partition. I need to recover the disk as it was. Any help would be much appreciated!

    Read the article

  • What are the consequences of giving an AD domain differing NetBIOS and DNS names?

    - by Newt
    In the past, when creating AD domains, I've used the common convention of a sub-domain of the company's publicly registered domain name, e.g. "corp.mycompany.com" or "int.mycompany.com". I've always accepted the default NetBIOS name when running DCPromo, for fear that a NetBIOS name that differs from the sub-domain might cause complications. I've recently been doing a bit of research on the consequences of providing an alternate NetBIOS name. The main reasons behind this are:

    - The NetBIOS name isn't particularly descriptive of, or unique to, the company
    - Apparently, generic NetBIOS names such as "CORP" or "INT" can cause issues when merging IT systems (although I've not had experience with this myself)
    - Providing something "before the slash" that means more to users (less important)

    In looking at the possible downsides, the only one I can come up with is the disjoint-namespace issue when configuring Exchange. Can anybody with more experience than I have elaborate on my findings at all? Many thanks

    Read the article

  • Why do browsers have so many possible exploits?

    - by Beau Martínez
    When browsing I am occasionally given warnings about pages that host malware "that could damage my computer". I am seriously perplexed as to why, in 2010, browsers still have exploitable flaws and can be cracked. My question is simply: why? I'm assuming it's because of the quick development that occurred during the browser wars, which was insufficiently tested, but I'm unsure. Surely WebKit would have patched all the issues in KHTML, Gecko sorted out the flaws in Netscape's engine, and the IE coders sorted through their codebase to eliminate possible flaws? (Somewhat related: http://superuser.com/questions/117770/which-browser-is-the-most-secure-research-and-practically-based.)

    Read the article

  • Windows Server 2012 and Ubuntu 12.04.1 under Hyper-V

    - by Technicolour
    I've set up an instance of Ubuntu 12.04.1 LTS under Hyper-V 2012. However, it seems to be nondeterministic whether or not it completes the boot process. I get a kernel panic, "IO-APIC + timer doesn't work!", which my research suggests is caused by not having the integration services correctly installed. It was my understanding that the integration services are all now baked into the kernel? It should then be fine to update the OS (including any kernel updates, as I'm guessing that's what has happened). Being able to rely on this booting successfully would be great, as I intend to use SSH for crisis situations.

    Read the article
