Search Results

Search found 13262 results on 531 pages for 'tools utilities'.


  • System.Windows.Forms.DataVisualization Namespace Fine in One Class but Not in Another

    - by jxpx777
    Hi. I'm getting this error:

        The type or namespace name 'DataVisualization' does not exist in the namespace 'System.Windows.Forms' (are you missing an assembly reference?)

    Here is the using section of the class:

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Windows.Forms.DataVisualization.Charting;
        using System.Windows.Forms.DataVisualization.Charting.Borders3D;
        using System.Windows.Forms.DataVisualization.Charting.ChartTypes;
        using System.Windows.Forms.DataVisualization.Charting.Data;
        using System.Windows.Forms.DataVisualization.Charting.Formulas;
        using System.Windows.Forms.DataVisualization.Charting.Utilities;

        namespace myNamespace
        {
            public class myClass
            {
                // Usual class stuff
            }
        }

    The thing is that I am using the same DataVisualization using directives in another class. The only difference I can find is that the classes giving this missing-namespace error are Solution Items rather than items belonging to a specific project; the projects reference them by link. Does anyone have thoughts on what the problem is? I've installed the chart component, .NET 3.5 SP1, and the Chart Add-in for Visual Studio 2008.

    UPDATE: I moved the items from Solution Items to be regular members of my project and I'm still seeing the same behavior.

    UPDATE 2: Removing the items from the Solution Items and placing them under my project worked. Another project was still referencing the files, which is why I didn't think it worked previously. I'm still curious, though, why I couldn't use the namespace when the classes were Solution Items, but moving them underneath a project (with no modifications, mind you) instantly made them recognizable. :\
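    A likely explanation, offered as a hedged guess rather than a confirmed answer: a file added as a link is compiled separately inside each project that links it, so each of those projects (not the solution) needs its own reference to the charting assembly; Solution Items themselves are never compiled. In each referencing project's .csproj that would be a line along these lines (strong-name details omitted):

        <Reference Include="System.Windows.Forms.DataVisualization" />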

    Read the article

  • Codeplex/Sourceforge for internal use

    - by Josh
    I'm looking for a free/open-source collaborative project manager that can be deployed internally in my workplace and would act similar to Codeplex or Sourceforge. Does anyone know of something like this, and if so, do you have experience with it?

    Requirements:

      • Open source or free
      • Locally deployable
      • Has the same types of features found in Sourceforge/Codeplex
      • Issue/feature tracking
      • Community interaction (i.e. voting, roles, etc.)
      • SCM integration (optional)
      • .NET/Windows friendly (optional)

    Every business ends up having internal utilities and domain-specific apps that developers create to make life easier. Given the input of the internal developer community, these have the potential to become much better (can you say GMail...), and I would simply like to foster such an environment internally by providing an easy place for that interaction to take place.

    UPDATE: So I like what I am seeing in both Trac and GForge, but both are heavily geared towards UNIX/Subversion environments. I should have specified this, but we are an MS shop from top to bottom. How practical do you think it is going to be to try and use these in an MS .NET environment? Would that be like trying to shove a square peg through a round hole?

    Read the article

  • Directly call distutils' or setuptools' setup() function with command name/options, without parsing

    - by Ryan B. Lynch
    I'd like to call the setup() function from Python's distutils or setuptools in a slightly unconventional way, but I'm not sure whether distutils is meant for this kind of usage. As an example, let's say I currently have a 'setup.py' file which looks like this (lifted verbatim from the distutils docs -- the setuptools usage is almost identical):

        from distutils.core import setup
        setup(name='Distutils',
              version='1.0',
              description='Python Distribution Utilities',
              author='Greg Ward',
              author_email='[email protected]',
              url='http://www.python.org/sigs/distutils-sig/',
              packages=['distutils', 'distutils.command'],
             )

    Normally, to build just the .spec file for an RPM of this module, I could run python setup.py bdist_rpm --spec-only, which parses the command line and calls the 'bdist_rpm' code to handle the RPM-specific stuff. The .spec file ends up in './dist'.

    How can I change my setup() invocation so that it runs the 'bdist_rpm' command with the '--spec-only' option, WITHOUT parsing command-line parameters? Can I pass the command name and options as parameters to setup()? Or can I manually construct a command line and pass that as a parameter instead?

    NOTE: I already know that I could call the script in a separate process, with an actual command line, using os.system() or the subprocess module or something similar. I'm trying to avoid using any kind of external command invocation. I'm looking specifically for a solution that runs setup() in the current interpreter.

    For background, I'm converting some release-management shell scripts into a single Python program. One of the tasks is running 'setup.py' to generate a .spec file for further pre-release testing. Running 'setup.py' as an external command, with its own command-line options, seems like an awkward method, and it complicates the rest of the program. I feel like there may be a more Pythonic way.
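    For what it's worth, distutils appears to support this directly: setup() accepts a script_args keyword that stands in for sys.argv[1:], and distutils.core.run_setup() drives an existing setup.py in-process. A minimal sketch (untested against bdist_rpm specifically):

        from distutils.core import run_setup

        # Run an existing setup.py in the current interpreter, as if invoked
        # with "python setup.py bdist_rpm --spec-only" -- no shell, no subprocess.
        dist = run_setup('setup.py', script_args=['bdist_rpm', '--spec-only'])

        # Alternatively, a setup() call can take the same list directly:
        #   setup(name='Distutils', ..., script_args=['bdist_rpm', '--spec-only'])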

    Read the article

  • Single developer, project organization

    - by poke
    I am looking for a good (and free) way to organize some of my personal projects. I say "organize" because I'm not sure the standard project-management software solutions are exactly what I am looking for, and especially not what I, as a single developer, need.

    In general, I just want to keep the progress of my projects organized in some way. I would like to be able to keep track of milestones and split them into multiple smaller tasks, so I can keep track of my progress. So some task/issue-based system would probably be good, especially as I also want to keep track of issues/bugs with specific versions (although I alone will create those issues).

    I am and will be the only developer on those projects, so it doesn't matter whether the software is offline or online, and I don't need any collaboration features (like commenting on things, or assigning tasks to other developers, etc.). But if software that fits my needs has those things in addition, I don't really care; after all, it's easy enough not to use available features.

    Many online solutions also offer integrated code hosting. I am using git internally, but I don't plan to push any of the code, so such a feature is not needed either. In the case of online solutions, however, I would like the projects to be closed to the public (some of the online utilities only offer open-source projects for free and require payment for private projects).

    I have looked at some project-management solutions already, and I also read some similar questions here on SO. But given that I'm a single developer, my focus is probably a bit different from when others ask for huge distributed software that supports many developers and different collaboration features. Some standard answers such as Trac (which also only supports one project), Redmine and FogBUGZ look interesting, but are a bit off my interest (although you may change my mind on that :P). Currently, I'm looking at Indefero, which doesn't look too bad. But what do you think?

    Read the article

  • How to get the current datetime on the Windows command line, in a suitable format for use in a filename?

    - by Rory
    What's a Windows command-line statement (or statements) I can use to get the current datetime in a format that I can put into a filename? I want to have a .bat file that zips up a directory into an archive with the current date and time as part of the name, e.g. "Code_2008-10-14_2257.zip". Is there any easy way I can do this, independent of the regional settings of the machine? I don't really mind about the date format -- ideally it'd be yyyy-mm-dd, but anything simple is fine.

    So far I've got this, which on my machine gives me "Tue_10_14_2008_230050_91":

        rem Get the datetime in a format that can go in a filename.
        set _my_datetime=%date%_%time%
        set _my_datetime=%_my_datetime: =_%
        set _my_datetime=%_my_datetime::=%
        set _my_datetime=%_my_datetime:/=_%
        set _my_datetime=%_my_datetime:.=_%

        rem Now use the timestamp in a new zip file name.
        "d:\Program Files\7-Zip\7z.exe" a -r Code_%_my_datetime%.zip Code

    I can live with this, but it seems a bit clunky. Ideally it'd be briefer and have the format mentioned earlier. I'm using Windows Server 2003 and Win XP Pro. I don't want to install additional utilities to achieve this (although I realise there are some that will do nice date formatting).
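    A locale-independent variant worth sketching (WMIC ships with XP Pro and Server 2003, so nothing extra to install; treat this as untested on other Windows versions): WMI reports LocalDateTime in a fixed yyyymmddhhmmss format, which batch substring syntax can slice into any shape.

        @echo off
        rem Ask WMI for the local datetime; the format is fixed regardless of
        rem regional settings, e.g. 20081014225700.123456+060
        for /f "skip=1 delims=" %%x in ('wmic os get localdatetime') do if not defined ldt set ldt=%%x

        rem Slice out yyyy-mm-dd_hhmm
        set stamp=%ldt:~0,4%-%ldt:~4,2%-%ldt:~6,2%_%ldt:~8,4%

        "d:\Program Files\7-Zip\7z.exe" a -r Code_%stamp%.zip Code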

    Read the article

  • OO vs Simplicity when it comes to user interaction

    - by Oetzi
    Firstly, sorry if this question is rather vague, but it's something I'd really like an answer to. As a project over summer, while I have some downtime from Uni, I am going to build a Monopoly game. This question is more about the general idea of the problem, however, than the specific task I'm trying to carry out.

    I decided to build this with a bottom-up approach, creating just movement around a forty-space board and then moving on to interaction with spaces. I realised that I was quite unsure of the best way of proceeding, and I am torn between two design ideas:

      1. Giving every space its own object, all subclasses of a Space object, so the interaction can be defined by the space object itself. I could do this by implementing different land() methods for each type of space.
      2. Only giving the Properties and Utilities (as each property has unique features) objects, and creating methods for dealing with the buying/renting etc. in the main class of the program (or Board, as I'm calling it). Spaces like Go and Super Tax could be implemented by a small set of conditionals checking to see if the player is on a special space.

    Option 1 is obviously the OO (and I feel the correct) way of doing things, but I'd like to only have to handle user interaction from the program's main class. In other words, I don't want the space objects to be interacting with the player. Why? Errr. A lot of the coding I've done thus far has had this simplicity, but I'm not sure if this is a pipe dream or not for larger projects. Should I really be handling user interaction in an entirely separate class?

    As you can see, I am quite confused about this situation. Is there some way round this? And does anyone have any advice on practical OO design that could help in general?
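    The two goals are compatible: each space can decide what happens without ever talking to the player if land() returns a description of the outcome and the main class performs any interaction. A rough Python sketch of that idea (all names are illustrative, not from the question):

        class Player:
            def __init__(self, name):
                self.name, self.money = name, 1500

        class Space:
            def land(self, player):
                """Return a request for the board to act on; no user interaction here."""
                return None                          # default: nothing happens

        class Go(Space):
            def land(self, player):
                return ("credit", 200)               # collect salary

        class Property(Space):
            def __init__(self, name, price):
                self.name, self.price, self.owner = name, price, None

            def land(self, player):
                if self.owner is None:
                    return ("offer_purchase", self)  # board must ask the player
                return ("pay_rent", self)

        class Board:
            def handle_landing(self, player, space):
                action = space.land(player)          # polymorphic dispatch (option 1)
                if action is None:
                    return
                if action[0] == "credit":
                    player.money += action[1]
                elif action[0] == "offer_purchase":
                    # the ONLY place that talks to the user
                    prop = action[1]
                    if input("Buy %s for %d? [y/n] " % (prop.name, prop.price)) == "y":
                        prop.owner = player
                        player.money -= prop.price

    The spaces stay polymorphic, but every prompt and every piece of console I/O lives in Board, which is the simplicity the question is after.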

    Read the article

  • C++: parsing with a simple regular expression, or should I use sscanf?

    - by Helltone
    I need to parse a string like func1(arg1, arg2); func2(arg3, arg4);. It's not a very complex parsing problem, so I would prefer to avoid resorting to flex/bison or similar utilities. My first approach was to try POSIX C regcomp/regexec or Boost's implementation of C++ std::regex. I wrote the following regular expression, which does not work (I'll explain why further on).

        "^"
        "[ ;\t\n]*"
        "("                       // (1) identifier
        "[a-zA-Z_][a-zA-Z0-9_]*"
        ")"
        "[ \t\n]*"
        "("                       // (2) non-marking
        "\["
        "("                       // (3) non-marking
        "[ \t]*"
        "("                       // (4..n-1) argument
        "[a-zA-Z0-9_]+"
        ")"
        "[ \t\n]*"
        ","
        ")*"
        "[ \t\n]*"
        "("                       // (n) last argument
        "[a-zA-Z0-9_]+"
        ")"
        "]"
        ")?"
        "[ \t\n]*"
        ";"

    Note that group 1 captures the identifier, and groups 4..n-1 are intended to capture arguments, except the last, which is captured by group n. When I apply this regex to, say, func(arg1, arg2, arg3), the result I get is an array {func, arg2, arg3}. This is wrong because arg1 is not in it!

    The problem is that in the standard regex libraries, submarkings only capture the last match. In other words, if you have for instance the regex "((a*|b*))*" applied to "babb", the result of the inner match will be bb, and all previous captures will have been forgotten.

    Another thing that annoys me here is that in case of error there is no way to know which character was not recognized, as these functions provide very little information about the state of the parser when the input is rejected. So I don't know if I'm missing something here... In this case, should I use sscanf or similar instead? Note that I prefer to use the C/C++ standard libraries (and maybe Boost).
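    The usual way around the last-match-only limitation is to stop capturing a variable number of groups in one match and instead iterate, one match per item. A hedged sketch using C++11's std::sregex_iterator (the same idea works with Boost.Regex's regex_iterator); it assumes no nested parentheses in the argument list:

        #include <iostream>
        #include <regex>
        #include <string>

        int main() {
            std::string input = "func1(arg1, arg2); func2(arg3, arg4);";

            // One match per call: group 1 = identifier, group 2 = raw argument list.
            std::regex call(R"(([A-Za-z_]\w*)\s*\(([^)]*)\)\s*;)");
            // One match per argument inside the captured list.
            std::regex arg(R"([A-Za-z0-9_]+)");

            auto end = std::sregex_iterator();
            for (auto it = std::sregex_iterator(input.begin(), input.end(), call); it != end; ++it) {
                std::cout << "function: " << (*it)[1] << '\n';
                std::string args = (*it)[2];
                for (auto a = std::sregex_iterator(args.begin(), args.end(), arg); a != end; ++a)
                    std::cout << "  arg: " << a->str() << '\n';
            }
        }

    Because each call is matched separately, no capture gets overwritten, and anything left unmatched between iterations is a natural place to report a parse error.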

    Read the article

  • How to automate the testing of a text-based menu

    - by Reagan Penner
    Hi there. I have a text-based menu running on a remote Linux host. I am using expect to ssh into this host and would like to figure out how to interact with the menus. Interaction involves arrowing up and down and using the Enter and back-arrow keys. For example:

        Disconnect
        Data Collection >
        Utilities >
        Save Changes

    When you enter the system, Disconnect is highlighted, so by simply pressing Enter twice you can disconnect from the system (the second Enter confirms the disconnect). The following code will ssh into my system and bring up the menu. If I remove the expect eof and try to send "\r", thinking that this would select the Disconnect menu option, I get the following error: "write() failed to write anything - will sleep(1) and retry..."

        #!/usr/bin/expect
        set env(TERM) vt100
        set password abc123
        set ipaddr 162.116.11.100
        set timeout -1
        match_max -d 100000
        spawn ssh root@$ipaddr
        exp_internal 1
        expect "*password:*"
        send -- "$password\r"
        expect "Last login: *\r"
        expect eof

    I have looked at the virterm and term_expect examples but cannot figure out how to tweak them to work for me. If someone can point me in the right direction I would greatly appreciate it. What I need to know is: can I interact with a text-based menu system, and what is the correct method for doing this? Examples, if any exist, would be great.

    thanks,
    -reagan
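    For what it's worth, a sketch of the usual expect pattern (the literal menu text below is guessed from the description, so the expect patterns are assumptions): wait for something the menu reliably prints before each send, and only expect eof once the session is genuinely over. Arrow keys are VT100 escape sequences.

        # ... after the login exchange above, instead of "expect eof":
        expect "Save Changes"    ;# assumed: text the drawn menu always contains
        send -- "\r"             ;# Enter selects the highlighted item (Disconnect)
        expect "confirm"         ;# assumed confirmation prompt text
        send -- "\r"             ;# second Enter confirms
        expect eof               ;# now the remote side actually closes

        # Moving the highlight instead of selecting:
        #   send -- "\033\[B"    ;# down arrow (ESC [ B)
        #   send -- "\033\[A"    ;# up arrow   (ESC [ A)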

    Read the article

  • Looking to reimplement build toolchain from bash/grep/sed/awk/(auto)make/configure to something more

    - by wash
    I currently maintain a few boxes that house a loosely related cornucopia of coding projects, databases and repositories (ranging from a homebrew *nix distro to my class notes), maintained by myself and a few equally pasty-skinned nerdy friends (all of said cornucopia is stored in SVN). The vast majority of our code is in C/C++/assembly (a few utilities are in python/perl/php; we're not big Java fans), compiled with gcc. Our build toolchain typically consists of a hodgepodge of make, bash, grep, sed and awk.

    The recent discovery of a Makefile nearly as long as the program it builds (as well as everyone's general anxiety over my cryptic sed and awking) has motivated me to seek a less painful build system. Currently, the strongest candidates I've come across are Boost Build/Bjam as a replacement for GNU make and Python as a replacement for our build-related bash scripts.

    Are there any other C/C++/asm build systems out there worth looking into? I've browsed through a number of make alternatives, but I haven't found any that are developed by names I know aside from Boost's. (I should note that the ability to easily extract information from svn command-line tools such as svnversion is important, as well as enough flexibility to configure builds of asm projects as easily as C/C++ projects.)

    Read the article

  • What exactly is the GNU tar ././@LongLink "trick"?

    - by Cheeso
    I read that a tar entry type of 'L' (76) is used by GNU tar and GNU-compliant tar utilities to indicate that the next entry in the archive has a "long" name. In this case the header block with the entry type of 'L' usually encodes the name ././@LongLink.

    My question is: where is the format of the next block described?

    The format of a tar archive is very simple: it is just a series of 512-byte blocks. In the normal case, each file in a tar archive is represented as a series of blocks. The first block is a header block, containing the file name, entry type, modified time, and other metadata. Then the raw file data follows, using as many 512-byte blocks as required. Then comes the next entry.

    If the filename is longer than will fit in the space allocated in the header block, GNU tar apparently uses what's known as "the ././@LongLink trick". I can't find a precise description for it.

    When the entry type is 'L', how do I know how long the "long" filename is? Is the long name limited to 512 bytes, in other words, whatever fits in one block? Most importantly: where is this documented?
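    As far as can be told from GNU tar's behavior and the comments in its tar.h (the closest thing to documentation seems to be the GNU tar manual's appendix on the archive format, so treat this as a hedged description): the 'L' entry is an ordinary header whose octal size field gives the length of the long name, and the name itself is stored as that entry's data, NUL-terminated, in as many 512-byte blocks as needed; the header that follows describes the real file, with its truncated name field superseded. So the name is not limited to one block. A sketch in Python:

        def read_gnu_long_name(f):
            """Given a file object positioned at a GNU 'L' header block,
            return the long name intended for the next entry."""
            header = f.read(512)
            assert header[156:157] == b"L"                   # typeflag at offset 156
            size = int(header[124:136].rstrip(b" \x00"), 8)  # size field, octal
            nblocks = (size + 511) // 512                    # data is block-padded
            name = f.read(nblocks * 512)[:size]
            return name.rstrip(b"\x00").decode("utf-8")

        # Python's tarfile module understands this natively, which makes a handy
        # cross-check: tarfile.open("a.tar") yields members with full long names.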

    Read the article

  • Where should an application's default folder live?

    - by HotOil
    Hi: I'm creating a little app that configures a connected device and then saves the config information in a file. The filename cannot be chosen by the user, but its location can be. Where is the best place for the app's default save-to folder?

    I have seen examples out there where it is the "MyDocuments" location (e.g. Visual Studio does this). I have seen a folder created right at the top of the C:\ drive; I find that to be a little obnoxious, personally. It could be in Program Files\[Manufacturer] or Program Files\[Product Name], or wherever the app was installed. I have used this location in the past; I dislike it because Windows Explorer does not allow a user to browse there very easily ('browsability').

    Going with this last notion that 'browsability' is a factor, I suppose MyDocuments is the best choice. Is this the most common, most widely accepted practice? I think historically we have chosen the install folder because that co-locates the data with the device-management utilities, but I would really like to get away from that. I don't want the user to have to go pawing through system files to find his/her data, especially if that person is not too Windows-savvy.

    Also, I am using the .NET WinForms FolderBrowserDialog, and the "Environment.SpecialFolders" enum isn't helpful in setting up the dialog to point into the Program Files folder.

    Thanks for your input! Suz.
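    On the specific FolderBrowserDialog complaint: RootFolder only accepts Environment.SpecialFolder values, but SelectedPath takes an arbitrary path, so the dialog can be pointed anywhere. A small hedged sketch:

        using System;
        using System.Windows.Forms;

        class SaveLocationDemo
        {
            [STAThread]
            static void Main()
            {
                // SpecialFolder.ProgramFiles resolves the localized Program Files path.
                FolderBrowserDialog dialog = new FolderBrowserDialog();
                dialog.SelectedPath = Environment.GetFolderPath(
                    Environment.SpecialFolder.ProgramFiles);
                // ...or default to the user's documents folder instead:
                // dialog.SelectedPath = Environment.GetFolderPath(
                //     Environment.SpecialFolder.MyDocuments);
                if (dialog.ShowDialog() == DialogResult.OK)
                {
                    string saveFolder = dialog.SelectedPath;
                }
            }
        }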

    Read the article

  • jQuery autocomplete problem

    - by heffaklump
    I'm using jQuery's Autocomplete plugin, but it doesn't autocomplete upon entering anything. Any ideas why it doesn't work? The basic example works, but not mine.

        var ppl = {"ppl":[{"name":"peterpeter", "work":"student"},
                          {"name":"piotr", "work":"student"}]};

        var options = {
            matchContains: true, // So we can search inside string too
            minChars: 2,         // this sets autocomplete to begin from X characters
            dataType: 'json',
            parse: function(data) {
                var parsed = [];
                data = data.ppl;
                for (var i = 0; i < data.length; i++) {
                    parsed[parsed.length] = {
                        data: data[i],       // the entire JSON entry
                        value: data[i].name, // the default display value
                        result: data[i].name // to populate the input element
                    };
                }
                return parsed;
            },
            // To format the data returned by the autocompleter for display
            formatItem: function(item) {
                return item.name;
            }
        };

        $('#inputplace').autocomplete(ppl, options);

    Ok, updated:

        <input type="text" id="inputplace" />

    So, when entering for example "peter" in the input field, no autocomplete suggestions appear. It should give "peterpeter", but nothing happens. And one more thing: using this example works perfectly:

        var data = "Core Selectors Attributes Traversing Manipulation CSS Events Effects Ajax Utilities".split(" ");
        $("#inputplace").autocomplete(data);

    Read the article

  • Need a better way to execute console commands from python and log the results

    - by Wim Coenen
    I have a python script which needs to execute several command-line utilities. The stdout output is sometimes used for further processing. In all cases, I want to log the results and raise an exception if an error is detected. I use the following function to achieve this:

        def execute(cmd, logsink):
            logsink.log("executing: %s\n" % cmd)
            popen_obj = subprocess.Popen(
                cmd, shell=True,
                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            (stdout, stderr) = popen_obj.communicate()
            returncode = popen_obj.returncode
            if (returncode <> 0):
                logsink.log(" RETURN CODE: %s\n" % str(returncode))
            if (len(stdout.strip()) > 0):
                logsink.log(" STDOUT:\n%s\n" % stdout)
            if (len(stderr.strip()) > 0):
                logsink.log(" STDERR:\n%s\n" % stderr)
            if (returncode <> 0):
                raise Exception, "execute failed with error output:\n%s" % stderr
            return stdout

    "logsink" can be any python object with a log method. I typically use this to forward the logging data to a specific file, or echo it to the console, or both, or something else...

    This works pretty well, except for three problems where I need more fine-grained control than the communicate() method provides:

      1. stdout and stderr output can be interleaved on the console, but the above function logs them separately. This can complicate the interpretation of the log. How do I log stdout and stderr lines interleaved, in the same order as they were output?
      2. The above function will only log the command output once the command has completed. This complicates diagnosis of issues when commands get stuck in an infinite loop or take a very long time for some other reason. How do I get the log in real time, while the command is still executing?
      3. If the logs are large, it can get hard to interpret which command generated which output. Is there a way to prefix each line with something (e.g. the first word of the cmd string followed by ':')?
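    All three points can be addressed by merging stderr into stdout (the OS then preserves the interleaving) and reading the pipe line by line instead of calling communicate(). A sketch in the question's Python 2 style; the prefixing scheme is just one assumption among many possible:

        import subprocess

        def execute(cmd, logsink):
            prefix = cmd.split()[0] + ":"
            logsink.log("executing: %s\n" % cmd)
            popen_obj = subprocess.Popen(
                cmd, shell=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT)     # merge: one stream, original order
            captured = []
            for line in iter(popen_obj.stdout.readline, ''):
                logsink.log("%s %s" % (prefix, line))   # logged as it arrives
                captured.append(line)
            popen_obj.stdout.close()
            returncode = popen_obj.wait()
            if returncode != 0:
                raise Exception("execute failed (%d):\n%s"
                                % (returncode, "".join(captured)))
            return "".join(captured)

    The trade-off: stdout and stderr can no longer be told apart, and line buffering in the child can still delay output, since some programs buffer differently when not attached to a terminal.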

    Read the article

  • How to reference SMF libraries when deploying on phone 7 (Release)

    - by aHaH
    Initially, in Visual Studio, I clicked Debug instead of Release to deploy my app on the Windows Phone 7 device: no errors, works perfectly fine! Deploying the Release build instead, I received a multitude of errors saying that some of the libraries don't seem to exist on the mobile phone. For example, extracts from the list of errors include:

        Warning 10: The referenced component 'Microsoft.SilverlightMediaFramework.Utilities' could not be found.

        Warning 3: Could not resolve this reference. Could not locate the assembly "Microsoft.SilverlightMediaFramework.Plugins". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. SLARToolKitWinPhoneSample

    In addition, I'm also using SLARToolKit to power the AR capabilities. Deploying (Release) on the mobile prompts the following error too:

        Error 11: The type or namespace name 'SLARToolKit' could not be found (are you missing a using directive or an assembly reference?)

    What should I do? Will updating the mobile solve this? Do I have to manually install? Or? Thanks

    Read the article

  • Best way to associate data files with particular tests in RSpec / Ruby

    - by Bill T
    For my RSpec tests I would like to automatically associate data files with each test. To clarify: if my tests each require an XML file as input data, and then some XPath statements to validate the responses they get back, I would like to externalize the XML and XPath as files and have the testing framework easily associate them with the particular test being run, by using the unique ID of the test as the file(s) name.

    I tried to get this behavior, but the solution isn't very clean. I wrote a helper method that takes the value of "description" and combines it with __FILE__ to create a unique identifier, which is set into a global variable that other utilities can access. The unique identifier is used to associate the data files I need. I have to call this helper method as the first line of every test, which is ugly. If I have an RSpec example that looks like this:

        describe "Basic functions of this server I'm testing" do
          it "should give me back a response" do
            # Sets a global var to: "my_tests_spec.rb_should_give_me_back_a_response"
            TestHelper::who_am_i __FILE__, description
            ...
          end
        end

    Is there some better/cleaner/slicker way I can get a unique ID for each test that I could use to associate data files with? Perhaps something built into RSpec I'm unaware of? Thank you, -Bill

    Read the article

  • Capturing output of find . -print0 into a bash array

    - by Idris
    Using find . -print0 seems to be the only safe way of obtaining a list of files in bash, due to the possibility of filenames containing spaces, newlines, quotation marks, etc. However, I'm having a hard time actually making find's output useful within bash or with other command-line utilities. The only way I have managed to make use of the output is by piping it to perl and changing perl's IFS to null:

        find . -print0 | perl -e '$/="\0"; @files=<>; print $#files;'

    This example prints the number of files found, avoiding the danger of newlines in filenames corrupting the count, as would occur with:

        find . | wc -l

    As most command-line programs do not support null-delimited input, I figure the best thing would be to capture the output of find . -print0 in a bash array, like I have done in the perl snippet above, and then continue with the task, whatever it may be. How can I do this? This doesn't work:

        find . -print0 | ( IFS=$'\0' ; array=( $( cat ) ) ; echo ${#array[@]} )

    A much more general question might be: how can I do useful things with lists of files in bash?
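    One pattern that should work (a sketch; note that $'\0' expands to an empty string because bash variables cannot hold NUL bytes, which is part of why the subshell attempt above fails): read -d '' treats NUL as the record delimiter, and process substitution keeps the array in the current shell.

        #!/bin/bash
        files=()
        while IFS= read -r -d '' f; do
            files+=("$f")
        done < <(find . -print0)

        echo "${#files[@]} files"
        for f in "${files[@]}"; do
            printf '%s\n' "$f"      # always quote expansions of the array
        done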

    Read the article

  • Segmentation Fault when trying to push a string to the back of a list.

    - by user308012
    I am trying to write a logger class for my C++ calculator, but I'm experiencing a problem while trying to push a string into a list. I have tried researching this issue and have found some information, but nothing that seems to help with my problem. I am using a rather basic C++ compiler, with few debugging utilities, and I've not used C++ in quite some time (even then it was only a small amount). My code:

        #ifndef _LOGGER_H_
        #define _LOGGER_H_

        #include <iostream>
        #include <list>
        #include <string>

        using std::cout;
        using std::cin;
        using std::endl;
        using std::list;
        using std::string;

        class Logger
        {
        private:
            list<string> *mEntries;

        public:
            Logger()
            {
                // Initialize the entries list
                mEntries = new list<string>();
            }

            ~Logger()
            {
                // Release the list
                mEntries->clear();
                delete mEntries;
            }

            // Public Methods
            void WriteEntry(string entry)
            {
                // *** BELOW LINE IS MARKED WITH THE ERROR ***
                mEntries->push_back(string(entryData));
            }

            void DisplayEntries()
            {
                cout << endl << "**********************"
                     << endl << "*   Logger Entries   *"
                     << endl << "**********************"
                     << endl << endl;

                for(list<string>::iterator it = mEntries->begin();
                    it != mEntries->end(); it++)
                {
                    cout << *it << endl;
                }
            }
        };

        #endif

    I am calling the WriteEntry method by simply passing in a string, like so:

        mLogger->WriteEntry("Testing");

    Any advice on this would be greatly appreciated.
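    Two observations on the snippet, offered as a hedged diagnosis rather than a certain one. As posted, WriteEntry names a variable (entryData) that doesn't exist, so this exact file cannot compile; the crashing binary is presumably built from slightly different code. And a segfault on push_back most often means the Logger object itself is invalid (e.g. mLogger was never assigned with new Logger()), since the pointer-to-list indirection adds nothing here. Holding the list by value removes that failure mode entirely:

        #include <list>
        #include <string>

        class Logger
        {
        private:
            std::list<std::string> mEntries;   // by value: no new/delete, no null pointer

        public:
            void WriteEntry(const std::string& entry)
            {
                mEntries.push_back(entry);     // push the parameter that actually exists
            }
        };

        // Caller side: make sure the Logger itself exists before use, e.g.
        //   Logger logger;                    // or: mLogger = new Logger();
        //   logger.WriteEntry("Testing");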

    Read the article

  • Information about PTE's (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4 KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes.

    The problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes:

      1. Since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved if I made a 64-bit executable (didn't try it yet).
      2. Even when I only need a few hundred megabytes, the allocations fail at a certain moment.

    The second problem is the biggest one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process.

    My questions (or a cry for tips):

      • Where can I find information about the maximum number of PTEs in a process?
      • Is this different (higher) for 64-bit systems/applications or not?
      • Can the number of PTEs be configured in the application or in Windows?

    Thanks, Patrick

    PS. A note for those who will try to argue that you shouldn't write your own memory manager:

      • My application is rather specific, so I really want full control over memory management (can't give any more details).
      • Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ runtime (it only said "block corrupt" minutes after the actual corruption).
      • We also tried standard Windows utilities (like GFLAGS, ...), but they slowed down the application by a factor of 100 and couldn't find the exact position of the overwrite either.
      • We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTEs).

    Read the article

  • C++: how to build my own utility library?

    - by augustin
    I am starting to be proficient enough with C++ that I can write my own C++-based scripts (to replace the bash and PHP scripts I used to write before). I find that I am starting to have a very small collection of utility functions and subroutines that I'd like to use in several otherwise unrelated C++ scripts.

    I know I am not supposed to reinvent the wheel, and I know I could use external libraries for some of the utilities I'm creating for myself. However, it's fun to create my own utility functions, they are perfectly tailored to the job I have in mind, and for me it's a large part of the learning process. I'll see about using more polished external libraries when I am proficient enough to work on more serious, long-term projects.

    So, the question is: how do I manage my personal utility library in a way that the functions can be easily included in my various scripts? I am using Linux/Kubuntu, vim, g++, etc., and mostly coding CLI scripts. Don't assume too much in terms of experience! ;) Links to tutorials or places where relevant topics are properly documented are welcome.
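    The conventional setup on Linux is a header plus a static library: declarations in a .h file, definitions compiled once into a .a archive with ar, then linked into each program. A minimal sketch (file and directory names are just examples):

        # once, in the utility library's directory:
        g++ -c -o my_utils.o my_utils.cpp       # my_utils.h holds the declarations
        ar rcs libmy_utils.a my_utils.o         # bundle object file(s) into a static lib

        # in each "script" that uses it:
        g++ -o myscript myscript.cpp -I ~/mylib -L ~/mylib -lmy_utils

    A Makefile can automate the first two steps so the library rebuilds whenever a utility source file changes.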

    Read the article

  • Task Scheduler Cannot Apply My Changes - Adding a User with Permissions

    - by Aaron
    I can log in to the server using a domain account without administrator privileges and create a task in the Task Scheduler. I am allowed to do an initial save of the task, but I am unable to modify it with the same user account. When changes are complete, a message box prompts for the user password (same domain user I logged in with), then fails with the following message:

        Task Scheduler cannot apply your changes. The user account is unknown, the password is incorrect, or the account does not have permission to modify the task.

    When I check the Log on as Batch Job properties (found this from the Help documentation):

        This policy is accessible by opening the Control Panel, Administrative Tools, and then Local Security Policy. In the Local Security Policy window, click Local Policy, User Rights Assignment, and then Logon as batch job.

    Everything is grayed out, so I can't add a user. How can I add a user?

    Read the article

  • HP ACU shows parity initialization failed (with screenshot)

    - by lbanz
    I put in a new drive due to a hard-drive failure. When the rebuild got to 100%, the controller failed, and I needed to reboot the server to bring it online. I had to do this about three times, and it eventually finished rebuilding. But I found that it now says the parity initialization status is "failed". I've left it for a few hours, but it didn't seem to reinitialize.

    Then I ran the Insight online diagnostic tools, which reported that the disk I put in has reached its read/write error threshold. So I'm beginning to think that the brand-new disk I put in is faulty. Before I put in the disk, the parity initialization was in a finished state.

    Should I replace the new disk I put in? I'm very worried, as I think the parity is broken. Or is there a way to kick-start the initialization process?

    Read the article

  • Backup multiple Exchange Accounts without direct access to exchange server

    - by Mike Wallace
    For e-mail, we use Microsoft Exchange, hosted by 1and1.com. We have about 30 Exchange accounts that I would like to back up to PST files. That is, for each account that we have (all 30), I would like to create a single PST file (1.pst through 30.pst). I do not have direct access to the Exchange server. Basically, for each Exchange account, I can supply:

      • The IP address of the Exchange server, or the URL to the OWA
      • The username
      • The password

    Is there a tool out there that can do this for me? It seems that Microsoft's "Online Services Migration Tools" comes awfully close, but it appears to be geared towards pulling data out of any Exchange server and pushing it into Microsoft Online. I don't believe it can be used to simply pull the data out and generate PSTs.

    Read the article

  • Cannot find one or more components. Please reinstall application

    - by Chris
    I am running Windows 8 (64-bit) and have SQL Server 2012 installed. I downloaded the client tools, looked in the directory for SQL Server Management Studio, and saw that it's there. When I try to run SQL Server Management Studio, I receive the error message: "Cannot find one or more components. Please reinstall application". This problem just started. I have reinstalled the application and downloaded the service packs. The shortcut shows the path, but it still will not run.

    Read the article

  • Mac OS X Server Configure DHCP Options 66 and 67

    - by Paul Adams
    I need to configure the Mountain Lion (10.8.2) OS X Server BOOTP service to provide DHCP options 66 and 67, to allow PXE booting for PCs on my network. I have tried following the bootpd man page, but it is not specific enough. I have also read conflicting information on the net, but nothing definitive for Mountain Lion DHCP. From the bootpd man page:

        bootpd has a built-in type conversion table for many more options, mostly those specified in RFC 2132, and will try to convert from whatever type the option appears in the property list to the binary, packet format. For example, if bootpd knows that the type of the option is an IP address or list of IP addresses, it converts from the string form of the IP address to the binary, network byte order numeric value. If the type of the option is a numeric value, it converts from string, integer, or boolean, to the proper sized, network byte-order numeric value. Regardless of whether bootpd knows the type of the option or not, you can always specify the DHCP option using the data property list type:

            <key>dhcp_option_128</key>
            <data>
            AAqV1Tzo
            </data>

    My TFTP server is 172.16.152.20 and the boot file is pxelinux.0. I have edited /etc/bootpd.plist and added the following to the subnet dict:

        <key>dhcp_option_66</key>
        <data>
        LW4gLWUgrBCYFAo=
        </data>
        <key>dhcp_option_67</key>
        <data>
        LW4gLWUgcHhlbGludXguMAo=
        </data>

    According to the man page, the data elements are supposed to be Base64-encoded, but no matter what I try, I cannot get PXE clients to boot. I have tried encoding 172.16.152.20 using various methods:

      • echo "172.16.152.20" | openssl enc -base64 returns MTcyLjE2LjE1Mi4yMAo=
      • DHCP Option Code Utility (http://mac.softpedia.com/get/Internet-Utilities/DHCP-Option-Code-Utility.shtml) generating a string from 172.16.152.20 yields LW4gLWUgMTcyLjE2LjE1Mi4yMAo= (used in the above example)
      • DHCP Option Code Utility generating an IP address from 172.16.152.20 yields LW4gLWUgrBCYFAo=

    Encoding pxelinux.0 with the above methods likewise yields different encodings. I have tried using all three methods of encoding the data elements, but nothing seems to work, i.e. my PXE boot clients do not get directed to my TFTP server.

    Can anyone help? Regards, Paul Adams.
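    For what it's worth, all three attempts above appear to embed stray bytes: MTcyLjE2LjE1Mi4yMAo= decodes to "172.16.152.20" followed by a newline (echo appends one; printf '%s' 172.16.152.20 | openssl enc -base64 avoids it), and both utility outputs begin with LW4gLWUg, which decodes to the literal text "-n -e ", suggesting an echo-flag problem somewhere in that pipeline. Base64 of the clean payloads can be produced like this (a sketch; whether bootpd wants option 66 as a string or as a raw 4-byte address is exactly the ambiguity the man page leaves open, so both are shown):

        import base64, socket

        # option 66 (TFTP server) as a plain string, option 67 (bootfile name):
        print(base64.b64encode(b"172.16.152.20"))   # MTcyLjE2LjE1Mi4yMA==
        print(base64.b64encode(b"pxelinux.0"))      # cHhlbGludXguMA==

        # option 66 as a raw 4-byte IP address, in case bootpd expects binary:
        print(base64.b64encode(socket.inet_aton("172.16.152.20")))  # rBCYFA==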

    Read the article

  • Firefox Furigana Injector on Debian

    - by Ken
    I'm using Iceweasel (unbranded/rebranded Firefox) 3.5.9 on Debian (amd64). I want to use the "Furigana Injector" plugin. I installed it via the Tools > Add-ons menu item (version 1.3) and restarted Firefox. Unfortunately, when I click the button, it only says "The 'SimpleMecab' XPCOM component could not be loaded." in a dialog box, several dozen times (!). I found a Debian package "libmecab1", but installing it didn't help. Is there some "mecab" package I can install that will make this work?

    Read the article
