Search Results

Search found 11930 results on 478 pages for 'shared machines'.

Page 85/478

  • How to install MariaDB rpms in CentOS 6.4 using rpm (not yum cmd) + handling mysql-libs conflicts

    - by Pat C
    I need to script the install of MariaDB using the rpm command on CentOS 6.4. I can't use yum since it's going to be an offline install, so there's no access to the repository. The only MySQL package installed is mysql-libs, as various other packages in CentOS depend on it. When I did a test install of MariaDB with yum, it correctly accounted for mysql-libs and uninstalled it at the end, since MariaDB could handle the dependencies once it was installed:

      [root@new-host-6 ~]# yum install MariaDB-client MariaDB-common MariaDB-compat MariaDB-devel MariaDB-server MariaDB-shared
      Loaded plugins: downloadonly, fastestmirror, refresh-packagekit, security, verify
      Loading mirror speeds from cached hostfile
       * base: mirrors.kernel.org
       * extras: mirror.keystealth.org
       * updates: mirror.umd.edu
      Setting up Install Process
      Resolving Dependencies
      --> Running transaction check
      ---> Package MariaDB-client.x86_64 0:5.5.32-1 will be installed
      ---> Package MariaDB-common.x86_64 0:5.5.32-1 will be installed
      ---> Package MariaDB-compat.x86_64 0:5.5.32-1 will be obsoleting
      ---> Package MariaDB-devel.x86_64 0:5.5.32-1 will be installed
      ---> Package MariaDB-server.x86_64 0:5.5.32-1 will be installed
      ---> Package MariaDB-shared.x86_64 0:5.5.32-1 will be obsoleting
      ---> Package mysql-libs.x86_64 0:5.1.66-2.el6_3 will be obsoleted
      --> Finished Dependency Resolution

      Dependencies Resolved

      ================================================================
       Package          Arch    Version   Repository  Size
      ================================================================
      Installing:
       MariaDB-client   x86_64  5.5.32-1  mariadb     10 M
       MariaDB-common   x86_64  5.5.32-1  mariadb     23 k
       MariaDB-compat   x86_64  5.5.32-1  mariadb     2.7 M
           replacing  mysql-libs.x86_64 5.1.66-2.el6_3
       MariaDB-devel    x86_64  5.5.32-1  mariadb     5.6 M
       MariaDB-server   x86_64  5.5.32-1  mariadb     34 M
       MariaDB-shared   x86_64  5.5.32-1  mariadb     1.1 M
           replacing  mysql-libs.x86_64 5.1.66-2.el6_3

      Transaction Summary
      ================================================================
      Install       6 Package(s)
      Total download size: 53 M
      Is this ok [y/N]: y
      Downloading Packages:
      (1/6): MariaDB-5.5.32-centos6-x86_64-client.rpm |  10 MB 00:06
      (2/6): MariaDB-5.5.32-centos6-x86_64-common.rpm |  23 kB 00:00
      (3/6): MariaDB-5.5.32-centos6-x86_64-compat.rpm | 2.7 MB 00:02
      (4/6): MariaDB-5.5.32-centos6-x86_64-devel.rpm  | 5.6 MB 00:06
      (5/6): MariaDB-5.5.32-centos6-x86_64-server.rpm |  34 MB 00:23
      (6/6): MariaDB-5.5.32-centos6-x86_64-shared.rpm | 1.1 MB 00:00
      ----------------------------------------------------------------
      Total                                 1.3 MB/s |  53 MB 00:40
      warning: rpmts_HdrFromFdno: Header V4 DSA/SHA1 Signature, key ID 1bb943db: NOKEY
      Retrieving key from https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
      Importing GPG key 0x1BB943DB:
       Userid: "Daniel Bartholomew (Monty Program signing key) <[email protected]>"
       From  : https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
      Is this ok [y/N]: y
      Running rpm_check_debug
      Running Transaction Test
      Transaction Test Succeeded
      Running Transaction
      Warning: RPMDB altered outside of yum.
        Installing : MariaDB-compat-5.5.32-1.x86_64   1/7
        Installing : MariaDB-common-5.5.32-1.x86_64   2/7
        Installing : MariaDB-server-5.5.32-1.x86_64   3/7
      chown: cannot access `/var/lib/mysql': No such file or directory
      PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
      To do so, start the server, then issue the following commands:
      '/usr/bin/mysqladmin' -u root password 'new-password'
      '/usr/bin/mysqladmin' -u root -h new-host-6 password 'new-password'
      Alternatively you can run: '/usr/bin/mysql_secure_installation'
      which will also give you the option of removing the test databases and
      anonymous user created by default. This is strongly recommended for
      production servers. See the MariaDB Knowledgebase at http://kb.askmonty.org
      or the MySQL manual for more instructions. Please report any problems with
      the '/usr/bin/mysqlbug' script! The latest information about MariaDB is
      available at http://mariadb.org/. You can find additional information about
      the MySQL part at: http://dev.mysql.com Support MariaDB development by
      buying support/new features from Monty Program Ab. You can contact us about
      this at [email protected]. Alternatively consider joining our community
      based development effort:
      http://kb.askmonty.org/en/contributing-to-the-mariadb-project/
        Installing : MariaDB-devel-5.5.32-1.x86_64    4/7
        Installing : MariaDB-client-5.5.32-1.x86_64   5/7
        Installing : MariaDB-shared-5.5.32-1.x86_64   6/7
        Erasing    : mysql-libs-5.1.66-2.el6_3.x86_64 7/7
        Verifying  : MariaDB-common-5.5.32-1.x86_64   1/7
        Verifying  : MariaDB-server-5.5.32-1.x86_64   2/7
        Verifying  : MariaDB-devel-5.5.32-1.x86_64    3/7
        Verifying  : MariaDB-client-5.5.32-1.x86_64   4/7
        Verifying  : MariaDB-compat-5.5.32-1.x86_64   5/7
        Verifying  : MariaDB-shared-5.5.32-1.x86_64   6/7
        Verifying  : mysql-libs-5.1.66-2.el6_3.x86_64 7/7

      Installed:
        MariaDB-client.x86_64 0:5.5.32-1  MariaDB-common.x86_64 0:5.5.32-1
        MariaDB-compat.x86_64 0:5.5.32-1  MariaDB-devel.x86_64 0:5.5.32-1
        MariaDB-server.x86_64 0:5.5.32-1  MariaDB-shared.x86_64 0:5.5.32-1

      Replaced:
        mysql-libs.x86_64 0:5.1.66-2.el6_3

      Complete!

    My question is: what is the equivalent way to install the MariaDB packages using the rpm command only, as opposed to yum? If I do rpm -ivh MariaDB*.rpm, I get a ton of messages like the following about conflicts with mysql-libs:

      file /etc/my.cnf from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64
      file /usr/share/mysql/charsets/Index.xml from install of MariaDB-common-5.5.32-1.x86_64 conflicts with file from package mysql-libs-5.1.66-2.el6_3.x86_64

    I then used the --force option to install the MariaDB rpms and uninstalled mysql-libs. I didn't get any weird messages, but I'm not sure that is the cleanest method to handle the conflicts and do the install. So can someone confirm that installing MariaDB with the following rpm commands would be the same as using yum to install the packages and handle the mysql-libs conflicts/removal:

      rpm -ivh --force MariaDB*.rpm
      rpm -e mysql-libs

    Thanks for any input!
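    Since the goal is to script this, here is a minimal Python sketch of exactly the sequence proposed above (force-install the local MariaDB RPMs, then erase mysql-libs), assuming the RPMs sit in the script's working directory. It only wraps the poster's own two commands and does not settle whether rpm resolves the obsoletes the same way yum does:

      import glob
      import subprocess

      # Force-install every MariaDB RPM in the current directory despite the
      # file conflicts with mysql-libs, then remove mysql-libs itself -- the
      # same two commands proposed in the question, wrapped so the script
      # aborts if either step fails.
      rpms = sorted(glob.glob("MariaDB-*.rpm"))
      subprocess.run(["rpm", "-ivh", "--force", *rpms], check=True)
      subprocess.run(["rpm", "-e", "mysql-libs"], check=True)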

    Read the article

  • how to know who is accessing my system? [closed]

    - by calvin
    Is it possible to know if anyone is accessing any of the folders or drives on my system (32-bit Windows 2003)? I mean shared folders or non-shared folders, anything. And once we know, how do we deny access to a particular host? For shared folders I know how to do this, but if someone is accessing a folder with proper credentials, I don't know how to control that.

    Read the article

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario:

      I have a range of Windows and Linux machines. Some machines are EBS backed, whereas others are S3 backed.
      I need to be able to persist a machine (put it to sleep), preferably keeping all the settings it had while it was running.
      I need to be able to quickly wake a machine from sleep [ideally with an SLA of less than 2 minutes to turn on, if such an SLA is available with Amazon].

    Here's the stuff that confuses me:

      AWS allows me to put EBS-backed machines to sleep, but not S3-backed ones.
      I believe I can put S3 machines into some sort of persistence mode, but this involves shutting down the machine, writing it to S3 storage and then recovering from there (not a real sleep mode, but at least I don't continue to get billed for CPU).
      S3 backing seems to take a long time either to write a machine to disk or to recover (turn on) a machine.
      I can't tell immediately which machines are EBS backed and which are S3 backed. It seems like I can instantiate either type, but it's not immediately clear how Amazon decides whether a given machine should be EBS or S3 backed.

    Advice?
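    On the last point (telling EBS-backed and S3-backed instances apart), the root device type is exposed in the API. A minimal sketch using the modern boto3 SDK, which postdates this question; it assumes AWS credentials and a default region are already configured:

      import boto3

      # Print every instance together with its root device type, which is
      # either "ebs" or "instance-store" (the S3-backed AMI style).
      ec2 = boto3.client("ec2")
      for reservation in ec2.describe_instances()["Reservations"]:
          for instance in reservation["Instances"]:
              print(instance["InstanceId"],
                    instance["RootDeviceType"],
                    instance["State"]["Name"])

    Only EBS-backed instances can be stopped and later started again with their root volume intact; instance-store machines can only be terminated or re-bundled back to S3.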

    Read the article

  • Amazon Product API: "Your request is missing a required parameter combination" on Blended ItemSearch

    - by Daniel Schaffer
    I'm having some problems trying to do an ItemSearch on the Blended index using the Amazon Product API. According to the documentation, Blended requests cannot specify the MerchantId parameter - and indeed, if I try to include it I get an error telling me so. However, when I don't include it, I get an error telling me that my request is missing a required parameter combination and that a valid combination includes MerchantId... what the hell? Here's the XML response:

      <Items xmlns="http://webservices.amazon.com/AWSECommerceService/2005-10-05">
        <Request>
          <IsValid>False</IsValid>
          <ItemSearchRequest>
            <Availability>Available</Availability>
            <Condition>All</Condition>
            <Keywords> home theater pc and other geekery</Keywords>
            <ResponseGroup>Similarities</ResponseGroup>
            <ResponseGroup>SalesRank</ResponseGroup>
            <ResponseGroup>OfferSummary</ResponseGroup>
            <ResponseGroup>Small</ResponseGroup>
            <ResponseGroup>Images</ResponseGroup>
            <SearchIndex>Blended</SearchIndex>
          </ItemSearchRequest>
          <Errors>
            <Error>
              <Code>AWS.MissingParameterCombination</Code>
              <Message>Your request is missing a required parameter combination. Required parameter combinations include MerchantId, Availability.</Message>
            </Error>
          </Errors>
        </Request>
      </Items>

    The failing requests are being sent as part of batches with other requests that are succeeding. I'm using REST to send my requests, so here's an example of a request:

      http://ecs.amazonaws.com/onca/xml?AWSAccessKeyId=-------------&
        ItemSearch.1.Keywords=Mates%20of%20State&
        ItemSearch.1.MerchantId=Amazon&
        ItemSearch.1.SearchIndex=DVD&
        ItemSearch.2.Keywords=teaching%20Lily%20various%20computer%20related%20skills&
        ItemSearch.2.SearchIndex=Blended&
        ItemSearch.Shared.Availability=Available&
        ItemSearch.Shared.Condition=All&
        ItemSearch.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary%2CSimilarities&
        Operation=ItemSearch%2CSimilarityLookup&
        Service=AWSECommerceService&
        SimilarityLookup.1.ItemId=B000FNNHZ2&
        SimilarityLookup.2.ItemId=B000EQ5UPU&
        SimilarityLookup.Shared.Availability=Available&
        SimilarityLookup.Shared.Condition=All&
        SimilarityLookup.Shared.MerchantId=Amazon&
        SimilarityLookup.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary&
        Timestamp=2010-04-02T17%3A18%3A05Z&
        Signature=----------------

    Any ideas as to what I'm doing wrong?
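    For reference, the sketch below shows roughly how such a signed batch request can be assembled in Python. It implements the request-signing scheme the API used at the time as best I recall it (HMAC-SHA256 over the sorted, percent-encoded query string), and, as an educated guess rather than anything confirmed by the error message, it moves Availability and Condition off the Shared.* keys so they no longer apply to the Blended ItemSearch. Credentials are placeholders, and only the two ItemSearch requests are shown (the SimilarityLookup half of the batch is omitted):

      import base64
      import hashlib
      import hmac
      import time
      from urllib.parse import quote

      ACCESS_KEY = "PLACEHOLDER-ACCESS-KEY"   # placeholder credentials
      SECRET_KEY = b"PLACEHOLDER-SECRET-KEY"
      HOST, PATH = "ecs.amazonaws.com", "/onca/xml"

      def signed_url(params):
          """Build a signed REST URL: sort and percent-encode the parameters,
          sign "GET", the host, the path and the query string (joined with
          newlines) using HMAC-SHA256, then append the Base64 signature."""
          params = dict(params,
                        AWSAccessKeyId=ACCESS_KEY,
                        Timestamp=time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()))
          query = "&".join("%s=%s" % (quote(k, safe=""), quote(str(v), safe=""))
                           for k, v in sorted(params.items()))
          to_sign = "GET\n%s\n%s\n%s" % (HOST, PATH, query)
          digest = hmac.new(SECRET_KEY, to_sign.encode("utf-8"), hashlib.sha256).digest()
          return "http://%s%s?%s&Signature=%s" % (
              HOST, PATH, query, quote(base64.b64encode(digest).decode(), safe=""))

      # Availability/Condition set per request instead of via Shared.*, so the
      # Blended search (request 2) carries neither of them.
      print(signed_url({
          "Service": "AWSECommerceService",
          "Operation": "ItemSearch",
          "ItemSearch.1.SearchIndex": "DVD",
          "ItemSearch.1.Keywords": "Mates of State",
          "ItemSearch.1.MerchantId": "Amazon",
          "ItemSearch.1.Availability": "Available",
          "ItemSearch.1.Condition": "All",
          "ItemSearch.2.SearchIndex": "Blended",
          "ItemSearch.2.Keywords": "teaching Lily various computer related skills",
          "ItemSearch.Shared.ResponseGroup": "Small,SalesRank,Images,OfferSummary,Similarities",
      }))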

    Read the article

  • SVN Externals in a different SCM

    - by Sean Chambers
    At a previous workplace we used svn externals to update dependent projects when a shared component was updated. This made it easy to see whether those changes broke anything, as well as to update dependent projects to the latest version of a shared component automatically, without any intervention. At a new workplace we are using cc.net with Surround SCM, and I'm trying to find something similar in Surround. I haven't found anything like externals, only "shared files"; but unlike externals, shared files don't allow you to point at a specific revision of a file. I'm interested in what other people are doing in these scenarios to lean on their continuous integration and treat it more as integration than as a "continuous build" server. Does anyone know of a tool or some way to get "externals" behavior without using svn? I suppose I could keep an XML registry file of which projects depend on which assemblies and whether they should use the latest version, but this seems like overkill.

    Read the article

  • Compiling and executing through the command line shows NoClassDefFoundError when trying to find a Java package

    - by eruina
    I have a client/server program that attempts to send and receive an object. There are three packages: server, client and shared. The shared package contains only the Message class. I put Message.java from the shared package into the same folder as the calcclient package source files and the calcserver package source files. I compile using the line:

      javac -classpath .; Message.java

    They compile. Then I change directory up one level and run with:

      java -classpath .; .Main

    When I use NetBeans to run, the entire program works as per normal, but not if I run it from the command line. If it's executed through the command line, the program works until it needs to use the Message object; then it shows a NoClassDefFoundError. Am I putting the right files in the right places? How do I get the program to find the shared package through the command line?

    Read the article

  • Is switching to PHP 5.3 a good idea?

    - by understack
    I'm using PHP 5.2.11. I've seen that the 5.3 version has several backward-incompatibility issues, and because of that some code might require changes. Since my website is hosted on a shared server, many have suggested that those servers won't upgrade to 5.3 in the near future (probably due to the incompatibility issues for many shared users). Would it be wise to develop a new project in 5.3 that would be hosted on a shared server?

    Read the article

  • Why does this Rails named scope return empty (uninitialized?) objects?

    - by mipadi
    In a Rails app, I have a model, Machine, that contains the following named scope:

      named_scope :needs_updates, lambda {
        {
          :select => self.column_names.collect{|c| "\"machines\".\"#{c}\""}.join(','),
          :group  => self.column_names.collect{|c| "\"machines\".\"#{c}\""}.join(','),
          :joins  => 'LEFT JOIN "machine_updates" ON "machine_updates"."machine_id" = "machines"."id"',
          :having => ['"machines"."manual_updates" = ? AND "machines"."in_use" = ? AND (MAX("machine_updates"."date") IS NULL OR MAX("machine_updates"."date") < ?)', true, true, UPDATE_THRESHOLD.days.ago]
        }
      }

    This named scope works fine in development mode. In production mode, however, it returns the 2 models as expected, but the models are empty or uninitialized; that is, actual objects are returned (not nil), but all the fields are nil. For example, when inspecting the return value of the named scope in the console, the following is returned:

      [#<Machine >, #<Machine >]

    But, as you can see, all the fields of the objects returned are set to nil. The production and development environments are essentially the same. Both are using a SQLite database. Any ideas what's going wrong?

    Read the article

  • How can I prevent external MSBuild files from being cached (by Visual Studio) during a project build

    - by Damian Powell
    I have a project in my solution which started life as a C# library project. It's got nothing of any interest in it in terms of code; it is merely used as a dependency in the other projects in my solution in order to ensure that it is built first. One of the side effects of building this project is that a shared AssemblyInfo.cs is created which contains the version number in use by the other projects. I have done this by adding the following to the .csproj file:

      <ItemGroup>
        <None Include="Properties\AssemblyInfo.Shared.cs.in" />
        <Compile Include="Properties\AssemblyInfo.Shared.cs" />
        <None Include="VersionInfo.targets" />
      </ItemGroup>
      <Import Project="$(ProjectDir)VersionInfo.targets" />
      <Target Name="BeforeBuild" DependsOnTargets="UpdateSharedAssemblyInfo" />

    The referenced file, VersionInfo.targets, contains the following:

      <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
        <PropertyGroup>
          <!-- Some properties defining tool locations and the name of the AssemblyInfo.Shared.cs.in file etc. -->
        </PropertyGroup>
        <Target Name="UpdateSharedAssemblyInfo">
          <!-- Uses the Exec task to run one of the tools to generate AssemblyInfo.Shared.cs based on the location of AssemblyInfo.Shared.cs.in and some of the other properties. -->
        </Target>
      </Project>

    The contents of the VersionInfo.targets file could simply be embedded within the .csproj file, but it is external because I am trying to turn all of this into a project template. I want the users of the template to be able to add the new project to the solution, edit the VersionInfo.targets file, and run the build. The problem is that modifying and saving the VersionInfo.targets file and rebuilding the solution has no effect - the project file uses the values from the .targets file as they were when the project was opened. Even unloading and reloading the project has no effect. In order to get the new values, I need to close Visual Studio and reopen it (or reload the solution). How can I set this up so that the configuration is external to the .csproj file and not cached between builds?

    Read the article

  • User Controls Importing Problem (asp.net)(vb)

    - by Phil
    I have this class that contains vars for a db connection:

      Public Shared s As String
      Public Shared con As String = WebConfigurationManager.ConnectionStrings("foo").ToString()
      Public Shared c As New SqlConnection(con)
      Public Shared x As New SqlCommand(s, c)

    I import this to my page like this:

      Imports DBVars

    I'm then able to access these vars from my page. But if I try to import them into a user control, the variables are not available. Am I making an error, or is this expected? Thanks.

    Read the article

  • Fast inter-process (inter-thread) communication (IPC) on a large multi-CPU system

    - by IPC
    What would be the fastest portable bi-directional mechanism for inter-process communication where threads from one application need to communicate with multiple threads in another application on the same computer (and the communicating threads can be on different physical CPUs)? I assume that it would involve shared memory, a circular buffer and shared synchronization mechanisms. But shared mutexes are very expensive to synchronize on when threads are running on different physical CPUs (and there is a limited number of them, too).
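    As a toy illustration of the shared-memory-plus-ring-buffer idea, here is a Python sketch using multiprocessing.shared_memory (Python 3.8+). It uses an ordinary lock around the head/tail updates, so it deliberately sidesteps the hard part of the question (the cheap cross-CPU synchronization a real C/C++ implementation would get from atomic operations), and the slot layout is invented for the example:

      import struct
      from multiprocessing import Lock, Process
      from multiprocessing import shared_memory

      HEADER = struct.Struct("II")     # head and tail indices of the ring
      SLOTS, SLOT_SIZE = 64, 32        # fixed-size message slots (made up)

      def ring_put(buf, lock, payload):
          """Append one fixed-size message; return False if the ring is full."""
          with lock:
              head, tail = HEADER.unpack_from(buf, 0)
              if (head + 1) % SLOTS == tail:
                  return False
              offset = HEADER.size + head * SLOT_SIZE
              buf[offset:offset + SLOT_SIZE] = payload.ljust(SLOT_SIZE, b"\0")
              HEADER.pack_into(buf, 0, (head + 1) % SLOTS, tail)
              return True

      def ring_get(buf, lock):
          """Pop one message, or return None if the ring is empty."""
          with lock:
              head, tail = HEADER.unpack_from(buf, 0)
              if head == tail:
                  return None
              offset = HEADER.size + tail * SLOT_SIZE
              data = bytes(buf[offset:offset + SLOT_SIZE]).rstrip(b"\0")
              HEADER.pack_into(buf, 0, head, (tail + 1) % SLOTS)
              return data

      def consumer(shm_name, lock):
          shm = shared_memory.SharedMemory(name=shm_name)
          try:
              received = 0
              while received < 10:
                  msg = ring_get(shm.buf, lock)
                  if msg is not None:
                      print("consumer got", msg)
                      received += 1
          finally:
              shm.close()

      if __name__ == "__main__":
          shm = shared_memory.SharedMemory(create=True,
                                           size=HEADER.size + SLOTS * SLOT_SIZE)
          lock = Lock()
          HEADER.pack_into(shm.buf, 0, 0, 0)      # empty ring: head == tail
          worker = Process(target=consumer, args=(shm.name, lock))
          worker.start()
          for i in range(10):
              while not ring_put(shm.buf, lock, b"message-%d" % i):
                  pass                            # ring full, spin
          worker.join()
          shm.close()
          shm.unlink()

    A C or C++ version of the same layout would typically replace the lock with atomic head/tail indices (single producer, single consumer) so that no kernel-level mutex is touched on the fast path.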

    Read the article

  • Sharing a global/static variable between a process and DLL

    - by minjang
    I'd like to share a static/global variable only between a process and a DLL that is invoked by that process. The exe and the DLL are in the same memory address space. I don't want the variable to be shared among other processes. Elaboration of the problem: say that there is a static/global variable x in a.cpp. Both the exe foo.exe and the dll bar.dll include a.cpp, so the variable x is in both images. Now, foo.exe loads bar.dll dynamically (or statically). The question is whether the variable x is then shared by the exe and the dll or not. On Windows, these two never share x: the exe and the dll each get a separate copy of x. On Linux, however, the exe and the dll do share the variable x. Unfortunately, I want the behavior of Linux. I first considered using pragma data_seg on Windows. However, even if I correctly set up the shared data segment, foo.exe and bar.dll never share x. Recall that bar.dll is loaded into the address space of foo.exe. However, if I run another instance of foo.exe, then x is shared. But I don't want x to be shared by different processes. So using data_seg failed. I may use a memory-mapped file with a name that is unique to the exe/dll pair, which I'm trying now. Two questions: Why is the behavior of Linux and Windows different? Can anyone explain more about this? And what would be the easiest way to solve this problem on Windows?

    Read the article

  • Sharing the same file between different projects

    - by selsine
    Hi Everyone, for version control we currently use Visual Source Safe and are thinking of migrating to another version control system (SVN, Mercurial, Git). Currently we use Visual Source Safe's "Shared" file feature quite heavily. This allows us to share code between the design and runtime parts of a single product, and between multiple products as well. For example:

      Product One
        - Design
            Login.cpp  Login.h  Helper.cpp  Helper.h
        - Runtime
            Login.cpp  Login.h  Helper.cpp  Helper.h
      Product Two
        - Design
            Login.cpp  Login.h
        - Launcher
            Login.cpp  Login.h
        - Runtime
            Login.cpp  Login.h

    In this example Login.cpp and Login.h contain common code that all of our projects need, while Helper.cpp and Helper.h are only used in Product One. In Visual Source Safe they are shared between the specific projects, which means that whenever the files are updated in one project they are updated in every project they are shared with. This is a simple example, but hopefully it explains why we use the shared feature: to reduce the amount of duplicated code and ensure that when a bug is fixed all projects automatically have access to the fixed code. After researching alternatives to Visual Source Safe, it seems that most version control systems do not have the idea of shared files; instead they use the idea of sub-repositories (http://mercurial.selenic.com/wiki/subrepos, http://svnbook.red-bean.com/en/1.0/ch07s03.html). My question (after all of that) is about the best practices for achieving this with other version control systems:

      Should we restructure our projects so that two copies of the files do not exist and an include directory is used instead? e.g.

        Product One
          Design
            Login.cpp  Login.h
          Runtime
            Login.cpp  Login.h
          Common
            Helper.cpp  Helper.h

        This still leaves the question of what to do with Login.cpp and Login.h.

      Should the shared files be moved to their own repository and then compiled into a lib or dll? This would make bug fixing more time-consuming, as the lib projects would have to be edited and then rebuilt.

      Should we use externals or sub-repositories?

      Should we combine our projects (i.e. runtime, design, and launcher) into one large project?

    Any help would be appreciated. We have the feeling that our project design has evolved based on the tools that we used, and now that we are thinking of switching tools it's difficult for us to see how we can best modify our practices. Or maybe we are the only people out there doing this...? Also, we use Visual Studio for all of our stuff. Thanks.

    Read the article

  • Is it good practise to blank out inherited functionality that will not be used?

    - by Timo Kosig
    I'm wondering if I should change the software architecture of one of my projects. I'm developing software for a project where two sides (in fact a host and a device) use shared code. That helps because shared data, e.g. enums, can be stored in one central place. I'm working with what we call a "channel" to transfer data between device and host. Each channel has to be implemented on both the device and the host side. We have different kinds of channels: ordinary ones and special channels which transfer measurement data. My current solution has the shared code in an abstract base class. From there on, code is split between the two sides. As it has turned out, there are a few cases where we would have shared code but can't share it; we have to implement it on each side. The principle of DRY (don't repeat yourself) says that you shouldn't have code twice. My thought was to concatenate the functionality of, e.g., the abstract measurement channel for the device side and the host side into an abstract class with shared code. That means, though, that once we create an actual class for either the device or the host side for that channel, we have to hide the functionality that is used by the other side. Is this an acceptable thing to do:

      public abstract class MeasurementChannelAbstract
      {
          protected void MethodUsedByDeviceSide() { }

          protected void MethodUsedByHostSide() { }
      }

      public class DeviceMeasurementChannel : MeasurementChannelAbstract
      {
          public new void MethodUsedByDeviceSide()
          {
              base.MethodUsedByDeviceSide();
          }
      }

    Now, DeviceMeasurementChannel only uses the device-side functionality from MeasurementChannelAbstract. By declaring all methods/members of MeasurementChannelAbstract protected, you have to use the new keyword to enable that functionality to be accessed from the outside. Is that acceptable, or are there any pitfalls, caveats, etc. that could arise later when using the code?

    Read the article

  • How can I pass an object as a parameter in the google app engine RPC flow?

    - by jimmartens
    I'm building a pretty basic app, and one thing I want to do is pass an object as a parameter up through the service - async - impl chain instead of passing up a million separate parameters. So in the async interface I do something like this:

      import shared.Profile;
      ...
      public interface ProfileServiceAsync {
          public void addProfile(Profile inProf, AsyncCallback<Void> async);

    Now, Profile is a class in com. ... .shared, and I have the following in my ... .gwt.xml:

      <source path='shared'/>

    That being said, when I try to compile I get this error:

      [ERROR] Errors in 'file:/D:/projects/eclipse/workspace/.../src/com/.../client/ProfileServiceAsync.java'
      [ERROR] Line 11: No source code is available for type shared.Profile; did you forget to inherit a required module?

    Any ideas on this?

    Read the article

  • Sharing memory between modules

    - by John Holecek
    Hi, I was wondering how to share some memory between different program modules - let's say I have a main application (exe) and then some module (dll). They both link to the same static library. This static library has a manager that provides various services. What I would like to achieve is to have this manager shared between all application modules, and to do this transparently during the library initialization. Between processes I could use shared memory, but I want this to be shared within the current process only. Could you think of some cross-platform way to do this, possibly using the Boost libraries if they provide facilities for it? The only solution I can think of right now is to use a shared library of the respective OS that all other modules link to at runtime, and have the manager stored there.

    Read the article

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements:

      The network of machines consists of Amazon EC2 instances running Ubuntu. We access the machines with SSH.
      Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform).
      The system will not necessarily have a constant number of machines; we may have to permanently or temporarily alter the number of machines running. This is the reason why I'm looking into centralized authentication/storage.
      The implementation of this should be a secure one.
      We're unsure whether users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume, for the sake of security, that their software could be malicious.

    I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to this old article by Symantec. The article suggests moving to NIS+, but, as it is old, this Wikipedia article has cited statements suggesting a trend away from NIS+ by Sun. The recommended replacement is another thing I have heard of... LDAP. It looks like LDAP can be used to save user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, but I see references to them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has an authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible. Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements: I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token). Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As the bold text pointed out, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Kerberos? NFS?) with respect to security.
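    To make the LDAP option a bit more concrete, here is a minimal sketch of what a credential check against a directory looks like from an application's point of view, using the third-party ldap3 package; the server address and DN layout are invented for the example, and the real work (directory setup, PAM/NSS on each instance, home-directory mounts) is not covered here:

      from ldap3 import Connection, Server

      # Bind as the user; a successful bind means the directory accepted the
      # credentials. Hostname and DN structure are placeholders.
      server = Server("ldaps://ldap.example.internal")
      conn = Connection(server,
                        user="uid=alice,ou=people,dc=example,dc=internal",
                        password="secret")

      if conn.bind():
          print("authenticated as", conn.user)
          conn.unbind()
      else:
          print("bind failed:", conn.result)

    The same bind check is essentially what an LDAP-aware service (RabbitMQ's LDAP plugin, or PAM on the instances) performs on your behalf, which is what makes the single central directory attractive.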

    Read the article

  • File server share access intermittent/slow/machine unstable: win2k8r2

    - by Jack B.
    I have a file server running Win2k8R2 on an older HP DL380G4. It has nothing set up on it other than file sharing, with all drivers/firmware/updates installed. The file server is used as a dump for a bunch of test machines - so essentially a lot of small files are being written to it. It was working fine until it started showing the following symptoms: shares became either very slow/intermittent or could not be accessed at all. Logging in to the server, you could use it like normal, but Windows would start freezing and eventually you had to hard-reboot it because nothing was responsive. After rebooting, it would work fine for 20 minutes to 2 hours and then degrade into this broken state again. Some info after investigation:

      The HP RAID Config utility shows the RAID array as functioning properly (RAID 5, by the way).
      The event log shows a bunch of DoS attacks from the test machines, saying it has disconnected the connection. a. AFAIK (this is not part of my job) the test machines haven't changed the way they log information to this server, and the number of them hasn't increased. b. Nothing is infected; this server was scanned fully, and the test machines are re-imaged almost daily.
      Nothing in Performance Monitor shows as being pegged at maximum (CPU/HD/network/RAM).
      I installed MS Network Monitor and it is showing a lot of traffic.
      The server was using one gigabit Ethernet connection; I connected the second one as well, with the same results.
      Forgot to add - one of the commonly written-to directories on the share has over 16k subdirectories in it, with a crapton of small files within those dirs. Some of the OS instability was slow access to the drive which has this directory - perfmon doesn't show much activity on the HD, though, so I'm not sure if this crowded dir is the cause.

    Here is one important fact: I ran into this issue 2-3 months ago, couldn't figure it out, but I had a spare identical machine so I swapped them out (I thought it was related to the machine), and now I have the same issue. Also, the computer will be stable if I turn off file sharing. So is the server just getting DoS'd by the test machines? I've never dealt with such an issue. Is instability in the server's OS common when getting DoS'd? Is there anything I can do to confirm this before telling the owners of the test machines to optimize their traffic? (I'm not sure what they'll be able to do.) Is there something within Win2k8R2 that can balance the traffic across the two NICs? Any help would be appreciated. Update: another thought - the drive with the share is RAID 5 across 6 SCSI320 300GB HDs. They are near full capacity, with about 100GB of 1TB left. Could the number of tiny files be causing some weirdness with the parity in this array? I think I've read something about this in the past, but I'm no expert on RAID.

    Read the article

  • Remote installing an msi on citrix servers using WMI

    - by capn
    OK, I'm a C# programmer who is trying to streamline the deployment of a custom Windows Forms app I inherited and built an installer for with WiX (this app will need to be reinstalled regularly as I'm making changes to it). I'm not really used to admin-type things (or VBS, or WMI, or terminal servers, or Citrix; even WiX and MSI are not things I usually deal with), but so far I have put together some VBS and have an end goal in mind. The MSI does work, and I've installed it from the mapped O: drive on my dev machine and while RDP'd to a Citrix machine. End goal: deploy code written on my dev machine and compiled into an MSI (that I can improve upon within the confines of WiX and whatever the Windows Installer engine allows) to the cluster of Citrix machines my users have access to. What am I missing in my script to get the MSI to execute on the remote machines? Layout: machine A is my dev machine, and has the VBS script and the MSI file (XP SP3). Machines C1 - C6 are the Citrix servers that need the application installed on them via the MSI (Server 2003 R2 SP2). There is also, optionally, a shared network resource that all the machines can access. Script:

      'Set WMI Constants
      Const wbemImpersonationLevelImpersonate = 3
      Const wbemAuthenticationLevelPktPrivacy = 6

      'Set whether this is installing to the debug Citrix Servers
      Const isDebug = true

      'Set MSI location
      'Network location yields error 1619 (This installation package could not be opened.)
      msiLocation = "\\255.255.255.255\odrive\Citrix Deployment\Setup.msi"
      'Directory on machine A yields error 3 (file not found)
      'msiLocation = "C:\Temp\Deploy\Setup.msi"
      'Mapped network drive (on both machines) yields error 3 (file not found)
      'msiLocation = "O:\Citrix Deployment\Setup.msi"

      'Set login information
      strDomain = "MyDomain"
      Wscript.StdOut.Write "user name:"
      strUser = Wscript.StdIn.ReadLine
      Set objPassword = CreateObject("ScriptPW.Password")
      Wscript.StdOut.Write "password:"
      strPassword = objPassword.GetPassword()

      'Names of Citrix Servers
      Dim citrixServerArray
      If isDebug Then
          citrixServerArray = array("C4")
      Else
          'citrixServerArray = array("C1","C2","C3","C5","C6")
      End If

      'Loop through each Citrix Server
      For Each citrixServer in citrixServerArray
          'Login to remote computer
          Set objLocator = CreateObject("WbemScripting.SWbemLocator")
          Set objWMIService = objLocator.ConnectServer(citrixServer, _
              "root\cimv2", _
              strUser, _
              strPassword, _
              "MS_409", _
              "ntlmdomain:" + strDomain)

          'Set Remote Impersonation level
          objWMIService.Security_.ImpersonationLevel = wbemImpersonationLevelImpersonate
          objWMIService.Security_.AuthenticationLevel = wbemAuthenticationLevelPktPrivacy

          'Reference to a process on the machine
          Dim objProcess : Set objProcess = objWMIService.Get("Win32_Process")

          'Change user to install for terminal services
          errReturn = objProcess.Create _
              ("cmd.exe /c change user /install", Null, Null, intProcessID)
          WScript.Echo errReturn

          'Install MSI here
          'Reference to a product on the machine
          Set objSoftware = objWMIService.Get("Win32_Product")

          'All users set in option parameter, I'm led to believe that the third parameter is actually ignored
          'http://www.webmasterkb.com/Uwe/Forum.aspx/vbscript/2433/Installing-programs-with-VbScript
          errReturn = objSoftware.Install(msiLocation,"ALLUSERS=2 REBOOT=ReallySuppress",True)
          Wscript.Echo errReturn

          'Change user back to execute
          errReturn = objProcess.Create _
              ("cmd.exe /c change user /execute", Null, Null, intProcessID)
          WScript.Echo errReturn
      Next

    I also tried using the following to install; it doesn't return an error code, but it doesn't install the MSI either, and it makes me wonder if the change user /install command is even really working.

      errReturn = objProcess.Create _
          ("cmd.exe /c msiexec /i ""O:\Citrix Deployment\Setup.msi"" /quiet")
      Wscript.Echo errReturn
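    For comparison, here is a minimal sketch of the same remote msiexec call made from Python with the third-party wmi package, which wraps the same DCOM/WMI plumbing as the VBScript; it must run on Windows with pywin32 installed, and the server name, credentials and UNC path are placeholders. It only covers the Win32_Process.Create route, not Win32_Product.Install:

      import wmi

      # Connect to one Citrix server with explicit credentials and launch a
      # quiet msiexec install from a UNC path visible to that server.
      conn = wmi.WMI(computer="C4", user=r"MyDomain\admin", password="secret")

      process_id, result = conn.Win32_Process.Create(
          CommandLine=r'msiexec /i "\\fileserver\odrive\Citrix Deployment\Setup.msi" /qn'
      )
      print("msiexec started, pid", process_id, "return code", result)

    The same caveat applies as in the script above: whatever path is handed to msiexec is resolved on the remote machine under the supplied credentials, which is presumably why paths local to machine A fail with error 3.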

    Read the article

  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID 0 array of solid-state disks will be backed up nightly to a RAID 5 or RAID 6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure, serving as an effective backup solution. Wait. What if a bolt of lightning finds a way to travel into my house, through my surge protector, into my power supply, and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine, because I generally keep the most important files (music, pictures, videos) stored in multiple places, like on my laptop, my wife's laptop, and an encrypted USB hard drive. Wait. What if a giant hedgehog meteor attacks my house from space traveling at Mach 3 and all machines and hard disks are blown to smithereens? Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier. Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four? There's no questioning the need for backups. None. However, there are three questions I'd really like addressed:

      To what level should one back up? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines and thinning out the possibility of losing all of my files. Online backups make sense, but they beg the following question.

      What should I be backing up remotely, and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of the configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over CAT-6 here and see 100 MB/s easily, but I'm afraid that I'm only going to see 2 MB/s at best when transferring up to the cloud.
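    For the offsite tier, a rough sketch of pushing the irreplaceable files to S3 with boto3; the bucket name and source path are placeholders, and at 2 MB/s this only makes sense for the photos/documents tier, not whole-machine images:

      import os
      import boto3

      BUCKET = "my-offsite-backup-bucket"      # placeholder bucket
      SOURCE = os.path.expanduser("~/Pictures")

      s3 = boto3.client("s3")

      # Walk the source tree and upload each file under a matching key.
      # A real job would skip unchanged files (compare size/mtime or ETag)
      # and run from cron; this just shows the upload call.
      for root, _dirs, files in os.walk(SOURCE):
          for name in files:
              path = os.path.join(root, name)
              key = os.path.relpath(path, SOURCE)
              s3.upload_file(path, BUCKET, key)
              print("uploaded", key)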

    Read the article

  • "Hostile" network in the company - please comment on a security setup

    - by TomTom
    I have a little specific problem here that I want (need) to solve in a satisfactory way. My company has multiple (IPv4) networks that are controlled by our router sitting in the middle - a typical smaller-shop setup. There is now one additional network that has an IP range OUTSIDE of our control, connected to the internet with another router OUTSIDE of our control. Call it a project network that is part of another company's network and combined via a VPN they set up. This means: they control the router that is used for this network, and they can reconfigure things so that they can access the machines in this network. The network is physically split on our end through some VLAN-capable switches, as it covers three locations. At one end there is the router the other company controls. I need / want to give the machines used in this network access to my company network. In fact, it may be good to make them part of my Active Directory domain. The people working on those machines are part of my company. BUT - I need to do so without compromising the security of my company network from outside influence. Any sort of router integration using the externally controlled router is ruled out by this requirement. So, my idea is this: we accept that the IPv4 address space and network topology in this network is not under our control, and we seek alternatives to integrate those machines into our company network. The two concepts I came up with are:

      Use some sort of VPN - have the machines log into a VPN. Thanks to them running modern Windows, this could be transparent DirectAccess. This essentially treats the other IP space no differently than any restaurant network a company laptop goes into.

      Alternatively, establish IPv6 routing to this Ethernet segment. But - and this is the trick - block all IPv6 packets in the switch before they hit the third-party-controlled router, so that even IF they turn on IPv6 on that thing (not used now, but they could do it) they would not get a single packet. The switch can do that nicely by pulling all IPv6 traffic coming to that port into a separate VLAN (based on the Ethernet protocol type).

    Does anyone see a problem with using the switch to isolate the outer router from IPv6? Any security hole? It is sad that we have to treat this network as hostile - it would be a lot easier otherwise - but the support personnel there are of "known dubious quality" and the legal side is clear: we cannot fulfill our obligations if we integrate them into our company while they are under a jurisdiction we don't have a say in.

    Read the article
