Search Results

Search found 23079 results on 924 pages for 'local variables'.


  • If all variables are a subset of the superkey, is the database design 5NF? [migrated]

    - by Lukazoid
    I have a table called LogMessages, which has the following columns: Level (a numeric value which represents Trace, Debug, Info, Warning, Error or Fatal), Time (a UTC time), Message (foreign key to a Messages table), Source (foreign key to a Sources table) and User (foreign key to a Users table). From what I can see, all of these columns are part of the super key; if any single value differs from an existing row, a new row can be created. My question is, does this design comply with fifth normal form? I am unsure, as some groups of data will be repeating; however, I don't believe this violates 5NF (correct me if I'm wrong).

    Read the article

  • Locale variables have no effect in remote shell (perl: warning: Setting locale failed.)

    - by Janning
    I have a fresh Ubuntu 12.04 installation. When I connect to my remote server I get errors like this: ~$ ssh example.com sudo aptitude upgrade ... Traceback (most recent call last): File "/usr/bin/apt-listchanges", line 33, in <module> from ALChacks import * File "/usr/share/apt-listchanges/ALChacks.py", line 32, in <module> sys.stderr.write(_("Can't set locale; make sure $LC_* and $LANG are correct!\n")) NameError: name '_' is not defined perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_TIME = "de_DE.UTF-8", LC_MONETARY = "de_DE.UTF-8", LC_ADDRESS = "de_DE.UTF-8", LC_TELEPHONE = "de_DE.UTF-8", LC_NAME = "de_DE.UTF-8", LC_MEASUREMENT = "de_DE.UTF-8", LC_IDENTIFICATION = "de_DE.UTF-8", LC_NUMERIC = "de_DE.UTF-8", LC_PAPER = "de_DE.UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_ALL to default locale: No such file or directory No packages will be installed, upgraded, or removed. 0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 0 B of archives. After unpacking 0 B will be used. ... I don't have this problem when I connect from an older Ubuntu installation. This is the output from my Ubuntu 12.04 installation; LANG and LANGUAGE are set: $ locale LANG=de_DE.UTF-8 LANGUAGE=de_DE:en_GB:en LC_CTYPE="de_DE.UTF-8" LC_NUMERIC=de_DE.UTF-8 LC_TIME=de_DE.UTF-8 LC_COLLATE="de_DE.UTF-8" LC_MONETARY=de_DE.UTF-8 LC_MESSAGES="de_DE.UTF-8" LC_PAPER=de_DE.UTF-8 LC_NAME=de_DE.UTF-8 LC_ADDRESS=de_DE.UTF-8 LC_TELEPHONE=de_DE.UTF-8 LC_MEASUREMENT=de_DE.UTF-8 LC_IDENTIFICATION=de_DE.UTF-8 LC_ALL= Does anybody know what has changed in Ubuntu to get this error message on remote servers?
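    A plausible explanation (an assumption on my part, not something stated above): Ubuntu's default OpenSSH client config ships with "SendEnv LANG LC_*", so the de_DE.UTF-8 settings from the 12.04 desktop get forwarded to a server that has no de_DE.UTF-8 locale generated. A rough sketch of the two usual fixes:

        # On the remote server: generate the locale the client keeps forwarding
        sudo locale-gen de_DE.UTF-8
        sudo dpkg-reconfigure locales

        # Or on the Ubuntu 12.04 client: stop forwarding locale variables by
        # commenting out the "SendEnv LANG LC_*" line in /etc/ssh/ssh_config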

    Read the article

  • Local user vs. domain user? What is the right way here?

    - by ebeeb
    I'm a software developer in a company with 6 employees. Everyone has a machine for him-/herself, so none of the machines is shared. I'm currently setting up my machine with Windows 8 and was experimenting a bit with domain and local user accounts. Please correct me if I'm wrong, but I think the idea behind it is that domain users generally should not be able to modify the configuration of a machine (like installing software), since they are able to log in on every single machine in the domain. The local user (usually just one local administrator per machine) is the one who takes care of the configuration of the machine. But in my case the login into the domain is just for being able to access directories/servers in the domain (I do not really know the details; all I know is that logging in to the domain user account is necessary). So overall I've got a local admin account and a domain account used on my machine. While working I'm logged in to my domain user account. But it annoys me that I always need to enter the credentials of my local user account when I'm about to update/install something, which happens quite often as a software developer. I fixed this by adding the domain account to the user accounts in my control panel and putting it into the Administrators group. The only thing I wanted to know about this: is there something REALLY bad about doing this? Or is there maybe a more common way to be able to act like a local admin while logged in as a domain user? PS: I'm sorry about the tags, but I don't know the proper ones. I'd be glad if some of the superuser experts could fix this :-)

    Read the article

  • Can I set up samba so it automatically allows all the local usernames and passwords?

    - by dialer
    I have set up Samba like this (this is the complete smb.conf): [global] log file = /var/log/samba/log log level = 2 security = user [homes] browsable = false read only = no valid users = %S I'd like to enable every user on the server to access their home directories, but for some unknown reason only my 'administrator' account can do so. (I have done that with FTP before, but now SMB is also needed.) When I try smbclient -L localhost -U [user], I get NT_STATUS_LOGON_FAILURE, except with the administrator (which is the user created during the Ubuntu installation, not root). The Samba log file says NT_STATUS_NO_SUCH_USER: [2012/04/04 20:26:02.081454, 2] smbd/reply.c:554(reply_special) netbios connect: name1=LOCALHOST 0x20 name2=DIALER-X 0x0 [2012/04/04 20:26:02.081733, 2] smbd/reply.c:565(reply_special) netbios connect: local=localhost remote=dialer-x, name type = 0 [2012/04/04 20:26:02.087200, 2] auth/auth.c:314(check_ntlm_password) check_ntlm_password: Authentication for user [public] - [public] FAILED with error NT_STATUS_NO_SUCH_USER I suspect that I have to manually create Samba users, but the man pages state that "If the client has passed a username/password pair and that username/password pair is validated by the UNIX system's password programs, the connection is made as that username." To me that sounds like, as long as the provided username/password is a valid login on the server, it should work. Am I missing something totally obvious? I don't want to / can't afford to manually update the Samba users and passwords to match the server's. (Ubuntu 11.10.)
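    A hedged guess at the cause (not confirmed by anything above): with security = user, Samba authenticates against its own password database rather than straight against the system accounts, so each Unix user normally has to be added to it once. A quick way to test that theory, with 'someuser' as a placeholder account name:

        sudo smbpasswd -a someuser   # add an existing Unix account to Samba's password database
        sudo pdbedit -L              # list the accounts Samba currently knows about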

    Read the article

  • CMake: the build system released as version 3.0, with new generators, variables, properties and better handling of cross-compilation

    CMake 3 is now available! Discover what's new in the cross-platform build system: new manual pages (including one for Qt), new generators and a number of other additions. CMake is an open-source, cross-platform build and project-generation system. From a simple CMakeLists.txt file describing your project, CMake can generate the project files for your favourite IDE. In short, it sets up your Visual Studio, Code::Blocks, ... project.

    Read the article

  • Jetty 7 hightide distribution, JSP and JSTL support

    - by Lior Cohen
    Hello guys. I've been struggling with Jetty 7 and its support for JSP and JSTL. My JSP file: <%@ page language="java" contentType="text/html; charset=utf-8" pageEncoding="utf-8" %> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %> <head> <title>blah</title> </head> <body> <table id="data"> <tr class="columns"> <td>Hour</td> <c:forEach var="campaign" items="${campaigns}"> <td>${campaign}</td> </c:forEach> </tr> <c:forEach var="hour" items="${results}"> <tr> <td class="hour">${hour.key}</td> <c:forEach var="campaign" items="${campaigns}"> <td>${hour[campaign]}</td> </c:forEach> </tr> </c:forEach> </table> </body> </html> The JSP portions above work as expected. JSTL, however, does not. The campaigns and results variables are request attributes set by a servlet. I get the following errors: WARN: ... compiler.TagLibraryInfoImpl: Unknown element (deferred-value) in attribute WARN: ... compiler.TagLibraryInfoImpl: Unknown element (deferred-value) in attribute WARN: ... compiler.TagLibraryInfoImpl: Unknown element (deferred-value) in attribute ERROR: ... javax.servlet.ServletException: java.lang.AbstractMethodError: javax.servlet.jsp.PageContext.getELContext()Ljavax/el/ELContext; I am not bundling any jar files into my .war file deployed to jetty. The version of jetty I'm using is: jetty-hightide-7.0.1.v20091125 The classpath: /usr/local/jetty/lib/jetty-xml-7.0.1.v20091125.jar:/usr/local/jetty/lib/servlet-api-2.5.jar:/usr/local/jetty/lib/jetty-http-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-continuation-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-server-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-security-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-servlet-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-webapp-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-deploy-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-servlets-7.0.1.v20091125.jar:/usr/local/jetty/lib/jsp/ant-1.6.5.jar:/usr/local/jetty/lib/jsp/core-3.1.1.jar:/usr/local/jetty/lib/jsp/jetty-jsp-2.1-7.0.1.v20091125.jar:/usr/local/jetty/lib/jsp/jsp-2.1-glassfish-9.1.1.B60.25.p2.jar:/usr/local/jetty/lib/jsp/jsp-api-2.1-glassfish-9.1.1.B60.25.p2.jar:/usr/local/jetty/resources:/usr/local/jetty/lib/jetty-util-7.0.1.v20091125.jar:/usr/local/jetty/lib/jetty-io-7.0.1.v20091125.jar Any help would be greatly appreciated. Thanks in advance, Lior.

    Read the article

  • In Applescript, why do local variables in handlers capture "with" labeled parameters?

    - by outis
    In Applescript, if you declare a handler using "with" labeled parameters, local variables get the values of the arguments and the parameters themselves are undefined. For example: on bam of thing with frst and scnd local eat_frst return {thing: thing, frst:frst, scnd:scnd} -- this line throws an error end bam bam of "bug-AWWK!" with frst without scnd results in an error message that "scnd" isn't defined in the second line of bam. thing and frst are both defined, getting the arguments passed in the call to bam. Why is this happening? Why is scnd undefined? Note: I know that declaring variables as "local" within a handler is unnecessary. It's done in the examples for illustrative purposes.

    Read the article

  • How to address thread-safety of service data used for maintaining static local variables in C++?

    - by sharptooth
    Consider the following scenario. We have a C++ function with a static local variable: void function() { static int variable = obtain(); //blahblablah } The function needs to be called from multiple threads concurrently, so we add a critical section to avoid concurrent access to the static local: void functionThreadSafe() { CriticalSectionLockClass lock( criticalSection ); static int variable = obtain(); //blahblablah } But will this be enough? I mean, there's some magic that ensures the variable is initialized no more than once. So there's some service data maintained by the runtime that indicates whether each static local has already been initialized. Will the critical section in the above code protect that service data as well? Is any extra protection required for this scenario?

    Read the article

  • The Execute SQL Task

    In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008). We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows: A tour of the Task. The properties of the Task. After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task: Returning a single value from a SQL query with two input parameters. Returning a rowset from a SQL query. Executing a stored procedure and retrieving a rowset, a return value, an output parameter value and passing in an input parameter. Passing in the SQL Statement from a variable. Passing in the SQL Statement from a file. Tour Of The Task Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image, a validation error appears telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. There are certain types of connection manager that are compatible with this task, so we cannot just create any connection manager; these are detailed a few graphics further on. Double click on the task itself to take a look at the custom user interface provided to us for this task. The task will open on the general tab as shown below. Take a bit of time to have a look around here as throughout this article we will be revisiting this page many times. Whilst on the general tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the Resultset property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL Statement into the task itself. We will have examples of each of these later on but certainly when we saw these for the first time we were very excited. Next to the SQLStatement property, if you click in the empty box next to it you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively, after you have specified a connection manager for the task you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the general tab, move on to the next tab which is the parameter mapping tab. We shall, again, be visiting this tab throughout the article but to give you an initial heads up this is where you define the input, output and return values from your task. Note this is not where you specify the resultset. If, however, you now move on to the ResultSet tab, this is where you define what variable will receive the output from your SQL Statement in whatever form that is.
    Property Expressions are one of the most amazing things to happen in SSIS and they will not be covered here as they deserve a whole article to themselves. Watch out for this as their usefulness will astound you. For a more detailed discussion of what should be the parameter markers in the SQL Statements on the General tab and how to map them to variables on the Parameter Mapping tab see Working with Parameters and Return Codes in the Execute SQL Task. Task Properties There are two places where you can specify the properties for your task. One is in the task UI itself and the other is in the property pane which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks as well as the package because of everything's base structure, the Container. BypassPrepare Should the statement be prepared before sending to the connection manager destination (True/False)? Connection This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package. DelayValidation Really interesting property: it tells the task to not validate until it actually executes. A usage for this may be that you are operating on a table yet to be created but at runtime you know the table will be there. Description Very simply the description of your Task. Disable Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself. DisableEventHandlers As a result of events that happen in the task, should the event handlers for the container fire? ExecValueVariable The variable assigned here will get or set the execution value of the task. Expressions Expressions, as we mentioned earlier, are a really powerful tool in SSIS and the graphic below shows us a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right, causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package, allowing you a great deal of flexibility. FailPackageOnFailure If this task fails, does the package? FailParentOnFailure If this task fails, does the parent container? A task can be hosted inside another container, i.e. the For Each Loop Container, and this would then be the parent. ForcedExecutionValue This property allows you to hard code an execution value for the task. ForcedExecutionValueType What is the datatype of the ForcedExecutionValue? ForceExecutionResult Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion. ForceExecutionValue Should we force the execution result? IsolationLevel This is the transaction isolation level of the task. IsStoredProcedure Certain optimisations are made by the task if it knows that the query is a Stored Procedure invocation. The docs say this will always be false unless the connection is an ADO connection. LocaleID Gets or sets the LocaleID of the container. LoggingMode Should we log for this container and what settings should we use?
    The value choices are UseParentSetting, Enabled and Disabled. MaximumErrorCount How many times can the task fail before we call it a day? Name Very simply the name of the task. ResultSetType How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML. SqlStatementSource Your Query/SQL Statement. SqlStatementSourceType The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables. TimeOut How long should the task wait to receive results? TransactionOption How should the task handle being asked to join a transaction? Usage Examples As we move through the examples we will only cover in them what we think you must know and what we think you should see. This means that some of the more elementary steps like setting up variables will be covered in the early examples but skipped and simply referred to in later ones. All these examples used the AdventureWorks database that comes with SQL Server 2005. Returning a Single Value, Passing in Two Input Parameters So the first thing we are going to do is add some variables to our package. The graphic below shows us those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query and EndDate and StartDate will be used as input parameters. As you can see all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope and remember a package is a container as well. Variable values of the parent container can be seen in child containers but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this task (ExecuteSQL Task Connection). The next thing is we have made sure that the SQLSourceType property is set to Direct Input as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The expression we typed in was: SELECT COUNT(*) AS CountOfEmployees FROM HumanResources.Employee WHERE (HireDate BETWEEN ? AND ?) Moving on now to the Parameter Mapping tab, this is where we are going to tell the task about our input parameters. We add them to the window, specifying their direction and datatype. A quick word here about the structure of the variable name. As you can see, SSIS has preceded the variable with the word User. This is a default namespace for variables but you can create your own. When defining your variables, if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options and one of them is namespace. After checking this you will now see where you can define your own namespace. The next tab, result set, is where we need to get back the value(s) returned from our statement and assign them to a variable, which in our case is CountOfEmployees, so we can use it later perhaps. Because we are only returning a single value, then if you remember from earlier we are allowed to assign a name to the resultset but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free.
    In our package we set a Breakpoint so we can break the package and have a look in a watch window at the variable values as they appear to our task and what the variable value of our resultset is after the task has done the assignment. Here's that window now. As you can see, the count of employees that matched the date range was 2. Returning a Rowset In this example we are going to return a resultset back to a variable after the task has executed, not just a single-row single value. There are no input parameters required so the variables window is nice and straightforward. One variable of type object. Here is the statement that will form the source for our Resultset. select p.ProductNumber, p.Name, pc.Name as ProductCategoryName FROM Production.ProductCategory pc JOIN Production.ProductSubCategory psc ON pc.ProductCategoryID = psc.ProductCategoryID JOIN Production.Product p ON psc.ProductSubCategoryID = p.ProductSubCategoryID We need to make sure that we have selected Full result set as the ResultSet, as shown below on the task's General tab. Because there are no input parameters we can skip the parameter mapping tab and move straight to the Result Set tab. Here we need to Add our variable defined earlier and map it to the result name of 0 (remember we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object and in later articles that is exactly what we shall be doing. Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a Full result set so we will not cover it again here. For this example we are going to need 4 variables: one for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to Add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that and we were amazed at how much easier it is than in DTS 2000. Passing in the SQL Statement from a Variable SSIS, as we have mentioned, is hugely more flexible than its predecessor, and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The ExecuteSQL task is no different. It will allow us to pass in a string variable as the SQL Statement. This variable value could have been set earlier on from inside the package or it could have been populated from outside using a configuration. The ResultSet property is set to single row and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General Tab again. Looking at the variables we have in this package you can see we have only two: one for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result name to our variable and this can be a named Result Name (the column name or alias returned by the query) and not 0.
    The expected result in our variable should be the number of rows in the Person.Contact table, and if we look in the watch window we see that it is. Passing in the SQL Statement from a File The final example we are going to show is a really interesting one. We are going to pass the SQL statement in to the task by using a file connection manager. The file itself contains the statement to run. The first thing we are going to need to do is create our file connection manager to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection". As you can see in the graphic below we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up, we can now see in the connection manager tray our file connection manager sitting alongside the OLE-DB connection we have been using for the rest of these examples. Now we can go back to the familiar General Tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for other examples, depending on the options chosen, so we will not cover them again here. We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.

    Read the article

  • Is there a "fancy" Ruby way to check whether a local variable is both defined and evaluates to true without using ands and ors?

    - by Steven Xu
    This is quite a quick question. I currently use do_this if (testvar ||= false), which works just fine. But this approach unnerves me because, while fast, it does not short-circuit like defined?(testvar) && testvar does, and it instantiates and assigns a value to a local variable that is subsequently never used, which seems inefficient. I enjoy the very reasonable fact that instance variables are nil before assignment, but I'd like for the situation to be just as easy for local variables.

    Read the article

  • Where can I hire local programmers with very specific skillsets?

    - by Lostsoul
    I have been browsing the site and haven't found an exact fit to this question, so I'll post it, but if it's already answered (since I'm sure it's a common problem), let me know. I have a business and want to create a totally different product in a different industry than I'm currently in, so I learned how to program and created a working prototype. I have a bit of savings and am getting some cash flow from my current business, so I can go out and hire a developer (in the future hopefully it can be permanent, but right now I just need a person willing to work on contract and code on weekends or in their spare time, and I just want to pay in cash instead of equity or future promises). At first I wasn't sure what kind of developer to hire, but this question helped me understand I should target specific skills I need as opposed to general programmers. This poses a problem for me since general programmers are everywhere, but if I want specific skills I'm unsure how to get them. I thought about a list of approaches but it doesn't feel complete or effective, since it seems to be assuming good developers are actively looking. If it helps, I want someone local (since this is my first developer hire) and I'm looking for skills like CUDA, Hadoop, HBase, Java and C. Any suggestions? As an FYI, I have been thinking of approaching it as: Go to meetups for one or more skills I need. Use LinkedIn to find people with the skills I need. Search for job postings that contain the skills I need and then use LinkedIn to reach out to those firms' employees, since many profiles on LinkedIn are not very updated or detailed but job postings generally are. Send postings to universities and maybe find a student who loves technology so much they learned these tools on their own. Post on a job board (not sure how successful it will be to post to Monster). Use Craigslist (not sure if a highly skilled developer would go here for work). What am I missing? I could be wrong, but it seems like good/smart/able developers aren't hunting for work non-stop (especially in this tech job market). Plus most successful people I know have work/life balance, so I'm not sure if the best ones really care about code after work. Lastly, most of the skills I need aren't used in big corporations, so I'm not sure how aggressively smart developers at small shops look for work. I don't really know any developers personally, so should I be using the above plan, or, if they live balanced lives, should I be looking outside of the regular resources (and instead focus on asking around my gym or my accountant or something)? Sorry, I'm making huge assumptions here, I guess because developers are a total mystery to me. I kind of wish Jane Goodall wrote a book on understanding developers' social behaviour better :-p

    Read the article

  • How to deal with MySQL Connector/ODBC error "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'"

    - by user12653020
    I am sure many users run into a mysterious problem when perfectly working ODBC configurations started failing with errors like: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' The above error message might be preceded with something like [nxDc[yQ]. At the same time odbc.ini specifies in its DSN different SOCKET=/tmp/mysql.sock or a TCP connection SERVER=<remote_host_or_ip>. The question is, what had happened that the ODBC driver started to ignore the DSN options? The clue lies in the corrupted string [nxDc[yQ], which actually was [UnixODBC][MySQL] with each 2nd symbol removed. This is the case of bad conversion from SQLCHAR to SQLWCHAR. The UnixODBC driver manager took a single-byte character string from the client application and tried to convert it into the wide (multi-byte) characters for the Unicode version of MyODBC driver: Initially the piece of the connection string was represented by 1-byte chars like: [S][E][R][V][E][R][=][m][y][h][o][s][t][;] after the bad conversion to wide chars (commonly 2-byte UTF-16) [SE][RV][ER][=m][yh][os][t;] instead of [S\0][E\0][R\0][V\0][E\0][R\0][=\0][m\0][y\0][h\0][o\0][s\0][t\0][;\0] Naturally, the MyODBC driver could not parse the bad string and tried to use the default connection type (SOCKET) with the default value (/var/lib/mysql/mysql.sock) Now we know what happened, but why it happened? In most cases it happened because of using ODBCManageDataSourcesQ4 utility or its older analog ODBCConfig. When registering ODBC drivers they put lots of additional options and one of these options badly affects the UnixODBC driver manager itself. The solution is simple - remove or comment out the option in odbcinst.ini file (it is empty by default) set for the driver: [MySQL ODBC 5.2.6 Driver] Description    = Driver         = /home/dbs/myodbc526/lib/libmyodbc5w.so Driver64       = /home/dbs/myodbc526/lib/libmyodbc5w.so Setup          = /home/dbs/myodbc526/lib/libmyodbc5S.so Setup64        = /home/dbs/myodbc526/lib/libmyodbc5S.so UsageCount     = 1 CPTimeout      = 0 CPTimeToLive   = 0 IconvEncoding  =  # <--------- remove this line Trace          = TraceFile      = TraceLibrary   = After applying this simple solution (remove the line with IconvEncoding = ) everything came to normal. Prior to removing that line I tried putting different encoding names there, but the result was not good, so I really don't know how to properly use it. Unfortunately, UnixODBC manuals say nothing about it. Therefore, removing this option was the only way to get things done.
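    For what it's worth, unixODBC's own command-line tools give a rough way to verify a change like this (the DSN name and credentials below are placeholders):

        odbcinst -q -d                       # list the drivers registered in odbcinst.ini
        odbcinst -q -s                       # list the configured data sources
        isql -v MyDSN myuser mypassword      # attempt a connection through the driver manager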

    Read the article

  • Getting PATH right for python after MacPorts install

    - by BenjaminGolder
    I can't import some python libraries (PIL, psycopg2) that I just installed with MacPorts. I looked through these forums, and tried to adjust my PATH variable in $HOME/.bash_profile in order to fix this, but it did not work. I added the location of PIL and psycopg2 to PATH. I know that Terminal is using a version of python in /usr/local/bin, rather than the one installed by MacPorts at /opt/local/bin. Do I need to use the MacPorts version of Python in order to ensure that PIL and psycopg2 are on sys.path when I use python in Terminal? Should I switch to the MacPorts version of Python, or will that cause more problems? In case it is helpful, here are more facts: PIL and psycopg2 are installed in /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages. which python returns /usr/bin/python. echo $PATH returns (I separated each path for easy reading): :/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/ :/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages :/opt/local/bin :/opt/local/sbin :/usr/local/git/bin :/usr/bin :/bin :/usr/sbin :/sbin :/usr/local/bin :/usr/local/git/bin :/usr/X11/bin :/opt/local/bin In python, sys.path returns: /Library/Frameworks/SQLite3.framework/Versions/3/Python /Library/Python/2.6/site-packages/numpy-override /Library/Frameworks/GDAL.framework/Versions/1.7/Python/site-packages /Library/Frameworks/cairo.framework/Versions/1/Python /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python26.zip /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6 /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-darwin /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plat-mac/lib-scriptpackages /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-tk /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-old /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/lib-dynload /Library/Python/2.6/site-packages /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/PyObjC /System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python/wx-2.8-mac-unicode I welcome any criticism and comments if any of the above looks foolish or poorly conceived; I'm new to all of this. Thanks! Running OS X 10.6.5 on a MacBook Pro, invoking python 2.6.1 from Terminal.
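    A quick sanity check, sketched under the assumption that the MacPorts interpreter is the 2.6 one implied by the paths above: run it explicitly and see whether the modules import; if they do, the question is only which python the shell finds first.

        /opt/local/bin/python2.6 -c "import PIL, psycopg2; print 'imports ok'"

        # If that works, switching the default interpreter is done with the selection
        # mechanism that matches your MacPorts release (one of these usually exists):
        sudo port select --set python python26
        # or: sudo python_select python26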

    Read the article

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines: local-machine: it's my desktop running Ubuntu with Gnome; remote-machine: it's one virtual machine, also running Ubuntu but without X. On both machines I have my private and public SSH keys. I need to run SSH from remote-machine to local-machine and run gedit (on local-machine, under the default $DISPLAY) but opening a file on remote-machine through SFTP. Something like this: myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file" The command above doesn't work. gedit shows this message: Could not open the file sftp://remote-machine/some/file. gedit cannot handle sftp: locations. Note that: /some/file exists on remote-machine. I can SSH normally from remote-machine to local-machine using my SSH key without any problems! I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine and gedit opens the file on remote-machine without any problems - but the terminal in which I executed the command is running in DISPLAY :0 (really, it's gnome-terminal). I also tried the -t option of the SSH client (to force pseudo-tty allocation) but it didn't work. If I try to run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example in tty1, by pressing <Ctrl>+<Alt>+<F1>) it does not work - I get the same error as when running from remote-machine. I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like this: myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt myuser@local-machine:~$ scp env.txt remote-machine: and then: myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file" it works! The problem is that I'm not on local-machine, so I can't get the correct value for this env variable. Is there any other way to make this work?
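    One rough alternative to copying env.txt around (assuming a /proc filesystem and a gnome-session process owned by myuser on local-machine) is to read the bus address out of a running session process as part of the remote command:

        ssh local-machine '
          pid=$(pgrep -u myuser gnome-session | head -n 1)
          export $(tr "\0" "\n" < /proc/$pid/environ | grep ^DBUS_SESSION_BUS_ADDRESS)
          DISPLAY=:0.0 gedit sftp://remote-machine/some/file
        '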

    Read the article

  • Cannot connect to local network shares when connected to VPN. Error: "the user name could not be found"

    - by Nick G
    I keep finding that on our small company LAN (7 users, 3 servers) some servers keep becoming "not accessible" for the purposes of file sharing. They display the message "\\SERVER is not accessible. You might not have permission to use this network resource. The user name could not be found". But I don't know why "the user name could not be found", as all the machines are on the same domain and the PDC and BDC seem to be behaving OK. EDIT: VPN seems to be the cause: It turns out I can see the server if I use the IP address (\\1.2.3.4\ etc) or the fully qualified Active Directory name (eg \\server.domainname.local) but not if I use the server name on its own or a mapped network drive originally created from the "short" name. Oddly though, my machine has no issue resolving the server's DNS name, as I can ping the machine name OK and it immediately comes back with the IP; however, nslookup seems to fail. It seems to be a problem with how Windows looks up machine names when connected to VPNs. When I'm connected to a VPN, Windows seems to use the DNS associated with the VPN and not the one on the domain controller. This behavior, to me, seems incorrect, as surely that would mean connecting to any VPN would break the ability to look up local machine names for servers and printers etc. So I guess the real question now is: how can I make my machine still search the local Active Directory DNS (the PDC) even when connected to a VPN? More info in my comments below.

    Read the article

  • Convert raw IMAP server data into local folders, then upload partial dataset to new IMAP server?

    - by Manca Weeks
    I am transitioning a company with about 30 IMAP accounts, loaded with data (about 77GB total), to a new email host. The majority of the data will be converted into a local archive and distributed to the company computers as a static reference data set. The server-side folders that the users absolutely cannot do without having on the server will be uploaded back to the new server. I used Mac OS X Mail (Snow Leopard 10.6.6) to download the content. I notice some messages have the name [xxx].partial.emlx, which leads me to believe they have not been downloaded all the way. I have root access to the mail server data and could download the IMAP server data via FTP. I am not sure what utility to use to convert that data to local Mail.app mailboxes. Furthermore, I would appreciate any input on the best way to upload a portion of the data to the new server (GoDaddy), preserving the original dates of the messages. Edit: OK - forget the raw server data. I found a script that apparently does a pretty good job of archiving IMAP folders to local mbx files. My main quest now is to batch upload a mailbox hierarchy to the new IMAP server without having to start-stop and deal with similar issues. Anyone know of a utility (hopefully for OS X, but if not, I'll fire up my XP virtual system...) that would be capable of this? Thanks, M
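    One tool often used for exactly this kind of server-to-server transfer (not something mentioned above, so treat it as a suggestion) is imapsync, which preserves message dates and flags; a rough invocation, with all host names and credentials as placeholders, looks like:

        imapsync --host1 imap.old-host.example --user1 someuser --password1 'secret1' \
                 --host2 imap.new-host.example --user2 someuser --password2 'secret2'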

    Read the article

  • Need to stop MySQL server on my Mac OS X

    - by al0ne evenings
    I just installed XAMPP on my Mac OS X. When I tried to start MySQL it displayed a message that MySQL is already running on this computer, and that in order to start MySQL I should first stop the running MySQL. I tried the following ways to stop it but none of them works: mysqladmin version sudo /usr/local/mysql/mysql.server stop //mysql.server command not found mysqladmin -u root -p password shutdown //restarts the server but not shutdown When I use the which mysql command it shows this path: /usr/local/bin/mysql, and when I issue the ps aux | grep mysqld command I get the following output: zafarsaleem 85209 0.0 0.3 2699804 13204 ?? S 7:51AM 0:00.88 /Applications/MAMP/Library/bin/mysqld --basedir=/Applications/MAMP/Library --datadir=/Applications/MAMP/db/mysql --plugin-dir=/Applications/MAMP/Library/lib/plugin --lower-case-table-names=0 --log-error=/Applications/MAMP/logs/mysql_error_log.err --pid-file=/Applications/MAMP/tmp/mysql/mysql.pid --socket=/Applications/MAMP/tmp/mysql/mysql.sock --port=8889 zafarsaleem 85093 0.0 0.0 2435488 924 ?? S 7:51AM 0:00.03 /bin/sh /Applications/MAMP/Library/bin/mysqld_safe --port=8889 --socket=/Applications/MAMP/tmp/mysql/mysql.sock --lower_case_table_names=0 --pid-file=/Applications/MAMP/tmp/mysql/mysql.pid --log-error=/Applications/MAMP/logs/mysql_error_log zafarsaleem 86693 0.0 0.0 2425480 180 s004 R+ 8:30AM 0:00.00 grep mysqld zafarsaleem 86507 0.0 0.3 2678756 11364 ?? S 8:07AM 0:00.63 /usr/local/Cellar/mysql/5.5.20/bin/mysqld --basedir=/usr/local/Cellar/mysql/5.5.20 --datadir=/usr/local/var/mysql --plugin-dir=/usr/local/Cellar/mysql/5.5.20/lib/plugin --max-allowed-packet=32M --log-error=/usr/local/var/mysql/Zafars-MacBook-Pro-2.local.err --pid-file=/usr/local/var/mysql/Zafars-MacBook-Pro-2.local.pid zafarsaleem 86447 0.0 0.0 2435488 920 ?? S 8:07AM 0:00.02 /bin/sh /usr/local/bin/mysqld_safe --max_allowed_packet=32M Please help. How can I resolve this issue?
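    Two separate servers show up in that ps output: one started by MAMP and one installed under /usr/local/Cellar (Homebrew). A rough way to stop each of them (the script paths are assumptions based on typical installs):

        sudo /Applications/MAMP/bin/stopMysql.sh                           # MAMP's copy
        /usr/local/Cellar/mysql/5.5.20/support-files/mysql.server stop     # the Homebrew copy
        sudo kill <pid>    # last resort: use the PIDs shown by ps aux | grep mysqld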

    Read the article

  • Active Directory Restricted Group confusion

    - by pepoluan
    I am trying to implement Restricted Group policy for my company's AD infrastructure, namely standardizing the local "Administrators" group. The documentation (and various webpages) said that the "Members of this group" policy will wipe out the "Administrators" group. However, an experiment made me confused. I created 2 GPOs: GPO-A replaces the Local Administrators with a list of domain users (e.g., "Alice" and "Bob"); GPO-B inserts a domain user (e.g., "Charlie" -- not part of GPO-A) into the Local Administrators. Experiment 1: GPO-A gets applied first (link order 2). Everything happens as expected: GPO-A cleans out Local Admins and adds "Alice" & "Bob"; GPO-B adds "Charlie". Experiment 2: GPO-B is applied first. What happens: "Charlie" gets added to the Local Admins group (which also contains 2 local users); the local users on the PC get deleted, and "Alice" and "Bob" get added. Result: Local Admins contain "Alice", "Bob", and "Charlie". My confusion: In Experiment 2, I thought GPO-A would totally erase the Local Admins group, including users added by GPO-B (since GPO-A gets applied after GPO-B). As it happens, it only erases local users from the Local Admins, but keeps the domain users. So, is that the way it should be? Or am I doing something incorrectly?

    Read the article

  • Why can a local root turn into any LDAP user?

    - by Daniel Gollás
    I know this has been asked here before, but I am not satisfied with the answers and don't know if it's ok to revive and hijack an older question. We have workstations that authenticate users on an LDAP server. However, the local root user can su into any LDAP user without needing a password. From my perspective this sounds like a huge security problem that I would hope could be avoided at the server level. I can imagine the following scenario where a user can impersonate another and don't know how to prevent it: UserA has limited permissions, but can log into a company workstation using their LDAP password. They can cat /etc/ldap.conf and figure out the LDAP server's address and can ifconfig to check out their own IP address. (This is just an example of how to get the LDAP address, I don't think that is usually a secret and obscurity is not hard to overcome) UserA takes out their own personal laptop, configures authentication and network interfaces to match the company workstation and plugs in the network cable from the workstation to their laptop, boots and logs in as local root (it's his laptop, so he has local root) As root, they su into any other user on LDAP that may or may not have more permissions (without needing a password!), but at the very least, they can impersonate that user without any problem. The other answers on here say that this is normal UNIX behavior, but it sounds really insecure. Can the impersonated user act as that user on an NFS mount for example? (the laptop even has the same IP address). I know they won't be able to act as root on a remote machine, but they can still be any other user they want! There must be a way to prevent this on the LDAP server level right? Or maybe at the NFS server level? Is there some part of the process that I'm missing that actually prevents this? Thanks!!
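    Since the question asks about blocking this at the NFS server level: classic AUTH_SYS NFS simply trusts whatever UID the client sends, so the usual mitigations are root squashing plus Kerberized mounts. A minimal /etc/exports sketch, assuming a Linux nfs-kernel-server and a made-up path and subnet:

        /export/home  192.168.0.0/24(rw,sec=krb5p,root_squash)

    With sec=krb5/krb5p each user needs their own Kerberos ticket, so a local root who merely switches UIDs cannot act as that user on the mount.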

    Read the article

  • OpenVPN with MacOS X Client and same subnets in local and remote net.

    - by Daniel
    I have a home network 192.168.1.0/24 with gateway 192.168.1.1 and a remote network with the same parameters. Now I want to create an OpenVPN tunnel between those networks. I have no problems with Windows, because Windows routes everything to 192.168.1.0/24 except 192.168.1.1 through the tunnel. On MacOS X, however, I see the following line in the Details window: 2010-05-10 09:13:01 WARNING: potential route subnet conflict between local LAN [192.168.1.0/255.255.255.0] and remote VPN [192.168.1.0/255.255.255.0] When I list the routes I get the following: Internet: Destination Gateway Flags Refs Use Netif Expire default 192.168.1.1 UGSc 13 3 en1 127 localhost UCS 0 0 lo0 localhost localhost UH 12 3589 lo0 169.254 link#5 UCS 0 0 en1 192.168.1 link#5 UCS 1 0 en1 192.168.1.1 0:1e:e5:f4:ec:7f UHLW 13 17 en1 1103 192.168.1.101 localhost UHS 0 0 lo0 192.168.6 192.168.6.5 UGSc 0 0 tun0 192.168.6.5 192.168.6.6 UH 1 0 tun0 My interfaces are: en1 - my local WiFi network; tun0 - the tunnel interface. As can be seen from the routes above, there is no entry for 192.168.1.0/24 that routes the traffic through the tunnel interface. When I manually route a single IP like 192.168.1.16 over the tunnel gateway 192.168.6.6, this works. Q: How do I set up my routes in MacOS X for the same behaviour as on Windows, to route everything except 192.168.1.1 through the tunnel, but leave the default gateway to be my local 192.168.1.1?
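    One direction often suggested (a sketch only, not verified against this exact setup): add two more-specific /25 routes over the tunnel so they win over the connected /24 without removing it, and pin the real gateway to the LAN interface:

        sudo route add -net 192.168.1.0/25 -interface tun0
        sudo route add -net 192.168.1.128/25 -interface tun0
        sudo route add -host 192.168.1.1 -interface en1   # keep the local gateway off the tunnel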

    Read the article

  • OS X: Storing MySQL data securely, on an encrypted FileVault image using a soft link

    - by GJ
    I am trying to get a macports-installed MySQL to use a data directory stored inside my FileVault-protected home dir. I used sudo cp -a /opt/local/var/db/mysql5 ~/db/ (the -a to ensure file permissions remain intact) and then replaced the original mysql5 directory with a soft link: sudo ln -s ~/db/mysql5 /opt/local/var/db/mysql5 However, when I now try to start MySQL it fails. It follows the soft link at least to the extent that it modifies some files in the ~/db/mysql5 dir, notably the error log which gets appended to it this: 110108 15:33:08 mysqld_safe Starting mysqld daemon with databases from /opt/local/var/db/mysql5 110108 15:33:08 [Warning] '--skip-locking' is deprecated and will be removed in a future release. Please use '--skip-external-locking' instead. 110108 15:33:08 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead. 110108 15:33:08 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead. 110108 15:33:08 [Warning] Setting lower_case_table_names=2 because file system for /opt/local/var/db/mysql5/ is case insensitive 110108 15:33:08 [Note] Plugin 'FEDERATED' is disabled. 110108 15:33:08 [Note] Plugin 'ndbcluster' is disabled. /opt/local/libexec/mysqld: Table 'mysql.plugin' doesn't exist 110108 15:33:08 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110108 15:33:09 InnoDB: Started; log sequence number 4 1596664332 110108 15:33:09 [ERROR] /opt/local/libexec/mysqld: Can't create/write to file '/opt/local/var/db/mysql5/mac.local.pid' (Errcode: 13) 110108 15:33:09 [ERROR] Can't start server: can't create PID file: Permission denied 110108 15:33:09 mysqld_safe mysqld from pid file /opt/local/var/db/mysql5/gPod.local.pid ended I can't see why MySQL can't create the pid file, since manually creating it using the _mysql user succeeds (sudo -u _mysql touch mac.local.pid from inside ~/db/mysql5) Any ideas how to resolve this?
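    For what it's worth, Errcode 13 points at directory permissions rather than MySQL itself: the _mysql user must be able to traverse every directory on the path to the datadir, and a FileVault-protected home directory is normally mode 700. A rough way to test that assumption (the group name and the privacy trade-off of opening up the home directory are yours to judge):

        sudo -u _mysql ls ~/db/mysql5               # can _mysql even reach the directory?
        chmod g+x,o+x ~ ~/db                        # grant search permission on the parent directories
        sudo chown -R _mysql:_mysql ~/db/mysql5     # make sure ownership survived the copy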

    Read the article

  • Permission denied problem in Freenas + Transmission

    - by Torbjörn Karlsson
    Running FreeNAS 0.7.2 (5543) and Transmission 2.11. The problem is that I cannot save a torrent wherever I want. For example, I can save in /nmt/1-500gb/Tv/dexter but I cannot save in /nmt/4-1000gb/tv/Lost. When I try to save in the Lost folder I get a permission denied error in the web interface, but when I try to save the same torrent file in the dexter folder everything works fine. This is probably an easy thing to fix, but I'm new to FreeNAS. The user name for Transmission is TorrentUser, if that helps. Now I find that I cannot browse the disk in Quixplorer: I can browse nmt/4-1000gb/ but not /nmt/1-500gb. When I try to browse the nmt/4-1000gb/ I get "Unable to read directory". $ mount /dev/md0 on / (ufs, local) devfs on /dev (devfs, local) procfs on /proc (procfs, local) /dev/fuse1 on /mnt/5 - 500gb (fusefs, local, synchronous) /dev/fuse2 on /mnt/2 - 1000gb (fusefs, local, synchronous) /dev/fuse3 on /mnt/3 - 1000gb (fusefs, local, synchronous) /dev/fuse4 on /mnt/4 - 1000gb (fusefs, local, synchronous) /dev/fuse5 on /mnt/320GB - USB (fusefs, local, synchronous) /dev/md1 on /var (ufs, local) /dev/da0a on /cf (ufs, local, read-only) /dev/fuse0 on /mnt/1 - 500gb (fusefs, local, synchronous) Don't work: 1 - 500gb, 2 - 1000gb, 3 - 1000gb. Works: 320GB - USB, 4 - 1000gb, 5 - 500gb. And these 3 disks are the same disks that I can save my torrents to. PS: Every disk works perfectly when I use FTP...
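    A rough first check from the FreeNAS shell (the folder names below are taken from the question; the mount output shows the real mount points contain spaces, so quote accordingly, and note these are fusefs mounts, where permissions may be governed by the mount options rather than by chown):

        ls -ld "/mnt/1 - 500gb/Tv/dexter" "/mnt/4 - 1000gb/tv/Lost"   # compare owner and mode
        chown -R TorrentUser "/mnt/4 - 1000gb/tv/Lost"                # if ownership is the difference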

    Read the article

  • OS X: MySQL not dealing properly with data directory via soft link

    - by GJ
    I am trying to get a macports-installed MySQL to use a data directory stored inside my FileVault-protected home dir. I used sudo cp -a /opt/local/var/db/mysql5 ~/db/ (the -a to ensure file permissions remain intact) and then replaced the original mysql5 directory with a soft link: sudo ln -s ~/db/mysql5 /opt/local/var/db/mysql5 However, when I now try to start MySQL it fails. It follows the soft link at least to the extent that it modifies some files in the ~/db/mysql5 dir, notably the error log which gets appended to it this: 110108 15:33:08 mysqld_safe Starting mysqld daemon with databases from /opt/local/var/db/mysql5 110108 15:33:08 [Warning] '--skip-locking' is deprecated and will be removed in a future release. Please use '--skip-external-locking' instead. 110108 15:33:08 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead. 110108 15:33:08 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead. 110108 15:33:08 [Warning] Setting lower_case_table_names=2 because file system for /opt/local/var/db/mysql5/ is case insensitive 110108 15:33:08 [Note] Plugin 'FEDERATED' is disabled. 110108 15:33:08 [Note] Plugin 'ndbcluster' is disabled. /opt/local/libexec/mysqld: Table 'mysql.plugin' doesn't exist 110108 15:33:08 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110108 15:33:09 InnoDB: Started; log sequence number 4 1596664332 110108 15:33:09 [ERROR] /opt/local/libexec/mysqld: Can't create/write to file '/opt/local/var/db/mysql5/mac.local.pid' (Errcode: 13) 110108 15:33:09 [ERROR] Can't start server: can't create PID file: Permission denied 110108 15:33:09 mysqld_safe mysqld from pid file /opt/local/var/db/mysql5/gPod.local.pid ended I can't see why MySQL can't create the pid file, since manually creating it using the _mysql user succeeds (sudo -u _mysql touch mac.local.pid from inside ~/db/mysql5) Any ideas how to resolve this?

    Read the article

  • Why does Google Chrome ignore "last_known_google_url" property in "Local State" file?

    - by Peter Sivák
    I want to force my Google Chrome web browser (version 21.0.1180.89, 64-bit) to use non-localized search (thus Google in English) through the address bar, using the default Google search engine. To achieve that, I have to change the value of the property last_known_google_url to https://www.google.com/?hl=en& in the Local State file (for instance on Linux, the full path to the file is ~/.config/google-chrome/Local State). In that file, there should be the property: "browser": { "last_known_google_url": but it is not there. Even if I add the property there, it has no impact on search - Google Chrome does not use the property and still searches in the localized version. Another option is to put the property in the Preferences file (for instance on Linux, the full path to the file is ~/.config/google-chrome/Default/Preferences) - which works perfectly when I start Google Chrome and do a search - but just after that, the property (actually the whole Preferences file) is overridden, so "the most important" trailing part ?hl=en& of the property value is removed - and without it, the non-localized search does not work anymore. Why does Google Chrome ignore the last_known_google_url property in the Local State file?

    Read the article

  • FreeBSD 8 and Samba 3.3: Samba seems to crash/not start?

    - by scraft3613
    I am new to both FreeBSD and Samba, and with that in mind ... I installed Samba 3.3 from Ports. I have this in my rc.conf: #samba nmbd_enable="YES" smbd_enable="YES" winbindd_enable="YES" I can see an active PID: prod1# cat /var/run/smbd.pid 24426 But it seems like smbd isn't running: prod1# ps -auwxx | egrep '[sn]mbd' root 24513 0.0 0.3 21108 4672 ?? Ss Sat02PM 0:00.71 /usr/local/sbin/nmbd -D -s /usr/local/etc/smb.conf If I restart Samba with /usr/local/etc/rc.d/samba restart it runs: prod1# ps -auwxx | egrep '[sn]mbd' root 30188 0.0 0.3 21080 4700 ?? Ss 2:49PM 0:00.00 /usr/local/sbin/nmbd -D -s /usr/local/etc/smb.conf root 30196 0.0 0.6 35520 10952 ?? Ss 2:49PM 0:00.00 /usr/local/sbin/smbd -D -s /usr/local/etc/smb.conf root 30198 0.0 0.6 35520 10880 ?? S 2:49PM 0:00.00 /usr/local/sbin/smbd -D -s /usr/local/etc/smb.conf Until I do: prod1# smbclient -L prod1 Connection to prod1 failed (Error NT_STATUS_CONNECTION_REFUSED) prod1# prod1# ps -auwxx | egrep '[sn]mbd' root 30188 0.0 0.3 21080 4700 ?? Ss 2:49PM 0:00.00 /usr/local/sbin/nmbd -D -s /usr/local/etc/smb.conf What should I be checking to find out what's going on?
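    A few rough first checks, assuming the default paths of the FreeBSD Samba port (the log location comes from the smb.conf above):

        /usr/local/bin/testparm /usr/local/etc/smb.conf            # validate the config
        tail /var/log/samba/log*                                   # look for the reason smbd exited
        /usr/local/sbin/smbd -i -d 3 -s /usr/local/etc/smb.conf    # run smbd in the foreground with debug output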

    Read the article
