Search Results

Search found 10755 results on 431 pages for 'cluster shared volume'.


  • Phantom activity on MySQL

    - by LoveMeSomeCode
    This is probably just my total lack of MySQL expertise, but is it typical to see lots of phantom activity on a MySQL instance via phpMyAdmin? I have a shared hosting plan through Lithium, and when I log in through the phpMyAdmin console and click on the 'Status' tab, it's showing crazy high numbers for queries. Within an hour of activating my account I had 1 million queries. At first I thought this was them setting things up, but the number is climbing constantly, averaging 170/second. I've got a support ticket in with Lithium, but I thought I'd ask here if this were a MySQL/shared host thing, because I had the same thing happen with a shared hosting plan through Joyent.
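    One thing worth checking (hosts differ, so treat this as a hedged guess): phpMyAdmin's Status tab reports counters for the whole MySQL server, so on a shared instance you are seeing every customer's queries, not just your own. If you have shell access, a minimal sketch of the same check with the mysql CLI (the user name is a placeholder):

        mysql -u myuser -p -e "SHOW GLOBAL STATUS LIKE 'Questions'; SHOW GLOBAL STATUS LIKE 'Uptime';"
        # approximate queries/second = Questions / Uptime, measured server-wide,
        # which would explain a high, constant rate you never generated yourself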

    Read the article

  • How to create .so files from code written in C or C++ that are usable from Python

    - by None
    Looking at Python modules and at code in the "lib-dynload" directory in the Python framework, I noticed that whenever code creates some kind of GUI or graphic, it imports a non-Python file with a .so extension, and there are tons of .so files in "lib-dynload". From googling I found that these files are called shared objects and are written in C or C++. I have a Mac and I use gcc. I want to know how to make shared object files that are accessible from Python; mainly, just how to make shared objects with gcc on Mac OS X.
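    For simple C functions (no Python C API involved), ctypes is the easiest route. A minimal sketch, assuming gcc and a stock Python on OS X (file and function names are made up):

        # hello.c (two lines):
        #   #include <stdio.h>
        #   void greet(void) { printf("hello from C\n"); }
        gcc -shared -fPIC -o libhello.so hello.c   # -dynamiclib also works on OS X
        # load and call it from Python via ctypes
        python -c "import ctypes; ctypes.CDLL('./libhello.so').greet()"

    The .so files in lib-dynload are a different beast: they are full extension modules written against the Python C API and compiled with Python's headers, which is what you would need for a direct "import mymodule" to work.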

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview
    - With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    - The Windows Azure subscription is charged like any other Azure service: for the time that the role instances are available, as well as for the compute and storage services used on the nodes.
    - Win-win? Azure charges the compute-hour cost (according to VM size) amortized over a month, so you save on purchasing compute-node hardware; Microsoft wins because you need to purchase HPC to have a local head node for managing this compute grid distributed in the cloud.
    - Blob storage is used to hold the input and output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
    - This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a 3rd party from doing this and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

    Process
    - Requires creating a worker node template that specifies a connection to an existing Windows Azure subscription plus an availability policy for the worker nodes.
    - After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    - A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system, which runs on the VMs that host your service. The guest OS is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released; all role instances defined by your service will run on the guest OS version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    - Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements
    - Assumes you have an Azure subscription account and the HPC head node installed and configured.
    - Install HPC Pack 2008 R2 SP1. See Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    - Configure the head node to connect to the Internet. Connectivity is provided by the connection of the head node to the enterprise network; you may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    - Configure the firewall: allow outbound TCP traffic on ports 80, 443, 5901, 5902, 7998, and 7999.
    - Note: HPC Server uses Admin Mode (elevated privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    - Obtain a Windows Azure subscription certificate: the subscription must be configured with a public subscription (API) certificate, a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate and upload the .cer file via the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    - Import the certificate, with its associated private key, on the HPC cluster head node, into the trusted root store of the local computer account.

    Obtain Windows Azure Connection Information for HPC Server
    - Required for each worker node template; copy it from the Azure Portal (navigation pane > Hosted Services > Storage Accounts & CDN).
    - Subscription ID: a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx (Properties pane).
    - Subscription certificate thumbprint: a 40-char hex string (you need to remove the spaces); Management Certificates > Properties pane.
    - Service name: the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net); Hosted Services > Properties pane.
    - Blob storage account name: the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net); Storage Accounts > Properties pane.

    Import the Azure Subscription Certificate on the HPC Head Node
    - This enables the Windows HPC Server services to authenticate properly with the Windows Azure subscription.
    - Use the Certificates MMC snap-in to import the certificate into the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (a .pfx or .p12 file) with a private key that is protected by a password. See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    - To open the Certificates snap-in: Run > mmc, then File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    - To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
    - After the certificate is imported, it appears in the details pane of the Certificates snap-in; you can open the certificate to check its status.

    Configure a Proxy Client on the HPC Head Node
    - The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

    Create a Windows Azure Worker Node Template
    - Edit HPC node templates in HPC Node Template Editor.
    - Specify: 1) the Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) the worker node availability policy, i.e. rules for deploying/removing worker role instances in Windows Azure:
      - HPC Cluster Manager > Configuration > navigation pane > Node Templates > Actions pane > New (Create Node Template Wizard) or Edit (Node Template Editor).
      - Choose Node Template Type page: Windows Azure worker node template.
      - Specify Template Name page: template name and description.
      - Provide Connection Information page: Azure subscription ID (text) and subscription certificate (browse).
      - Provide Service Information page: Azure service name plus blob storage account name (optionally click Retrieve Connection Information to get the list of those available from Azure; possible LRT).
      - Configure Azure Availability Policy page: how Windows Azure worker nodes start/stop (bring the worker role instances online/offline, i.e. add/remove them), manually or automatically. For automatic, select the days and hours for worker nodes to start/stop in the Configure Windows Azure Worker Availability Policy dialog.
    - To validate the Windows Azure connection information, go to the template's Connection Information tab > Validate connection information.
    - You can upload a file package to the storage account that is specified in the template, e.g. application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Add Azure Worker Nodes to the HPC Cluster
    - Use the Add Node Wizard and specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    - To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    - All worker nodes added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    - To add Windows Azure worker nodes:
      - In HPC Cluster Manager: Node Management > Actions pane > Add Node (Add Node Wizard).
      - Select Deployment Method page: Add Azure Worker nodes.
      - Specify New Nodes page: select a worker node template, then specify the number and size of the worker nodes.
    - After you add worker nodes to the cluster, they are in the Not-Deployed state with a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    - Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN; this is non-configurable.

    Deploying Windows Azure Worker Nodes
    - To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the worker node template (Azure Availability Policy) to be automatic or manual.
    - The Start, Stop, and Delete actions take place on the set of worker nodes configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set, and you cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    - Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    - To start worker nodes manually and bring them online:
      - In HPC Node Management > navigation pane > Nodes > List / Heat Map view, select one or more worker nodes.
      - Actions pane > Start; in the Start Azure Worker Nodes dialog, select a node template.
      - The state of the worker nodes changes from Not Deployed; track the provisioning progress in the worker node Details pane > Provisioning Log tab.
      - If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
      - After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    - Troubleshooting:
      - Check the node template.
      - Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
      - Check node status: deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see the node status information for any failed nodes in HPC Node Management.
    - When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    - To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    - Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added; however, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
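    As a quick follow-up to the firewall requirement above, here is a hedged connectivity check (run from a Linux box behind the same firewall; ServiceName is a placeholder) that exercises each required outbound port using bash's /dev/tcp:

        for p in 80 443 5901 5902 7998 7999; do
          timeout 5 bash -c "echo > /dev/tcp/ServiceName.cloudapp.net/$p" \
            && echo "port $p reachable" || echo "port $p blocked"
        done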

    Read the article

  • jboss cache as hibernate 2nd level - cluster node doesn't persist replicated data

    - by Sergey Grashchenko
    I'm trying to build the architecture basically described in the user guide http://www.jboss.org/file-access/default/members/jbosscache/freezone/docs/3.2.1.GA/userguide_en/html/cache_loaders.html#d0e3090 (replicated caches, with each cache having its own store), but with JBoss Cache configured as the Hibernate second-level cache. I've read the manual for several days and played with the settings, but could not achieve the result: the data in memory (JBoss Cache) gets replicated across the hosts, but it is not persisted to the datasource/database of the target (non-originating) cluster host. I had a hope that a node might become persistent at eviction, so I wrote a cache listener and attached it to the @NodeEvicted event; I found that though I could adjust the eviction policy to fully control it, no persistence takes place. Then I thought I could try to modify the CacheLoader to set "passivate" to true, but I found that in my case (Hibernate 2nd-level cache) I don't have a way to access the loader. I wonder if replicated-data persistence is possible at all through configuration tuning? If not, would it work to add some manual persistence in the CacheListener (I could check whether the eviction event is local, and if not, persist it to the Hibernate datasource somehow)? I've used the mvcc-entity configuration with one modification: cacheMode set to REPL_ASYNC. I've also played with the eviction policy configuration. Last thing to mention: I've tested entity persistence and replication in a project that was generated with Seam. I guess that's not important though.

    Read the article

  • Create a submenu in an existing menu in an Excel shared add-in

    - by Sathish
    Hi, I am developing an Excel shared add-in. Excel has a menu called Custom, which was created using Excel macros; now I want to create a submenu under that Custom menu from my C# shared add-in. I am using the code below, with no luck:

        oStandardBar = oCommandBars["Custom"];
        oCmdBarCtrl = oStandardBar.Controls.Add(MsoControlType.msoControlPopup,
            Type.Missing, Type.Missing, Type.Missing, true);
        oCmdBarCtrl.Visible = false;
        oCmdBarCtrl.Caption = "Sub Menu1";

    It does not create a submenu, whereas if I use "Help" in place of "Custom" the menu gets created. Any workaround for this?

    Read the article

  • Looking for a webhost to support SSRS Hosting with SQL Azure

    - by Adrian Grigore
    Since SQL Azure does not currently support SSRS, the only possible workaround is to host my own SSRS server and have it point to my SQL Azure instance for data retrieval. Now, for me it would be total overkill to rent a dedicated server with SQL server on it just for hosting SSRS. Are there any (shared) web hosters that offer SSRS hosting with third party SQL servers? I've already asked discountasp.net, but they don't allow this. Thanks, Adrian

    Read the article

  • LD_LIBRARY_PATH : how to find a shared object

    - by CuriousDawg
    I have a shared object (libxyz.so). Given LD_LIBRARY_PATH, how can I find the exact location of this shared object? If I had a binary that depends on this lib, I would have used ldd on it. Here is the reason I ask: I have a CGI script which works when LD_LIBRARY_PATH is set to, say, VALUE1. It does not work when the path is set to VALUE2. I would like to find the exact location of the library as resolved via VALUE1 (note that VALUE1 has 20+ different locations). Platform: Linux
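    Since the dynamic linker tries LD_LIBRARY_PATH entries left to right, here is a minimal sketch that lists every entry containing the library (the first hit is the one the loader would use):

        IFS=: read -ra dirs <<< "$LD_LIBRARY_PATH"
        for d in "${dirs[@]}"; do
          [ -e "$d/libxyz.so" ] && echo "$d/libxyz.so"
        done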

    Read the article

  • Installing Win32 shared SxS policy via WiX 3.0 MSM fails for 2nd app

    - by dr-stevep
    I am attempting to author a merge module for use by multiple application installers to install a Win32 Shared SxS Assembly and its associated Policy. I'm using WiX 3.0 to generate the MSM and test MSIs. So far it works fine for the first app installer that runs … but the second app installer fails because the Policy file already exists (HRESULT: 0x800700B7). What requirement(s) for correct Win32 Shared SxS Policy installation am I missing? I have submitted WiX bug 3005301 for this (https://sourceforge.net/tracker/?func=detail&atid=642714&aid=3005301&group_id=105970) and posted VS2008 projects that reproduce the problem. URL: ftp.digital-rapids.com/upload/SteveP/ User: drc-support Password: drc-support Link: ftp://drc-support:[email protected]/upload/SteveP/ wix-Bugs-3005201.rar contains a VS2008 solution that builds the MSM and MSIs that reproduce the issue. (~3MB) wix-Bugs-3005301_Output.rar contains the generated MSM, MSI, and wixpdb files (~40MB)

    Read the article

  • Windows SBS 2008 domain, Outlook 2010 calendar sharing / appointments not synced

    - by indeed005
    A user on this network reports not being able to delete appointments on Outlook 2010 calendars that are shared (not delegated) with other users in the domain. Specifically, the delete appears to work, but after 5 seconds or so the appointment re-appears. Permissions seem fine. Problem does not occur with other users because they do not share calendars. Is this behaviour intended (appointments re-appearing after they have been deleted)?

    Read the article

  • ASP.NET: access files in a shared folder on another computer

    - by Tomas
    Hello, I have an ASP.NET project which does some file access and manipulation; the methods I use for file access are below. Now I need to access files in a shared folder on another computer. How do I do that? I can easily change the file path to the shared folder's path, but then I get a "can't access" error because the shares are password-protected. As I understand it, I need to somehow send credentials to the remote server before executing the methods below. How can I do that?

        new FileStream(@"c:\MyProj\file.doc", FileMode.OpenOrCreate, FileAccess.Write)
        Context.Response.TransmitFile(@"c:\MyProj\file.doc");

    Regards, Tomas

    Read the article

  • worker process in IIS shared hosting

    - by Akshat Goel
    Can anyone tell me whether there is a way to run a process on an IIS shared hosting service? The scenario is: "I want to send emails to a list of email IDs every 3 hours", and the challenge is that the process should not be invoked by an HTTP link; it should run automatically. I think we can do this with IIS worker processes. This will all be happening on a shared server (like GoDaddy) with IIS 7 and .NET 3.5. Please, can anyone give me a direction? Thanks in advance.

    Read the article

  • Managing Team Development on Shared Website

    - by stjowa
    I need to know the best way to manage team web development on a shared server (HostGator). I have done some individual web development on a shared server in the past, and I always set up SVN over SSH to get a pretty nice development workflow (version control, quick commits, working through Eclipse/Subclipse, etc.). However, with that setup I had to write some fairly sophisticated post-commit hooks to export the repository to /public_html and thereby make the repository code testable. This seems like a tedious and error-prone setup for an entire team. I would like to be able to:
    1. Easily test the latest code in the repository.
    2. Somewhat easily move the code in the repository to production.
    3. Use an IDE like Eclipse/Subclipse to easily work with the repository.
    With this in mind, does anyone know of a good version-control/repository setup for developing a website with a team of about 4-5 people? Thanks a lot.
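    For reference, the post-commit export mentioned above does not have to be sophisticated; a minimal sketch of such a hook (paths are hypothetical), using svn export so that no .svn metadata lands in the docroot:

        #!/bin/sh
        # hooks/post-commit: Subversion invokes this with the repo path and revision
        REPOS="$1"
        REV="$2"
        svn export --force "file://$REPOS/trunk" /home/team/public_html/test >/dev/null

    A working copy plus svn update inside the hook is faster for large trees, at the cost of .svn directories inside the web root.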

    Read the article

  • ASP.Net: Scheduled tasks in a shared environment.

    - by UpTheCreek
    This is really an extension of this question, which asked for the best way to schedule tasks that need to be performed periodically within ASP.NET in a normal environment. However, I would like to ask this specifically for a shared hosting environment (most of the answers to the previous question would not work in a shared environment, as far as I know). One obvious solution would be to have a page responsible for performing the tasks on the hosted machine, and to call this page from another machine (that you have full control over) using e.g. a Windows scheduled task. This is a bit nasty though; are there any better approaches?
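    To make the approach above concrete, here is a minimal sketch as a cron entry on any machine you control (URL and key are placeholders); the shared host only ever sees an ordinary HTTP request:

        # crontab entry: hit the task page every 15 minutes and log the result
        */15 * * * * curl -fsS "http://example.com/tasks/run.aspx?key=SECRET" >> /var/log/tasks.log 2>&1

    Guard the page with a secret key (and ideally an IP check) so ordinary visitors cannot trigger the tasks.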

    Read the article

  • How to use AD/GPO/Print Services to "push out" a new printer driver to replace a broken one? How did my server get a broken driver?

    - by Zac B
    Context: We have an AD/GPO-managed corporate network with a little over a hundred PCs running Windows 7 x64, and a few managed printers. Our Server 2008 R2 primary domain controller is configured as a print server for them all.

    Problem: After a recent Windows update and restart (no printer driver updates were included) on the DC, a particular shared printer (Lexmark T650) has begun exhibiting some strange behavior. First, it prints a preceding and following blank page for almost every document, on jobs submitted by about half of the client machines (no separator page is configured on the server or any of the clients I've seen). Second, whenever someone tries to access "Printing Preferences" on any client, they receive an error message (screenshot not reproduced here); this happens everywhere, 100% of the time, and didn't happen before the update on the DC. Once they click "OK", the prefs screen appears (with no separator page selected) and everything seems fine. I'm not even sure if these two issues are related, but everyone seems affected by one or both of them.

    What I've Tried: I've been hesitant to un-deploy the problem printer, or remove it via GPO, as it's pretty heavily used. I've tried updating (via MS Update and our internal WSUS server) the client machines and the DC. No printer driver updates have appeared, and no number of updates or restarts on the server or the clients seems to have achieved anything other than my boss getting grumpy that I'm bouncing the domain controller so often. I've tried deleting the drivers on the server and re-installing them from the original source that has worked for the past year; no change. I've tried selecting "New Driver" for one of the shared printers on a client machine, running as domain admin, and pushed the latest driver found by MS Update back up to the DC. This changed the version number of the driver recorded in the print server manager, but caused no change, either on the client I pushed from or on any other; the error still appears.

    Question: Why the heck is this happening? Obviously I got a bad driver from somewhere, but how do I get rid of it? I don't know of any "roll back drivers" functionality for centrally managed print drivers like Windows offers for other devices. How would I a) get this issue resolved on a client, and b) push the fix to the other members of the domain?

    Read the article

  • DataContractAttribute with Shared Assembly

    - by Sanju
    Hi all, is it necessary to decorate custom objects with [DataContract] and [DataMember] when using shared assemblies (as opposed to auto proxy generation)? The reason I ask is that I have encountered the following scenario. Suppose the following object is implemented in my service:

        public class baseClass
        {
            Guid _guid;
            public baseClass() { _guid = Guid.NewGuid(); }
            public Guid ReturnGuid { get { return _guid; } }
        }

        public class newClass : baseClass
        {
            int _someValue;
            public newClass() {}
            public int SomeValue
            {
                get { return _someValue; }
                set { _someValue = value; }
            }
        }

        [ServiceContract]
        public interface IService
        {
            [OperationContract]
            newClass SomeOperation();
        }

    In my client (with shared assemblies) I can happily receive and use a serialized newClass when SomeOperation is called, even though I have not marked it as a DataContract. However, as soon as I do mark it with [DataContract] and [DataMember], it complains that set is not implemented on ReturnGuid in the base class. Could somebody explain why it works fine when I do not decorate with [DataContract] and [DataMember]? Many thanks.

    Read the article

  • Way around ASP.NET session being shared across multiple tab windows

    - by ace
    I'm storing a value in the ASP.NET session on the first page. On the next page, this session value is read. However, if multiple tabs are open and there are multiple page 1 to page 2 navigations going on, the value stored in the session gets mixed up, since the session is shared between the browser tabs. I'm wondering what the options are around this:
    Query string: passing the value between the pages in the query string. I don't want to take this approach, since there can be multiple anchor tags on page 1 linking to page 2 and I cannot rewrite the URLs of each tag since they are dynamic.
    Cookies??? In-memory cookies are shared across browser tabs too, same as the session cookie, right?
    Any other option?

    Read the article

  • Increase Samba space on openSUSE 12.1

    - by Kapil Sharma
    I know Linux basics but am not an expert. Our IT guy left the job and there is some time before the new hire, so sorry if the question is very basic. We have a local testing server based on openSUSE 12.1, which also acts as a shared drive between the dev/mgmt teams here, using Samba. Now we are running out of space on Samba, even though the server's 2x1TB hard disks are nearly 90% free. My question is: what is limiting Samba, and how can I increase its limit? We need at least around 500 GB as a shared drive, but currently it's just 25 GB. I don't need a step-by-step answer; a link to any helpful article would be sufficient. Probably I'm putting the wrong keywords into Google and so not getting any helpful link.

    EDIT: Output of the commands requested in the first comment. All commands were run as the root user.

    df -h (df -ht gives an error):

        Filesystem      Size  Used Avail Use% Mounted on
        rootfs           30G  5.1G   23G  19% /
        devtmpfs        2.0G   36K  2.0G   1% /dev
        tmpfs           2.0G  1.1M  2.0G   1% /dev/shm
        tmpfs           2.0G  676K  2.0G   1% /run
        /dev/sda2        30G  5.1G   23G  19% /
        tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
        tmpfs           2.0G  676K  2.0G   1% /var/run
        tmpfs           2.0G     0  2.0G   0% /media
        tmpfs           2.0G  676K  2.0G   1% /var/lock
        /dev/sda3        36G   31G  3.3G  91% /home

    fdisk -l /dev/[hmsv]d*:

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x2d4a2d49

           Device Boot     Start       End   Blocks  Id System
        /dev/sda1           2048  16771071  8384512  82 Linux swap / Solaris
        /dev/sda2 *     16771072  79681535 31455232  83 Linux
        /dev/sda3       79681536 156301311 38309888  83 Linux

        Disk /dev/sda1: 8585 MB, 8585740288 bytes
        255 heads, 63 sectors/track, 1043 cylinders, total 16769024 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sda1 doesn't contain a valid partition table

        Disk /dev/sda2: 32.2 GB, 32210157568 bytes
        255 heads, 63 sectors/track, 3915 cylinders, total 62910464 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Device Boot Start End Blocks Id System

        Disk /dev/sda3: 39.2 GB, 39229325312 bytes
        255 heads, 63 sectors/track, 4769 cylinders, total 76619776 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/sda3 doesn't contain a valid partition table

    vgs: No volume groups found
    lvs: No volume groups found

    /etc/samba/smb.conf:

        # smb.conf is the main Samba configuration file. You find a full commented
        # version at /usr/share/doc/packages/samba/examples/smb.conf.SUSE if the
        # samba-doc package is installed.
        # Date: 2011-11-02
        [global]
            workgroup = WORKGROUP
            passdb backend = tdbsam
            printing = cups
            printcap name = cups
            printcap cache time = 750
            cups options = raw
            map to guest = Bad User
            include = /etc/samba/dhcp.conf
            logon path = \\%L\profiles\.msprofile
            logon home = \\%L\%U\.9xprofile
            logon drive = P:
            usershare allow guests = Yes
        [homes]
            comment = Home Directories
            valid users = %S, %D%w%S
            browseable = No
            read only = No
            inherit acls = Yes
        [profiles]
            comment = Network Profiles Service
            path = %H
            read only = No
            store dos attributes = Yes
            create mask = 0600
            directory mask = 0700
        [users]
            comment = All users
            path = /home
            read only = No
            inherit acls = Yes
            veto files = /aquota.user/groups/shares/
        [groups]
            comment = All groups
            path = /home/groups
            read only = No
            inherit acls = Yes
        [printers]
            comment = All Printers
            path = /var/tmp
            printable = Yes
            create mask = 0600
            browseable = No
        [print$]
            comment = Printer Drivers
            path = /var/lib/samba/drivers
            write list = @ntadmin root
            force group = ntadmin
            create mask = 0664
            directory mask = 0775
        [allusers]
            comment = All Users
            path = /home/shares/allusers
            valid users = @users
            force group = users
            create mask = 0660
            directory mask = 0771
            writable = yes
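    One hedged observation on the outputs above: every share path in this smb.conf sits under /home (e.g. path = /home/shares/allusers), and a share can never offer more space than the filesystem behind it. A quick way to confirm which partition backs a share and how big it is:

        df -h /home/shares/allusers
        # per the df output above, /home is a 36G partition at 91% use,
        # which caps all shares stored under it regardless of Samba settings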

    Read the article

  • iptables CLUSTERIP won't work

    - by Rad Akefirad
    We have some requirements, explained here, which we have tried to satisfy without any success, as described below. Here is the brief information.

    Requirements:
    1. High Availability
    2. Load Balancing

    Current configuration:
    - Server #1: one static (real) IP, 10.17.243.11
    - Server #2: one static (real) IP, 10.17.243.12
    - Cluster IP (virtual, shared among all servers): 10.17.243.15

    I tried to use CLUSTERIP to implement the cluster IP as follows.

    On server #1:

        iptables -I INPUT -i eth0 -d 10.17.243.15 -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 1

    On server #2:

        iptables -I INPUT -i eth0 -d 10.17.243.15 -j CLUSTERIP --new --hashmode sourceip --clustermac 01:00:5E:00:00:20 --total-nodes 2 --local-node 2

    When we try to ping 10.17.243.15 there is no reply, and the web service (Tomcat on port 8080) is not accessible either. However, we did manage to see the packets arriving on both servers using tcpdump.

    Some useful information:

    iptables rules (iptables -L -n -v):

        Chain INPUT (policy ACCEPT 21775 packets, 1470K bytes)
         pkts bytes target     prot opt in   out  source     destination
            0     0 CLUSTERIP  all  --  eth0 *    0.0.0.0/0  10.17.243.15  CLUSTERIP hashmode=sourceip clustermac=01:00:5E:00:00:20 total_nodes=2 local_node=1 hash_init=0
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target     prot opt in   out  source     destination
        Chain OUTPUT (policy ACCEPT 14078 packets, 44M bytes)
         pkts bytes target     prot opt in   out  source     destination

    Log messages:

        ... kernel: [    7.329017] e1000e: eth3 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
        ... kernel: [    7.329133] e1000e 0000:05:00.0: eth3: 10/100 speed: disabling TSO
        ... kernel: [    7.329567] ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
        ... kernel: [   71.333285] ip_tables: (C) 2000-2006 Netfilter Core Team
        ... kernel: [   71.341804] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
        ... kernel: [   71.343168] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully
        ... kernel: [  108.456043] device eth0 entered promiscuous mode
        ... kernel: [  112.678859] device eth0 left promiscuous mode
        ... kernel: [  117.916050] device eth0 entered promiscuous mode
        ... kernel: [  140.168848] device eth0 left promiscuous mode

    tcpdump while pinging:

        tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
        12:11:55.335528 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
            10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2390, length 64
        12:11:56.335778 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
            10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2391, length 64
        12:11:57.336010 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
            10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2392, length 64
        12:11:58.336287 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
            10.17.243.1 > 10.17.243.15: ICMP echo request, id 16162, seq 2393, length 64

    And there is no ping reply, as I said. Does anyone know which part I missed? Thanks in advance.
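    Since tcpdump shows the echo requests reaching both servers, the usual suspect with CLUSTERIP is ARP/MAC handling: the cluster IP must resolve to the shared multicast MAC (01:00:5E:00:00:20 here), and some switches and gateways refuse to learn multicast MAC addresses, in which case a static ARP entry on the client or router is needed. A few hedged checks:

        # on each node: which local_node numbers this host claims for the cluster IP
        cat /proc/net/ipt_CLUSTERIP/10.17.243.15
        # on the pinging client: the cluster IP should map to the multicast MAC
        arp -n | grep 10.17.243.15
        # on the nodes: watch whether ARP requests for .15 are being answered
        tcpdump -ni eth0 arp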

    Read the article

  • How do I extract the values from this data using bash and awk?

    - by ben Rod
    I grepped these; how do I extract the values?

        ...
        cavity_2mgl_wt_strip57001.out: Total cavity volume (A3) : ( 1.240E+01)
        cavity_2mgl_wt_strip58001.out: Total cavity volume (A3) : ( 2.408E+00)
        cavity_2mgl_wt_strip60001.out: Total cavity volume (A3) : ( 4.935E+00)
        cavity_2mgl_wt_strip61001.out: Total cavity volume (A3) : ( 1.319E+00)
        cavity_2mgl_wt_strip63001.out: Total cavity volume (A3) : ( 1.532E-01)
        cavity_2mgl_wt_strip64001.out: Total cavity volume (A3) : ( 1.137E+01)
        ...

    I need the index # in the filename, in bold here:

        cavity_2mgl_wt_strip**76001**.out: Total cavity volume (A3) : ( 1.276E+01)

    and I need the number in the parentheses:

        cavity_2mgl_wt_strip76001.out: Total cavity volume (A3) : ( **1.276E+01**)
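    One way to do both extractions in a single awk pass over the grep output (saved here as grepped.txt, a placeholder; this assumes the index you want is the whole digit run before ".out"):

        awk -F': *' '/Total cavity volume/ {
            # index: the digit run just before ".out" in the filename field
            match($1, /[0-9]+\.out/); idx = substr($1, RSTART, RLENGTH - 4)
            # value: the parenthesized number; "(A3)" cannot match this pattern
            match($0, /\( *[-0-9.E+]+\)/); val = substr($0, RSTART + 1, RLENGTH - 2)
            gsub(/ /, "", val)
            print idx, val
        }' grepped.txt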

    Read the article

  • How do I rewrite a for loop with a shared dependency using actors

    - by Thomas Rynne
    We have some code which needs to run faster. It's already been profiled, so we would like to make use of multiple threads. Usually I would set up an in-memory queue and have a number of threads taking jobs off the queue and calculating the results; for the shared data I would use a ConcurrentHashMap or similar. I don't really want to go down that route again. From what I have read, using actors will result in cleaner code, and if I use Akka, migrating to more than one JVM should be easier. Is that true? However, I don't know how to think in actors, so I am not sure where to start. To give a better idea of the problem, here is some sample code:

        case class Trade(price: Double, volume: Int, stock: String) {
            def value(priceCalculator: PriceCalculator) =
                (priceCalculator.priceFor(stock) - price) * volume
        }

        class PriceCalculator {
            def priceFor(stock: String) = {
                Thread.sleep(20) // a slow operation which can be cached
                50.0
            }
        }

        object ValueTrades {
            def valueAll(trades: List[Trade],
                         priceCalculator: PriceCalculator): List[(Trade, Double)] =
                trades.map { trade => (trade, trade.value(priceCalculator)) }

            def main(args: Array[String]) {
                val trades = List(
                    Trade(30.5, 10, "Foo"),
                    Trade(30.5, 20, "Foo") // usually much longer
                )
                val priceCalculator = new PriceCalculator
                val values = valueAll(trades, priceCalculator)
            }
        }

    I'd appreciate it if someone with experience using actors could suggest how this would map onto actors.

    Read the article

  • Using WKA in Large Coherence Clusters (Disabling Multicast)

    - by jpurdy
    Disabling hardware multicast (by configuring well-known addresses, aka WKA) will place significant stress on the network. For messages that must be sent to multiple servers, rather than having a server send a single packet to the switch and having the switch broadcast that packet to the rest of the cluster, the server must send a packet to each of the other servers. While hardware varies significantly, consider that a server with a single gigabit connection can send at most ~70,000 packets per second. To continue with some concrete numbers, in a cluster with 500 members, that means each server can send at most 140 cluster-wide messages per second. And if there are 10 cluster members on each physical machine, that number shrinks to 14 cluster-wide messages per second (or, with only mild hyperbole, roughly zero). It is also important to keep in mind that network I/O is not only expensive in terms of the network itself, but also in the CPU required to send (or receive) a message (due to things like copying the packet bytes, processing an interrupt, etc).

    Fortunately, Coherence is designed to rely primarily on point-to-point messages, but there are some features that are inherently one-to-many:

    - Announcing the arrival or departure of a member
    - Updating partition assignment maps across the cluster
    - Creating or destroying a NamedCache
    - Invalidating a cache entry from a large number of client-side near caches
    - Distributing a filter-based request across the full set of cache servers (e.g. queries, aggregators and entry processors)
    - Invoking clear() on a NamedCache

    The first few of these are operations that are primarily routed through a single senior member, and also occur infrequently, so they usually are not a primary consideration. There are cases, however, where the load from introducing new members can be substantial (to the point of destabilizing the cluster). Consider the case where the cluster in the first paragraph grows from 500 members to 1000 members (holding the number of physical machines constant). During this period, there will be 500 new member introductions, each of which may consist of several cluster-wide operations (for the cluster membership itself as well as the partitioned cache services, replicated cache services, invocation services, management services, etc). Note that all of these introductions will route through that one senior member, which is sharing its network bandwidth with several other members (which will be communicating to a lesser degree with other members throughout this process). While each service may have a distinct senior member, there's a good chance during initial startup that a single member will be the senior for all services (if those services start on the senior before the second member joins the cluster). It's obvious that this could cause CPU and/or network starvation.

    In the current release of Coherence (3.7.1.3 as of this writing), the pure unicast code path also has less sophisticated flow control for cluster-wide messages (compared to the multicast-enabled code path), which may also result in significant heap consumption on the senior member's JVM (from the message backlog). This is almost never a problem in practice, but with sufficient CPU or network starvation, it could become critical. For the non-operational concerns (near caches, queries, etc), the application itself will determine how much load is placed on the cluster.
Applications intended for deployment in a pure unicast environment should be careful to avoid excessive dependence on these features. Even in an environment with multicast support, these operations may scale poorly since even with a constant request rate, the underlying workload will increase at roughly the same rate as the underlying resources are added. Unless there is an infrastructural requirement to the contrary, multicast should be enabled. If it can't be enabled, care should be taken to ensure the added overhead doesn't lead to performance or stability issues. This is particularly crucial in large clusters.

    Read the article

  • Which process is using my NAS?

    - by sethu
    I have a NAS connected to my cluster. The NAS holds all our home directories. When I ran a set of experiments last week, saving a 1 GB file to the NAS took around 30 seconds; the same write to a local disk takes 18 seconds. But when I tried the same thing today, it took 150 seconds. I am unsure what the problem is. Can someone help me point out the issue? Is it possible to find out which process is accessing the NAS, or how much NAS bandwidth is being used? Thanks for your help. -Sethu
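    A few standard tools can answer both questions (a sketch; assumes root access and that the NAS is an NFS mount, with the mount point as a placeholder):

        iotop -o                  # live list of processes currently doing I/O
        lsof /mnt/nas             # processes holding files open on that filesystem
        nfsiostat 5               # per-mount NFS throughput, sampled every 5 seconds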

    Read the article
