Search Results

Search found 7226 results on 290 pages for 'shared mailbox'.


  • Auto-mount in fstab no longer working until manually running 'sudo mount -a'

    - by Brett Alton
    I have 3 SMB shared drives I need to connect to for work purposes. I had Ubuntu 10.10 Maverick and had all my drives loaded into fstab to be auto-mounted. Everything worked fine for a while, but just before I upgraded to 11.04 Natty the fstab auto-mount stopped working. Unfortunately I don't know what change I made to my machine, or which update was installed, that made this occur. /etc/fstab {snip} //192.168.7.3/apache_proj/ /home/brett/Desktop/apache smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0 //192.168.7.3/apache_54321/ /home/brett/Desktop/54321 smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0 //freenas.local/shared/ /home/brett/Desktop/shared smbfs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0 //lamp/www/ /home/brett/Desktop/lamp smbfs username={snip},password={snip},rw,iocharset=utf8,uid=1000,gid=1000 0 0 When the machine boots, I run this command to get them to mount: $ sudo umount /home/brett/Desktop/54321 /home/brett/Desktop/shared /home/brett/Desktop/apache; sudo mount -a [sudo] password for brett: umount: /home/brett/Desktop/54321: not mounted umount: /home/brett/Desktop/shared: not mounted umount: /home/brett/Desktop/apache: not mounted Warning: mapping 'guest' to 'guest,sec=none' Warning: mapping 'guest' to 'guest,sec=none' Warning: mapping 'guest' to 'guest,sec=none' mount error: could not resolve address for lamp: No address associated with hostname (I run that umount just in case). I looked through dmesg and some error logs and couldn't see why fstab was failing on my mounts. I see that my 'lamp' entry is failing, but that's because that machine is currently down.
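
    One hedged guess, not confirmed by the post: the mounts fail at boot but succeed later, which usually points to the network (and name resolution for freenas.local/lamp) not being up yet when fstab is processed, and the smbfs type is deprecated in favour of cifs around this release. A minimal sketch of one entry converted to cifs plus an if-up hook that retries the mounts once the interface comes up; the hook filename is illustrative:

        # /etc/fstab -- same share and options as in the post, with cifs instead of smbfs
        //192.168.7.3/apache_proj/ /home/brett/Desktop/apache cifs guest,rw,iocharset=utf8,uid=1000,gid=1000 0 0

        # /etc/network/if-up.d/mount-smb-shares  (illustrative name; make it executable)
        #!/bin/sh
        # re-run the cifs fstab entries once the network interface is actually up
        mount -a -t cifs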

    Read the article

  • How do I clean builds and installs, i.e. un-build?

    - by Kaustubh P
    I have installed and downloaded and built mongodb, and just one works. $ mongo mongo: error while loading shared libraries: libmozjs.so: cannot open shared object file: No such file or directory $ /opt/mongo/bin/mongo /opt/mongo/bin/mongo: error while loading shared libraries: libboost_system-mt.so.1.38.0: cannot open shared object file: No such file or directory $ /usr/bin/mongo MongoDB shell version: 1.6.5 connecting to: test > I can remove the installation via apt-get. But how do I remove all things mongo that were built with make, and get a clean system? I followed this guide to build and install mongodb. Thanks.
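
    The linked build guide isn't shown, so the exact install paths are unknown; below is a hedged cleanup sketch assuming the source build lives under /opt/mongo and may have copied binaries into /usr/local, based on the paths in the error messages:

        # remove the self-built tree referenced in the errors
        sudo rm -rf /opt/mongo
        # if the build also installed binaries into /usr/local, remove those too
        sudo rm -f /usr/local/bin/mongo /usr/local/bin/mongod /usr/local/bin/mongos
        # refresh the dynamic linker cache so stale library paths are forgotten
        sudo ldconfig
        # the packaged client (/usr/bin/mongo, version 1.6.5) should now be the only one left
        which mongo && mongo --version

    For future source builds, a tool such as checkinstall wraps "make install" into a .deb so the result can later be removed cleanly with apt.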

    Read the article

  • Getting all new messages from a Maildir in python

    - by Jesper
    I have a mail dir: foo@foo:~/Maildir$ ls -l total 288 drwx------ 2 foo foo 155648 2010-04-19 15:19 cur -rw------- 1 foo foo 440 2010-03-20 08:50 dovecot.index.log -rw------- 1 foo foo 112 2010-03-20 08:49 dovecot-uidlist -rw------- 1 foo foo 8 2010-03-20 08:49 dovecot-uidvalidity -rw------- 1 foo foo 0 2010-03-20 08:49 dovecot-uidvalidity.4ba48c0e drwx------ 2 foo foo 114688 2010-04-19 16:07 new drwx------ 2 foo foo 4096 2010-04-19 16:07 tmp And in python I'm trying to get all new messages (Python 2.6.5rc2). First, getting "Maildir" works: >>> import mailbox >>> md = mailbox.Maildir('/home/foo/Maildir') >>> md.iterkeys().next() '1269924477.Vfc01I4249fM708004.foo' But how do I access "Maildir/new"? This does not work: >>> md = mailbox.Maildir('/home/foo/Maildir/new') >>> md.iterkeys().next() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.6/mailbox.py", line 346, in iterkeys self._refresh() File "/usr/lib/python2.6/mailbox.py", line 467, in _refresh for entry in os.listdir(subdir_path): OSError: [Errno 2] No such file or directory: '/home/foo/Maildir/new/new' >>> Any ideas?
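
    mailbox.Maildir expects the top-level Maildir path (as in the first snippet); each message then reports which subdirectory it lives in, so the new messages can be filtered out. A minimal sketch using the same path as the post (factory=None makes the module return MaildirMessage objects, which carry get_subdir()):

        import mailbox

        md = mailbox.Maildir('/home/foo/Maildir', factory=None)
        for key in md.iterkeys():
            msg = md[key]
            if msg.get_subdir() == 'new':      # still sitting in Maildir/new
                print key, msg['subject']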

    Read the article

  • Rails and Mongoid best way to implement sharing system

    - by Matteo Pagliazzi
    I have to model User and Board in rails using mongoid as ODM. Each board is referenced to an user through a foreign key user_id and now I want to add the ability to share a board with other users. Following CRUD I'd create a new Model called something like Share and it's releated Controller with the ability to create/edit/delete a Share but I have some doubts: First, where to save informations about Shares? I think I may create a field in the Board's collection called shared_with including an array of user ids. in a MySQL I'd created a new table with the ids of who share, the resource shared and the user the resources is shared with but I don't think that's necessary using MongoDB. Every user a Board is shared with should be able to edit the Board (but not to delete it) so the Board should have two relations one with the owner and another with the users the board is shared with, right? For permission (the owner should be able to delete a board but the users it is shared with shouldn't) what to use? I'm using Devise for authentication but I think something like CanCan would fit better. but how to implement it? What do you think about this way? Do you find any problems or have better solutions?
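
    A hedged sketch of the approach described in the question (an array of user ids stored on the board document, plus block-based CanCan rules); class, field and relation names are illustrative and assume a reasonably recent Mongoid:

        # app/models/board.rb
        class Board
          include Mongoid::Document
          belongs_to :user                                    # the owner
          field :shared_with, :type => Array, :default => []  # ids of users the board is shared with
        end

        # app/models/ability.rb (CanCan)
        can :manage, Board, :user_id => user.id               # owner can do everything, including delete
        can [:read, :update], Board do |board|                # shared users can view and edit only
          board.shared_with.include?(user.id)
        end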

    Read the article

  • Chargeback and showback...both a 'throw back'

    - by llaszews
    I have been getting asked again by customers and partners about chargeback and showback in the cloud, so I thought I would blog my response to this question. Chargeback background, information and industry analysis: Cloud computing is all about shared resources. These shared resources are computer servers (including memory and CPU), network devices, hard disk storage, database servers, application servers, cooling, floor space, electricity and more. These resources are shared by departments within a company, or by a number of companies, when resources are hosted in the public or hybrid cloud. Currently, hosting providers that run other companies on their cloud platforms do not have an accurate way to measure the shared computing resources used by a specific user, let alone by a specific customer. Additionally, companies running their own cloud data centers, for private or hybrid clouds, have no way of measuring and charging back the departments in the company that are using these shared cloud resources. In both cases, the inability to determine shared resource costs and charge them back to the company, department or user consuming those resources limits any clear measure of business benefit and impacts the company's ability to measure the Return on Investment (ROI). An IT chargeback system is an accounting strategy that applies the costs of IT services, hardware or software to the business unit in which they are used. This system contrasts with traditional IT accounting models in which a centralized department bears all of the IT costs in an organization and those costs are treated simply as corporate overhead. Showback involves showing the IT costs to a department or customer but not actually charging them for their IT usage. Showback is a gradual method of introducing chargeback into an enterprise; most companies implement a showback mechanism before a full chargeback system is put in place. Oracle chargeback product: Oracle Enterprise Manager provides tools for defining detailed Chargeback plans spanning different metrics collected for each type of resource, as well as defining Cost Centers for grouping costs across multiple developers. Chargeback plans can use not only usage-based costs, but also configuration-based costs (e.g. version of the platform) or fixed costs (e.g. flat-rate management fee). Chargeback includes rich out-of-the-box reports. Trending reports show how charges and resource consumption vary over time, while Summary reports show the breakdown of charges or usage by different dimensions such as Cost Center or Target Type. These reports help consumers understand how their charges relate to their consumption and also assist the IT department with budgeting and planning activities. With BI Publisher, the reports can be made available in a variety of formats such as PDF, HTML, Word, Excel or PowerPoint.

    Read the article

  • Using Windows Previous Versions to access ZFS Snapshots (July 14, 2009)

    - by user12612012
    The Previous Versions tab on the Windows desktop provides a straightforward, intuitive way for users to view or recover files from ZFS snapshots.  ZFS snapshots are read-only, point-in-time instances of a ZFS dataset, based on the same copy-on-write transactional model used throughout ZFS.  ZFS snapshots can be used to recover deleted files or previous versions of files, and they are space efficient because unchanged data is shared between the file system and its snapshots.  Snapshots are available locally via the .zfs/snapshot directory and remotely via Previous Versions on the Windows desktop. Shadow Copies for Shared Folders was introduced with Windows Server 2003 but subsequently renamed to Previous Versions with the release of Windows Vista and Windows Server 2008.  Windows shadow copies, or snapshots, are based on the Volume Snapshot Service (VSS) and, as the [Shared Folders part of the] name implies, are accessible to clients via SMB shares, which is good news when using the Solaris CIFS Service.  And the nice thing is that no additional configuration is required - it "just works". On Windows clients, snapshots are accessible via the Previous Versions tab in Windows Explorer using the Shadow Copy client, which is available by default on Windows XP SP2 and later.  For Windows 2000 and pre-SP2 Windows XP, the client software is available for download from Microsoft: Shadow Copies for Shared Folders Client. Assuming that we already have a shared ZFS dataset, we can create ZFS snapshots and view them from a Windows client:

        zfs snapshot tank/home/administrator@snap101
        zfs snapshot tank/home/administrator@snap102

    To view the snapshots on Windows, map the dataset on the client, then right click on a folder or file and select Previous Versions.  Note that Windows will only display previous versions of objects that differ from the originals, so you may have to modify files after creating a snapshot in order to see previous versions of those files. The screenshot above shows various snapshots in the Previous Versions window, created at different times.  On the left panel, the .zfs folder is visible, illustrating that this is a ZFS share.  The .zfs setting can be toggled as desired; it makes no difference when using previous versions.  To make the .zfs folder visible:

        zfs set snapdir=visible tank/home/administrator

    To hide the .zfs folder:

        zfs set snapdir=hidden tank/home/administrator

    The following screenshot shows the Previous Versions panel when a file has been selected.  In this case the user is prompted to view, copy or restore the file from one of the available snapshots. As can be seen from the screenshots above, the Previous Versions window doesn't display snapshot names: snapshots are listed by snapshot creation time, sorted in time order from most recent to oldest.  There's nothing we can do about this; it's the way that the interface works.  Perhaps one point of note, to avoid confusion, is that the ZFS snapshot creation time is not the same as the root directory creation timestamp. In ZFS, all object attributes in the original dataset are preserved when a snapshot is taken, including the creation time of the root directory.  Thus the root directory creation timestamp is the time that the directory was created in the original dataset.
        # ls -d% all /home/administrator
                 timestamp: atime         Mar 19 15:40:23 2009
                 timestamp: ctime         Mar 19 15:40:58 2009
                 timestamp: mtime         Mar 19 15:40:58 2009
                 timestamp: crtime        Mar 19 15:18:34 2009

        # ls -d% all /home/administrator/.zfs/snapshot/snap101
                 timestamp: atime         Mar 19 15:40:23 2009
                 timestamp: ctime         Mar 19 15:40:58 2009
                 timestamp: mtime         Mar 19 15:40:58 2009
                 timestamp: crtime        Mar 19 15:18:34 2009

    The snapshot creation time can be obtained using the zfs command as shown below.

        # zfs get all tank/home/administrator@snap101
        NAME                             PROPERTY  VALUE
        tank/home/administrator@snap101  type      snapshot
        tank/home/administrator@snap101  creation  Mon Mar 23 18:21 2009

    In this example, the dataset was created on March 19th and the snapshot was created on March 23rd. In conclusion, Shadow Copies for Shared Folders provides a straightforward way for users to view or recover files from ZFS snapshots.  The Windows desktop provides an easy-to-use, intuitive GUI and no configuration is required to use or access previous versions of files or folders.

    REFERENCES FOR MORE INFORMATION: ZFS, ZFS Learning Center, Introduction to Shadow Copies of Shared Folders, Shadow Copies for Shared Folders Client

    Read the article

  • Samba issue with sharing directories on NTFS/FAT32

    - by Microkernel
    I have some strange problems with Samba server. I am using samba Version 3.5.4 on Ubuntu 10.10. I have two Windows XP machines, one on VirtualBox on Ubuntu and another office laptop. Windows machine on VirtualBox has no issues in accessing the shared folders, but the laptop is not able to access all the shared content. The issue faced on laptop is the following. Shared folders on ext3 drives have no issues in accessing, but the contents shared on NTFS and FAT32 drives (mounted ones) are not accessible. When I try to open the shared folder, it asks for user name and password, but doesn't accept when I provide it. (Even if I provide admin login details). I changed workgroup value to the domain_name in office laptop, but still the problem persists. Here is the smdb.conf I am using: [global] workgroup = XXX.XXX.ORG server string = %h server (Samba, Ubuntu) map to guest = Bad User obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 dns proxy = No usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d guest ok = Yes [homes] comment = Home Directories [printers] comment = All Printers path = /var/spool/samba read only = No create mask = 0700 printable = Yes browseable = No [print$] comment = Samba server's CD-ROM path = /cdrom force user = nobody force group = nobody locking = No Workgroup was defined as "HOMENET" before, changed it to domain name on the office laptop thinking it was the problem, but for no avail.
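
    A hedged suggestion, not a confirmed fix: a frequent cause of this pattern is that the mounted NTFS/FAT32 volumes don't carry the POSIX ownership Samba expects, so guest access fails only on those shares. Making the mount world-accessible and forcing a guest-friendly identity on the share often helps; the share name, path and UUID below are illustrative:

        # /etc/fstab -- mount the NTFS volume so every local user (and the Samba guest) can reach it
        UUID=XXXX-XXXX  /media/ntfs_data  ntfs-3g  defaults,umask=000  0  0

        # smb.conf -- share definition for the mounted volume
        [ntfs_data]
            path = /media/ntfs_data
            guest ok = yes
            read only = no
            force user = nobody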

    Read the article

  • What does this error mean when sending email through a web application?

    - by Richa Media and services
    I get this error when sending email through our web application: "Mailbox unavailable. The server response was: Requested action not taken: mailbox unavailable or not local". Here is the detail of the error: System.Net.Mail.SmtpFailedRecipientException was caught Message=Mailbox unavailable. The server response was: Requested action not taken: mailbox unavailable or not local Source=System FailedRecipient=<[email protected]> StackTrace: at System.Net.Mail.SmtpTransport.SendMail(MailAddress sender, MailAddressCollection recipients, String deliveryNotify, SmtpFailedRecipientException& exception) at System.Net.Mail.SmtpClient.Send(MailMessage message) at email.Globals.SendMail(String EmailID, String subject, String message, String senderMail) in C:location InnerException:

    Read the article

  • ACL and moving files in Nautilus

    - by MyOnlyEye
    When I move files from a private home directory (e.g. /home/jack) to a shared directory (e.g. /home/shared-school) Nautilus copies the file permissions from the original file into the shared directory - and ignores the ACL that I've put in the /home/shared-school directory (e.g. setfacl -R -m d:g:school:rwx /home/shared-school). Is it possible to force Nautilus to change ACL on a file that is moved or copied - or not to ignore the ACL on the directory where the files are moved or copied?
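
    This is expected behaviour rather than a Nautilus bug: a default ("d:") ACL applies to files created in the directory, while a move preserves the permissions the file already had. A short sketch of the usual workarounds (the directory paths follow the question; the file name is made up):

        # the default ACL only affects files newly created in the target directory
        setfacl -R -m d:g:school:rwx /home/shared-school

        # workaround 1: copy then delete, so the new file inherits the default ACL
        cp /home/jack/report.odt /home/shared-school/ && rm /home/jack/report.odt

        # workaround 2: re-apply the ACL after moving files in
        setfacl -R -m g:school:rwx /home/shared-school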

    Read the article

  • Optimal Database design regarding functionality of letting user share posts by other users

    - by codecool
    I want to implement functionality which lets a user share posts by other users, similar to the Facebook and Google+ share buttons and the Twitter retweet. There are 2 choices: 1) I create a duplicate copy of the post and have a column which keeps track of the original post id and makes clear this is a shared post. 2) I have a separate table, shared post, where I save the post id, which is a foreign key to the post id in the post table. In programming terms, I basically keep a pointer to the original post in a separate table, and when I need to get posts posted by a user as well as shared ones, I do a left join on the post and shared post tables: Post(post_id(PK), post_content, posted_by) SharedPost(post_id(FK to Post.post_id), sharing_user, sharedfrom(in case someone shares from a non-owner's profile)). I am in favour of the second choice but wanted to know the advice of the experts out there. One more thing: posts on my web app will be more along the lines of Facebook size, not tweet size.
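
    A sketch of option 2 in plain SQL terms, using the table and column names from the question (the query is illustrative):

        CREATE TABLE post (
            post_id      INT PRIMARY KEY,
            post_content TEXT,
            posted_by    INT NOT NULL
        );

        CREATE TABLE shared_post (
            post_id      INT NOT NULL,      -- FK to post.post_id
            sharing_user INT NOT NULL,
            sharedfrom   INT,               -- in case someone shares from a non-owner's profile
            PRIMARY KEY (post_id, sharing_user)
        );

        -- posts written by user 42 together with posts user 42 has shared
        SELECT p.*, sp.sharing_user
          FROM post p
          LEFT JOIN shared_post sp ON sp.post_id = p.post_id
         WHERE p.posted_by = 42 OR sp.sharing_user = 42;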

    Read the article

  • Share Mulitple Classes as one dll or a lib with Mulitple Projects

    - by JNL
    Currently I have some shared class files(.cpp and .h) which I include them in around 20 Projects. Currently I have to include them in all of the projects. So if I get some business requirments and I change some of the shared(.cpp or .h) files I have to include them in all the 20 Projects which is kind of tedious. Is there a way where I can create a shared dll or library and include it all of my Projects. So if I have to change it, I just have to change it once and then just Add Reference to include that dll or library which contains all the shared(.cpp, .h) files. Any help/recommendations regarding the same, will be highly appreciated. I am using VS2012 for VC++.

    Read the article

  • Can't the ASP FileSystemObject access shared server paths?

    - by sushant
    i am using this code to access files and folders. <%@ Language=VBScript %><% option explicit dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize %> <META content="Microsoft Visual Studio 6.0" name=GENERATOR><!-- Author: Adrian Forbes --> <% sRoot = "D:Raghu" sDir = Request("Dir") sDir = sDir & "\" Response.Write "<h1>" & sDir & "</h1>" & vbCRLF Set objFSO = CreateObject("Scripting.FileSystemObject") on error resume next Set objFolder = objFSO.GetFolder(sRoot & sDir) if err.number <> 0 then Response.Write "Could not open folder" Response.End end if on error goto 0 sParent = objFSO.GetParentFolderName(objFolder.Path) ' Remove the contents of sRoot from the front. This gives us the parent ' path relative to the root folder ' eg. if parent folder is "c:webfilessubfolder1subfolder2" then we just want "subfolder1subfolder2" sParent = mid(sParent, len(sRoot) + 1) Response.Write "<table border=""1"">" ' Give a link to the parent folder. This is just a link to this page only pssing in ' the new folder as a parameter Response.Write "<tr><td colspan=3><a href=""browse.asp?dir=" & Server.URLEncode(sParent) & """>Parent folder</a></td></tr>" & vbCRLF ' Now we want to loop through the subfolders in this folder For Each objSubFolder In objFolder.SubFolders ' And provide a link to them Response.Write "<tr><td colspan=3><a href=""browse.asp?dir=" & Server.URLEncode(sDir & objSubFolder.Name) & """>" & objSubFolder.Name & "</a></td></tr>" & vbCRLF Next ' Now we want to loop through the files in this folder For Each objFile In objFolder.Files if Clng(objFile.Size) < 1024 then sSize = objFile.Size & " bytes" else sSize = Clng(objFile.Size / 1024) & " KB" end if ' And provide a link to view them. This is a link to show.asp passing in the directory and the file ' as parameters Response.Write "<tr><td><a href=""show.asp?file=" & server.URLEncode(objFile.Name) & "&dir=" & server.URLEncode (sDir) & """>" & objFile.Name & "</a></td><td>" & sSize & "</td><td>" & objFile.Type & "</td></tr>" & vbCRLF Next Response.Write "</table>" %> it works fine. but when i try to access something on shared path like: "\\cvrdd0110:share" it gives error. how to access these files? and sorry for formatting issues.
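
    FileSystemObject can open UNC paths, but they must be in \\server\share form (the colon-separated path in the question is unlikely to resolve), and the account the page runs as (the IIS anonymous or application identity) needs permission on that share. A minimal hedged sketch with an illustrative share name:

        <%
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        ' UNC form: \\server\sharename -- not server:share
        Set objFolder = objFSO.GetFolder("\\cvrdd0110\share")
        For Each objFile In objFolder.Files
            Response.Write objFile.Name & "<br>"
        Next
        %>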

    Read the article

  • Count the unread emails in exchange for each user

    - by Luis
    Hi, I want to count the unread emails in Exchange with C#. I am already connected to Exchange, and I get all users and their corresponding email addresses. For the connection I have: RunspaceConfiguration rsConfig = RunspaceConfiguration.Create(); PSSnapInException snapInException = null; PSSnapInInfo info = rsConfig.AddPSSnapIn("Microsoft.Exchange.Management.PowerShell.Admin", out snapInException); Runspace myRunSpace = RunspaceFactory.CreateRunspace(rsConfig); myRunSpace.Open(); Pipeline pipeline = myRunSpace.CreatePipeline(); Command myCommand = new Command("Get-Mailbox"); pipeline.Commands.Add(myCommand); Collection<PSObject> commandResults = pipeline.Invoke(); // Ok, now we've got a bunch of mailboxes, cycle through them foreach (PSObject mailbox in commandResults) { //define which properties to get foreach (String propName in new string[] { "Name", "EmailAddresses", "Database", "OrganizationalUnit", "UserPrincipalName" }) { //grab the specified property of this mailbox Object objValue = mailbox.Properties[propName].Value; .......
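
    Get-Mailbox alone does not expose unread counts. One hedged alternative (not from the post) is the Exchange Web Services Managed API, where the Inbox folder carries an UnreadCount property; the service URL and SMTP address below are placeholders, and the service account needs the ApplicationImpersonation role to read other users' mailboxes:

        // C# sketch using Microsoft.Exchange.WebServices.dll (EWS Managed API)
        using System;
        using Microsoft.Exchange.WebServices.Data;

        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
        service.UseDefaultCredentials = true;
        service.Url = new Uri("https://mail.example.com/EWS/Exchange.asmx");

        // repeat per mailbox returned by Get-Mailbox
        service.ImpersonatedUserId =
            new ImpersonatedUserId(ConnectingIdType.SmtpAddress, "user@example.com");
        Folder inbox = Folder.Bind(service, WellKnownFolderName.Inbox);
        Console.WriteLine("Unread in Inbox: " + inbox.UnreadCount);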

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization.  The potential consolidation density is a function of the extent to which the infrastructure is shared.  The three models provide increasing degrees of sharing: Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside. Database: multiple database instances are deployed on a shared software / hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits. Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications.  (Note that a single deployment can combine Database and Schema consolidations.) Customer value: lower costs, better system utilization In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space, and lower power and cooling costs. And, the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage. Customer value: higher productivity The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime. Capabilities / Characteristics In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied. Activities / Tasks Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. 
Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • How can I make a case for "dependency management"?

    - by C. Ross
    I'm currently trying to make a case for adopting dependency management for builds (ala Maven, Ivy, NuGet) and creating an internal repository for shared modules, of which we have over a dozen enterprise wide. What are the primary selling points of this build technique? The ones I have so far: Eases the process of distributing and importing shared modules, especially version upgrades. Requires the dependencies of shared modules to be precisely documented. Removes shared modules from source control, speeding and simplifying checkouts/check ins (when you have applications with 20+ libraries this is a real factor). Allows more control or awareness of what third party libs are used in your organization. Are there any selling points that I'm missing? Are there any studies or articles giving improvement metrics?
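
    A concrete illustration of the first two selling points, with hypothetical module coordinates: each application declares the shared module, and the build resolves it from the internal repository instead of a copy checked into source control:

        <!-- pom.xml of a consuming application -->
        <dependency>
            <groupId>com.example.shared</groupId>
            <artifactId>billing-core</artifactId>
            <version>2.3.1</version>
        </dependency>

        <!-- internal repository hosting the shared modules -->
        <repositories>
            <repository>
                <id>internal-releases</id>
                <url>https://repo.example.com/releases</url>
            </repository>
        </repositories>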

    Read the article

  • How do I install graphviz 2.29 in 12.04?

    - by bidur
    In my Ubuntu 12.04, graphviz is not the latest version (2.29). I need some features available in the latest version of graphviz. I tried to install graphviz version 2.29, which requires libgraphviz4 (=2.18). I managed to install libgraphviz4 and then graphviz 2.29; for that I had to remove the packages libcdt4 and libpathplan4. Now whenever I try to generate a graph, I get errors. For example, dot -Kfdp -n -Tpng -o samplePOS.png forcePOS.dot says: dot: error while loading shared libraries: libgvc.so.6: cannot open shared object file: No such file or directory and neato -Tps -o sample_1.ps sourcedot.gv says: neato: error while loading shared libraries: libgvc.so.6: cannot open shared object file: No such file or directory So I am looking for a way to run graphviz 2.29 on my Ubuntu 12.04.
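
    The error means the runtime linker cannot find the newly installed Graphviz libraries. Assuming they landed under /usr/local/lib (the usual default for a source build, worth verifying first), registering that directory with the linker is the typical fix:

        # confirm where the new library actually landed
        sudo find / -name 'libgvc.so.6' 2>/dev/null

        # if it is under /usr/local/lib, tell the dynamic linker about it and refresh the cache
        echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/graphviz.conf
        sudo ldconfig

        # per-session alternative
        export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
        dot -V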

    Read the article

  • Samba issue with sharing directories on NTFS/FAT32 (Mounted Drives) ???

    - by Microkernel
    Hi guys, I have some strange problems with Samba server. I am using samba Version 3.5.4 on Ubuntu 10.10. I have two windows-xp machines, one on VirtualBox on Ubuntu and another office laptop. Windows machine on VBox has no issues in accessing the shared folders, but the laptop is not able to access all the shared content. The issue faced on laptop is = Shared folders on Ext3 drives have no issues in accessing, but the contents shared on NTFS and FAT32 drives (mounted ones) are not accessible. When I try to open the shared folder, it asks for user name and password, but doesn't accept when I provide it. (even if I provide admin login details!!!). I changed workgroup value to the domain_name in office laptop, but still the problem persists... Here is the smdb.conf I am using... [global] workgroup = XXX.XXX.ORG server string = %h server (Samba, Ubuntu) map to guest = Bad User obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 dns proxy = No usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d guest ok = Yes [homes] comment = Home Directories [printers] comment = All Printers path = /var/spool/samba read only = No create mask = 0700 printable = Yes browseable = No [print$] comment = Samba server's CD-ROM path = /cdrom force user = nobody force group = nobody locking = No Workgroup Was defined as "HOMENET" before, changed it to domain name on the office laptop thinking it was the problem, but for no avail Thanks in advance Regards, Microkernel

    Read the article

  • Can I autogenerate/compile code on-the-fly, at runtime, based upon values (like key/value pairs) parsed out of a configuration file?

    - by Kumba
    This might be a doozy for some. I'm not sure if it's even 100% implementable, but I wanted to throw the idea out there to see if I'm really off of my rocker yet. I have a set of classes that mimics enums (see my other questions for specific details/examples). For 90% of my project, I can compile everything in at design time. But the remaining 10% is going to need to be editable w/o re-compiling the project in VS 2010. This remaining 10% will be based on a templated version of my Enums class, but will generate code at runtime, based upon data values sourced in from external configuration files. To keep this question small, see this SO question for an idea of what my Enums class looks like. The templated fields, per that question, will be the MaxEnums Int32, Names String() array, and Values array, plus each shared implementation of the Enums sub-class (which themselves, represent the Enums that I use elsewhere in my code). I'd ideally like to parse values from a simple text file (INI-style) of key/value pairs: [Section1] Enum1=enum_one Enum2=enum_two Enum3=enum_three So that the following code would be generated (and compiled) at runtime (comments/supporting code stripped to reduce question size): Friend Shared ReadOnly MaxEnums As Int32 = 3 Private Shared ReadOnly _Names As String() = New String() _ {"enum_one", "enum_two", "enum_three"} Friend Shared ReadOnly Enum1 As New Enums(_Names(0), 1) Friend Shared ReadOnly Enum2 As New Enums(_Names(1), 2) Friend Shared ReadOnly Enum3 As New Enums(_Names(2), 4) Friend Shared ReadOnly Values As Enums() = New Enums() _ {Enum1, Enum2, Enum3} I'm certain this would need to be generated in MSIL code, and I know from reading that the two components to look at are CodeDom and Reflection.Emit, but I was wondering if anyone had working examples (or pointers to working examples) versus really long articles. I'm a hands-on learner, so I have to have example code to play with. Thanks!
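
    A hedged sketch of one way to do this without hand-writing MSIL: generate the Friend Shared declarations as VB source from the parsed key/value pairs, compile them in memory with CodeDOM, and reflect over the result. The module and method names are illustrative, not part of the questioner's Enums class:

        Imports System.CodeDom.Compiler
        Imports System.Reflection

        Module RuntimeEnumCompiler
            ' source: VB code built from the INI values (the generated class shown in the question)
            Function CompileEnumSource(ByVal source As String) As Assembly
                Dim provider As CodeDomProvider = CodeDomProvider.CreateProvider("VisualBasic")
                Dim options As New CompilerParameters()
                options.GenerateInMemory = True
                options.ReferencedAssemblies.Add("System.dll")
                Dim results As CompilerResults = provider.CompileAssemblyFromSource(options, source)
                If results.Errors.HasErrors Then
                    Throw New InvalidOperationException(results.Errors(0).ToString())
                End If
                Return results.CompiledAssembly   ' reflect over this to reach the generated fields
            End Function
        End Module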

    Read the article

  • Themeing and Master Pages

    - by Jeff
    I have the requirement to support themeing of my site's pages. The way I am doing this is by dynamically choosing a master page based on the current theme. I have setup a directory structure like so /shared/masterpages/theme1/Master1.master /shared/masterpages/theme1/Master2.master /shared/masterpages/theme1/Master3.master /shared/masterpages/theme2/Master1.master /shared/masterpages/theme2/Master2.master /shared/masterpages/theme2/Master3.master And I am still using the page directive in the view <%@ Page Title="" Language="C#" MasterPageFile="~/Views/shared/masterpages/theme1/Master1.Master"%> I would still like to leverage the view's MasterPageFile property and just change the theme directory. I can only think of three ways to do this none of them which sound great. Create a custom BaseView class that uses OnPreInit to change the theme like this Create some xml file or database table that links each view to a master page file and then set this in the controller. Build some tool that reads all the views and parses them for their masterpagefile, (similar to 2 but could be done at run time potentially.) Option 1 seems the best option to me so far. Does anyone else have any thoughts on how to do this?
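
    A hedged variant of option 1: if the controller (rather than the view) chooses the master, ASP.NET MVC's View(viewName, masterName) overload can inject the themed path at render time, so views no longer hard-code the theme folder. GetCurrentTheme() is an assumed helper:

        // sketch of a controller action picking the themed master page
        public class PageController : Controller
        {
            public ActionResult Index()
            {
                string theme = GetCurrentTheme();   // e.g. "theme1" or "theme2"
                string master = "~/Views/shared/masterpages/" + theme + "/Master1.Master";
                return View("Index", master);
            }
        }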

    Read the article

  • Problem with impersonating a specific user in WCF service

    - by aJ
    I am having a WCF service hosted in IIS on WindowsServer 2008. This service needs to write to a shared folder present on another machine(Windows XP). The shared folder has write permissions for a particular user say "X" which is present on both the machines .i.e on the server where the service is running as well as the machine where the shared folder is present. The service runs under the NETWORK SERVICE account. For the service to access the shared folder I have added code to impersonate the user "X"in the service so that it gets the permission to write to the shared folder. Since I want to impersonate user "X" only when I run a particular section of code I have used the sample code. Even after the impersonation the service fails to write to the shared folder sometimes. It works sporadically. Whereas if I add tag in the Web.config file it works perfectly fine. <identity impersonate="true" userName="accountname" password="password" /> But the above is not desirable since it impersonates a specific user for all the requests. What I need is to impersonate a specific user only when I run a particular section of code. Also, the impersonation code works absolutely fine when the shared folder is present on another WindowsServer 2008. Could anyone give me ideas on what's going wrong here.
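
    A hedged sketch, not a confirmed diagnosis: when impersonation against a remote share works only sporadically, logging on with LOGON32_LOGON_NEW_CREDENTIALS (which applies user X's credentials to outbound network access only) around just the file write is a common remedy. The user name, domain and password below are placeholders:

        using System;
        using System.ComponentModel;
        using System.IO;
        using System.Runtime.InteropServices;
        using System.Security.Principal;

        static class ShareWriter
        {
            const int LOGON32_LOGON_NEW_CREDENTIALS = 9;   // network-only credentials
            const int LOGON32_PROVIDER_WINNT50 = 3;

            [DllImport("advapi32.dll", SetLastError = true)]
            static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

            [DllImport("kernel32.dll")]
            static extern bool CloseHandle(IntPtr handle);

            public static void Write(string uncPath, string contents)
            {
                IntPtr token;
                if (!LogonUser("X", "DOMAIN", "password", LOGON32_LOGON_NEW_CREDENTIALS,
                               LOGON32_PROVIDER_WINNT50, out token))
                    throw new Win32Exception(Marshal.GetLastWin32Error());

                using (var identity = new WindowsIdentity(token))
                using (identity.Impersonate())               // reverted when the context is disposed
                {
                    File.WriteAllText(uncPath, contents);    // executes with X's network credentials
                }
                CloseHandle(token);
            }
        }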

    Read the article

  • Accidentally deleted symlink libc.so.6 in CentOS 6.4. How to get sudo privilege to re-create it?

    - by Eric
    I accidentally deleted the symbol link /lib64/libc.so.6 - /lib64/libc-2.12.so with $ sudo rm libc.so.6 Then I can not use anything including ls command. The error appears for any command I type ls: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory I've tried $ export LD_PRELOAD=/lib64/libc-2.12.so After this I can use ls and ln ..., but still can not use sudo ln ..., sudo -E ln ..., sudo su or even su. I always get this err sudo: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory or su: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory It seems LD_PRELOAD works only for the current shell session of my account, but not for a new account like root or a new session. It's a remote server so I can not use a live CD. I now have a ssh bash session alive but can not establish new ones. I have sudo privilege, but don't have root password. So currently my problem is I need to run sudo sln -s libc-2.12.so libc.so.6 to re-create the symlink libc.so.6, but I can not run sudo without libc.so.6. How can I fix it? Thanks~

    Read the article

  • Cisco ASA Site-to-Site VPN Dropping

    - by ScottAdair
    I have three sites, Toronto (1.1.1.1), Mississauga (2.2.2.2) and San Francisco (3.3.3.3). All three sites have ASA 5520. All the sites are connected together with two site-to-site VPN links between each other location. My issue is that the tunnel between Toronto and San Francisco is very unstable, dropping every 40 min to 60 mins. The tunnel between Toronto and Mississauga (which is configured in the same manner) is fine with no drops. I also noticed that my pings with drop but the ASA thinks that the tunnel is still up and running. Here is the configuration of the tunnel. Toronto (1.1.1.1) crypto map Outside_map 1 match address Outside_cryptomap crypto map Outside_map 1 set peer 3.3.3.3 crypto map Outside_map 1 set ikev1 transform-set ESP-AES-256-MD5 ESP-AES-256-SHA crypto map Outside_map 1 set ikev2 ipsec-proposal AES256 group-policy GroupPolicy_3.3.3.3 internal group-policy GroupPolicy_3.3.3.3 attributes vpn-idle-timeout none vpn-tunnel-protocol ikev1 ikev2 tunnel-group 3.3.3.3 type ipsec-l2l tunnel-group 3.3.3.3 general-attributes default-group-policy GroupPolicy_3.3.3.3 tunnel-group 3.3.3.3 ipsec-attributes ikev1 pre-shared-key ***** isakmp keepalive disable ikev2 remote-authentication pre-shared-key ***** ikev2 local-authentication pre-shared-key ***** San Francisco (3.3.3.3) crypto map Outside_map0 2 match address Outside_cryptomap_1 crypto map Outside_map0 2 set peer 1.1.1.1 crypto map Outside_map0 2 set ikev1 transform-set ESP-AES-256-MD5 ESP-AES-256-SHA crypto map Outside_map0 2 set ikev2 ipsec-proposal AES256 group-policy GroupPolicy_1.1.1.1 internal group-policy GroupPolicy_1.1.1.1 attributes vpn-idle-timeout none vpn-tunnel-protocol ikev1 ikev2 tunnel-group 1.1.1.1 type ipsec-l2l tunnel-group 1.1.1.1 general-attributes default-group-policy GroupPolicy_1.1.1.1 tunnel-group 1.1.1.1 ipsec-attributes ikev1 pre-shared-key ***** isakmp keepalive disable ikev2 remote-authentication pre-shared-key ***** ikev2 local-authentication pre-shared-key ***** I'm at a loss. Any ideas?
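
    Nothing in the posted config is obviously wrong, but keepalives are disabled on both peers, so a dead tunnel is only noticed when traffic fails. A hedged first step (values illustrative) is to enable IKE keepalives/DPD on both ends and make sure the phase-2 SA lifetimes match on both ASAs:

        ! on Toronto for peer 3.3.3.3 (mirror on San Francisco for peer 1.1.1.1)
        tunnel-group 3.3.3.3 ipsec-attributes
         isakmp keepalive threshold 30 retry 5

        ! keep the IPsec SA lifetime identical on both peers
        crypto map Outside_map 1 set security-association lifetime seconds 28800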

    Read the article

  • Setting kernel memory for installing postgresql

    - by Matthieu Taymans
    My question is about setting the kernel shared memory for installing postgresql on mac osx 10.6.8. In the readme file of postgresql it is said: Shared Memory PostgreSQL uses shared memory extensively for caching and inter-process communication. Unfortunately, the default configuration of Mac OS X does not allow suitable amounts of shared memory to be created to run the database server. Before running the installation, please ensure that your system is configured to allow the use of larger amounts of shared memory. Note that this does not 'reserve' any memory so it is safe to configure much higher values than you might initially need. You can do this by editting the file /etc/sysctl.conf - e.g. % sudo vi /etc/sysctl.conf On a MacBook Pro with 2GB of RAM, the author's sysctl.conf contains: kern.sysv.shmmax=1610612736 kern.sysv.shmall=393216 kern.sysv.shmmin=1 kern.sysv.shmmni=32 kern.sysv.shmseg=8 kern.maxprocperuid=512 kern.maxproc=2048 Note that (kern.sysv.shmall * 4096) should be greater than or equal to kern.sysv.shmmax. kern.sysv.shmmax must also be a multiple of 4096. Once you have edited (or created) the file, reboot before continuing with the installation. If you wish to check the settings currently being used by the kernel, you can use the sysctl utility: % sysctl -a The database server can now be installed. I'm a real beginner with all this but need to instal postgresql for academic purposes do you know how i can set this kernel shared memory. Won't that be harmful for my system? Thank you in advance. Matthieu

    Read the article

  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7. It is an 8 core ec2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely. My question concerns memory usage, and what concerns I should have with what I am seeing. (if any) Here is the scenario: Under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3 server cluster loses about 100MB / hour. This is a linear slope for about 6 hours, then it appears to level out, but still maybe appear to lose about 10MB/hour. If I drop my page caches using the linux command echo 1 /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the unicorns, and the memory loss pattern begins again over the hours. Before: total used free shared buffers cached Mem: 7130244 5005376 2124868 0 113628 422856 -/+ buffers/cache: 4468892 2661352 Swap: 33554428 0 33554428 After: total used free shared buffers cached Mem: 7130244 4467144 2663100 0 228 11172 -/+ buffers/cache: 4455744 2674500 Swap: 33554428 0 33554428 My Ruby code does use memoizations and I'm assuming Ruby/Rails/Unicorn is keeping its own caches... what I'm wondering is should I be worried about this behaviour? FWIW, my Unicorn config: worker_processes 15 listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog = 1024 listen 8080, :tcp_nopush = true timeout 180 pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid" GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true before_fork do |server, worker| STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK" print_gemfile_location defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect! defined?(Resque) and Resque.redis.client.disconnect old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin" if File.exists?(old_pid) && server.pid != old_pid begin Process.kill("QUIT", File.read(old_pid).to_i) rescue Errno::ENOENT, Errno::ESRCH # already killed end end File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)} end after_fork do |server, worker| defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection defined?(Resque) and Resque.redis.client.connect end Is there a need to experiment enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory, it will empty the caches by itself, without me manually running that cache command? Basically, is this normal, expected behaviour? tia

    Read the article

  • _default_ VirtualHost overlap on port 443, the first has precedence

    - by Mohit Jain
    I have two ruby on rails 3 applications running on same server, (ubuntu 10.04), both with SSL. Here is my apache config file: <VirtualHost *:80> ServerName example1.com DocumentRoot /home/me/example1/production/current/public </VirtualHost> <VirtualHost *:443> ServerName example1.com DocumentRoot /home/me/example1/production/current/public SSLEngine on SSLCertificateFile /home/me/example1/production/shared/example1.crt SSLCertificateKeyFile /home/me/example1/production/shared/example1.key SSLCertificateChainFile /home/me/example1/production/shared/gd_bundle.crt SSLProtocol -all +TLSv1 +SSLv3 SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM </VirtualHost> <VirtualHost *:80> ServerName example2.com DocumentRoot /home/me/example2/production/current/public </VirtualHost> <VirtualHost *:443> ServerName example2.com DocumentRoot /home/me/example2/production/current/public SSLEngine on SSLCertificateFile /home/me/example2/production/shared/iwanto.crt SSLCertificateKeyFile /home/me/example2/production/shared/iwanto.key SSLCertificateChainFile /home/me/example2/production/shared/gd_bundle.crt SSLProtocol -all +TLSv1 +SSLv3 SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM </VirtualHost> Whats the issue: On restarting my server it gives me some output like this: * Restarting web server apache2 [Sun Jun 17 17:57:49 2012] [warn] _default_ VirtualHost overlap on port 443, the first has precedence ... waiting [Sun Jun 17 17:57:50 2012] [warn] _default_ VirtualHost overlap on port 443, the first has precedence On googling why this issue is coming I got something like this: You cannot use name based virtual hosts with SSL because the SSL handshake (when the browser accepts the secure Web server's certificate) occurs before the HTTP request, which identifies the appropriate name based virtual host. If you plan to use name-based virtual hosts, remember that they only work with your non-secure Web server. But not able to figure out how to run two ssl application on same server. Can any one help me?
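
    The warning usually just means Apache has not been told that the *:443 hosts are name-based. Declaring NameVirtualHost *:443 (and relying on SNI-capable clients, which the Apache shipped with Ubuntu 10.04 supports) normally clears it and lets each ServerName present its own certificate; clients without SNI will still be offered the first certificate. The file location below assumes the standard Ubuntu layout:

        # /etc/apache2/ports.conf
        NameVirtualHost *:443
        Listen 443

        # then reload Apache
        sudo /etc/init.d/apache2 reload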

    Read the article
