Search Results

Search found 6519 results on 261 pages for 'nested attributes'.

Page 193/261

  • Problem adding second domain controller to SBS 2008

    - by Quango
    Have an SBS 2008 server in one location, and want to add a backup domain controller at a different site. The two sites are linked by a VPN. The new server is running Server 2008 R2, fully patched. At present it is a member server and its DNS is pointing at the SBS DNS.

    When I try running DCPROMO to connect the server, the wizard runs fine up to the point where it is 'configuring Active Directory Domain Services' and 'examining forest':

        "The operation failed because: The wizard could not read operational attributes from the remote Active Directory Domain Controller SERVER.DOMAIN.LOCAL using LDAP. "The specified server cannot perform the requested operation." This error can occur if you have not been granted necessary permissions to read data in the directory. For more information, please see article 936241 in the Microsoft Knowledge Base (http://go.microsoft.com/fwlink/?LinkId=88420)."

    I was logged on as domain administrator. Interestingly, the link is invalid and the KB article does not exist!

    Settings:

        Configure this server as an additional Active Directory domain controller for the domain "[domain]".
        Site: [site]
        Additional Options:
            Read-only domain controller: "No"
            Global catalog: Yes
            DNS Server: Yes
        Update DNS Delegation: No
        Source domain controller: any writable domain controller
        Database folder: C:\Windows\NTDS
        Log file folder: C:\Windows\NTDS
        SYSVOL folder: C:\Windows\SYSVOL
        The DNS Server service will be configured on this computer.
        This computer will be configured to use this DNS server as its preferred DNS server.
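
    One frequently reported cause of this "operational attributes" LDAP failure when promoting a 2008 R2 machine into a 2008-level forest is that the schema has not yet been prepared for R2. A hedged sketch of the usual check and fix, run on the SBS box from the R2 installation media (the D: drive letter and the domain DN are assumptions; verify schema state before running anything):

        :: Check the current schema version (47 corresponds to Server 2008 R2):
        dsquery * cn=schema,cn=configuration,dc=domain,dc=local -scope base -attr objectVersion
        :: If it is older, extend the forest/domain from the R2 media on the schema master:
        D:\support\adprep\adprep.exe /forestprep
        D:\support\adprep\adprep.exe /domainprep /gpprep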

  • Archive software for big files and fast index

    - by AkiRoss
    I'm currently using tar for archiving some files. Problem is: the archives are pretty big, contain a lot of data, and tar is very slow when listing and extracting. I often need to extract single files or folders from the archive, but I don't currently have an external index of files.

    So, is there an alternative for Linux that allows me to build uncompressed archive files, preserving the file attributes AND having a fast access list table? I'm talking about archives of 10 to 100 GB, and it's pretty impractical to wait several minutes to access a single file. Anyway, any trick to solve this problem is welcome (but keeping each archive as a single file is a requirement, so no rsync or similar). Thanks in advance!

    EDIT: I'm not compressing the archives, and using tar I think they are too slow. To be precise about "slow", I'd like that:

    - listing the archive content should take time linear in the file count inside the archive, but with a very small constant (e.g. if a list of all the files is included at the head of the archive, it could be very fast);
    - extraction of a target file/directory should (filesystem permitting) take time linear in the target size (e.g. if I'm extracting a 2 MB PDF file from a 40 GB directory, I'd really like it to take less than a few minutes... if not seconds).

    Of course, this is just my idea and not a requirement. I guess such performance could be achievable if the archive contained an index of all the files with their respective offsets, and that index were well organized (e.g. a tree structure).
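
    One low-tech option worth sketching: an uncompressed zip gives you exactly the "index at a known place" layout described above, since zip keeps a central directory and supports random access to individual members. Whether zip's handling of ownership and other attributes is sufficient here is an open question - test before committing to it:

        # Store (no compression) so extraction cost is pure I/O; -y keeps symlinks as symlinks.
        zip -0 -r -y data.zip bigdir/

        # Listing reads only the central directory -- fast even for huge archives.
        unzip -l data.zip | grep 'report.pdf'

        # Extract a single member without unpacking (or scanning) the whole archive.
        unzip data.zip 'bigdir/reports/report.pdf'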

  • Ubuntu xrandr rotate issue

    - by user83544
    I've just bought a second monitor for my PC which happens to be a pivot monitor. I've already read lots of forums related to my problem but haven't come across a solution - I have the same symptoms as dozens of posts, but no matter what I try it just doesn't work.

    I've already changed the xorg.conf file and added in the device section, just under Driver "nvidia", the following for my second monitor:

        Option "RandRRotation" "on"

    When I save and reboot, I try to rotate my screen with the NVIDIA X Server Settings tool by choosing the second monitor and clicking either "left" or "right" for the rotation. It immediately exits the nvidia-settings window and does nothing. I tried within the terminal by typing:

        xrandr -o right

    I get the following error:

        X Error of failed request:  BadMatch (invalid parameter attributes)
          Major opcode of failed request:  154 (RANDR)
          Minor opcode of failed request:  2 (RRSetScreenConfig)
          Serial number of failed request:  14
          Current serial number in output stream:  14

    I actually manage to rotate it with Option "Rotate" "CCW" instead of "RandRRotation". The problem with this solution is that you get the second monitor in the right position, but any window you open on that screen is practically unchangeable: you can't change its size nor move it, making it useless for reading PDFs, which is the main reason why I bought this second screen to help me write my thesis. Any help is really appreciated.

        hiram@hiram-linux:~$ sudo lshw -c video
        *-display
             description: VGA compatible controller
             product: nVidia Corporation
             vendor: nVidia Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
             configuration: driver=nvidia latency=0
             resources: irq:16 memory:f8000000-f9ffffff memory:d8000000-dfffffff
                        memory:d4000000-d7ffffff ioport:dc00(size=12 memory:fbd80000-fbdfffff
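
    If the driver version in use exposes RandR 1.2 outputs (older proprietary NVIDIA blobs do not, so this depends on the driver), rotating just the pivot monitor - rather than the whole X screen, which is what xrandr -o attempts - may sidestep the BadMatch. A sketch; the output name is hypothetical, check xrandr -q for the real one:

        # Find the name of the pivot monitor's output (e.g. DVI-I-1 -- illustrative).
        xrandr -q
        # Rotate only that output, leaving the primary screen untouched.
        xrandr --output DVI-I-1 --rotate right
        # Undo:
        xrandr --output DVI-I-1 --rotate normal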

  • Question about Displaying Documents and the CQWP in MOSS 2007

    - by Psycho Bob
    My organization is in the process of converting our intranet over to a SharePoint solution. Part of this intranet will be the movement and organization of all our internal documents. Currently, we have 11 pages of document links, each with its own subheadings.

    So far I have it set up so that each document has a custom field called "Page" with a check box list of all the document pages on the intranet site. On each individual page, I have set up a Content Query Web Part that displays the documents that have the corresponding Page value set (i.e. if a document's Page value has been checked for "HR", it will appear on the HR page). The goal of this setup is to allow the nontechnical personnel who will be responsible for the maintenance of the documents to upload new documents to the documents list and note on which pages they should appear, without having to manually update the pages themselves.

    The problem I am having is that I cannot seem to find a good way to sort the documents into their subheadings once they are on the appropriate page. I could create individual check boxes for each page/subheading combination, but this would create a list of approximately 50-75 items. Does anyone have any ideas as to how I could accomplish this, either via the CQWP or by different means?

    Goals/Requirements of Installation:

    - Allow intranet documents to be maintained by nontechnical personnel
    - Display documents on the appropriate pages without the user having to edit the actual page or web part
    - Denote document page location using user-settable document attributes (if possible)
    - Maintain current intranet organization and workflow
    - Use only one document list without subdirectories

    NOTE: I am aware that this is not the most efficient or elegant way to do things, but these are the requirements I have been given for the project.

  • Minimum permissions needed to create a user Home Folder in Windows Active Directory

    - by Jim
    We would like the Help Desk to have the responsibility of creating user home folders instead of our 2nd level support. The help desk global group is already an Account Operator, so in Active Directory they are able to edit all user attributes just fine. The problem is figuring out the minimum level of permissions needed on the file server to create the home share, without giving them access to everyone's home shares.

    So if they open AD Users and Computers, open the properties for a user, enter \\home\users\%username% in the profile tab and then click OK, they get the following error:

        The \\home\users\username home folder was not created because you do not have create access on the server. The user account has been updated with the new home folder value but you must create the directory manually after obtaining the required access right.

    Right now I have given the help desk group Full Control on the root folder only (no files or subdirectories). The directory is actually created, but the permissions on the newly created folder only show Administrators with full control, and no permissions for the configured user account.

    It sure sounds like I'd have to make the help desk local admins on the file servers, which is what I'd like to avoid - especially since the file servers are a large cluster hosting much, much more than the entire org's home share structure.
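
    A sketch of a narrower grant that is commonly used for this pattern (group name and path are illustrative; run on the file server and test on a copy of the tree first): give the help desk only list/create rights on the root itself, and let CREATOR OWNER hand full control of each new folder to whoever created it, so ADUC can then stamp the user's permissions on:

        :: List the root and create subfolders in it -- applies to this folder only.
        icacls D:\home\users /grant "DOMAIN\HelpDesk:(RD,AD,RA,REA,RC,X)"
        :: Inherit-only ACE: the account that creates a folder gets Full Control of it.
        icacls D:\home\users /grant "CREATOR OWNER:(OI)(CI)(IO)F"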

  • Linux Tuning for High Traffic JBoss Server with LDAP Binds

    - by Levi Stanley
    I'm configuring a high-traffic Linux server (RedHat) and running into a limit I haven't been able to track down. I need to be able to handle a sustained 300 requests per second throughput using Nginx and JBoss. The point of this server is to run checks on a user's account when that user signs in. Each request goes through Nginx to JBoss (specifically TorqueBox with JBoss AS7 with a Sinatra app) and then makes an LDAP request to bind that user and retrieve several attributes. It is during the bind that these errors occur. I'm able to reproduce this going directly to JBoss, so that rules out Nginx at least.

    I get a variety of error messages, though oddly JBoss stopped writing to the log file recently. It used to report errors about creating native threads. Now I just see "java.net.SocketException: Connection reset" and "org.apache.http.conn.HttpHostConnectException: Connection to http://my.awesome.server:8080 refused" as responses in jmeter.

    To the best of my knowledge, I have plenty of available file handles, processes, sockets, and ports, yet the issue persists. Unfortunately, I have very little experience tuning servers. I've found a couple of useful documents - Ipsysctl tutorial 1.0.4 and Linux Tuning - but those documents are a bit over my head (and just entering the configuration described in Linux Tuning doesn't fix my issue).

    Here are the configuration changes I've tried (webproxy is the user that runs Nginx and JBoss):

        /etc/security/limits.conf
        webproxy soft nofile 65536
        webproxy hard nofile 65536
        webproxy soft nproc 65536
        webproxy hard nproc 65536
        root soft nofile 65536
        root hard nofile 65536
        root soft nproc 65536
        root hard nofile 65536

    First attempt, /etc/sysctl.conf:

        sysctl net.core.somaxconn = 8192
        sysctl net.ipv4.ip_local_port_range = 32768 65535
        sysctl net.ipv4.tcp_fin_timeout = 15
        sysctl net.ipv4.tcp_keepalive_time = 1800
        sysctl net.ipv4.tcp_keepalive_intvl = 35
        sysctl net.ipv4.tcp_tw_recycle = 1
        sysctl net.ipv4.tcp_tw_reuse = 1

    Second attempt, /etc/sysctl.conf:

        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216
        net.core.netdev_max_backlog = 30000
        net.ipv4.tcp_congestion_control = htcp
        net.ipv4.tcp_mtu_probing = 1

    Any ideas what might be happening here? Or better yet, are there some good documentation resources designed for beginners?
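
    One detail worth double-checking first: /etc/sysctl.conf takes bare key = value lines, so the leading "sysctl" in the first attempt would make those lines invalid. Beyond that, it may help to confirm which resource actually runs out at the moment of failure; a few stock diagnostics (only the port and process-name assumptions are mine):

        # Effective limits for the account that actually runs JBoss:
        su - webproxy -c 'ulimit -n -u'
        # Socket summary at the moment errors appear:
        ss -s
        # Sockets stuck in TIME_WAIT toward LDAP (assumes LDAP on port 389):
        ss -tan state time-wait '( dport = :389 )' | wc -l
        # Thread count inside the JVM (process match is illustrative):
        ps -o nlwp= -p "$(pgrep -f torquebox | head -n1)"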

  • AWS document on number of databases allowed on an Amazon RDS instance

    - by user35042
    At the Amazon RDS FAQ there is the question "What is a database instance (DB Instance)?". The entire answer (as of mid-June 2012) is:

        You can think of a DB Instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB Instances, define/refine infrastructure attributes of your DB Instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs, and Command Line Tools. Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas can be created on a given DB Instance.

    The last part of that quote - "Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas" - I interpret to mean that you can have an "unlimited" number of databases on an RDS MySQL or Oracle instance but only 30 databases on an MS SQL Server instance ("unlimited" meaning not limited by the RDS infrastructure itself).

    This was asked in the Stack Overflow question "Does Amazon RDS support multiple databases per instance?", where the answer quoted an older version of the FAQ. What I am looking for is an Amazon document that clarifies this question, or else someone who has experience using Amazon RDS who can attest to what the situation actually is.

  • How do I remove a USB drive's write protection?

    - by nate
    I have a SanDisk Cruzer Blade USB stick that suddenly seems to be write protected. I tried running DiskPart, but after I type the command "attributes disk clear readonly" it displays this:

        Microsoft DiskPart version 5.1.3565

        ADD         - Add a mirror to a simple volume.
        ACTIVE      - Marks the current basic partition as an active boot partition.
        ASSIGN      - Assign a drive letter or mount point to the selected volume.
        BREAK       - Break a mirror set.
        CLEAN       - Clear the configuration information, or all information, off the disk.
        CONVERT     - Converts between different disk formats.
        CREATE      - Create a volume or partition.
        DELETE      - Delete an object.
        DETAIL      - Provide details about an object.
        EXIT        - Exit DiskPart
        EXTEND      - Extend a volume.
        HELP        - Prints a list of commands.
        IMPORT      - Imports a disk group.
        LIST        - Prints out a list of objects.
        INACTIVE    - Marks the current basic partition as an inactive partition.
        ONLINE      - Online a disk that is currently marked as offline.
        REM         - Does nothing. Used to comment scripts.
        REMOVE      - Remove a drive letter or mount point assignment.
        REPAIR      - Repair a RAID-5 volume.
        RESCAN      - Rescan the computer looking for disks and volumes.
        RETAIN      - Place a retainer partition under a simple volume.
        SELECT      - Move the focus to an object.

    It's like when you type help at the DiskPart prompt, so how do I get past this? This problem started when I plugged the stick into a laptop which had viruses, if that's any help.
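
    That help listing is itself the clue: DiskPart 5.1.3565 is the Windows XP build, and ATTRIBUTES is not in its command list (it arrived with Vista), so DiskPart is just printing help for a command it doesn't recognize. On XP, two common workarounds are running diskpart from a Vista-or-later machine, or clearing the registry write-protect policy - a common fix, not a guaranteed one, since protection set in the stick's own controller can't be cleared in software:

        :: Clear the system-wide USB write-protect policy, then unplug/replug the stick.
        reg add HKLM\SYSTEM\CurrentControlSet\Control\StorageDevicePolicies /v WriteProtect /t REG_DWORD /d 0 /f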

  • Postfix spool on ext3 optimiziations in >=linux-2.6.34 days

    - by Luke404
    Given the very specific nature of the subject (we're not talking about mailboxes, just the spool; we're not talking about other filesystems, just ext3; and so on...) and the maturity of the software involved (the Linux kernel, ext3fs, Postfix), I'd think there should be a more or less agreed-on set of best practices for filesystem-related tuning. I'm trying to get a roundup of them:

    - data=journal became the default in recent kernels (somewhere around 2.6.30 IIRC), so we should be OK with that.
    - Wietse Venema says atime must be on, but the Postfix documentation recommends noatime while talking about the Incoming Queue. Does that mean that Postfix needs atime on just for some queue directories and will benefit from noatime on the others? Can we use noatime if we just don't use ETRN?
    - The filesystem can be mounted nodev,noexec,nosuid - the no* options won't prevent you from setting attributes (Postfix uses the exec attr), they just won't have any effect (we don't run anything from the spool).
    - The fsync() issue cited by Wietse and/or the chattr -S are probably linked to the sync/async options of ext3fs, but I do not understand them enough. Is mounting the filesystem with the async option equivalent to chattr -R -S on the whole fs? It seems like that would increase performance, but will it pose a risk of "loss of mail after a system crash" or is it really "safe on /var/spool/postfix"?
    - Would you tune anything else on postfix-2.6.x to work better on ext3, or do you leave the defaults everywhere?
    - Is there a "best" Linux I/O scheduler for this kind of workload (namely CFQ or deadline?), or is that something that will vary too much based on hardware configuration?
    - Would you tune anything else in the filesystem or in the kernel? Anything else?

    References:

    - Postfix Performance here on SF
    - Postfix documentation about the Incoming Queue
    - Wietse Venema in Best file system on [email protected]
    - Postfix and ext3 on [email protected], here and there
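
    For concreteness, here is what those points come to as a single fstab line - a sketch only: the device name is made up, and the atime and data= choices are exactly the open questions above, so treat each option as a decision to make, not a recommendation:

        # /etc/fstab -- dedicated spool filesystem (illustrative device and option choices)
        /dev/sdb1  /var/spool/postfix  ext3  nodev,nosuid,noexec,noatime,data=journal  0  2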

  • How can I copy a VMware Fusion virtual machine to a FAT32 partition?

    - by Michael Prescott
    I created the virtual machine on a host running OS X. I then moved the machine to a FAT32 partition on an external drive. It moved the first time without error. Then I moved it from the external drive to a host running Ubuntu 9.10. I had to move it to a FAT32 partition first because Ubuntu doesn't recognize Mac OS Extended partitions on the drive.

    So, the virtual machine (VM) ran on the Ubuntu host for a while, and then I moved it back to the FAT32 partition and from there back to the OS X host. I worked on the VM for a while on the OS X host and then attempted to move it back to the FAT32 partition. I get the following system error:

        The Finder can't complete the operation because some data in "my-virtual-machine" can't be read or written. (Error code -36)

    Interestingly, I can move the file to another OS X partition, just not FAT32. I also perused VMware's forums and found advice to set permissions on all files and folders to 777. I did this, but have had no success. I notice the files within the VM package are 777 now, but there is an extended attributes symbol on their permission details: "rwxrwxrwx@".

    Since I can copy the VM between OS X partitions but not to non-OS X partitions, and all files and folders within the VM package (and the package itself) have permissions of 777, I speculate that the "@" is the problem. How can I remove the "@", or is there something else I need to modify to allow me to copy/move the VM to other hosts?
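
    Two things worth checking, sketched below (the .vmwarevm bundle name is taken from the error message; adjust the path). FAT32 cannot hold a file larger than 4 GiB, and a virtual disk that grew past that while you worked on the VM would produce exactly this "worked before, fails now" pattern - an assumption, not a diagnosis. If sizes are fine, stripping the extended attributes from a scratch copy is the direct test of the "@" theory:

        # Any file in the bundle over FAT32's 4 GiB per-file limit?
        find my-virtual-machine.vmwarevm -size +4G -exec ls -lh {} \;

        # Strip xattrs from a copy and try moving that instead
        # (xattr -c clears all attributes; very old xattr builds may only offer -d <name>).
        cp -R my-virtual-machine.vmwarevm /tmp/vm-copy
        find /tmp/vm-copy -exec xattr -c {} \;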

  • Can't open cocoa emacs from terminal using open -a

    - by Shane
    I installed emacs on my MacBook Air running Mac OS X 10.6.5 from this site: http://emacsformacosx.com/. I believe this is what used to be called Cocoa Emacs. I dragged it into my Applications folder and it works fine when I run it from there. I want to be able to run it from the Terminal.

    After some googling, I tried open -a /Applications/Emacs.app foo.txt (foo.txt was an existing file). I got two emacs windows - one with the welcome screen and one with foo.txt loaded. I tried a few applications in the /Applications directory and they did not seem to behave like this.

    I had installed it using my own account (an admin account), so after doing ls -l on /Applications I noticed that the owner and group were different from the other entries in this folder. I recursively changed the owner and group to root and wheel, like the others, but this did not help. The only thing that looks funny now is that ls -l shows a @ character, which has something to do with extended attributes, but I don't know how to check these.

    Any suggestions on what to check next? Is using the open command the only way to run the program? Can I simulate what it does using a shell script?
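
    The app bundle contains an ordinary executable, so open(1) isn't the only route; running the binary directly may also avoid the double-window behavior. A sketch (paths assume the standard /Applications install):

        # Terminal-only session, no separate GUI window:
        /Applications/Emacs.app/Contents/MacOS/Emacs -nw foo.txt

        # Or keep using the GUI app via an alias in ~/.bash_profile:
        alias emacs='open -a /Applications/Emacs.app'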

  • Tidy up old Windows Server Backup snapshots

    - by dty
    Hi, I'm running wbadmin from a scheduled job, backing up my C: and D: drives to my E:, and (I believe!) including the system state:

        wbadmin start backup -backuptarget:e: -include:c:,d: -allCritical -noVerify -quiet

    I'd like to delete old backups, but I'm concerned that all the information I can find says to use wbadmin to delete old system state backups and vssadmin to delete other backups. As far as I know, my backups ARE system state backups, but they are using VSS on E: for storage, so I'm worried about trying either of these techniques for fear of losing all my backups. This is a home network, so I don't have a spare server to test this on.

    I'm also happy to simply restrict the space used on E:, but I can't make sense of the difference between the /for and /on parameters of the relevant vssadmin command.

    For reference, here's the output of vssadmin show shadows:

        Contents of shadow copy set ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
           Contained 1 shadow copies at creation time: 07/01/2011 08:12:05
              Shadow Copy ID: {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
                 Original Volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
                 Shadow Copy Volume: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy83
                 Originating Machine: x.y.com
                 Service Machine: x.y.com
                 Provider: 'Microsoft Software Shadow Copy provider 1.0'
                 Type: DataVolumeRollback
                 Attributes: Persistent, No auto release, No writers, Differential

        [... repeated a lot ...]

    vssadmin show shadowstorage:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (C:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 0 B
           Allocated Shadow Copy Storage space: 0 B
           Maximum Shadow Copy Storage space: 5.859 GB

        Shadow Copy Storage association
           For volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (D:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 0 B
           Allocated Shadow Copy Storage space: 0 B
           Maximum Shadow Copy Storage space: 40.317 GB

        Shadow Copy Storage association
           For volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Shadow Copy Storage volume: (E:)\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
           Used Shadow Copy Storage space: 168.284 GB
           Allocated Shadow Copy Storage space: 171.15 GB
           Maximum Shadow Copy Storage space: UNBOUNDED

    wbadmin get versions:

        Backup time: 07/01/2011 03:00
        Backup target: 1394/USB Disk labeled xxxxxxxxx(E:)
        Version identifier: 01/07/2011-03:00
        Can Recover: Volume(s), File(s), Application(s), Bare Metal Recovery, System State

        [... repeated a lot ...]
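
    On the /for vs /on question: /for names the volume whose shadow copies are being accounted for (here E:, since the backups live in E:'s shadow storage), and /on names the volume that physically holds that storage. Capping the association bounds the space backups may consume, and Windows then prunes the oldest shadow copies on its own as the cap is reached (the 150 GB figure below is purely illustrative):

        vssadmin resize shadowstorage /for=E: /on=E: /maxsize=150GB

    Note that shrinking below the space currently used deletes the oldest snapshots immediately, so pick the cap with that in mind.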

  • Ruby net:LDAP returns "code = 53 message = Unwilling to perform" error

    - by Yong
    Hi, I am getting the error "code = 53, message = Unwilling to perform" while traversing the eDirectory treebase = "ou=Users,o=MTC". My Ruby script can read about 126 entries from eDirectory and then it stops and prints out this error. I do not have any clue why this is happening. I am using the Ruby net-ldap library, version 0.0.4. The following is an excerpt of the code:

        require 'rubygems'
        require 'net/ldap'

        ldap = Net::LDAP.new :host => "10.121.121.112",
                             :port => 389,
                             :auth => { :method   => :simple,
                                        :username => "cn=abc,ou=Users,o=MTC",
                                        :password => "123" }

        filter   = Net::LDAP::Filter.eq( "mail", "*mtc.ca.gov" )
        treebase = "ou=Users,o=MTC"
        attrs    = ["mail", "uid", "cn", "ou", "fullname"]

        i = 0
        ldap.search( :base => treebase, :attributes => attrs, :filter => filter ) do |entry|
          puts "DN: #{entry.dn}"
          i += 1
          entry.each do |attribute, values|
            puts "  #{attribute}:"
            values.each do |value|
              puts "  --->#{value}"
            end
          end
        end
        puts "Total #{i} entries found."
        p ldap.get_operation_result

    Here is the output, with the error at the end. Thank you very much for your help.

        DN: cn=uvogle,ou=Users,o=MTC
          mail:
          --->[email protected]
          fullname:
          --->Ursula Vogler
          ou:
          --->Legislation and Public Affairs
          dn:
          --->cn=uvogle,ou=Users,o=MTC
          cn:
          --->uvogle
        Total 126 entries found.
        OpenStruct code=53, message="Unwilling to perform"
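
    LDAP result 53 partway through a large traversal often means a server-side search limit was hit; eDirectory can refuse to continue past its configured limits. Whether that is the cause here can be probed from the shell, independent of Ruby, with a paged search (OpenLDAP's ldapsearch; the page size is arbitrary):

        ldapsearch -x -H ldap://10.121.121.112:389 \
          -D "cn=abc,ou=Users,o=MTC" -w 123 \
          -b "ou=Users,o=MTC" \
          -E pr=100/noprompt \
          "(mail=*mtc.ca.gov)" mail uid cn ou fullname

    If the paged search runs to completion, the fix on the Ruby side is paging: net-ldap 0.0.4 predates paged-results support, so upgrading the gem (or raising the directory's limits) are the usual routes.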

  • How to move your Windows User Profile to another drive in Windows 8

    - by Mark
    I like to have my user folder on a different drive (D:) than my OS is on (C:). Reading the following post, I decided to give it a try. All went quite well, until I found out that my Windows 8 apps won't execute anymore (other than that, I didn't notice any problems). The apps do work while using an account that isn't moved.

    In the Event Viewer I've found error messages like these:

        App <Microsoft.MicrosoftSkyDrive> crashed with an unhandled Javascript exception. App details are as follows:
            Display Name: <SkyDrive>
            AppUserModelId: <microsoft.microsoftskydrive_8wekyb3d8bbwe!Microsoft.MicrosoftSkyDrive>
            Package Identity: <microsoft.microsoftskydrive_16.4.4204.712_x64__8wekyb3d8bbwe>
            PID: <4452>
        The details of the JavaScript exception are as follows:
            Exception Name: <WinRT error>
            Description: <Loading the state store failed.>
            HTML Document Path: </modernskydrive/product/skydrive/App.html>
            Source File Name: <ms-appx://microsoft.microsoftskydrive/jx/jx.js>
            Source Line Number: <1>
            Source Column Number: <27246>
            Stack Trace:
                ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:27246 localSettings()
                ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:51544 _initSettings()
                ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:54710 getApplicationStatus(boolean)
                ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:48180 init(object)
                ms-appx://microsoft.microsoftskydrive/jx/jx.js:1:45583 Application(number, boolean)
                ms-appx://microsoft.microsoftskydrive/modernskydrive/product/skydrive/App.html:216:13 Anonymous function(object)

    Using ProcMon, I see a lot of ACCESS DENIED results, like these:

        Date & Time:    12-9-2012 9:32:20
        Event Class:    File System
        Operation:      CreateFile
        Result:         ACCESS DENIED
        Path:           D:\Users\John\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe\Settings\settings.dat
        TID:            2520
        Duration:       0.0000149
        Desired Access: Read Data/List Directory, Write Data/Add File, Read Control
        Disposition:    OpenIf
        Options:        Sequential Access, Synchronous IO Non-Alert, No Compression
        Attributes:     N
        ShareMode:      None
        AllocationSize: 0

    Any idea how to solve this? I noticed that the app folders, e.g. D:\Users\john\AppData\Local\Packages\microsoft.microsoftskydrive_8wekyb3d8bbwe, had a different owner than the old profile folder had. The old profile folder had john as owner, whereas my new profile folder had the Administrators group as owner. Changing this didn't help, unfortunately.
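
    Store apps run inside an AppContainer and access their state as the "ALL APPLICATION PACKAGES" principal, an ACE that a hand-moved profile typically loses - a plausible explanation for the ACCESS DENIED on settings.dat, though an assumption here. Restoring it on the moved package store (elevated prompt; paths are from the question) would look like:

        icacls "D:\Users\John\AppData\Local\Packages" /grant "ALL APPLICATION PACKAGES:(OI)(CI)(F)" /t
        :: Optionally also put the user back as owner of the whole profile tree:
        icacls "D:\Users\John" /setowner "John" /t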

  • Windows 7 search does not return results from indexed folders

    - by Dilbert
    I am experiencing this issue over and over again and I just cannot seem to find the answer. It doesn't make sense, but search simply does not return results from folders that certainly have the files inside. It's weird that this technology has existed for more than 5 years now (it could be added to Windows XP as an add-on), and they still haven't got it right.

    My folder contains 10 image files with the .png extension. Two scenarios:

    - Scenario 1: I exclude the folder using Indexing Options. Search works.
    - Scenario 2: I turn on indexing for this folder. Search does not work.

    Of course, Agent Ransack returns results every time. When I check Advanced Options for the Indexing Options inside Control Panel, .png files are checked in the File Types tab, using the "File Properties filter". What's the deal with this?

    [Edit] To clarify, this doesn't happen with all folders, but it does with more than one. For the "problematic" folders, even *.* doesn't return a single result. I found some advice to clear the archive and read-only attributes for all files (doesn't make sense, but hey), but it didn't work. Indexing status in Control Panel is: Indexing complete. 100,000 items indexed. The folder is included in the list. The file types list contains the .png extension (although it doesn't work with any filter, not even *.*).

  • Why is Outlook 2007 resizing images in outgoing and incoming HTML email? How can I fix this?

    - by Mikuso
    In my Outlook 2007 client, embedded images in incoming emails appear resized when the message is viewed. The incoming images are resized to 198px wide, regardless of the original size: if the original was larger, it is shrunk; if it was smaller, it is enlarged; the aspect ratio remains the same (the image is not stretched). This is local to my client only (i.e. another Outlook 2007 client reading the same IMAP inbox will see the image in the correct size; similarly, viewing the email message in a browser displays the correct size). This happens regardless of whether width/height attributes are set on the image tag in the HTML. Zoom is set to 100% in my message window; text and other elements are not distorted from the original.

    In addition to this, outgoing HTML messages with images embedded in the same way are resized as they are sent. The outgoing images are all scaled to a width of 247px. The source HTML of the outgoing message is changed after pressing Send, so that the tag's width attribute is 247 pixels, and the image file itself is also resized.

    These problems only occur with HTML messages; Rich Text messages do not have the same problem. I have already tried reinstalling Outlook and have it fully patched up to date. Why is this happening and how can I stop it?

  • JBoss https on port other than 8080 not working

    - by MilindaD
    We have a server with two JBoss instances, where one runs on 8080 and the other on 8081. We need to have HTTPS enabled for the 8081 server. First we tried enabling HTTPS on the 8080 port instance by generating the keystore and editing server.xml, and it worked. However, when we tried the same thing for 8081 it did not; note that we removed HTTPS from the 8080 server before enabling it for 8081.

    The server.xml below was used for both the 8080 and 8081 instances; the only difference was that the port was changed from 8080 to 8081 when trying to enable HTTPS for the 8081 port instance. What am I doing wrong and what needs to be changed?

    NOTE: When I say HTTPS was enabled for 8080, I mean that when you visit https://URL:8484 you will actually be visiting the 8080 port instance. However, when SSL is enabled for 8081 and I visit https://URL:8484, I get that the web page is unavailable.

    Commentless version:

        <Server>
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Service name="jboss.web">
            <!-- https -->
            <Connector port="8080" address="${jboss.bind.address}"
                       maxThreads="350" maxHttpHeaderSize="8192"
                       emptySessionPath="true" protocol="HTTP/1.1"
                       enableLookups="false" redirectPort="8443" acceptCount="100"
                       connectionTimeout="20000" disableUploadTimeout="true" compression="on"
                       compressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/>
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                       maxThreads="150" scheme="https" secure="true"
                       clientAuth="false" sslProtocol="TLS"
                       address="${jboss.bind.address}"
                       keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       keystorePass="aaaaaa"
                       truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       truststorePass="aaaaaa" />
            <!-- https1 -->
            <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
                       emptySessionPath="true" enableLookups="false" redirectPort="8443" />
            <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1">
              <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false"
                    configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" >
                <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
                <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
                       cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
                       transactionManagerObjectName="jboss:service=TransactionManager" />
              </Host>
            </Engine>
          </Service>
        </Server>

    Version with comments:

        <Server>
          <!-- APR library loader. Documentation at /docs/apr.html -->
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <!-- Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
          <Listener className="org.apache.catalina.core.JasperListener" />
          <!-- Use a custom version of StandardService that allows the connectors to be started
               independent of the normal lifecycle start to allow web apps to be deployed before
               starting the connectors. -->
          <Service name="jboss.web">
            <!-- A "Connector" represents an endpoint by which requests are received and
                 responses are returned. Documentation at:
                   Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
                   Java AJP Connector:  /docs/config/ajp.html
                   APR (HTTP/AJP) Connector: /docs/apr.html
                 Define a non-SSL HTTP/1.1 Connector on port 8080 -->
            <Connector port="8080" address="${jboss.bind.address}"
                       maxThreads="350" maxHttpHeaderSize="8192"
                       emptySessionPath="true" protocol="HTTP/1.1"
                       enableLookups="false" redirectPort="8443" acceptCount="100"
                       connectionTimeout="20000" disableUploadTimeout="true" compression="on"
                       compressableMimeType="text/html,text/css,text/javascript,application/json,text/xml,text/plain,application/x-javascript,application/javascript"/>
            <!-- Define a SSL HTTP/1.1 Connector on port 8443. This connector uses the JSSE
                 configuration; when using APR, the connector should be using the OpenSSL style
                 configuration described in the APR documentation -->
            <!--
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                       maxThreads="150" scheme="https" secure="true"
                       keystoreFile="${jboss.server.home.dir}/conf/zara.keystore"
                       keystorePass="zara2010" clientAuth="false" sslProtocol="TLS" compression="on" />
            -->
            <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                       maxThreads="150" scheme="https" secure="true"
                       clientAuth="false" sslProtocol="TLS"
                       address="${jboss.bind.address}"
                       keystoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       keystorePass="aaaaaa"
                       truststoreFile="${jboss.server.home.dir}/conf/supun1.keystore"
                       truststorePass="aaaaaa" />
            <!-- Define an AJP 1.3 Connector on port 8009 -->
            <Connector port="8009" address="${jboss.bind.address}" protocol="AJP/1.3"
                       emptySessionPath="true" enableLookups="false" redirectPort="8443" />
            <Engine name="jboss.web" defaultHost="localhost" jvmRoute="khms1">
              <!-- The JAAS based authentication and authorization realm implementation that is
                   compatible with the jboss 3.2.x realm implementation.
                   - certificatePrincipal : the class name of the
                     org.jboss.security.auth.certs.CertificatePrincipal impl used for mapping
                     X509[] cert chains to a Princpal.
                   - allRolesMode : how to handle an auth-constraint with a role-name=*,
                     one of strict, authOnly, strictAuthOnly
                     + strict = Use the strict servlet spec interpretation which requires that
                       the user have one of the web-app/security-role/role-name
                     + authOnly = Allow any authenticated user
                     + strictAuthOnly = Allow any authenticated user only if there are no
                       web-app/security-roles -->
              <Realm className="org.jboss.web.tomcat.security.JBossSecurityMgrRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              <!-- A subclass of JBossSecurityMgrRealm that uses the authentication behavior of
                   JBossSecurityMgrRealm, but overrides the authorization checks to use JACC
                   permissions with the current java.security.Policy to determine authorized access.
              <Realm className="org.jboss.web.tomcat.security.JaccAuthorizationRealm"
                     certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
                     allRolesMode="authOnly" />
              -->
              <Host name="localhost" autoDeploy="false" deployOnStartup="false" deployXML="false"
                    configClass="org.jboss.web.tomcat.security.config.JBossContextConfig" >
                <!-- Uncomment to enable request dumper. This Valve "logs interesting contents from
                     the specified Request (before processing) and the corresponding Response
                     (after processing). It is especially useful in debugging problems related to
                     headers and cookies." -->
                <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve" /> -->
                <!-- Access logger -->
                <!-- <Valve className="org.apache.catalina.valves.AccessLogValve"
                           prefix="localhost_access_log." suffix=".log" pattern="common"
                           directory="${jboss.server.log.dir}" resolveHosts="false" /> -->
                <!-- Uncomment to enable single sign-on across web apps deployed to this host.
                     Does not provide SSO across a cluster. If this valve is used, do not use the
                     JBoss ClusteredSingleSignOn valve shown below. A new configuration attribute
                     is available beginning with release 4.0.4: cookieDomain configures the domain
                     to which the SSO cookie will be scoped (i.e. the set of hosts to which the
                     cookie will be presented). By default the cookie is scoped to "/", meaning
                     the host that presented it. Set cookieDomain to a wider domain (e.g. "xyz.com")
                     to allow an SSO to span more than one hostname. -->
                <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->
                <!-- Uncomment to enable single sign-on across web apps deployed to this host AND
                     to all other hosts in the cluster. If this valve is used, do not use the
                     standard Tomcat SingleSignOn valve shown above. Valve uses a JBossCache
                     instance to support SSO credential caching and replication across the cluster.
                     The JBossCache instance must be configured separately. By default, the valve
                     shares a JBossCache with the service that supports HttpSession replication.
                     See the "jboss-web-cluster-service.xml" file in the server/all/deploy directory
                     for cache configuration details. Besides the attributes supported by the
                     standard Tomcat SingleSignOn valve (see the Tomcat docs), this version also
                     supports the following attributes:
                       cookieDomain   see above
                       treeCacheName  JMX ObjectName of the JBossCache MBean used to support
                                      credential caching and replication across the cluster. If not
                                      set, the default value is
                                      "jboss.cache:service=TomcatClusteringCache", the standard
                                      ObjectName of the JBossCache MBean used to support session
                                      replication. -->
                <Valve className="org.jboss.web.tomcat.service.sso.ClusteredSingleSignOn" />
                <!-- Check for unclosed connections and transaction terminated checks in
                     servlets/jsps. Important: The dependency on the CachedConnectionManager in
                     META-INF/jboss-service.xml must be uncommented, too -->
                <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
                       cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
                       transactionManagerObjectName="jboss:service=TransactionManager" />
              </Host>
            </Engine>
          </Service>
        </Server>
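
    One thing the configuration above suggests: if both instances keep <Connector port="8443"> (and redirectPort="8443"), only whichever instance starts first can actually bind 8443; the second instance's SSL connector then fails to start, which matches "page unavailable" once HTTPS is moved to the 8081 instance. That is an assumption worth verifying before changing anything:

        # Which process owns the SSL port right now? (Linux shown; use netstat -ano on Windows.)
        netstat -tlnp | grep ':8443'
        # Also check the second instance's boot log for a BindException on 8443.

    The usual remedy is giving the second instance its own SSL port (e.g. 8543 - illustrative) or running it with a shifted port set (the ports-01 service binding in JBoss), so the two instances never compete for 8443.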

  • CIFS Mounting Permissions

    - by malco
    I have an issue that I'm going round in circles with; I hope you can help.

    The setup:

    - Server 1 (CIFS client) - CentOS 6.3, AD-integrated using Samba/Winbind and idmap_ad
    - Server 2 (CIFS server) - CentOS 6.3, AD-integrated using Samba/Winbind and idmap_ad

    All users (apart from root) are AD-authenticated, and this, including groups etc., works happily.

    What's working: I have created a share on Server 2:

        [share2]
        path = /srv/samba/share2
        writeable = yes

    Permissions on the share:

        drwxrwx---. 2 root domain users 4096 Oct 12 09:21 share2

    I can log into a Windows machine as user5 (a member of domain users) and everything works as it should; for example, if I create a file it shows the correct permissions and attributes on both the MS and the Linux sides.

    Where I fall down: I mount the share on Server 1 using:

        # mount //server2/share2 /mnt/share2/ -o username=cifsmount,password=blah,domain=blah

    Or using fstab:

        //server2/share2  /mnt/share2  cifs  credentials=/blah/.creds  0 0

    This mounts fine, but... if I su, or log onto Server 1 as a normal user (say user5), and try to create a file I get:

        # touch test
        touch: cannot touch `test': Permission denied

    Then if I check the folder, the file was created, but as the cifsmount user:

        -rw-r--r--. 1 cifsmount domain users 0 Oct 12 09:21 test

    I can rename, delete, move or copy stuff around as user5; I just can't create anything. What am I doing wrong? I'm guessing it's something to do with the mount action, as when I log onto Server 2 as user5 and access the folder locally it all works as it should. Can anyone point me in the right direction?
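
    With a single-credential mount, the client synthesizes local ownership (everything appears as cifsmount) and then enforces Unix permissions against that, which is consistent with creates failing for user5 even though the server would allow them. Two mount-side options worth trying - a sketch, and note that availability depends on the kernel and cifs-utils shipped with CentOS 6.3:

        # Option 1: skip client-side permission checks; the server's ACLs still apply.
        mount //server2/share2 /mnt/share2 -o credentials=/blah/.creds,noperm

        # Option 2: per-user sessions, so each user reaches the share with their own
        # credentials (needs a multiuser-capable cifs module plus cifscreds).
        mount //server2/share2 /mnt/share2 -o multiuser,sec=krb5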

  • How to see the properties of a DOM element as they change in realtime?

    - by allquixotic
    JavaScript code can update the properties/attributes of DOM elements in real time by responding to events and so on. Here is an example: in the table on that page, move your mouse over the cells. Notice how they change color when the mouse is on them, and the color goes away when you move the mouse to another cell.

    Now, using Firefox or Chrome (but not IE, Opera, etc.), I want to examine the background color, expressed in RGB or hex or whatever, of the cells, updated in real time as the mouse cursor enters and leaves the region and causes the JS to do its thing.

    The behavior that I observe, currently, is that the Inspect Element functionality of both Firefox and Chrome does not update the values of the properties as they are updated by JavaScript. So, in order to view the latest value of a property, I have to inspect the element again, which takes a momentary "snapshot" of the values. But since the values only change while I have the mouse on the cell, I can't take a snapshot of the value I want while my mouse cursor is over the cell, because I have to remove my mouse from the cell to select the "Inspect Element" item in the right-click list!

    If it is possible to have the values updated in real time using either Firefox or Chrome, or an extension, on any recent version of the software (up to the latest stable), please provide instructions for doing so.

  • How do I change file protections running XP on a disk from Windows Server?

    - by cdkMoose
    I had a Windows Server 2003 machine running at home, along with my desktop, which I use for development. The server went belly up, but since my desktop is reasonably powerful, I figured I would move the disk from the file server (the disk was OK) into my XP machine to keep all of the files.

    The disk comes up fine and shows all of the files, but I have been getting Access Denied errors when trying to work with some of them. When I display attributes in Explorer, none of the files are marked read-only. When I view properties on the directories, the Read-only checkbox is not checked, but has a green background (which I thought meant mixed usage for files in the directory). When I click on the checkbox to clear it and click Apply, the disk does some work and all looks well. However, I continue to get the Access Denied errors, the files still don't show any read-only attribute, and the directory properties show the green background again on the Read-only checkbox. I did check the box which says to apply the change to the folder and all files/subfolders under it.

    I am assuming that the issue relates to user IDs/permissions carried over from the server install. So, why does it let me think I can change the attribute when I can't, and how can I correct this problem so that the disk correctly recognizes the IDs from XP?
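
    The read-only checkbox is likely a red herring here: the NTFS ACLs on the disk still reference accounts from the dead server's domain, which the XP box can't resolve, hence Access Denied regardless of attributes. The usual XP-era route is to take ownership first (Explorer: Properties > Security > Advanced > Owner, ticking "Replace owner on subcontainers and objects"), then rewrite the ACLs with the built-in cacls - path and account below are illustrative:

        rem Grant your XP account Full Control down the tree, editing (not replacing) ACLs:
        cacls D:\data /t /e /g YOURPC\YourUser:F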

  • How to verify TRIM/discard on encrypted swap?

    - by svarni
    I am using an encrypted swap partition, via ecryptfs-setup-swap, on my Ubuntu 13.04 computer with an SSD. I have manually set up TRIM for my ext4 root partition (simply by adding the "discard" option in /etc/fstab). I also manually ran fstrim on the root partition prior to booting, and using dstat I saw that for a few seconds several GB/s of data were written to the disk. That was presumably the effect of the trim command. These high write rates are reproducible by deleting huge files and did not occur before setting up TRIM, so I take them as evidence that trim/discard is working.

    Manually enabling TRIM on my root partition has stopped the wear-out of my precious new disk: from 365 used reserved blocks (out of 6176 total) within three months, down to 0 additional used reserved blocks within three additional months (data from SMART attributes).

    Because I want to minimize the wear-out of my SSD, I now would like to know whether my swap partition (which is encrypted using ecryptfs-setup-swap) also makes use of the trim/discard option. I tried:

        sudo swapon -d -v /dev/mapper/cryptswap1

    but did not receive particular information ("-v") about whether trim/discard ("-d") was applied. If it were unsupported, I would expect a message. Then I tried:

        sudo dd if=/dev/sda6 count=1 bs=1M | xxd | less

    directly after booting, when no swap space was in use, but I saw not only zeroes. I assume that, when looking at freshly trimmed regions, the disk would return zeroes instead of random sectors (and according to some forums, (unencrypted) swap space is trimmed once upon boot).

    Long story short: are there any ideas on how to test whether TRIM is effectively used for my encrypted swap? And if not, any ideas on how to - at least manually, for once - trim the whole swap space? I wouldn't want to tinker with the partition itself, because I don't know if it needs to be reinitialized as (encrypted) swap - I don't want to be left with an unbootable system :)
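
    For a one-off manual discard, one approach is to tear the swap down, discard the raw partition, and let the cryptswap machinery rebuild it on reboot. This is a sketch: it destroys the (disposable, random-keyed) swap contents, the device names are taken from the question, and blkdiscard only exists in util-linux >= 2.23, which may be newer than what 13.04 ships:

        sudo swapoff /dev/mapper/cryptswap1
        # Remove the dm-crypt mapping so the raw partition is free:
        sudo cryptsetup remove cryptswap1
        # Discard every sector of the underlying partition (DESTROYS ITS CONTENTS):
        sudo blkdiscard /dev/sda6
        # Reboot: /etc/crypttab + /etc/fstab should reassemble cryptswap as on every boot.

    Separately, whether discards pass through dm-crypt at all depends on the mapping being created with the kernel's allow_discards flag, so the absence of zeroes in the dd test is not by itself proof that TRIM is failing.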

  • What's a good solution for file-tagging in linux?

    - by julien
    I've been looking for a way to tag my files and search/filter them based on those tags. Here are my (updated) requirements:

    - any file readable by the user can be tagged freely
    - a user can search for files matching one or several tags
    - files can be moved around without losing the previously associated tags
    - the system could be backed up easily
    - no dependencies on any desktop environment
    - if any GUI is involved, there must be a CLI fallback

    I've been hoping for some basic filesystem & coreutils hackery to handle this, but I haven't thought about this hard enough yet. Meanwhile I'll review Beagle and metatracker, which have been mentioned here, and see how they perform.

    OK, so Beagle has huge GNOME dependencies, and Tracker is OK-ish but still has some dependencies I don't like... I've been doing some more research, and the way to go could very well be extended file attributes. That's a native solution for most recent filesystems, but they aren't very well supported yet (most coreutils destroy them by default; cp, for example, needs the -a flag to preserve them). I would like to hear some thoughts on using them while I try my hand at some hacks myself, even though this might warrant a new question.
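
    Since extended attributes are already on the table, here is what the minimal xattr-based version looks like with the stock attr tools (the user.tags name is my own convention, not a standard, and the filesystem must be mounted with xattr support):

        # Tag a file:
        setfattr -n user.tags -v "thesis,todo" notes.pdf

        # Read its tags back:
        getfattr -n user.tags --only-values notes.pdf

        # Find files carrying a tag anywhere under the current tree:
        getfattr -R -n user.tags --absolute-names . 2>/dev/null | grep -B1 thesis

    Moves with mv keep xattrs on the same filesystem; for copies and backups you need xattr-aware tools (cp -a, rsync -X, tar --xattrs), which is the main operational cost of this approach.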

  • Unrelated Files Corrupted on System Restore

    - by Yar
    I restored OS X 10.6.2 today (it was 10.6.3 and not booting) by copying the system over from a backup. The data directories were not touched. In the data directories, I'm seeing some files as 0 bytes and getting permission-denied errors when copying, even when using sudo cp or the Finder itself. Some programs, by contrast, take the files at face value and see no permission problems (such as zip), but they see the files as zero bytes, which would be game over for recovery.

        cp: .git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: could not copy extended attributes to /eraseme/blah/.git/objects/fe/86b676974a44aa7f128a55bf27670f4a1073ca: Operation not permitted

    I have tried sudo chown, sudo chmod -R 777 and sudo chflags -R nouchg, which do not change the end result. Strangely, this is only affecting my .git directories (perhaps because they start with a period - but renaming them, which works, does not change anything).

    What else can I do to take ownership of these files?

    Edit: This question comes from Stack Overflow because I originally thought it was a Git problem. It's definitely not (just) Git. Anyway, this is to help put some of the comments in context.
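
    Since cp is failing specifically on the extended attributes, copying while deliberately dropping them is worth a try before anything more drastic - both tools ship with OS X, and the copied repository should be verified afterwards (e.g. git fsck inside the copy):

        # ditto, skipping extended attributes and resource forks:
        ditto --noextattr --norsrc .git /eraseme/blah/.git

        # or BSD cp's -X flag, which omits xattrs from the copy:
        cp -RX .git /eraseme/blah/.git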
