Search Results

Search found 5001 results on 201 pages for 'mailing lists'.

  • SharePoint 2010 search error gives me a correlation ID, the event viewer is empty, and there is no log

    - by saber tabatabaee yazdi
    We have a SharePoint 2010 farm that had been working properly until last week. We configured and started the search service last week and it also worked; when we tested it with some criteria, results appeared, so we were very happy. But after a few days, searching started producing an error. After that we deleted the service application and reconfigured it, and now this error occurs whenever we open any document library, although lists open correctly without any error: An unexpected error has occurred. Troubleshoot issues with Microsoft SharePoint Foundation. Correlation ID: 5becf903-d13e-4490-a23c-d7e4f68ca769. Please help us.

    Read the article

  • List of registered domain names

    - by Eric Chang
    Where can I find a list of domain names that I can download as a text file? I looked around and found many sites with lists of expired domain names or domain names that will expire soon, but that's not what I need; I need a list of CURRENTLY REGISTERED domain names. The closest I found was this site (hxxp://www.list-of-domains.org/), but what I'm ideally looking for is just a plain list of registered domains, not web links split across many ad-filled pages with no text file. I know it's possible; after all, how do these sites get the lists of domains that they use? Where can I find such a list?

    Read the article

  • Troubleshooting a high SQL Server Compilation/Batch-Ratio

    - by Sleepless
    I have a SQL Server (quad core x86, 4GB RAM) that constantly has almost the same values for "SQLServer:SQL Statistics: SQL compilations/sec" and "SQLServer:SQL Statistics: SQL batches/sec". This could be interpreted as a server running 100% ad hoc queries, each one of which has to be recompiled, but this is not the case here. The sys.dm_exec_query_stats DMV lists hundreds of query plans with an execution_count much larger than 1. Does anybody have any idea how to interpret / troubleshoot this phenomenon? BTW, the server's general performance counters (CPU,I/O,RAM) all show very modest utilization.
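
    If it helps, the plan cache can be checked directly; the sketch below is only illustrative (it assumes sqlcmd is installed on the box and Windows authentication works, and the server name is a placeholder). It is also worth comparing the SQL Re-Compilations/sec counter against Compilations/sec, since recompiles of cached plans raise the compilation rate just like ad hoc compiles do.

        # Hedged sketch: list the most frequently reused plans. Many rows with a
        # high execution_count would confirm that plans really are being reused.
        sqlcmd -S localhost -E -Q "SELECT TOP 20 qs.execution_count, qs.plan_generation_num, SUBSTRING(st.text,1,80) AS query_start FROM sys.dm_exec_query_stats qs CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st ORDER BY qs.execution_count DESC"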

    Read the article

  • prevent apt dependency from being satisfied (permanently)

    - by Bryan Hunt
    I want to install mailman (just to use its mail archiving feature), but Ubuntu wants to pull down a load of extra dependencies. sudo apt-get install mailman Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: apache2 apache2-mpm-worker apache2.2-common Suggested packages: apache2-doc apache2-suexec apache2-suexec-custom spamassassin lynx listadmin Is there any way to mark those packages (apache2 apache2-mpm-worker apache2.2-common) as never to be installed? This is not 2002 ;)
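
    One approach that might fit is APT pinning, sketched below under the assumption that the apache2 packages come in as Recommends rather than hard Depends; a priority of -1 tells APT never to consider them. The file name /etc/apt/preferences.d/no-apache is arbitrary.

        # Hedged sketch: blacklist the apache2 packages with a negative pin.
        for p in apache2 apache2-mpm-worker apache2.2-common; do
          printf 'Package: %s\nPin: release *\nPin-Priority: -1\n\n' "$p"
        done | sudo tee /etc/apt/preferences.d/no-apache
        # Installing without Recommends may also be enough on its own:
        sudo apt-get install --no-install-recommends mailman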

    Read the article

  • Sub-process /usr/bin/dpkg returned an error code (1)

    - by rohit
    Hey friends, I am getting the following error when I try to purge shorewall: root@aptosid:/etc# apt-get purge shorewall Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: shorewall* 0 upgraded, 0 newly installed, 1 to remove and 3 not upgraded. 1 not fully installed or removed. After this operation, 1,843 kB disk space will be freed. Do you want to continue [Y/n]? (Reading database ... 212702 files and directories currently installed.) Removing shorewall ... : not found/shorewall: 25: /etc/default/shorewall: :q Stopping "Shorewall firewall": not done (check /var/log/shorewall-init.log). invoke-rc.d: initscript shorewall, action "stop" failed. dpkg: error processing shorewall (--purge): subprocess installed pre-removal script returned error exit status 1 configured to not write apport reports Errors were encountered while processing: shorewall E: Sub-process /usr/bin/dpkg returned an error code (1) root@aptosid:/etc# Please help me out.
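
    Reading that output, the "/etc/default/shorewall: :q" part suggests a stray ":q" (a leftover vi command) on line 25 of /etc/default/shorewall, which the pre-removal script sources. A minimal sketch, assuming that reading is right, would be to inspect and clean that file and then retry the purge:

        sed -n '20,30p' /etc/default/shorewall      # confirm what is actually on line 25
        sudo sed -i '25d' /etc/default/shorewall    # drop the offending line if it is the stray :q
        sudo apt-get purge shorewall                # retry; the prerm script should now run cleanly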

    Read the article

  • How to gray out HTML form inputs?

    - by typoknig
    What is the best way to gray out text inputs on an HTML form? I need the inputs to be grayed out when a user checks a checkbox. Do I have to use JavaScript for this (not very familiar with JScript) or can I use PHP (which I am more familiar with)? EDIT: After some reading I have got a little bit of code, but it is giving me problems. For some reason I cannot get my script to work based on the state of the form input (enabled or disabled) or the state of my checkbox (checked or unchecked), but my script works fine when I base it on the values of the form inputs. I have written my code exactly like several examples online (mainly this one) but to no avail. None of the stuff that is commented out will work. What am I doing wrong here?

        <label>Mailing address same as residential address</label>
        <input name="checkbox" onclick="disable_enable()" type="checkbox" style="width:15px"/><br/><br/>
        <script type="text/javascript">
        function disable_enable(){
            if (document.form.mail_street_address.value==1)
                document.form.mail_street_address.value=0;
                //document.form.mail_street_address.disabled=true;
                //document.form.mail_city.disabled=true;
                //document.form.mail_state.disabled=true;
                //document.form.mail_zip.disabled=true;
            else
                document.form.mail_street_address.value=1;
                //document.form.mail_street.disabled=false;
                //document.form.mail_city.disabled=false;
                //document.form.mail_state.disabled=false;
                //document.form.mail_zip.disabled=false;
        }
        </script>

    Read the article

  • can't install sun jdk on ubuntu karmic 9.10

    - by pstanton
    I've just started using a new EC2 install of Ubuntu karmic 9.10, and my first task was to install the JDK. When running apt-get install sun-java6-jdk I get the following: Reading package lists... Done Building dependency tree Reading state information... Done Package sun-java6-jdk is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package sun-java6-jdk has no installation candidate I've tried apt-get update as well as adding deb http://us.archive.ubuntu.com/ubuntu/ karmic universe to /etc/apt/sources.list Any ideas, fellas?
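
    For what it's worth, on karmic the sun-java6-* packages were shipped in the multiverse component rather than universe (an assumption worth double-checking for this image), so a sketch of the usual fix would be:

        sudo sh -c 'echo "deb http://us.archive.ubuntu.com/ubuntu/ karmic multiverse" >> /etc/apt/sources.list'
        sudo apt-get update
        sudo apt-get install sun-java6-jdk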

    Read the article

  • How to tell Windows 7 to ignore a default gateway

    - by zildjohn01
    I currently have 2 network cards in my PC -- one connected to an internal network on a router with a disconnected WAN port (10.x.x.x), and one connected to the internet through a consumer router (192.168.0.x). Windows seems to recognize them correctly (my "Network and Sharing Center" lists them as "No Internet" and "Internet" respectively); however, when I try browsing the internet it always uses the internal network's default gateway, rather than the one with internet access. Trying to ping a website results in "Reply from 10.0.0.1: Destination net unreachable.". A simple "route delete 0.0.0.0 mask 0.0.0.0 10.0.0.1" fixes the problems, but they return upon reboot, or upon renewing my IP. Is there any way to tell Windows to ignore one NIC's default gateway, or to at least give them priorities?

    Read the article

  • Supermicro SmartOS setup issue with HDDs

    - by Andrew B.
    I'm trying to setup SmartOS on my new Supermicro server (SYS-6027R-N3RF4+). I have 8 HDDs + 2 SSDs installed in the hot-swappable drive bays. I have not configured the Intel RAID controller on the machine (it shows no RAID configured, and lists all 8 HDDs). When I boot SmartOS from my USB key for the first time and get to the initial zpool creation I only see 2 disks listed. How can I tell which two drives are being referenced? How can I get SmartOS to recognize all 8+2 drives? Is there a BIOS setting I need to adjust?

    Read the article

  • SharePoint - force user to accept AUP when first logging in etc

    - by Chris W
    We're looking to move a bespoke intranet across to SharePoint. One query that has come up is whether we can do the following easily: When a user logs in for the first time they should be forced to read and accept an Acceptable Use Policy for the site. Accept a separate agreement that relates to their data being shared with other parties. (Optional) Upload their profile photo. They can skip this step if they don't have one, but they should be prompted to do it each time they log in subsequently. The above is all nice and easy in a bespoke app, but I can't see how to do this with SharePoint. Can we build a custom workflow that is tied to the user logging in? So far I can only find how to attach workflows to libraries and lists.

    Read the article

  • Subversion multi checkout post-commit hook?

    - by FLX
    The title must sound strange but I'm trying to achieve the following: SVN repo location: /home/flx/svn/flxdev SVN repo "flxdev" structure: + Project1 ++ files + Project2 + Project3 + Project4 I'm trying to set up a post-commit hook that automatically checks out on the other end when I do a commit. The post-commit doc explicitly lists the following: # POST-COMMIT HOOK # # The post-commit hook is invoked after a commit. Subversion runs # this hook by invoking a program (script, executable, binary, etc.) # named 'post-commit' (for which this file is a template) with the # following ordered arguments: # # [1] REPOS-PATH (the path to this repository) # [2] REV (the number of the revision just committed) So I made the following command to test: REPOS="$1" REV="$2" echo "Updated project $REPOS to $REV" However when I edit files in Project1 for example, this outputs "Updated project /home/flx/svn/flxdev to 1016" I'd like this to be: "Updated project Project1 to 1016" Having this variable allows me to specify to do different actions per project post-commit. How can I specify the project parameter? Thanks! Dennis
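
    A minimal sketch of one way to get the project name, assuming each project lives in a top-level directory of the repository: ask svnlook which directories the revision touched and take the first path component.

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        # First directory changed in this revision, e.g. "Project1/trunk/src" -> "Project1"
        PROJECT=$(svnlook dirs-changed "$REPOS" -r "$REV" | head -n 1 | cut -d/ -f1)
        echo "Updated project $PROJECT to $REV"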

    Read the article

  • Get sessions' remote IP from Teamviewer log file

    - by etuardu
    I'd like to know who has logged in to my machine and when. I have two TeamViewer log files: Connections_incoming.txt and TeamViewer7_Logfile.log. The first one is quite plain and lists, as its name says, the incoming connections to the machine, reporting the local name of the remote host, login time, logout time, and some IDs, e.g.: 173274362 MYLAPTOP 20-02-2012 17:32:16 20-02-2012 17:50:42 Master RemoteControl {C5AAE483-ED0B-54B8-9235-7AE597CAD342} This is almost all I need, but unfortunately no remote IP address is reported here, so I checked for IPs in TeamViewer7_Logfile.log, but it is really messy. It does contain some IP addresses, but I can't tell which one goes with which entry in the first log file. Is there a way to correlate the two logs to get what I need? Should I search the second file for some particular text? What do you suggest?
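
    A rough sketch of one way to correlate them, assuming the session ID from Connections_incoming.txt also appears somewhere in the main log and that the logs are on (or copied to) a box with standard Unix tools; the ID below is the one from the example line and the context window is arbitrary.

        ID=173274362
        # Show a few lines of context around every mention of the session ID,
        # then pull out any IPv4 addresses that appear in that context.
        grep -n "$ID" TeamViewer7_Logfile.log | cut -d: -f1 | while read -r n; do
            sed -n "$(( n>5 ? n-5 : 1 )),$((n+5))p" TeamViewer7_Logfile.log
        done | grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -u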

    Read the article

  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4 TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to: 1) kill the Cassandra process on one server of the cluster; 2) start it again, wait for the commit log to be written to disk, and kill it again; 3) make the modifications in the storage.xml file; 4) rename or delete the files in the data directories according to the changes we made; 5) start Cassandra; 6) go to step 1 with the next server on the list. My questions would be: Did I understand the process well? Is there any risk of data corruption? During the process, there will be servers with different versions of the storage.xml file in the same cluster, same keyspace. Is that a problem? Same question as above if we not only add, rename, and remove ColumnFamilies, but also change the CompareWith parameter / transform an existing column family into a super one. Or do we need to change the name? Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.

    Read the article

  • Getting Recognition for Open-Source Computer Language Projects

    - by Jon Purdy
    I like language a lot, so I write a lot of language-based solutions for programming, automation, and data definition. I'm very much a believer in open-source software, so lately I've started to push these projects to Sourceforge when I start them. I feel that these tools could be quite valuable in the right hands, and that they fill niches that otherwise go unfilled. The trouble, for me, is gaining recognition. No matter how useful the software I write, after a certain point I can no longer come up with anything to add or improve. Basically no one but me uses it, so it's not being attacked from enough angles to discover any new weaknesses. I cannot work on a project that doesn't have anything to do, but I won't have anything to do unless I gain recognition by working on it! This is greatly discouraging. It's like giving what you think is a really thoughtful gift to someone who just isn't paying attention. So I'm looking for advice on how to network and disseminate information about my projects so that they don't fizzle out like this. Are there any sites, newsgroups, or mailing lists that I've been completely missing?

    Read the article

  • JSF/ADF/PPR can't refresh the page as expected

    - by Nhut Le
    Hi, I am having issues with JSF/ADF/PPR where the page does not refresh as expected. I have a selectManyCheckbox with 5 options in it; one of the options is 'All'. If a user checks that checkbox, I should check all the others.

        <h:panelGrid styleClass="myBox leftAligned" id="applyChangesBox">
          <af:selectManyCheckbox id="changesCheckedBox" autoSubmit="true" label="Hello: "
                                 value="#{updateForm.applyChangesList}"
                                 valueChangeListener="#{updateForm.testValueChanged}">
            <af:selectItem value="A" label="All Changes"/>
            <af:selectItem value="R" label="Residential Address"/>
            <af:selectItem value="M" label="Mailing Address"/>
            <af:selectItem value="P" label="Personal Phone/Fax Numbers"/>
            <af:selectItem value="E" label="Personal Email Addresses"/>
          </af:selectManyCheckbox>
          <af:outputText value="#{updateForm.testValue}" partialTriggers="changesCheckedBox"/>
        </h:panelGrid>

    I am using a valueChangeListener, and I can see my bean updated and printed out correctly, but the page does not refresh and check all the other checkboxes when it should.

    Read the article

  • Offline Outlook 2007 global address book slow to update

    - by munrobasher
    Outlook 2007 talking to an Exchange 2007 server. Usual set-up of personal contacts and a site wide global address book. Distribution lists are often created by IT in the global address book but it sometimes takes days for them to appear in the local offline copy. Performing a manual download of the address book doesn't help. Problem doesn't occur with non-offline mode of Outlook 2007. Any ideas? Server or client side issue? Cheers, Rob.

    Read the article

  • Why aren't Heroku syslog drains logging to rsyslogd?

    - by Benjamin Oakes
    I'm having a problem using syslog drains as described in https://devcenter.heroku.com/articles/logging. To summarize, I have an Ubuntu 10.04 instance on EC2 that is running rsyslogd. I've also set up the security groups as they describe, and added a syslog drain using a command like heroku drains:add syslog://host1.example.com:514. I can send messages from the Heroku console to my rsyslogd instance via nc. I see them appear in the log file, so I know there isn't a firewall/security group issue.  However, Heroku does not seem to be forwarding log messages to the server that heroku drains lists. I would expect to see HTTP requests, Rails messages, etc. Is there something else I can try to figure this out? I'm new to rsyslogd, so I could easily be missing something.
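
    One possibility, offered only as a guess since the post doesn't say whether the nc test used TCP or UDP: Heroku delivers syslog:// drains over TCP, so rsyslogd needs a TCP listener on port 514 in addition to any UDP one. A sketch of enabling it on Ubuntu 10.04, using the legacy rsyslog directive syntax of that release:

        printf '%s\n' '$ModLoad imtcp' '$InputTCPServerRun 514' | sudo tee /etc/rsyslog.d/49-heroku-drain.conf
        sudo service rsyslog restart
        sudo netstat -plnt | grep 514      # confirm rsyslogd is now listening on TCP 514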

    Read the article

  • SharePoint Web Analytics not tracking usage for main application

    - by Chris W
    My SP 2010 setup is two separate applications - one for the main portal and one for MySite. Whilst WebAnalytics is tracking usage of MySite it's not showing any stats for the main Portal. The only thing it lists is the number of site collections but no page views etc. The WA service is clearly running to pick up data for MySite. In Configure web analytics and health data collection everything is ticked. I can't find any obvious settings that are different between the two applications. Where should I look to get usage tracking correctly?

    Read the article

  • I am not seeing named sessions in GNU screen

    - by dorelal
    I am trying to learn GNU screen. I am using a Mac (Snow Leopard) and running version 4.00.03 of screen. I start a new screen with the following command: screen -S foo However, after that, if I do Ctrl-a " then I see the list of windows. All the entries show a number and then "bash". Because all it says is 'bash', I can't figure out which window has what. Am I missing something?
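
    A small sketch that may help: the entries in the Ctrl-a " list are windows, and they show "bash" only because that is their default title, so giving each window a name makes the list readable. The window titles and the tail command below are made-up examples.

        screen -S work -t editor      # named session whose first window is titled "editor"
        # inside the session, open more titled windows with:  Ctrl-a : screen -t logs
        # or rename the current window with:                  Ctrl-a A
        screen -S work -X screen -t logs tail -f /var/log/system.log   # add a titled window from outside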

    Read the article

  • mutt, smime, decrypt with one of two different keys

    - by munin
    This is an odd one. We want to have an encrypted e-mail list. There are a few ways to do this, but in the interim what we've done is create a public/private keypair via openssl for our e-mail list ([email protected]) and then distribute the keypair amongst the list participants (ugh). When someone posts to the list, they encrypt using the list's public key, and everyone has the private key (ugh), so it 'works'. MUAs like Outlook and Thunderbird work with this setup. Mutt has a problem though - it seems to decrypt an S/MIME message only with the private key associated with your own e-mail address. So when someone sends an e-mail to the list address, my MUA won't decrypt it. How can I tell Mutt about this second private key?
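
    A hedged sketch of one approach (the muttrc variable names are standard, but the file names and the assumption that this mutt build uses the classic smime_keys store are mine): import the list keypair into mutt's S/MIME store and fall back to it as the default decryption key.

        # Bundle the list cert and key into PKCS#12, then let mutt's helper import it;
        # it prints the key ID (a hash) it was filed under.
        openssl pkcs12 -export -in list.crt -inkey list.key -out list.p12
        smime_keys add_p12 list.p12
        # Then in ~/.muttrc, point the default key at that hash:
        #   set smime_default_key = "<hash printed by smime_keys>"
        #   set smime_decrypt_use_default_key = yes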

    Read the article

  • Excel - Linking multiple source spreadsheets with variable numbers of rows to a destination spreadsheet

    - by Emilio
    I have multiple source spreadsheets, each with a variable number of rows. An example might be one spreadsheet per bank account, with one row for each transaction, with a date and amount. One spreadsheet might have 30 rows, the other 50, and so on. I want to create another spreadsheet which links to the various source spreadsheets and lists an aggregate of all transactions from all sources. So if 3 source sheets with 30, 50 and 20 rows respectively, the destination sheet would have 100 rows. The number of rows (transactions) in the source sheets can grow or shrink over time. I'd like the destination sheet to show one contiguous list of transactions without gaps (spaces). How can I do this?

    Read the article

  • Trying to script rsync using pam_exec

    - by Ricky-Rose
    I'm trying to write a bash script that will execute rsync when called by pam_exec. I've tried a couple of different ways, and I'm not sure what I'm doing wrong. When I try to run the script at login by adding session optional pam_exec.so /usr/bin/local/sync.sh to my sshd file, it gives me an exit code of 12. If I log in and then manually run my script, it allows me to connect to the remote server, and it lists my files, but it doesn't actually sync anything. I have tried the code below using both $USER and $PAM_USER; $PAM_USER doesn't work at all. #!/bin/sh rsync -azv -e ssh $USER@remote_server:/home/html/$USER/ /home/html/$USER
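
    A sketch of the script adjusted for the pam_exec environment (the key path and the BatchMode choice are assumptions): pam_exec runs the hook as root with an almost empty environment, so $USER is typically unset there and PAM_USER is exported instead, and ssh needs an explicit identity because no agent is available at that point.

        #!/bin/sh
        U="${PAM_USER:-$USER}"     # PAM_USER when run from pam_exec, $USER when run by hand
        rsync -az -e "ssh -i /home/$U/.ssh/id_rsa -o BatchMode=yes" \
            "$U@remote_server:/home/html/$U/" "/home/html/$U/"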

    Read the article

  • How can I get vim to set an ACL on its swap files?

    - by thsutton
    I use vim on an OS X Snow Leopard Server machine. A number of the directories I work in have ACLs (so that various groups of users can access them over AFP) that are inherited. For some reason, when I'm working in one of these directories, vim cannot read its own swap files. It can create them fine but can't read them, which, for some reason, makes it display the "swap file already exists" message (and no, the swap file does not already exist). vim -r lists the newly created swap file as "[cannot be read]". The owner and group are correct and the permissions are 0600, and the ACLs on the swap file and the file I'm editing are identical (as disclosed by ls -le and compared with diff). groups returns the same thing whether invoked from my login shell or via :! in vim. Has anyone encountered (and hopefully resolved) a problem like this before?
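
    Not an answer to the ACL question itself, but a workaround sketch that sidesteps the inherited ACLs entirely by keeping swap files in a private directory (the path is arbitrary; the trailing // makes vim encode the full file path in the swap file name so files from different directories don't collide):

        mkdir -p ~/.vim/swap
        echo 'set directory=~/.vim/swap//' >> ~/.vimrc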

    Read the article

  • querying sge using job_name

    - by user39212
    Hello, the qstat manual says: -j [job_list] Prints either for all pending jobs or the jobs contained in job_list various information. The job_list can contain job_ids, job_names, or wildcard expression sge_types(1). So qstat -j job_name should work. But whenever I try it I get an error: qstat -j test_20100203_01 ERROR: job-ID should be an integer Any idea whether this should work? I can also use qstat | grep test_20100203_01, but the problem is that qstat lists the shortened job name (something like test_2010020 and not test_20100203_01). Please advise.
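
    A small workaround sketch, assuming this SGE build truncates names only in the default summary listing: the -r output carries a "Full jobname:" line per job, so grep that instead (the job name below is the one from the question).

        # The job ID is printed a line or two above its "Full jobname:" entry
        qstat -r | grep -B 2 "Full jobname:.*test_20100203_01"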

    Read the article
