Search Results

Search found 46104 results on 1845 pages for 'run dialog'.


  • How to get more detailed reason for Thunderbird crashing?

    - by Nick
    I have a computer (10.04) that was previously running Thunderbird 12.0.1 just fine (installed via apt-get from the official PPA). I don't know what happened at the time the problem started, since this is a multi-user computer and I was not here. However, every time we try to launch TB, we immediately get a dialog that says: "We're Sorry. Thunderbird had a problem and crashed."

    Things I've tried:
    - Running thunderbird from a terminal produces no output.
    - apt-get remove thunderbird --purge, followed by a reinstall.
    - Deleting the user's .thunderbird folder; launching still results in a crash.
    - Running thunderbird -safe-mode; this still results in a crash.

    This problem occurs for all users of Thunderbird on this computer. Is there any way to get more details on why the program is crashing? For example, the specific error that TB is encountering? I tried thunderbird -g but I'm not sure what to do with the debugger.
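
    Since the question already mentions thunderbird -g, here is a minimal sketch of what to do at the debugger prompt; it assumes gdb is installed and only aims to capture a stack trace of the crash:

        # thunderbird -g launches Thunderbird under gdb
        thunderbird -g
        # at the (gdb) prompt:
        run              # reproduce the crash
        backtrace full   # print the full stack trace once it has crashed

    The backtrace usually points at the failing component (an extension, a corrupt profile database, a mismatched library) far more precisely than the crash dialog does.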

    Read the article

  • Ubuntu server has slow performance

    - by Rich
    I have a custom-built Ubuntu 11.04 server with a 6-disk software RAID 10 primary drive. On it I'm primarily running PostgreSQL and a few other utilities that stream data from the web. I often find that after a few hours of uptime the server starts to lag in all kinds of processes. For example, it may take 10-15 seconds after log-in to get a shell prompt. It might take 5-10 seconds for top to come up. An ls might take a second or two. When I look at top there is almost no CPU usage. There's a fair amount of memory used by the PostgreSQL server, but not enough to bleed into swap. I have no idea where to go from here, other than to suspect the RAID 10 (I've only ever had software RAID 1s before).

    Edit: Output from top:

        top - 11:56:03 up 1:46, 3 users, load average: 0.89, 0.73, 0.72
        Tasks: 119 total, 1 running, 118 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.2%us, 0.0%sy, 0.0%ni, 93.5%id, 6.2%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 16325596k total, 3478248k used, 12847348k free, 20880k buffers
        Swap: 19534176k total, 0k used, 19534176k free, 3041992k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         1747 woodsp 20   0  109m  10m 4888 S    1  0.1  0:42.70 python
          357 root   20   0     0    0    0 S    0  0.0  0:00.40 jbd2/sda3-8
            1 root   20   0 24324 2284 1344 S    0  0.0  0:00.84 init
            2 root   20   0     0    0    0 S    0  0.0  0:00.00 kthreadd
            3 root   20   0     0    0    0 S    0  0.0  0:00.24 ksoftirqd/0
            6 root   RT   0     0    0    0 S    0  0.0  0:00.00 migration/0
            7 root   RT   0     0    0    0 S    0  0.0  0:00.01 watchdog/0
            8 root   RT   0     0    0    0 S    0  0.0  0:00.00 migration/1
           10 root   20   0     0    0    0 S    0  0.0  0:00.02 ksoftirqd/1
           12 root   RT   0     0    0    0 S    0  0.0  0:00.01 watchdog/1
           13 root   RT   0     0    0    0 S    0  0.0  0:00.00 migration/2
           14 root   20   0     0    0    0 S    0  0.0  0:00.00 kworker/2:0
           15 root   20   0     0    0    0 S    0  0.0  0:00.00 ksoftirqd/2
           16 root   RT   0     0    0    0 S    0  0.0  0:00.01 watchdog/2
           17 root   RT   0     0    0    0 S    0  0.0  0:00.00 migration/3
           18 root   20   0     0    0    0 S    0  0.0  0:00.00 kworker/3:0
           19 root   20   0     0    0    0 S    0  0.0  0:00.02 ksoftirqd/3
           20 root   RT   0     0    0    0 S    0  0.0  0:00.01 watchdog/3
           21 root    0 -20     0    0    0 S    0  0.0  0:00.00 cpuset
           22 root    0 -20     0    0    0 S    0  0.0  0:00.00 khelper
           23 root   20   0     0    0    0 S    0  0.0  0:00.00 kdevtmpfs
           24 root    0 -20     0    0    0 S    0  0.0  0:00.00 netns
           26 root   20   0     0    0    0 S    0  0.0  0:00.00 sync_supers

    df -h:

        rpsharp@ncp-skookum:~$ df -h
        Filesystem  Size  Used Avail Use% Mounted on
        /dev/sda3   1.8T  549G  1.2T  32% /
        udev        7.8G  4.0K  7.8G   1% /dev
        tmpfs       3.2G  492K  3.2G   1% /run
        none        5.0M     0  5.0M   0% /run/lock
        none        7.8G     0  7.8G   0% /run/shm
        /dev/sda2   952M  128K  952M   1% /boot/efi
        /dev/md0    5.5T  562G  4.7T  11% /usr/local

    free -m:

        psharp@ncp-skookum:~$ free -m
                     total   used    free  shared  buffers  cached
        Mem:         15942   3409   12533       0       20    2983
        -/+ buffers/cache:    405   15537
        Swap:        19076      0   19076

    tail -50 /var/log/syslog:

        Jul 3 06:31:32 ncp-skookum rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="1070" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
        Jul 3 06:39:01 ncp-skookum CRON[14211]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        Jul 3 06:40:01 ncp-skookum CRON[14223]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 07:00:01 ncp-skookum CRON[14328]: (woodsp) CMD (/home/woodsp/bin/mail_tweetupdate # email an update)
        Jul 3 07:00:01 ncp-skookum CRON[14327]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 07:00:28 ncp-skookum sendmail[14356]: q63E0SoZ014356: from=woodsp, size=2328, class=0, nrcpts=2, msgid=<[email protected]>, relay=woodsp@localhost
        Jul 3 07:00:29 ncp-skookum sm-mta[14357]: q63E0Si6014357: from=<[email protected]>, size=2569, class=0, nrcpts=2, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Jul 3 07:00:29 ncp-skookum sendmail[14356]: q63E0SoZ014356: to=Spencer Wood <[email protected]>,Martin Lacayo <[email protected]>, ctladdr=woodsp (1004/1005), delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=62328, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (q63E0Si6014357 Message accepted for delivery)
        Jul 3 07:00:29 ncp-skookum sm-mta[14359]: STARTTLS=client, relay=mx3.stanford.edu., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-SHA, bits=256/256
        Jul 3 07:00:29 ncp-skookum sm-mta[14359]: q63E0Si6014357: to=<[email protected]>,<[email protected]>, ctladdr=<[email protected]> (1004/1005), delay=00:00:01, xdelay=00:00:00, mailer=esmtp, pri=152569, relay=mx3.stanford.edu. [171.67.219.73], dsn=2.0.0, stat=Sent (Ok: queued as 8F3505802AC)
        Jul 3 07:09:08 ncp-skookum CRON[14396]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        Jul 3 07:17:01 ncp-skookum CRON[14438]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Jul 3 07:20:01 ncp-skookum CRON[14453]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 07:39:01 ncp-skookum CRON[14551]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        Jul 3 07:40:01 ncp-skookum CRON[14562]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 08:00:01 ncp-skookum CRON[14668]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 08:09:01 ncp-skookum CRON[14724]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        Jul 3 08:17:01 ncp-skookum CRON[14766]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
        Jul 3 08:20:01 ncp-skookum CRON[14781]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)
        Jul 3 08:39:01 ncp-skookum CRON[14881]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir fuser -s {} 2>/dev/null \; -delete)
        Jul 3 08:40:01 ncp-skookum CRON[14892]: (smmsp) CMD (test -x /etc/init.d/sendmail && /usr/share/sendmail/sendmail cron-msp)

    Output of hdparm -t /dev/sd{a,b,c,d,e,f} (this looks suspicious?):

        /dev/sda: Timing buffered disk reads:   2 MB in 4.84 seconds = 423.39 kB/sec
        /dev/sdb: Timing buffered disk reads: 420 MB in 3.01 seconds = 139.74 MB/sec
        /dev/sdc: Timing buffered disk reads: 390 MB in 3.00 seconds = 129.87 MB/sec
        /dev/sdd: Timing buffered disk reads: 416 MB in 3.00 seconds = 138.51 MB/sec
        /dev/sde: Timing buffered disk reads: 422 MB in 3.00 seconds = 140.50 MB/sec
        /dev/sdf: Timing buffered disk reads: 416 MB in 3.01 seconds = 138.26 MB/sec
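
    The hdparm numbers above do single out /dev/sda: roughly 423 kB/sec against ~130-140 MB/sec for every other member of the array, which would also fit the 6.2%wa shown in top. As a hedged next step (not part of the original post), smartmontools can confirm whether that disk is failing; the commands are standard, but treat the failing-disk theory itself as an assumption:

        # repeat the read test on the slow disk and one healthy peer
        sudo hdparm -t /dev/sda /dev/sdb
        # overall SMART health verdict (requires the smartmontools package)
        sudo smartctl -H /dev/sda
        # look for reallocated/pending/uncorrectable sector counts
        sudo smartctl -A /dev/sda | grep -i -E 'realloc|pending|uncorrect'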

    Read the article

  • Easily Customize Internet Explorer 9 Using IE9 Tweaker Plus

    - by Lori Kaufman
    If you use Internet Explorer 9, we found a useful program, called IE9 Tweaker Plus, that allows you to easily tweak and customize over 27 settings in the browser, as well as create customized IE9 shortcuts that automatically open IE in InPrivate mode. IE9 Tweaker Plus does not need to be installed. To run it, simply extract the .zip file you downloaded (see the link at the end of this article) and double-click on the .exe file. If the User Account Control dialog box displays, click Yes to continue.

    Read the article

  • Speaker at the German Visual FoxPro Developer Conference 2004

    The following is an excerpt from the UniversalThread conference coverage of the German Visual FoxPro Developer Conference 2004, written by Hans-Otto Lochmann, Armin Neudert and myself.

    TRACK: Active FoxPro Pages

    Back in 1996 Peter Herzog invented a FoxPro-based solution to provide intranet capabilities for one of his customers. At nearly the same time Rick Strahl had the same task and created West Wind Web Connection (WWWC). The fact that developers have to have a full Visual FoxPro development environment to create WWWC solutions was the starting point of a "personal sportive competition" for Peter to write his own solution. The main requirement was that it must not rely on a full VFP version in order to run: the VFP runtime should be enough, and the source code has to be compiled and interpreted on the fly. So, as Microsoft released Active Server Pages, a name for Peter's solution was found: Active FoxPro Pages (AFP). Over the years many drawbacks, design decisions and technological hassles forced ProLib Software to refactor the product. This way many limits, like DCOM configuration, file-based information transfer between Web server and AFP, missing features (like upload forms or support for Web servers other than IIS) and extensibility, were eliminated. As a consequence, ProLib Software decided to completely rewrite Active FoxPro Pages in mid-2002. Christof Wollenhaupt, before his marriage known as Christof Lange, and Jochen Kirstätter had to solve this task. AFP 3.0 was officially released at the German DevCon in November 2002. Today AFP has six distributors worldwide and there is a lot more information available online than before version 3.0.

    Directly after a short welcome speech by Rainer Becker, Jochen Kirstätter - aka JoKi - opened today's AFP track, introduced the basic concepts of how Active FoxPro Pages works in general, explained the AFP terminology and every single component, and presented a small walk-through on how to write an AFP-based Web solution. His presentation slides were themselves an AFP Web application, which made it easy to integrate the accompanying AFP samples on the fly. He also showed that no Visual FoxPro development environment is needed to create a Web application: a simple text editor like Notepad, or any WYSIWYG editor on the market, is enough to fulfill a customer's requirements.

    Welcome to two new speakers - Nina Schwanzer and Bernhard Reiter. Both work at ProLib Software and this year's conference was their first time as speakers, and they did their job very well. The whole session was a kind of "ping pong" game in which the two complemented each other to keep the audience in tension. First, they described typical requirements a modern desktop application should fulfill - online registration and activation, auto-update capabilities, or even a frontend to administer a Web application on a remote system via the internet - and explained how possible solutions like Web Services (using the SOAP interface), DCOM, and even .NET might address those requirements. But each of those approaches has drawbacks, like complicated installation or configuration, or extraordinary download sizes. Next, they introduced a technology they developed and used in a customer's project: Active FoxPro Pages Remote Procedure Call (AFP RPC). [...]

    In the next session JoKi described how to extend Active FoxPro Pages. On the one hand AFP provides a plugin interface, and on the other hand any add-on for Visual FoxPro may be usable as well. During the first half he spoke about the plugin interface and wrote a new AFP extension live - the DevCon plugin. Later he questioned each earlier step and showed that a single AFP document may solve the problem as well; so developing extensions is only interesting if they are reusable and generic. At the end he talked about multiple interfaces for the same business logic, for instance a plain VFP class, a COM server, and .NET integration. Currently there are several specialized AFP extensions for sending mail, for using cryptographic routines (e.g. based on .NET classes), and enhanced methods for handling HTML/XML strings.

    Rainer Becker and Peter Herzog introduced a new development for Visual Extend (VFX) - an AFP form builder. With this builder, creating an AFP Web form designed with Visual FoxPro's form designer was a matter of seconds. The builder itself is currently in pre-release status and will be part of the VFX framework in the future. It was very impressive to see that the whole design of a form, as well as most parts of its functionality, was exported to a combination of HTML, JavaScript and Active FoxPro Pages. At half-time Jürgen "wOOdy" Wondzinski and JoKi changed places with Rainer and Peter, and presented some Web solutions built with AFP. [...]

    TRACK: Visual FoxPro 9.0 and Linux

    Is Linux still a topic for Visual FoxPro developers, judging by the activities of this year? In his session Jochen Kirstätter - aka JoKi - did not go through the technical steps and requirements of setting up and running FoxPro on a Linux client. Instead, he explained what Linux actually is and talked about the wide variety of distributions. In fact there are a lot of distributions around, but for several years some specialized ones have been available: live distributions (aka LiveCDs). The intention of LiveCDs is to run a full-featured Linux operating system on any personal computer directly from a bootable medium, like CD, DVD, or even a USB memory stick, without installation on a hard disk. One of the first Linux LiveCDs was made by Klaus Knopper and is well known as Knoppix. Today, many other LiveCDs are based on the concepts of Knoppix. During the session Jochen booted Morphix, a very lightweight LiveCD, on his notebook, and showed the attendees that testing and playing around with Linux is absolutely easy. Running a text-processing application swept away most of the reservations the audience had.

    Okay, where is the part about FoxPro? Well, there are several scenarios in which a customer might require the use of Linux, and FoxPro can actually deal with all of them. I guess one of the more common ones is a heterogeneous intranet with Windows clients and Linux servers, i.e. Windows XP Professional on the desktops and some Linux distribution on the servers. Even in this scenario there are two variants hidden. Why? Well, on the one hand there is a software package called Samba, which provides Windows server capabilities on a Linux system, and on the other hand there are several SQL servers for Linux, like PostgreSQL, DB2 and MySQL. Either way, FoxPro is able to deal with these scenarios, but as a developer you have to know what you are talking about with your customers. And even if there's no Windows operating system at all, you are able to provide a FoxPro-based solution: using the wine library - wine stands for Wine Is Not an Emulator - you can run your VFP applications on Linux clients, too, though not without reading VFP's EULA first.

    Licenses were also part of the session, and Jochen discussed the meaning of Open Source and how it is misunderstood by many developers. Open Source does not mean that something is free of charge; it stands for access to the source code of an application or tool. VFP itself is one of the best examples with which to explain Open Source, due to the fact that for years VFP has shipped with the xSource.zip archive. [...]

    Read the article

  • HTTP Basic Auth Protected Services using Web Service Data Control

    - by vishal.s.jain(at)oracle.com
    With Oracle JDeveloper 11g (11.1.1.4.0) one can now create a Web Service Data Control for services that are protected with HTTP Basic Authentication. When you provide such a service to the Data Control Wizard, a dialog pops up prompting you to enter the authentication details. After you give the details, you can proceed with the creation of the Data Control. Once the Data Control is created, you can use the WSDC Tester to quickly test the service. In this case, since the service is protected, we first need to edit the connection to provide the username details: enter the authentication details against username and password. Once done, select DataControl.dcx and, using the context menu, select 'Run'. This will bring up the Tester. In the Tester, select the Service node and, using the context menu, pick 'Operations'. This will bring up the methods you can test. Now you can pick a method, provide the input parameters and hit Execute to see the results.

    Read the article

  • Using the Static Code Analysis feature of Visual Studio (Premium/Ultimate) to find memory leakage problems

    - by terje
    Memory for managed code is handled by the garbage collector, but if you use any kind of unmanaged code - native resources of any kind, open files, streams, window handles - your application may leak memory if these are not properly handled. To handle such resources, the classes that own them in your application should implement the IDisposable interface, and preferably implement it according to the pattern described for that interface.

    When you suspect a memory leak, the immediate impulse is to start up a memory profiler and start digging. However, before you follow that impulse, do a Static Code Analysis run with a ruleset tuned to finding possible memory leaks in your code. If you get any warnings from this, fix them before you go on with the profiling.

    How to use a ruleset

    In Visual Studio 2010 (Premium and Ultimate editions) you can define your own rulesets containing a list of Static Code Analysis checks. I have defined the memory checks shown in the lists below as ruleset files, which can be downloaded - see the bottom of this post. Once you have them, you can easily attach them to every project in your solution using the Solution Properties dialog. Right-click the solution and choose Properties at the bottom, or use the Analyze menu and choose "Configure Code Analysis for Solution". In this dialog you can now choose the Memorycheck ruleset for every project you want to investigate. Pressing Apply or OK opens every project file and changes the project's code analysis ruleset to the one specified here.

    How to define your own ruleset (skip this if you just download my predefined rulesets)

    If you want to define the ruleset yourself:
    1. Open the properties of any project, choose the Code Analysis tab near the bottom, choose any ruleset in the drop box and press Open.
    2. Clear out all the rules by selecting "Source Rule Sets" in the Group By box and unchecking the box.
    3. Change the Group By box to ID, and select the checks you want to include from the lists below. Note that you can change the action for each check to warning, error or none, none being the same as unchecking the check.
    4. Go to the properties window and set a new name and description for your ruleset.
    5. Save the ruleset (File/Save As) using the new name as its file name, and use it for your projects as detailed above.
    It can also be wise to add the ruleset to your solution as a solution item; that way it's there if you want to enable Code Analysis in some of your TFS builds.

    Running the code analysis

    In Visual Studio 2010 you can either run your code analysis project by project using the context menu in the Solution Explorer and choosing "Run Code Analysis", or you can define a new solution configuration, call it for example Debug (Code Analysis), and for each project in it check Enable Code Analysis on Build. In Visual Studio Dev-11 it is all much simpler: just go to the solution root in the Solution Explorer, right-click and choose "Run code analysis on solution".

    The ruleset checks

    The following are the essential and critical memory checks.

        CheckID    | Message                                                | Can be ignored?                                                                                   | Link to description with fix suggestions
        CA1001     | Types that own disposable fields should be disposable  | No                                                                                                | http://msdn.microsoft.com/en-us/library/ms182172.aspx
        CA1049     | Types that own native resources should be disposable   | Only if the pointers assumed to point to unmanaged resources point to something else              | http://msdn.microsoft.com/en-us/library/ms182173.aspx
        CA1063     | Implement IDisposable correctly                        | No                                                                                                | http://msdn.microsoft.com/en-us/library/ms244737.aspx
        CA2000     | Dispose objects before losing scope                    | No                                                                                                | http://msdn.microsoft.com/en-us/library/ms182289.aspx
        CA2115 (1) | Call GC.KeepAlive when using native resources          | See description                                                                                   | http://msdn.microsoft.com/en-us/library/ms182300.aspx
        CA2213     | Disposable fields should be disposed                   | If you are not responsible for release, or if Dispose occurs at a deeper level                    | http://msdn.microsoft.com/en-us/library/ms182328.aspx
        CA2215     | Dispose methods should call base class dispose         | Only if the call to base happens at a deeper calling level                                        | http://msdn.microsoft.com/en-us/library/ms182330.aspx
        CA2216     | Disposable types should declare a finalizer            | Only if the type does not implement IDisposable for the purpose of releasing unmanaged resources  | http://msdn.microsoft.com/en-us/library/ms182329.aspx
        CA2220     | Finalizers should call base class finalizers           | No                                                                                                | http://msdn.microsoft.com/en-us/library/ms182341.aspx

    Note (1): does not result in a memory leak, but may cause the application to crash.

    The list below is a set of optional checks that may be enabled for your ruleset, because the issues they point to often happen as a result of attempting to fix the warnings from the first set.

        ID     | Message                                         | Type of fault                                           | Can be ignored?                                              | Link to description with fix suggestions
        CA1060 | Move P/Invokes to NativeMethods class           | Security                                                | No                                                           | http://msdn.microsoft.com/en-us/library/ms182161.aspx
        CA1816 | Call GC.SuppressFinalize correctly              | Performance                                             | Sometimes, see description                                   | http://msdn.microsoft.com/en-us/library/ms182269.aspx
        CA1821 | Remove empty finalizers                         | Performance                                             | No                                                           | http://msdn.microsoft.com/en-us/library/bb264476.aspx
        CA2004 | Remove calls to GC.KeepAlive                    | Performance and maintainability                         | Only if not technically correct to convert to SafeHandle     | http://msdn.microsoft.com/en-us/library/ms182293.aspx
        CA2006 | Use SafeHandle to encapsulate native resources  | Security                                                | No                                                           | http://msdn.microsoft.com/en-us/library/ms182294.aspx
        CA2202 | Do not dispose of objects multiple times        | Exception (System.ObjectDisposedException)              | No                                                           | http://msdn.microsoft.com/en-us/library/ms182334.aspx
        CA2205 | Use managed equivalents of Win32 API            | Maintainability and complexity                          | Only if the replacement doesn't provide needed functionality | http://msdn.microsoft.com/en-us/library/ms182365.aspx
        CA2221 | Finalizers should be protected                  | Incorrect implementation, only possible in MSIL coding  | No                                                           | http://msdn.microsoft.com/en-us/library/ms182340.aspx

    Downloadable ruleset definitions

    I have defined three rulesets: Inmeta.Memorycheck, with the rules in the first list above; Inmeta.Memorycheck.Optionals, containing the rules in the second list; and Inmeta.Memorycheck.All, containing the sum of the two. All three rulesets can be found in the zip archive "Inmeta.Memorycheck", downloadable from here.

    Links to some other resources relevant to Static Code Analysis:
    - MSDN Magazine article by Mickey Gousset on Static Code Analysis in VS2010
    - MSDN: Analyzing Managed Code Quality by Using Code Analysis, the root of the documentation for this
    - Preventing generated code from being analyzed using attributes
    - Online training course on using Code Analysis with VS2010
    - Blog post by Tatham Oddie on custom code analysis rules
    - How to write custom rules, from the Microsoft Code Analysis Team Blog
    - Microsoft Code Analysis Team Blog

    Read the article

  • OpenCV install problems on Studio 12.04 - broken dependencies

    - by Will
    I'm trying to follow the Ubuntu OpenCV documentation at OpenCV. The provided script has a line which executed for some time, taking away more packages than I expected (such as ubuntu-studio video):

        sudo apt-get -qq remove ffmpeg x264 libx264-dev

    When the script gets to the line below, it bombs:

        sudo apt-get -qq install libopencv-dev build-essential checkinstall cmake pkg-config yasm libtiff4-dev libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils ffmpeg

    The error message is:

        E: Unable to correct problems, you have held broken packages.

    I've since run Update Manager, run sudo apt-get update, rebooted, and tried the above script line manually; still no change. I've just run sudo apt-get install -f and nothing seemed to change. It did mention that some packages were no longer needed and could be removed by apt-get autoremove, so I ran that. It removed a number of packages, so I reran the install command above. Still the same problem of held broken packages.

    I just ran sudo apt-get -u dist-upgrade. Part of the response was:

        The following packages have been kept back:
          gstreamer0.10-ffmpeg

    I'm not sure what that means. I do know that it shows up in my Update Manager and cannot be checked. I then ran sudo dpkg --configure -a and reran sudo apt-get -f install, and the package was still not upgraded, though there was this very interesting comment:

        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         gstreamer0.10-ffmpeg : Depends: libavcodec53 (< 5:0) but it is not going to be installed or
                                         libavcodec-extra-53 (< 5:0) but 5:0.7.2-1ubuntu1+codecs1~oneiric2 is to be installed
        E: Unable to correct problems, you have held broken packages.

    Then I ran sudo apt-get -u dist-upgrade again. It showed I had one held package, so I ran:

        sudo apt-get -o Debug::pkgProblemResolver=yes dist-upgrade

    It also exited without upgrading the package, so I ran:

        sudo apt-get remove --dry-run gstreamer0.10-ffmpeg:i386

    And it gave me:

        The following packages will be REMOVED:
          arista gstreamer0.10-ffmpeg
        0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.
        Remv arista [0.9.7-3ubuntu1]
        Remv gstreamer0.10-ffmpeg [0.10.12-1ubuntu1]

    But when I reran sudo apt-get -u dist-upgrade, it showed the package was still there:

        The following packages have been kept back:
          gstreamer0.10-ffmpeg
        0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

    Update: I just went into Synaptic Package Manager and completely removed gstreamer0.10-ffmpeg, then reran sudo apt-get -u dist-upgrade and was told:

        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    However, when I ran the original apt-get to install OpenCV (the first command at the top of this question), it still gave me the same broken package errors. So I tried:

        $ cat /etc/apt/sources.list
        #
        # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe
        # deb cdrom:[Ubuntu-Studio 11.10 _Oneiric Ocelot_ - Release i386 (20111011.1)]/ oneiric main multiverse restricted universe
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ precise main restricted
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://us.archive.ubuntu.com/ubuntu/ precise universe
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise universe
        deb http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://us.archive.ubuntu.com/ubuntu/ precise multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise multiverse
        deb http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse
        deb-src http://us.archive.ubuntu.com/ubuntu/ precise-updates multiverse
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://security.ubuntu.com/ubuntu precise-security main restricted
        deb-src http://security.ubuntu.com/ubuntu precise-security main restricted
        deb http://security.ubuntu.com/ubuntu precise-security universe
        deb-src http://security.ubuntu.com/ubuntu precise-security universe
        deb http://security.ubuntu.com/ubuntu precise-security multiverse
        deb-src http://security.ubuntu.com/ubuntu precise-security multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        deb http://archive.canonical.com/ubuntu precise partner
        # deb-src http://archive.canonical.com/ubuntu oneiric partner
        ## Uncomment the following two lines to add software from Ubuntu's
        ## 'extras' repository.
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        # deb http://extras.ubuntu.com/ubuntu oneiric main
        # deb-src http://extras.ubuntu.com/ubuntu oneiric main
        # deb http://download.opensuse.org/repositories/home:/popinet/xUbuntu_11.04 ./ # disabled on upgrade to precise

    and then:

        $ cat /etc/apt/sources.list.d/*

    But I don't have enough reputation to post the results here (it says I need at least 10 reputation points to post more than 2 links), so I don't know how to provide the requested feedback. Then I tried:

        $ sudo apt-get check
        [sudo] password for <abcd>:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done

    However, no resolution of the problem yet. What else do I need to do? Will an upgrade to Ubuntu Studio 13.xx solve this problem (or compound it)?
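
    A hedged sketch of one way to surface exactly which candidate versions are clashing, using only standard apt tooling; the package names are taken from the dependency error above:

        # show which versions and repositories apt is considering for the conflict
        apt-cache policy gstreamer0.10-ffmpeg libavcodec53 libavcodec-extra-53
        # simulate the OpenCV install (-s makes no changes) to print the full failing chain
        sudo apt-get install -s libopencv-dev 2>&1 | less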

    Read the article

  • Packages are not available for installation

    - by Alex Farber
    Changing some Software Update settings, I possibly corrupted something, and now I don't see many packages in the list. For example:

        alex@u120464:~$ sudo apt-get install codeblocks
        [sudo] password for alex:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package codeblocks

    I checked all options in the Software Sources dialog, but packages are still not available. How can I fix this? OS: Ubuntu 12.04, 64 bit.

    Additional information:

        alex@u120464:~$ sudo apt-get update
        [sudo] password for alex:
        Ign http://extras.ubuntu.com precise InRelease
        Ign http://security.ubuntu.com precise-security InRelease
        Ign http://archive.canonical.com precise InRelease
        Ign http://archive.ubuntu.com precise InRelease
        Ign http://archive.ubuntu.com precise-updates InRelease
        ...

    It looks like most Ubuntu repositories are not searched; how can I restore the default update behaviour?
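
    A minimal sketch of how to check what apt is actually configured to fetch, using only standard apt files and tools (nothing here is specific to this particular machine):

        # list every repository line apt currently knows about
        grep -rh '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/
        # refresh the package lists and watch for errors
        sudo apt-get update
        # ask apt which repository, if any, provides the package
        apt-cache policy codeblocks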

    Read the article

  • Change the Default Font Size in Word

    - by Matthew Guay
    Are you frustrated by always having to change the font size before you create a document in Word? Here's how you can end that frustration and set your favorite default font size once and for all!

    Microsoft changed the default font to 11 point Calibri in Word 2007 after years of 12 point Times New Roman being the default. Although it can be easily overlooked, there are ways in Word to change the default settings to anything you want. Whether you want to change your default to 12 point Calibri or to 48 point Comic Sans, here's how to change your default font settings in Word 2007 and 2010.

    Changing Default Fonts in Word

    To change the default font settings, click the small box with an arrow in the bottom right corner of the Font section of the Home tab in the Ribbon. In the Font dialog box, choose the default font settings you want. Notice that the Font box says "+Body"; this means that the font will be chosen by the document style you choose, and you are only selecting the default font style and size. So, if your style uses Calibri, then your font will be Calibri at the size and style you chose. If you'd prefer to choose a specific font to be the default, just select one from the drop-down box and this selection will override the font selection in your document style.

    Here we left all the default settings, except we selected 12 point font in the Latin text box (this is your standard body text; users of Asian languages such as Chinese may see a box for Asian languages). When you've made your selections, click the "Set as Default" button in the bottom left corner of the dialog. You will be asked to confirm that you want these settings to be made default. In Word 2010, you will be given the option to set these settings for this document only or for all documents. Click the bullet beside "All documents based on the Normal.dotm template?", and then click OK. In Word 2007, simply click OK to save these settings as default.

    Now, whenever you open Word or create a new document, your default font settings should be set exactly to what you want. Simply repeat these steps if you want to change your default font settings again.

    Editing your default template file

    Another way to change your default font settings is to edit your Normal.dotm file. This file is what Word uses to create new documents; it basically copies the formatting in this document each time you make a new document. To edit your Normal.dotm file, enter the following in the address bar in Explorer or in the Run prompt:

        %appdata%\Microsoft\Templates

    This will open your Office Templates folder. Right-click on the Normal.dotm file, and click Open to edit it. Note: do not double-click on the file, as this will only create a new document based on Normal.dotm, and any edits you make will not be saved in this file. Now, change any font settings as you normally would. Remember: anything you change or enter in this document will appear in any new document you create using Word. If you want to revert to your default settings, simply delete your Normal.dotm file; Word will recreate it with the standard default settings the next time you open Word.

    Please note: changing your default font size will not change the font size in existing documents, so these will still show the settings you used when they were created. Also, some add-ins can affect your Normal.dotm template. If Word does not seem to remember your font settings, try disabling Word add-ins to see if this helps.

    Conclusion

    Sometimes it's the small things that can be the most frustrating. Getting your default font settings the way you want is a great way to remove a frustration and make yourself more productive. And here's a quick question: do you prefer the new default 11 point Calibri, or do you prefer 12 point Times New Roman or some other combination? Sound off in the comments, and let the world know your favorite font settings.

    Read the article

  • Plan your SharePoint 2010 Content Type Hub carefully

    - by Wayne
    I'm currently setting up a new environment on SharePoint 2010 (which was made available for download yesterday, if anyone missed that :-). One of the new features of SharePoint 2010 is the Content Type Hub (part of the Managed Metadata Service Application), which is a hub for all Content Types that other Site Collections can subscribe to. That is, you only need to manage your content types in one location. Setting up the Content Type Hub is not that difficult, but you must do it very carefully to avoid a lot of extra work and troubleshooting. Here is a short tutorial with a few tips and tricks to get you started.

    Determine the location of the Content Type Hub

    First of all you need to decide in which Site Collection to place your Content Type Hub: the root site collection or a specific one. I think using a specific Site Collection that acts only as a Content Type Hub is the best way; there is no established best practice as of now. So I create a new Site Collection, at for instance http://server/sites/CTH/. The top-level site of this site collection should be, for instance, a Team Site. You cannot use Blank Site by default, which would have been the best option IMHO, since that site does not have the Taxonomy feature stapled onto it (check the TaxonomyFeatureStapler feature for which site templates can be used).

    Configure the Managed Metadata Service Application

    Next you need to create your Managed Metadata Service Application, or configure the existing one, under Central Administration > Application Management > Manage Service Applications. Select the Managed Metadata service application and click Properties if you have already created it. At the bottom of the dialog window, when you are creating the service application or editing its properties, is a section for the Content Type Hub. In this text box fill in the URL of the Content Type Hub. It is essential that you have decided where your Content Type Hub will reside, since once this is set you cannot change it; the only way to change it is to rebuild the whole Managed Metadata Service Application! Also make sure that you enter the URL correctly. I once copied and pasted the URL and got /default.aspx in it, which funked the whole service up. Make sure that you only use the URL of the Site Collection of the hub.

    Now you have to set things up so that other Site Collections can consume the content types from the hub. This is done by selecting the connection for the Managed Metadata Service Application and clicking Properties. A new dialog window opens, and there you need to check "Consumes content types from the Content Type Gallery at nnnn". Now you are free to syndicate your Content Types from the hub.

    Publish Content Types

    To publish a Content Type from the hub, go to Site Settings > Content Types, select the content type you would like to publish, and then select "Manage publishing for this content type". This takes you to a page from which you can Publish, Unpublish or Republish the content type. Once the content type is published, it can take up to an hour for the subscribing Site Collections to get it. This is controlled by the Content Type Subscriber job, which is scheduled to run once an hour. To speed up your publishing, go to Central Administration > Monitoring > Review Job Definitions > Content Type Subscriber and click Run Now, and your content type will very soon be available for use.

    Published Content Type status

    You can check the status of the content type publishing in your destination site collections by selecting Site Settings > Content Type Publishing. From here you can force a refresh of all subscribed content types, see which ones are subscribed, and check the publishing error log. This error log is very useful for detecting errors during publishing. For instance, if you use features such as ratings, metadata, or document IDs in your content type hub and your destination site collection does not have those features available, it will be reported here.

    Read the article

  • Internet Explorer 9: Cannot open file on download, CTRL + J doesn’t work, can’t open list of downloads

    - by simonsabin
    If any of the above symptoms are causing a problem, i.e.:
    1. You download a file and the download dialog disappears.
    2. You select Open when you download a file and nothing happens.
    3. The View Downloads dialog doesn't work.
    4. CTRL + J (view downloads) doesn't work.
    The solution is to clear your download history. For details, see "IE9 - View downloads / Ctrl+J do not open. I cannot open any file. But SAVE function still work fine. 64 bit version."; the answer is provided by Steven S. on June 20th. I hope that...(read more)

    Read the article

  • Screen magnifier wrecks login screen — how to get rid of it?

    - by Jonik
    In the Ubuntu / GNOME login screen, I tried some of the different accessibility settings out of curiosity. Big mistake! Enabling the "use screen magnifier" (or whatever) option breaks the view horribly and makes it impossible to even access the settings again. Yes, I tried to access every corner of the screen using the mouse, but there's just chaos everywhere. Fortunately, I can still log in (pressing Esc makes the normal login dialog appear on the left-hand monitor). My question is: how do I disable the magnifier option outside of the login screen itself? (By editing some config file, perhaps?) I don't care about getting the magnifier mode to work properly - just make it go away altogether, please.
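
    One hedged possibility, assuming a GNOME 2-era GDM login screen that stores its accessibility flags in gconf under the gdm user: flip the magnifier key off from a text console. The key path below is the standard GNOME assistive-technology key, but treat both it and the restart command as assumptions for your particular release:

        # switch to a text console (Ctrl+Alt+F1) and log in, then:
        sudo -u gdm gconftool-2 --type bool \
            --set /desktop/gnome/applications/at/screen_magnifier_enabled false
        # restart the login screen (upstart syntax)
        sudo restart gdm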

    Read the article

  • Synching folders other than the default Ubuntu One folder doesn't seem to work on Windows

    - by Gordon
    I have installed Ubuntu One on two machines - one Windows 7 and one Windows XP. I have added a folder using the "Add Folder" button on the Windows 7 machine. The Ubuntu One dialog box tells me that file sync is finished and up to date. I now have two problems:
    1. If I log on to my Ubuntu One account on the web, I can see what appears to be a LINK to the synched folder - I cannot open the folder and see the contents. This is what I see in the Files page of my Ubuntu One account: ~/Documents/Computing/Test
    2. On the second machine, after synching is complete, I see the folder but NOT the contents. The folder is EMPTY.
    Can anyone provide a solution as to a) what is happening and b) how do I get this folder to sync properly in the other locations?

    Read the article

  • SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component

    - by Brian
    Using Oracle's SPARC T3-1 server for the application tier and Oracle's SPARC Enterprise M3000 server for the database tier, a world record result was produced running Oracle's JD Edwards EnterpriseOne Day in the Life benchmark concurrently with a batch workload.

    - The SPARC T3-1 server based result has 25% better performance than the IBM Power 750 POWER7 server, even though the IBM result did not include a batch component.
    - The SPARC T3-1 server based result has 25% better space/performance than the IBM Power 750 POWER7 server, as measured by the online component.
    - The SPARC T3-1 server based result is 5x faster than the x86-based IBM x3650 M2 server when executing the online component of the JD Edwards EnterpriseOne 9.0.1 Day in the Life benchmark. The IBM result did not include a batch component.
    - The SPARC T3-1 server based result has 2.5x better space/performance than the x86-based IBM x3650 M2 server, as measured by the online component.
    - The combination of SPARC T3-1 and SPARC Enterprise M3000 servers delivered a Day in the Life benchmark result of 5000 online users with 0.875 seconds of average transaction response time, running concurrently with 19 Universal Batch Engine (UBE) processes at 10 UBEs/minute.

    The solution exercises various JD Edwards EnterpriseOne applications while running Oracle WebLogic Server 11g Release 1 and the Oracle Web Tier Utilities 11g HTTP server in Oracle Solaris Containers, together with Oracle Database 11g Release 2. The SPARC T3-1 server showed that it could handle the additional workload of batch processing while maintaining the same number of online users for the JD Edwards EnterpriseOne Day in the Life benchmark, with minimal loss in response time. JD Edwards EnterpriseOne 9.0.1 takes advantage of the large number of compute threads available in the SPARC T3-1 server at the application tier and achieves excellent response times. The SPARC T3-1 server consolidates the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server, leaving processing capacity for additional growth.

    A number of advanced Oracle technologies and features were used to obtain this result: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java HotSpot Server VM, Oracle WebLogic Server 11g Release 1, Oracle Web Tier Utilities 11g, Oracle Database 11g Release 2, and the SPARC T3 and SPARC64 VII+ based servers. This is the first published result running both the online and batch workload concurrently on the JD Edwards Enterprise Application server; no published results are available from IBM running the online component together with a batch workload. The 9.0.1 version of the benchmark saw some minor performance improvements relative to 9.0; when comparing 9.0.1 and 9.0 results, the reader should take this into account where the difference between results is small.

    Performance Landscape

    JD Edwards EnterpriseOne Day in the Life Benchmark, Online with Batch Workload

    This is the first publication of the Day in the Life benchmark run concurrently with batch jobs. The batch workload was provided by Oracle's Universal Batch Engine.

        System: SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10 + M3000, 1 x SPARC64 VII+ (2.86 GHz), Solaris 10
        Rack Units: 4 | Online Users: 5000 | Resp Time (sec): 0.88 | Batch Concur (# of UBEs): 19 | Batch Rate (UBEs/m): 10 | Version: 9.0.1

    Resp Time (sec) is the response time of online jobs, reported in seconds; Batch Concur (# of UBEs) is the batch concurrency, presented as the number of UBEs; Batch Rate (UBEs/m) is the batch transaction rate in UBEs/minute.

    JD Edwards EnterpriseOne Day in the Life Benchmark, Online Workload Only

    These results are for the Day in the Life benchmark run without any batch workload.

        System                                            Rack Units  Online Users  Response Time (sec)  Version
        SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10
        + M3000, 1 x SPARC64 VII (2.75 GHz), Solaris 10   4           5000          0.52                 9.0.1
        IBM Power 750, 1 x POWER7 (3.55 GHz), IBM i7.1    4           4000          0.61                 9.0
        IBM x3650M2, 2 x Intel X5570 (2.93 GHz), OVM      2           1000          0.29                 9.0

    IBM result from http://www-03.ibm.com/systems/i/advantages/oracle/; IBM used WebSphere.

    Configuration Summary

    Hardware configuration:
    - 1 x SPARC T3-1 server: 1 x 1.65 GHz SPARC T3, 128 GB memory, 16 x 300 GB 10000 RPM SAS, 1 x Sun Flash Accelerator F20 PCIe Card (92 GB), 1 x 10 GbE NIC
    - 1 x SPARC Enterprise M3000 server: 1 x 2.86 GHz SPARC64 VII+, 64 GB memory, 1 x 10 GbE NIC
    - 2 x StorageTek 2540 + 2501

    Software configuration:
    - JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3
    - Oracle Database 11g Release 2
    - Oracle WebLogic Server 11g Release 1, version 10.3.2
    - Oracle Web Tier Utilities 11g
    - Oracle Solaris 10 9/10
    - Mercury LoadRunner 9.10 with the Oracle Day in the Life kit for JD Edwards EnterpriseOne 9.0.1
    - Oracle's Universal Batch Engine, short UBEs and long UBEs

    Benchmark Description

    JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and a UBE workload of 15 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE workload runs from the JD Edwards Enterprise Application server.

    Oracle's UBE processes come in three flavors: short UBEs (under 1 minute) engage in business reports and summary analysis; mid UBEs (over 1 minute) create large reports of account, balance, and full address; long UBEs (over 2 minutes) simulate payroll, sales order, and night-only jobs. The UBE workload generates large numbers of PDF report files and log files. The UBE queues are categorized as QBATCHD, a single-threaded queue for large UBEs, and QPROCESS, a queue for short UBEs run concurrently. One of the Oracle Solaris Containers ran 4 long UBEs, while another container ran 15 short UBEs concurrently. The mixed-size UBEs ran concurrently on the SPARC T3-1 server with the 5000 online users driven by LoadRunner. Oracle's UBE performance metric is the maximum number of concurrent UBE processes at a given transaction rate, in UBEs/minute.

    Key Points and Best Practices

    Two JD Edwards EnterpriseOne Application Servers and two Oracle Fusion Middleware WebLogic Server 11g R1 instances, coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances, were hosted on the SPARC T3-1 server in four separate Oracle Solaris Containers to demonstrate consolidation of multiple application and web servers.

    See Also

    - SPARC T3-1 (oracle.com)
    - SPARC Enterprise M3000 (oracle.com)
    - Oracle Solaris (oracle.com)
    - JD Edwards EnterpriseOne (oracle.com)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com)

    Disclosure Statement

    Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 6/27/2011.

    Read the article

  • Azure Storage Explorer

    - by kaleidoscope
    Azure Storage Explorer – another way to work with your services on the cloud. Azure Storage Explorer is a useful GUI tool for inspecting and altering the data in your Azure cloud storage projects, including the logs of your cloud-hosted applications. All three types of cloud storage can be viewed: blobs, queues, and tables. You can also create or delete blob/queue/table containers and items. Text blobs can be edited, and all data types can be imported/exported between the cloud and local files. Table records can be imported/exported between the cloud and spreadsheet CSV files.

    Why Azure Storage Explorer? Azure Storage Explorer is a licensed CodePlex project provided by Neudesic, a Microsoft partner. It is a simple UI that requires you to input your blob storage name, access key and endpoints in the Storage Settings dialog.

    For more details please refer to the link: http://azurestorageexplorer.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=35189

    Anish, S

    Read the article

  • OCS 2007 R2 User Properties Error Message

    - by BWCA
    When I attempted to configure one of our users' Meeting settings using the Microsoft Office Communications Server 2007 R2 Administration Tool, I received a "Validation failed - Validation failed with HRESULT = 0XC3EC7E02" dialog box error message. I received the same error message when I tried to configure the user's Telephony and Other settings.

    Using ADSI Edit, I compared the settings of a user that I had no problems configuring with those of the user I had problems configuring. For the problem user, I noticed a trailing space after the last phone number digit in the msRTCSIP-Line attribute. After I removed the trailing space from the attribute and waited for Active Directory replication to complete, I was able to configure the user's Meeting settings (and Telephony/Other settings) without any problems.

    If you get this error message, check your users' msRTCSIP-xxxxx attributes in Active Directory using ADSI Edit for any trailing spaces, typos, or other mistakes.

    Read the article

  • NX/SSH remote access with Remmina

    - by Niklas
    After many days and a lot of frustration, I managed to get FreeNX to work on my home server. I can connect to it with NoMachine's Linux client, but I want to use Remmina for this purpose. The problem is that I don't know exactly how to connect to an NX server with the program. In the connection dialog I've chosen SSH as the protocol, and I've correctly added the IP and port. Under "SSH Authentication" I've added my user name on the server, chosen "identity file", and selected the SSH key I generated (which works with nxclient). (When am I supposed to provide my password for the user on the server?) When I try to connect I get the message:

        SSH public key authentication failed: Public key file doesn't exist

    Why do I get this message? How should I proceed to get the authentication working? Thank you for your time!
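
    A hedged first check, outside Remmina: confirm that the identity file path is right and that the key authenticates at all. These are standard OpenSSH options; ~/.ssh/nx_id_dsa, user and server are placeholders to substitute with your own values:

        # the key file must exist and be readable only by you
        ls -l ~/.ssh/nx_id_dsa && chmod 600 ~/.ssh/nx_id_dsa
        # verbose test login with the same identity file Remmina points at
        ssh -v -p 22 -i ~/.ssh/nx_id_dsa user@server

    If this login works but Remmina still reports that the key file doesn't exist, the path entered in the Remmina profile is the first thing to re-check.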

    Read the article

  • Try a sample: Using the counter predicate for event sampling

    - by extended_events
    Extended Events offers a rich filtering mechanism, called predicates, that allows you to reduce the number of events you collect by specifying criteria that will be applied during event collection. (You can find more information about predicates in Using SQL Server 2008 Extended Events (by Jonathan Kehayias)) By evaluating predicates early in the event firing sequence we can reduce the performance impact of collecting events by stopping event collection when the criteria are not met. You can specify predicates on both event fields and on a special object called a predicate source. Predicate sources are similar to action in that they typically are related to some type of global information available from the server. You will find that many of the actions available in Extended Events have equivalent predicate sources, but actions and predicates sources are not the same thing. Applying predicates, whether on a field or predicate source, is very similar to what you are used to in T-SQL in terms of how they work; you pick some field/source and compare it to a value, for example, session_id = 52. There is one predicate source that merits special attention though, not just for its special use, but for how the order of predicate evaluation impacts the behavior you see. I’m referring to the counter predicate source. The counter predicate source gives you a way to sample a subset of events that otherwise meet the criteria of the predicate; for example you could collect every other event, or only every tenth event. Simple CountingThe counter predicate source works by creating an in memory counter that increments every time the predicate statement is evaluated. Here is a simple example with my favorite event, sql_statement_completed, that only collects the second statement that is run. (OK, that’s not much of a sample, but this is for demonstration purposes. Here is the session definition: CREATE EVENT SESSION counter_test ON SERVERADD EVENT sqlserver.sql_statement_completed    (ACTION (sqlserver.sql_text)    WHERE package0.counter = 2)ADD TARGET package0.ring_bufferWITH (MAX_DISPATCH_LATENCY = 1 SECONDS) You can find general information about the session DDL syntax in BOL and from Pedro’s post Introduction to Extended Events. The important part here is the WHERE statement that defines that I only what the event where package0.count = 2; in other words, only the second instance of the event. Notice that I need to provide the package name along with the predicate source. You don’t need to provide the package name if you’re using event fields, only for predicate sources. Let’s say I run the following test queries: -- Run three statements to test the sessionSELECT 'This is the first statement'GOSELECT 'This is the second statement'GOSELECT 'This is the third statement';GO Once you return the event data from the ring buffer and parse the XML (see my earlier post on reading event data) you should see something like this: event_name sql_text sql_statement_completed SELECT ‘This is the second statement’ You can see that only the second statement from the test was actually collected. (Feel free to try this yourself. Check out what happens if you remove the WHERE statement from your session. Go ahead, I’ll wait.) Percentage Sampling OK, so that wasn’t particularly interesting, but you can probably see that this could be interesting, for example, lets say I need a 25% sample of the statements executed on my server for some type of QA analysis, that might be more interesting than just the second statement. 
Percentage Sampling

OK, so that wasn't particularly interesting, but you can probably see where this could be useful. For example, let's say I need a 25% sample of the statements executed on my server for some type of QA analysis; that might be more interesting than just the second statement. All comparisons in predicates are handled by an object called a predicate comparator; the simple comparisons, such as equals, greater than, etc., are mapped to the common mathematical symbols you know and love (e.g. = and >), but to do the less common comparisons you will need to use the predicate comparators directly. You would probably look to the MOD operation to do this type of sampling; we would too, but we don't call it MOD, we call it divides_by_uint64. This comparator evaluates whether one number is divisible by another with no remainder. The general syntax for using a predicate comparator is pred_comp(field, value); the field is always first and the value is always second. So let's take a look at how the session changes to answer our new question of 25% sampling:

CREATE EVENT SESSION counter_test_25 ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text)
    WHERE package0.divides_by_uint64(package0.counter, 4))
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
GO

Here I've replaced the simple equivalency check with the divides_by_uint64 comparator to check whether the counter is evenly divisible by 4, which gives us back every fourth record. I'll leave it as an exercise for the reader to test this session.

Why order matters

I indicated at the start of this post that order matters when it comes to the counter predicate - it does. Like most other predicate systems, Extended Events evaluates the predicate statement from left to right; as soon as the predicate statement is proven false we abandon evaluation of the remainder of the statement. The counter predicate source is only incremented when it is evaluated, so whether or not the counter is incremented depends on where it sits in the predicate statement and on whether a previous criterion has already made the predicate false. Here is a generic example:

Pred1: (WHERE statement_1 AND package0.counter = 2)
Pred2: (WHERE package0.counter = 2 AND statement_1)

Let's say I cause a number of events as follows and examine what happens to the counter predicate source:

Iteration   Statement         Pred1 counter   Pred2 counter
A           Not statement_1   0               1
B           statement_1       1               2
C           Not statement_1   1               3
D           statement_1       2               4

As you can see, in the case of Pred1, statement_1 is evaluated first; when it fails (A & C) predicate evaluation stops and the counter is not incremented. With Pred2 the counter is evaluated first, so it is incremented on every iteration of the event and the remaining parts of the predicate are then evaluated. In this example, Pred1 would return an event for D while Pred2 would return an event for B. But wait, there is an interesting side effect here; consider Pred2 if I had run my statements in the following order:

Not statement_1
Not statement_1
statement_1
statement_1

In this case I would never get an event back from the system, because at the point where counter = 2 the rest of the predicate evaluates as false, so the event is not returned. If you're using the counter predicate source for sampling and you're not getting the expected events, or any events, check the order of the predicate criteria. As a general rule I'd suggest that the counter criterion should be the last element of your predicate statement, since that assures that your sampling rate applies to the set of event records defined by the rest of your predicate. (A concrete sketch of both orderings follows below.)

Aside: I'm interested in hearing about uses for putting the counter predicate criteria earlier in the predicate statement. If you have one, post it in a comment to share with the class.
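To make the generic example concrete, here is a minimal sketch of the two orderings written out as full session definitions. The session names are mine, and I'm standing in for statement_1 with a sqlserver.session_id check (52 is just an illustrative value, echoing the earlier example); only the position of package0.counter differs:

-- Pred1 style: the counter only increments when the session_id check passes,
-- so the sample is taken from the filtered set of events
CREATE EVENT SESSION counter_last ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text)
    WHERE sqlserver.session_id = 52 AND package0.counter = 2)
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
GO

-- Pred2 style: the counter increments on every event, before session_id
-- is checked, so the "second event" may not come from session 52 at all
CREATE EVENT SESSION counter_first ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text)
    WHERE package0.counter = 2 AND sqlserver.session_id = 52)
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
GO

Run a few statements from session 52 and from another session and compare what each ring buffer captures; only counter_last guarantees that the sampling applies to the filtered set.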
- Mike

    Read the article

  • Ubuntu 11.04:Add right click menu as "Compress as ZIP"

    - by Ananthavel Sundararajan
Step 1: I want to add a "Compress as ZIP" item to the right-click menu. I know I can change the default compression format to ZIP using gconf-editor, but I want a new menu item that compresses to ZIP without opening any other options dialog. Step 2: I want to compress a file as ZIP and rename it to "epub". Please let me know: is it possible to zip and rename by adding a single menu item? I'm using Ubuntu 11.04 and have installed "Nautilus-Action-Configurations", but was unsuccessful. N.B. I have read this Ask Ubuntu Q&A; I don't want a new window to open to choose the format. It should be saved straight away as ZIP.

    Read the article

  • IntelliTrace Causing Slow WPF Debugging in Visual Studio 2010

    - by WeigeltRo
Just a quick note to myself (and others who may stumble across this blog entry via a web search): if a WPF application runs slowly inside the Visual Studio 2010 debugger, but perfectly fine without a debugger (e.g. by hitting Ctrl-F5), the reason may be IntelliTrace. In my case, switching off IntelliTrace (only available in the Ultimate Edition of Visual Studio 2010) got rid of the sluggish behavior of a DataGrid. In the "Tools" menu select "Options", click "IntelliTrace" in the Options dialog, and then uncheck "Enable IntelliTrace". Note that I do not have access to Visual Studio 2012 at the time of this writing, so I cannot make a statement about its debugging behavior.

    Read the article

  • How to cancel Update Manager downloading flashplugin-installer?

    - by kev
Update Manager has been frozen for about 60 minutes while downloading flashplugin-installer, and the Cancel button of the Applying Changes dialog is disabled. When I click the top-left x, it doesn't respond. How do I cancel the flashplugin-installer download?

...SKIP...
Setting up libxatracker1 (8.0.2-0ubuntu3.1) ...
Setting up update-manager-core (1:0.156.14.5) ...
Setting up update-manager (1:0.156.14.5) ...
Setting up update-notifier-common (0.119ubuntu8.4) ...
flashplugin-installer: downloading http://archive.canonical.com/pool/partner/a/adobe-flashplugin/adobe-flashplugin_11.2.202.236.orig.tar.gz

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that stood out most is remote debug support for Windows Azure Cloud Service (a.k.a. WACS).

Live Debug is a Nightmare for Cloud Applications

When we develop against a public cloud, debugging might be the most difficult task, especially after the application has been deployed. To minimize the debugging effort, Microsoft provided a local emulator for cloud service and storage once the Windows Azure platform was announced. By using the local emulator, developers can run their application on the local machine with almost the same behavior as on Windows Azure, and can debug it easily and quickly. But once we deployed our application to Azure, we had to use logs and the diagnostic monitor to debug, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug" which allows any workstation under any user to attach to the remote process. This is less secure compared to authenticated remote debugging, but much easier and simpler to use. Now with Windows Azure SDK 2.2, we can attach to our application from our local machine to Windows Azure, and it's very easy.

How to Use the Remote Debugger

First, let's create a new Windows Azure Cloud Project in Visual Studio and select ASP.NET Web Role. Then create an ASP.NET WebForm application. Then right click on the cloud project and select "publish". In the publish dialog we need to make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop as I will log into the virtual machine later in this post; it is NOT necessary for remote debugging. Then I selected the "advanced settings" tab and made sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if we check this box; currently there's no way for us to specify which role(s) and which instance(s) to enable. Finally click the "publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger.

Attaching to the remote process is easy. Open the "server explorer" window in Visual Studio and expand the "cloud services" node, find the cloud service, role and instance we just published and want to debug, right click on the instance and select "attach debugger". Then after a while (depending on how fast our Internet connection to the Windows Azure data center is) Visual Studio will switch into debug mode. Let's add a breakpoint in the default web page's form load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint; the call stack and watch features are all available to use. Now let's hit F5 to continue; back in the browser we will find the page rendered successfully.

What's Under the Hood

The remote debugger is a WACS plugin. When we check "enable remote debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them, as below.
At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. If we go to the Azure management portal we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger; as you can see below, there are two new ports opened on my application. Finally, on our WACS virtual machine, the Windows Azure system copies the remote debug component, based on which version of Visual Studio we are using, and starts it. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.

Summary

In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the remote debugger. It allows us to attach to our application from the local machine to the Windows Azure virtual machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk. And since it's only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publish a beta version to the Azure staging slot), rather than for production.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
As if Windows security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts for logon. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network, so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

Windows wants you to use Windows Live

Windows Live logins are required in order for the Windows RT account info to work. Not that I care - since installing Windows 8 I've maybe spent 10 minutes with Windows RT because - well it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does - I get all my Live logins to work from the Live account, so that Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account - this all feels like strong-arming you into moving into Microsoft's walled garden… and that's probably what it's meant to do.

Who am I?

The real problem to me though is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. So for example Windows reports folder security like this:

Notice it's showing my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl) it'll just automatically show up as the Live account. On the other hand, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account and have to log in to Windows with my Live account, what do you suppose this returns?

Console.WriteLine(Environment.UserName);

It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings, and the desktop console altogether run under that account. If I look in Task Manager (or Process Explorer for me) I see everything running on the desktop shell with my login running under my Windows user account.

I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account - it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back I see no mention anywhere of the raw Windows account - everything refers to the Live account. Right then, clear as potato soup!

So this is who you really are!

The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication - I tried to log in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings and my app's login settings, and I just could not for the life of me get into the site with my Windows username.
That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is - just weird. Then in IIS, if I look at a Trace page (or in my case my app's Status page), I see that the logged-on account is - my Windows user account.

What's really annoying about this is that in some places it uses the Live account and in other places it uses my Windows account. If I remote desktop into my Web server online, I have to use the local authentication dialog, but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical…

So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user. In any application, if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app, or even what mechanism is used there to get the user name info). When logging on to local machine resources with user name and password you have to use your Live IDs, even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Windows

    Read the article

  • 7 Ubuntu File Manager Features You May Not Have Noticed

    - by Chris Hoffman
The Nautilus file manager included with Ubuntu includes some useful features you may not notice unless you go looking for them. You can create saved searches, mount remote file systems, use tabs in your file manager, and more. Ubuntu’s file manager also includes built-in support for sharing folders on your local network – the Sharing Options dialog creates and configures network shares compatible with both Linux and Windows machines.

    Read the article

  • Configuring OpenLDAP and SSL

    - by Stormshadow
I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code with

java -Djavax.net.debug=ssl LDAPConnector

I get the following exception trace (Java version 1.6.0_17):

trigger seeding of SecureRandom
done seeding SecureRandom
%% No cached client session
*** ClientHello, TLSv1
RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 }
Session ID: {}
Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA]
Compression Methods: { 0 }
***
Thread-0, WRITE: TLSv1 Handshake, length = 73
Thread-0, WRITE: SSLv2 client hello message, length = 98
Thread-0, received EOFException: error
Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure
Thread-0, WRITE: TLSv1 Alert, length = 2
Thread-0, called closeSocket()
main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake]
        at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source)
        at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source)
        at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source)
        at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source)
        at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source)
        at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source)
        at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source)
        at javax.naming.spi.NamingManager.getInitialContext(Unknown Source)
        at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source)
        at javax.naming.InitialContext.init(Unknown Source)
        at javax.naming.InitialContext.<init>(Unknown Source)
        at javax.naming.directory.InitialDirContext.<init>(Unknown Source)
        at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43)
        at LDAPConnector.main(LDAPConnector.java:237)
Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source)
        at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source)
        at java.io.BufferedInputStream.fill(Unknown Source)
        at java.io.BufferedInputStream.read1(Unknown Source)
        at java.io.BufferedInputStream.read(Unknown Source)
        at com.sun.jndi.ldap.Connection.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException: SSL peer shut down incorrectly
        at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source)
        ... 9 more
I am able to connect to the same secure LDAP server, however, if I use another version of Java (1.6.0_14). I have created and installed the server certificates in the cacerts of both JREs as mentioned in this guide -- OpenLDAP with SSL. When I run ldapsearch -x on the server I get:

# extended LDIF
#
# LDAPv3
# base <dc=localdomain> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# localdomain
dn: dc=localdomain
objectClass: top
objectClass: dcObject
objectClass: organization
o: localdomain
dc: localdomain

# admin, localdomain
dn: cn=admin,dc=localdomain
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator

# search result
search: 2
result: 0 Success

# numResponses: 3
# numEntries: 2

On running openssl s_client -connect ldap.natraj.com:636 -showcerts, I obtain the self-signed certificate.

My slapd.conf file is as follows:

#######################################################################
# Global Directives:

# Features to permit
#allow bind_v2

# Schema and objectClass definitions
include /etc/ldap/schema/core.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/nis.schema
include /etc/ldap/schema/inetorgperson.schema

# Where the pid file is put. The init.d script
# will not stop the server if you change this.
pidfile /var/run/slapd/slapd.pid

# List of arguments that were passed to the server
argsfile /var/run/slapd/slapd.args

# Read slapd.conf(5) for possible values
loglevel none

# Where the dynamically loaded modules are stored
modulepath /usr/lib/ldap
moduleload back_hdb

# The maximum number of entries that is returned for a search operation
sizelimit 500

# The tool-threads parameter sets the actual amount of cpu's that is used
# for indexing.
tool-threads 1

#######################################################################
# Specific Backend Directives for hdb:
# Backend specific directives apply to this backend until another
# 'backend' directive occurs
backend hdb

#######################################################################
# Specific Backend Directives for 'other':
# Backend specific directives apply to this backend until another
# 'backend' directive occurs
#backend <other>

#######################################################################
# Specific Directives for database #1, of type hdb:
# Database specific directives apply to this database until another
# 'database' directive occurs
database hdb

# The base of your directory in database #1
suffix "dc=localdomain"

# rootdn directive for specifying a superuser on the database. This is needed
# for syncrepl.
rootdn "cn=admin,dc=localdomain"

# Where the database files are physically stored for database #1
directory "/var/lib/ldap"

# The dbconfig settings are used to generate a DB_CONFIG file the first
# time slapd starts. They do NOT override an existing DB_CONFIG
# file. You should therefore change these settings in DB_CONFIG directly
# or remove DB_CONFIG and restart slapd for changes to take effect.

# For the Debian package we use 2MB as default but be sure to update this
# value if you have plenty of RAM
dbconfig set_cachesize 0 2097152 0

# Sven Hartge reported that he had to set this value incredibly high
# to get slapd running at all. See http://bugs.debian.org/303057 for more
# information.

# Number of objects that can be locked at the same time.
dbconfig set_lk_max_objects 1500
# Number of locks (both requested and granted)
dbconfig set_lk_max_locks 1500
# Number of lockers
dbconfig set_lk_max_lockers 1500

# Indexing options for database #1
index objectClass eq

# Save the time that the entry gets modified, for database #1
lastmod on

# Checkpoint the BerkeleyDB database periodically in case of system
# failure and to speed slapd shutdown.
checkpoint 512 30

# Where to store the replica logs for database #1
# replogfile /var/lib/ldap/replog

# The userPassword by default can be changed
# by the entry owning it if they are authenticated.
# Others should not be able to see it, except the
# admin entry below
# These access lines apply to database #1 only
access to attrs=userPassword,shadowLastChange
        by dn="cn=admin,dc=localdomain" write
        by anonymous auth
        by self write
        by * none

# Ensure read access to the base for things like
# supportedSASLMechanisms. Without this you may
# have problems with SASL not knowing what
# mechanisms are available and the like.
# Note that this is covered by the 'access to *'
# ACL below too but if you change that as people
# are wont to do you'll still need this if you
# want SASL (and possible other things) to work
# happily.
access to dn.base="" by * read

# The admin dn has full write access, everyone else
# can read everything.
access to *
        by dn="cn=admin,dc=localdomain" write
        by * read

# For Netscape Roaming support, each user gets a roaming
# profile for which they have write access to
#access to dn=".*,ou=Roaming,o=morsnet"
#        by dn="cn=admin,dc=localdomain" write
#        by dnattr=owner write

#######################################################################
# Specific Directives for database #2, of type 'other' (can be hdb too):
# Database specific directives apply to this database until another
# 'database' directive occurs
#database <other>

# The base of your directory for database #2
#suffix "dc=debian,dc=org"

#######################################################################
# SSL:
# Uncomment the following lines to enable SSL and use the default
# snakeoil certificates.
#TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
#TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
TLSCipherSuite TLS_RSA_AES_256_CBC_SHA
TLSCACertificateFile /etc/ldap/ssl/server.pem
TLSCertificateFile /etc/ldap/ssl/server.pem
TLSCertificateKeyFile /etc/ldap/ssl/server.pem

My ldap.conf file is:

#
# LDAP Defaults
#
# See ldap.conf(5) for details
# This file should be world readable but not world writable.

HOST ldap.natraj.com
PORT 636
BASE dc=localdomain
URI ldaps://ldap.natraj.com

TLS_CACERT /etc/ldap/ssl/server.pem
TLS_REQCERT allow

#SIZELIMIT 12
#TIMELIMIT 15
#DEREF never

Why is it that I can connect to the same server using one version of the JRE while I cannot with another?

    Read the article

< Previous Page | 366 367 368 369 370 371 372 373 374 375 376 377  | Next Page >