Search Results

Search found 2822 results on 113 pages for 'scheduled backups'.

Page 50/113 | < Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >

  • Shell script for daily disk usage report

    - by Master
    I am doing backups on my local drives. The drives are mounted in the /media folder. Now I want to run a daily cron job which reports, in table format, how much disk space each folder uses and how much free space is left on each drive. It would be good if I could insert that info into a database and view it on a web page on localhost (Ubuntu 10).
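
    A minimal sketch of the kind of report a daily cron job could produce, assuming GNU coreutils on Ubuntu (the report path and the optional MySQL table are placeholders, not anything from the question):

        #!/bin/sh
        # daily-disk-report.sh - per-folder usage and free space for the backup drives
        REPORT=/var/local/disk-report.txt
        {
            echo "== Disk usage per folder under /media =="
            du -sh /media/*/                     # size of each mounted backup folder
            echo
            echo "== Free space per drive =="
            df -h /media/*                       # used/available space for each mount
        } > "$REPORT"
        # Optional: load the figures into a local MySQL table so a localhost
        # web page can render them (table and credentials are hypothetical):
        # mysql -u report -pSECRET reports -e "INSERT INTO disk_report ..."

    A cron entry for a daily run could then look like: 0 6 * * * /usr/local/bin/daily-disk-report.sh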

    Read the article

  • MySql backup (MySqldump questions)

    - by Camran
    I have a VPS with Ubuntu 9 server. I need to back up my MySQL database. Can MySQL make backups automatically? If so, how? If not, how should I do it? The website is a classifieds website (PHP, MySQL, etc.). Thanks
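
    MySQL does not schedule backups by itself, but mysqldump driven by cron is the usual approach; a hedged sketch (database name, user, password and paths are placeholders):

        # /etc/cron.d/mysql-backup - nightly dump at 02:30, gzipped, one file per day
        # (note: % must be escaped as \% inside a crontab)
        30 2 * * * root mysqldump -u backupuser -pSECRET classifieds | gzip > /var/backups/classifieds-$(date +\%F).sql.gz

    mysqldump --all-databases does the same for every schema on the server in a single file.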

    Read the article

  • What is the BRU Server Disaster Recovery Procedure?

    - by Jonskichov
    How do I go about performing a full, bare metal disaster recovery from BRU Server backups? I have backed up the entire C:\ (including Windows, Program Files etc) of a test machine (using Open File Manager) and want to restore this to a new server. What is the procedure I need to go through to restore to a clean server using my backup? How does this work with various services such as DHCP, DNS, Active Directory, SQL Server, Windows Registry etc? Thanks in advance.

    Read the article

  • `find` command not available in web host, how to implement a delete based on modification time using other commands?

    - by CalumJEadie
    I'm creating a simple database backup solution for a client using web hosting at DataFlame. The web hosting account provides access to cron but not a shell. I have a database backup script creating regular backups, and I want to automatically remove those more than N days old. I attempted to use find -v $backup_dir -mtime +$keep_days -name "*db.tar.gz" -delete, however the user executing the script does not have permission to run find. Can you suggest how to implement this without using the find command?
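
    One workaround, assuming GNU coreutils and that cron runs the script under a normal shell, is to compare each backup against a reference file timestamped N days in the past instead of calling find:

        # remove *db.tar.gz backups older than $keep_days without using find
        ref=$(mktemp)
        touch -d "$keep_days days ago" "$ref"
        for f in "$backup_dir"/*db.tar.gz; do
            [ -e "$f" ] || continue            # pattern matched nothing
            if [ "$f" -ot "$ref" ]; then       # file is older than the reference timestamp
                rm -f -- "$f"
            fi
        done
        rm -f "$ref"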

    Read the article

  • Scheduling runtime-specified Activity in Workflow 4 RC

    - by johnny g
    Hi, so I have this requirement to kick off Activities provided to me at run-time. To facilitate this, I have set up a WorkflowService that receives Activities as Xaml, hydrates them, and kicks them off. Sounds simple enough ... ... this is my WorkflowService in Xaml <Activity x:Class="Workflow.Services.WorkflowService.WorkflowService" ... xmlns:local1="clr-namespace:Workflow.Activities" > <Sequence sap:VirtualizedContainerService.HintSize="277,272"> <Sequence.Variables> <Variable x:TypeArguments="local:Workflow" Name="Workflow" /> </Sequence.Variables> <sap:WorkflowViewStateService.ViewState> <scg3:Dictionary x:TypeArguments="x:String, x:Object"> <x:Boolean x:Key="IsExpanded">True</x:Boolean> </scg3:Dictionary> </sap:WorkflowViewStateService.ViewState> <p:Receive CanCreateInstance="True" DisplayName="ReceiveSubmitWorkflow" sap:VirtualizedContainerService.HintSize="255,86" OperationName="SubmitWorkflow" ServiceContractName="IWorkflowService"> <p:ReceiveParametersContent> <OutArgument x:TypeArguments="local:Workflow" x:Key="workflow">[Workflow]</OutArgument> </p:ReceiveParametersContent> </p:Receive> <local1:InvokeActivity Activity="[ActivityXamlServices.Load(New System.IO.StringReader(Workflow.Xaml))]" sap:VirtualizedContainerService.HintSize="255,22" /> </Sequence> </Activity> ... which, except for repetitive use of "Workflow" is pretty straight forward. In fact, it's just a Sequence with a Receive and [currently] a custom Activity called InvokeActivity. Get to that in a bit. Receive Activity accepts a custom type, [DataContract] public class Workflow { [DataMember] public string Xaml { get; set; } } which contains a string whose contents are to be interpreted as Xaml. You can see the VB expression that then converts this Xaml to an Activity and passes it on. Now this second bit, the custom InvokeActivity is where I have questions. First question: 1) given an arbitrary task, provided at runtime [as described above] is it possible to kick off this Activity using Activities that ship with WF4RC, out of the box? I'm fairly new, and thought I did a good job going through the API and existing documentation, but may as well ask :) Second: 2) my first attempt at implementing a custom InvokeActivity looked like this public sealed class InvokeActivity : NativeActivity { private static readonly ILog _log = LogManager.GetLogger (typeof (InvokeActivity)); public InArgument<Activity> Activity { get; set; } public InvokeActivity () { _log.DebugFormat ("Instantiated."); } protected override void Execute (NativeActivityContext context) { Activity activity = Activity.Get (context); _log.DebugFormat ("Scheduling activity [{0}]...", activity.DisplayName); // throws exception to lack of metadata! :( ActivityInstance instance = context.ScheduleActivity (activity, OnComplete, OnFault); _log.DebugFormat ( "Scheduled activity [{0}] with instance id [{1}].", activity.DisplayName, instance.Id); } protected override void CacheMetadata (NativeActivityMetadata metadata) { // how does one add InArgument<T> to metadata? 
not easily // is my first guess base.CacheMetadata (metadata); } // private methods private void OnComplete ( NativeActivityContext context, ActivityInstance instance) { _log.DebugFormat ( "Scheduled activity [{0}] with instance id [{1}] has [{2}].", instance.Activity.DisplayName, instance.Id, instance.State); } private void OnFault ( NativeActivityFaultContext context, Exception exception, ActivityInstance instance) { _log.ErrorFormat ( @"Scheduled activity [{0}] with instance id [{1}] has faulted in state [{2}] {3}", instance.Activity.DisplayName, instance.Id, instance.State, exception.ToStringFullStackTrace ()); } } Which attempts to schedule the specified Activity within the current context. Unfortunately, however, this fails. When I attempt to schedule said Activity, the runtime returns with the following exception The provided activity was not part of this workflow definition when its metadata was being processed. The problematic activity named 'DynamicActivity' was provided by the activity named 'InvokeActivity'. Right, so the "dynamic" Activity provided at runtime is not a member of InvokeActivitys metadata. Googled and came across this. Couldn't sort out how to specify an InArgument<Activity> to metadata cache, so my second question is, naturally, how does one address this issue? Is it ill advised to use context.ScheduleActivity (...) in this manner? Third and final, 3) I have settled on this [simpler] solution for the time being, public sealed class InvokeActivity : NativeActivity { private static readonly ILog _log = LogManager.GetLogger (typeof (InvokeActivity)); public InArgument<Activity> Activity { get; set; } public InvokeActivity () { _log.DebugFormat ("Instantiated."); } protected override void Execute (NativeActivityContext context) { Activity activity = Activity.Get (context); _log.DebugFormat ("Invoking activity [{0}] ...", activity.DisplayName); // synchronous execution ... a little less than ideal, this // seems heavy handed, and not entirely semantic-equivalent // to what i want. i really want to invoke this runtime // activity as if it were one of my own, not a separate // process - wrong mentality? WorkflowInvoker.Invoke (activity); _log.DebugFormat ("Invoked activity [{0}].", activity.DisplayName); } } Which simply invokes specified task synchronously within its own runtime instance thingy [use of WF4 vernacular is certainly questionable]. Eventually, I would like to tap into WF's tracking and possibly persistance facilities. So my third and final question is, in terms of what I would like to do [ie kick off arbitrary workflows inbound from client applications] is this the preferred method? Alright, thanks in advance for your time and consideration :)
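
    On question 2, the usual way to register an argument in CacheMetadata is to describe it with a RuntimeArgument and bind it, rather than calling base.CacheMetadata; a sketch of what that could look like here:

        protected override void CacheMetadata (NativeActivityMetadata metadata)
        {
            // describe the InArgument<Activity> and bind it into this activity's metadata
            var activityArgument = new RuntimeArgument ("Activity", typeof (Activity), ArgumentDirection.In);
            metadata.Bind (this.Activity, activityArgument);
            metadata.AddArgument (activityArgument);
        }

    Note this only registers the argument; ScheduleActivity still expects the scheduled child to be part of the workflow definition's metadata, which a runtime-supplied activity is not, so handing the hydrated activity to WorkflowInvoker or WorkflowApplication (as in the third snippet) generally remains the practical route.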

    Read the article

  • Multi-tenancy - single database vs multiple database

    - by RichardW1001
    We have a number of clients whose systems share some functionality but also have quite a degree of diversity. The number of clients is growing - always a healthy thing! - and the diversity between their businesses is also increasing. At present there is a single ASP.NET (Web Forms) Web Site (as opposed to a web project), which has sub-folders for each tenant containing that tenant's non-standard pages. There is a separate model project, which deals with database access and business logic. Which is preferable - and most importantly, why: (a) one database per client, with only the features associated with that client; or (b) a single database shared by all clients, where only a subset of tables is used by any one client? The main concerns within the business are maintenance of multiple assets (backups, version control and the like) and promoting re-use as much as possible. How would you ensure these concerns are addressed, which solution is preferable, and why? (I have also been compiling responses to similar questions.)
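
    For concreteness, a hedged sketch of the two shapes (table and column names are invented): in the shared-database model every tenant-owned table carries a discriminator column that every query filters on, while in the database-per-client model the schema is identical in each database and the application simply picks the client's connection string at runtime.

        -- (b) single shared database: a TenantId column on every tenant-owned table
        CREATE TABLE Orders (
            OrderId   INT IDENTITY PRIMARY KEY,
            TenantId  INT NOT NULL,
            OrderDate DATETIME NOT NULL
        );
        CREATE INDEX IX_Orders_TenantId ON Orders (TenantId);

        -- every query is scoped to the current tenant
        SELECT OrderId, OrderDate FROM Orders WHERE TenantId = @TenantId;

        -- (a) one database per client: the same schema, without TenantId, in each
        --     database; the model project chooses the connection string per client.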

    Read the article

  • New Whitepaper: Advanced Compression 11gR1 Benchmarks with EBS 12

    - by Steven Chan
    In my opinion, if there's any reason to upgrade an E-Business Suite environment to the 11gR1 or 11gR2 database, it's the Advanced Compression database option. Oracle Advanced Compression was introduced in Oracle Database 11g, and allows you to compress structured data (numbers, characters) as well as unstructured data (documents, spreadsheets, XML and other files). It provides enhanced compression for database backups and also includes network compression for faster synchronization with standby databases. In other words, the promise of Advanced Compression is that it can make your E-Business Suite database smaller and faster. But how well does it actually deliver on that promise?

    Apps 12 + Advanced Compression benchmarks now available: three of my colleagues, Uday Moogala, Lester Gutierrez, and Andy Tremayne, have been benchmarking Oracle E-Business Suite Release 12 with Advanced Compression 11gR1. They've just released a detailed whitepaper with their benchmarking results and recommendations. The whitepaper is available in two locations: Oracle E-Business Suite Release 12.1 with Oracle Database 11g Advanced Compression (Note 1110648.1) (requires My Oracle Support access), and Oracle E-Business Suite Release 12.1 with Oracle Database 11g Advanced Compression (Applications Benchmark website, PDF, 500K).
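
    For reference (this is generic Advanced Compression DDL, not material from the whitepaper), OLTP table compression is switched on per table; the table name below is only an example:

        -- 11gR2 syntax; 11gR1 spells the same option COMPRESS FOR ALL OPERATIONS
        ALTER TABLE ap.invoice_lines MOVE COMPRESS FOR OLTP;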

    Read the article

  • Want to go to DevConnections for Free? Speak at DotNetNuke Connections

    - by Chris Hammond
    So every year in November (for the past 3 years at least!) DotNetNuke has been part of the DevConnections conference in Las Vegas, Nevada. This year (2010) will be no different, as DotNetNuke Connections is back (this year’s conference is scheduled for 11/1-11/4/2010), and guess what? I can tell you how to go to the conference for free! (travel to/from Las Vegas not included) How, might you ask? Well, if you didn’t know this already, people who are selected to give presentations at...(read more)

    Read the article

  • Ask How-To Geek: Backing Up Photos to Flickr, Automating Repetitive Tasks, and Normalizing MP3 Volume

    - by Jason Fitzpatrick
    This week we take a look at how to automate your Flickr backups, knock out repetitive tasks with automation, and normalize your MP3 collection’s wild volume levels. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week’s reader dilemmas.

    Read the article

  • TFS 2010 and the missing Area & Iterations (stale data) Issue

    - by andresv
    The symptom is this: you change some area or iteration in a TFS Project, but the change is not reflected (or updated) in VS or any other TFS client. Well, it happens that TFS now has some clever caching mechanisms that need to be updated when you make a change like this, and those changes are propagated by scheduled jobs that TFS continuously runs on the Application Tier. So, if you get this behavior, please check (and possibly restart) the "Visual Studio Team Foundation Background Job Agent" service. In my case, this service was logging a very odd "Object Reference Not Set" error to the Windows Event Log, and a simple restart fixed it. Hope this is fixed by RTM... (we are using the RC version). And by the way, if the job agent is broken, some other things stop working too, such as email notifications. Best regards, Andrés G Vettori, CTO, VMBC
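
    If you would rather script the restart than use the Services console, something along these lines should work on the application tier (verify the display name on your install; it is the one a default TFS 2010 setup registers):

        # PowerShell, run on the TFS 2010 application tier
        Restart-Service -DisplayName "Visual Studio Team Foundation Background Job Agent"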

    Read the article

  • Troubleshooting High-CPU Utilization for SQL Server

    - by Susantha Bathige
    The objective of this FAQ is to outline the basic steps in troubleshooting high CPU utilization on  a server hosting a SQL Server instance. The first and the most common step if you suspect high CPU utilization (or are alerted for it) is to login to the physical server and check the Windows Task Manager. The Performance tab will show the high utilization as shown below: Next, we need to determine which process is responsible for the high CPU consumption. The Processes tab of the Task Manager will show this information: Note that to see all processes you should select Show processes from all user. In this case, SQL Server (sqlserver.exe) is consuming 99% of the CPU (a normal benchmark for max CPU utilization is about 50-60%). Next we examine the scheduler data. Scheduler is a component of SQLOS which evenly distributes load amongst CPUs. The query below returns the important columns for CPU troubleshooting. Note – if your server is under severe stress and you are unable to login to SSMS, you can use another machine’s SSMS to login to the server through DAC – Dedicated Administrator Connection (see http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using DAC) SELECT scheduler_id ,cpu_id ,status ,runnable_tasks_count ,active_workers_count ,load_factor ,yield_count FROM sys.dm_os_schedulers WHERE scheduler_id See below for the BOL definitions for the above columns. scheduler_id – ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Those schedulers that have IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler. cpu_id – ID of the CPU with which this scheduler is associated. status – Indicates the status of the scheduler. runnable_tasks_count – Number of workers, with tasks assigned to them that are waiting to be scheduled on the runnable queue. active_workers_count – Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended. current_tasks_count - Number of current tasks that are associated with this scheduler. load_factor – Internal value that indicates the perceived load on this scheduler. yield_count – Internal value that is used to indicate progress on this scheduler.                                                                 Now to interpret the above data. There are four schedulers and each assigned to a different CPU. All the CPUs are ready to accept user queries as they all are ONLINE. There are 294 active tasks in the output as per the current_tasks_count column. This count indicates how many activities currently associated with the schedulers. When a  task is complete, this number is decremented. The 294 is quite a high figure and indicates all four schedulers are extremely busy. When a task is enqueued, the load_factor  value is incremented. This value is used to determine whether a new task should be put on this scheduler or another scheduler. The new task will be allocated to less loaded scheduler by SQLOS. The very high value of this column indicates all the schedulers have a high load. There are 268 runnable tasks which mean all these tasks are assigned a worker and waiting to be scheduled on the runnable queue.   The next step is  to identify which queries are demanding a lot of CPU time. The below query is useful for this purpose (note, in its current form,  it only shows the top 10 records). 
    SELECT TOP 10 st.text, st.dbid, st.objectid, qs.total_worker_time, qs.last_worker_time, qp.query_plan
    FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
    ORDER BY qs.total_worker_time DESC

    This query uses total_worker_time as the measure of CPU load and sorts in descending order of total_worker_time to show the most expensive queries and their plans at the top. Note the BOL definitions for the important columns: total_worker_time - Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled. last_worker_time - CPU time, in microseconds, that was consumed the last time the plan was executed. I re-ran the same query a few seconds later and got the output below: the stored procedure dbo.TestProc1 now appears in fourth place, and once again its last_worker_time is the highest. This means the procedure TestProc1 consumes CPU time continuously each time it executes. In this case, the primary cause of high CPU utilization was a stored procedure. You can view the execution plan by clicking on the query_plan column to investigate why it causes a high CPU load. I have used SQL Server 2008 (SP1) to test all the queries used in this article.

    Read the article

  • Address Regulatory Mandates for Data Encryption Without Changing Your Applications

    - by Troy Kitch
    The Payment Card Industry Data Security Standard, US state-level data breach laws, and numerous data privacy regulations worldwide all call for data encryption to protect personally identifiable information (PII). However encrypting PII data in applications requires costly and complex application changes. Fortunately, since this data typically resides in the application database, using Oracle Advanced Security, PII can be encrypted transparently by the Oracle database without any application changes. In this ISACA webinar, learn how Oracle Advanced Security offers complete encryption for data at rest, in transit, and on backups, along with built-in key management to help organizations meet regulatory requirements and save money. You will also hear from TransUnion Interactive, the consumer subsidiary of TransUnion, a global leader in credit and information management, which maintains credit histories on an estimated 500 million consumers across the globe, about how they addressed PCI DSS encryption requirements using Oracle Database 11g with Oracle Advanced Security. Register to watch the webinar now.
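
    As an illustration of the "no application changes" point (generic Oracle Advanced Security DDL, not material from the webinar; the table and tablespace names are invented):

        -- column-level transparent data encryption: applications keep issuing the same SQL
        ALTER TABLE customers MODIFY (ssn ENCRYPT USING 'AES256');

        -- or encrypt everything written to a tablespace
        CREATE TABLESPACE secure_data
            DATAFILE 'secure_data01.dbf' SIZE 100M
            ENCRYPTION USING 'AES256'
            DEFAULT STORAGE (ENCRYPT);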

    Read the article

  • Replication Services in a BI environment

    - by jorg
    In this blog post I will explain the principles of SQL Server Replication Services without too much detail and I will take a look on the BI capabilities that Replication Services could offer in my opinion. SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformations, they do not offer capabilities to transform data or apply business rules, like ETL tools do. The only “transformations” Replication Services offers is to filter records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using WHERE statements like this: SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60 There are three types of replication: Transactional Replication This type replicates data on a transactional level. The Log Reader Agent reads directly on the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor), this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions that are stored in the Distribution Database to the Subscriber. The Distribution Agent can either run at scheduled intervals or continuously which offers near real-time replication of data! So for example when a user executes an UPDATE statement on one or multiple records in the publisher database, this transaction (not the data itself) is copied to the distribution database and is then also executed on the subscriber. When the Distribution Agent is set to run continuously this process runs all the time and transactions on the publisher are replicated in small batches (near real-time), when it runs on scheduled intervals it executes larger batches of transactions, but the idea is the same. Snapshot Replication This type of replication makes an initial copy of database objects that need to be replicated, this includes the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber. Transactional replication need an initial snapshot of the replicated publisher tables/objects to run its cloned transactions on and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files that will be stored in the Snapshot Folder which is a normal folder on the file system. When all the schemas are ready, the data itself will be copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber, if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature, when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s). Merge Replication Merge replication is typically used in server-to-client environments, for example when subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers, like with mobile devices that need to synchronize one in a while. Because I don’t really see BI capabilities here, I will not explain this type of replication any further. 
    Replication Services in a BI environment: Transactional Replication can be very useful in BI environments. In my opinion you never want users running custom (SSRS) reports or PowerPivot solutions directly against your production database; it can slow down the system and cause deadlocks in the database, which lead to errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments; if you don’t need a near real-time copy of the database, you can choose this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. Creating these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records, and it can even apply schema changes automatically, so I think it offers enough features here. I don’t know how the performance will be and whether it really works as well for this purpose as I expect, but I want to try it out soon!

    Read the article

  • Live Support Webinar for Oracle Primavera Customers

    - by karl.prutzer
    Hi all, our Customer Support team is hosting another Live Support Webinar for Oracle Primavera customers, scheduled for May 6, 2010 at 11am Eastern Time. The webinar covers the following topics:
    - Best Practices when submitting an SR
    - My Oracle Support Overview
    - Support Resources - lifetime support policy, My Oracle Support Speed training resources, etc.
    Both the conference key for the web conference and the audio passcode for the call is... Primavera Audio Conference Details: Toll Free dial-in number = 1.877.808.5067 | International Toll dial-in number = 1.706.902.0289 | Web conference link: https://strtc.oracle.com/imtapp/app/sch_mtg_details.uix?mID=6761278

    Read the article

  • Use Evernote’s Secret Debug Menu to Optimize and Speed Up Searching

    - by The Geek
    If your Evernote installation has become sluggish after adding thousands of notes, you might be able to speed it up a bit with this great tip from Matthew’s TechInch blog that uncovers a secret debug menu in the latest Windows client. It’s important to note that Evernote runs database optimization in the background automatically, so this really shouldn’t be necessary, but if your database is sluggish, anything is worth a shot, right?

    Read the article

  • mysql show databases not showing databases that are in /opt/bitnami/mysql/data directory

    - by hgolov
    Thank you for taking the time to look at my question. I have an EBS-backed EC2 Ubuntu server which is running but unreachable. ** There are, very stupidly, no recent backups ** I made a snapshot of the block device, created a volume from it, spun up a new instance, and attached the new volume. I can see all the data from my site in the /opt/bitnami/mysql/data directory, but when I type show databases; in the mysql console it shows only information_schema and test. How can I 'point' mysql to the correct folder? Thank you!
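
    Assuming a standard Bitnami layout (the paths below are worth double-checking on your instance), the running MySQL only lists the databases under the datadir it was started with, so point it at the restored directory, fix ownership, and restart:

        # in the MySQL config the running server actually reads,
        # e.g. /opt/bitnami/mysql/my.cnf on a Bitnami stack
        [mysqld]
        datadir=/opt/bitnami/mysql/data

        # make sure the restored files are owned by the mysql user, then restart
        sudo chown -R mysql:mysql /opt/bitnami/mysql/data
        sudo /opt/bitnami/ctlscript.sh restart mysql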

    Read the article

  • rsnapshot intervals in configuration file…

    - by Patrick
    A simple question about rsnapshot. In order to perform daily backups I'm going to add lines to cron on my Ubuntu box. Then why do I also have these lines in rsnapshot.conf?

        #########################################
        # BACKUP INTERVALS                      #
        # Must be unique and in ascending order #
        # i.e. hourly, daily, weekly, etc.      #
        #########################################
        interval hourly 6
        interval daily 7
        interval weekly 4
        #interval monthly 3

    If I use cron, should I disable them? Thanks. PS: I've just realized that in the crontab I still have "hourly" and "daily". Should I then uncomment only the ones I use in the crontab? And what's the point of specifying hourly if it is already specified in cron? I'm a bit confused.

        # crontab -e
        0 */4 * * * /usr/local/bin/rsnapshot hourly
        30 23 * * * /usr/local/bin/rsnapshot daily

    Read the article

  • Configuring Expert Search in Communicator 14 and SharePoint 2010

    Communicator 14 provides functionality to be able to search for contacts not just by name, but by skill.  For example a customer service agent at an airline can search for other agents with Travel Advisory experience by typing the search criteria into the Communicator search box and performing a search by keyword.  The search results will return users who have specified that skill in their profile on their SharePoint My Site.  This is actually pretty easy to configure, Ill show you how. Create Search and People Search Results Pages in SharePoint Communicator 14 Expert Search works by using the SharePoint 2010 Search Service to search SharePoint for user profiles with matching keywords.  This requires that you have an Enterprise Search site in your site collection which includes the search service and also the People Results pages.  The easiest way to do this is to create a Search Center site in your site collection. Note: I get an error when trying to create an Enterprise Search site in a Team Site in the SharePoint 2010 RTM bits, so I created it as a site collection that is evident in the URLs you see below. In the screenshots below, you can see that the URL of the SharePoint search service in the Search site collection is http://sps2010/sites/search/_vti_bin/search.asmx, and the URL of the People Search Results page is http://sps2010/sites/Search/Pages/peopleresults.aspx. Point Communications Server 14 to Search and People Search Results Pages For Communicator 14 to be able to perform an Expert Search, you need to configure Communications Server 14 to point to the Search Service and People Search Results page URLs. From a server with the OCS Core bits installed, fire up the Communications Server Management Shell and type Get-CsClientPolicy. Scroll down to the bottom of the output, were interested in setting the values of: SPSearchInternalURL SPSearchExternalURL SPSearchCenterInternalURL SPSearchCenterExternalURL SPSearchInternalURL and SPSearchExternalURL correspond to the internal and external URLs of the SharePoint search service in the Search site collection, while SPSearchCenterInternalURL and SPSearchCenterExternalURL correspond to the internal and external URLs of the people search results pages. Well use the Communications Server Management Shell to set the values of these CS policy properties. For simplicity, Im only going to set the internal URLs here. Set-CsClientPolicy SPSearchInternalURL http://sps2010/sites/search/_vti_bin/search.asmx     -SPSearchCenterInternalURL http://sps2010/sites/Search/Pages/peopleresults.aspx Log out and back into Communicator.  You can verify that these settings were applied by running the Get-CsClientPolicy cmdlet again from the Communications Server Management Shell. However, theres another super-secret ninja trick to verify that the settings were applied: Find the Communicator icon in the Windows System Tray Hold down the Ctrl button Click (left) the Communicator icon in the Windows System Tray do not depress the Ctrl button You should now see an extra menu item called Configuration Information, click it. Scroll down and locate the Expert Search URL and SharePoint Search Center URL keys and verify that their values correspond to those you set using the Set-CsClientPolicy PowerShell cmdlet. Configure a Sharepoint User Profile Import Im not going to provide detailed steps here except to say that you need to configure the SharePoint 2010 User Profile  Service Application to import user account details from Active Directory on a scheduled basis. 
    This is a critical step because there are several user profile properties, e.g. SipAddress, that are only populated by a user profile import. When performing an Expert Search, Communicator can only render results for users who have a SipAddress specified. Add Skills to User Profiles: navigate to your My Site and click on My Profile. This page allows you to set many contact details that are searchable in SharePoint. We’re particularly interested in the Ask Me About property of a user’s profile; Expert Search searches against this property to find users with matching skills. Configure a SharePoint Search Crawl: ensure that you have a scheduled job to crawl your Local SharePoint Sites content source. Depending on how you have this configured, it will also crawl the My Site site collection and add user properties such as Ask Me About to the search index. That’s it! SharePoint 2010 provides new social and collaboration features to help users find other users with similar skills or interests. Expert Search extends this functionality directly into Microsoft Communicator 14, allowing you to interact with the users directly from the search results.

    Read the article

  • list of things to think about for hosting a potentially high traffic website

    - by SpashHit
    I do my own hosting for a few clients on my own VPS server (Linode). Since my clients so far have been extremely low traffic, I have not had to really dig into some of the considerations that I would need for a higher traffic site. Now I am bidding on a client whose site will potentially see higher traffic (not Facebook or Twitter, but higher than Joe's ice cream shop). Is there a list of things I need to think about that I may be missing? I am going to assume, at least at first, that I will be able to handle them on my shared Linode, but I could move to a dedicated Linode if need be. I am not yet thinking of multiple servers, but short of that there are still considerations. For example, mod_perl instead of straight CGI, better backups, etc. What else? In case it matters, the stack will be Debian Linux / Apache / Perl / MySQL / Template Toolkit.

    Read the article

  • Hello With Oracle Identity Manager Architecture

    - by mustafakaya
    Hi, my name is Mustafa! I'm a Senior Consultant in Fusion Middleware Team and living in Istanbul,Turkey. I worked many various Java based software development projects such as end-to-end web applications, CRM , Telco VAS and integration projects.I want to share my experiences and research about Fusion Middleware Products in this column. Customer always wants best solution from software consultants or developers. Solution will be a code snippet or change complete architecture. We faced different requests according to the case of customer. In my posts i want to discuss Fusion Middleware Products Architecture or how can extend usability with apis or UI customization and more and I look forward to engaging with you on your experiences and thoughts on this.  In my first post, i will be discussing Oracle Identity Manager architecture  and i plan to discuss Oracle Identity Manager 11g features in next posts. Oracle Identity Manager System Architecture Oracle Identity Governance includes Oracle Identity Manager,Oracle Identity Analytics and Oracle Privileged Account Manager. I will discuss Oracle Identity Manager architecture in this post.  In basically, Oracle Identity Manager is a n-tier standard  Java EE application that is deployed on Oracle WebLogic Server and uses  a database .  Oracle Identity Manager presentation tier has three different screen and two different client. Identity Self Service and Identity System Administration are web-based thin client. Design Console is a Java Swing Client that communicates directly with the Business Service Tier.  Identity Self Service provides end-user operations and delegated administration features. System Administration provides system administration functions. And Design Console mostly use for development management operations such as  create and manage adapter and process form,notification , workflow desing, reconciliation rules etc. Business service tier is implemented as an Enterprise JavaBeans(EJB) application. So you can extense Oracle Identity Manager capabilities.  -The SMPL and EJB APIs allow develop custom plug-ins such as management roles or identities.  -Identity Services allow use core business capabilites of Oracle Identity Manager such as The User provisioning or reconciliation service. -Integration Services allow develop custom connectors or adapters for various deployment needs. -Platform Services allow use Entitlement Servers, Scheduler or SOA composites. The Middleware tier allows you using capabilites ADF Faces,SOA Suites, Scheduler, Entitlement Server and BI Publisher Reports. So OIM allows you to configure workflows uses Oracle SOA Suite or define authorization policies use with Oracle Entitlement Server. Also you can customization of OIM UI without need to write code and using ADF Business Editor  you can extend custom attributes to user,role,catalog and other objects. Data tiers; Oracle Identity Manager is driven by data and metadata which provides flexibility and adaptability to Oracle Identity Manager functionlities.  -Database has five schemas these are OIM,SOA,MDS,OPSS and OES. Oracle Identity Manager uses database to store runtime and configuration data. And all of entity, transactional and audit datas are stored in database. 
-Metadata Store; customizations and personalizations are stored in file-based repository or database-based repository.And Oracle Identity Manager architecture,the metadata is in Oracle Identity Manager database to take advantage of some of the advanced performance and availability features that this mode provides. -Identity Store; Oracle Identity Manager provides the ability to integrate an LDAP-based identity store into Oracle Identity Manager architecture.  Oracle Identity Manager uses the human workflow module of Oracle Service Oriented Architecture Suite. OIM connects to SOA using the T3 URL which is front-end URL for the SOA server.Oracle Identity Manager uses embedded Oracle Entitlement Server for authorization checks in OIM engine.  Several Oracle Identity Manager modules use JMS queues. Each queue is processed by a separate Message Driven Bean (MDB), which is also part of the Oracle Identity Manager application. Message producers are also part of the Oracle Identity Manager application. Oracle Identity Manager uses a scheduled jobs for some activities in the background.Some of scheduled jobs come with Out-Of-Box such as the disable users after the end date of the users or you can define your custom schedule jobs with Oracle Identity Manager APIs. You can use Oracle BI Publisher for reporting Oracle Identity Manager transactions or audit data which are in database. About me: Mustafa Kaya is a Senior Consultant in Oracle Fusion Middleware Team, living in Istanbul. Before coming to Oracle, he worked in teams developing web applications and backend services at a telco company. He is a Java technology enthusiast, software engineer and addicted to learn new technologies,develop new ideas. Follow Mustafa on Twitter,Connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • How to Change and Manually Start and Stop Automatic Maintenance in Windows 8

    - by Lori Kaufman
    Windows 8 has a new feature that allows you to automatically run scheduled daily maintenance on your computer. These maintenance tasks run in the background and include security updating and scanning, Windows software updates, disk defragmentation, system diagnostics, among other tasks. We’ve previously shown you how to automate maintenance in Windows 7, Vista, and XP. Windows 8 maintenance is automatic by default and the performance and energy efficiency has been improved over Windows 7. The program for Windows 8 automatic maintenance is called MSchedExe.exe and it is located in the C:\Windows\System32 directory. We will show you how you can change the automatic maintenance settings in Windows 8 and how you can start and stop the maintenance manually. NOTE: It seems that you cannot turn off the automatic maintenance in Windows 8. You can only change the settings and start and stop it manually.

    Read the article

  • How do I recover a BTRFS filesystem with "parent transid verify failed" errors?

    - by Evan P.
    I've got an external USB disk running btrfs. I use it for backups, and each time I do a backup I take a snapshot. However, it's giving me this error now: parent transid verify failed on 109973766144 wanted 1823 found 1821 parent transid verify failed on 109973766144 wanted 1823 found 1821 Obviously, this is a non-critical disk, but I have a few files on here that aren't available elsewhere. Is there any way to recover data from this disk? Maybe by mounting one of the snapshots as root?
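
    Some hedged options, roughly in order of increasing desperation (exact flags depend on your kernel and btrfs-progs versions, and /dev/sdX1 is a placeholder for the USB disk's partition):

        # 1. try a read-only mount that falls back to an older tree root
        mount -o ro,recovery /dev/sdX1 /mnt        # newer kernels call this option usebackuproot
        # you can also try mounting one of the snapshots directly:
        # mount -o ro,subvol=<snapshot-name> /dev/sdX1 /mnt
        # 2. copy files out without mounting the filesystem at all
        btrfs restore /dev/sdX1 /mnt/rescue
        # 3. last resort - clears the log tree; image the disk first if the data matters
        btrfs rescue zero-log /dev/sdX1            # older progs ship this as btrfs-zero-log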

    Read the article

  • SQL SERVER – An Efficiency Tool to Compare and Synchronize SQL Server Databases

    - by Pinal Dave
    There is no need to reinvent the wheel if it is already invented and if the wheel is already available at ease, there is no need to wait to grab it. Here is the similar situation. I came across a very interesting situation and I had to look for an efficient tool which can make my life easier and solve my business problem. Here is the scenario. One of the developers had deleted few rows from the very important mapping table of our development server (thankfully, it was not the production server). Though it was a development server, the entire development team had to stop working as the application started to crash on every page. Think about the lost of manpower and efficiency which we started to loose.  Pretty much every department had to stop working as our internal development application stopped working. Thankfully, we even take a backup of our development server and we had access to full backup of the entire database at 6 AM morning. We do not take as a frequent backup of development server as production server (naturally!). Even though we had a full backup, the solution was not to restore the database. Think about it, there were plenty of the other operations since the last good full backup and if we restore a full backup, we will pretty much overwrite on the top of the work done by developers since morning. Now, as restoring the full backup was not an option we decided to restore the same database on another server. Once we had restored our database to another server, the challenge was to compare the table from where the database was deleted. The mapping table from where the data were deleted contained over 5000 rows and it was humanly impossible to compare both the tables manually. Finally we decided to use efficiency tool dbForge Data Compare for SQL Server from DevArt. dbForge Data Compare for SQL Server is a powerful, fast and easy to use SQL compare tool, capable of using native SQL Server backups as metadata source. (FYI we Downloaded dbForge Data Compare) Once we discovered the product, we immediately downloaded the product and installed on our development server. After we installed the product, we were greeted with the following screen. We clicked on the New Data Comparision to start our new comparison project. It brought up following screen. Here is the best part of the product, we just had to enter our database connection username and password along with source and destination details and we are done. The entire process is very simple and self intuiting. The best part was that for the source, we can either select database or even backup. This was indeed fantastic feature. Think about this, if you have a very big database, it will take long time to restore on the server. Once it is restored, you will be able to work with it. However, when you are working with dbForge Data Compare it will accept database backup as your source or destination. Once I click on the execute it brought up following screen where it displayed an excellent summary of the data compare. It has dedicated tabs for the what is changing in what table as well had details of the changed data. The best part is that, once we had reviewed the change. We click on the Synchronize button in the menu bar and it brought up following screen. You can see that the screen has very simple straight forward but very powerful features. You can generate a script to synchronize from target to source or even from source to target. 
Additionally, the database is a very complicated world and there are extensive options to configure various database options on the next screen. We also have the option to either generate script or directly execute the script to target server. I like to play on the safe side and I generated the script for my synchronization and later on after review I deployed the scripts on the server. Well, my team and we were able to get going from our disaster in less than 10 minutes. There were few people in our team were indeed disappointed as they were thinking of going home early that day but in less than 10 minutes they had to get back to work. There are so many other features in  dbForge Data Compare for SQL Server, I am already planning to make this product company wide recommended product for Data Compare tool. Hats off to the team who have build this product. Here are few of the features salient features of the dbForge Data Compare for SQL Server Perform SQL Server database comparison to detect changes Compare SQL Server backups with live databases Analyze data differences between two databases Synchronize two databases that went out of sync Restore data of a particular table from the backup Generate data comparison reports in Excel and HTML formats Copy look-up data from development database to production Automate routine data synchronization tasks with command-line interface Go Ahead and Download the dbForge Data Compare for SQL Server right away. It is always a good idea to get familiar with the important tools before hand instead of learning it under pressure of disaster. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology

    Read the article

  • How to Back Up Ubuntu the Easy Way with Déjà Dup

    - by Chris Hoffman
    Déjà Dup is a simple — yet powerful — backup tool included with Ubuntu. It offers the power of rsync with incremental backups, encryption, scheduling, and support for remote services. With Déjà Dup, you can quickly revert files to previous versions or restore missing files from a file manager window. It’s a graphical frontend to Duplicity, which itself uses rsync. It offers the power of rsync with a simple interface.
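
    Because Déjà Dup drives Duplicity underneath, the same kind of backup can also be exercised from a terminal; a minimal hedged example (the paths are placeholders, and Duplicity will prompt for a GnuPG passphrase to encrypt with):

        # encrypted, incremental backup of a home directory to an external drive
        duplicity /home/user file:///media/backupdrive/home-backup

        # restore a single file from the most recent backup set
        duplicity restore --file-to-restore Documents/report.odt \
            file:///media/backupdrive/home-backup /tmp/report.odt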

    Read the article
