Search Results

Search found 10463 results on 419 pages for 'task tracking'.

Page 18/419 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • How to Find Your Lost Android Phone, Even if You Never Set Up a Tracking App

    - by Chris Hoffman
    Android doesn't come with a built-in "find my Android" feature, so there's no official way to track your phone if you lose it. Ideally, you should prepare your phone for loss ahead of time by setting up a tracking app. But what if you didn't? Your first instinct may be to download Lookout's Plan B, which has been the go-to app for this purpose. However, Plan B only runs on Android 2.3 Gingerbread and lower, so modern Android phones will require a different solution. If you are still running 2.3 or lower, you should definitely check it out, but everybody else can keep reading.

    Read the article

  • Google Analytics Campaigns Not Tracking E-Commerce

    - by Paul
    I am running email campaigns via MailChimp and tracking the success of my campaigns via Google Analytics. I can successfully see data being tracked for:

    Reporting > Conversions > Ecommerce (Receiving Data)
    Reporting > Traffic Sources > Campaigns (Receiving Data)

    However, I am not receiving any Ecommerce data for the individual campaigns:

    Reporting > Traffic Sources > Campaigns > Ecommerce (No data)

    So I see data like Visits: 18,501 and Revenue: $0.00. Everything I have read leads me to believe this should just "work" if Ecommerce is set up. Is there some additional action I need to take for this to work? Any help would be appreciated!

    Read the article
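
    For context on the question above: "Ecommerce is set up" with the classic asynchronous (ga.js) tracker usually means calls along the lines of the sketch below on the receipt page. This is only a generic illustration; the property ID and all order values are placeholders, not taken from the poster's site, and it assumes the standard ga.js loader is already on the page.

        // Minimal sketch of classic (ga.js) e-commerce tracking on a receipt page.
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
        _gaq.push(['_trackPageview']);

        // order id, affiliation, total, tax, shipping, city, state, country
        _gaq.push(['_addTrans', '1234', 'Example Store', '24.99', '2.00', '3.50', 'London', '', 'UK']);
        // order id, SKU, product name, category, unit price, quantity
        _gaq.push(['_addItem', '1234', 'SKU-1', 'Example Product', 'Widgets', '19.49', '1']);
        _gaq.push(['_trackTrans']);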

  • Oracle Streamlines Tracking of Global Carbon Footprint and Greenhouse Gas Emissions

    - by Evelyn Neumayr
    Oracle has automated its global carbon footprint and greenhouse gas emissions measurement using Oracle Environmental Accounting and Reporting. By using this solution, Oracle was able to increase organizational efficiency and reduce the need for labor-intensive manual processes in the tracking of greenhouse gas (GHG) emissions for both voluntary and legislated environmental reporting. The move to Oracle Environmental Accounting and Reporting enables Oracle to more effectively meet both internal and governmental reporting needs, while addressing the associated economic mandates for reporting emissions and sustainability efforts. Organizations across the company can now record environmental data such as energy consumed or energy generated at facilities or locations within the enterprise, and can automatically calculate the corresponding GHG emissions resulting from the use of emission sources. In addition, Oracle Environmental Accounting and Reporting includes data integration from multiple applications to ensure proper representation and calculation of emissions across the globe. The result is access to fast, accurate data and reporting that helps the company meet its sustainability goals.

    Read the article

  • Parallel task in C# 4.0

    - by Jalpesh P. Vadgama
    Today's computing world is all about parallel processing. You have a multi-core CPU where different cores do different work in parallel, or work on the same task in parallel. For example, I have a 4-core CPU, so the code I write should take advantage of this. C# provides exactly that kind of facility for multi-core CPUs with the Task Parallel Library. We will explore it in this post. Read More

    Read the article
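
    As a rough illustration of the kind of code the post above is talking about, here is a minimal .NET 4.0 console sketch; the work items are made up, and this is only a sketch of the Task Parallel Library, not the post's own sample.

        using System;
        using System.Threading.Tasks;

        class Program
        {
            static void Main()
            {
                // Run independent pieces of work in parallel across the available cores.
                Parallel.Invoke(
                    () => Console.WriteLine("Work item 1"),
                    () => Console.WriteLine("Work item 2"),
                    () => Console.WriteLine("Work item 3"),
                    () => Console.WriteLine("Work item 4"));

                // Or start an explicit task (the .NET 4.0 way) and collect its result.
                Task<int> sum = Task.Factory.StartNew(() =>
                {
                    int total = 0;
                    for (int i = 1; i <= 100; i++) total += i;
                    return total;
                });
                Console.WriteLine("Sum 1..100 = {0}", sum.Result); // prints 5050
            }
        }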

  • Google E-Commerce tracking not working

    - by Mert
    I got 9 successful transactions but I can see only 2 in Google Analytics. I redirect to https when taking payment, which I suspect may be the cause, but I'm not really sure why e-commerce tracking isn't working properly. UPDATE:

        var pageTracker = _gat._getTracker('UA-1234567890');
        pageTracker._trackPageview();
        pageTracker._addTrans('254','','217,4550','','0','Istanbul','','Turkey');
        pageTracker._addItem('254','203','AAA - BBB','','169,00','1');
        pageTracker._addItem('254','167','XXX - YYY','','59,90','1');
        pageTracker._trackTrans();

    Read the article

  • Windows 7 Task Manager Tips for Performance

    In Part 1 of this tutorial we looked at how the Task Manager in Windows 7 can be used to view the processes running on your computer and the memory usage associated with each. Let's continue by discussing some more tips on using the Task Manager to help identify any issues and improve your PC's performance.

    Read the article

  • Google Analytics: tracking subdomains for a profile defined for a subdomain

    - by Alex G
    Hope you can help. We have set up a single property under our Google Analytics account. That property's default URL is set to subdomain1.example.com. We would now like to track multiple subdomains of example.com under the same property. Seems easy enough: we just need to add _gaq.push(['_setDomainName', 'example.com']); to our tracking code, right? But my question is: does it matter if a) we don't need to track www.example.com (this is tracked under a separate account and property) and b) the default URL for our property is set to subdomain1.example.com? Will either of these have any impact on data collection?

    Read the article
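
    For reference, the _setDomainName call the poster mentions normally sits inside the classic (ga.js) tracking snippet roughly as sketched below; the property ID is a placeholder and this says nothing about the poster's two specific concerns.

        // Sketch of the classic async snippet with cross-subdomain cookie sharing.
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
        // Use the bare domain so the GA cookies are shared by all subdomains.
        _gaq.push(['_setDomainName', 'example.com']);
        _gaq.push(['_trackPageview']);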

  • Help with tracking sub domain

    - by roobus
    I currently have my app's marketing/external website at the root level, e.g. http://example.com. My web app itself is hosted at http://app.example.com. What's the best strategy to set up Google Analytics tracking for both of them? Should I create a separate web property? Also, what's the difference between creating a new web property and a new profile? UPDATE: I want to be able to track conversion from a page on the root domain to a sign-up page on the app subdomain.

    Read the article

  • Tracking a single page on another domain in Google Analytics

    - by Ross
    I have access to edit a 'mini-site' hosted on our organisation's parent site. I'd like to track this page using Google Analytics; however, I don't have access to the front page, so I can't verify this as my domain. Using the tracking code for our main site works, but I don't want this data to be confused with similarly named pages on our site (for example, our mini-site is at /radio, and if we had a /radio on our main site it would be counted as the same page). Has anyone been in this situation before? I'd like to just redirect visitors from our mini-site to our main site, seeing as it ranks higher in Google, but I've been told to maintain a separate site with our main features.

    Read the article

  • Relationship between "Task Parallel Library" and "Task-based Asynchronous Pattern"?

    - by Sid
    In the context of C# and .NET 4/4.5 used for an application running on a web server, what is the relationship between the "Task Parallel Library" and the "Task-based Asynchronous Pattern"? I understand one is a library and the other is a pattern. But to dig deeper, is it like "the library is used by the pattern to enforce good practices"? I'm also not clear whether both are supported in .NET 4.0 (with the await and async keywords). Edit: It seems that await and async are only in .NET 4.5...

    Read the article
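
    As a rough sketch of the relationship asked about above: the TPL supplies the Task types, the Task-based Asynchronous Pattern is the convention of exposing an operation as a method returning Task/Task<T> (usually named XxxAsync), and async/await is the C# 5 / .NET 4.5 language support for consuming such methods. DownloadLengthAsync below is a hypothetical example method, not a framework API.

        using System;
        using System.Net;
        using System.Threading.Tasks;

        class Demo
        {
            // TAP-shaped method: returns Task<T> and carries the Async suffix.
            // Implemented here purely with TPL primitives that exist in .NET 4.0.
            static Task<long> DownloadLengthAsync(string url)
            {
                return Task.Factory.StartNew(() =>
                {
                    using (var client = new WebClient())
                    {
                        return client.DownloadData(url).LongLength;
                    }
                });
            }

            // Consuming a TAP method with await needs C# 5 / .NET 4.5.
            static async Task RunAsync()
            {
                long length = await DownloadLengthAsync("http://example.com/");
                Console.WriteLine("Downloaded {0} bytes", length);
            }

            static void Main()
            {
                RunAsync().Wait();
            }
        }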

  • Google Analytics is not tracking all of our pages

    - by luis
    Our website is insynchq.com. In the All Pages report under Content > Site Content we can only see data for some of our pages, like /, /getstarted, and /download. Others, like /gmail, /about, and /mobile, are not shown, even though we are sure there have been visits to them. We use a template for our pages, so the scripts that are loaded for / (for example) should also be loaded for /gmail; it doesn't seem to be a problem with the installation of the tracking code. Can anyone help? Thanks.

    Read the article

  • Why do weekly tasks created via PowerShell using a different user fail with error 0x41306

    - by Danny Tuppeny
    We have some scripts that create scheduled jobs using PowerShell as part of our application. When testing them recently, I noticed that some of them always failed immediately, and no output is ever produced (they don't even appear in the Get-Job list). After many days of tweaking, we've managed to isolate it to any jobs that are set to run weekly. Below is a script that creates two jobs that do exactly the same thing. When we run this on our domain, provide credentials of a domain user, and then force both jobs to run in the Task Scheduler GUI (right-click - Run), the daily one runs fine (0x0 result) and the weekly one fails (0x41306). Note: If I don't provide the -Credential param, both jobs work fine. The jobs only fail if the task is both weekly and running as this domain user. I can't find information on why this is happening, nor think of any reason it would behave differently for weekly jobs. The "History" tab in the Task Scheduler has almost no useful information, just "Task stopping due to user request" and "Task terminated", both of which have no useful info:

        Task Scheduler terminated "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" instance of the "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" task.
        Task Scheduler stopped instance "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" of task "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" as request by user "MyDomain\SomeUser".

    What's up with this? Why do weekly tasks run differently, and how can I diagnose this issue? This is PowerShell v3 on Windows Server 2008 R2. I've been unable to reproduce this locally, but I don't have a user set up in the same way as the one in our production domain (I'm working on this, but I wanted to post this ASAP in the hope someone knows what's happening!).

        Import-Module PSScheduledJob
        $Action = { "Executing job!" }
        $cred = Get-Credential "MyDomain\SomeUser"

        # Remove previous versions (to allow re-running this script)
        Get-ScheduledJob Test1 | Unregister-ScheduledJob
        Get-ScheduledJob Test2 | Unregister-ScheduledJob

        # Create two identical jobs, with different triggers
        Register-ScheduledJob "Test1" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Weekly -At 1:25am -DaysOfWeek Sunday)
        Register-ScheduledJob "Test2" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Daily -At 1:25am)

    Read the article

  • Troubleshooting High-CPU Utilization for SQL Server

    - by Susantha Bathige
    The objective of this FAQ is to outline the basic steps in troubleshooting high CPU utilization on a server hosting a SQL Server instance. The first and most common step, if you suspect high CPU utilization (or are alerted for it), is to log in to the physical server and check the Windows Task Manager. The Performance tab will show the high utilization. Next, we need to determine which process is responsible for the high CPU consumption; the Processes tab of the Task Manager will show this information. Note that to see all processes you should select "Show processes from all users". In this case, SQL Server (sqlserver.exe) is consuming 99% of the CPU (a normal benchmark for maximum CPU utilization is about 50-60%). Next we examine the scheduler data. The scheduler is a component of SQLOS which evenly distributes load amongst CPUs. The query below returns the important columns for CPU troubleshooting. Note: if your server is under severe stress and you are unable to log in with SSMS, you can use another machine's SSMS to connect to the server through DAC - Dedicated Administrator Connection (see http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using DAC).

        SELECT scheduler_id
              ,cpu_id
              ,status
              ,runnable_tasks_count
              ,active_workers_count
              ,current_tasks_count
              ,load_factor
              ,yield_count
        FROM sys.dm_os_schedulers
        WHERE scheduler_id < 1048576 -- regular (user-query) schedulers; higher IDs are internal

    See below for the BOL definitions of the above columns.

    scheduler_id – ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Those schedulers that have IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler.
    cpu_id – ID of the CPU with which this scheduler is associated.
    status – Indicates the status of the scheduler.
    runnable_tasks_count – Number of workers, with tasks assigned to them, that are waiting to be scheduled on the runnable queue.
    active_workers_count – Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended.
    current_tasks_count – Number of current tasks that are associated with this scheduler.
    load_factor – Internal value that indicates the perceived load on this scheduler.
    yield_count – Internal value that is used to indicate progress on this scheduler.

    Now to interpret the above data. There are four schedulers, each assigned to a different CPU. All the CPUs are ready to accept user queries, as they are all ONLINE. There are 294 active tasks in the output as per the current_tasks_count column. This count indicates how many activities are currently associated with the schedulers; when a task is complete, the number is decremented. 294 is quite a high figure and indicates all four schedulers are extremely busy. When a task is enqueued, the load_factor value is incremented. This value is used to determine whether a new task should be put on this scheduler or another scheduler; the new task will be allocated to a less loaded scheduler by SQLOS. The very high value of this column indicates all the schedulers have a high load. There are 268 runnable tasks, which means all of these tasks have been assigned a worker and are waiting to be scheduled on the runnable queue. The next step is to identify which queries are demanding a lot of CPU time. The query below is useful for this purpose (note, in its current form, it only shows the top 10 records).

        SELECT TOP 10 st.text
              ,st.dbid
              ,st.objectid
              ,qs.total_worker_time
              ,qs.last_worker_time
              ,qp.query_plan
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
        ORDER BY qs.total_worker_time DESC

    This query uses total_worker_time as the measure of CPU load and sorts by it in descending order, so the most expensive queries and their plans appear at the top. Note the BOL definitions for the important columns:

    total_worker_time – Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled.
    last_worker_time – CPU time, in microseconds, that was consumed the last time the plan was executed.

    I re-ran the same query after a few seconds. This time the stored procedure dbo.TestProc1 appeared in fourth place, and once again its last_worker_time was the highest, which means TestProc1 consumes CPU time continuously each time it executes. In this case, the primary cause of the high CPU utilization was a stored procedure. You can view the execution plan by clicking on the query_plan column to investigate why it is causing a high CPU load. I have used SQL Server 2008 (SP1) to test all the queries used in this article.

    Read the article
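
    The article mentions connecting through the Dedicated Administrator Connection when the server is too busy for a normal SSMS session; the same connection can also be opened from a command prompt with sqlcmd. A sketch (the server name is a placeholder):

        REM -A requests the DAC; -E uses Windows authentication.
        sqlcmd -S PRODSQL01 -A -E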

  • ERROR 0x8007007A when trying to schedule a task

    - by Paul Hollingsworth
    I am getting the error "The data area passed to a system call is too small. (Exception from HRESULT: 0x8007007A)" when trying to create a scheduled task on a particular Windows machine. The problem description is identical to that described in this Microsoft KB article. I followed their steps to resolve it: stopped the Task Scheduler service (right-clicked "Task Scheduler" in the Services window from Control Panel and selected "Stop"), restarted the Task Scheduler service, waited 15 minutes, and tried to schedule the task again. But the error persists. To give more context on how we create these scheduled tasks: they are generated automatically from a configuration script (we run the script each time we wish to make a change), and each time this happens it deletes all of the existing tasks and creates new ones. I don't know what else to try... but surely there is some way to "reset" the Task Scheduler. How can I stop this error from happening?

    Read the article
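
    For what it's worth, the stop/restart steps described above can also be scripted from an elevated command prompt; "Schedule" is the service name behind "Task Scheduler". This is only a sketch of the same KB steps, not a confirmed fix, and some Windows versions refuse to stop this service.

        net stop schedule
        net start schedule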

  • Changing Windows Media Center StartRecording Task Scheduler options

    - by T Reddy
    Hi, I'm experiencing some occasional crashes when waking from sleep, and I believe it may be related to the Corsair SSD F40GB2 as my boot disk... or the new Ceton tuner I recently installed... I don't know what the real problem is, as I can't reproduce it manually. I read somewhere that I should not allow the hard disks to sleep, so I'm trying that in the meantime. At any rate, I don't really care that the computer crashes so long as WMC will start recording the task as soon as possible; the task is scheduled to wake the computer 5 minutes before the recording starts. I noticed that there is a StartRecording task in the Media Center folder in Task Scheduler. There is an option (currently unchecked) that will retry a failed task. I would like to know if there is a way to enable that checkbox. My hope is that if my PC crashes when waking up for this task, then after it reboots the task will trigger again and start recording. Thanks!

    Read the article
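
    The "retry if the task fails" checkbox corresponds to the RestartCount and RestartInterval settings of a task. Below is a heavily hedged sketch of changing them on the StartRecording task through the Task Scheduler COM API from an elevated PowerShell prompt; the task path, count and interval are assumptions and this has not been verified against a Media Center box.

        # Assumed location: \Microsoft\Windows\Media Center\StartRecording
        $svc = New-Object -ComObject 'Schedule.Service'
        $svc.Connect()
        $folder = $svc.GetFolder('\Microsoft\Windows\Media Center')
        $task   = $folder.GetTask('StartRecording')
        $def    = $task.Definition
        $def.Settings.RestartCount    = 3       # retry up to 3 times
        $def.Settings.RestartInterval = 'PT5M'  # wait 5 minutes between retries
        # 4 = TASK_UPDATE: write the modified definition back over the existing task
        $folder.RegisterTaskDefinition($task.Name, $def, 4, $null, $null, $def.Principal.LogonType) | Out-Null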

  • Windows Task Manager Crashes Every Time It's Opened. Solutions?

    - by Winston
    I got the following report:

      Problem signature:
      Problem Event Name: APPCRASH
      Application Name: taskmgr.exe
      Application Version: 6.1.7600.16385
      Application Timestamp: 4a5bc3ee
      Fault Module Name: hostv32.dll
      Fault Module Version: 0.0.0.0
      Fault Module Timestamp: 4c5c027d
      Exception Code: c0000005
      Exception Offset: 0000000000068b73
      OS Version: 6.1.7600.2.0.0.256.48
      Locale ID: 1033
      Additional Information 1: bf4f
      Additional Information 2: bf4f79e8ecbde38b818b2c0e2771a379
      Additional Information 3: d246
      Additional Information 4: d2464c78aa97e6b203cd0fca121f9a58

      Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
      If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt

    Whenever I open the Task Manager, within one or two seconds it says that it has stopped working, giving the above report. Anyone have solutions?

    Read the article

  • Task Manager always crashes within 1 or 2 seconds. Solutions?

    - by tallship
    This is the error report:

      Problem signature:
      Problem Event Name: APPCRASH
      Application Name: taskmgr.exe
      Application Version: 6.1.7600.16385
      Application Timestamp: 4a5bc3ee
      Fault Module Name: hostv32.dll
      Fault Module Version: 0.0.0.0
      Fault Module Timestamp: 4c5c027d
      Exception Code: c0000005
      Exception Offset: 0000000000068b73
      OS Version: 6.1.7600.2.0.0.256.48
      Locale ID: 1033
      Additional Information 1: bf4f
      Additional Information 2: bf4f79e8ecbde38b818b2c0e2771a379
      Additional Information 3: d246
      Additional Information 4: d2464c78aa97e6b203cd0fca121f9a58

      Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=104288&clcid=0x0409
      If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt

    Whenever I open the task manager, within a few seconds it crashes, saying it has stopped working with the above report. I took the fault module (hostv32.dll) and scanned it with avast but it found no threat. Any reason/solution to this problem? Thanks

    Read the article

  • My PowerShell script won't save a file when run using Task Scheduler; do I need to specify a specific argument?

    - by EGr
    I have a script that downloads a temporary Excel file, copies parts of it to a new file, and saves it to a specific location on the network. The problem I'm having is that the new file is never created/saved. If I run the script locally (through cmd.exe, powershell, or powershell ise), it WILL save the file locally or to the network. If I run the script via a schedule or on demand via Task Scheduler, the temporary file is created, but the final document is never created or saved. Is there a specific argument I need to pass, or anything I could be doing wrong? This is the command I'm currently using:

        powershell.exe -file C:\path\to\my\powershell\script\thescript.ps1

    Since it calls environment variables, and other variables relative to the script's location, I also set "Start in" to C:\path\to\my\powershell\script\

    Read the article
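
    Two generic things people try when a script behaves differently under Task Scheduler (sketches only, not a confirmed fix for the problem above): have the script set its own working directory instead of relying on the "Start in" field, and pass -NoProfile / -ExecutionPolicy Bypass on the command line.

        # At the top of thescript.ps1: anchor relative paths to the script's own folder.
        # (Works on PowerShell 2.0; on 3.0+ you can use $PSScriptRoot instead.)
        Set-Location (Split-Path -Parent $MyInvocation.MyCommand.Path)

        # Task action ("Program/script" plus arguments), using the path from the question:
        # powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\path\to\my\powershell\script\thescript.ps1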

  • How do I allow a (local) user to start/stop services with a scheduled task?

    - by Mulmoth
    Hi, on a Windows 2008 R2 server I have two small .cmd scripts to start/stop a certain service. They look like this:

        net start MyService

    and

        net stop MyService

    I want to execute these scripts via a scheduled task, and I thought it would be best to create a local user for this job. The user is not a member of the Administrators group. But the scripts fail with exit code 2. When I log on with this local user and try to execute these scripts on the command line, I see a message like (maybe not exactly translated from German to English): "Error code 5: Access denied". It doesn't matter whether I start the command line as Administrator or not. How can this local user gain the rights to do the job?

    Read the article

  • How to schedule a task X minutes after Windows Server 2003 starts?

    - by Joe Schmoe
    How do I schedule a task X minutes after Windows Server 2003 starts? In "Scheduled Tasks" one can specify "When my computer starts", but I see no way to specify a delay. What I am trying to achieve: there is a service (JIRA) that, even though it depends on the SQL Server service, still doesn't wait long enough for SQL Server to become fully operational. So the JIRA service fails to connect to the database and needs to be restarted manually after each server reboot. My plan is to run "SC stop" and "SC start" commands for the JIRA service 3 minutes after the server starts.

    Read the article
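
    The plan described above - wait a few minutes after startup, then bounce the JIRA service - can be sketched as a startup scheduled task that runs a small batch file along these lines. The 180-second wait and the service name "JIRA" come from the question and may need adjusting; this is only a sketch.

        @echo off
        rem Wait roughly 180 seconds; pinging the loopback address ticks about
        rem once per second and needs no extra tools on Windows Server 2003.
        ping -n 181 127.0.0.1 > nul
        rem net stop/net start wait for the service control manager to finish.
        net stop JIRA
        net start JIRA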

  • How to Force an Exception from a Task to be Observed in a Continuation Task?

    - by Richard
    I have a task to perform an HttpWebRequest using Task<WebResponse>.Factory.FromAsync(req.BeginGetResponse, req.EndGetResponse), which can obviously fail with a WebException. To the caller I want to return a Task<HttpResult>, where HttpResult is a helper type to encapsulate the response (or not). In this case a 4xx or 5xx response is not an exception. Therefore I've attached two continuations to the request task, one with TaskContinuationOptions OnlyOnRanToCompletion and the other with OnlyOnFaulted, and then wrapped the whole thing in a Task<HttpResult> to pick up the one result from whichever continuation completes. Each of the three child tasks (request plus two continuations) is created with the AttachedToParent option. But when the caller waits on the returned outer task, an AggregateException is thrown if the request failed. I want to observe the WebException in the on-faulted continuation so the client code can just look at the result. Adding a Wait in the on-faulted continuation throws, but a try-catch around this doesn't help. Nor does looking at the Exception property (as section "Observing Exceptions By Using the Task.Exception Property" hints here). I could install an UnobservedTaskException event handler to filter, but as the event offers no direct link to the faulted task this will likely interact outside this part of the application and is a case of a sledgehammer to crack a nut. Given an instance of a faulted Task<T>, is there any means of flagging it as "fault handled"? Simplified code:

        public static Task<HttpResult> Start(Uri url)
        {
            var webReq = BuildHttpWebRequest(url);
            var result = new HttpResult();
            var taskOuter = Task<HttpResult>.Factory.StartNew(() =>
            {
                var tRequest = Task<WebResponse>.Factory.FromAsync(
                    webReq.BeginGetResponse, webReq.EndGetResponse,
                    null, TaskCreationOptions.AttachedToParent);
                var tError = tRequest.ContinueWith<HttpResult>(
                    t => HandleWebRequestError(t, result),
                    TaskContinuationOptions.AttachedToParent
                    | TaskContinuationOptions.OnlyOnFaulted);
                var tSuccess = tRequest.ContinueWith<HttpResult>(
                    t => HandleWebRequestSuccess(t, result),
                    TaskContinuationOptions.AttachedToParent
                    | TaskContinuationOptions.OnlyOnRanToCompletion);
                return result;
            });
            return taskOuter;
        }

    with:

        private static HttpResult HandleWebRequestError(
            Task<WebResponse> respTask, HttpResult result)
        {
            Debug.Assert(respTask.Status == TaskStatus.Faulted);
            Debug.Assert(respTask.Exception.InnerException is WebException);
            // Try and observe the fault: Doesn't help.
            try
            {
                respTask.Wait();
            }
            catch (AggregateException e)
            {
                Log("HandleWebRequestError: waiting on antecedent task threw inner: "
                    + e.InnerException.Message);
            }
            // ... populate result with details of the failure for the client ...
            return result;
        }

    (HandleWebRequestSuccess will eventually spin off further tasks to get the content of the response...) The client should be able to wait on the task and then look at its result, without it throwing due to a fault that is expected and already handled.

    Read the article

  • Google Analytics checkout page tracking problem

    - by Amir E. Habib
    I am running a multilingual website, with each language on a different domain name. I am trying to lead all purchase requests to the checkout process, which has its own domain too. In order to keep Google Analytics tracking, I've updated the Google Analytics code accordingly and set the source domain to 'multiple top-level domains'. Everything is going fine so far, except that in the E-commerce Overview the "Source / Medium" always shows as (direct) - or as the name of the source domain. Since I am redirecting using PHP header(Location: ...), the Google _link method doesn't seem to be working properly. I want to focus on two questions: Should I create a new profile for the checkout domain in Google Analytics? (I am now using the profile ID of the source domain even though I move to the checkout domain - is that OK?) And when I pass the cookies of the source domain to the checkout domain, I notice that the Google cookies are copied to the new domain (the cookie path is .checkout-domain/) with the same values as the original cookies - but for some reason another set of cookies, with different values (same path), is created once I access a page with the Google Analytics code on the checkout pages. It feels like I'm doing something wrong here, so my question is: what am I doing wrong? Does anyone have an idea how to pass the cookies to the checkout domain?

    Read the article
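
    For reference, the classic (ga.js) cross-domain pattern the poster refers to looks roughly like the sketch below: _setAllowLinker on the receiving pages and _link (or _linkByPost for forms) on the way out, which works by appending the GA cookie values to the outgoing URL - something a server-side header() redirect skips. The property ID and domain names are placeholders.

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
        _gaq.push(['_setDomainName', 'source-domain.com']);
        _gaq.push(['_setAllowLinker', true]);   // also needed on the checkout domain's pages
        _gaq.push(['_trackPageview']);

        // On a link that leaves the source site, pass the visitor's cookies along:
        // <a href="https://checkout-domain.com/start"
        //    onclick="_gaq.push(['_link', this.href]); return false;">Checkout</a>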
