Search Results

Search found 13947 results on 558 pages for 'television shows'.


  • Merge Records in Session bean by using ADF Drag/Drop

    - by shantala.sankeshwar
    This article describes how to merge multiple selected records into a session bean using the ADF drag-and-drop feature. Below is a simple use case that shows how this can be achieved. The page has a table and a user input field: the table shows EMP records and the input field accepts a Salary. When we drag and drop multiple records onto the input field, the selected records are updated with the new Salary provided.

    Steps: Suppose we have created a Java EE web application with entities from the EMP table, then created an EJB session bean and generated a data control for it. Write a simple method in the session bean and expose it on the local interface:

        public void updateEmprecords(List empList, Object sal) {
            Emp emp = null;
            for (int i = 0; i < empList.size(); i++) {
                emp = em.find(Emp.class, empList.get(i));
                emp.setSal((BigDecimal) sal);
            }
            em.merge(emp);
        }

    Now create an updateEmpRecords.jspx page in the ViewController project and drop the empFindAll object as an ADF table. Define a custom selection listener method for the table:

        public void selectionListener(SelectionEvent selectionEvent) {
            // Gets the Empno of the selected record and stores it in the list object
            UIXTable table = (UIXTable) selectionEvent.getComponent();
            FacesCtrlHierNodeBinding fcr =
                (FacesCtrlHierNodeBinding) table.getSelectedRowData();
            Number empNo = (Number) fcr.getAttribute("empno");
            this.getSelectedRowsList().add(empNo);
        }

    Set the table's selectedRowKeys to #{bindings.empFindAll.collectionModel.selectedRow}. Drop an inputText on the same jspx page to accept the Salary. Now we want to drag records from the table and drop them on the inputText field. This is achieved by inserting a dragSource operation inside the table and a dropTarget operation inside the inputText:

        <af:dragSource discriminant="tab"/>  <!-- Insert this inside the table -->

        <af:inputText label="Enter Salary" id="it13" autoSubmit="true"
                      binding="#{test.deptValue}">
            <af:dropTarget dropListener="#{test.handleTableDrop}">
                <af:dataFlavor
                    flavorClass="org.apache.myfaces.trinidad.model.RowKeySet"
                    discriminant="tab"/>
            </af:dropTarget>
            <af:convertNumber/>
        </af:inputText>

    With the above code, when the user drags and drops multiple records onto the inputText, the dropListener method gets called. Go to the respective page definition file, create an updateEmprecords method action and an Execute action, and add the dropListener method code:

        public DnDAction handleTableDrop(DropEvent dropEvent) {
            // Gets the updateEmprecords method binding, passes parameters and executes the method
            DataFlavor<RowKeySet> df = DataFlavor.getDataFlavor(RowKeySet.class);
            RowKeySet droppedKeySet = dropEvent.getTransferable().getData(df);
            if (droppedKeySet != null && droppedKeySet.size() > 0) {
                DCBindingContainer bindings =
                    (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
                OperationBinding updateEmp;
                updateEmp = bindings.getOperationBinding("updateEmprecords");
                updateEmp.getParamsMap().put("sal",
                    this.getDeptValue().getAttributes().get("value"));
                updateEmp.getParamsMap().put("empList", this.getSelectedRowsList());
                updateEmp.execute();

                // Performs the Execute operation to refresh the updated records
                OperationBinding executeBinding;
                executeBinding = bindings.getOperationBinding("Execute");
                executeBinding.execute();
                AdfFacesContext.getCurrentInstance().addPartialTarget(dropEvent.getDragComponent());
                this.getSelectedRowsList().clear();
            }
            return DnDAction.NONE;
        }

    Run the updateEmpRecords.jspx page and enter any Salary, say 5000. Select multiple records in the table and drop them on the Salary inputText. Note that the salary of all the selected records gets updated to 5000.
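    One detail worth noting in the session-bean method above: em.merge(emp) sits outside the for loop, so only the last entity looked up is merged. Because em.find() returns managed entities, the setSal() calls are persisted at commit anyway, but the intent is clearer if each record is handled inside the loop. A minimal variant along those lines (the typed parameters and the injected EntityManager em are assumptions for illustration, not part of the original post):

        // Hypothetical variant of the session-bean method from the article
        public void updateEmprecords(List<Object> empKeys, BigDecimal sal) {
            for (Object key : empKeys) {
                Emp emp = em.find(Emp.class, key);   // returns a managed entity
                if (emp != null) {
                    emp.setSal(sal);                 // change is flushed at commit
                }
            }
            // no explicit merge is needed for entities that are already managed
        }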


  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. We have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately.

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB.

    (click chart for full size image)

    The chart above illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image)

    The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but it also highlights the benefits of some of the other block sizes at different source file sizes.

    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:

    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB / 32KB / 64KB / 128KB / 256KB / 512KB / 1MB / 5MB / 10MB / 25MB / 50MB / 100MB / 250MB / 500MB / 750MB / 1GB Uploads, Raw Data)
    - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB / 512KB / 1MB / 2MB / 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
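    The post's test harness is C# against the 1.1 .NET storage client and is linked above. As a rough, hedged illustration of the same block-and-parallelize idea, here is a minimal sketch using the current azure-storage-blob Java client; the connection string environment variable, container name, file name, block size and use of a parallel stream are all assumptions for the example, not details from the post:

        import com.azure.storage.blob.BlobServiceClientBuilder;
        import com.azure.storage.blob.specialized.BlockBlobClient;

        import java.io.ByteArrayInputStream;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.ArrayList;
        import java.util.Base64;
        import java.util.List;
        import java.util.stream.IntStream;

        public class BlockedParallelUpload {
            public static void main(String[] args) throws Exception {
                // Placeholder connection string, container and blob name
                BlockBlobClient blob = new BlobServiceClientBuilder()
                        .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
                        .buildClient()
                        .getBlobContainerClient("uploads")
                        .getBlobClient("bigfile.bin")
                        .getBlockBlobClient();

                // Reads the whole file into memory for simplicity; a real tool would stream it
                byte[] data = Files.readAllBytes(Path.of("bigfile.bin"));
                int blockSize = 1024 * 1024;   // 1MB blocks, per the post's recommendation
                int blockCount = (data.length + blockSize - 1) / blockSize;

                // All block IDs must be Base64 strings of equal length
                List<String> blockIds = new ArrayList<>();
                for (int i = 0; i < blockCount; i++) {
                    blockIds.add(Base64.getEncoder()
                            .encodeToString(String.format("%06d", i).getBytes(StandardCharsets.UTF_8)));
                }

                // Stage the blocks in parallel, then commit them in order
                IntStream.range(0, blockCount).parallel().forEach(i -> {
                    int offset = i * blockSize;
                    int len = Math.min(blockSize, data.length - offset);
                    byte[] chunk = new byte[len];
                    System.arraycopy(data, offset, chunk, 0, len);
                    blob.stageBlock(blockIds.get(i), new ByteArrayInputStream(chunk), len);
                });
                blob.commitBlockList(blockIds);
            }
        }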


  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:

    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)

    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:

    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication

    These and many more are discussed in the “MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services” Developer Zone article. Back to the benchmark - details are as follows.

    Environment: The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and it was based on the following configuration:

    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure, Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `k` int(10) unsigned NOT NULL DEFAULT '0',
          `c` char(120) NOT NULL DEFAULT '',
          `pad` char(60) NOT NULL DEFAULT '',
          PRIMARY KEY (`id`),
          KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. Each sysbench client executed 10,000 "update key" statements:

        UPDATE sbtest SET k=k+1 WHERE id = <random row>

    In total, this generated 100,000 update statements to later replicate during the test itself.

    Test Methodology: The number of slave workers to test with was configured using:

        SET GLOBAL slave_parallel_workers=<workers>

    Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took for the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset: The test-reset cycle was implemented as follows:

    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog

    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for each worker count from 0 to 24, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration: The relevant configuration settings used for MySQL are as follows:

        binlog-format=STATEMENT
        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:

    0 worker threads:
        - current (i.e. single threaded) sequential mode
        - 1 x IO thread and 1 x SQL thread
        - SQL thread both reads and executes the events
    1 worker thread:
        - sequential mode
        - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread
        - coordinator reads the event and hands it to the worker, who executes it
    2+ worker threads:
        - parallel execution
        - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads
        - coordinator reads events and hands them to the workers, who execute them

    Results: Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).

    Figure 1: 5x Higher Performance with Multi-Threaded Slaves

    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.

    Figure 2: Detailed Results

    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:

    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.
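    The post does not include the benchmark driver itself. As a rough sketch of how one timed iteration of the Test Methodology above could be scripted over JDBC, with the connection URL, credentials, binlog file name and position all placeholders rather than details from the post:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class MtsBenchmarkIteration {
            // Runs one timed iteration for a given number of slave worker threads
            static double runOnce(String slaveUrl, int workers,
                                  String binlogFile, long binlogPos) throws Exception {
                try (Connection c = DriverManager.getConnection(slaveUrl, "bench", "secret");
                     Statement s = c.createStatement()) {
                    s.execute("SET GLOBAL slave_parallel_workers = " + workers);
                    s.execute("START SLAVE IO_THREAD");
                    // ...wait here until the relay log holds all 100k updates...

                    long start = System.nanoTime();
                    s.execute("START SLAVE SQL_THREAD");
                    // Blocks until the SQL thread reaches the given master binlog position
                    try (ResultSet rs = s.executeQuery(
                            "SELECT MASTER_POS_WAIT('" + binlogFile + "', " + binlogPos + ")")) {
                        rs.next();
                    }
                    double seconds = (System.nanoTime() - start) / 1e9;
                    return 100_000 / seconds;   // benchmark metric: queries per second
                }
            }
        }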
    Aside from the results shown above, testing also demonstrated that the following settings had a very positive effect on slave performance:

        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion: As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don’t hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix – Detailed Results


  • Lenovo V570 CPU fan running constantly, CPU core 1 running over 90%!

    - by Rabbit2190
    I have seen that a lot of people are having this same issue. I am running a Lenovo V570 i5 4 core, 6 gigs of ram, and am running 11.10 Onieric Ocelot. On my system monitor graph it shows CPU at 20%, when I open the monitor it shows core #1 at around 90%, the other cores fluctuate at or below 5-12% if even. Now this seems like a really terrible balance of power between the cores, especially with so much stress on one core only, when these things are designed to work with 4 cores and not at such high temps. My current readings say 64 degrees Celsius, this does not seem normal for any cpu, and I am seriously considering, working on my windows7 partition until I see a real solution to this issue or upgrading to 12.04 right away when it comes out... I have seen countless things saying it has something to do with the Kernel, the kernel on mine is the same as when I upgraded, I really do not like messing with it, as when I had 11.04, I did tinker with it due to the freeze issues I was having, and that just made worse issues. I like this version 11.10 and would like to keep it for a while, but without the fear that my core is going to fry! So any help would be much appreciated! I did try changing a couple things in ACPI, and restarting this did not help, and here I am. I tried one thing prior to that that was listed under a different computer brand, but it would not do a make on the file. I really need help with this, I rely on this computer for a lot of things, and love this OS! Please help so I do not need to resort to my Microsoft partition! PLEASE! Here is the fwts cpufrequ- output: rabbit@rabbit-Lenovo-V570:~$ sudo fwts cpufreq - 00001 fwts Results generated by fwts: Version V0.23.25 (Thu Oct 6 15 00002 fwts :12:31 BST 2011). 00003 fwts 00004 fwts Some of this work - Copyright (c) 1999 - 2010, Intel Corp. 00005 fwts All rights reserved. 00006 fwts Some of this work - Copyright (c) 2010 - 2011, Canonical. 00007 fwts 00008 fwts This test run on 02/04/12 at 17:23:22 on host Linux 00009 fwts rabbit-Lenovo-V570 3.0.0-17-generic-pae #30-Ubuntu SMP Thu 00010 fwts Mar 8 17:53:35 UTC 2012 i686. 00011 fwts 00012 fwts Running tests: cpufreq. 00014 cpufreq CPU frequency scaling tests (takes ~1-2 mins). 00015 cpufreq --------------------------------------------------------- 00016 cpufreq Test 1 of 1: CPU P-State Checks. 00017 cpufreq For each processor in the system, this test steps through 00018 cpufreq the various frequency states (P-states) that the BIOS 00019 cpufreq advertises for the processor. For each processor/frequency 00020 cpufreq combination, a quick performance value is measured. 
The 00021 cpufreq test then validates that: 00022 cpufreq 1) Each processor has the same number of frequency states 00023 cpufreq 2) Higher advertised frequencies have a higher performance 00024 cpufreq 3) No duplicate frequency values are reported by the BIOS 00025 cpufreq 4) Is BIOS wrongly doing Sw_All P-state coordination across cores 00026 cpufreq 5) Is BIOS wrongly doing Sw_Any P-state coordination across cores 00027 cpufreq Frequency | Speed 00028 cpufreq -----------+--------- 00029 cpufreq 2.45 Ghz | 100.0 % 00030 cpufreq 2.45 Ghz | 83.7 % 00031 cpufreq 2.05 Ghz | 69.2 % 00032 cpufreq 1.85 Ghz | 62.5 % 00033 cpufreq 1.65 Ghz | 55.2 % 00034 cpufreq 1400 Mhz | 48.6 % 00035 cpufreq 1200 Mhz | 41.8 % 00036 cpufreq 1000 Mhz | 34.5 % 00037 cpufreq 800 Mhz | 27.6 % 00038 cpufreq 9 CPU frequency steps supported 00039 cpufreq Frequency | Speed 00040 cpufreq -----------+--------- 00041 cpufreq 2.45 Ghz | 97.7 % 00042 cpufreq 2.45 Ghz | 83.7 % 00043 cpufreq 2.05 Ghz | 69.6 % 00044 cpufreq 1.85 Ghz | 63.3 % 00045 cpufreq 1.65 Ghz | 55.7 % 00046 cpufreq 1400 Mhz | 48.7 % 00047 cpufreq 1200 Mhz | 41.7 % 00048 cpufreq 1000 Mhz | 34.5 % 00049 cpufreq 800 Mhz | 27.5 % 00050 cpufreq Frequency | Speed 00051 cpufreq -----------+--------- 00052 cpufreq 2.45 Ghz | 97.7 % 00053 cpufreq 2.45 Ghz | 84.4 % 00054 cpufreq 2.05 Ghz | 69.6 % 00055 cpufreq 1.85 Ghz | 62.6 % 00056 cpufreq 1.65 Ghz | 55.9 % 00057 cpufreq 1400 Mhz | 48.7 % 00058 cpufreq 1200 Mhz | 41.7 % 00059 cpufreq 1000 Mhz | 34.7 % 00060 cpufreq 800 Mhz | 27.8 % 00061 cpufreq Frequency | Speed 00062 cpufreq -----------+--------- 00063 cpufreq 2.45 Ghz | 100.0 % 00064 cpufreq 2.45 Ghz | 82.6 % 00065 cpufreq 2.05 Ghz | 67.8 % 00066 cpufreq 1.85 Ghz | 61.4 % 00067 cpufreq 1.65 Ghz | 54.9 % 00068 cpufreq 1400 Mhz | 48.3 % 00069 cpufreq 1200 Mhz | 41.1 % 00070 cpufreq 1000 Mhz | 34.3 % 00071 cpufreq 800 Mhz | 27.4 % 00072 cpufreq Frequency | Speed 00073 cpufreq -----------+--------- 00074 cpufreq 2.45 Ghz | 96.2 % 00075 cpufreq 2.45 Ghz | 82.5 % 00076 cpufreq 2.05 Ghz | 69.3 % 00077 cpufreq 1.85 Ghz | 62.7 % 00078 cpufreq 1.65 Ghz | 55.0 % 00079 cpufreq 1400 Mhz | 47.4 % 00080 cpufreq 1200 Mhz | 41.1 % 00081 cpufreq 1000 Mhz | 34.0 % 00082 cpufreq 800 Mhz | 27.2 % 00083 cpufreq Frequency | Speed 00084 cpufreq -----------+--------- 00085 cpufreq 2.45 Ghz | 96.5 % 00086 cpufreq 2.45 Ghz | 83.6 % 00087 cpufreq 2.05 Ghz | 68.1 % 00088 cpufreq 1.85 Ghz | 61.7 % 00089 cpufreq 1.65 Ghz | 54.9 % 00090 cpufreq 1400 Mhz | 48.0 % 00091 cpufreq 1200 Mhz | 41.1 % 00092 cpufreq 1000 Mhz | 34.2 % 00093 cpufreq 800 Mhz | 27.8 % 00094 cpufreq Frequency | Speed 00095 cpufreq -----------+--------- 00096 cpufreq 2.45 Ghz | 96.4 % 00097 cpufreq 2.45 Ghz | 82.6 % 00098 cpufreq 2.05 Ghz | 68.8 % 00099 cpufreq 1.85 Ghz | 60.5 % 00100 cpufreq 1.65 Ghz | 52.4 % 00101 cpufreq 1400 Mhz | 48.8 % 00102 cpufreq 1200 Mhz | 41.1 % 00103 cpufreq 1000 Mhz | 34.2 % 00104 cpufreq 800 Mhz | 26.4 % 00105 cpufreq Frequency | Speed 00106 cpufreq -----------+--------- 00107 cpufreq 2.45 Ghz | 95.3 % 00108 cpufreq 2.45 Ghz | 82.5 % 00109 cpufreq 2.05 Ghz | 65.5 % 00110 cpufreq 1.85 Ghz | 62.8 % 00111 cpufreq 1.65 Ghz | 54.8 % 00112 cpufreq 1400 Mhz | 48.0 % 00113 cpufreq 1200 Mhz | 41.2 % 00114 cpufreq 1000 Mhz | 34.2 % 00115 cpufreq 800 Mhz | 27.3 % 00116 cpufreq Frequency | Speed 00117 cpufreq -----------+--------- 00118 cpufreq 2.45 Ghz | 96.3 % 00119 cpufreq 2.45 Ghz | 83.4 % 00120 cpufreq 2.05 Ghz | 68.3 % 00121 cpufreq 1.85 Ghz | 61.9 % 00122 cpufreq 1.65 Ghz | 54.9 % 00123 cpufreq 
1400 Mhz | 48.0 % 00124 cpufreq 1200 Mhz | 41.1 % 00125 cpufreq 1000 Mhz | 34.2 % 00126 cpufreq 800 Mhz | 27.3 % 00127 cpufreq Frequency | Speed 00128 cpufreq -----------+--------- 00129 cpufreq 2.45 Ghz | 100.0 % 00130 cpufreq 2.45 Ghz | 77.9 % 00131 cpufreq 2.05 Ghz | 64.6 % 00132 cpufreq 1.85 Ghz | 54.0 % 00133 cpufreq 1.65 Ghz | 51.7 % 00134 cpufreq 1400 Mhz | 45.2 % 00135 cpufreq 1200 Mhz | 39.0 % 00136 cpufreq 1000 Mhz | 33.1 % 00137 cpufreq 800 Mhz | 25.5 % 00138 cpufreq Frequency | Speed 00139 cpufreq -----------+--------- 00140 cpufreq 2.45 Ghz | 93.4 % 00141 cpufreq 2.45 Ghz | 75.7 % 00142 cpufreq 2.05 Ghz | 64.5 % 00143 cpufreq 1.85 Ghz | 59.1 % 00144 cpufreq 1.65 Ghz | 51.4 % 00145 cpufreq 1400 Mhz | 45.9 % 00146 cpufreq 1200 Mhz | 39.3 % 00147 cpufreq 1000 Mhz | 32.7 % 00148 cpufreq 800 Mhz | 25.8 % 00149 cpufreq Frequency | Speed 00150 cpufreq -----------+--------- 00151 cpufreq 2.45 Ghz | 92.1 % 00152 cpufreq 2.45 Ghz | 78.1 % 00153 cpufreq 2.05 Ghz | 65.7 % 00154 cpufreq 1.85 Ghz | 58.6 % 00155 cpufreq 1.65 Ghz | 52.5 % 00156 cpufreq 1400 Mhz | 45.7 % 00157 cpufreq 1200 Mhz | 39.3 % 00158 cpufreq 1000 Mhz | 32.7 % 00159 cpufreq 800 Mhz | 24.3 % 00160 cpufreq Frequency | Speed 00161 cpufreq -----------+--------- 00162 cpufreq 2.45 Ghz | 88.9 % 00163 cpufreq 2.45 Ghz | 79.8 % 00164 cpufreq 2.05 Ghz | 58.4 % 00165 cpufreq 1.85 Ghz | 52.6 % 00166 cpufreq 1.65 Ghz | 46.9 % 00167 cpufreq 1400 Mhz | 41.0 % 00168 cpufreq 1200 Mhz | 35.1 % 00169 cpufreq 1000 Mhz | 29.1 % 00170 cpufreq 800 Mhz | 22.9 % 00171 cpufreq Frequency | Speed 00172 cpufreq -----------+--------- 00173 cpufreq 2.45 Ghz | 92.8 % 00174 cpufreq 2.45 Ghz | 80.1 % 00175 cpufreq 2.05 Ghz | 66.2 % 00176 cpufreq 1.85 Ghz | 59.5 % 00177 cpufreq 1.65 Ghz | 52.9 % 00178 cpufreq 1400 Mhz | 46.2 % 00179 cpufreq 1200 Mhz | 39.5 % 00180 cpufreq 1000 Mhz | 32.9 % 00181 cpufreq 800 Mhz | 26.3 % 00182 cpufreq Frequency | Speed 00183 cpufreq -----------+--------- 00184 cpufreq 2.45 Ghz | 92.9 % 00185 cpufreq 2.45 Ghz | 79.5 % 00186 cpufreq 2.05 Ghz | 66.2 % 00187 cpufreq 1.85 Ghz | 59.6 % 00188 cpufreq 1.65 Ghz | 52.9 % 00189 cpufreq 1400 Mhz | 46.7 % 00190 cpufreq 1200 Mhz | 39.6 % 00191 cpufreq 1000 Mhz | 32.9 % 00192 cpufreq 800 Mhz | 26.3 % 00193 cpufreq FAILED [MEDIUM] CPUFreqCPUsSetToSW_ANY: Test 1, Processors 00194 cpufreq are set to SW_ANY. 00195 cpufreq FAILED [MEDIUM] CPUFreqSW_ANY: Test 1, Firmware not 00196 cpufreq implementing hardware coordination cleanly. Firmware using 00197 cpufreq SW_ANY instead?. 00198 cpufreq 00199 cpufreq ========================================================= 00200 cpufreq 0 passed, 2 failed, 0 warnings, 0 aborted, 0 skipped, 0 00201 cpufreq info only. 00202 cpufreq ========================================================= 00204 summary 00205 summary 0 passed, 2 failed, 0 warnings, 0 aborted, 0 skipped, 0 00206 summary info only. 00207 summary 00208 summary Test Failure Summary 00209 summary ==================== 00210 summary 00211 summary Critical failures: NONE 00212 summary 00213 summary High failures: NONE 00214 summary 00215 summary Medium failures: 2 00216 summary cpufreq test, at 1 log line: 193 00217 summary "Processors are set to SW_ANY." 00218 summary cpufreq test, at 1 log line: 195 00219 summary "Firmware not implementing hardware coordination cleanly. Firmware using SW_ANY instead?." 
00220 summary 00221 summary Low failures: NONE 00222 summary 00223 summary Other failures: NONE 00224 summary 00225 summary Test |Pass |Fail |Abort|Warn |Skip |Info | 00226 summary ---------------+-----+-----+-----+-----+-----+-----+ 00227 summary cpufreq | | 2| | | | | 00228 summary ---------------+-----+-----+-----+-----+-----+-----+ 00229 summary Total: | 0| 2| 0| 0| 0| 0| 00230 summary ---------------+-----+-----+-----+-----+-----+-----+ rabbit@rabbit-Lenovo-V570:~$


  • Super constructor must be a first statement in Java constructor [closed]

    - by Val
    I know the answer: "we need rules to prevent shooting yourself in the foot". OK, I make millions of programming mistakes every day. To prevent all of them, we would need one simple rule: prohibit the whole JLS and do not use Java. If we explain everything by "not shooting your foot", anything becomes reasonable, but there is not much reason in such a reason. When I programmed in Delphi, I always wanted the compiler to check whether I was reading an uninitialized variable. I discovered for myself that it is stupid to read an uncertain variable, because it leads to unpredictable results and is obviously erroneous; by just looking at the code I could see there was an error, and I wished the compiler could do this job. It is also a reliable signal of a programming error if a function does not return any value. But I never wanted it to force the super constructor call to come first. Why?

    You say that constructors just initialize fields. Super fields are inherited; extra fields are introduced in the subclass. From the goal point of view, it does not matter in which order you initialize the variables. I have studied parallel architectures and can say that all the fields could even be assigned in parallel... What? Do you want to use the uninitialized fields? Stupid people always want to take away our freedoms and break the JLS rules the God gives to us! Please, policeman, take away that person! Where do I say so? I am talking only about initializing/assigning, not about using the fields. The Java compiler already defends me from the mistake of accessing something not initialized. Some cases sneak through, and this example shows how the rule does not save us from read-accessing an incompletely initialized object during construction:

        public class BadSuper {
            String field;
            public String toString() {
                return "field = " + field;
            }
            public BadSuper(String val) {
                field = val;
                // yea, super-first does not protect from accessing
                // unconstructed subclass fields. The subclass constructor
                // would have to be called before super()!
                System.err.println(this);
            }
        }

        public class BadPost extends BadSuper {
            Object o;
            public BadPost(Object o) {
                super("str");
                this.o = o;
            }
            public String toString() {
                // the superconstructor will boom here, because o is not initialized!
                return super.toString() + ", obj = " + o.toString();
            }
            public static void main(String[] args) {
                new BadSuper("test 1");
                new BadPost(new Object());
            }
        }

    It shows that actually, the subclass fields would have to be initialized before the superclass! Meanwhile, the Java requirement "saves" us from specializing a class by specializing what the super constructor argument is:

        public class MyKryo extends Kryo {
            class MyClassResolver extends DefaultClassResolver {
                public Registration register(Registration registration) {
                    System.out.println(MyKryo.this.getDepth());
                    return super.register(registration);
                }
            }
            MyKryo() {
                // cannot instantiate MyClassResolver in super
                super(new MyClassResolver(), new MapReferenceResolver());
            }
        }

    Try to make it compilable. It is always a pain, especially when you cannot assign the argument later. Initialization order is not important for initialization in general. I could understand that you should not use super methods before the superclass is initialized. But the requirement for super to be the first statement is different: it only stops you from writing code that does useful things simply. I do not see how this adds safety. Actually, safety is degraded, because we need to use ugly workarounds. Doing post-initialization outside the constructors also degrades safety (otherwise, why do we need constructors?) and defeats the safety that Java's final modifier enforces.

    To conclude:

    - Reading something not initialized is a bug.
    - Initialization order is not important from the computer science point of view. Doing initialization or computations in a different order is not a bug.
    - Preventing read access to the uninitialized is good, but compilers fail to detect all such bugs.
    - Making super the first statement does not solve the problem; it "prevents" shooting at the right things, but not at the foot.
    - It requires inventing workarounds where, because of the complexity of analysis, it is easier to shoot yourself in the foot.
    - Doing post-initialization outside the constructors degrades safety (otherwise, why do we need constructors?), and it degrades safety further by defeating the final access modifier.

    When the Java forum was still alive, Java bigots attacked me for these thoughts. In particular, they disliked the idea that fields can be initialized in parallel, saying that natural development ensures correctness. When I replied that you could use advanced engineering to create a human right away, without "developing" any ape first, and it would still be an ape, they stopped listening to me, because modern technology cannot afford it. OK, take something simpler. How do you produce a Renault? Should you construct an Automobile first? No, you start by producing a Renault and, once it is completed, you will see that it is an automobile. So the requirement to produce fields in "natural order" is unnatural. In the case of an alarm clock or an armchair, which are still a clock and a chair, you may need to first develop the base (the clock or the chair) and then add the extras. So I can have examples where superclass fields must be initialized first and, conversely, examples where they need to be initialized later. The order does not exist in advance, so the compiler cannot be aware of the proper order; only the programmer/constructor knows it. The compiler should not take on more responsibility and enforce the wrong order onto the programmer. Saying that I cannot initialize some fields because I did not initialize the others is like saying "you cannot initialize the thing because it is not initialized". That is the kind of argument we have here.

    So, to conclude once more: a feature that "protects" me from doing things in a simple and right way, in order to enforce something that does not add noticeably to bug elimination, is a strongly negative thing, and it pisses me off, together with all the arguments to support it I have seen so far.

    This is "a conceptual question about software development": should there be the requirement to call super() first or not? I do not know. If you do, or have an idea, you have a place to answer. I think that I have provided enough arguments against this feature. Let us appreciate the ones who benefit from it. Let it just be something more than a simple, abstract "write your own language" or "protection" kind of argument. Why do we need it in the language that I am going to develop?
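    For what it is worth, the usual workaround for the MyKryo case above is to avoid needing the enclosing instance before super() runs, for example by making the resolver a static nested class and handing it its owner after construction. A minimal sketch along those lines, reusing only the names from the post; the setOwner wiring and the private delegating constructor are inventions for illustration, not part of Kryo's API:

        public class MyKryo extends Kryo {

            // Static nested class: it no longer needs the enclosing MyKryo instance
            // at construction time, so it can be created inside the super(...) call.
            static class MyClassResolver extends DefaultClassResolver {
                private MyKryo owner;                   // hypothetical late-bound back-reference

                void setOwner(MyKryo owner) {
                    this.owner = owner;
                }

                public Registration register(Registration registration) {
                    if (owner != null) {
                        System.out.println(owner.getDepth());
                    }
                    return super.register(registration);
                }
            }

            MyKryo() {
                this(new MyClassResolver());            // evaluate the argument before super()
            }

            private MyKryo(MyClassResolver resolver) {
                super(resolver, new MapReferenceResolver());
                resolver.setOwner(this);                // wire the back-reference after super() returns
            }
        }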


  • How to Control Screen Layouts in LightSwitch

    - by ChrisD
    Visual Studio LightSwitch has a bunch of screen templates that you can use to quickly generate screens. They give you good starting points that you can customize further. When you add a new screen to your project you see a set of screen templates that you can choose from. These templates lay out all the related data you choose to put on a screen automatically for you. And don’t under estimate them; they do a great job of laying out controls in a smart way. For instance, a tab control will be used when you select more than one related set of data to display on a screen. However, you’re not limited to taking the layout as is. In fact, the screen designer is pretty flexible and allows you to create stacks of controls in a variety of configurations. You just need to visualize your screen as a series of containers that you can lay out in rows and columns. You then place controls or stacks of controls into these areas to align the screen exactly how you want. If you’re new in Visual Studio LightSwitch, you can see this tutorial. OK, Let’s start with a simple example. I have already designed my data entities for a simple order tracking system similar to the Northwind database. I also have added a Search Data  Screen to search my Products already. Now I will add a new Details Screen for my Products and make it the default screen via the “Add New Screen” dialog: The screen designer picks a simple layout for me based on the single entity I chose, in this case Product. Hit F5 to run the application, select a Product on the search screen to open the Product Details Screen. Notice that it’s pretty simple because my entity is simple. Click the “Customize” button in the top right of the screen so we can start tweaking it. The left side of the screen shows the containership of controls and data bindings (called the content tree) and the right side shows the live preview with data. Notice that we have a simple layout of two rows but only one row is populated (with a vertical stack of controls in this case). The bottom row is empty. You can envision the screen like this: Each container will display a group of data that you select. For instance in the above screen, the top row is set to a vertical stack control and the group of data to display is coming from Product. So when laying out screens you need to think in terms of containers of controls bound to groups of data. To change the data to which a container is bound, select the data item next to the container: You can select the “New Group” item in order to create more containers (or controls) within the current container. For instance to totally control the layout, select the Product in the top row and hit the delete key. This will delete the vertical stack and therefore all the controls on the screen. The content tree will still have two rows, but the rows are now both empty. If you want a layout of four containers (two rows and two columns) then select “New Group” for the data item and then change the vertical stack control to “Two Columns” for both of the rows as shown here: You can keep going on and on by selecting new groups and choosing between rows or columns. 
Here’s a layout with 8 containers, 4 rows and 2 columns: And here is a layout with 7 content areas; one row across the top of the screen and three rows with two columns below that: When you select Choose Content and select a data item like Product it will populate all the controls within the container (row or column in a vertical stack) however you have complete control on what to display within each group. You can delete fields you don’t want to display and/or change their controls. You can also change the size of controls and how they display by changing the settings in the properties window. If you are in the Screen Designer (and not the customization mode like we are here) you can also drag-drop data items from the left-hand side of the screen to the content tree. Note, however, that not all areas of the tree will allow you to drop a data item if there is a binding already set to a different set of data. For instance you can’t drop a Customer ID into the same group as a Product if they originate from different entities. To get around this, all you need to do is create a new group and content area as shown above. Let’s take a more complex example that deals with more than just product. I want to design a complex screen that displays Products and their Category, as well as all the OrderDetails for which that product is selected. This time I will create a new screen and select List and Details, select the Products screen data, and include the related OrderDetails. However I’m going to totally change the layout so that a Product grid is at the top left and below that is the selected Product detail. Below that will be the Category text fields and image in two columns below. On the right side I want the OrderDetails grid to take up the whole right side of the screen. All this can be done in customization mode while you’re debugging the application. To do this, I first deleted all the content items in the tree and then re-created the content tree as shown in the image below. I also set the image to be larger and the description textbox to be 5 rows using the property window below the live preview. I added the green lines to indicate the containers and show how it maps to the content tree (click to enlarge): I hope this demystifies the screen designer a little bit. Remember that screen templates are excellent starting points – you can take them as-is or customize them further. It takes a little fooling around with customizing screens to get them to do exactly what you want but there are a ton of possibilities once you get the hang of it. Stay tuned for more information on how to create your own screen templates that show up in the “Add New Screen” dialog. Enjoy! The tutorial that might be interested: Adding Custom Control In LightSwitch


  • problem processing xml in flex3

    - by john
    Hi All, First time here asking a question and still learning on how to format things better... so sorry about the format as it does not look too well. I have started learning flex and picked up a book and tried to follow the examples in it. However, I got stuck with a problem. I have a jsp page which returns xml which basically have a list of products. I am trying to parse this xml, in other words go through products, and create Objects for each product node and store them in an ArrayCollection. The problem I believe I am having is I am not using the right way of navigating through xml. The xml that is being returned from the server looks like this: <?xml version="1.0" encoding="ISO-8859-1"?><result type="success"> <products> <product> <id>6</id> <cat>electronics</cat> <name>Plasma Television</name> <desc>65 inch screen with 1080p</desc> <price>$3000.0</price> </product> <product> <id>7</id> <cat>electronics</cat> <name>Surround Sound Stereo</name> <desc>7.1 surround sound receiver with wireless speakers</desc> <price>$1000.0</price> </product> <product> <id>8</id> <cat>appliances</cat> <name>Refrigerator</name> <desc>Bottom drawer freezer with water and ice on the door</desc> <price>$1200.0</price> </product> <product> <id>9</id> <cat>appliances</cat> <name>Dishwasher</name> <desc>Large capacity with water saver setting</desc> <price>$500.0</price> </product> <product> <id>10</id> <cat>furniture</cat> <name>Leather Sectional</name> <desc>Plush leather with room for 6 people</desc> <price>$1500.0</price> </product> </products></result> And I have flex code that tries to iterate over products like following: private function productListHandler(e:JavaFlexStoreEvent):void { productData = new ArrayCollection(); trace(JavaServiceHandler(e.currentTarget).response); for each (var item:XML in JavaServiceHandler(e.currentTarget).response..product ) { productData.addItem( { id:item.id, item:item.name, price:item.price, description:item.desc }); } } with trace, I can see the xml being returned from the server. However, I cannot get inside the loop as if the xml was empty. In other words, JavaServiceHandler(e.currentTarget).response..product must be returning nothing. Can someone please help/point out what I could be doing wrong. 
My JavaServiceHandler class looks like this: package com.wiley.jfib.store.data { import com.wiley.jfib.store.events.JavaFlexStoreEvent; import flash.events.Event; import flash.events.EventDispatcher; import flash.net.URLLoader; import flash.net.URLRequest; public class JavaServiceHandler extends EventDispatcher { public var serviceURL:String = ""; public var response:XML; public function JavaServiceHandler() { } public function callServer():void { if(serviceURL == "") { throw new Error("serviceURL is a required parameter"); return; } var loader:URLLoader = new URLLoader(); loader.addEventListener(Event.COMPLETE, handleResponse); loader.load(new URLRequest(serviceURL)); // var httpService:HTTPService = new HTTPService(); // httpService.url = serviceURL; // httpService.resultFormat = "e4x"; // httpService.addEventListener(Event.COMPLETE, handleResponse); // httpService.send(); } private function handleResponse(e:Event):void { var loader:URLLoader = URLLoader(e.currentTarget); response = XML(loader.data); dispatchEvent(new JavaFlexStoreEvent(JavaFlexStoreEvent.DATA_LOADED) ); // var httpService:HTTPService = HTTPService(e.currentTarget); // response = httpService.lastResult.product; // dispatchEvent(new JavaFlexStoreEvent(JavaFlexStoreEvent.DATA_LOADED) ); } } } Even though I refer to this as mine and it is not in reality. This is from a Flex book as a code sample which does not work, go figure. Any help is appreciated. Thanks john


  • Center footer fixed at the bottom IE

    - by Mirko
    I am coding a web interface for a University project and I have been dealing with this issue: I want my footer fixed at the bottom so it is in place no matter which screen I am using or if I toggle the full screen mode It works in all the other browsers except IE7 (I do not have to support previous versions) HTML <div id="menu"> <a href="information.html" rel="shadowbox;height=500;width=650" title="INFORMATION" > <img src="images/info.png" alt="information icon" /> </a> <a href="images/bricks_of_destiny.jpg" rel="shadowbox[gallery]" title="IMAGES" > <img src="images/image.png" alt="image icon" /> </a> <a href="music_player.swf" title="MUSIC" rel="shadowbox;height=400;width=600" > <img src="images/music.png" alt="music icon" /> </a> <a href="#" title="MOVIES"><img src="images/television.png" alt="movies icon" /></a> <a href="quotes.html" title="QUOTES" rel="shadowbox;height=300;width=650" > <img src="images/male_user.png" alt="male user icon" /> </a> <a href="#" title="REFERENCES"> <img src="images/search_globe.png" alt="search globe icon" /> </a> </div> <a href="images/destiny_1.jpg" rel="shadowbox[gallery]" title="IMAGES"></a> <a href="images/destiny_carma_jewell.jpg" rel="shadowbox[gallery]" title="IMAGES"></a> <a href="images/destiny-joan-marie.jpg" rel="shadowbox[gallery]" title="IMAGES"></a> <a href="images/pursuing_destiny.jpg" rel="shadowbox[gallery]" title="IMAGES"></a> <div class="clear"></div> <div id="destiny"> Discover more about the word <span class="strong">DESTINY </span>! Click one of the icon above! (F11 Toggle Full / Standard screen) </div> <div id="footer"> <ul id="breadcrumbs"> <li>Disclaimer</li> <li> | Icons by: <a href="http://dryicons.com/" rel="shadowbox">dryicons.com</a></li> <li> | Website by: <a href="http://www.eezzyweb.com/" rel="shadowbox">eezzyweb</a></li> <li> | <a href="http://jquery.com/" rel="shadowbox">jQuery</a></li> </ul> </div> </div> CSS: #wrapper{ text-align:center; margin:0 auto; width:750px; height:430px; border:1px solid #fff; } #menu{ position:relative; margin:0 auto; top:350px; width:450px; height:60px; } #destiny{ position:relative; top:380px; color:#FFF; font-size:1.5em; font-weight:bold; border:1px solid #fff; } #breadcrumbs{ list-style:none; } #breadcrumbs li{ display:inline; color:#FFF; } #footer{ position:absolute; width:750px; height:60px; margin:0 auto; text-align:center; border:1px solid #fff; bottom:0; } .clear{ clear:both; } The white borders are there only for debugging purposes The application is hosted at http://www.eezzyweb.com/destiny/ Any suggestion is appreciated


  • Javascript not working in IE but works in Firefox chrome

    - by user1290528
    So i have the following php page with a java script that gets the total of items based on their quatity, then inputs the total into a text box for each item. In ie the text boxes are being filled with $NaN. While in firefox, chrome the text boxes are filled with the correct values. Any help would be graatly appreciated. <?php echo $_SESSION['SESS_MEMBER_ID']; require_once('auth.php'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <meta content="text/html; charset=ISO-8859-1" http-equiv="Content-Type"> <title>Breakfast Menu</title> <link href="loginmodule.css" rel="stylesheet" type="text/css"> <script type='text/javascript'> var totalarray=new Array(); var totalarray2= new Array(); var runningtotal = 0; var runningtotal2 = 0; var discount = .2; var discounttotal = 0; var discount1 = 0; runningtotal = runningtotal * 1; runningtotal2 = runningtotal2 * 1; function displayResult(price,init) { var newstring = "quantity"+init; var totstring = "total"+init; var quantity = document.getElementById(newstring).value; var quantity = parseFloat(quantity); var test = price * quantity; var test = test.toFixed(2); document.getElementById(newstring).value = quantity; document.getElementById(totstring).value = "$" + test; totalarray[init] = test; getTotal(); } function getTotal(){ runningtotal = 0; var i=0; for (i=0;i<totalarray.length;i++){ totalarray[i] = totalarray[i] *1; runningtotal = runningtotal + totalarray[i]; discounttotal = totalarray[i] * discount; discounttotal = totalarray[i] - discounttotal; This line is where IE shows its first error document.getElementById('totalcost').value="$" + runningtotal.toFixed(2); } var orderpart1 = document.getElementById('totalcost').value; var orderpart1 = orderpart1.substr(1); var orderpart1 = orderpart1 * 1; var orderpart2 = document.getElementById('totalcost2').value; var orderpart2 = orderpart2.substr(1); var orderpart2 = orderpart2 * 1; var ordertot = orderpart1 + orderpart2; document.getElementById('ordertotal').value ="$"+ ordertot.toFixed(2) } function displayResult2(price2,init2) { var newstring2 = "quantity2"+init2; var totstring2 = "total2"+init2; var quantity2 = document.getElementById(newstring2).value; var quantity2 = parseFloat(quantity2); var test2 = price2 * quantity2; var test2 = test2.toFixed(2); document.getElementById(newstring2).value = quantity2; document.getElementById(totstring2).value = "$" + test2; totalarray2[init2] = test2; getTotal2(); } function getTotal2(){ runningtotal2 = 0; var i=0; for (i=0;i<totalarray2.length;i++){ totalarray2[i] = totalarray2[i] *1; runningtotal2 = runningtotal2+ totalarray2[i]; This is where IE shows its second error document.getElementById('totalcost2').value="$" + runningtotal2.toFixed(2); }//IE Shows Second error here var orderpart1 = document.getElementById('totalcost').value; var orderpart1 = orderpart1.substr(1); var orderpart1 = orderpart1 * 1; var orderpart2 = document.getElementById('totalcost2').value; var orderpart2 = orderpart2.substr(1); var orderpart2 = orderpart2 * 1; var ordertot = orderpart1 + orderpart2; document.getElementById('ordertotal').value ="$"+ ordertot.toFixed(2); } </script> </head> <body> <?php include("newnew.php"); ?> <td style="vertical-align: top; width: 80%; height:80%;"><br> <div style="text-align: center;"> <form action="testplaceorder.php" method="post" onSubmit="return confirm('Are you sure?');"> <h4>Employee Breakfast Order Form</h4> <h1 align="left">Breakfest Foods</h1> <table border='0' cellpadding='0' cellspacing='0'> <tr> <td> 
<table width="100%" border="1"> <tr> <th>Item&nbsp&nbsp&nbsp&nbsp&nbsp</th> <th>Price&nbsp&nbsp&nbsp&nbsp&nbsp </th> <th>Quantity&nbsp&nbsp&nbsp&nbsp&nbsp</th> <th>Total&nbsp&nbsp&nbsp&nbsp&nbsp</th> </tr> <?php mysql_connect("localhost", "seniorproject", "farmingdale123") or die(mysql_error()); mysql_select_db("fsenior") or die(mysql_error()); $result = mysql_query("SELECT name, price,foodid FROM Food where foodtype='br'") or die(mysql_error()); $init = 0; while(list($name, $price, $brId) = mysql_fetch_row($result)) { echo "<tr> <td>$name</td> <td>\$$price</td> <td><select name='quantity$init' id='quantity$init' onchange='displayResult($price,$init)'><option>0</option><option>1</option><option>2</option><option>3</option><option>4</option><option>5</option><option>6</option><option>7</option><option>8</option><option>9</option></td> <td><input name='total$init' type='text' id='total$init' readonly='readonly' value='\$0.00'></td> </tr>" ; echo "<script type='text/javascript'>displayResult($price,$init);</script>"; $foodname = "'SESS_FOODNAME_" . $init . "'"; $foodid = "'SESS_FOODID_" . $init."'"; $_SESSION[$foodname] = $name; $_SESSION[$foodid] = $brId; $init = $init+1; } $_SESSION['SESS_INIT'] = $init; ?> <tr> <td></td> <td></td> <td>Total Cost</td> <td><input name='totalcost' type='text' id='totalcost' readonly='readonly' value='$0.00'></td> </tr> <tr><td></td><td></td><td>Discount</td><td><input name='discountvalue1' id ='discountvalue1' type='text' readonly='readonly' value='20%'></td> </tr> <tr><td></td><td></td><td>Total After Discount</td><td><input name='discounttotal1' id ='discounttotal1' type='text' readonly='readonly' value='$0.00'></td></tr> </table> <tr> <td><br></td> </tr> </table> <h1 align="left">Breakfest Drinks</h1> <table border='0' cellpadding='0' cellspacing='0'> <tr> <td> <table width="100%" border="1"> <tr> <th>Item&nbsp&nbsp&nbsp&nbsp&nbsp</th> <th>Price&nbsp&nbsp&nbsp&nbsp&nbsp </th> <th>Quantity&nbsp&nbsp&nbsp&nbsp&nbsp</th> <th>Total&nbsp&nbsp&nbsp&nbsp&nbsp</th> </tr> <?php mysql_connect("localhost", "****", "***") or die(mysql_error()); mysql_select_db("fsenior") or die(mysql_error()); $result2 = mysql_query("SELECT drinkname, price,drinkid FROM Drinks where drinktype='br'") or die(mysql_error()); $init2 = 0; while(list($name2, $price2, $brId2) = mysql_fetch_row($result2)) { echo "<tr> <td>$name2</td> <td>\$$price2</td> <td><select name='quantity2$init2' id='quantity2$init2' onchange='displayResult2($price2,$init2)'><option>0</option><option>1</option><option>2</option><option>3</option><option>4</option><option>5</option><option>6</option><option>7</option><option>8</option><option>9</option></td> <td><input name='total2$init2' type='text' id='total2$init2' readonly='readonly' value='\$0.00'></td> </tr>" ; echo "<script type='text/javascript'>displayResult2($price2,$init2);</script>"; $drinkname = "'SESS_DRINKNAME_" . $init2 . "'"; $drinkid = "'SESS_DRINKID_" . 
$init2."'"; $_SESSION[$drinkname] = $name2; $_SESSION[$drinkid] = $brId2; $init2 = $init2+1; } $_SESSION['SESS_INIT2'] = $init2; ?> <tr> <td></td> <td></td> <td>Total Cost</td> <td><input name='totalcost2' type='text' id='totalcost2' readonly='readonly' value='$0.00'></td> </tr> </table> <tr> <td><br></td> </tr> </table> <table border="2"> <tr><td>Total Order Cost:</td><td> <?php echo "<input name='ordertotal' type='text' id='ordertotal' readonly='readonly' value='\$0.00'></td></table>"; ?> <p align="left"><input type='submit' name='submit' value='Submit'/></p> </form> </div></td> </tr> </tbody> </table></td> </tr> </tbody> </table> </body> </html>

    Read the article

  • Dell PowerEdge R720xd stuck in BIOS

    - by G_P
    I have a Dell PowerEdge R720xd that gets stuck in the BIOS when booting. It successfully gets past the "configuring memory" and "configuring iDRAC" screens, but once it shows the "CPLD version : 103" with the various management engine versions/patches, it just hangs. No error messages are displayed. This started happening when we tried adding additional RAM to the machine. Since then, we have tried re-seating the new memory, which resulted in the same issue. Then we took out all the new memory, and the problem persists. We have also tried pressing F2 to get into System Setup, but it just indicates "Entering System Setup" and hangs at the same point. Has anybody seen this issue before, or does anyone have ideas on what to try next? UPDATE After troubleshooting and trying to isolate the issue (stripping things down to a single CPU and single DIMM, same problem, swapping to the other CPU and a different DIMM, same problem), Dell support will be coming out to swap the system board.

    Read the article

  • How do I mount a "DiskSecure Multiboot" partition?

    - by ????
    For a hard drive that has 4 or 5 partitions, I was able to mount one of them using Ubuntu LiveCD: sudo mount /dev/sda1 /mnt but is there a way to mount to the other partitions? (if using sudo fdisk -l, it only shows /dev/sda) GParted's snapshot is: Right now, the fdisk info is as follows: ubuntu@ubuntu:~$ sudo fdisk -l /dev/sda Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x1aca8ea5 Device Boot Start End Blocks Id System /dev/sda1 284993226 350602558 32804666+ 7 HPFS/NTFS/exFAT and then ubuntu@ubuntu:/mnt$ sudo fdisk -l /dev/sda1 Disk /dev/sda1: 33.6 GB, 33591978496 bytes 255 heads, 63 sectors/track, 4083 cylinders, total 65609333 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x2052474d This doesn't look like a partition table Probably you selected the wrong device. Device Boot Start End Blocks Id System /dev/sda1p1 ? 6579571 1924427647 958924038+ 70 DiskSecure Multi-Boot /dev/sda1p2 ? 1953251627 3771827541 909287957+ 43 Unknown /dev/sda1p3 ? 225735265 225735274 5 72 Unknown /dev/sda1p4 2642411520 2642463409 25945 0 Empty Partition table entries are not in disk order Per @lgarzo's request, parted info is: ubuntu@ubuntu:/mnt$ sudo parted /dev/sda print Model: ATA ST3320820AS (scsi) Disk /dev/sda: 320GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 146GB 180GB 33.6GB primary ntfs boot The command sudo mount /dev/sda1p2 /mnt won't work.
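    For what it's worth, if the nested entries that fdisk reports inside /dev/sda1 were genuine, the usual trick is to mount at a byte offset computed from the start sector (start sector times sector size). The values printed here look inconsistent with the 33.6 GB size of sda1, so the following is only a hedged sketch of the arithmetic, using the sector numbers from the output above:

      # Hedged sketch: compute the byte offset you would pass to
      # "mount -o ro,loop,offset=..." from a partition entry's start sector.
      # Sector size (512) and start sectors are taken from the fdisk output above;
      # whether those nested entries are actually valid is another matter.
      SECTOR_SIZE = 512

      def mount_offset(start_sector: int) -> int:
          return start_sector * SECTOR_SIZE

      for name, start in [("sda1p1", 6579571), ("sda1p2", 1953251627)]:
          print(f"{name}: mount -o ro,loop,offset={mount_offset(start)} /dev/sda1 /mnt")

    Note that the second offset works out to far more than the 33.6 GB that sda1 holds, which supports fdisk's own warning that this is probably not a real partition table.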

    Read the article

  • cdrom redirection fails on ILOM for Sun x4600 M2

    - by aculich
    We have been able to successfully redirect the ILOM on this x4600 M2 in the past, but for some reason we are now getting an error that says "CD-ROM image redirection failed to start". We have also launched the Storage Redirection service and that doesn't help. Launching it a second time, to make sure it is really running, shows that it can't bind to port 2121. We didn't get that message the first time, so I assume it is now listening (even though a scan with nmap doesn't show it listening). We are using ILOM firmware version 3.0.3.31.

    Read the article

  • "Host usb device connections disabled" in VMware???

    - by ZlateWay
    I installed Linux, Windows XP and Chrome OS in VMware Workstation 7, and the USB host devices don't work in any of the guests. When I start any of the operating systems this message shows up: "host usb device connections disabled", and under that: "The connection to the VMware USB Arbitration Service was unsuccessful. Please check the status of this service in the Microsoft Management Console." So what should I do? What do I need to install to make the USB host devices work? BTW, I use Windows 7 as the host OS. Thanks
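    The error message points at the "VMware USB Arbitration Service" on the Windows 7 host. As a quick, hedged sketch (it assumes Python with the third-party psutil package on the host, and the exact service names may differ between Workstation versions), you can list the VMware services and their state without opening the MMC:

      import psutil  # third-party package: pip install psutil; the service API below is Windows-only

      # Enumerate Windows services and show any whose display name mentions VMware.
      for svc in psutil.win_service_iter():
          if "vmware" in svc.display_name().lower():
              print(svc.name(), svc.display_name(), svc.status(), svc.start_type())

    If the USB Arbitration Service shows up as stopped, starting it from services.msc (or with net start) and then relaunching the VM is the usual first step.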

    Read the article

  • Remotely Schedule and Stream Recorded TV in Windows 7 Media Center

    - by DigitalGeekery
    Have you ever been away from home and suddenly realized you forgot to record your favorite program? Now Windows 7 Media Center, users can schedule recordings remotely from their phones or mobile devices with Remote Potato. How it Works Remote Potato installs server software on the host computer running Windows 7 Media Center. Once the software is installed, we’ll need to do some port forwarding on the router and setup an optional dynamic DNS address. When setup is completed, we will access the application through a web based interface. Silverlight is required for Streaming recorded TV, but scheduling recordings can be done through an HTML interface. Installing Remote Potato Download and install Remote Potato on the Media Center PC. (See download link below) If you plan to stream any Recorded TV, you’ll also want to install the streaming pack located on the same page. It isn’t required to stream all shows, only shows that require the AC3 audio codec. Click Yes to allow Remote Potato to add rules to the Windows Firewall for remote access. You’ll likely need to accept a few UAC prompts. When notified that the rules were added, click OK. Remote Potato will then prompt you to allow administrator privileges to reserve a URL for it’s web server. Click Yes. Remote Potato server will start. Click on the configuration button at the right to to reveal the settings tabs.   One the General tab, you’ll have the option to run Remote Potato on startup and minimized in the System Tray. If you’re running Media Center on a dedicated HTPC, you’ll probably want to enable both startup options. Forwarding Ports on Your Router You’ll need to forward a couple ports on your router. By default, these will be ports 9080 and 9081. In this example we’re using a Linksys WRT54GL router, however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Applications & Gaming Tab, and then the Port Range Forward tab. Under Application, type in a name of your choosing. In both the Start and End boxes, type the port number 9080. Enter the local IP address of your Media Center computer in the IP address column. Click the check box under Enable. Repeat the process on the next line, but this time use port 9081. When finished, click the Save Settings button. Note: It’s highly recommended that you configure the home computer running Media Center & Remote Potato with a static IP address.   Find your IP Address You’ll need to find the IP address assigned to your router from your ISP. There are many ways to do this but a quick and easy way is to visit a site like checkip.dyndns.org (link available below) The current external IP address of your router will be displayed in the browser.   Dynamic DNS This is an optional step, but  it’s highly recommended. Many routers, such as the Linksys WRT54GL we are using, support Dynamic DNS (DDNS). What Dynamic DNS allows you to do is affiliate your home router’s external IP address to a domain name. Every time your home router is assigned a a new IP address by your ISP, the domain name is updated to point to your new IP address. Remote Potato’s user interface is accessed over the Internet is by connecting to your router’s IP address followed by a colon and the port number. 
(Ex: XXX.XXX.XXX.XXX:9080) Instead of constantly having to look up and remember an IP address, you can use DDNS along with a 3rd party provider like DynDNS.com, to sign up for a free domain name and configure it to be updated each time your router is assigned a new IP address. Go to the DynDNS.com website (See link at the end of the article) and sign up for a free Domain name. You’ll need to register and confirm by email.   Once you’ve signed in and selected your domain name click Activate Services. You’ll get a confirmation message that your domain name has been activated.    On the Linksys WRT54GL click on the Setup tab an then DDNS. Select DynDNS.org, or TZO.com if you prefer to use their service, from the drop down list.   With DynDNS, you’ll need to fill in your username and password you signed up with at the DynDNS website and the hostname you chose. Note: You can connect over your local network with the IP Address of the computer running Remote Potato followed by a colon and the port number. Ex: 192.168.1.2:9080 Logging in Remote Potato and Recording a Show Once you connect, you’ll see the start page. To view the TV listings, click on TV Guide. You’ll then see your guide listings. There are a few ways to navigate the listings. At the top left, you can click on any of the preset time buttons to jump to  the listings at that time of the day.  Click on the arrows to the right and left of the day and date at the top center to proceed to the previous or next day. Or, jump to a specific day with the date and date buttons at the top right.   To setup a recording, click on a program.   You can choose to record the individual show or the entire series by clicking on Record Show or Record Series.   Remote Potato on Mobile Devices Perhaps the coolest feature of Remote Potato is the ability to schedule recording from your phone or mobile device. Note: For any devices or computers without Silverlight, you will be prompted to view the HTML page. Select Browse Listings. Select your program to record. In the Program Details, select Record Show to record the single episode or Record Series to record all instances of the series. You will then see a red dot on the program listing to indicate that the show is scheduled for recording.   Streaming Recorded TV Click on Recorded TV from the home screen to access your previously recorded TV programs. Click on the selection you wish to stream. Click on Play. If you receive this error message, you’ll need to install the streaming pack for Remote Potato. This is found on the same download page as installation files. (See link below) The Begin from slider allows you to start playback from the start (by default) or a different time of the program by moving the slider. The Quality (bitrate) setting  allows you to choose the quality of the playback. We found the video quality on the Normal setting to be pretty lousy, and Low was just pointless. High was the best overall viewing experience as it provided smooth quality video playback. We experienced significant stuttering during playback using the Ultra High setting.   Click Start when you are ready to begin. When playback begins you’ll see a slider at the top right.   Move the slider left or right to increase or decrease the size of the video. There’s also a button to switch to full screen.   Media Center users who travel frequently or are always on the go will likely find Remote Potato to be a blessing. Since being released earlier this year, updates for Remote Potato have come fast and furious. 
The latest beta release includes support for streaming music and photos. If you like those nice network TV logos, check out our article on adding TV channel logos to Windows Media Center. Downloads and Links Download Remote Potato and Streaming Pack Find your IP address Sign Up for a Domain Name at DynDNS.com
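    As a rough sanity check for the port-forwarding and IP-address steps above, here is a minimal sketch in Python (standard library only); it assumes the checkip.dyndns.org response contains "Current IP Address: x.x.x.x" and that Remote Potato's default port 9080 is the one you forwarded:

      import re
      import socket
      import urllib.request

      # Look up the router's current external IP the same way the checkip page does.
      html = urllib.request.urlopen("http://checkip.dyndns.org", timeout=10).read().decode()
      match = re.search(r"(\d{1,3}(?:\.\d{1,3}){3})", html)
      external_ip = match.group(1) if match else None
      print("External IP:", external_ip)

      # Then verify the forwarded Remote Potato port is reachable.
      # Run this from a machine outside your LAN, or the test is meaningless.
      if external_ip:
          try:
              socket.create_connection((external_ip, 9080), timeout=5).close()
              print("Port 9080 reachable")
          except OSError as exc:
              print("Port 9080 not reachable:", exc)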

    Read the article

  • Adding a hyperlink in a client report definition file (RDLC)

    - by rajbk
    This post shows you how to add a hyperlink to your RDLC report. In a previous post, I showed you how to create an RDLC report. We have been given a new requirement for the report we created earlier, the Northwind Product report: add a column containing hyperlinks that are unique per row. The URLs will be RESTful, with the ProductID at the end. Clicking on the URL will take the user to a website like so: http://localhost/products/3 where 3 is the primary key of the product row clicked on. To start off, open the RDLC and add a new column to the product table. Add text to the header (Details) and row (Product Website). Right click on the row (not the header) and select "TextBox properties". Select Action – Go to URL. You could hard code a URL here, but what we need is a URL that changes based on the ProductID. Click on the expression button (fx). The expression builder gives you access to several functions and constants, including the fields in your dataset. See this reference for more details: Common Expressions for ReportViewer Reports. Add the following expression: = "http://localhost/products/" & Fields!ProductID.Value Click OK to exit the Expression Builder. The report will not render because hyperlinks are disabled by default in the ReportViewer control. To enable them, add the following in your page load event (where rvProducts is the ID of your ReportViewer control): protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { rvProducts.LocalReport.EnableHyperlinks = true; } } We want our links to open in a new window, so set the HyperLinkTarget property of the ReportViewer control to "_blank". We are done adding hyperlinks to our report. Clicking on the link for each product pops open a new window. The URL has the ProductID added at the end. Enjoy!

    Read the article

  • PHP/APC fatal error, apc_mmap: mmap failed

    - by Sudowned
    I'm seeing some intermittent CPU usage spikes to 100%, loosely correlated with these log entries: [27-Feb-2012 13:29:29] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 [27-Feb-2012 13:29:30] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 [27-Feb-2012 13:29:31] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 [27-Feb-2012 13:29:31] PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0 phpinfo() indicates that APC is set up, and as far as I can tell this error doesn't cause visible 500 errors on the live site, which is a WordPress install that gets about 600k views monthly. Google's been unhelpful so far, and I was hoping that someone here had some insight as to what's causing this and how to fix it. Curiously, this error only shows up in /usr/local/apache2/logs/error_log and not in the error_log for the cpanel-configured site.
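    For context, apc_mmap failures are often (though not always) a symptom of APC requesting a larger shared-memory segment than the backing mount can supply. The sketch below is a hedged Python check of free space on the usual candidates; /dev/shm and /tmp are assumptions here, and the numbers should be compared against the apc.shm_size setting reported by phpinfo():

      import os

      # Hedged check: free space on the mounts that commonly back APC's mmap'd segment.
      # Compare these numbers with apc.shm_size from phpinfo()/php.ini.
      for path in ("/dev/shm", "/tmp"):
          try:
              st = os.statvfs(path)
              free_mb = st.f_bavail * st.f_frsize / (1024 * 1024)
              print(f"{path}: {free_mb:.0f} MB free")
          except OSError as exc:
              print(f"{path}: {exc}")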

    Read the article

  • AWStats on Plesk consumes all of the CPU and crashes the server - how do you disable AWStats?

    - by columbo
    I have Plesk 9.0.1 running on a Red Hat server. Every week or so, at about 4:10 AM, the server locks up. At that time the server CPU usage shoots from 4% to 90%, just as a mass of awstats.pl processes start (I can't see how many, as my data only shows the top 30 processes, but all of those are awstats.pl). I turned off AWStats through the Plesk control panel for all but 5 domains, but I still get 90% CPU usage and at least 30 instances of awstats.pl at 4:10 AM as usual. Does anyone know why this may be? Does anyone know how to disable AWStats (I have stats covered using Piwik)? Or how do I uninstall AWStats without snarling up Plesk?

    Read the article

  • Domino HTTP Server: Error - Unable to Bind 1.2.3.4, port 80, port in use or Bind To Host configuration specifies a duplicate IP address/host

    - by pdewaard
    We have a Domino 9.0.1 server hosted on Ubuntu 14.04 Server, which also hosts several other HTTP-based tasks (Nginx, CouchDB, Confluence on Tomcat). The Ubuntu server has multiple IPs, which all bind correctly to the different tasks. The Domino SMTP task binds correctly and is working well. All HTTP tasks (other than Domino) are proxied behind Nginx version 1.6x and all are working well; netstat shows no 0.0.0.0 bindings, and nothing is listening on 1.2.3.4:80. When I try to load http on the (Domino) server console it fails with "HTTP Server: Error - Unable to Bind 1.2.3.4, port 80, port in use or Bind To Host configuration specifies a duplicate IP address/host" a couple of times, maybe 4 or 5 times, then it loads without failure! And when it comes up, I see HTTP is listening on 80 AND 443, but SSL connections are not working, and there is nothing in the error log! It must be a kind of bad magic :-( thanks in advance Pitt
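    Since the console error is specifically a bind failure, one quick way to separate "something else briefly holds 1.2.3.4:80" from "the Bind To Host configuration is duplicated" is to attempt the bind yourself just before loading HTTP. A minimal Python sketch follows; the 1.2.3.4 address is the placeholder from the question and stands in for your real Domino IP:

      import socket

      ADDR, PORT = "1.2.3.4", 80  # substitute the IP Domino's Bind To Host points at; ports < 1024 need root

      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      try:
          s.bind((ADDR, PORT))   # succeeds only if nothing else currently owns that address/port
          print(f"{ADDR}:{PORT} is free right now, so the bind error likely comes from Domino's own configuration")
      except OSError as exc:
          print(f"{ADDR}:{PORT} cannot be bound right now: {exc}")
      finally:
          s.close()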

    Read the article

  • Opening Skype, Opera, OpenOffice logs me off

    - by anjanesh
    Whats common among Skype, Opera, OpenOffice in Ubuntu ? Whenever I open these applications I get logged off and shows back me the login screen. This started happening since the 10.10 upgrade. Forgot to mention : Yes, its x64.Each time I open these applications, the UI shows and then crashes. I started each app & logged the last few lines of /var/log/syslog after each crash. Looks like something to do with sound drivers ? Opera :Jan 8 09:33:20 al-ubuntu pulseaudio[11532]: pid.c: Daemon already running. Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 8. Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers. Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: snd_pcm_dump(): Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Soft volume PCM Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Control: PCM Playback Volume Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: min_dB: -51 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: max_dB: 0 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: resolution: 256 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Its setup is: Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stream : CAPTURE Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: access : MMAP_INTERLEAVED Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: format : S16_LE Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: subformat : STD Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: channels : 2 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: rate : 44100 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: exact rate : 44100 (44100/1) Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: msbits : 16 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: buffer_size : 88192 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_size : 44096 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_time : 999909 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: tstamp_mode : ENABLE Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_step : 1 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: avail_min : 87310 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_event : 0 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: start_threshold : -1 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stop_threshold : 6205960286516543488 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_threshold: 0 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_size : 0 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: boundary : 6205960286516543488 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: Its setup is: Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stream : CAPTURE Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: access : MMAP_INTERLEAVED Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: format : S16_LE Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: subformat : STD Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: channels : 2 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: rate : 44100 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: 
exact rate : 44100 (44100/1) Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: msbits : 16 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: buffer_size : 88192 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_size : 44096 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_time : 999909 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: tstamp_mode : ENABLE Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_step : 1 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: avail_min : 87310 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: period_event : 0 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: start_threshold : -1 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: stop_threshold : 6205960286516543488 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_threshold: 0 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: silence_size : 0 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: boundary : 6205960286516543488 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: appl_ptr : 87320 Jan 8 09:33:21 al-ubuntu pulseaudio[11429]: alsa-util.c: hw_ptr : 87320 Jan 8 09:33:22 al-ubuntu kernel: [ 4962.078306] opera[11036]: segfault at 261 ip 0000000000000261 sp 00007fffed7cd9a8 error 14 in opera[400000+122b000] anjanesh@al-ubuntu:~$ SkypeJan 8 09:40:21 al-ubuntu pulseaudio[12602]: pid.c: Daemon already running. Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 8. Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers. Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: snd_pcm_dump(): Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Soft volume PCM Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Control: PCM Playback Volume Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: min_dB: -51 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: max_dB: 0 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: resolution: 256 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Its setup is: Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stream : CAPTURE Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: access : MMAP_INTERLEAVED Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: format : S16_LE Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: subformat : STD Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: channels : 2 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: rate : 44100 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: exact rate : 44100 (44100/1) Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: msbits : 16 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: buffer_size : 88192 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_size : 44096 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_time : 999909 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: tstamp_mode : ENABLE Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_step : 1 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: avail_min : 87310 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_event : 0 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: start_threshold : -1 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stop_threshold : 
6205960286516543488 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_threshold: 0 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_size : 0 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: boundary : 6205960286516543488 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: Its setup is: Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stream : CAPTURE Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: access : MMAP_INTERLEAVED Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: format : S16_LE Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: subformat : STD Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: channels : 2 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: rate : 44100 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: exact rate : 44100 (44100/1) Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: msbits : 16 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: buffer_size : 88192 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_size : 44096 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_time : 999909 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: tstamp_mode : ENABLE Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_step : 1 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: avail_min : 87310 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: period_event : 0 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: start_threshold : -1 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: stop_threshold : 6205960286516543488 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_threshold: 0 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: silence_size : 0 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: boundary : 6205960286516543488 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: appl_ptr : 87312 Jan 8 09:40:23 al-ubuntu pulseaudio[12485]: alsa-util.c: hw_ptr : 87312 anjanesh@al-ubuntu:~$ Open OfficeJan 8 09:43:46 al-ubuntu pulseaudio[13157]: pid.c: Daemon already running. Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: snd_pcm_avail_delay() returned strange values: delay 0 is less than avail 16. Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Most likely this is a bug in the ALSA driver 'snd_hda_intel'. Please report this issue to the ALSA developers. 
Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: snd_pcm_dump(): Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Soft volume PCM Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Control: PCM Playback Volume Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: min_dB: -51 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: max_dB: 0 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: resolution: 256 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Its setup is: Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stream : CAPTURE Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: access : MMAP_INTERLEAVED Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: format : S16_LE Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: subformat : STD Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: channels : 2 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: rate : 44100 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: exact rate : 44100 (44100/1) Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: msbits : 16 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: buffer_size : 88192 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_size : 44096 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_time : 999909 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: tstamp_mode : ENABLE Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_step : 1 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: avail_min : 87310 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_event : 0 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: start_threshold : -1 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stop_threshold : 6205960286516543488 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_threshold: 0 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_size : 0 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: boundary : 6205960286516543488 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Slave: Hardware PCM card 0 'HDA Intel' device 0 subdevice 0 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: Its setup is: Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stream : CAPTURE Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: access : MMAP_INTERLEAVED Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: format : S16_LE Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: subformat : STD Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: channels : 2 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: rate : 44100 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: exact rate : 44100 (44100/1) Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: msbits : 16 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: buffer_size : 88192 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_size : 44096 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_time : 999909 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: tstamp_mode : ENABLE Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_step : 1 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: avail_min : 87310 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: period_event : 0 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: start_threshold : -1 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: stop_threshold : 6205960286516543488 Jan 8 
09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_threshold: 0 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: silence_size : 0 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: boundary : 6205960286516543488 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: appl_ptr : 87320 Jan 8 09:43:48 al-ubuntu pulseaudio[13064]: alsa-util.c: hw_ptr : 87320 anjanesh@al-ubuntu:~$

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with this super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that, means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email, means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010  quick step of "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations. D. DO: Use Outlook Search folders in combination with categories As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read and you have to decide whether: It can take 2 minutes to deal with for good, right now, or It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this Action – no email is required, but I need to do something ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read it later WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track. SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it. For all these 'next steps' use Outlook categories (right click on the email and assign category, or use shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action. E. DO: Get into a Routine Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing most urgent items in search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week. F. Other resources Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article

  • Running ASP.NET Webforms and ASP.NET MVC side by side

    - by rajbk
    One of the nice things about ASP.NET MVC and its older brother ASP.NET WebForms is that they are both built on top of the ASP.NET runtime environment. The advantage of this is that, you can still run them side by side even though MVC and WebForms are different frameworks. Another point to note is that with the release of the ASP.NET routing in .NET 3.5 SP1, we are able to create SEO friendly URLs that do not map to specific files on disk. The routing is part of the core runtime environment and therefore can be used by both WebForms and MVC. To run both frameworks side by side, we could easily create a separate folder in your MVC project for all our WebForm files and be good to go. What this post shows you instead, is how to have an MVC application with WebForm pages  that both use a common master page and common routing for SEO friendly URLs.  A sample project that shows WebForms and MVC running side by side is attached at the bottom of this post. So why would we want to run WebForms and MVC in the same project?  WebForms come with a lot of nice server controls that provide a lot of functionality. One example is the ReportViewer control. Using this control and client report definition files (RDLC), we can create rich interactive reports (with charting controls). I show you how to use the ReportViewer control in a WebForm project here :  Creating an ASP.NET report using Visual Studio 2010. We can create even more advanced reports by using SQL reporting services that can also be rendered by the ReportViewer control. Now, consider the sample MVC application I blogged about called ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager. Assume you were given the requirement to add a UI to the MVC application where users could interact with a report and be given the option to export the report to Excel, PDF or Word. How do you go about doing it?   This is a perfect scenario to use the ReportViewer control and RDLCs. As you saw in the post on creating the ASP.NET report, the ReportViewer control is a Web Control and is designed to be run in a WebForm project with dependencies on, amongst others, a ScriptManager control and the beloved Viewstate.  Since MVC and WebForm both run under the same runtime, the easiest thing to is to add the WebForm application files (index.aspx, rdlc, related class files) into our MVC project. You can copy the files over from the WebForm project into the MVC project. Create a new folder in our MVC application called CommonReports. Add the index.aspx and rdlc file from the Webform project   Right click on the Index.aspx file and convert it to a web application. This will add the index.aspx.designer.cs file (this step is not required if you are manually adding a WebForm aspx file into the MVC project).    Verify that all the type names for the ObjectDataSources in code behind to point to the correct ProductRepository and fix any compiler errors. Right click on Index.aspx and select “View in browser”. You should see a screen like the one below:   There are two issues with our page. It does not use our site master page and the URL is not SEO friendly. Common Master Page The easiest way to use master pages with both MVC and WebForm pages is to have a common master page that each inherits from as shown below. The reason for this is most WebForm controls require them to be inside a Form control and require ControlState or ViewState. 
ViewMasterPages used in MVC, on the other hand, are designed to be used with content pages that derive from ViewPage with Viewstate turned off. By having a separate master page for MVC and WebForm that inherit from the Root master page,, we can set properties that are specific to each. For example, in the Webform master, we can turn on ViewState, add a form tag etc. Another point worth noting is that if you set a WebForm page to use a MVC site master page, you may run into errors like the following: A ViewMasterPage can be used only with content pages that derive from ViewPage or ViewPage<TViewItem> or Control 'MainContent_MyButton' of type 'Button' must be placed inside a form tag with runat=server. Since the ViewMasterPage inherits from MasterPage as seen below, we make our Root.master inherit from MasterPage, MVC.master inherit from ViewMasterPage and Webform.master inherits from MasterPage. We define the attributes on the master pages like so: Root.master <%@ Master Inherits="System.Web.UI.MasterPage"  … %> MVC.master <%@ Master MasterPageFile="~/Views/Shared/Root.Master" Inherits="System.Web.Mvc.ViewMasterPage" … %> WebForm.master <%@ Master MasterPageFile="~/Views/Shared/Root.Master" Inherits="NorthwindSales.Views.Shared.Webform" %> Code behind: public partial class Webform : System.Web.UI.MasterPage {} We make changes to our reports aspx file to use the Webform.master. See the source of the master pages in the sample project for a better understanding of how they are connected. SEO friendly links We want to create SEO friendly links that point to our report. A request to /Reports/Products should render the report located in ~/CommonReports/Products.aspx. Simillarly to support future reports, a request to /Reports/Sales should render a report in ~/CommonReports/Sales.aspx. Lets start by renaming our index.aspx file to Products.aspx to be consistent with our routing criteria above. As mentioned earlier, since routing is part of the core runtime environment, we ca easily create a custom route for our reports by adding an entry in Global.asax. public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}");   //Custom route for reports routes.MapPageRoute( "ReportRoute", // Route name "Reports/{reportname}", // URL "~/CommonReports/{reportname}.aspx" // File );     routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); } With our custom route in place, a request to Reports/Employees will render the page at ~/CommonReports/Employees.aspx. We make this custom route the first entry since the routing system walks the table from top to bottom, and the first route to match wins. Note that it is highly recommended that you write unit tests for your routes to ensure that the mappings you defined are correct. Common Menu Structure The master page in our original MVC project had a menu structure like so: <ul id="menu"> <li> <%=Html.ActionLink("Home", "Index", "Home") %></li> <li> <%=Html.ActionLink("Products", "Index", "Products") %></li> <li> <%=Html.ActionLink("Help", "Help", "Home") %></li> </ul> We want this menu structure to be common to all pages/views and hence should reside in Root.master. Unfortunately the Html.ActionLink helpers will not work since Root.master inherits from MasterPage which does not have the helper methods available. The quickest way to resolve this issue is to use RouteUrl expressions. 
    Common Menu Structure

    The master page in our original MVC project had a menu structure like so:

    <ul id="menu">
        <li><%=Html.ActionLink("Home", "Index", "Home") %></li>
        <li><%=Html.ActionLink("Products", "Index", "Products") %></li>
        <li><%=Html.ActionLink("Help", "Help", "Home") %></li>
    </ul>

    We want this menu structure to be common to all pages/views, so it should reside in Root.master. Unfortunately, the Html.ActionLink helpers will not work there, because Root.master inherits from MasterPage, which does not have the helper methods available. The quickest way to resolve this is to use RouteUrl expressions. Using RouteUrl expressions, we can generate URLs from route definitions. By specifying parameter values, and a route name if required, we get back a URL string that corresponds to a matching route. We move our menu structure to Root.master and change it to use RouteUrl expressions:

    <ul id="menu">
        <li><asp:HyperLink ID="hypHome" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=home,action=index%>">Home</asp:HyperLink></li>
        <li><asp:HyperLink ID="hypProducts" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=products,action=index%>">Products</asp:HyperLink></li>
        <li><asp:HyperLink ID="hypReport" runat="server" NavigateUrl="<%$RouteUrl:routename=ReportRoute,reportname=products%>">Product Report</asp:HyperLink></li>
        <li><asp:HyperLink ID="hypHelp" runat="server" NavigateUrl="<%$RouteUrl:routename=default,controller=home,action=help%>">Help</asp:HyperLink></li>
    </ul>

    We are done adding the common navigation to our application. The application now uses a common theme, routing and navigation structure.

    Conclusion

    Through this post we have seen how to:

    - Add a WebForm page from a WebForm project to an existing ASP.NET MVC application
    - Use a common master page for both WebForm and MVC pages
    - Use routing for SEO friendly links
    - Use a common menu structure for both WebForm and MVC pages

    The sample project is attached below. Version: VS 2010 RTM. Remember to change your connection string to point to your Northwind database. NorthwindSalesMVCWebform.zip

    Read the article

  • DB2 insert performance - How to measure

    - by svrist
    [From stackoverflow] I'm trying to find a way to speed up my inserts into DB2 9.7.1 (Ubuntu Linux). I'm watching vmstat and trying to gather some statistics via the db2 get snapshot commands, but I'm not able to figure out which numbers to look at to see where the trouble is. I've read lots of material like http://www.eggheadcafe.com/software/aspnet/35692526/question-multiple-row-in.aspx and http://www.ibm.com/developerworks/data/library/tips/dm-0403wilkins/, and tricks like ALTER TABLE lalala APPEND ON work somewhat (the difference between a dd if=/dev/zero and an insert is still a factor of 10), but I would like to be able to find the counters or other performance indicators that actually show why it makes sense to use those tricks. For example: What is the metric called that shows me that buffer page allocation (FSCR stuff) is the problem? Where do I see that insert time is hampered by clustered indexes? I find db2top very useful, but I'm still searching for a more direct "this is your bottleneck" view.

    Read the article

  • New SSIS tool on Codeplex – SSIS Log Analyzer

    I stumbled across a new SSIS tool on Codeplex today, the SSIS Log Analyzer, which was only released a few days ago. Whilst it is a beta release and currently only supports 2005 (2008 support is promised), it looks quite interesting. It seems to be a fancy log viewer, but with some clever features and a nice looking front-end. I've only read the documentation so far, but it has graphs and a debug view that shows your package with colour animations similar to those you see when debugging in BIDS, and everyone knows the way the pretty colours and numbers change is the best bit! I'll quote some of the features for you here and then let you make up your own mind: is it useful in the real world?

    - Option to analyze the logs manually by applying row and column filters over the log data, or by using queries to specify more complex criteria.
    - Automated Performance Analysis, which provides a quick graphical look at which tasks spent the most time during package execution.
    - Rerun (debug) the entire sequence of events which happened during package execution, showing the flow of control in graphical form and the changes in runtime values for each task, such as execution duration.
    - Support for Auto Analyzers to automatically find issues and provide suggestions for problems which can be figured out with the help of the SSIS logs and/or the package.
    - Option to analyze just the log file, or the log and package together.
    - Provides a lightweight environment for a quick look at the package. Opening it in BIDS takes some time because, being an authoring environment, it does all sorts of validations, resulting in some delay.

    See http://ssisloganalyzer.codeplex.com/ for more details.

    Read the article

  • Troubleshooting Windows Authentication problems (no challenge) in IIS 7.5?

    - by Aaronaught
    I know that there are thousands of reports of people having trouble getting Integrated Windows Authentication to work with IIS, but they all seem to lead to web pages that don't apply or solutions that I've already tried. I've deployed dozens of sites like this before, so either there's something bizarre going on with the server/configuration, or I've been looking at this too long and not seeing the obvious. Simply put, everything works perfectly on my local machine, but falls apart on the production server, which as far as I can tell has the exact same configuration.

    On the local machine:
    - The machine is running Windows 7 Ultimate, Service Pack 1, IIS 7.5.
    - The site has been tested successfully, using both IIS and the VS Web Development Server.
    - The IIS site config has all authentication methods disabled except Windows Authentication.
    - The local machine is not on any domain.
    - The providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off.
    - All browsers tested (IE, Firefox, Chrome) show the challenge prompt and allow me to log in to the localhost domain with my (local) Windows account.
    - All browsers tested also work using an opaque local IP address - so the browsers themselves don't seem to care whether the site appears "local" or "remote".
    - I've added a display line to the web page which shows the currently-logged-in user, and it shows exactly what I would expect (whichever local user I logged in with).

    On the remote machine:
    - The server is running Windows Server 2008 R2, IIS 7.5.
    - Loading the web page results in an immediate 401.2 error: "You are not authorized to view this page due to invalid authentication headers." No challenge prompt ever appears.
    - The IIS site config has all authentication methods disabled except Windows Authentication.
    - The remote machine is not on any domain.
    - The providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off.
    - On the remote machine (remote desktop session), the same error appears in Internet Explorer regardless of whether the domain is localhost or the external IP address.
    - If I try to view the remote web site from my local machine, the error is still 401, but a slightly different 401: no subcode, with the text "Access is denied due to invalid credentials."
    - The Windows Authentication IIS role feature is installed.
    - The WindowsAuthentication module is added (at the server level).
    - The exact same error occurs if I turn off Windows Authentication and enable Basic Authentication.
    - The site does load if I turn off Windows Authentication and enable Anonymous (obviously).

    I've already followed all of the troubleshooting steps on Microsoft Support: Troubleshooting HTTP 401 errors in IIS. I've already tried the workaround shown on another Microsoft support page (supposedly to force NTLM as the only method). Last but not least, I tried turning on FREB for 401.2 errors, and the results don't seem to tell me anything useful; all I see is the following warning:

    MODULE_SET_RESPONSE_ERROR_STATUS
      ModuleName: IIS Web Core
      Notification: 2
      HttpStatus: 401
      HttpReason: Unauthorized
      HttpSubStatus: 2
      ErrorCode: 2147942405
      ConfigExceptionInfo:
      Notification: AUTHENTICATE_REQUEST
      ErrorCode: Access is denied. (0x80070005)

    ...this seems to just be telling me what I already know (that it's simply rejecting the request instead of negotiating the credentials).
    The trace does indicate that the WindowsAuthentication module is correctly loaded, because there is a NOTIFY_MODULE_START line with ModuleName = WindowsAuthentication (and various other ASP.NET follow-up events - [un]fortunately, no interesting errors or warnings here). Can anyone tell me what I might be missing here?

    Quick update: I'm a little uncomfortable sending a whole Wireshark dump as it would reveal IPs, URLs and other stuff, but I did a side-by-side comparison of the HTTP responses from localhost and the remote server in Fiddler, and it seems fairly self-evident what the problem is:

    Localhost:
    HTTP/1.1 401 Unauthorized
    Cache-Control: private
    Content-Type: text/html; charset=utf-8
    Server: Microsoft-IIS/7.5
    WWW-Authenticate: Negotiate
    WWW-Authenticate: NTLM
    X-Powered-By: ASP.NET
    Date: Sat, 17 Dec 2011 23:42:34 GMT
    Content-Length: 6399
    Proxy-Support: Session-Based-Authentication

    Remote:
    HTTP/1.1 401 Unauthorized
    Content-Type: text/html
    Server: Microsoft-IIS/7.5
    X-Powered-By: ASP.NET
    Date: Sat, 17 Dec 2011 23:43:13 GMT
    Content-Length: 1293

    Aside from a few seemingly inconsequential differences like Cache-Control, the main difference is that the remote server is not sending the WWW-Authenticate headers back to the client. So I guess that narrows the question down to: why is IIS not sending WWW-Authenticate headers when Windows Authentication appears to be installed, loaded, and exclusively enabled?
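    To reproduce that comparison in code rather than in Fiddler (a minimal sketch, not from the original question; the URLs are placeholders), a small console program can issue an anonymous request to each server and dump whatever WWW-Authenticate challenges come back:

    using System;
    using System.Net;

    class AuthHeaderCheck
    {
        static void Main()
        {
            // Placeholder URLs - point these at the local and remote sites.
            foreach (string url in new[] { "http://localhost/myapp/", "http://remote-server/myapp/" })
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Credentials = null; // stay anonymous so the server's challenge is visible

                try
                {
                    using (request.GetResponse()) { }
                    Console.WriteLine("{0} -> request succeeded (no challenge issued)", url);
                }
                catch (WebException ex)
                {
                    var response = ex.Response as HttpWebResponse;
                    if (response == null) throw;

                    Console.WriteLine("{0} -> {1}", url, (int)response.StatusCode);
                    string[] challenges = response.Headers.GetValues("WWW-Authenticate");
                    if (challenges == null)
                        Console.WriteLine("  (no WWW-Authenticate headers - the broken case)");
                    else
                        foreach (string value in challenges)
                            Console.WriteLine("  WWW-Authenticate: " + value);
                }
            }
        }
    }

    A working Windows Authentication setup should print the Negotiate and NTLM challenges for both URLs; the responses quoted above show them only for localhost.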

    Read the article
