Search Results

Search found 1032 results on 42 pages for 'repeated'.

Page 27 of 42

  • Who Makes a Good Product Owner

    - by Robert May
    In general, the best product owners are those who care passionately about the customer of the product. Note that I didn’t say about the product itself. Actually, people who care only about the product generally do not make good product owners. Products only matter in relation to their customers. If a product doesn’t provide value to the customer, then the product has no value, no matter what a person might think of the product, and no matter what cool technologies exist inside of the product. A good product owner is also a good negotiator. They recognize that many different priorities exist inside of a corporation, but that there can be only one list that developers work from. A good product owner recognizes that it’s their job to help others around them prioritize (perhaps with a Product Council), but also understands that they alone have the final say about priorities and are willing to make the tough decisions required. Deciding the priority between two perfectly valid stories is very difficult, especially when the stories are from two different departments! A good product owner is deeply interested in helping the team be successful. They don’t seek to control the team, but instead seek to understand what the team can do and then work with the team to get the best product possible for the Customer. A good product owner is never denigrating to team members, ever. They recognize that such behavior would damage the trust that needs to be present between team members and product owners and will avoid it at all costs. In general, technical people (i.e. former or current developers) make poor product owners. In their minds, they can’t separate implementation details from user functionality, so their stories end up sounding like implementation details. For example, “The user enters their username on the password screen” is something that a technical product owner would write. The proper wording for that story is “A user supplies the system with their credentials.” Because technical people think differently from the rest of the population, they are generally not a good fit. A good product owner is also a good writer. Writing good stories demands good writing. The art of persuasion, descriptiveness and just general good grammar are all required. A good Product Owner must also be well spoken, since much of what they convey will be spoken, not just written. A good product owner is a “People Person.” They like talking to people and are very patient. They don’t mind having questions repeated or fielding many questions, because they want to make sure that the ideas they’re conveying are properly understood so the customer gets the best product possible. They are happy to answer any questions a team member may have and invite feedback and criticism of designs and stories, since they want a good product. They really have little ego that gets in the way of building a great product. All of these qualities can be hard to find, but if you look closely enough, you’ll find the right person in your organization. Product owners can be found anywhere, not just in upper management. Some of the best product owners are those who are very close to the customer. In fact, check your customer support staff. I’d bet that several great product owners are lurking there. One final note about what makes a good product owner:
You’re probably NOT going to find a good product owner in a manager, especially if they consider themselves a “Manager.” Product owners don’t manage anything but the backlog, so be especially careful if the person you’re selecting for Product Owner is a manager. Up Next, “Messing with the Team.” Technorati Tags: Scrum, Product Owner

    Read the article

  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
    When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application’s Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider. Because at least one dense dimension for an application is required to deploy from HPCM to Essbase, “Measures” and “AllocationType”, as the only required dimensions in an HPCM application, are created dense by default. However, for optimisation reasons, you may wish to consider changing this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage, i.e. the stage with the most assignment destinations, and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage, so testing the dense/sparse settings in every stage is the key to finding the best configuration for each individual application. It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA. When a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. Note that such manual changes may not be preserved in all cases; please see below for a full explanation. There are two methods of deployment from HPCM to Essbase from 11.1.2.1. There is a “replace” deploy method and an “update” deploy method: “Replace” will delete the Essbase application and replace it. If this method is chosen, then any changes made directly on the Essbase outline will be lost. If you use the update deploy method (with or without archiving and reloading data), then the Essbase outline, including any manual changes you have made (i.e. changes to dense/sparse settings of the cloned dimensions), will be preserved. Notes If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension) then you could consider making that dimension dense. Always review block sizes after all changes! (A block’s size is 8 bytes multiplied by the product of the stored member counts of the dense dimensions, so, for example, two dense dimensions with 40 and 50 stored members give a block of 16,000 bytes.) The maximum block size recommended in the Essbase Database Administrator’s Guide is 100k for 32 bit Essbase and 200k for 64 bit Essbase. However, calculations may perform better with a larger than recommended block size provided that sufficient memory is available on the Essbase server. Test different configurations to determine the optimal configuration for your HPCM application. Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings, i.e. data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrator’s Guide.
For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit

    Read the article

  • Ubuntu 11.04 and 10.04 hang with black screen while installing from USB disk

    - by Bill
    I've been trying to install Ubuntu 11.04 from a USB flash stick and each time I try to boot from the USB key one of two things happens: A) The screen that asks you what you would like to do (e.g. run Ubuntu from the USB key or install it) shows up and the countdown to the default option starts to count down but as soon as I either touch the keyboard (sometimes I press enter or the arrow keys to select an option) or the countdown gets to zero the screen just locks up and nothing happens no matter how long I wait. B) When I boot from the USB key the screen will flicker for a second and then go black with a flashing white underscore at the top left corner of the screen. Again it doesn't matter how long I wait, nothing happens and pressing keys doesn't do a thing. The very first time I tried to install it I got a terminal-like screen that said something about a directory called 'casper' having an error of some sort. I have tried installing from USB using both 11.04 and 10.10. I'm about to try 10.04. I have read tons of forum posts about this but so far I haven't seen anything in the solutions that apply to me. My intention is to dual boot Windows 7 and Ubuntu. I must keep Windows as I am required to use Visual Studio for one of my college courses. Right now I'm using Wubi but I really want a full install. I can't use LVPM because it doesn't work with the version of Wubi I used. So now I'm thinking my best bet is to try to get a clean install working. I'd convert Wubi to a full install too, but there's no solution as far as I've read. So could someone tell me a reason why this is happening or if there's something I can do to get around the problem? I'm using a Gateway LT2802u netbook with an Intel Atom N455 processor, 1GB RAM, Intel Graphics Media Accelerator 3150 graphics card, and a 250GB HDD. I don't have anything on my current Wubi install that I can't replace so keep in mind when answering that I don't care if I lose my current settings and files from Wubi. Thanks everyone! UPDATE I just answered my own question so in case anyone else is having this same problem using similar hardware, do the following: When I first tried installing 11.04 I used the recommended universal installer tool to create the USB live/installation disk. That caused the original problem. Note that I had already downloaded the 11.04 ISO and did not use the included downloader from the USB creator. After that failed I used the same USB creator but had it download 10.10 for me. It also failed with the same issue. I repeated this process with unetbootin as well for both versions. Finally, I downloaded the Ubuntu 10.04 ISO and used the recommended USB creator once again. There was an error while creating the USB live install so I reformatted the USB key as FAT32 and tried again. It created the USB key. I then booted from the USB flash drive and selected "Install Ubuntu" (exact wording was different). It worked! It took me through the process that you see shown in pictures on the Ubuntu website. I let it create the appropriate partitions for me and it simply worked. I did get a few errors while the system tried to restart after it installed. It hung on a terminal-like screen but I pressed ENTER and it restarted. I booted into Windows 7, it checked the disks as it sensed that I messed with a partition, then it booted into Windows normally. Now I'm going to uninstall Wubi and update my new full install of Ubuntu! I'm excited to get the benefits of a full install now.
So in the end, hopefully someone can learn from what I did.

    Read the article

  • Improvements to Joshua Bloch's Builder Design Pattern?

    - by Jason Fotinatos
    Back in 2007, I read an article about Joshua Bloch's take on the "builder pattern" and how it could be used to address the overuse of constructors and setters, especially when an object has a large number of properties, most of which are optional. A brief summary of this design pattern is outlined here [http://rwhansen.blogspot.com/2007/07/theres-builder-pattern-that-joshua.html]. I liked the idea, and have been using it since. The problem with it is that, while it is very clean and nice to use from the client perspective, implementing it can be a pain in the bum! There are so many different places in the object where a single property is referenced, and thus creating the object and adding a new property takes a lot of time. So...I had an idea. First, an example object in Joshua Bloch's style: Josh Bloch Style: public class OptionsJoshBlochStyle { private final String option1; private final int option2; // ...other options here <<<< public String getOption1() { return option1; } public int getOption2() { return option2; } public static class Builder { private String option1; private int option2; // other options here <<<<< public Builder option1(String option1) { this.option1 = option1; return this; } public Builder option2(int option2) { this.option2 = option2; return this; } public OptionsJoshBlochStyle build() { return new OptionsJoshBlochStyle(this); } } private OptionsJoshBlochStyle(Builder builder) { this.option1 = builder.option1; this.option2 = builder.option2; // other options here <<<<<< } public static void main(String[] args) { OptionsJoshBlochStyle optionsVariation1 = new OptionsJoshBlochStyle.Builder().option1("firefox").option2(1).build(); OptionsJoshBlochStyle optionsVariation2 = new OptionsJoshBlochStyle.Builder().option1("chrome").option2(2).build(); } } Now my "improved" version: public class Options { // note that these are not final private String option1; private int option2; // ...other options here public String getOption1() { return option1; } public int getOption2() { return option2; } public static class Builder { private final Options options = new Options(); public Builder option1(String option1) { this.options.option1 = option1; return this; } public Builder option2(int option2) { this.options.option2 = option2; return this; } public Options build() { return options; } } private Options() { } public static void main(String[] args) { Options optionsVariation1 = new Options.Builder().option1("firefox").option2(1).build(); Options optionsVariation2 = new Options.Builder().option1("chrome").option2(2).build(); } } As you can see in my "improved version", there are two fewer places in which we need to add code for any additional properties (or options, in this case)! The only negative that I can see is that the instance variables of the outer class cannot be final. But, the class is still immutable without this. Is there really any downside to this improvement in maintainability? There has to be a reason why he repeated the properties within the nested class that I'm not seeing, right?

    Read the article

  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
    Introduction In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option '--incremental-base' was introduced. Using this option a user can take an incremental backup without specifying the '--start-lsn' option. Description of this option can be found here. Instead of '--start-lsn' the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB would extract the end LSN of this backup from the mysql.backup_history table as well as the backup_variables.txt file (for verification) to use it as the start LSN of the incremental backup. Because of popular demand, in MEB 3.7.1 the option '--incremental-base' has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '--incremental-base' option. With the new option MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup. Details A new prefix 'history:' has been introduced for the --incremental-base option and currently the only permissible value is the string "last_backup". So using the new option an incremental backup can be taken with the following command: $ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at this backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows: If 'BD' is the backup destination of the last successful backup in the mysql.backup_history table and 'BHT' is the mysql.backup_history table if can_read_files_at_BD:     if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:         continue_with_backup()     else         return_with_error() else     continue_with_backup() Advantages Apart from ease of use, an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '--with-timestamp' option along with this new option. For example, the following command $ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo. Limitations The option '--incremental-base=history:last_backup' should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say different partial backups at multiple locations). should not be used after any temporary or experimental backups performed on the server (which were successful!). needs to be used with caution, since any intermediate successful backup taken without the --no-connection option will be used as the base backup for the next incremental backup.
will give an error if a valid backup exists at the location of the last successful backup but its end LSN differs from the end LSN recorded for that backup in the backup_history table.
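    A quick way to see which backup MEB would treat as the base is to look at the backup history yourself. This is only a hedged sketch: the column names end_lsn, backup_destination, exit_state and end_time are assumptions based on the MEB 3.7 documentation of mysql.backup_history, so verify them with DESCRIBE mysql.backup_history on your own server before relying on the query.

        -- Sketch: find the end LSN of the last successful backup, which
        -- --incremental-base=history:last_backup would use as the start LSN
        SELECT end_lsn, backup_destination, end_time
          FROM mysql.backup_history
         WHERE exit_state = 'SUCCESS'
         ORDER BY end_time DESC
         LIMIT 1;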

    Read the article

  • Why you shouldn't add methods to interfaces in APIs

    - by Simon Cooper
    It is an oft-repeated maxim that you shouldn't add methods to a publicly-released interface in an API. Recently, I was hit hard when this wasn't followed. As part of the work on ApplicationMetrics, I've been implementing auto-reporting of MVC action methods; whenever an action was called on a controller, ApplicationMetrics would automatically report it without the developer needing to add manual ReportEvent calls. Fortunately, MVC provides an easy hook for when a controller is created, letting me log when it happens - the IControllerFactory interface. Now, the dll we provide to instrument an MVC webapp has to be compiled against .NET 3.5 and MVC 1, as the lowest common denominator. This MVC 1 dll will still work when used in an MVC 2, 3 or 4 webapp because all MVC 2+ webapps have a binding redirect pointing all references to previous versions of System.Web.Mvc to the correct version, and type forwards taking care of any moved types in the new assemblies. Or at least, it should. IControllerFactory In MVC 1 and 2, IControllerFactory was defined as follows: public interface IControllerFactory { IController CreateController(RequestContext requestContext, string controllerName); void ReleaseController(IController controller); } So, to implement the logging controller factory, we simply wrap the existing controller factory: internal sealed class LoggingControllerFactory : IControllerFactory { private readonly IControllerFactory m_CurrentController; public LoggingControllerFactory(IControllerFactory currentController) { m_CurrentController = currentController; } public IController CreateController( RequestContext requestContext, string controllerName) { // log the controller being used FeatureSessionData.ReportEvent("Controller used:", controllerName); return m_CurrentController.CreateController(requestContext, controllerName); } public void ReleaseController(IController controller) { m_CurrentController.ReleaseController(controller); } } Easy. This works as expected in MVC 1 and 2. However, in MVC 3 this type was throwing a TypeLoadException, saying a method wasn't implemented. It turns out that, in MVC 3, the definition of IControllerFactory was changed to this: public interface IControllerFactory { IController CreateController(RequestContext requestContext, string controllerName); SessionStateBehavior GetControllerSessionBehavior( RequestContext requestContext, string controllerName); void ReleaseController(IController controller); } There's a new method in the interface. So when our MVC 1 dll was redirected to reference System.Web.Mvc v3, LoggingControllerFactory tried to implement version 3 of IControllerFactory, was missing the GetControllerSessionBehavior method, and so couldn't be loaded by the CLR. Implementing the new method Fortunately, there was a workaround. Because interface methods are normally implemented implicitly in the CLR, if we simply declare a virtual method matching the signature of the new method in MVC 3, then it will be ignored in MVC 1 and 2 and implement the extra method in MVC 3: internal sealed class LoggingControllerFactory : IControllerFactory { ... public virtual SessionStateBehavior GetControllerSessionBehavior( RequestContext requestContext, string controllerName) { return SessionStateBehavior.Default; } ... } However, this also has problems - the SessionStateBehavior type only exists in .NET 4, and we're limited to .NET 3.5 by support for MVC 1 and 2.
This means that the only solutions to support all MVC versions are: construct the LoggingControllerFactory type at runtime using reflection, or produce entirely separate dlls for MVC 1&2 and MVC 3. Ugh. And all because of that blasted extra method! Another solution? Fortunately, in this case, there is a third option - System.Web.Mvc also provides a DefaultControllerFactory type that can provide the implementation of GetControllerSessionBehavior for us in MVC 3, while still allowing us to override CreateController and ReleaseController. However, this does mean that LoggingControllerFactory won't be able to wrap any calls to GetControllerSessionBehavior. This is an acceptable bug, given the other options, as very few developers will be overriding GetControllerSessionBehavior in their own custom controller factory. So, if you're providing an interface as part of an API, then please please please don't add methods to it. Especially if you don't provide a 'default' implementing type. Any code compiled against the previous version that can't be updated will have some very tough decisions to make to support both versions.

    Read the article

  • SQL Constraints – CHECK and NOCHECK

    - by David Turner
    One performance issue I faced at a recent project was with the way that our constraints were being managed. We were using Subsonic as our ORM, and it has a useful tool for generating your ORM code called SubStage – once configured, you can regenerate your DAL code easily based on your database schema, and it can even be integrated into your build as a pre-build event if you want to do this. SubStage also offers the useful feature of being able to generate DDL scripts for your entire database, and can script your data for you too. The problem came when we decided to use the generate scripts feature to migrate the database onto a test database instance – it turns out that the DDL scripts that it generates include the WITH NOCHECK option, so when we executed them on the test instance, and performed some testing, we found that performance wasn’t as expected. A constraint can be disabled, enabled but not trusted, or enabled and trusted. When it is disabled, data can be inserted that violates the constraint because it is not being enforced; this is useful for bulk load scenarios where performance is important. So what does it mean to say that a constraint is trusted or not trusted? Well this refers to the SQL Server Query Optimizer, and whether it trusts that the constraint is valid. If it trusts the constraint then it doesn’t check it is valid when executing a query, so the query can be executed much faster. Here is an example based on this article on TechNet; here we create two tables with a Foreign Key constraint between them, and add a single row to each. We then query the tables: DROP TABLE t2 DROP TABLE t1 GO CREATE TABLE t1(col1 int NOT NULL PRIMARY KEY) CREATE TABLE t2(col1 int NOT NULL) ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1) REFERENCES t1(col1) INSERT INTO t1 VALUES(1) INSERT INTO t2 VALUES(1) GO SELECT COUNT(*) FROM t2 WHERE EXISTS (SELECT * FROM t1 WHERE t1.col1 = t2.col1) This all works fine, and in this scenario the constraint is enabled and trusted. We can verify this by executing the following SQL to query the ‘is_disabled’ and ‘is_not_trusted’ properties: select name, is_disabled, is_not_trusted from sys.foreign_keys This gives the following result: We can disable the constraint using this SQL: alter table t2 NOCHECK CONSTRAINT fk_t2_t1 And when we query the constraints again, we see that the constraint is disabled and not trusted: So the constraint won’t be enforced and we can insert data into the table t2 that doesn’t match the data in t1, but we don’t want to do this, so we can enable the constraint again using this SQL: alter table t2 CHECK CONSTRAINT fk_t2_t1 But when we query the constraints again, we see that the constraint is enabled, but it is still not trusted: This means that the optimizer will not use the constraint when optimizing queries over it, which will impact the performance of those queries, and this is definitely not what we want, so we need to make the constraint trusted by the optimizer again.
First we should check that our constraints haven’t been violated, which we can do by running DBCC: DBCC CHECKCONSTRAINTS (t2) Hopefully you see the following message indicating that DBCC completed without finding any violations of your constraint: Having verified that the constraint was not violated while it was disabled, we can simply execute the following SQL: alter table t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1 At first glance this looks like it must be a typo to have the keyword CHECK repeated twice in succession, but it is the correct syntax, and when we query the constraint’s properties, we find that it is now trusted again: To fix our specific problem, we created a script that checked all constraints on our tables, using the following syntax: ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT ALL
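    As a quick health check after bulk loads or scripted deployments, the catalog views used above can also list every constraint that is currently disabled or untrusted. This is a minimal sketch against sys.foreign_keys and sys.check_constraints; adapt the filtering (for example by schema) to your own database.

        -- Foreign keys the optimizer will not trust
        SELECT name, is_disabled, is_not_trusted
        FROM sys.foreign_keys
        WHERE is_disabled = 1 OR is_not_trusted = 1;

        -- Check constraints in the same state
        SELECT name, is_disabled, is_not_trusted
        FROM sys.check_constraints
        WHERE is_disabled = 1 OR is_not_trusted = 1;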

    Read the article

  • perl comparing 2 data file as array 2D for finding match one to one [migrated]

    - by roman serpa
    I'm doing a program that uses combinations of variables (combiData.txt, 63 rows x a varying number of columns) for analysing a data table (j1j2_1.csv, 1000 rows x 19 columns), to count how many times each combination is repeated in the data table and which rows it comes from (for instance, tableData[row][4]). I have tried to run it; however, I get the following messages: Use of uninitialized value $val in numeric eq (==) at rowInData.pl line 34. Use of reference "ARRAY(0x1a2eae4)" as array index at rowInData.pl line 56. Use of reference "ARRAY(0x1a1334c)" as array index at rowInData.pl line 56. Use of uninitialized value in subtraction (-) at rowInData.pl line 56. Modification of non-creatable array value attempted, subscript -1 at rowInData.pl line 56. nothing This is my code: #!/usr/bin/perl use strict; use warnings; my $line_match; my $countTrue; open (FILE1, "<combiData.txt") or die "can't open file text1.txt\n"; my @tableCombi; while(<FILE1>) { my @row = split(' ', $_); push(@tableCombi, \@row); } close FILE1 || die $!; open (FILE2, "<j1j2_1.csv") or die "can't open file text1.txt\n"; my @tableData; while(<FILE2>) { my @row2 = split(/\s*,\s*/, $_); push(@tableData, \@row2); } close FILE2 || die $!; #function transform combiData.txt variable (position ) to the real value that i have to find in the data table. sub trueVal($){ my ($val) = $_[0]; if($val == 7){ return ('nonsynonymous_SNV'); } elsif( $val == 14) { return '1'; } elsif( $val == 15) { return '1';} elsif( $val == 16) { return '1'; } elsif( $val == 17) { return '1'; } elsif( $val == 18) { return '1';} elsif( $val == 19) { return '1';} else { print 'nothing'; } } #function IntToStr ( ) , i'm not sure if it is necessary) that transforms $ to strings , to use the function <eq> in the third loop for the array of combinations compared with the data array . sub IntToStr { return "$_[0]"; } for my $combi (@tableCombi) { $line_match = 0; for my $sheetData (@tableData) { $countTrue=0; for my $cell ( @$combi) { #my $temp =\$tableCombi[$combi][$cell] ; #if ( trueVal($tableCombi[$combi][$cell] ) eq $tableData[$sheetData][ $tableCombi[$combi][$cell] - 1 ] ){ #if ( IntToStr(trueVal($$temp )) eq IntToStr( $tableData[$sheetData][ $$temp-1] ) ){ if ( IntToStr(trueVal($tableCombi[$combi][$cell]) ) eq IntToStr($tableData[$sheetData][ $tableCombi[$combi][$cell] -1]) ){ $countTrue++;} if ($countTrue==@$combi){ $line_match++; #if ($line_match < 50){ print $tableData[$sheetData][4]." "; #} } } } print $line_match." \n"; }

    Read the article

  • How the number of indexes built on a table can impact performances?

    - by Davide Mauri
    We all know that putting too many indexes (I’m talking of non-clustered indexes only, of course) on a table may produce performance problems due to the overhead that each index brings to all insert/update/delete operations on that table. But how much? I mean, we all agree – I think – that, generally speaking, having many indexes on a table is “bad”. But how bad can it be? How much will performance degrade? And on a concurrent system, how much can this situation also hurt SELECT performance? If SQL Server takes more time to update a row on a table due to the number of indexes it also has to update, this also means that locks will be held for more time, slowing down the perceived performance of all queries involved. I was quite curious to measure this, also because when teaching it’s by far more impressive and effective to show attendees a chart with the measured impact, so that they can really “feel” what it means! To do the tests, I’ve created a script that creates a table (that has a clustered index on the primary key which is an identity column), loads 1000 rows into the table (inserting 1000 rows using only one insert, instead of issuing 1000 inserts of one row each, in order to minimize the transaction-handling overhead that would otherwise be incurred), and measures the time taken to do it. The process is then repeated 16 times, each time adding a new index on the table, using columns from the table in a round-robin fashion. Tests are done against different row sizes, so that it’s possible to check if performance changes depending on row size. The results are interesting, although expected. This is the chart showing how much time it takes to insert 1000 rows into a table that has from 0 to 16 non-clustered indexes. Each test has been run 20 times in order to have an average value. The values have been cleaned of outliers caused by unpredictable performance fluctuations due to machine activity. The test shows that in a table with a row size of 80 bytes, 1000 rows can be inserted in 9.05 msec if no indexes are present on the table, and the value grows up to 88 (!!!) msec when you have 16 indexes on it. This means an impact on performance of 975%. That’s *huge*! Now, what happens if we have a bigger row size? Say that we have a table with a row size of 1520 bytes. Here’s the data, from 0 to 16 indexes on that table: In this case we need nearly 22 msec to insert 1000 rows into a table with no indexes, but we need more than 500 msec if the table has 16 active indexes! Now we’re talking of a 2410% impact on performance! Now we can have a tangible idea of what the impact of having (too?) many indexes on a table is, and also how the row size impacts performance. That’s why the golden rule of OLTP databases “few indexes, but good” is so true! (And in fact last week I saw a database with tables with 1700-byte row size and 23 (!!!) indexes on them!) This also means that a too heavy denormalization is really not a good idea (we’re always talking about OLTP systems, keep it in mind), since performance gets worse with the increase of the row size. So, be careful out there, and keep in mind that “equilibrium” is the key word of a database professional: equilibrium between read and write performance, between normalization and denormalization, between too few and too many indexes. PS Tests are done on a VMWare Workstation 7 VM with 2 CPU and 4 GB of Memory. Host machine is a Dell Precision M6500 with i7 Extreme X920 Quad-Core HT 2.0GHz and 16GB of RAM.
The database is stored on an Intel X-25E SSD drive, Simple Recovery Model, running on SQL Server 2008 R2. If you also want to run the tests on your own, you can download the test script here: Open TestIndexPerformance.sql
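    The downloadable script is not reproduced in the excerpt, but a minimal sketch of the approach might look like the following. The table, column and index names are hypothetical, the row size is fixed, and the timing harness is simplified compared to the author's version, which tests several row sizes and averages 20 runs per index count.

        -- Hypothetical sketch: time a single 1000-row insert as non-clustered indexes are added
        CREATE TABLE dbo.IndexTest (
            Id  INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
            C01 CHAR(10), C02 CHAR(10), C03 CHAR(10), C04 CHAR(10),
            C05 CHAR(10), C06 CHAR(10), C07 CHAR(10), C08 CHAR(10)
        );

        DECLARE @i INT = 0, @sql NVARCHAR(400), @start DATETIME2;
        WHILE @i <= 16
        BEGIN
            IF @i > 0
            BEGIN
                -- add one more index, cycling through the columns round-robin
                SET @sql = N'CREATE INDEX IX_IndexTest_' + CAST(@i AS NVARCHAR(2))
                         + N' ON dbo.IndexTest (C0' + CAST(((@i - 1) % 8) + 1 AS NVARCHAR(1)) + N')';
                EXEC sp_executesql @sql;
            END;

            TRUNCATE TABLE dbo.IndexTest;        -- keep each measurement comparable

            SET @start = SYSDATETIME();
            INSERT INTO dbo.IndexTest (C01, C02, C03, C04, C05, C06, C07, C08)
            SELECT TOP (1000) 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a'
            FROM sys.all_objects a CROSS JOIN sys.all_objects b;   -- one insert of 1000 rows

            PRINT CAST(@i AS VARCHAR(2)) + ' indexes: '
                + CAST(DATEDIFF(MS, @start, SYSDATETIME()) AS VARCHAR(10)) + ' ms';
            SET @i += 1;
        END;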

    Read the article

  • How do you report out user research results?

    - by user12277104
    A couple weeks ago, one of my mentees asked to meet, because she wanted my advice on how to report out user research results. She had just conducted her first usability test for her new employer, and was getting to the point where she wanted to put together some slides, but she didn't want them to be boring. She wanted to talk with me about what to present and how best to present results to stakeholders. While I couldn't meet for another week, thanks to slideshare, I could quickly point her in the direction that my in-person advice would have led her. First, I'd put together a panel for the February 2012 New Hampshire UPA monthly meeting that we then repeated for the 2012 Boston UPA annual conference. In this panel, I described my reporting techniques, as did six of my colleagues -- two of whom work for companies smaller than mine, and four of whom are independent consultants. Before taking questions, we each presented for 3 to 5 minutes on how we presented research results. The differences were really interesting. For example, when do you really NEED a long, written report (as opposed to an email, spreadsheet, or slide deck with callouts)? When you are reporting your test results to the FDA -- that makes sense. in this presentation, I describe two modes of reporting results that I use.  Second, I'd been a participant in the CUE-9 study. CUE stands for Comparative Usability Evaluation, and this was the 9th of these studies that Rolf Molich had designed. Originally, the studies were designed to show the variability in evaluation methods practitioners use to evaluate websites and applications. Of course, using methods and tasks of their own choosing, the results were wildly different. However, in this 9th study, the tasks were the same, the participants were the same, and the problem severity scale was the same, so how would the results of the 19 practitioners compare? Still wildly variable. But for the purposes of this discussion, it gave me a work product that was not proprietary to the company I work for -- a usability test report that I could share publicly. This was the way I'd been reporting results since 2005, and pretty much what I still do, when time allows.  That said, I have been continuing to evolve my methods and reporting techniques, and sometimes, there is no time to create that kind of report -- the team can't wait the days that it takes to take screen shots, go through my notes, refer back to recordings, and write it all up. So in those cases, I use bullet points in email, talk through the findings with stakeholders in a 1-hour meeting, and then post the take-aways on a wiki page. There are other requirements for that kind of reporting to work -- for example, the stakeholders need to attend each of the sessions, and the sessions can't take more than a day to complete, but you get the idea: there is no one "right" way to report out results. If the method of reporting you are using is giving your stakeholders the information they need, in a time frame in which it is useful, and in a format that meets their needs (FDA report or bullet points on a wiki), then that's the "right" way to report your results. 

    Read the article

  • JFall 2012

    - by Geertjan
    JFall 2012 was over far too soon! Seven tracks going on simultaneously in a great location, with many artifacts reminding me of JavaOne, and nice snacks and drinks afterwards. The day started, as such things always do, with a keynote. Thanks to @royvanrijn for the photo below, I didn't take any myself and without a picture this report might have been too dry: What you see above is Steve Chin riding into the keynote hall on his NightHacking bike. The keynote was interesting, I can't be too complimentary about it, since I was part of it myself. Bert Ertman introduced the day and then Steve Chin took over, together with Sharat Chander, Tom Eugelink, Timon Veenstra, and myself. We had a strict choreography for the keynote, one that would ensure a lot of variation and some unexpected surprises, such as Steve being thrown off the stage a few times by Bert because of mentioning JavaOne too many times, rather than the clearly much cooler JFall. Steve talked about JavaOne and the direction Java is headed in, Sharat talked about JavaME and embedded devices, Steve and Tom did a demo involving JavaFX, I did a Project Easel demo, and Timon from Ordina talked about his Duke's Choice Award winning AgroSense project. I think the Project Easel demo (which I repeated later in a screencast for Parleys arranged by Eugene Boogaart) came across well and several people I spoke to especially like the roundtrip/bi-directional work that can be done, from browser to IDE and back again, very simply and intuitively. (In a long conversation on the drive back home afterwards, the scenario of a designer laying out the UI in HTML and then handing the HTML to a developer for back-end work, a developer who would then find it convenient to open the HTML in a browser and quickly navigate from the browser to the resources within the IDE, was discussed and considered to be extremely interesting and worth considering adopting NetBeans for, for no other reason than that.) Later I attended a session by David Delabassee on Java EE 7, Hans Dockter on Gradle, and Sander Mak on cross-build injection attacks. I was sorry to have missed Martijn Verburg's session, which sounded like it was really fantastic, among others, such as Gerrit Grunwald. I did a session too, entitled "Unlocking the Java EE 6 Platform", which was very well attended, pretty much a full room, and the demo went very smoothly. I talked to many people, e.g., a long time with Hans Dockter about how cool Gradle is and how great the Gradle/NetBeans plugin is turning out to be. I also had a long conversation (and did a demo) with Chris Chedgey, from Structure101, after his session, which was incredibly well attended; very interesting how popular modularity is. I met several people for the first time, as well as some colleagues from past places I've worked at. All in all, it was a great conference, unfortunately too short, which was very well attended (clearly over 1000) people, with several international speakers, as well as international attendees such as Mattias Karlsson, Sweden JUG leader. And, unsurprisingly, I came across NetBeans Platform applications again, none of which I had ever heard of before. In each case, "our fat client application" was mentioned in passing, never as a main application, and never in a context where there are plans for the application to be migrated to the web or mobile, simply because doing so makes no business sense at all. Great times at JFall, looking forward to meeting with some of the people I met again soon.

    Read the article

  • Are Chief Digital Officers the Result of CMO/CIO Refusal to Change?

    - by Mike Stiles
    Apparently CDO no longer just stands for “Collateralized Debt Obligations.”  It stands for Chief Digital Officer. And they’re the ones who are supposed to answer the bat signal CEO’s are throwing into the sky, swoop in and POW! drive the transition of the enterprise to integrated digital systems. So imagine being a CMO or a CIO at such an enterprise and realizing it’s been determined that you are not the answer that’s needed. In fact, IntelligentHQ author Ashley Friedlein points out the very rise of the CDO is an admission of C-Suite failure to become savvy enough, quickly enough in modern technology. Is that fair? Despite the repeated drumbeat that CMO’s and CIO’s must enter a new era of cooperation and collaboration to enact the social-enabled enterprise, the verdict seems to be that if it’s happening at all, it’s not happening fast enough. Therefore, someone else is needed with the authority to make things happen. So who is this relatively new beast? Gartner VP David Willis says, “The Chief Digital Officer plays in the place where the enterprise meets the customer, where the revenue is generated, and the mission accomplished.” In other words, where the rubber meets the road. They aren’t just another “C” heading up a unit. They’re the CEO’s personal SWAT team, able to call the shots necessary across all units to affect what has become job one…customer experience. And what are the CMO’s and CIO’s doing while this is going on? Playing corporate games. Accenture reports 38% of CMOs say IT deliberately keeps them out of the loop, with 35% saying marketing’s needs aren’t a very high priority. 31% of CIOs say marketers don’t understand tech and regularly go around them for solutions. Fun! Meanwhile the CEO feels the need to bring in a parental figure to pull it all together. Gartner thinks 25% of all orgs will have a CDO by 2015 as CMO’s and particularly CIO’s (Peter Hinssen points out many CDO’s are coming “from anywhere but IT”) let the opportunity to be the agent of change their company needs slip away. Perhaps most interestingly, these CDO’s seem to be entering the picture already on the fast track. One consultancy counted 7 instances of a CDO moving into the CEO role, which, as this Wired article points out, is pretty astounding since nobody ever heard of the job a few years ago. And vendors are quickly figuring out that this is the person they need to be talking to inside the brand. The position isn’t without its critics. Forrester’s Martin Gill says the reaction from executives at some traditional companies to someone being brought in to be in charge of digital might be to wash their own hands of responsibility for all things digital – a risky maneuver given the pervasiveness of digital in business. They might not even be called Chief Digital Officers. They might be the Chief Customer Officer, Chief Experience Officer, etc. You can call them Twinkletoes if you want to, but essentially anyone who has the mandate direct from the CEO to enact modern technology changes not currently being championed by the CMO or CIO can be regarded as “boss.” @mikestiles @oraclesocialPhoto: freedigitalphotos.net

    Read the article

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previously known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I’d share some news about what’s coming next. I didn’t hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn’t go), so I think I’m free to share it. I’ve written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is like a “temporary temporary view”, scoped to a single query. Due to globally-scoped temporary objects using a two-hashes naming style, and session-scoped (or ‘local’) temporary objects a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in SQL Server. However, in a breakaway from ANSI-SQL, Microsoft is moving towards consistency with their naming. We know that a CTE is a “common table expression” – this is proving to be more strategic than you may have appreciated. Within the Microsoft product group, the term “Table Expression” is far more widely used than just CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn’t actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without ‘common’. In the past, Books Online has referred to a view as a “virtual table” (but notice that there is no SQL 2012 version of this page). However, it was generally decided that “virtual table” was a poor name because it wasn’t completely accurate, and it’s typically accepted that virtualisation and SQL is frowned upon. That page I linked to says “or stored query”, which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: “A view is a stored table expression (STE)”. This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow “CREATE VIEW” to work. Also consistent with Microsoft’s deprecation policy, the execution of any query that refers to an object created as a view (rather than the new “CREATE STE”), will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6) which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are “Common” because the Table Expression definition needs to be repeated over and over throughout a stored procedure. ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st. @rob_farley

    Read the article

  • Starting over and new to Ubuntu

    - by 2funnyyone
    We have been having repeated problems with our internet service and using Windows XP & SP3 (users and permissions); I see no need for them. I started with computers long before Windows. Ever since SP3 came out in 2009 I have had nothing but problems. I have lost so many computers to viruses and trojans, we just stack them up. We are with Qwest/Century Link, which is using advertising servers which I think is causing the problem. All the computers are networked together, which is not how I set them up. I believe Century Link is networking them through assignment of a domain for our home. This causes all the computers to crash twice. This is getting expensive. We tried buying new hard drives but they reinfect within hours of connecting to the internet. I also believe the modem, router and all computers are infected. I put ComboFix on this one and that is the only reason we are still online with this laptop. I am afraid to install new equipment because my partner and I are on SSDI and this costs a lot. I go to school at UOP and had to run off a flash drive and reboot this laptop to recovery every other day or so this past month. New plan is: We are getting ready to install new equipment but afraid to reinfect again. Need help to install new equipment. The plan is to use current internet services from Qwest/now Century Link. The list of new equipment in order: Century Link wireless modem is ZyXEL PK5000Z with 4 direct connect Ethernet ports Next Dell Optiplex 210L (used auction purchase) 2 GB RAM, 80 GB hard drive, Ubuntu 11.10 operating system Next Wireless D-Link router WBR-1310 with 4 direct connect Ethernet ports OK-------- Purchased Dell OEM disk for Repair or Reinstalling Windows XP Professional Operating system (2 roommates as well) All infected computers are Dell desktops or laptops with XP Pro Also purchasing Ubuntu 12.04 for 3 computers. We like the way it runs but are still learning it. Questions 1] How do we fdisk the infected computers without infecting the new system? We have DOS disks, but none of the computers has a floppy disk drive. We do have a new floppy disk drive and USB adapter we purchased from Amazon. 2] We are thinking Avast internet security because of the boot scan. We want all software loaded before reconnecting. We can manually load our internet provider information. We purchased StopZilla ($100 for 5 computers), but we are not sure that is what we need. We also need to know how to set up port security and which services we will need. Really lost at this part. We want to be safe when we go back on the internet. 3] Want to connect reloaded fdisk systems to the router as a public connection and no sharing. Do not want to network all computers. 4] Want parental/ownership control from the Ubuntu system for the internet connection (children and friends). Do we restrict at the modem and/or router? Any help would be a blessing. I do not want to go alone on this anymore.

    Read the article

  • E-Business Suite : Role of CHUNK_SIZE in Oracle Payroll

    - by Giri Mandalika
    Different batch processes in the Oracle Payroll flow have the ability to spawn multiple child processes (or threads) to complete the work in hand. The number of child processes to fork is controlled by the THREADS parameter in the APPS.PAY_ACTION_PARAMETERS view. THREADS parameter The default value for the THREADS parameter is 1, which is fine for a single-processor system but not optimal for modern multi-core, multi-processor systems. Setting the THREADS parameter to a value equal to or less than the total number of [virtual] processors available on the system may improve the performance of payroll processing. However, on the downside, since multiple child processes operate against the same set of payroll tables in the HR schema, the database may experience undesired consequences such as buffer busy waits and index contention, which results in giving up some of the gains achieved by using multiple child processes/threads to process the work. A couple of other action parameters, CHUNK_SIZE and CHUNK SHUFFLE, help alleviate the database contention. e.g., set a value for the THREADS parameter as shown below. CONNECT APPS/APPS_PASSWORD UPDATE PAY_ACTION_PARAMETERS SET PARAMETER_VALUE = DESIRED_VALUE WHERE PARAMETER_NAME = 'THREADS'; COMMIT; (I am not aware of any maximum value for the THREADS parameter) CHUNK_SIZE parameter The size of each commit unit for the batch process is controlled by the CHUNK_SIZE action parameter. In other words, chunking is the act of splitting the assignment actions into commit groups of desired size represented by the CHUNK_SIZE parameter. The default value is 20, and each thread processes one chunk at a time -- which means each child process inserts or processes 20 assignment actions at any time. When multiple threads are configured, each thread picks up a chunk to process, completes the assignment actions and then picks up another chunk. This is repeated until all the chunks are exhausted. It is possible to use different chunk sizes in different batch processes. During the initial phase of processing, CHUNK_SIZE number of assignment actions are inserted into the relevant table(s). When multiple child processes are inserting data at the same time into the same set of tables, as explained earlier, the database may experience contention. The default value of 20 is mostly optimal in such a case. Experiment with different values for the initial phase by adjusting the CHUNK_SIZE parameter by +/-10 and observe the performance impact. A larger value may make sense during the main processing phase. Again, experimentation is the key to finding a suitable value for your environment. Start with a large value such as 2000 for the chunk size, then increment or decrement the size by 500 at a time until an optimal value is found. e.g., set a value for the CHUNK_SIZE parameter as shown below. CONNECT APPS/APPS_PASSWORD UPDATE PAY_ACTION_PARAMETERS SET PARAMETER_VALUE = DESIRED_VALUE WHERE PARAMETER_NAME = 'CHUNK_SIZE'; COMMIT; The CHUNK_SIZE action parameter accepts a value that is as low as 1 or as high as 16000. CHUNK SHUFFLE parameter By default, chunks of assignment actions are processed sequentially by all threads - which may not be a good thing, especially given that all child processes/threads are performing similar actions against the same set of tables at almost the same time. By saying not a good thing, I mean to say that the default behavior leads to contention in the database (in data blocks, for example).
It is possible to relieve some of that database contention by randomizing the processing order of chunks of assignment actions. This behavior is controlled by the CHUNK SHUFFLE action parameter. Chunk processing is not randomized unless explicitly configured. e.g., set chunk shuffling as shown below. CONNECT APPS/APPS_PASSWORD UPDATE PAY_ACTION_PARAMETERS SET PARAMETER_VALUE = 'Y' WHERE PARAMETER_NAME = 'CHUNK SHUFFLE'; COMMIT; Finally, I recommend checking the following document out for additional details and other tunable pay action parameters that may speed up the processing of Oracle Payroll.     My Oracle Support Doc ID: 226987.1 Oracle 11i & R12 Human Resources (HRMS) & Benefits (BEN) Tuning & System Health Checks Also experiment with different combinations of parameters and values until the right set of action parameters and values is found for your deployment.
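    Before experimenting, it can help to confirm what the action parameters are currently set to. The query below is a simple sketch that assumes the same PARAMETER_NAME and PARAMETER_VALUE columns used by the UPDATE statements above; a parameter that has never been set explicitly may not have a row, in which case its default applies.

        CONNECT APPS/APPS_PASSWORD

        SELECT parameter_name, parameter_value
          FROM pay_action_parameters
         WHERE parameter_name IN ('THREADS', 'CHUNK_SIZE', 'CHUNK SHUFFLE');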

    Read the article

  • How to fix: Ubuntu 12.04 reboots after loading with elilo

    - by Casey
    I have an HP p6-2120 with CPU: AMD A6-3620 APU with Radeon Graphics RAM: 6GB BIOS: HO2_710.ROM v7.10 [AMI v7.10 4/19/2012] Disk: SATA1 (/dev/sda) - 1 TB (windows) Disk: SATA2 (/dev/sdb) - 1 TB partitioned using "parted -a optimal /dev/sdb" as follows: .. 1049KB 201MB FAT32 boot flag set .. 201MB 60GB ext2 (/) .. 68GB 78GB linux-swap(v1) (swap) .. 78GB 790GB ext4 (/home) .. - rest is "free" space reserved for other purposes (eventually) ubuntu: 12.04.1 LTS [specifically: Release 12.04 (precise) 64-bit] kernel: linux 3.2.0-29-generic I created a bootable EFI USB from the ISO (64-bit) which I downloaded. I can run and install from the USB without any problems. The BIOS is an EFI BIOS that appears to be capable of booting in either EFI or Legacy mode. Initially, I did the "standard" install with NOTHING on disk2, and let the installer configure everything. The net result of this was that when I started the computer and forced it into "boot" menu mode, it DOES NOT recognize SATA2 as an EFI drive, and when I attempt to "legacy" boot from it, I get the message "ERROR: No Boot Disk has been detected." The "standard" install created one large partition that consumed the entire disk. At that point, I manually partitioned the disk (using sudo parted -a optimal /dev/sdb) as described above. I selected the "other" install, and changed the /dev/sdb1 to "bios_grub", /dev/sdb2 as "/" (ext4), /dev/sdb3 as swap, and /dev/sdb4 as "/home". [Note: fearing that possibly elilo did not recognize ext4, I switched /dev/sdb2 to ext2 and re-installed] The net result was that the install appeared to trash the /dev/sdb1 partition so that it was NOT readable by anything. I re-formatted /dev/sdb1 as FAT32 and set the boot flag. I repeated the install ignoring the messages about no bios_grub partition. After several attempts to get GRUB2 to work, I switched to elilo. I downloaded the most recent version and copied it (elilo-3.14-ia64.efi) to /dev/sdb1/efi/boot/bootx64.efi. (The BIOS boot loader did not recognize it either as elilo-3.14.ia64.efi or as elilo.efi. Based on the advice in one of the web-pages I found, I renamed it to bootx64.efi. This worked.) In that same directory (/efi/boot), I copied the file pointed to by the link in /dev/sdb2/vmlinuz to /efi/boot/vmlinuz, and the file pointed to by the link in /dev/sdb2/initrd.img to /efi/boot/initrd.img. I created an elilo.conf file as follows: timeout=5000 prompt default=linux-boot image=vmlinuz label=linux-boot read-only initrd=initrd.img root=/dev/sdb2 The /efi/boot directory contains 4 files: bootx64.efi elilo.conf vmlinuz initrd.img When I power-cycle the computer and force the boot menu, drive2 shows up as an EFI bootable drive. When I select it, I get the elilo prompt. Pressing Enter, it appears to load the kernel (I have tried it with verbose=5, and there is a long string of messages, with the final one being a command line to load the kernel and a series of several dots that fly by), then the screen goes blank, and it reboots the computer. [Note: I have also tried substituting the UUID as found in the /etc/fstab of the installed system for the root directory. This had no effect.] This is a brief synopsis of several nights of fiddling with this. I would deeply appreciate any help you can give.

    Read the article

  • Looking into CSS3 Multiple backgrounds

    - by nikolaosk
    In this post I will be looking into a great feature in CSS3 called multiple backgrounds. With CSS3, we as web designers can specify multiple background images for box elements using nothing more than a simple comma-separated list. This is a very nice feature that can be useful on many websites. In this hands-on example I will be using Expression Web 4.0. This application is not free; you can use any HTML editor you like, such as Visual Studio 2012 Express edition, which you can download here. Before I go on with the actual demo, I will use http://www.caniuse.com to check support for CSS3 multiple backgrounds in the latest versions of modern browsers. Please have a look at this link; all modern browsers support this feature. I am typing this very simple HTML5 markup with an internal CSS style.

      <!DOCTYPE html>
      <html lang="en">
        <head>
          <title>Sold items</title>
          <style>
            #box {
              border: 1px solid #afafaf;
              width: 260px;
              height: 290px;
              background-image: url(shirt.png), url(sold.jpg);
              background-position: center bottom, right top;
              background-repeat: no-repeat, no-repeat;
            }
          </style>
        </head>
        <body>
          <header>
            <h1>CSS3 Multiple backgrounds</h1>
          </header>
          <div id="box">
          </div>
          <footer>
            <p>All Rights Reserved</p>
          </footer>
        </body>
      </html>

    Let me explain what I do here: multiple background images are specified using a comma-separated list of values for the background-image property. A comma-separated list is also used for the other background properties, such as background-repeat and background-position. So in the next three lines of CSS code

      background-image: url(shirt.png), url(sold.jpg);
      background-position: center bottom, right top;
      background-repeat: no-repeat, no-repeat;

    we have two images placed in the div element. The first is placed at the center bottom of the div element and the second at the right top position inside the div element. Neither image gets repeated. I view the page in IE 10 and the latest versions of Opera, Chrome and Firefox. Have a look at the picture below. Hope it helps!

    Read the article

  • Wireless device bug on 13.10. BCM4313 registers as eth1 instead of wlan0 and no internet access

    - by user205691
    My hotel WiFi requires me to log in with a username & password after connecting to the hotspot. So, my browser would open a page with username & password fields to log in and then connect to the internet. Unfortunately, Firefox & Chromium don't seem to work. I don't think it is browser related but rather a setting for the WiFi router or driver that is creating this issue. I am using the Broadcom 802.11 STA wireless driver (proprietary); I tried the open source one as well, but got the same result. The image linked below shows my WiFi connection setting & Chromium. The login page itself comes up after a long time, and after entering the credentials it keeps loading forever. It is the same for every other browser, so I don't think it's a browser issue but something to do with the WiFi setting or NetworkManager. Interestingly, I am able to connect to WiFi networks with a WPA key without any issue; the ad-hoc hotspot is the problem, and that is my regular home network. I hope I can get some help solving this issue!

    I have tried repeating the same hotspot after login from my Android phone, by creating a virtual repeater with a WPA key, and it works: I can browse on Ubuntu using this method, but I can't be doing this regularly. I tried loading the same login page of the hotel WiFi while browsing through the repeater WiFi created on my mobile (screenshot attached below), and the page loads up quick and easy. So does this mean something is wrong with the way NetworkManager handles ad-hoc connectivity & login? I installed wicd, but it crashes on startup and is not helpful at all.

    Screenshot of Chromium page
    Login page with repeated hotspot

    ifconfig in my terminal results:

    krishna@krishna-HP-ENVY-4-Notebook-PC:~$ ifconfig
    eth0      Link encap:Ethernet  HWaddr 28:92:4a:1d:54:fa
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    eth1      Link encap:Ethernet  HWaddr e0:06:e6:89:fa:49
              inet addr:10.24.1.71  Bcast:10.24.1.255  Mask:255.255.255.0
              inet6 addr: fe80::e206:e6ff:fe89:fa49/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:10940 errors:0 dropped:0 overruns:0 frame:348431
              TX packets:6611 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:7669631 (7.6 MB)  TX bytes:864195 (864.1 KB)
              Interrupt:17

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:2146 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2146 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:166120 (166.1 KB)  TX bytes:166120 (166.1 KB)

    I wonder why the wireless is configured under eth1? I think this was a bug with earlier Ubuntu versions, but is this normal in 13.10, or is there a wrong configuration here? The wireless device in my PC is a BCM4313 and I have installed bcmwl-kernel-source and wireless-tools to support the device. I also reinstalled the bcmwl kernel module as suggested on the Broadcom website, via the Synaptic package manager. Nothing has changed this situation. I tried booting into a live USB, and there the ifconfig results show the wireless under wlan0, and the wireless connects and loads the login page. So is the problem with the device configuration now? I really want to get this fixed before I start configuring other stuff like ATI graphics and overheating on the laptop; lack of internet access is too bad a bug for me. Any help is appreciated!

    Read the article

  • Undefined control sequence

    - by Jelle Fresen
    Hi, I am writing my Master's thesis with LaTeX, but I can't get the provided style to work. Specifically, I get the error 'Undefined control sequence' when using the function makeformaltitlepages, which is defined in mscthesis.sty. On the internet, the only answers I could find were the straightforward 'you probably made a typo' or 'you probably forgot to include the package', but I have reason to believe neither of those applies to me. I am quite sure that the function exists, because when I add a little verification using the @ifundefined command, the log file shows that the function actually does exist. And, as can be seen in the following piece of code, I also include the package:

      \usepackage{mscthesis}
      % setup information like author, company, title, etc.
      \begin{document}
      \formatmatter
      \thispagestyle{empty}
      \maketitle
      \makeatletter
      \@ifundefined{makeformaltitlepages}{\message{Function is not defined.}}{\message{Function is defined.}}
      \makeatother
      \makeformaltitlepages{\input{abstract}}
      % add chapters, sections, etc. and end the document

    Now, the output shows the line "Function is defined." just before the output of \maketitle (which I think is rather strange on its own, but that might be a flushing issue), followed by the following infinitely repeated error (well, cut off after 100 times by LaTeX):

      Function is defined.
      // some gibberish about font info
      ! Undefined control sequence.
      \GenericError ...
      #4 \errhelp \@err@ ...
      l.112 \makeformaltitlepages{}
      The control sequence at the end of the top line of your error message
      was never \def'ed. If you have misspelled it (e.g., `\hobx'), type `I'
      and the correct spelling (e.g., `I\hbox'). Otherwise just continue,
      and I'll forget about whatever was undefined.

    While the error keeps repeating, the line that starts with '#4' cycles between the following four lines:

      #4 \errhelp \@err@ ...
      \let \@err@ ...
      \@empty \def \MessageBreak...
      \endgroup

    OK, so, do any of you have a suggestion of how I might continue to hunt this bug? Or what blatantly obvious mistake did I make?

    Read the article

  • Code Golf: Code 39 Bar Code

    - by gwell
    The challenge
    The shortest code by character count to draw an ASCII representation of a Code 39 bar code.
    Wikipedia article about Code 39: http://en.wikipedia.org/wiki/Code_39

    Input
    The input will be a string of legal characters for Code 39 bar codes. This means 43 characters are valid: 0-9 A-Z (space) and -.$/+%. The * character will not appear in the input as it is used as the start and stop character.

    Output
    Each character encoded in Code 39 bar codes has nine elements, five bars and four spaces. Bars will be represented with # characters, and spaces will be represented with the space character. Three of the nine elements will be wide. The narrow elements will be one character wide, and the wide elements will be three characters wide. An inter-character space of a single space should be added between each character pattern. The pattern should be repeated so that the height of the bar code is eight characters high. The start/stop character * (bWbwBwBwb) would be represented like this:

      #   # # ### ### #
      #   # # ### ### #
      #   # # ### ### #
      #   # # ### ### #
      #   # # ### ### #
      #   # # ### ### #
      #   # # ### ### #
      #   # # ### ### #

    reading left to right: narrow bar, wide space, narrow bar, narrow space, wide bar, narrow space, wide bar, narrow space, narrow bar, followed by the inter-character space.

    The start and stop character * will need to be output at the start and end of the bar code. No quiet space will need to be included before or after the bar code. No check digit will need to be calculated. Full ASCII Code39 encoding is not required, just the standard 43 characters. No text needs to be printed below the ASCII bar code representation to identify the output contents. The character # can be replaced with another character of higher density if wanted. Using the full block character U+2588 would allow the bar code to actually scan when printed. A short sketch showing how one element pattern expands into a single row follows the test cases below.
Test cases Input: ABC Output: # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # # # ### ### # ### # # # ### # ### # # ### ### ### # # # # # ### ### # Input: 1/3 Output: # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # # # ### ### # ### # # # ### # # # # # ### ### # # # # # ### ### # Input: - $ (minus space dollar) Output: # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # # # ### ### # # # # ### ### # ### # ### # # # # # # # # ### ### # Code count includes input/output (full program).
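
    To make the element notation above concrete, here is a minimal Java sketch (not a golfed solution, and not part of the original challenge) that expands a pattern such as bWbwBwBwb into one row of # and space characters; the class and method names are purely illustrative.

      // Illustrative sketch only: expands a Code 39 element pattern into one ASCII row.
      // b/B = bar, w/W = space; an upper-case letter means the element is wide (three characters).
      public class Code39Row {

          static String expand(String pattern) {
              StringBuilder row = new StringBuilder();
              for (char c : pattern.toCharArray()) {
                  boolean wide = Character.isUpperCase(c);             // B or W -> three characters
                  char fill = (Character.toLowerCase(c) == 'b') ? '#' : ' ';
                  for (int i = 0; i < (wide ? 3 : 1); i++) {
                      row.append(fill);
                  }
              }
              return row.append(' ').toString();                       // trailing inter-character space
          }

          public static void main(String[] args) {
              // The start/stop character * (bWbwBwBwb) from the specification above.
              System.out.println(expand("bWbwBwBwb"));                 // "#   # # ### ### # "
          }
      }

    Printing that row eight times reproduces the * column shown in the specification; a full solution would additionally need the per-character pattern table for the 43 supported characters.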

    Read the article

  • Java (JSP): repeating the contentType header in a "sub-jsp"

    - by Webinator
    What happens when headers are repeated in a .jsp you include in another .jsp? For example if example.jsp starts with this: <?xml version="1.0" encoding="UTF-8"?> <jsp:root version="2.0" xmlns:jsp="http://java.sun.com/JSP/Page"> <jsp:directive.page contentType="text/html; charset=UTF-8" /> <div class="content"> <jsp:include page="support-header.jsp"/> ... (it includes support-header.jsp) And then support-header.jsp starts also with this: <?xml version="1.0" encoding="UTF-8"?> <jsp:root version="2.0" xmlns:jsp="http://java.sun.com/JSP/Page"> <jsp:directive.page contentType="text/html; charset=UTF-8" /> ... Is that a problem? Is it bad practice? What does concretely happen when you repeat several times a header that only corresponds to one header in the resulting .html page?
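
    One general rule that helps when reasoning about this: at the Servlet API level (which JSP pages compile down to), content-type changes are only honoured until the response is committed. The sketch below is a minimal, self-contained illustration of that rule using the classic javax.servlet API; the servlet itself is hypothetical and not part of the pages above.

      // Hypothetical stand-alone servlet illustrating the commit rule: setContentType()
      // calls made after the response has been committed are silently ignored.
      import java.io.IOException;
      import javax.servlet.ServletException;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      public class ContentTypeDemoServlet extends HttpServlet {
          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws ServletException, IOException {
              resp.setContentType("text/html; charset=UTF-8");  // effective: set before any output
              resp.getWriter().write("<p>some markup</p>");
              resp.flushBuffer();                               // commits the response
              resp.setContentType("text/plain");                // ignored: already committed
          }
      }

    Whether a repeated contentType directive in an included page is flagged as an error or silently ignored can also depend on the container and on how the include is performed, so the container's documentation is worth checking too.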

    Read the article

  • Self-reference entity in Hibernate

    - by Marco
    Hi guys, I have an Action entity, that can have other Action objects as child in a bidirectional one-to-many relationship. The problem is that Hibernate outputs the following exception: "Repeated column in mapping for collection: DbAction.childs column: actionId" Below the code of the mapping: <?xml version="1.0"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping> <class name="DbAction" table="actions"> <id name="actionId" type="short" /> <property not-null="true" name="value" type="string" /> <set name="childs" table="action_action" cascade="all-delete-orphan"> <key column="actionId" /> <many-to-many column="actionId" unique="true" class="DbAction" /> </set> <join table="action_action" inverse="true" optional="false"> <key column="actionId" /> <many-to-one name="parentAction" column="actionId" not-null="true" class="DbAction" /> </join> </class> </hibernate-mapping>
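
    Note that in the mapping above the <key> and the <many-to-many> element of the childs collection both point at the same actionId column, which is the kind of duplication the "Repeated column" error typically points at. For comparison, a self-referencing parent/children relationship is more commonly mapped with a dedicated parent column. The annotation-based sketch below only illustrates that shape; the parentActionId column is hypothetical and not taken from the original schema.

      // Hypothetical annotation-based sketch of a self-referencing parent/children
      // relationship; table and column names here are illustrative only.
      import java.util.HashSet;
      import java.util.Set;
      import javax.persistence.CascadeType;
      import javax.persistence.Column;
      import javax.persistence.Entity;
      import javax.persistence.Id;
      import javax.persistence.JoinColumn;
      import javax.persistence.ManyToOne;
      import javax.persistence.OneToMany;
      import javax.persistence.Table;

      @Entity
      @Table(name = "actions")
      public class DbAction {

          @Id
          @Column(name = "actionId")
          private short actionId;

          @Column(name = "value", nullable = false)
          private String value;

          // Each child row stores its parent's id in its own column, so the key
          // column and the element column no longer collide.
          @ManyToOne
          @JoinColumn(name = "parentActionId")
          private DbAction parentAction;

          @OneToMany(mappedBy = "parentAction", cascade = CascadeType.ALL, orphanRemoval = true)
          private Set<DbAction> childs = new HashSet<DbAction>();
      }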

    Read the article

  • Jquery Fancybox not working after postback

    - by Lee Englestone
    I have a Fancybox (or, more accurately, a number of fancy boxes) on an ASP.NET page. My Fancybox (jQuery plugin) works fine until a postback occurs on the page, then it refuses to work. Any thoughts? Anyone experienced similar behaviour?

    UPDATE: Some code. I have a databound repeater with a fancybox on each repeating item. They are instantiated by (outside the repeater):

      $(document).ready(function() {
          $("a.watchvideo").fancybox({
              'overlayShow': false,
              'frameWidth' : 480,
              'frameHeight' : 400
          });
      });

    The anchor tag is repeated: href="#watchvideo_<%#Eval("VideoId")%>", as is a div with id="watchvideo_<%#Eval("VideoId") %>", as is a script element that instantiates the flash movies. Yes, the VideoIds are being output to the page.

    UPDATE: It's not a problem with the flash. I've tried it without the flash, and it won't even pop a window with a simple message in it.

    UPDATE: I wonder if it is the UpdatePanel. http://stackoverflow.com/questions/301473/rebinding-events-in-jquery-after-ajax-update-updatepanel

    -- lee

    Read the article

  • asp.net mvc2 - controller for master page?

    - by ile
    I've just finished my first ASP.NET MVC (2) CMS. The next step is to build a website that will show data from the CMS's database. This is the website design:

    #1 (red box) - displays article categories. ViewModel:

      public class CategoriesDisplay
      {
          public CategoriesDisplay() { }
          public int CategoryID { set; get; }
          public string CategoryTitle { set; get; }
      }

    #2 (brown box) - displays the last x articles, skipping those shown in green box #3. ViewModel:

      public class ArticleDisplay
      {
          public ArticleDisplay() { }
          public int CategoryID { set; get; }
          public string CategoryTitle { set; get; }
          public int ArticleID { set; get; }
          public string ArticleTitle { set; get; }
          public string URLArticleTitle { set; get; }
          public DateTime ArticleDate;
          public string ArticleContent { set; get; }
      }

    #3 (green box) - displays the last x articles. Uses the same ViewModel as brown box #2.

    #4 (blue box) - displays a list of upcoming events. Uses dataContext.Model.Event as its ViewModel.

    Boxes #1, #2 and #4 will repeat all over the site and they are part of the Master Page. So, my question is: what is the best way to transfer this data from the Model to the Controller and finally to the View pages? Should I make a controller for the master page and a ViewModel class that wraps all these classes together, OR should I create partial Views for each of these boxes and make each of them inherit the appropriate class (if it is even possible that it works this way?), OR should I put this repeated code in all controllers and do all additional data transfer via ViewData, which would probably be the worst way :), OR is there maybe a better and simpler way that I don't know/see? Thanks in advance, Ile

    Read the article

  • Sharepoint/WSS Reporting Services Integration woes

    - by mhollers
    After a number of failed attempts I seem to have successfully installed the Reporting Services add-in to my WSS farm. However, I seem to be missing most of the enhanced functionality, e.g. no report library template and no Report Center site template; the only additional functionality available is the Report Viewer web part.

    Background: a 2-server WSS 3.0 farm with CA (Central Admin), the WFE (web front end) and the Reporting Services add-in installed on one server, and SQL Server 2005 SP2 with Reporting Services (RS) and all databases installed on the other. I have a VM environment set up and have rolled this back and repeated it a number of times. I have configured RS within CA and activated the 'Report Server Integration Feature'. Within 'site settings' I have a 'Reporting Services' heading with a 'manage shared schedules' item/link; I am not sure whether there should be other options. I was of the understanding that to view reports within SharePoint I could either create a new site using the 'Report Center' template or add a report library to an existing site, neither of which seems available. I am at a loss as to what to do, as all the online information seems to deal with installation issues/errors, which I seem to have eventually got past.

    Read the article
