Search Results

Search found 110935 results on 4438 pages for 'google summer of code 11'.


  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be "one table for every entity persistent class." This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both "is a" and "has a" relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don't support type inheritance—and even when it's available, it's usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:
    Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
    Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
    Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.
    I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and learn "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.
    Inheritance Mapping with Entity Framework Code First
    All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I'm a big fan of the EF Code First approach, and I'm pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, not only does Code First fully support all the strategies, it also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4.
    A Note For Those Who Follow Other Entity Framework Approaches
    If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanations around each of the strategies apply perfectly to all approaches, be it Code First or others.
    A Note For Those Who Are New to Entity Framework and Code-First
    If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First, like Complex Types and Shared Primary Key Associations.
    A Top-Down Development Scenario
    These posts take a top-down approach: they assume that you're starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C#, and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you're working bottom up, starting with existing database tables.
    I'll show some tricks along the way that help you deal with nonperfect table layouts. Let's start with the mapping of entity inheritance.
    The Domain Model
    In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We allow various billing types and represent them as subclasses of the BillingDetail class. For now, we support CreditCard and BankAccount.
    Implement the Object Model with Code First
    As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, which is BillingDetail. Code First will find the other classes in the hierarchy based on Reachability Convention.
    public abstract class BillingDetail
    {
        public int BillingDetailId { get; set; }
        public string Owner { get; set; }
        public string Number { get; set; }
    }
    public class BankAccount : BillingDetail
    {
        public string BankName { get; set; }
        public string Swift { get; set; }
    }
    public class CreditCard : BillingDetail
    {
        public int CardType { get; set; }
        public string ExpiryMonth { get; set; }
        public string ExpiryYear { get; set; }
    }
    public class InheritanceMappingContext : DbContext
    {
        public DbSet<BillingDetail> BillingDetails { get; set; }
    }
    This object model is all that is needed to enable inheritance with Code First. If you put this in your application, you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.
    Polymorphic Queries
    LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:
    IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
    List<BillingDetail> billingDetails = linqQuery.ToList();
    Or the same query in EntitySQL:
    string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
    ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                             .CreateQuery<BillingDetail>(eSqlQuery);
    List<BillingDetail> billingDetails = objectQuery.ToList();
    linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class; the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.
    Non-polymorphic Queries
    All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which return only instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
    For example, the following query returns only instances of BankAccount:
    IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;
    EntitySQL has an OFTYPE operator that does the same thing:
    string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";
    In fact, the above query with the OFTYPE operator is a short form of the following query expression, which uses the TREAT and IS OF operators:
    string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                         FROM BillingDetails AS b
                         WHERE b IS OF(Model.BankAccount)";
    (Note that in the above query, Model.BankAccount is the fully qualified name of the BankAccount class. You need to replace "Model" with your own namespace name.)
    Table per Class Hierarchy (TPH)
    An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don't have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy.
    This mapping strategy is a winner in terms of both performance and simplicity. It's the best-performing way to represent polymorphism—both polymorphic and nonpolymorphic queries perform well—and it's even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward.
    Discriminator Column
    As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn't a property of the persistent class in our object model; it's used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names—in this case, "BankAccount" or "CreditCard". EF Code First automatically sets and retrieves the discriminator values.
    TPH Requires Properties in Subclasses to Be Nullable in the Database
    TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. In a typical mapping scenario, however, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won't have a CardType, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.
    TPH Violates the Third Normal Form
    Another important issue is normalization. We've created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
    Generated SQL Query
    Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:
    SELECT
     [Extent1].[Discriminator] AS [Discriminator],
     [Extent1].[BillingDetailId] AS [BillingDetailId],
     [Extent1].[Owner] AS [Owner],
     [Extent1].[Number] AS [Number],
     [Extent1].[BankName] AS [BankName],
     [Extent1].[Swift] AS [Swift],
     [Extent1].[CardType] AS [CardType],
     [Extent1].[ExpiryMonth] AS [ExpiryMonth],
     [Extent1].[ExpiryYear] AS [ExpiryYear]
    FROM [dbo].[BillingDetails] AS [Extent1]
    WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')
    And the non-polymorphic query for the BankAccount subclass generates this SQL statement:
    SELECT
     [Extent1].[BillingDetailId] AS [BillingDetailId],
     [Extent1].[Owner] AS [Owner],
     [Extent1].[Number] AS [Number],
     [Extent1].[BankName] AS [BankName],
     [Extent1].[Swift] AS [Swift]
    FROM [dbo].[BillingDetails] AS [Extent1]
    WHERE [Extent1].[Discriminator] = 'BankAccount'
    Note how Code First adds a restriction on the discriminator column and also how it selects only those columns that belong to the BankAccount entity.
    Change the Discriminator Column Data Type and Values with the Fluent API
    Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:
    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<BillingDetail>()
                    .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                    .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
    }
    Changing the data type of the discriminator column is also interesting. In the above code, we passed strings to the HasValue method, but this method is defined to accept an object:
    public void HasValue(object value);
    Therefore, if, for example, we pass a value of type int to it, then Code First not only uses our desired values (i.e. 1 and 2) in the discriminator column but also changes the column type to (INT, NOT NULL):
    modelBuilder.Entity<BillingDetail>()
                .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
                .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));
    Summary
    In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.
    References
    ADO.NET team blog
    Java Persistence with Hibernate book

    Read the article

  • Segmentation Fault (11) with modwsgi on CentOS 5.7 when running pyramid app

    - by carbotex
    I'm getting Segmentation fault error when trying to access the "Hello World" pyramid app. This error only occurs when running against CentOS 5.7 setup, but no problem whatsoever when tested against OSX and Arch Linux. Could it be a CentOS specific issue? [error] [client 10.211.55.2] Premature end of script headers: pyramid.wsgi [notice] child pid 31212 exit signal Segmentation fault (11) I have tried to follow the troubleshooting guides posted here http://code.google.com/p/modwsgi/wiki/InstallationIssues which suggests that it might caused by missing Shared Library. A quick check reveals that shared library is not the issue. [centos57@localhost modules]$ ldd mod_wsgi.so linux-gate.so.1 => (0x00e6a000) libpython2.7.so.1.0 => /home/python/lib/libpython2.7.so.1.0 (0x0024c000) libpthread.so.0 => /lib/libpthread.so.0 (0x00da8000) libdl.so.2 => /lib/libdl.so.2 (0x00cd6000) libutil.so.1 => /lib/libutil.so.1 (0x00110000) libm.so.6 => /lib/libm.so.6 (0x0085c000) libc.so.6 => /lib/libc.so.6 (0x00682000) /lib/ld-linux.so.2 (0x0012b000) Then I found another clue that might be able to solve my problem. Unfortunately libexpat is not the source of the problem. http://code.google.com/p/modwsgi/wiki/IssuesWithExpatLibrary [centos57@localhost bin]$ ldd ~/httpd/bin/httpd | grep expat libexpat.so.1 => /usr/local/lib/libexpat.so.1 (0x00b00000) [centos57@localhost bin]$ strings /usr/local/lib/libexpat.so.1 | grep expat libexpat.so.1 expat_2.0.1 [centos57@localhost bin]$ python Python 2.7.2 (default, Nov 26 2011, 08:08:44) [GCC 4.1.2 20080704 (Red Hat 4.1.2-51)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import pyexpat >>> pyexpat.version_info (2, 0, 0) >>> I've been pulling my hair out trying to figure out what I'm missing in my setup. Why the problem only occurs with CentOS? Here is the detailed setup: Apache 2.2.19 Python 2.7.2 mod_wsgi-3.3 /home/httpd/conf/extra/pyramid.wsgi from pyramid.paster import get_app application = get_app('/home/homecamera/hcadmin/root/production.ini', 'main') /home/httpd/conf/extra/modwsgi.conf LoadModule wsgi_module modules/mod_wsgi.so WSGIScriptAlias /myapp /home/root/test.wsgi <Directory /home/root> WSGIProcessGroup pyramid Order allow,deny Allow from all </Directory> # Use only 1 Python sub-interpreter. Multiple sub-interpreters # play badly with C extensions. WSGIApplicationGroup %{GLOBAL} WSGIPassAuthorization On WSGIDaemonProcess pyramid user=daemon group=daemon processes=1 \ threads=4 \ python-path=/home/python/lib/python2.7/site-packages WSGIScriptAlias /hello /home/httpd/conf/extra/pyramid.wsgi <Directory /home/httpd/conf/extra> WSGIProcessGroup pyramid Order allow,deny Allow from all </Directory> Again this same setup works on OSX and Arch Linux but not on CentOS 5.7. Could someone out there point me to the right direction before I ran out of my hair. ==================================================================================== When apache started with gdb, I got a couple of warnings Reading symbols from /home/httpd/bin/httpd...done. Attaching to program: /home/httpd/bin/httpd, process 1821 warning: .dynamic section for "/lib/libcrypt.so.1" is not at the expected address warning: difference appears to be caused by prelink, adjusting expectations warning: .dynamic section for "/lib/libutil.so.1" is not at the expected address warning: difference appears to be caused by prelink, adjusting expectations gdb output. After hitting refresh button, to load pyramid. (gdb) cont Continuing. 
warning: .dynamic section for "/usr/lib/libgssapi_krb5.so.2" is not at the expected address warning: difference appears to be caused by prelink, adjusting expectations warning: .dynamic section for "/usr/lib/libkrb5.so.3" is not at the expected address warning: difference appears to be caused by prelink, adjusting expectations warning: .dynamic section for "/lib/libresolv.so.2" is not at the expected address warning: difference appears to be caused by prelink, adjusting expectations Program received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x8edbb90 (LWP 1824)] 0x0814c120 in EVP_PKEY_CTX_dup () apache_error_log [info] mod_wsgi (pid=1821): Starting process 'pyramid' with threads=1. [info] mod_wsgi (pid=1821): Initializing Python. [info] mod_wsgi (pid=1821): Attach interpreter ''. [info] mod_wsgi (pid=1821): Create interpreter 'web.domain.com:20000|/hcadmin'. [info] [client 10.211.55.2] mod_wsgi (pid=1821, process='pyramid', application='web.domain.com:20000|/hcadmin'): Loading WSGI script '/home/httpd/conf/extra/pyramid.wsgi'. [error] hello 1
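    One way to narrow this down (not covered above) is to point mod_wsgi at a bare-bones WSGI script with no Pyramid imports; if serving that also segfaults, the fault lies with the mod_wsgi/Python build rather than the application. A minimal sketch, assuming a hypothetical /home/httpd/conf/extra/test.wsgi wired up with its own WSGIScriptAlias:
        # test.wsgi -- minimal WSGI app with no third-party imports.
        # If this also dies with "exit signal Segmentation fault (11)", the problem
        # is in the embedded interpreter (e.g. a mismatched libpython), not in Pyramid.
        def application(environ, start_response):
            import sys
            body = "Hello from Python %s\n" % sys.version  # shows which interpreter mod_wsgi loaded
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]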

    Read the article

  • Save searches in Google Reader

    - by pinniger
    Ok, I'm trying to find a way to search my RSS feeds in Google Reader every day for certain phrases. If any of the phrases are found, I want to be notified. I thought Google Alerts would do this no problem, but it does not. Does anybody know of any services or any other way of doing this?
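    Neither Reader nor Alerts does this out of the box; one workaround is a small script that polls the feeds directly and flags entries containing the phrases. A minimal sketch, assuming the third-party feedparser library and placeholder feed URLs and phrases:
        import feedparser  # third-party: pip install feedparser

        FEEDS = ["http://example.com/rss.xml"]          # placeholder feed URLs
        PHRASES = ["google summer of code", "pyramid"]  # placeholder phrases

        def scan(feeds, phrases):
            """Yield (feed, title, link) for every entry that mentions one of the phrases."""
            for url in feeds:
                for entry in feedparser.parse(url).entries:
                    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
                    if any(p.lower() in text for p in phrases):
                        yield url, entry.get("title", ""), entry.get("link", "")

        if __name__ == "__main__":
            for feed, title, link in scan(FEEDS, PHRASES):
                print("%s: %s (%s)" % (feed, title, link))  # replace with e-mail/notification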

    Read the article

  • Is Google Desktop Search Safe?

    - by JW
    Somebody told me that Google DS is unsafe and that Google would save some kind of list of user data about the files on my PC... can anybody tell me something about it? Any experience? Greetz, JW

    Read the article

  • svn path in google apps script

    - by deepasun
    Hi, I want to write a Google Apps Script in a Google Docs spreadsheet that automatically updates the SVN revision of a particular component (which is in one cell of that spreadsheet) when I run the script.

    Read the article

  • Fast way to search stackoverflow.com using google

    - by eSKay
    Every time I have to search for something on stackoverflow.com using Google, I have to type the rather long <search term> site:stackoverflow.com. Is there some way to speed up the process, so that I need not type the whole 23 characters of site:stackoverflow.com each and every time? I am using Google Chrome.
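    The site: restriction is just part of the query string, so besides Chrome's own custom search engines it can also be baked into a small launcher script. A minimal sketch (Python 3); the default search term is only an example:
        import sys
        import webbrowser
        from urllib.parse import quote_plus

        def search_so(term):
            # Prepend the site: restriction so it never has to be typed by hand.
            query = quote_plus("site:stackoverflow.com " + term)
            webbrowser.open("https://www.google.com/search?q=" + query)

        if __name__ == "__main__":
            search_so(" ".join(sys.argv[1:]) or "segmentation fault")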

    Read the article

  • ESX3.5 Cluster & MD3000i -- Both servers see iSCSI Targets, Only one server can use partition.

    - by GruffTech
    Alright. First and foremost, Warning. This is a bigger-then-normal question. I like to be thorough and try to eliminate all possible "easymode" answers, as well as give everyone a feel of what i've tried. I've included several images of our setup and the problem it is having.. TLDR Version: So I've followed the guides located here: ESX Deployment Guide V1 this is the guide Dell has sent me to setup two ESX3.5 servers mounting a Dell MD3000i. It doesn't work. Both servers can't use the same storage partition on the MD3000. Both servers see it, but only one server can actually use it. (that server being whatever server created the partition on the target.) Both ESX servers are members of the Host Group. Full Version I have 2 ESX3.5 Servers (10.0.7.102, also called EPI2, and 10.0.7.103, also called EPI3.) connected to a iSCSI SAN Device (Dell MD3000i). Both ESX servers can "scan" the SAN and see the LUNS. Part One: MD3000i Storage On the MD3000i, Both servers are in my host group. I have two partitions, VM1 and VM2, both 1.6TB (vmware doesn't like anything past 2tb.) And you can even see that the ESX servers are targetting the MD3000 just fine. Part Two: The ESX Servers Figure 1. So as you can see above, Both ESX Servers (10.0.7.102 and 10.0.7.103) are able to see and scan the MD3000i SAN. Figure 2. Above is the storage both servers see. I created the storage partition on EPI2 (102). I then Extended the partition to include the second LUN for a grand total of 3.27 TB of storage. When i "rescan" on 103 (the server not mounting the partition), I get the below log in log/messages. Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). being the only line that grabs my attentions. (EPI3 is the server name) Mar 11 10:41:04 epi3 vmkiscsid[5436]: Connected to Discovery Address 192.168.130.101 Mar 11 10:41:04 epi3 vmkiscsid[5437]: Connected to Discovery Address 192.168.130.102 Mar 11 10:41:04 epi3 vmkiscsid[5438]: Connected to Discovery Address 192.168.131.101 Mar 11 10:41:04 epi3 vmkiscsid[5439]: Connected to Discovery Address 192.168.131.102 Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 0 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdb : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Device id info for sdb: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0xa2 0x0 0x0 0x15 0xe2 0x4d 0x75 0xf6 0x99 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Id for sdb 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0xa2 0x00 0x00 0x15 0xe2 
0x4d 0x75 0xf6 0x99 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:17 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0 Mar 11 10:41:17 epi3 kernel: scan_scsis starting finish Mar 11 10:41:17 epi3 kernel: SCSI device sdb: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:17 epi3 kernel: sdb: sdb1 Mar 11 10:41:17 epi3 kernel: scan_scsis done with finish Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 1 Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735 Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdc : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Device id info for sdc: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0x86 0x0 0x0 0xd 0xb7 0x4d 0x75 0xf2 0x77 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0 Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Id for sdc 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0x86 0x00 0x00 0x0d 0xb7 0x4d 0x75 0xf2 0x77 0x4d 0x44 0x33 0x30 0x30 0x30 Mar 11 10:41:18 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: Attached scsi disk sdc at scsi2, channel 0, id 0, lun 1 Mar 11 10:41:18 epi3 kernel: scan_scsis starting finish Mar 11 10:41:18 epi3 kernel: SCSI device sdc: 3509329920 512-byte hdwr sectors (1797751 MB) Mar 11 10:41:18 epi3 kernel: sdc: sdc1 Mar 11 10:41:18 epi3 kernel: scan_scsis done with finish Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4). Mar 11 10:41:18 epi3 kernel: scsi singledevice 1 0 0 0 Things I've Tried: Removing iSCSI targets from only 103, disabling iSCSI, rebooting, enabled iSCSI, re-adding targets, rescan. Same result. Removing partition on 102, Formatted partition on 103 instead. Same result, except flipped. 103 can use storage, 102 can not. Starting Over. Removing all iSCSI Targets on both ESX Boxes, disabling iSCSI, turning off the firewall for iSCSI, rebooting ESX. Then on the MD3000, Removed the Host Group, Removed the Host-to-Virtual Mappings, Restarted the SAN. Followed the Documentation again, same result. Both servers see the storage, but only one server can use it. Disabling and Re-enabling VMware DRS and HA. Same result. Flat-out turning off VMware DRS and HA, and doing the "start over" step to see if maybe that borked it. Same Result. I'm kinda loosing my mind here, Everything i read online says "just partition it and if the ESX boxes can see the targets, it just works".... well crap. Any ideas, any other things to try? 
    Can anyone at least point me in the right direction? I'm really tired of working from 1am till 4am (our maintenance hours).

    Read the article

  • Server randomly freezes

    - by PsySkeletor
    Im facing a very strange issue, my debian squeeze freezes up always at night (Berlin, time). Here is what i get from a time and after doing this a few times, it becomes frozen and must be hard-reset. From /var/log/messages Dec 11 01:36:11 srv156 kernel: [125983.204251] CPU 1: Dec 11 01:36:11 srv156 kernel: [125983.204251] Modules linked in: xt_multiport nf_conntrack_ipv4 nf_defrag_ipv4 xt_recent xt_state nf_conntrack xt_tcpudp iptable_filter ip_tables x_tables hwmon_vid snd_hda_codec_atihdmi snd_hda_intel snd_hda_codec snd_hwdep snd_pcm radeon snd_timer ttm drm_kms_helper snd k10temp i2c_piix4 soundcore snd_page_alloc edac_core parport_pc drm i2c_algo_bit i2c_core shpchp pci_hotplug pcspkr edac_mce_amd parport wmi evdev processor button ext3 jbd mbcache raid1 md_mod sd_mod crc_t10dif ata_generic ahci ohci_hcd pata_atiixp e100 mii libata xhci floppy ehci_hcd thermal thermal_sys usbcore scsi_mod nls_base [last unloaded: i2c_dev] Dec 11 01:36:11 srv156 kernel: [125983.204251] Pid: 758, comm: flush-9:0 Tainted: G B 2.6.32-5-amd64 #1 GA-78LMT-USB3 Dec 11 01:36:11 srv156 kernel: [125983.204251] RIP: 0010:[<ffffffff810b3506>] [<ffffffff810b3506>] find_get_pages_tag+0x66/0xdd Dec 11 01:36:11 srv156 kernel: [125983.204251] RSP: 0018:ffff8804235e7b30 EFLAGS: 00000286 Dec 11 01:36:11 srv156 kernel: [125983.204251] RAX: ffffffffffffffff RBX: ffff8804235e7c00 RCX: 0000000000000000 Dec 11 01:36:11 srv156 kernel: [125983.204251] RDX: 0000000000040000 RSI: ffffea000496b2a8 RDI: ffffea000496b2a0 Dec 11 01:36:11 srv156 kernel: [125983.204251] RBP: ffffffff8101166e R08: ffff8804235e7af0 R09: 0000000000000000 Dec 11 01:36:11 srv156 kernel: [125983.204251] R10: 0000000000000000 R11: 0000000000040000 R12: ffff8804235e7c08 Dec 11 01:36:11 srv156 kernel: [125983.204251] R13: 0000000d22678a20 R14: ffff8804235e7af0 R15: 00000000091b9060 Dec 11 01:36:11 srv156 kernel: [125983.204251] FS: 0000000000000000(0000) GS:ffff880010440000(0000) knlGS:000000007ebf7b70 Dec 11 01:36:11 srv156 kernel: [125983.204522] CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b Dec 11 01:36:11 srv156 kernel: [125983.204522] CR2: 00000000dec86000 CR3: 0000000001001000 CR4: 00000000000006e0 Dec 11 01:36:11 srv156 kernel: [125983.204522] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Dec 11 01:36:11 srv156 kernel: [125983.204522] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Dec 11 01:36:11 srv156 kernel: [125983.204522] Call Trace: Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810bb792>] ? pagevec_lookup_tag+0x1a/0x21 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810ba330>] ? write_cache_pages+0x162/0x327 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810b9d48>] ? __writepage+0x0/0x25 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff8110758a>] ? writeback_single_inode+0xe7/0x2da Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81108290>] ? writeback_inodes_wb+0x424/0x4ff Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81108497>] ? wb_writeback+0x12c/0x1ab Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff8110870d>] ? wb_do_writeback+0x14f/0x165 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81108754>] ? bdi_writeback_task+0x31/0xaa Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810c8664>] ? bdi_start_fn+0x0/0xd0 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810c86d4>] ? bdi_start_fn+0x70/0xd0 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff810c8664>] ? 
bdi_start_fn+0x0/0xd0 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81064ac1>] ? kthread+0x79/0x81 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81011baa>] ? child_rip+0xa/0x20 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81064a48>] ? kthread+0x0/0x81 Dec 11 01:36:11 srv156 kernel: [125983.204522] [<ffffffff81011ba0>] ? child_rip+0x0/0x20 From /var/log/syslog Dec 10 21:20:29 srv156 kernel: [110625.162930] BUG: Bad page map in process java pte:14fa4f067 pmd:424b54067 Dec 10 21:20:29 srv156 kernel: [110625.162937] page:ffffea000496c148 flags:0200000000000878 count:2 mapcount:-1 mapping:ffff88014f8d7de8 index:2f4 Dec 10 21:20:29 srv156 kernel: [110625.162946] addr:0000000009096000 vm_flags:00100077 anon_vma:ffff880422410d40 mapping:(null) index:9096 Dec 10 21:20:29 srv156 kernel: [110625.162955] Pid: 21356, comm: java Tainted: G B 2.6.32-5-amd64 #1 Dec 10 21:20:29 srv156 kernel: [110625.162961] Call Trace: Dec 10 21:20:29 srv156 kernel: [110625.162966] [<ffffffff810ca4bf>] ? print_bad_pte+0x232/0x24a Dec 10 21:20:29 srv156 kernel: [110625.162973] [<ffffffff810cb56f>] ? unmap_vmas+0x62d/0x931 Dec 10 21:20:29 srv156 kernel: [110625.162980] [<ffffffff810cfc74>] ? exit_mmap+0xc4/0x148 Dec 10 21:20:29 srv156 kernel: [110625.162986] [<ffffffff8104bbc1>] ? mmput+0x3c/0xdf Dec 10 21:20:29 srv156 kernel: [110625.162992] [<ffffffff8104f81e>] ? exit_mm+0x102/0x10d Dec 10 21:20:29 srv156 kernel: [110625.162998] [<ffffffff81051243>] ? do_exit+0x1f8/0x6c9 Dec 10 21:20:29 srv156 kernel: [110625.163004] [<ffffffff81071abb>] ? futex_wake+0xd6/0xe7 Dec 10 21:20:29 srv156 kernel: [110625.163010] [<ffffffff8105178a>] ? do_group_exit+0x76/0x9d Dec 10 21:20:29 srv156 kernel: [110625.163016] [<ffffffff8105df9f>] ? get_signal_to_deliver+0x310/0x339 Dec 10 21:20:29 srv156 kernel: [110625.163023] [<ffffffff81010037>] ? do_notify_resume+0x87/0x73f Dec 10 21:20:29 srv156 kernel: [110625.163029] [<ffffffff810cc664>] ? handle_mm_fault+0x7aa/0x80f Dec 10 21:20:29 srv156 kernel: [110625.163036] [<ffffffff81073f14>] ? compat_sys_futex+0x10d/0x12b Dec 10 21:20:29 srv156 kernel: [110625.163043] [<ffffffff812fb546>] ? do_page_fault+0x2e0/0x2fc Dec 10 21:20:29 srv156 kernel: [110625.163049] [<ffffffff81010e0e>] ? int_signal+0x12/0x17 Dec 10 21:20:29 srv156 kernel: [110625.163114] BUG: Bad page state in process java pfn:14fa0c Dec 10 21:20:29 srv156 kernel: [110625.163120] page:ffffea000496b2a0 flags:020000000002001c count:0 mapcount:-1 mapping:ffff88039dc0db30 index:11e3 Dec 10 21:20:29 srv156 kernel: [110625.164563] Pid: 21356, comm: java Tainted: G B 2.6.32-5-amd64 #1 Dec 10 21:20:29 srv156 kernel: [110625.164570] Call Trace: Dec 10 21:20:29 srv156 kernel: [110625.164578] [<ffffffff810b71a9>] ? bad_page+0x116/0x129 Dec 10 21:20:29 srv156 kernel: [110625.164586] [<ffffffff810b7692>] ? free_pages_check+0x38/0x57 Dec 10 21:20:29 srv156 kernel: [110625.164595] [<ffffffff810b89cf>] ? free_hot_cold_page+0x46/0x190 Dec 10 21:20:29 srv156 kernel: [110625.164603] [<ffffffff810b8b82>] ? __pagevec_free+0x69/0x7f Dec 10 21:20:29 srv156 kernel: [110625.164611] [<ffffffff810bba3f>] ? release_pages+0x137/0x18d Dec 10 21:20:29 srv156 kernel: [110625.164620] [<ffffffff810d8559>] ? free_pages_and_swap_cache+0x57/0x73 Dec 10 21:20:29 srv156 kernel: [110625.164629] [<ffffffff810cb5ed>] ? unmap_vmas+0x6ab/0x931 Dec 10 21:20:29 srv156 kernel: [110625.164637] [<ffffffff810cfc74>] ? exit_mmap+0xc4/0x148 Dec 10 21:20:29 srv156 kernel: [110625.164644] [<ffffffff8104bbc1>] ? 
mmput+0x3c/0xdf Dec 10 21:20:29 srv156 kernel: [110625.164652] [<ffffffff8104f81e>] ? exit_mm+0x102/0x10d Dec 10 21:20:29 srv156 kernel: [110625.164660] [<ffffffff81051243>] ? do_exit+0x1f8/0x6c9 Dec 10 21:20:29 srv156 kernel: [110625.164667] [<ffffffff81071abb>] ? futex_wake+0xd6/0xe7 Dec 10 21:20:29 srv156 kernel: [110625.164675] [<ffffffff8105178a>] ? do_group_exit+0x76/0x9d Dec 10 21:20:29 srv156 kernel: [110625.164683] [<ffffffff8105df9f>] ? get_signal_to_deliver+0x310/0x339 Dec 10 21:20:29 srv156 kernel: [110625.164692] [<ffffffff81010037>] ? do_notify_resume+0x87/0x73f Dec 10 21:20:29 srv156 kernel: [110625.164700] [<ffffffff810cc664>] ? handle_mm_fault+0x7aa/0x80f The last piece of log, has been recently posted, because I've just found it. It seems Java process do something and began to slowly eat all the resources of the server. I don't know exactly if this could be the root cause. Im using Debian Squeeze. uname -a Linux srv156 2.6.32-5-amd64 #1 SMP Sun Sep 23 11:00:33 UTC 2012 x86_64 GNU/Linux I really will appreciate your help, i dont know what more to do.

    Read the article

  • AS11 Oracle B2B Sync Support - Series 1

    - by sinkarbabu.kirubanithi
    Synchronous message support has been enabled in Oracle B2B 11g. This helps customers send a business message and receive the corresponding business response synchronously. We would like to keep this blog entry as a three-part series: the first part carries Oracle B2B configuration details, followed by 'how it can be consumed and utilized in an enterprise' using a composites-backed model. The last part will talk about more sophisticated seeded support built on the Oracle B2B platform (note: the last one is still in the description phase and an ETA hasn't been finalized yet).
    Details: In an effort to enable synchronous processing in Oracle B2B, we provided a platform using the existing 'callout' mechanism. In this case, we expect the 'callout' attached to the agreement to deliver the incoming business message (inbound) to the back-end application, get the corresponding business response from the back-end, and deliver it to Oracle B2B as its output. The output of the 'callout' is processed as the outbound message and attached as the response for the inbound message.
    Requirements to enable Sync Support:
    Outbound side:
    - Outbound Agreement - to send the business message request
    - Inbound Agreement - to receive the business message response
    Inbound side:
    - Inbound Agreement - to receive the business message request
    - Outbound Agreement - to send the business message response
    - Agreement Level Callout - to deliver the inbound request to the back-end and get the corresponding business response
    This feature is supported only for HTTP-based transport to exchange messages with Trading Partners. One may initiate the outbound message (enqueue) using any of the available transports in Oracle B2B.
    Configuration:
    Outbound side: Please add "syncresponse=true" as an "Additional Transport Header" parameter in the remote Trading Partner's HTTP delivery channel configuration. This enables Oracle B2B to process the HTTP response as an inbound message and deliver it to the back-end application. All other configuration related to Agreement and Document setup remains the same.
    Inbound side: There is no change in the Agreement and Document setup. To enable "Sync Support", you need to build a 'callout' that takes responsibility for delivering the inbound message to the back-end, getting the corresponding business response from the back-end, and attaching it as its output. Oracle B2B treats the output of the 'callout' as the outbound message and delivers it to the Trading Partner as the synchronous HTTP response. Requests that need to be processed synchronously should be received by the "syncreceiver" (http://:/b2b/syncreceiver) endpoint in Oracle B2B.
    Exception Handling: Existing Oracle B2B exception handling applies to this use case as well.
    Here's the sample callout: SampleSyncCallout.java
    We will get you the second part, which talks about the 'SOA composites'-backed model to design the "Sync Support" use case from the back-end to Trading Partners. Stay tuned.

    Read the article

  • Mark your calendar : Oracle Week, Nov 18-22, Herzliya

    - by Frederic Pariente
    The local ISV Engineering team will be participating at Israel Oracle Week on Nov 18-22; come meet us there!
    MARK YOUR CALENDAR
    Oracle Week Israel
    Date: November 18-22, 2012
    Time: 09:00-16:30
    Location: Daniel Hotel, Herzliya, Israel
    Tracks: Database, Middleware, Development Infrastructure, Business Applications, Big Data Management, SOA & BPM, BI, Java, IT, Cloud
    Here is a sample list of the Solaris 11 sessions to date; make sure to register for these.
    Number | Name | Date | Track
    12224 | Optimizing Enterprise Applications with Oracle Solaris 11 | 19/11/2012 | Infrastructure
    12327 | Oracle Solaris 11: Engineered Cloud Security with Wire-Speed Encryption and Delegated Admin | 20/11/2012 | Infrastructure, Cloud
    12425 | Simplified Lifecycle Management in Oracle Solaris 11 with AI, IPS and Ops Center | 21/11/2012 | Infrastructure
    12528 | Oracle Solaris 11 Administration: Zone, Resource Management and System Security | 22/11/2012 | Infrastructure
    12127 | Built for Cloud: Virtualization Use Cases and Technologies in Oracle Solaris 11 | 18/11/2012 | Infrastructure, Cloud
    See you there!

    Read the article

  • SQL Server 2012 : Changes to system objects in RC0

    - by AaronBertrand
    As with every new major milestone, one of the first things I do is check out what has changed under the covers. Since RC0 was released yesterday, I've been poking around at some of the DMV and other system changes. Here is what I have noticed: New objects in RC0 that weren't in CTP3 Quick summary: We see a bunch of new aggregates for use with geography and geometry. I've stayed away from that area of programming so I'm not going to dig into them. There is a new extended procedure called sp_showmemo_xml....(read more)

    Read the article

  • AS11 Oracle B2B Sync Support - Series 2

    - by sinkarbabu.kirubanithi
    In the earlier post in this series, we discussed how to model "Sync Support" in Oracle B2B, but we haven't yet discussed how the response can be consumed synchronously by the back-end application or the initiator of the sync request. In this sequel, we will see how we can extend it to SOA composite applications to model the end-to-end use case; this helps the initiator of the sync request receive the response synchronously. Series 2 is a little lengthier than blog standards, so be prepared before you continue further :). Let's start our discussion with a high-level scenario where one needs to initiate a synchronous request and get the response synchronously. There are various approaches available; we will see one of the simplest here.
    Components Involved:
    1. Oracle B2B
    2. Oracle JCA JMS Adapter
    3. Oracle BPEL
    4. All of the above, wrapped up in a single SOA composite application.
    1. Oracle B2B: Skipping the "Sync Support" setup part in B2B, as we have already discussed that in Series 1. Here we have provided "Sync Support" samples that can be imported into B2B directly so users can start testing them in a few minutes.
    Initiator Sample: This requires two JMS queues to be created: one for B2B to receive the initial outbound sync request, and the other for B2B to deliver the incoming sync response to the back-end. Please enable the "Use JMS Id" option in both the internal listening and delivery channels. This enables the JCA JMS Adapter to correlate the initial B2B request and response, which in turn is returned as the synchronous response of the BPEL process.
    Internal Listening Channel Image:
    Internal Delivery Channel Image:
    To get going without much challenge, just create queues in WebLogic with the JNDI names mentioned in the above two screenshots. If you want to use different names, you may have to change the queue JNDI names in the sample after importing it into B2B. Here are the queue-related JNDI names used in the sample:
    1. Internal Listening Channel Queue details, Name: JNDI Name: jms/b2b/syncreplyqueue
    2. Internal Delivery Channel Queue details, Name: JNDI Name: jms/b2b/syncrequestqueue
    Here is the Initiator Sample: Acme.zip
    Note: You may have to adjust the IP address of the GlobalChips endpoint in the Delivery Channel.
    Responder Sample: Contains the B2B metadata and the Callout. Just import the sample and place the callout binary under the "/tmp/callout" directory. If you choose to use a different location for the callout, you may have to change it in the B2B Configuration after importing the sample. Here are the artifacts:
    1. Callout Source: SampleCallout.java
    2. Callout Binary: sample-callout.jar
    3. Responder Sample: GlobalChips.zip
    Callout Details: The callout just gives the static response XML that needs to be sent back as the response for the inbound sync request. For sample purposes we have given a static response, but in production you may have to invoke a web service or something similar to get the response.
    IMPORTANT NOTE: For the Sync Support use case, the responder is not expected to deliver the inbound sync request to the back-end, as delivering the request and getting the response from the back-end are expected of the Callout. This default behavior can be overridden by enabling the config property "b2b.SyncAppDelivery=true" in the B2B config mbean (b2b-config.xml). This makes B2B deliver the inbound sync request to the back-end queue, but the response sent to the remote caller still has to come from the Callout.
    2. Oracle JCA JMS Adapter: On the initiator side, we have used the JCA JMS Request/Reply pattern to send/receive the synchronous message from B2B.
    3. Oracle BPEL: Exposes a WS-SOAP endpoint that takes the payload as input, passes it to B2B, and returns the synchronous response of B2B as the SOAP response. To the outside world it looks as if it is a synchronous web service endpoint, but under the covers it uses JMS to trigger/initiate B2B to send and receive the synchronous response.
    4. Composite application: All the components discussed above are wired in a SOA composite application that helps model an end-to-end synchronous use case. Here's the composite application: sca_B2BSyncSample_rev1.0.jar; you may just deploy this to your AS11 SOA installation to make use of it. For any editing, you can just import the project in your JDEV under any SOA Application. Here are the composite application screenshots:
    Composite Application:
    BPEL With JCA JMS Adapter (Request/Reply):

    Read the article

  • Url Navigation

    - by russ.bishop
    One of the new features is URL-based navigation, which is useful for creating intranet links or auto-generating email links (such as from workflow systems). For IIS 6 and earlier, the format is as follows:
    http://machine/drm-client/Logon.aspx?app=<appname>&action=go&ver=<version name>&hier=<hier name>&node=<node name>
    Just replace the fields with their appropriate values (URL-encoded, of course). <node name> is optional. If provided, it will open the hierarchy and expand directly to the target node. Otherwise the hierarchy is opened to the top node. Note that if the specified version is not loaded, it will be loaded automatically.
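    Since every field has to be URL-encoded, links in this format are easy to generate from a script (for example, out of a workflow system). A minimal sketch (Python 3); the host, application, version, hierarchy and node values are made-up placeholders:
        from urllib.parse import urlencode

        def drm_link(host, app, version, hier, node=None):
            """Build a Logon.aspx navigation URL; node is optional, as described above."""
            params = [("app", app), ("action", "go"), ("ver", version), ("hier", hier)]
            if node is not None:
                params.append(("node", node))
            return "http://%s/drm-client/Logon.aspx?%s" % (host, urlencode(params))

        # Placeholder values:
        print(drm_link("machine", "MyApp", "Version 1", "Accounts", "A1000"))
        # -> http://machine/drm-client/Logon.aspx?app=MyApp&action=go&ver=Version+1&hier=Accounts&node=A1000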

    Read the article

  • SSMS Tools Pack now supports Denali CTP1

    - by AaronBertrand
    Earlier today, Mladen Prajdic ( blog | twitter ) released an updated version of his SSMS Tools Pack (v.1.9.4), a free add-in for Management Studio that provides a ton of helpful functionality that isn't available with the native tools. I'm really glad this happened, because I've installed Denali on all of my VMs and have been using it for most of my work, and I've been missing some of the little things the tool adds. In addition to adding Denali support, Mladen also fixed a handful of minor bugs...(read more)

    Read the article

  • SQL Server 2012 : A couple of notes about installing RC0

    - by AaronBertrand
    If you're going to install Distributed Replay Controller I've posted about this on twitter a few times, but I thought I should put it down somewhere permanent as well. When you install RC0, and have selected the Distributed Replay Controller, you should be very careful about choosing the "Add Current User" button on the following dialog (I felt compelled to embellish with the skull and crossbones): If you click this button (it may also happen for the Add... button), you may experience a little delay...(read more)

    Read the article

  • Friday Fun: Play Air Hockey in Google Chrome

    - by Asian Angel
    Do you like the challenge of fast-paced games? Then get ready to put yourself to the test with the Air Hockey extension for Google Chrome during company time. Air Hockey in Action There are two ways that you can play Air Hockey…either using the drop-down window or opening the game in a new tab. For our example we chose to play in a new tab. Before starting the game you can choose the difficulty level, to enable/disable the sound, and/or to go to full screen if desired. Note: Screenshot of “Full Screen” version shown below. While playing you really have to stay on top of things…the computer player will beat you rather quickly if you do not. Hustle hustle hustle! With a little bit of practice it does become easier but even the “Easy Level” on this game will keep you busy. If the normal size game screen seems just a bit small you can easily get a larger version using the “Full Screen Link” below the game window. Whether your browser is non-maximized as shown here or totally maximized it will fill the entire browser window area. Conclusion If you like fast paced games then the Air Hockey extension certainly fits that criteria and will keep you on your toes. Make sure and keep the sound off while playing during Friday afternoon though! Links Download the Air Hockey extension (Google Chrome Extensions)

    Read the article

  • How do you pronounce the '...' operator

    - by Uri
    Now, in C++, '...' became a first-class operator. In speech, how do you pronounce it? So far I've heard: dot dot dot, triple dot, ellipsis. Related: is it OK to replace ... with "ellipsis" in writing? e.g. "The ellipsis operator expands the pack". EDIT (clarification): We are all aware that '...' as a punctuation mark is indeed called an ellipsis. But in the context of C++ we don't pronounce the names of punctuation marks. For example, the '&' operator, depending on the context, is pronounced as 'and', 'bitwise and', 'address of', 'logical and' (when && is used), or 'reference'. It is rarely pronounced as 'ampersand'. In talks, I have a feeling that 'dot dot dot' is used more often. For example: http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Variadic-Templates-are-Funadic (an excellent presentation about variadic templates). On the other hand, 'dot dot dot' is awkward to pronounce ('d' and 't' are both pronounced with the tongue). Can we pronounce it 'unpack'?

    Read the article

  • Thou shalt not put code on a pedestal - Code is a tool, no more, no less

    - by Ralf Westphal
    "Write great code and everything else becomes easier" is what Paul Pagel believes in. That's his version of an adage by Brian Marick he cites: "treat code as an end, not just a means." And he concludes: "My post-Agile world is software craftsmanship." I wonder if that's really the way to go. Will "simply" writing great code lead the software industry into the light? He's alluding to the philosopher Kant, who proposed that a human being should never be treated as a means, but always as an end. But should we transfer this ethical statement into the world of software? I doubt it.
    Reason #1: Human beings are categorically different from code. They are autonomous entities who need to find a way of living happily together. To Kant it seemed this goal could only be reached if nobody (ab)used a human being for his/her purposes. Because using a human being, i.e. treating it as a means, would contradict the fundamental autonomy and freedom of human beings. People should hold up a symmetric view of their relationships: since nobody wants to be (ab)used, nobody should (ab)use anybody else. If you want to be treated decently, with respect, in accordance with your own free will - which means as an end - then do the same to other people. Code is dead; it's a product, a tool for people to reach their goals. No company spends any money on code other than to save money or earn money in the long run. Code is not a puppy. Enterprises do not commission software development just to feel good in its company. Code is not a buddy. Code is a slave, if you will. A mechanical slave, a non-tangible robot. Code is a tool, is a tool. And if we start to treat it differently, if we elevate its status unduly… I guess that will contort our relationship in a counterproductive way. Don't get me wrong: just because something is "just a tool", "just a product" does not mean we should not be careful while designing, building, and using it. Quite the contrary. We should be very careful when writing code – but not for the code's sake! We should be careful because we respect our customers, who are fellow human beings who should be treated as an end. If we are careless, neglectful, or ignorant when producing code on their behalf, then we're using them. Being sloppy means you're caring more for yourself than for your customer. You're then treating the customer as a means to fulfill some of your own needs. That's plain unethical behavior.
    Reason #2: The focus should always be on your purpose, not on any tool. But if code is treated as an end, then the focus is on the code. That might sound right, because where else should your focus be as a software developer? But, well, I'd say your focus should be on delivering value to your customer. Because in the end your customer does not care if you write a single line of code. She just wants her problem to be solved. Solving problems is the purpose of any contractor. Code must be treated just as a means, a tool we know how to handle very well. But if we're really trying to be craftsmen then we should be conscious about exactly that and act ethically. That means we must never be so focused on our tool as to be unable to suggest better solutions to the problems of our customers than code.
    I'm all with Paul when he urges us to "write great code". Sure, if you need to write code, then by all means do so. Write the best code you can think of – and then try to improve it.
    Paul has all the best intentions when he signs Brian's "treat code as an end" - but as we all know: "The road to hell is paved with best intentions" ;-) Yes, I can imagine a "hell of code focus". In fact, I don't need to imagine it; I see it quite often. Because code hell is wherever two developers stand together and are so immersed in talking about all sorts of coding tricks, design patterns, code smells, technologies, platforms, and tools that they lose sight of the big picture. Talking about TDD or SOLID or refactoring is a sign of consciousness – relative to the "cowboy coders" view of the world. But from yet another point of view, TDD, SOLID, and refactoring are just cures for ailments within a system. And I fear that if "writing great code" is the only focus or the main focus of software development, then we as an industry lose the ability to see that. Focus draws a line around something; it defines a horizon for perception and thinking. So if we focus on code, our horizon ends where "the land of code" ends. I don't think that should be our professional attitude.
    So what about Software Craftsmanship as the next big thing after Agility? I think Software Craftsmanship has an important message for all software developers and beyond. But to make it the successor of the Agility movement seems to miss a point. Agility never claimed to solve all software development problems, I'd say. So to blame it for having missed out on certain aspects of it is wrong. If I had to summarize Agility in one word I'd say "Value". Agility put value for the customer back in software development. Focus on delivering value early and often – that's Agility's mantra. All else follows from that. And I ask you: is that obsolete? Is delivering value not hip anymore? No, surely not. That's our very purpose as software developers. So how can Agility become obsolete and need to be replaced?
    We need to do away with this "either/or" thinking. It's either Agility or Lean or Software Craftsmanship or whatnot. Instead we should start integrating concepts and movements. Think "both/and". Think Agility plus Software Craftsmanship plus Lean plus whatnot. We don't need to tear down anything from a pedestal and replace it with a new idol. Instead we should do away with pedestals and arrange whatever is helpful in a circle. Then we can turn to those concepts and movements for whatever they do best. After 10 years of Agility we should be able to identify what it was good at – and keep that. Keep Agility around and add whatever Agility was lacking or never concerned with. Add whatever is at the core of Software Craftsmanship. Add whatever is at the core of Lean, etc. But don't call out the age of Post-Agility. Because it had better never end. Because once we start to lose Agility's core, we lose focus on the customer.

    Read the article

  • Webcast: New Features of Solaris 11.1 and Solaris Cluster 4.1

    - by Jeff Victor
    If you missed last week's webcast of the new features in Oracle Solaris 11.1, you can view the recording. The speakers discuss changes that improve performance and scalability, particularly for Oracle DB, and many other enhancements. New features include Optimized Shared Memory (improves DB startup time), accelerated kernel locks (improves Oracle RAC performance and scalability), virtual memory improvements, a DTrace data collector in the DB, Zones installed on Shared Storage (simplifies migration), Data Center Bridging, and Edge Virtual Bridging. To view the archived webcast, you must register and use the URL that you receive in e-mail.

    Read the article

  • Google doesn't seem to update the description or title of my homepage

    - by Dayson
    Before we launched our website, we had set up a "coming soon" page and Google picked up the title and description from its contents. So the description in the search results said, "Coming soon! Visit epicwhale.org for updates." It's been a few weeks since we launched our website. We've even created a sitemap and submitted it to Google. In the Google Webmaster panel, the pages have been crawled and all the pages are appearing as expected on Google, EXCEPT the homepage, which is still not updated! The title and description of the homepage in Google search results still say "coming soon". The website I am referring to is textmewidget.com and below are the images of the search result. Google: http://i.imgur.com/vAkJg.png I checked on Bing too, but it appears to be fine there. Bing: http://i.imgur.com/Q8O6L.png All other pages seem to be indexed fine on Google. I don't even have any crawl errors in my reports. So what seems to be the problem? I've already waited for 2 weeks. Thanks in advance!

    Read the article

  • Google adds weather to the Google Maps API, along with cloud layer imagery

    Google adds weather to the Google Maps API, along with cloud layer imagery. Weather has been available on Google Maps to web users since last year; it is now available to developers as well. Google has just added two classes ("WeatherLayer" and "CloudLayer") to the "weather" library of its mapping service's API. The first class adds weather and climate conditions to maps (and forecasts for a given location). This information can be adjusted as needed (degrees Celsius or Fahrenheit, wind speed in kilometers or miles).

    Read the article
