Search Results

Search found 15385 results on 616 pages for 'context menu'.


  • Why no more macro languages?

    - by Muhammad Alkarouri
    In this answer to a previous question of mine about scripting languages' suitability as shells, DigitalRoss identifies the difference between macro languages and "parsed typed" languages in terms of string treatment as the main reason that scripting languages are not suitable for shell purposes. Macro languages include nroff and m4, for example. What are the design decisions (or compromises) needed to create a macro programming language? And why are most mainstream languages parsed rather than macro? This very similar question (and the accepted answer) covers fairly well why parsed typed languages, take C for example, suffer from the use of macros. I believe my question here covers different ground: macro languages, or those working on a textual level, are not wholly failures. Arguably, they include bash, Tcl and other shell languages. And they work in a specific niche such as shells, as explained in my links above. Even m4 had a fairly long run of success, and some web template languages can be regarded as macro languages. It is quite possible that macros and parsed typing do not go well together, and that is why macros "break" common languages. The linked answer's example, #define TWO 1+1, would in a macro language be covered by the common rules of the language rather than conflicting with those of the host language. And issues like "macros are not typed" and "code doesn't compile" are not relevant in the context of a language designed as untyped and interpreted, with little concern for efficiency. The question about the design decisions needed to create a macro language pertains to a hobby project I am currently working on: designing a new shell. Taking the previous question in context should clarify the difference between adding macros to a parsed language and my objective. I hope the clarification shows that the linked question doesn't cover this question, which is in two parts: If I want to create a macro language (for a shell or a web template, for example), what limitations and compromises (and guidelines, if any exist) need to be made? (Probably answerable by a link or reference.) Why have no macro languages succeeded in becoming mainstream except in particular niches? What makes typed languages successful in large-scale programming, while "stringly-typed" languages succeed in shells and one-liner environments?
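
    For readers who have not seen why textual macros clash with a parsed language's own rules, here is a minimal C sketch of the #define TWO 1+1 example referenced above (illustrative only, not taken from the linked answer):

        #include <stdio.h>

        /* Textual substitution: TWO is replaced by the tokens 1+1,
           not by the value 2, before the expression is parsed. */
        #define TWO 1+1

        int main(void)
        {
            /* Expands to 1+1*3, which evaluates to 4, not the expected 6. */
            printf("%d\n", TWO * 3);
            return 0;
        }

    In a language designed around macros, the expansion rules and the evaluation rules are the same rules, so this kind of mismatch cannot arise.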

    Read the article

  • Enterprise 2.0 Conference recap

    - by kellsey.ruppel
    We had a great week in Boston attending the Enterprise 2.0 Conference. We learned a lot from industry thought leaders and had a chance to speak with a lot of different folks about social and collaboration technologies and trends. Of all the conferences we attend, this one definitely has a different "feel". It seems like the attendees are younger, they dress hipper, and there is much more liveliness all around. A few of the sessions addressed this, as the "millennials", or Generation Y, have been using Web 2.0 tools such as Facebook and Twitter for many years now, and as they enter the workforce they expect similar tools to be part of how they accomplish their job tasks. It's important to note that it's not just millennials who expect these technologies; workers young and old alike benefit from social and collaboration tools. I've highlighted some of my takeaways, as well as a reaction from John Brunswick, who helped us staff the booth. Giving your employees choices is empowering, but if there is no course of action or plan, it's useless. There is no such thing as collaboration without a goal. In a few years, social will become a feature in the "platform", a component of collaboration. Social will become part of the norm – just as email is expected when you start a job at a company, social will be too. 1 in 3 of your employees are using tools your company doesn't sanction (how scary is that?!). 25,000 pieces of content are created every second. Context is king: social tools help us navigate and manage the complexities we face with information overload. We need to design products for the way people work. Consumerization of the enterprise means bringing social tools like Facebook to the organization. From John Brunswick: "The conference had solid attendance, standing as a testament to organizations making a concerted effort to understand what social tools exist to support their businesses. Many vendors were narrowly focused, and people were pleasantly surprised at the breadth of capability provided by Oracle WebCenter. People seemed to feel that it just made sense that social technology provides the most benefit when presented in the context of key business data." Did you attend the conference? What were some of your key takeaways?

    Read the article

  • USB: Missing Operating System - Mac

    - by Viraj Rao
    I have installed Ubuntu 11.10 on a USB drive using a live CD, following the steps provided in this link: https://help.ubuntu.com/community/MactelSupportTeam/AppleIntelInstallation After the installation completed, when I click on the Linux icon in the rEFIt menu I receive a "Missing Operating System" message. I followed the steps provided in the link below from this forum, but am still receiving the same message. Unable to boot: Missing Operating system Any assistance is highly appreciated.

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5: Part 2 – Table per Type (TPT)

    - by mortezam
    In the previous blog post you saw that there are three different approaches to representing an inheritance hierarchy, and I explained Table per Hierarchy (TPH) as the default mapping strategy in EF Code First. We argued that the disadvantages of TPH may be too serious for our design, since it results in denormalized schemas that can become a major burden in the long run. In today's blog post we are going to learn about Table per Type (TPT) as another inheritance mapping strategy, and we'll see that TPT doesn't expose us to this problem. Table per Type (TPT): Table per Type is about representing inheritance relationships as relational foreign key associations. Every class/subclass that declares persistent properties—including abstract classes—has its own table. The table for a subclass contains columns only for each noninherited property (each property declared by the subclass itself), along with a primary key that is also a foreign key of the base class table. This approach is shown in the following figure: For example, if an instance of the CreditCard subclass is made persistent, the values of properties declared by the BillingDetail base class are persisted to a new row of the BillingDetails table. Only the values of properties declared by the subclass (i.e. CreditCard) are persisted to a new row of the CreditCards table. The two rows are linked together by their shared primary key value. Later, the subclass instance may be retrieved from the database by joining the subclass table with the base class table. TPT Advantages: The primary advantage of this strategy is that the SQL schema is normalized. In addition, schema evolution is straightforward (modifying the base class or adding a new subclass is just a matter of modifying or adding one table). Integrity constraint definitions are also straightforward (note how CardType in the CreditCards table is now a non-nullable column). Another, much more important advantage is the ability to handle polymorphic associations (a polymorphic association is an association to a base class, hence to all classes in the hierarchy, with dynamic resolution of the concrete class at runtime). A polymorphic association to a particular subclass may be represented as a foreign key referencing the table of that particular subclass.
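
    As a rough illustration of the shared-primary-key layout just described, the three TPT tables for this hierarchy might be declared as follows (a hand-written SQL sketch based on the classes shown in the next section, not the exact DDL that EF Code First generates; column sizes are placeholders):

        CREATE TABLE BillingDetails (
            BillingDetailId INT NOT NULL PRIMARY KEY,
            Owner  NVARCHAR(100),
            Number NVARCHAR(100)
        );

        CREATE TABLE CreditCards (
            -- the primary key doubles as a foreign key to the base table
            BillingDetailId INT NOT NULL PRIMARY KEY
                REFERENCES BillingDetails (BillingDetailId),
            CardType    INT NOT NULL,  -- non-nullable, as noted above
            ExpiryMonth NVARCHAR(10),
            ExpiryYear  NVARCHAR(10)
        );

        CREATE TABLE BankAccounts (
            BillingDetailId INT NOT NULL PRIMARY KEY
                REFERENCES BillingDetails (BillingDetailId),
            BankName NVARCHAR(100),
            Swift    NVARCHAR(50)
        );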
    Implement TPT in EF Code First: We can create a TPT mapping simply by placing the Table attribute on the subclasses to specify the mapped table name (Table is a new data annotation and has been added to the System.ComponentModel.DataAnnotations namespace in CTP5):

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        [Table("BankAccounts")]
        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        [Table("CreditCards")]
        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }
        }

    If you prefer the fluent API, you can create a TPT mapping by using the ToTable() method:

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BankAccount>().ToTable("BankAccounts");
            modelBuilder.Entity<CreditCard>().ToTable("CreditCards");
        }

    Generated SQL for Queries: Let's take an example of a simple non-polymorphic query that returns a list of all the BankAccounts:

        var query = from b in context.BillingDetails.OfType<BankAccount>()
                    select b;

    Executing this query (by invoking the ToList() method) results in a SQL statement that joins BillingDetails to BankAccounts being sent to the database. Now, let's take an example of a very simple polymorphic query that requests all the BillingDetails, which includes both BankAccount and CreditCard types:

        var query = from b in context.BillingDetails
                    select b;

    A variation projects some properties out of the base class BillingDetail, without querying for anything from any of the subclasses:

        var query = from b in context.BillingDetails
                    select new { b.BillingDetailId, b.Number, b.Owner };

    This LINQ query seems even simpler than the previous one, but the resulting SQL query is not as simple as you might expect (a hand-written sketch of its general shape appears at the end of this excerpt). EF Code First relies on an INNER JOIN to detect the existence (or absence) of rows in the subclass tables CreditCards and BankAccounts, so it can determine the concrete subclass for a particular row of the BillingDetails table. Also, the SQL CASE statements that you see at the beginning of the query are just to ensure that columns which are irrelevant for a particular row have NULL values in the returned flattened table (e.g. BankName for a row that represents a CreditCard type). TPT Considerations: Even though this mapping strategy is deceptively simple, experience shows that performance can be unacceptable for complex class hierarchies, because queries always require a join across many tables. In addition, this mapping strategy is more difficult to implement by hand—even ad-hoc reporting is more complex. This is an important consideration if you plan to use handwritten SQL in your application. (For ad-hoc reporting, database views provide a way to offset the complexity of the TPT strategy. A view may be used to transform the table-per-type model into the much simpler table-per-hierarchy model.) Summary: In this post we learned about Table per Type as the second inheritance mapping strategy in our series. So far, the strategies we've discussed require extra consideration with regard to the SQL schema (e.g. in TPT, foreign keys are needed).
    This situation changes with Table per Concrete Type (TPC), which we will discuss in the next post. References: ADO.NET team blog; Java Persistence with Hibernate book.
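
    As promised above, here is a hand-written approximation of how inner joins against the subclass tables can resolve each row's concrete type in a polymorphic query (an illustrative sketch only; the SQL that EF Code First actually emits is considerably more elaborate):

        SELECT b.BillingDetailId, b.Owner, b.Number,
               c.CardType, c.ExpiryMonth, c.ExpiryYear,
               NULL AS BankName, NULL AS Swift,
               'CreditCard' AS ConcreteType
        FROM BillingDetails b
        INNER JOIN CreditCards c ON c.BillingDetailId = b.BillingDetailId
        UNION ALL
        SELECT b.BillingDetailId, b.Owner, b.Number,
               NULL, NULL, NULL,
               a.BankName, a.Swift,
               'BankAccount' AS ConcreteType
        FROM BillingDetails b
        INNER JOIN BankAccounts a ON a.BillingDetailId = b.BillingDetailId;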

    Read the article

  • Connect to running web role on Azure using Remote Desktop Connection and VS2012

    - by Magnus Karlsson
    We want to be able to collect IntelliTrace information from our running app and also use remote desktop to connect to the IIS and look around (probably for debugging). 1. Create certificate. 1.1 Right-click the cloud project (marked in red) and select “Configure remote desktop”. 1.2 In the drop-down list of certificates, choose <create> at the bottom. 1.3 Follow the instructions; you can set it up with default values. 1.4 When done, choose the certificate and click “Copy to File…” as seen at the left of the picture above. 1.5 Save the file with any name you want. Next we will save it to local storage to be able to import it into our solution through the Azure configuration manager in step 3. 2. Save certificate to local storage. Now we need to attach it to our local certificate storage to be able to reach it from our configuration manager in Visual Studio. Microsoft provides the following steps for doing this: http://support.microsoft.com/kb/232137 In order to view the Certificates store on the local computer, perform the following steps: Click Start, and then click Run. Type "MMC.EXE" (without the quotation marks) and click OK. Click Console in the new MMC you created, and then click Add/Remove Snap-in. In the new window, click Add. Highlight the Certificates snap-in, and then click Add. Choose the Computer option and click Next. Select Local Computer on the next screen, and then click OK. Click Close, and then click OK. You have now added the Certificates snap-in, which will allow you to work with any certificates in your computer's certificate store. You may want to save this MMC for later use. Now that you have access to the Certificates snap-in, you can import the server certificate into your computer's certificate store by following these steps: Open the Certificates (Local Computer) snap-in and navigate to Personal, and then Certificates. Note: certificates may not be listed; if not, that is because there are no certificates installed. Right-click Certificates (or Personal if that option does not exist). Choose All Tasks, and then click Import. When the wizard starts, click Next. Browse to the PFX file you created containing your server certificate and private key. Click Next. Enter the password you gave the PFX file when you created it. Be sure the Mark the key as exportable option is selected if you want to be able to export the key pair again from this computer. As an added security measure, you may want to leave this option unchecked to ensure that no one can make a backup of your private key. Click Next, and then choose the Certificate Store you want to save the certificate to. You should select Personal because it is a Web server certificate. If you included the certificates in the certification hierarchy, they will also be added to this store. Click Next. You should see a summary screen showing what the wizard is about to do. If this information is correct, click Finish. You will now see the server certificate for your Web server in the list of Personal Certificates. It will be denoted by the common name of the server (found in the subject section of the certificate). Now that you have the certificate backup imported into the certificate store, you can enable Internet Information Services 5.0 to use that certificate (and the corresponding private key). To do this, perform the following steps: Open the Internet Services Manager (under Administrative Tools) and navigate to the Web site you want to enable secure communications (SSL/TLS) on.
    Right-click the site and click Properties. You should now see the properties screen for the Web site. Click the Directory Security tab. Under the Secure Communications section, click Server Certificate. This will start the Web Site Certificate Wizard. Click Next. Choose the Assign an existing certificate option and click Next. You will now see a screen showing the contents of your computer's personal certificate store. Highlight your Web server certificate (denoted by the common name), and then click Next. You will now see a summary screen showing you all the details about the certificate you are installing. Be sure that this information is correct, or you may have problems using SSL or TLS in HTTP communications. Click Next, and then click OK to exit the wizard. You should now have an SSL/TLS-enabled Web server. Be sure to protect your PFX files from any unwanted personnel. Image of a typical MMC.EXE with the certificates up.   3. Import the certificate into your Visual Studio project. 3.1 Now right-click your equivalent of MvcWebRole1 (as seen in the first picture under the red oval) and choose Properties. 3.2 Choose Certificates. Click the ellipsis to the right of the “thumbprint” and you should be able to select your newly created certificate there. After selecting it, save the file. 4. Upload the certificate to your Azure subscription. 4.1 Go to the Azure management portal, click the services menu icon to the left and choose the service. Click Upload in the bottom menu. 5. Connect to the server. Since I tried to use my account settings (you have to use another name), we have to set up a new name for the connection. No biggie. 5.1 Go to the Azure management portal, select your service and, in the bottom menu, choose “REMOTE”. This will display the configuration for remote connection. It will actually change your ServiceConfiguration.cscfg file. After you change it here, it might be good to choose download and replace the one in your project. Set a name that is not your Windows Azure account name and not Administrator. 5.2 Go to Visual Studio and open Server Explorer. Choose as selected in the picture below and click “Connect using remote desktop”. 5.3 You will now be able to log in with the name and password set up in step 5.1, and voilà! Windows Server 2012, IIS and other nice stuff! To do this I've been using http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx where you can find some of this information and more.
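
    As a small aside: if you prefer the command line to the MMC snap-in, the same PFX import can be done with certutil from an elevated prompt (a sketch; the password and file path below are placeholders):

        certutil -f -p "<pfx-password>" -importpfx "C:\certs\remote-desktop.pfx"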

    Read the article

  • Getting Ubuntu on Bootloader after formatting

    - by Muhammad Badi
    I had been running Windows XP on a desktop machine (C:/) and installed Ubuntu 12.04 (D:/), and the Windows bootloader was able to recognize Ubuntu when I turned on the computer. Months later, I formatted drive C:/, removed XP, and then installed Windows 7, but sadly Ubuntu is no longer showing in the boot menu, which used to present: Windows 7 Ubuntu How do I get it back on the boot list? NOTE: Please note that installing Ubuntu was done from inside Windows XP with wubi.exe
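
    A commonly suggested approach is to recreate the Wubi bootsector entry with bcdedit from an administrator Command Prompt (a sketch; it assumes \wubildr.mbr still exists on C:, and {id} stands for the GUID that the first command prints):

        bcdedit /create /d "Ubuntu" /application bootsector
        bcdedit /set {id} device partition=C:
        bcdedit /set {id} path \wubildr.mbr
        bcdedit /displayorder {id} /addlast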

    Read the article

  • Bind Ctrl+Right to nextword in nano (as it is in all other apps)

    - by turbo
    And likewise Ctrl+Left to prevword. I read the man page of nanorc and found: bind key function menu So the line bind ^Left prevword main would be what I want. The problem is that nano only accepts an alpha character or the word "Space", so Left doesn't exist. Is there a way to accomplish this? Right now I'm on Natty (nano 2.2.2), but I will upgrade nano if a later (devel?) version can do this.
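
    Later nano releases did add support for naming the arrow keys in bindings, so on a new enough nano (newer than the 2.2.2 above; treat the exact cut-off version as an assumption) a ~/.nanorc along these lines should work:

        # ~/.nanorc -- requires a nano release that accepts arrow-key names
        bind ^Left prevword main
        bind ^Right nextword main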

    Read the article

  • Allegheny first-years dive into Fedora

    Opensource.com: "Over the course of fourteen weeks, we've introduced them to the Creative Commons, blogging, and open source software in the context of social change, trying to get them ready for their dive into the Fedora project."

    Read the article

  • Will polishing my current project be a better learning experience than starting a new one?

    - by Alejandro Cámara
    I started programming many years ago. Now I'm trying to make games. I have read many recommendations to start by cloning some well-known games like Galaga, Tetris, Arkanoid, etc. I have also read that I should go for the whole game (including menus, sound, score, etc.). Yesterday I finished the first complete version of my Arkanoid clone. But it is far from over. I can still work on it for months (I program as a hobby in my free time) implementing a screen resolution switcher, remapping of the control keys, power-ups falling from broken bricks, and a huge etcetera. But I do not want to be forever learning how to clone ONE game. I have the urge to get to the next clone in order to apply some design ideas I have come upon while developing this Arkanoid clone (at the same time I am reading the GoF book and much source code from the Ludum Dare 21 game contest). So the question is: should I keep improving the Arkanoid clone until it has all the features the original game had, or should I move to the next clone (there are almost infinite games to clone) and start mending the things I did wrong with the previous clone? This can be a very subjective question, so please restrict the answers to the most effective way to learn how to make my own games (not cloning someone else's ideas). Thank you! CLARIFICATION: In order to clarify what I have implemented, I make this list. Features implemented: bouncing capabilities (the ball bounces on walls, on bricks, and on the bar); sounds when bouncing on bricks and the bar, and when the player wins or loses; basic title menu (new game and exit only), plus in-game menu and win/lose menus; only three levels, but the map system is so easy I do not think it will teach me much (am I wrong?). Features not implemented: power-ups when breaking the bricks; complex bricks (with more than one "hit point", and invincible ones); better graphics (I am not really good at it); programming polish (using the design patterns more intensively). Here's a link to its (minimal) webpage: http://blog.acamara.es/piperine/ I kind of feel ashamed to show it, so please do not hit me too hard :-) My question was related to the not-implemented features. I wondered what the fastest (optimal) path to learn would be: 1) implement the not-implemented features in this project, which is getting big, or 2) make a new game which will probably teach me those lessons and new ones. ANSWER: I chose @ashes999's answer because, in my case, I think I should polish more and try to "ship" the game. I think all the other answers are also important to bear in mind, so if you came here with the same question, before making a rushed decision read all the discussion. Thank you all!

    Read the article

  • How can I fix "dpkg: error: parsing file"?

    - by Colin Alcock
    ... and what is sudo, and where/how would I type the scripts I've seen in some related answers? Yes, I am very new to Linux, and am using Ubuntu 12.04 LTS. All updates are failing with: installArchives() failed: dpkg: error: parsing file '/var/lib/dpkg/available' near line 2 package 'libgwibber-gtk2': value for `status' field not allowed in this context Error in function: I need to know where and how I would input some of the sudo scripts etc. Any help appreciated; trying to get off of Windows.... Colin
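
    On the "what is sudo" part: you type such commands into a Terminal (Ctrl+Alt+T opens one), and prefixing a command with sudo runs it with administrator rights after asking for your own password. For this particular error, a commonly suggested repair is to rebuild dpkg's 'available' file (a sketch; it backs up the corrupted file first, in case your breakage differs):

        # back up the corrupted file, then rebuild the 'available' database
        sudo cp /var/lib/dpkg/available /var/lib/dpkg/available.bak
        sudo dpkg --clear-avail
        sudo apt-get update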

    Read the article

  • SQL SERVER – Template Browser – A Very Important and Useful Feature of SSMS

    - by pinaldave
    Let me start today's blog post with a direct question. How many of you have ever used Template Browser? Template Browser is a very important and useful feature of SQL Server Management Studio (SSMS). Every time I talk about SQL Server, someone comes up with the question: why is there no step-by-step procedure included in SSMS for features? Honestly, every time I get this question, the question I ask back is: how many of you have ever used Template Browser? I think the answer is, most of the time, either "no" or "we have not heard of the feature". One person asked me back – have you ever written about it on your blog? I had not yet written about it. Basically there is not much to write about it; it is a pretty straightforward feature, like any other feature, and it is indeed difficult to elaborate on. However, I will try to give a quick introduction to it. Templates are like a quick cheat sheet or quick reference. Templates are available to create objects like databases, tables, views, indexes, stored procedures, triggers, statistics, and functions. Templates are also available for Analysis Services. The template scripts contain parameters to help you customize the code: you can use the Specify Values for Template Parameters dialog box to insert values into the script. Additionally, users can create new custom templates as well, with their own folder structure. To open a template from Template Explorer, go to the View menu >> Template Explorer, or type CTRL+ALT+T. You will find a list of categories; click on any category and expand the folder structure. For our sample example let us expand the Index folder. In this folder you will notice various T-SQL scripts. These scripts can be opened by double-clicking, or they can be dragged to the editor area and modified as needed. The sample template is now available in the query editor area with all the necessary parameter placeholders. You can replace the parameters by typing CTRL+SHIFT+M or by going to the Query menu >> Specify Values for Template Parameters. This will show the Specify Values for Template Parameters dialog box; accept each value or replace it with a new one. This will get your script ready to go: check it one more time and change the script to fit your requirements. I personally use Template Explorer for two things. The first is obviously for templates, but the hidden and important second one is for learning new features and T-SQL commands. There is so much to learn and so little time. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology
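
    To illustrate, a template body is plain T-SQL with angle-bracket placeholders of the form <parameter, type, default>, which CTRL+SHIFT+M then prompts you to fill in. A minimal sketch (the object names here are made up):

        -- minimal sketch of a template with parameter placeholders
        CREATE NONCLUSTERED INDEX <index_name, sysname, IX_SampleTable_SampleColumn>
        ON <schema_name, sysname, dbo>.<table_name, sysname, SampleTable>
            (<column_name, sysname, SampleColumn>);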

    Read the article

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management, by Joe Golemba, Vice President, Product Management, Oracle WebCenter. As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative—wasted effort or even regulatory compliance issues—is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive. Unstructured data matters: Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts. Historically, content management solutions that had 21 CFR Part 11 capabilities—electronic records and signatures—were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee's typical business activity. Administrative safeguards—such as content de-duplication—can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization. Putting it in context: Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search.
    Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee's day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing. Security and compliance: Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide—with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures—is becoming increasingly critical for life sciences companies. Making the move: Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools that can dramatically speed the data migration process and reduce its cost through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process. The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years.
    The rapid adoption of enterprise content management, both in operational processes and in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and speed time to market, in order to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information becomes increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable. More Information: Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Content Based Routing with BRE and ESB

    - by Christopher House
    I've been working with BizTalk 2009 and the ESB Toolkit for the past couple of days. This is actually my first exposure to ESB, and so far I'm pleased with how easy it is to work with. Initially we had planned to use UDDI for storing endpoint information. However, after discussing this with my client, we opted to look at BRE instead of UDDI, since we're already storing transforms in BRE. Fortunately, making the change from UDDI to BRE was quite simple. This solution of course has the added advantage of not needing to go through the convoluted process of registering our endpoints in UDDI. The first thing to remember if you want to do content-based routing with BRE and ESB is that the pipelines included in the ESB Toolkit don't include disassembler components. This means that you'll need to first create a custom receive pipeline with the necessary disassembler for your message type, as well as the ESB components, itinerary selector and dispatcher. Next you need to create a BRE policy. The ESB.ContextInfo vocabulary contains vocabulary links for the various items in the ESB context dictionary. In this vocabulary, you'll find an item called Context Message Type; use this as the left-hand side of your condition. Set the right-hand side to your message type, something like http://your.message.namespace/#yourrootelement. Now find the ESB.EndPointInfo vocabulary. This contains links to all the properties related to endpoint information. Use the various set operators in your rule's action to configure your endpoint. In the example above, I'm using the WCF-SQL adapter. Now that the hard work is out of the way, you just need to configure the resolver in your itinerary. Nothing complicated here: just select BRE as your resolver implementation and select your policy from the drop-down list. Note that when you select a policy, the Version field will be automatically filled in with the version of your policy. If you leave this as-is, the resolver will always use that policy version. Alternatively, you can clear the version number and the resolver will use the highest deployed version.

    Read the article

  • SolidQ Journal - free SQL goodness for February

    - by Greg Low
    The SolidQ Journal for February just made it out, right at the end of February 28th. But again, it's great to see the content appearing. I've included the second part of the article on controlling the execution context of stored procedures; the first part was in December. Also this month, along with Fernando Guerrero's editorial, Analysis Services guru Craig Utley has written about aggregations, and Herbert Albert and Gianluca Holz have continued their double-act and described how to automate database migrations,...(read more)

    Read the article

  • After last update Unity no longer works

    - by Alen
    An hour ago I installed the latest updates, and now Unity doesn't work anymore; I only see the desktop with icons. Searching manually, I opened System Monitor and found that only 400 MB of RAM is taken, which suggests Unity is not running at all. Can someone help me fix this? Since I'm crippled, I probably won't be able to provide you with all the details you need, because I have no window borders, no right-click menu, nothing, only Nautilus.
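
    Two commonly suggested recovery commands, run from a terminal (Ctrl+Alt+T should still open one), are sketched below; they assume a Compiz-based Unity session, and note that the first one resets your Compiz settings to defaults:

        # reset Compiz/Unity settings, then restart Unity in the background
        dconf reset -f /org/compiz/
        setsid unity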

    Read the article

  • MVVM Properties with Resharper

    - by George Evjen
    I read this early this morning, and it is simple, since we have all probably put together a code snippet before. With the projects that we do at ArchitectNow we write a lot of new custom views and view models, which results in having to write repetitive property code. We changed the context of the code a bit to suit our infrastructure, but the idea is to have these properties created quickly. Thanks to Sparky Dasrath for reminding us how easy this is to do: sdasrath.blogspot.com/2011/02/20110221-resharper-c-snippet-for-mvvm.html
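
    For context, the repetitive property code such a snippet expands to typically looks like the sketch below (RaisePropertyChanged is assumed to be a helper on the view model base class; the names are illustrative):

        private string _customerName;

        public string CustomerName
        {
            get { return _customerName; }
            set
            {
                if (_customerName == value) return;  // skip redundant notifications
                _customerName = value;
                RaisePropertyChanged("CustomerName");
            }
        }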

    Read the article

  • Week in Geek: Google Drive Desktop Client Allows Backdoor Access to Google Accounts

    - by Asian Angel
    Our last edition of WIG for October is filled with news link coverage on topics such as Microsoft may not issue a second Windows 7 service pack, Windows Media Center is free for Windows 8 Pro users for a limited time, CyanogenMod logged swipe gestures used to unlock Android devices, and more.

    Read the article

  • Ubuntu 12.10 Screensaver/blanking problem

    - by Ramon J. Strauff
    I upgraded to 12.10 and then downgraded back to 12.04 LTS because of this screensaver/screen blanking problem: after 10-12 minutes the screen goes black/blank when I don't do anything (idle). I have tried setting, in the Brightness and Lock menu, that the screen should never lock or turn off when idle. This is very annoying when watching movies etc. Would be nice if someone knows a fix. Thanks!
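
    A few commands that commonly stop the blanking, sketched here as things to try rather than a guaranteed fix (the gsettings key presumes a GNOME-based session; the xset calls only last for the current X session):

        gsettings set org.gnome.desktop.session idle-delay 0   # never blank when idle
        xset s off    # disable the X screensaver
        xset -dpms    # disable display power management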

    Read the article

  • Impossible to install Ubuntu 10.10 dual boot with Windows 7 on new Acer desktop computer

    - by Don Myers
    My brother has a brand new Acer desktop with Windows 7. I have done many installs (40+) of Ubuntu starting with 8.10, and have never run into this. I've spent three hours trying to do a dual-boot install of 10.10. When you get to the place where you would normally choose to install as a dual boot or overwrite the existing information on the hard drive, that block is just blank. Nothing. No choices, not even to do a manual partition setup. If you try to go on, you get the message "No root file system is defined. Please correct this from the partitioning menu.", but there is nothing in the partitioning menu. I tried a good 10.04 disc also; the same thing happens with it. I ran a GParted live CD, and it shows the hard drive as sda with 3 partitions on the original. sda1 is a small partition called PQService. sda2 is another small partition called System Reserved, and GParted says it is the boot partition. sda3 is the main partition with the operating system (Windows 7) and all of the empty space. There is a little unallocated space at the very beginning and very end of the hard drive. If I go to Places in the live CD, it shows a 640 GB hard disk called Acer, but it also shows a 640 GB hard disk called System Reserved. They are the same disk; there is just one hard drive. If you click Properties on the System Reserved 640 GB disk, it shows all information as unknown. I had to change the boot order in the BIOS in order to run the live CD. The hard drive, instead of being listed as such, is listed as Raid:Raid Ready. Something about the way this computer is set up is preventing Ubuntu from identifying the hard drive partitions at all during install, even if you were not doing a dual boot and just wanted to overwrite Windows. Is this a bug that needs to be reported? This is a major problem for me and my brother, but also for Ubuntu if new users who want to try Ubuntu find they cannot install it.

    Read the article

  • Modify the number of entries in the list to choose from for bookmarking a site in Firefox

    - by Roberto Giardina
    I am wondering if it is possible to change the number of entries in the list that appears when you right-click on a tab in Firefox and choose "add to bookmarks". That's because I tend to categorize a lot of bookmarks, and the standard choice of 5 entries is too small. Of course, the idea is to not have to click on "choose" further down in the menu. Thanks in advance, and this is a really fantastic website for Ubuntu users. Cheers, Roberto

    Read the article

  • Solution for developers wanting to run a standalone WLS 10.3.6 server against JDev 11.1.1.6.0

    - by Chris Muir
    In my previous post I discussed how to install the 11.1.1.6.0 ADF Runtimes into a standalone WLS 10.3.6 server by using the ADF Runtime installer, not the JDeveloper installer. Yet there's still a problem for developers here, because JDeveloper 11.1.1.6.0 comes coupled with a WLS 10.3.5 server. What if you want to develop, deploy and test with a 10.3.6 server? Have we lost the ability to integrate the IDE and the WLS server, where we can run and stop the server, deploy our apps to the server automatically, and more? JDeveloper actually solved this issue some time back, but not many people will have recognized the feature for what it does, as it wasn't needed until now. Via the Application Server Navigator you can create 2 types of connections: one to a remote "standalone WLS" and another to an "integrated WLS". It's this second option that is useful, because what we can do is install a local standalone WLS 10.3.6 server on our developer PC, then create a separate "integrated WLS" connection to the standalone server. Then, by accessing your application's properties through the Application menu -> Application Properties -> Run -> Bind to Integration Application Server option, we can choose the newly created WLS server connection to work with our application. In this way JDeveloper will now treat the new server as if it were the integrated WLS. It will start the server when we run and deploy our applications, terminate it on request, and so on. Of course, don't forget you still need to install the ADF Runtimes for the server to be able to work with ADF applications. Note there is bug 13917844 lurking in the Application Server Navigator for at least JDev 11.1.1.6.0 and earlier. If you right-click the new connection and select "Start Server Instance", it will often start one of the other existing connections instead (typically the original IntegratedWebLogicServer connection). If you want to manually start the server, you can bypass this by using the Run menu -> Start Server Instance option, which works correctly.

    Read the article

  • Unable to access sound settings

    - by Dumitru Cristian
    I’m unable to access the sound settings in Ubuntu 14.04. Before updating to Ubuntu 14.04, with a right click and System Settings I had access to some settings which allowed me to set different sound levels for different applications. Now I'm unable to do that. Clicking on Sound Settings sends me to a Settings window where sound settings are not available. Actually, as you can see, my System Settings menu is quite reduced (no options for keyboard).

    Read the article

  • How can I remove the Unity Launcher?

    - by Magnus Hoff
    Unfortunately, the Unity Launcher on the left-hand side of the screen takes more valuable space away than the new menu bar gives. Is there any way to get rid of the Launcher? Alternatives I would be satisfied with include: not having the Launcher at all; having the Launcher hide automatically; or having applications open on top of the Launcher (not next to it). (Edit:) Note that I am specifically looking for a way to keep the global menu bar while getting rid of the Launcher.
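
    For the auto-hide alternative, one commonly cited route on Unity of this era is a single dconf write (the exact key path is an assumption and varies between releases; CompizConfig Settings Manager exposes the same option graphically):

        # 1 = auto-hide the Launcher, 0 = always visible
        dconf write /org/compiz/profiles/unity/plugins/unityshell/launcher-hide-mode 1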

    Read the article

  • create launcher in unity to be opened in Terminal

    - by Automat
    I would like to make a launcher pointing to an application that can only be run in a terminal. I had one on Maverick and it worked, and it also added the application to the installed applications. At the moment, on 11.10, I have only made the desktop launcher. If I move it to Unity, it creates a permanent launcher, but it's not in the installed apps menu either. I have installed gnome-tweak-tool. Does anybody have a solution?
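
    For reference, a minimal .desktop file sketch for this case: Terminal=true is what forces the application to open in a terminal, and saving the file under ~/.local/share/applications is what makes it show up among the installed applications (the name and path below are placeholders):

        [Desktop Entry]
        Type=Application
        Name=My Console App
        Exec=/path/to/console-app
        Terminal=true
        Icon=utilities-terminal
        Categories=Utility;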

    Read the article
