Search Results

Search found 27958 results on 1119 pages for 'failed to load viewstate'.


  • trouble connecting to MySql DB (PHP)

    - by user332817
    Hi, I have the following PHP code to connect to my DB:

    <?php
    ob_start();
    $host="localhost"; // Host name
    $username="root"; // MySQL username
    $password=""; // MySQL password
    $db_name="test"; // Database name
    $tbl_name="members"; // Table name

    // Connect to server and select database.
    mysql_connect("$host", "$username", "$password") or die("cannot connect");
    ?>

    However, I get the following error:

    Warning: mysql_connect() [function.mysql-connect]: [2002] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond (trying to connect via tcp://localhost:3306) in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11
    Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11
    Fatal error: Maximum execution time of 30 seconds exceeded in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

    I am able to add a DB and tables via phpMyAdmin, but I can't connect using PHP. Here is a screenshot of my phpMyAdmin page: http://img294.imageshack.us/img294/1589/sqls.jpg Any help would be appreciated; thanks in advance.

    Read the article

  • Why do the usable sizes differ

    - by Raigex
    Again, I don't really know how to phrase the question, so I will explain. I have a video recorder application. I open my camera with:

    cameraRecorder = Camera.open(1); // (this is the front-facing camera)

    And get the camera parameters and all supported preview sizes:

    Camera.Parameters tmpParams = cameraRecorder.getParameters();
    List<Camera.Size> tmpList = tmpParams.getSupportedPreviewSizes();

    One of the preview sizes on the Galaxy Tab 10.1 running ICS (4.0.4) is 800x600, but when I try to set the video size on my MediaRecorder:

    mediaRecorder.setVideoSize(800, 600);

    I get this error:

    12-19 17:27:55.035: E/CameraSource(110): Video dimension (800x600) is unsupported
    12-19 17:27:55.035: E/StagefrightRecorder(110): cameraSource do not init
    12-19 17:27:55.035: E/StagefrightRecorder(110): setupCameraSource failed. (-19)
    12-19 17:27:55.035: E/StagefrightRecorder(110): setupMediaSource is failed. (-19)
    12-19 17:27:55.035: E/StagefrightRecorder(110): setupMPEG4Recording is failed. (-19)
    12-19 17:27:55.035: E/MediaRecorder(30119): start failed: -19

    Does anyone know why this discrepancy might exist? (I know one of the supported record sizes is 1280x720, but that is too big for me.)

    Read the article

  • Using dnnModal.show in your modules and content

    - by Chris Hammond
    One thing that was added in DotNetNuke 6 but hasn’t been covered in great detail is a method called dnnModal.show. Calling this method is fairly straightforward depending on your need, but before we get into how to call/use the method, let’s talk about what it does first. dnnModal.show is a method that gets called via JavaScript and allows you to load a URL into a modal popup window within your DotNetNuke site. Basically, it takes that URL and loads it into an IFrame within the current DotNetNuke...(read more)

    Read the article

  • SQLAuthority News – SQL Server Technical Article – The Data Loading Performance Guide

    - by pinaldave
    The white paper describes load strategies for achieving high-speed data modifications of a Microsoft SQL Server database. “Bulk Load Methods” and “Other Minimally Logged and Metadata Operations” provide an overview of two key and interrelated concepts for high-speed data loading: bulk loading and metadata operations. With this background established, the white paper describes how these methods can be [...]
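    To make the bulk-load idea concrete, here is a minimal C# sketch using the SqlBulkCopy API; the table, columns and connection string are illustrative assumptions, not taken from the white paper:

    using System.Data;
    using System.Data.SqlClient;

    // Minimal sketch: bulk-load an in-memory DataTable into SQL Server.
    var table = new DataTable("Members");
    table.Columns.Add("Id", typeof(int));
    table.Columns.Add("Name", typeof(string));
    table.Rows.Add(1, "Alice");

    using (var conn = new SqlConnection(@"Server=.;Database=test;Integrated Security=true"))
    {
        conn.Open();
        using (var bulk = new SqlBulkCopy(conn))
        {
            bulk.DestinationTableName = "dbo.Members";
            // Whether this path is minimally logged depends on the recovery
            // model and target table, as the white paper discusses.
            bulk.WriteToServer(table);
        }
    }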

    Read the article

  • Learning AngularJS by Example – The Customer Manager Application

    - by dwahlin
    I’m always tinkering around with different ideas, and toward the beginning of 2013 I decided to build a sample application using AngularJS that I call Customer Manager. It’s not exactly the most creative name or concept, but I wanted to build something that highlighted a lot of the different features offered by AngularJS and how they could be used together to build a full-featured app. One of the goals of the application was to ensure that it was approachable by people new to Angular, since I’ve never found overly complex applications great for learning new concepts. The application initially started out small and was used in my AngularJS in 60-ish Minutes video on YouTube, but it has gradually had more and more features added to it and will continue to be enhanced over time. It’ll be used in a new “end-to-end” training course my company is working on for AngularJS as well as in some video courses that will be coming out. Here’s a quick look at what the application home page looks like:

    In this post I’m going to provide an overview of how the application is organized, the back-end options that are available, and some of the features it demonstrates. I’ve already written about some of the features, so if you’re interested check out the following posts:

    - Building an AngularJS Modal Service
    - Building a Custom AngularJS Unique Value Directive
    - Using an AngularJS Factory to Interact with a RESTful Service

    Application Structure

    The structure of the application is shown to the right. The homepage is index.html and is located at the root of the application folder. It defines where application views will be loaded using the ng-view directive and includes script references to AngularJS, the AngularJS routing and animation scripts, plus a few others located in the Scripts folder, and to custom application scripts located in the app folder. The app folder contains all of the key scripts used in the application. There are several techniques that can be used for organizing script files, but after experimenting with several of them I decided that I prefer things in folders such as controllers, views, services, etc. Doing that helps me find things a lot faster and allows me to categorize files (such as controllers) by functionality. My recommendation is to go with whatever works best for you. Anyone who says, “You’re doing it wrong!” should be ignored. Contrary to what some people think, there is no “one right way” to organize scripts and other files. As long as the scripts make it down to the client properly (you’ll likely minify and concatenate them anyway to reduce bandwidth and minimize HTTP calls), the way you organize them is completely up to you. Here’s what I ended up doing for this application:

    - Animation code for some custom animations is located in the animations folder. In addition to AngularJS animations (which are defined using CSS in Content/animations.css), it also animates the initial customer data load using a 3rd party script called GreenSock.
    - Controllers are located in the controllers folder. Some of the controllers are placed in subfolders based upon their functionality while others are placed at the root of the controllers folder since they’re more generic.
    - The directives folder contains the custom directives created for the application.
    - The filters folder contains the custom filters created for the application that filter city/state and product information.
    - The partials folder contains partial views. This includes things like modal dialogs used in the application.
    - The services folder contains AngularJS factories and services used for various purposes in the application. Most of the scripts in this folder provide data functionality.
    - The views folder contains the different views used in the application. Like the controllers folder, the views are organized into subfolders based on their functionality.

    Back-End Services

    The Customer Manager application (grab it from GitHub) provides two different options on the back-end: ASP.NET Web API and Node.js. The ASP.NET Web API back-end uses Entity Framework for data access and stores data in SQL Server (LocalDb). The other option on the back-end is Node.js, Express, and MongoDB.

    Using the ASP.NET Web API Back-End

    To run the application using the ASP.NET Web API/SQL Server back-end, open the .sln file at the root of the project in Visual Studio 2012 or higher (the free Express 2013 for Web version is fine). Press F5 and a browser will automatically launch and display the application.

    Using the Node.js Back-End

    To run the application using the Node.js/MongoDB back-end, follow these steps:

    1. In the CustomerManager directory execute 'npm install' to install Express, MongoDB and Mongoose (package.json).
    2. Load sample data into MongoDB by performing the following steps:
       - Execute 'mongod' to start the MongoDB daemon.
       - Navigate to the CustomerManager directory (the one that has initMongoCustData.js in it), then execute 'mongo' to start the MongoDB shell.
       - Enter the following in the mongo shell to load the seed files that handle seeding the database with initial data:
         use custmgr
         load("initMongoCustData.js")
         load("initMongoSettingsData.js")
         load("initMongoStateData.js")
    3. Start the Node/Express server by navigating to the CustomerManager/server directory and executing 'node app.js'.
    4. View the application at http://localhost:3000 in your browser.

    Key Features

    The Customer Manager application certainly doesn’t cover every feature provided by AngularJS (as mentioned, the intent was to keep it as simple as possible), but it does provide insight into several key areas:

    - Using factories and services as reusable data services (see the app/services folder)
    - Creating custom directives (see the app/directives folder)
    - Custom paging (see app/views/customers/customers.html and app/controllers/customers/customersController.js)
    - Custom filters (see app/filters)
    - Showing custom modal dialogs with a reusable service (see app/services/modalService.js)
    - Making Ajax calls using a factory (see app/services/customersService.js)
    - Using Breeze to retrieve and work with data (see app/services/customersBreezeService.js). Switch the application to use the Breeze factory by opening app/services.config.js and changing the useBreeze property to true.
    - Intercepting HTTP requests to display a custom overlay during Ajax calls (see app/directives/wcOverlay.js)
    - Custom animations using the GreenSock library (see app/animations/listAnimations.js)
    - Creating custom AngularJS animations using CSS (see Content/animations.css)
    - JavaScript patterns for defining controllers, services/factories, directives, filters, and more (see any JavaScript file in the app folder)
    - Card View and List View display of data (see app/views/customers/customers.html and app/controllers/customers/customersController.js)
    - Using AngularJS validation functionality (see app/views/customerEdit.html, app/controllers/customerEditController.js, and app/directives/wcUnique.js)
    - More…

    Conclusion

    I’ll be enhancing the application even more over time and welcome contributions as well. Tony Quinn contributed the initial Node.js/MongoDB code, which is very cool to have as a back-end option. Access the standard application here and a version that has custom routing in it here. Additional information about the custom routing can be found in this post.

    Read the article

  • Best Practice for Software Maintenance Console

    - by Ben-G
    I am looking for a list of must-have maintenance/administration features/components/services for enterprise applications. I know the following common components:

    - Configuration Cockpit (shows current configuration mistakes/issues)
    - Load-Analysis (shows the current load on different system components)
    - Vitality measures
    - Log File Access
    - System Restart Capability
    - Backup/Restore Capability

    Are there any widely accepted services/features which are included in any software with a focus on reliability and maintainability?

    Read the article

  • New Feature in ODI 11.1.1.6: ODI for Big Data

    - by Julien Testut
    By Ananth Tirupattur

    Starting with Oracle Data Integrator 11.1.1.6.0, ODI is offering a solution to process Big Data. This post provides an overview of this feature. With all the buzz around Big Data, and before getting into the details of ODI for Big Data, I will provide a brief introduction to Big Data and the Oracle solution for Big Data.

    So, what is Big Data? Big Data includes:

    - structured data (this includes data from relational data stores and XML data stores)
    - semi-structured data (this includes data from weblogs)
    - unstructured data (this includes data from text blobs and images)

    Traditionally, business decisions are based on the information gathered from transactional data. For example, transactional data from CRM applications is fed to a decision system for analysis and decision making. Products such as ODI play a key role in enabling decision systems. However, with the emergence of massive amounts of semi-structured and unstructured data, it is important for decision systems to include them in the analysis to achieve better decision-making capability. While there is an abundance of opportunities for businesses to gain competitive advantages, processing Big Data has challenges. The challenges of processing Big Data include:

    - Volume of data
    - Velocity of data - the high rate at which data is generated
    - Variety of data

    In order to address these challenges and convert them into opportunities, we need an appropriate framework, platform and the right set of tools. Hadoop is an open source framework which is a highly scalable, fault-tolerant system for storage and processing of large amounts of data. Hadoop provides two key services: distributed and reliable storage, called the Hadoop Distributed File System (HDFS), and a framework for parallel data processing called Map-Reduce. Innovations in Hadoop and its related technology continue to rapidly evolve; therefore, it is highly recommended to follow information on the web to keep up with the latest information.

    Oracle's vision is to provide a comprehensive solution to address the challenges faced by Big Data. Oracle is providing the necessary hardware, software and tools for processing Big Data. The Oracle solution includes:

    - Big Data Appliance
    - Oracle NoSQL Database
    - Cloudera distribution for Hadoop
    - Oracle R Enterprise - R is a statistical package which is very popular among data scientists
    - ODI solution for Big Data
    - Oracle Loader for Hadoop for loading data from Hadoop to Oracle

    Further details can be found here: http://www.oracle.com/us/products/database/big-data-appliance/overview/index.html

    ODI Solution for Big Data:

    ODI's goal is to minimize the need to understand the complexity of the Hadoop framework and to simplify the adoption of processing Big Data seamlessly in an enterprise. ODI provides the capabilities for an integrated architecture for processing Big Data. This includes the capability to load data into Hadoop, process data in Hadoop, and load data from Hadoop into Oracle. ODI is expanding its support for Big Data by providing the following out-of-the-box Knowledge Modules (KMs):

    - IKM File to Hive (LOAD DATA) - Load unstructured data from a file (local file system or HDFS) into Hive
    - IKM Hive Control Append - Transform and validate structured data on Hive
    - IKM Hive Transform - Transform unstructured data on Hive
    - IKM File/Hive to Oracle (OLH) - Load processed data in Hive to Oracle
    - RKM Hive - Reverse engineer Hive tables to generate models

    Using the loading KM you can map files (local and HDFS files) to the corresponding Hive tables. For example, you can map weblog files categorized by date into a corresponding partitioned Hive table schema.

    Using the Hive Control Append KM you can validate and transform data in Hive. In the below example, two source Hive tables are joined and mapped to a target Hive table.

    The Hive Transform KM facilitates processing of semi-structured data in Hive. In the below example, the data from a weblog is processed using a Perl script and mapped to a target Hive table.

    Using the Oracle Loader for Hadoop (OLH) KM you can load data from a Hive table or HDFS to a corresponding table in Oracle. OLH is available as a standalone product. ODI greatly enhances OLH capability by generating the configuration and mapping files for OLH based on the configuration provided in the interface and KM options. ODI seamlessly invokes OLH when executing the scenario. In the below example, an HDFS file is mapped to a table in Oracle.

    Development and Deployment: The following diagram illustrates the development and deployment of the ODI solution for Big Data. Using ODI Studio on your development machine, create and develop the ODI solution for processing Big Data by connecting to a MySQL DB or Oracle database on a BDA machine or Hadoop cluster. Schedule the ODI scenarios to be executed on the ODI agent deployed on the BDA machine or Hadoop cluster.

    The ODI solution for Big Data provides several exciting new capabilities to facilitate the adoption of Big Data in an enterprise. You can find more information about the Oracle Big Data connectors on OTN. You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview

    Read the article

  • Website speed issue

    - by Jose David Garcia Llanos
    I've exhausted all my options. The website that I am developing had a load speed of 1.12 seconds, and for some unknown reason it is now taking 2 seconds to connect and then load. I've tried deactivating loads of plugins, but it is not the plugins, the JavaScript, or the images. http://tools.pingdom.com/fpt/#!/ONC4QfFJt/n1bar.com I've tried re-configuring the cache and the .htaccess file, but the first connection is still taking a long time. Can someone help me find the problem?

    Read the article

  • Collide with rotation of the object

    - by Lahiru
    I'm developing a mirror for a laser beam (a ball sprite). I'm trying to redirect the laser beam according to the rotation degree of the mirror (a rectangle). How can I make the ball bounce off at the correct angle when the colliding object is rotated (say, 45 degrees), rather than simply bouncing straight back? Here is a screenshot of my work, and here is my code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Audio;
    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.GamerServices;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Input;
    using Microsoft.Xna.Framework.Media;

    namespace collision
    {
        /// <summary>
        /// This is the main type for your game
        /// </summary>
        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;

            Texture2D ballTexture;
            Rectangle ballBounds;
            Vector2 ballPosition;
            Vector2 ballVelocity;
            float ballSpeed = 30f;

            Texture2D blockTexture;
            Rectangle blockBounds;
            Vector2 blockPosition;
            private Vector2 origin;

            KeyboardState keyboardState;

            // Font
            SpriteFont Font1;
            Vector2 FontPos;
            private String displayText;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            /// <summary>
            /// Allows the game to perform any initialization it needs to before starting to run.
            /// This is where it can query for any required services and load any non-graphic
            /// related content. Calling base.Initialize will enumerate through any components
            /// and initialize them as well.
            /// </summary>
            protected override void Initialize()
            {
                ballPosition = new Vector2(this.GraphicsDevice.Viewport.Width / 2, this.GraphicsDevice.Viewport.Height * 0.25f);
                blockPosition = new Vector2(this.GraphicsDevice.Viewport.Width / 2, this.GraphicsDevice.Viewport.Height / 2);
                ballVelocity = new Vector2(0, 1);
                base.Initialize();
            }

            /// <summary>
            /// LoadContent will be called once per game and is the place to load
            /// all of your content.
            /// </summary>
            protected override void LoadContent()
            {
                // Create a new SpriteBatch, which can be used to draw textures.
                spriteBatch = new SpriteBatch(GraphicsDevice);
                ballTexture = Content.Load<Texture2D>("ball");
                blockTexture = Content.Load<Texture2D>("mirror");

                // Create rectangles based off the size of the textures.
                ballBounds = new Rectangle((int)(ballPosition.X - ballTexture.Width / 2), (int)(ballPosition.Y - ballTexture.Height / 2), ballTexture.Width, ballTexture.Height);
                blockBounds = new Rectangle((int)(blockPosition.X - blockTexture.Width / 2), (int)(blockPosition.Y - blockTexture.Height / 2), blockTexture.Width, blockTexture.Height);

                origin.X = blockTexture.Width / 2;
                origin.Y = blockTexture.Height / 2;

                Font1 = Content.Load<SpriteFont>("SpriteFont1");
                FontPos = new Vector2(graphics.GraphicsDevice.Viewport.Width - 100, 20);
            }

            /// <summary>
            /// UnloadContent will be called once per game and is the place to unload
            /// all content.
            /// </summary>
            protected override void UnloadContent()
            {
                // TODO: Unload any non ContentManager content here
            }

            private float RotationAngle;
            float circle = MathHelper.Pi * 2;
            float angle;

            /// <summary>
            /// Allows the game to run logic such as updating the world,
            /// checking for collisions, gathering input, and playing audio.
            /// </summary>
            /// <param name="gameTime">Provides a snapshot of timing values.</param>
            protected override void Update(GameTime gameTime)
            {
                // Allows the game to exit.
                if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                    this.Exit();

                // Check for collision between the ball and the block, or if the ball is outside the bounds of the screen.
                if (ballBounds.Intersects(blockBounds) || !GraphicsDevice.Viewport.Bounds.Contains(ballBounds))
                {
                    // We have a simple collision!
                    // If it has hit, swap the direction of the ball, and update its position.
                    ballVelocity = -ballVelocity;
                    ballPosition += ballVelocity * ballSpeed;
                }
                else
                {
                    // Move the ball a bit.
                    ballPosition += ballVelocity * ballSpeed;
                }

                // Update bounding boxes.
                ballBounds.X = (int)ballPosition.X;
                ballBounds.Y = (int)ballPosition.Y;
                blockBounds.X = (int)blockPosition.X;
                blockBounds.Y = (int)blockPosition.Y;

                keyboardState = Keyboard.GetState();
                float val = 1.568017f / 90;
                if (keyboardState.IsKeyDown(Keys.Space))
                    RotationAngle = RotationAngle + (float)Math.PI;
                if (keyboardState.IsKeyDown(Keys.Left))
                    RotationAngle = RotationAngle - val;

                angle = (float)Math.PI / 4.0f; // 45 degrees
                RotationAngle = angle;
                // RotationAngle = RotationAngle % circle;
                displayText = RotationAngle.ToString();

                base.Update(gameTime);
            }

            /// <summary>
            /// This is called when the game should draw itself.
            /// </summary>
            /// <param name="gameTime">Provides a snapshot of timing values.</param>
            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);

                spriteBatch.Begin();
                // Find the center of the string.
                Vector2 FontOrigin = Font1.MeasureString(displayText) / 2;
                spriteBatch.DrawString(Font1, displayText, FontPos, Color.White, 0, FontOrigin, 1.0f, SpriteEffects.None, 0.5f);
                spriteBatch.Draw(ballTexture, ballPosition, Color.White);
                spriteBatch.Draw(blockTexture, blockPosition, null, Color.White, RotationAngle, origin, 1.0f, SpriteEffects.None, 0f);
                spriteBatch.End();

                base.Draw(gameTime);
            }
        }
    }
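    A sketch of one way to answer the question posed above: instead of negating the velocity on collision, reflect it about the mirror's surface normal using v' = v - 2(v · n)n. The normal derivation below assumes the unrotated mirror's reflective face points along -Y, which may need adjusting for the actual sprite:

    // Hypothetical replacement for "ballVelocity = -ballVelocity;" inside Update().
    // (sin θ, -cos θ) is the unrotated normal (0, -1) rotated by RotationAngle.
    Vector2 normal = new Vector2((float)Math.Sin(RotationAngle), -(float)Math.Cos(RotationAngle));
    normal.Normalize();

    // Reflect: v' = v - 2 (v · n) n
    float dot = Vector2.Dot(ballVelocity, normal);
    ballVelocity = ballVelocity - 2f * dot * normal;
    ballPosition += ballVelocity * ballSpeed;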

    Read the article

  • New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    - by Alexandra Georgescu
    After discussing in the first part of this article new options in Essbase for general backup and restore, this second part will deal with the also rather new feature of Transaction Logging and Replay, which was released in version 11.1, enhancing existing restore options.

    Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1).

    Even if backups are done on a regular, frequent basis, subsequent data entries, loads or calculations would not be reflected in a restored database. Activating Transaction Logging can fill that gap and provides you with an option to capture these post-backup transactions for later replay. The following table shows which transactions can be logged when Transaction Logging is enabled:

    In order to activate its usage, corresponding statements can be added to the Essbase.cfg file, using the TRANSACTIONLOGLOCATION command. The complete syntax reads:

    TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Here appname and dbname are optional parameters giving you the chance, in combination with the ENABLE or DISABLE keyword, to set Transaction Logging for certain applications or databases or to exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If appname and dbname are not defined, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs. This directory must already exist or needs to be created before using it for log information being written to it. NATIVE is a reserved keyword that shouldn’t be changed.

    The following example shows how to first enable logging on a more general level for all databases in the application Sample, followed by a disabling statement on a more granular level for only the Basic database in application Sample, hence excluding it from being logged:

    TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
    TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE

    Tip: After applying changes to the configuration file you must restart the Essbase server in order to initialize the settings.

    A replay of logged transactions, should it be required after restoring a database, can be done only by administrators. The following options are available: In Administration Services, select Replay Transactions on the right-click menu on the database. Here you can select to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time. Or you can replay transactions selectively based on a range of sequence IDs, which can be accessed using Display Transactions on the right-click menu on the database. These sequence IDs (0, 1, 2 … 7 in the screenshot below) are assigned to each logged transaction, indicating the order in which the transaction was performed. This helps to ensure the integrity of the restored data after a replay, as the replay of transactions is enforced in the same order in which they were originally performed. So, for example, a calculation originally run after a data load cannot be replayed before having replayed the data load first. After a transaction is replayed, you can replay only transactions with a greater sequence ID. For example, replaying the transaction with a sequence ID of 4 includes all preceding transactions, while afterwards you can only replay transactions with a sequence ID of 5 or greater.

    Tip: After restoring a database from a backup you should always completely replay all logged transactions, which were executed after the backup, before executing new transactions.

    But not only the transaction information itself needs to be logged and stored in a specified directory as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory:

    ARBORPATH/app/appname/dbname/Replay

    These files are then used during the replay of a logged transaction. By default Essbase archives only data load and rules files for client data loads, but in order to specify the type of data to archive when logging transactions you can use the command TRANSACTIONLOGDATALOADARCHIVE as an additional entry in the Essbase.cfg file. The syntax for the statement is:

    TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION]

    The [appname [dbname]] argument behaves the same as for TRANSACTIONLOGLOCATION; the valid values for the OPTION argument are the following: Make the respective setting for which file copies should be logged, considering from which location transactions usually take place. Selecting the NONE option prevents Essbase from saving the respective files, and the data load cannot be replayed. In this case you must first manually load the data before you can replay the transactions.

    Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded.

    You can find more detailed information in the following documents:

    - Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1)
    - Oracle Essbase Online Documentation (rel. 11.1.2.1)
    - Enterprise Performance Management System Documentation (including previous releases)

    Or on the Oracle Technology Network. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Disclaimer: All methods and features mentioned in this article must be considered and tested carefully related to your environment, processes and requirements. As guidance please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences due to the use or implementation of these features.

    Read the article

  • How can I remove old kernels/install new ones when /boot is full?

    - by Marcel
    I know this question has been asked many times before; however, with me it is just a bit different, I guess.

    # df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda3       224G  5.2G  208G   3% /
    udev            1.9G  4.0K  1.9G   1% /dev
    tmpfs           777M  260K  777M   1% /run
    none            5.0M     0  5.0M   0% /run/lock
    none            1.9G     0  1.9G   0% /run/shm
    /dev/sda2        90M   88M     0 100% /boot
    /dev/sda6       1.9G  514M  1.3G  29% /tmp

    My boot partition is full.

    Current kernel:

    # uname -r
    3.2.0-35-generic

    All kernels:

    # dpkg --list | grep linux-image
    ii  linux-image-3.2.0-32-generic  3.2.0-32.51  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    ii  linux-image-3.2.0-34-generic  3.2.0-34.53  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    ii  linux-image-3.2.0-35-generic  3.2.0-35.55  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    iF  linux-image-3.2.0-37-generic  3.2.0-37.58  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    iF  linux-image-3.2.0-38-generic  3.2.0-38.60  Linux kernel image for version 3.2.0 on 64 bit x86 SMP
    iU  linux-image-generic           3.2.0.37.45  Generic Linux kernel image

    So I thought of removing the 3.2.0-32-generic kernel with:

    # sudo apt-get purge linux-image-3.2.0-32-generic
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     linux-generic : Depends: linux-headers-generic (= 3.2.0.37.45) but 3.2.0.38.46 is to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    No success. When I try apt-get -f install it also fails:

    # apt-get -f install
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following packages were automatically installed and are no longer required:
      linux-headers-3.2.0-34 linux-headers-3.2.0-35 linux-headers-3.2.0-34-generic linux-headers-3.2.0-35-generic
    Use 'apt-get autoremove' to remove them.
    The following extra packages will be installed:
      linux-generic linux-image-generic
    The following packages will be upgraded:
      linux-generic linux-image-generic
    2 upgraded, 0 newly installed, 0 to remove and 9 not upgraded.
    5 not fully installed or removed.
    Need to get 0 B/4,334 B of archives.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? y
    Setting up initramfs-tools (0.99ubuntu13.1) ...
    update-initramfs: deferring update (trigger activated)
    Setting up linux-image-3.2.0-37-generic (3.2.0-37.58) ...
    Running depmod.
    update-initramfs: deferring update (hook will be called later)
    The link /initrd.img is a dangling link to /boot/initrd.img-3.2.0-38-generic
    Examining /etc/kernel/postinst.d.
    run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic /boot/vmlinuz-3.2.0-37-generic
    update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic
    gzip: stdout: No space left on device
    E: mkinitramfs failure cpio 141 gzip 1
    update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic with 1.
    run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
    Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-37-generic.postinst line 1010.
    dpkg: error processing linux-image-3.2.0-37-generic (--configure):
     subprocess installed post-installation script returned error exit status 2
    Setting up linux-image-3.2.0-38-generic (3.2.0-38.60) ...
    Running depmod.
    update-initramfs: deferring update (hook will be called later)
    The link /initrd.img is a dangling link to /boot/initrd.img-3.2.0-37-generic
    Examining /etc/kernel/postinst.d.
    run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-38-generic /boot/vmlinuz-3.2.0-38-generic
    update-initramfs: Generating /boot/initrd.img-3.2.0-38-generic
    gzip: stdout: No space left on device
    E: mkinitramfs failure cpio 141 gzip 1
    update-initramfs: failed for /boot/initrd.img-3.2.0-38-generic with 1.
    run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
    Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-38-generic.postinst line 1010.
    dpkg: error processing linux-image-3.2.0-38-generic (--configure):
     subprocess installed post-installation script returned error exit status 2
    dpkg: dependency problems prevent configuration of linux-image-generic:
     linux-image-generic depends on linux-image-3.2.0-37-generic; however:
      Package linux-image-3.2.0-37-generic is not configured yet.
    dpkg: error processing linux-image-generic (--configure):
     dependency problems - leaving unconfigured
    dpkg: dependency problems prevent configuration of linux-generic:
     linux-generic depends on linux-image-generic (= 3.2.0.37.45); however:
      Package linux-image-generic is not configured yet.
     linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however:
      Version of linux-headers-generic on system is 3.2.0.38.46.
    dpkg: error processing linux-generic (--configure):
     dependency problems - leaving unconfigured
    Processing triggers for initramfs-tools ...
    No apport report written because the error message indicates its a followup error from a previous failure.
    No apport report written because MaxReports is reached already
    update-initramfs: Generating /boot/initrd.img-3.2.0-35-generic
    gzip: stdout: No space left on device
    E: mkinitramfs failure cpio 141 gzip 1
    update-initramfs: failed for /boot/initrd.img-3.2.0-35-generic with 1.
    dpkg: error processing initramfs-tools (--configure):
     subprocess installed post-installation script returned error exit status 1
    No apport report written because MaxReports is reached already
    Errors were encountered while processing:
     linux-image-3.2.0-37-generic
     linux-image-3.2.0-38-generic
     linux-image-generic
     linux-generic
     initramfs-tools
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any help would really be appreciated.

    Update: I did:

    sudo rm /boot/*-3.2.0-32-generic /boot/*-3.2.0-34-generic

    After that, the following problem with apt-get -f install:

    root@localhost:/# apt-get -f install
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following extra packages will be installed:
      linux-generic
    The following packages will be upgraded:
      linux-generic
    1 upgraded, 0 newly installed, 0 to remove and 9 not upgraded.
    1 not fully installed or removed.
    Need to get 0 B/1,722 B of archives.
    After this operation, 0 B of additional disk space will be used.
    Do you want to continue [Y/n]? y
    dpkg: dependency problems prevent configuration of linux-generic:
     linux-generic depends on linux-image-generic (= 3.2.0.37.45); however:
      Version of linux-image-generic on system is 3.2.0.38.46.
     linux-generic depends on linux-headers-generic (= 3.2.0.37.45); however:
      Version of linux-headers-generic on system is 3.2.0.38.46.
    dpkg: error processing linux-generic (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    Errors were encountered while processing:
     linux-generic
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Depencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data that inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull in a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5 you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - Requirement for the host application to have a reference to JSON.NET or else get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with this. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is it doesn't handle all aspects of Reflection.
    Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process: loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed.

    Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

    /// <summary>
    /// Creates an instance of a type based on a string. Assumes that the type's
    /// </summary>
    /// <param name="typeName">Common name of the type</param>
    /// <param name="args">Any constructor parameters</param>
    /// <returns></returns>
    public static object CreateInstanceFromString(string typeName, params object[] args)
    {
        object instance = null;
        Type type = null;
        try
        {
            type = GetTypeFromName(typeName);
            if (type == null)
                return null;
            instance = Activator.CreateInstance(type, args);
        }
        catch
        {
            return null;
        }
        return instance;
    }

    /// <summary>
    /// Helper routine that looks up a type name and tries to retrieve the
    /// full type reference in the actively executing assemblies.
    /// </summary>
    /// <param name="typeName"></param>
    /// <returns></returns>
    public static Type GetTypeFromName(string typeName)
    {
        Type type = null;

        // Let default name binding find it
        type = Type.GetType(typeName, false);
        if (type != null)
            return type;

        // look through assembly list
        var assemblies = AppDomain.CurrentDomain.GetAssemblies();

        // try to find manually
        foreach (Assembly asm in assemblies)
        {
            type = asm.GetType(typeName, false);
            if (type != null)
                break;
        }
        return type;
    }

    To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached).
    Here's what the factory function looks like in JsonSerializationUtils:

    /// <summary>
    /// Dynamically creates an instance of JSON.NET
    /// </summary>
    /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
    /// <returns>Dynamic JsonSerializer instance</returns>
    public static dynamic CreateJsonNet(bool throwExceptions = true)
    {
        if (JsonNet != null)
            return JsonNet;

        lock (SyncLock)
        {
            if (JsonNet != null)
                return JsonNet;

            // Try to create instance
            dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

            if (json == null)
            {
                try
                {
                    var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                    json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                }
                catch (Exception ex)
                {
                    if (throwExceptions)
                        throw;
                    return null;
                }
            }

            if (json == null)
                return null;

            json.ReferenceLoopHandling = (dynamic)ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

            // Enums as strings in JSON
            dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
            json.Converters.Add(enumConverter);

            JsonNet = json;
        }

        return JsonNet;
    }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading.

    Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property.

    Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection.

    As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks: two each for serialization and deserialization, for string and file. Here's what the File Serialization looks like:

    /// <summary>
    /// Serializes an object instance to a JSON file.
    /// </summary>
    /// <param name="value">the value to serialize</param>
    /// <param name="fileName">Full path to the file to write out with JSON.</param>
    /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
    /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
    /// <returns>true or false</returns>
    public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false)
    {
        dynamic writer = null;
        FileStream fs = null;
        try
        {
            Type type = value.GetType();

            var json = CreateJsonNet(throwExceptions);
            if (json == null)
                return false;

            fs = new FileStream(fileName, FileMode.Create);
            var sw = new StreamWriter(fs, Encoding.UTF8);

            writer = Activator.CreateInstance(JsonTextWriterType, sw);

            if (formatJsonOutput)
                writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

            writer.QuoteChar = '"';
            json.Serialize(writer, value);
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
            if (throwExceptions)
                throw;
            return false;
        }
        finally
        {
            if (writer != null)
                writer.Close();
            if (fs != null)
                fs.Close();
        }
        return true;
    }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.

    For full circle operation here's the DeserializeFromFile() version:

    /// <summary>
    /// Deserializes an object from file and returns a reference.
    /// </summary>
    /// <param name="fileName">name of the file to serialize to</param>
    /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
    /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param>
    /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
    /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
    public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
    {
        dynamic json = CreateJsonNet(throwExceptions);
        if (json == null)
            return null;

        object result = null;
        dynamic reader = null;
        FileStream fs = null;
        try
        {
            fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
            var sr = new StreamReader(fs, Encoding.UTF8);
            reader = Activator.CreateInstance(JsonTextReaderType, sr);
            result = json.Deserialize(reader, objectType);
            reader.Close();
        }
        catch (Exception ex)
        {
            Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
            if (throwExceptions)
                throw;
            return null;
        }
        finally
        {
            if (reader != null)
                reader.Close();
            if (fs != null)
                fs.Close();
        }
        return result;
    }

    This code is a little more compact since there are no prettifying options to set.
    Here JsonTextReader is created dynamically and it receives the output from the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component and there's very little code to make this work now. The whole provider looks like this:

    /// <summary>
    /// Reads and Writes configuration settings in .NET config files and
    /// sections. Allows reading and writing to default or external files
    /// and specification of the configuration section that settings are
    /// applied to.
    /// </summary>
    public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
        where TAppConfiguration : AppConfiguration, new()
    {
        /// <summary>
        /// Optional - the Configuration file where configuration settings are
        /// stored in. If not specified uses the default Configuration Manager
        /// and its default store.
        /// </summary>
        public string JsonConfigurationFile
        {
            get { return _JsonConfigurationFile; }
            set { _JsonConfigurationFile = value; }
        }
        private string _JsonConfigurationFile = string.Empty;

        public override bool Read(AppConfiguration config)
        {
            var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
            if (newConfig == null)
            {
                if (Write(config))
                    return true;
                return false;
            }

            DecryptFields(newConfig);
            DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
            return true;
        }

        /// <summary>
        /// Return
        /// </summary>
        /// <typeparam name="TAppConfig"></typeparam>
        /// <returns></returns>
        public override TAppConfig Read<TAppConfig>()
        {
            var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
            if (result != null)
                DecryptFields(result);
            return result;
        }

        /// <summary>
        /// Write configuration to XmlConfigurationFile location
        /// </summary>
        /// <param name="config"></param>
        /// <returns></returns>
        public override bool Write(AppConfiguration config)
        {
            EncryptFields(config);

            bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

            // Have to decrypt again to make sure the properties are readable afterwards
            DecryptFields(config);

            return result;
        }
    }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in JsonSerializationUtils(). From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already, there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
    And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob' like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the performance of the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important because both read and write operations typically happen once on first access and then every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. But for some operations that are not pivotal to a component or application and only used under certain circumstances, dynamic loading can be beneficial to avoid having to ship extra files and loading down distributions. These days when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in .NET, C#
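    One gap in the listings above: ReflectionUtils.GetStaticProperty() is called but never shown. A minimal sketch of such a helper, based only on the article's description (type lookup plus Type.InvokeMember()) and not the actual West Wind implementation, could look like this:

    using System;
    using System.Reflection;

    public static class ReflectionHelpers
    {
        /// <summary>
        /// Reads a static property or enum member by type and member name.
        /// Illustrative sketch only - not the West Wind ReflectionUtils code.
        /// </summary>
        public static object GetStaticProperty(string typeName, string memberName)
        {
            // Resolve the type name against all loaded assemblies, mirroring
            // the GetTypeFromName() helper shown earlier in the article.
            Type type = Type.GetType(typeName, false);
            if (type == null)
            {
                foreach (Assembly asm in AppDomain.CurrentDomain.GetAssemblies())
                {
                    type = asm.GetType(typeName, false);
                    if (type != null)
                        break;
                }
            }
            if (type == null)
                return null;

            try
            {
                // Static properties and enum members (which are static fields)
                // can both be read through InvokeMember.
                return type.InvokeMember(memberName,
                    BindingFlags.Static | BindingFlags.Public |
                    BindingFlags.GetField | BindingFlags.GetProperty,
                    null, null, null);
            }
            catch
            {
                return null;
            }
        }
    }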

    Read the article

  • Say goodbye to System.Reflection.Emit (any dynamic proxy generation) in WinRT

    - by mbrit
tl;dr - Forget any form of dynamic code emitting in Metro-style. It's not going to happen.

Over the past week or so I've been trying to get Moq (the popular open source TDD mocking framework) to work on WinRT. Irritatingly, the day before Release Preview was released it was actually working on Consumer Preview. However, in Release Preview (RP) the System.Reflection.Emit namespace is gone. Forget any form of dynamic code generation and/or MSIL injection.

This kills off any project based on the popular Castle Project Dynamic Proxy component, of which Moq is one example. At this point in time you cannot perform any form of mocking using dynamic injection in your Metro-style unit testing endeavours.

So let me take you through my journey on this, so that others don't have to...

The headline fact is that you cannot load any assembly that you create at runtime. WinRT supports one Assembly.Load method, and that takes the name of an assembly. That assembly has to be placed within the deployment folder of your app. You cannot give it a filename or a stream. The methods are there, but private - try to invoke them using Reflection and you'll be met with a caspol exception.

You can, in theory, use Rotor to replace SRE. It's all there, but again, you can't load anything you create.

You can't write to your deployment folder from within your Metro-style app. But can you use another service on the machine to move a file that you create into the deployment folder and load it? Not really. The networking stack in Metro-style is intentionally "damaged" to prevent socket communication from Metro-style to any endpoint on the local machine. (It just times out.) This militates against an approach where your Metro-style app signals a properly installed service on the machine to create proxies on its behalf. If you wanted to do this, you'd have to route the calls through a C&C server somewhere.

The reason why Microsoft has done this is obvious - taking out SRE now means they don't have to do it in an emergency later. The collateral damage in removing SRE is that you can't do mocking in test mode, but you also can't do any form of injection in production mode. Enterprise apps in particular have plenty of reasons to want that last capability. (For clarity, the risk here is in allowing a nefarious program to download instructions from a C&C server and make up executable code on the fly to run, getting around the marketplace restrictions.)

So, two things:

- System.Reflection.Emit is gone in Metro-style/WinRT. Get over it - dynamic, on-the-fly code generation is not going to happen.
- I've more or less got a version of Moq working in Metro-style. This is based on the idea of "baking" the dynamic proxies before you use them. You can find more information here: https://github.com/mbrit/moqrt
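To make the one supported path concrete, here's a minimal sketch of loading a pre-deployed ("baked") assembly by name and activating a type from it. The assembly and type names are hypothetical, and the exact Assembly.Load overload available in the Metro profile should be verified against the RP surface area:

// "BakedProxies" is a hypothetical assembly shipped inside the app package.
Assembly asm = Assembly.Load(new AssemblyName("BakedProxies"));

// Ordinary, non-emitting reflection still works:
Type proxyType = asm.GetType("BakedProxies.UserServiceProxy");
object proxy = Activator.CreateInstance(proxyType);

// What you cannot do: Assembly.Load(byte[]), loading from a file path or
// stream, or anything from System.Reflection.Emit - those entry points are
// private or gone entirely in Release Preview.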

    Read the article

  • Cannot connect to the Internet

    - by Pratap
My desktop has a wired broadband connection - it shows as connected, but no web page will load in Ubuntu 12.04, although it works fine with Windows 7. My laptop's wireless connection also shows as connected, but again no web page loads. Please help. Running sudo dhclient eth1 on my laptop produces no output - see the screen output:

prajna@LAPTOP:~$ sudo dhclient eth1
[sudo] password for prajna:
prajna@LAPTOP:~$ sudo dhclient eth1
[sudo] password for prajna:

After giving the password, the screen simply returns to prajna@LAPTOP:~$ some time later.

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use, code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage.

About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, in JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

Hard Link Issues in a Component Library

Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want to force a hard dependency on JSON.NET onto users unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually use the JSON configuration features of the library.

Enter Dynamic Loading

So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and gives more flexibility with version number differences now and in the future. But there are also a couple of downsides:

- No assembly reference means only dynamic access - no compiler type checking or Intellisense.
- The host application must have a reference to JSON.NET or else you get runtime errors.

The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with them. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project - if this is a Web project, it'll likely be there already.

So there are a few things needed to make this work:

- Dynamically create an instance and optionally attempt to load the assembly (if not loaded)
- Load types into dynamic variables
- Use Reflection for a few tasks like statics/enums

The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
Specifically, it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

Dynamic Object Instantiation

The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process - finding the type from a string value, since we don't have a hard type reference, and potentially having to load the assembly first. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects assemblies are loaded just in time, only when they are accessed.

Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

/// <summary>
/// Creates an instance of a type based on a string. Assumes that the type's
/// assembly is already loaded.
/// </summary>
/// <param name="typeName">Common name of the type</param>
/// <param name="args">Any constructor parameters</param>
/// <returns></returns>
public static object CreateInstanceFromString(string typeName, params object[] args)
{
    object instance = null;
    Type type = null;
    try
    {
        type = GetTypeFromName(typeName);
        if (type == null)
            return null;
        instance = Activator.CreateInstance(type, args);
    }
    catch
    {
        return null;
    }
    return instance;
}

/// <summary>
/// Helper routine that looks up a type name and tries to retrieve the
/// full type reference in the actively executing assemblies.
/// </summary>
/// <param name="typeName"></param>
/// <returns></returns>
public static Type GetTypeFromName(string typeName)
{
    Type type = null;

    // Let default name binding find it
    type = Type.GetType(typeName, false);
    if (type != null)
        return type;

    // look through the assembly list
    var assemblies = AppDomain.CurrentDomain.GetAssemblies();

    // try to find it manually
    foreach (Assembly asm in assemblies)
    {
        type = asm.GetType(typeName, false);
        if (type != null)
            break;
    }
    return type;
}

To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading the assembly when instantiation fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET serializer instance is thread safe and quite a bit faster when cached).
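As a quick aside, here's what using these helpers looks like in isolation - the StringBuilder type is just a stand-in for any type in a loaded assembly:

// Create a StringBuilder purely from its type name:
object sb = ReflectionUtils.CreateInstanceFromString("System.Text.StringBuilder");

// Constructor arguments pass through to Activator.CreateInstance():
object sb2 = ReflectionUtils.CreateInstanceFromString("System.Text.StringBuilder", "Hello");

// A type that can't be resolved returns null rather than throwing:
object nothing = ReflectionUtils.CreateInstanceFromString("No.Such.Type");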
Here's what the factory function looks like in JsonSerializationUtils:

/// <summary>
/// Dynamically creates an instance of JSON.NET
/// </summary>
/// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
/// <returns>Dynamic JsonSerializer instance</returns>
public static dynamic CreateJsonNet(bool throwExceptions = true)
{
    if (JsonNet != null)
        return JsonNet;

    lock (SyncLock)
    {
        if (JsonNet != null)
            return JsonNet;

        // Try to create an instance
        dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
        if (json == null)
        {
            try
            {
                var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
            }
            catch (Exception ex)
            {
                if (throwExceptions)
                    throw;
                return null;
            }
        }
        if (json == null)
            return null;

        json.ReferenceLoopHandling = (dynamic)ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

        // Enums as strings in JSON
        dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
        json.Converters.Add(enumConverter);

        JsonNet = json;
    }
    return JsonNet;
}

This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and, when that fails, tries to load the assembly and then re-tries the creation.

Once the instance is loaded, some configuration occurs on it. Specifically I set the ReferenceLoopHandling option so the serializer doesn't blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an enum value to be set. There's no really easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums, which means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me - the function looks up the type and then uses Type.InvokeMember() to read the static property.

Another feature I need is to have enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter, adding it to the Converters collection.

As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

Doing the actual JSON Conversion

Finally I need to actually do my JSON conversions. For the utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file. Here's what the file serialization looks like:

/// <summary>
/// Serializes an object instance to a JSON file.
/// </summary> /// <param name="value">the value to serialize</param> /// <param name="fileName">Full path to the file to write out with JSON.</param> /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param> /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param> /// <returns>true or false</returns> public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false) { dynamic writer = null; FileStream fs = null; try { Type type = value.GetType(); var json = CreateJsonNet(throwExceptions); if (json == null) return false; fs = new FileStream(fileName, FileMode.Create); var sw = new StreamWriter(fs, Encoding.UTF8); writer = Activator.CreateInstance(JsonTextWriterType, sw); if (formatJsonOutput) writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented"); writer.QuoteChar = '"'; json.Serialize(writer, value); } catch (Exception ex) { Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message); if (throwExceptions) throw; return false; } finally { if (writer != null) writer.Close(); if (fs != null) fs.Close(); } return true; }You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable.For full circle operation here's the DeserializeFromFile() version:/// <summary> /// Deserializes an object from file and returns a reference. /// </summary> /// <param name="fileName">name of the file to serialize to</param> /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param> /// <param name="binarySerialization">determines whether we use Xml or Binary serialization</param> /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param> /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns> public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false) { dynamic json = CreateJsonNet(throwExceptions); if (json == null) return null; object result = null; dynamic reader = null; FileStream fs = null; try { fs = new FileStream(fileName, FileMode.Open, FileAccess.Read); var sr = new StreamReader(fs, Encoding.UTF8); reader = Activator.CreateInstance(JsonTextReaderType, sr); result = json.Deserialize(reader, objectType); reader.Close(); } catch (Exception ex) { Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message); if (throwExceptions) throw; return null; } finally { if (reader != null) reader.Close(); if (fs != null) fs.Close(); } return result; }This code is a little more compact since there are no prettifying options to set. 
Here the JsonTextReader is created dynamically and passed to the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive.

These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

Using the JsonSerializationUtils Wrapper

The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code needed to make this work now. The whole provider looks like this:

/// <summary>
/// Reads and writes configuration settings in .NET config files and
/// sections. Allows reading and writing to default or external files
/// and specification of the configuration section that settings are
/// applied to.
/// </summary>
public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
    where TAppConfiguration : AppConfiguration, new()
{
    /// <summary>
    /// Optional - the configuration file where configuration settings are
    /// stored. If not specified uses the default Configuration Manager
    /// and its default store.
    /// </summary>
    public string JsonConfigurationFile
    {
        get { return _JsonConfigurationFile; }
        set { _JsonConfigurationFile = value; }
    }
    private string _JsonConfigurationFile = string.Empty;

    public override bool Read(AppConfiguration config)
    {
        var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
        if (newConfig == null)
        {
            if (Write(config))
                return true;
            return false;
        }

        DecryptFields(newConfig);
        DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
        return true;
    }

    /// <summary>
    /// Reads configuration data into a new instance of the configuration type.
    /// </summary>
    /// <typeparam name="TAppConfig"></typeparam>
    /// <returns></returns>
    public override TAppConfig Read<TAppConfig>()
    {
        var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
        if (result != null)
            DecryptFields(result);
        return result;
    }

    /// <summary>
    /// Write configuration to the JsonConfigurationFile location
    /// </summary>
    /// <param name="config"></param>
    /// <returns></returns>
    public override bool Write(AppConfiguration config)
    {
        EncryptFields(config);
        bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);
        // Have to decrypt again to make sure the properties are readable afterwards
        DecryptFields(config);
        return result;
    }
}

This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component - simply implementing 3 methods will do in most cases.

Note this code doesn't have any dynamic dependencies - all of that is abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class.

Already there are several other places in other tools where I use JSON serialization where this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
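One helper used in the factory function earlier but not shown here is ReflectionUtils.GetStaticProperty(). Based on the Type.InvokeMember() approach described above, a minimal sketch might look like the following - treat it as an illustration, not the actual West Wind implementation:

public static object GetStaticProperty(string typeName, string memberName)
{
    Type type = GetTypeFromName(typeName);
    if (type == null)
        return null;

    try
    {
        // GetField | GetProperty covers both static properties and enum
        // values (enum members are static fields on the enum type).
        return type.InvokeMember(memberName,
            BindingFlags.Static | BindingFlags.Public |
            BindingFlags.GetField | BindingFlags.GetProperty,
            null, null, null);
    }
    catch
    {
        return null;
    }
}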
And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

Performance?

Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In some informal testing it looks like the native code is nearly twice as fast as the dynamic code, with most of the slowness attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. This will change depending on the size of the objects serialized, though - the larger the object, the more processing time is spent inside the actual dynamically activated components and the less difference there will be. Dynamic code is always slower, but how much it really affects your application primarily depends on how frequently the dynamic code is called in relation to the non-dynamic code executing. In most situations where dynamic code is used 'to get the process rolling', as I do here, the overhead is small enough not to matter.

All that being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy performance. For the configuration component speed is not that important, because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

Dynamic Loading - Worth it?

Dynamic loading is not something you need to reach for often, but on occasion it makes sense. There's a price to be paid in added code complexity and a performance hit that depends on how frequently the dynamic code is accessed. But for some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial: it avoids having to ship extra files, adding dependencies and loading down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems like a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful option in your toolset…

© Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#

    Read the article

  • Banshee does not start (Ubuntu 12.04)

    - by balg
I have installed Banshee, but during the installation something went wrong and now I am experiencing this:

balg@scorpion:~$ banshee
Unhandled Exception: System.TypeLoadException: Could not load type 'Banshee.ServiceStack.DBusServiceManager' from assembly 'Banshee.Services, Version=2.4.0.0, Culture=neutral, PublicKeyToken=null'.
[ERROR] FATAL UNHANDLED EXCEPTION: System.TypeLoadException: Could not load type 'Banshee.ServiceStack.DBusServiceManager' from assembly 'Banshee.Services, Version=2.4.0.0, Culture=neutral, PublicKeyToken=null'.

I have tried to remove and purge Banshee, delete the config files and then reinstall it, but it didn't help. Can anyone help me? Thanks, balg

    Read the article

  • Dual Monitor (Monitor and TV)

    - by umpirsky
I connected a TV to my computer and am trying to set up dual displays. Whatever resolution I choose for my second display (TV), I get a message like this:

The selected configuration for displays could not be applied - required virtual size does not fit available size: requested=(2704, 1050), minimum=(320, 200), maximum=(1680, 1680)

How can I fix this? Also, while I was experimenting the system deadlocked; I restarted, and after boot the monitor just turns off once the system is up. I booted in recovery mode and after several retries fixed it somehow - I don't know how, probably by changing the display config from the display manager. Now I found an xorg.conf.new file in my home dir:

Section "ServerLayout"
    Identifier "X.org Configured"
    Screen 0 "Screen0" 0 0
    Screen 1 "Screen1" RightOf "Screen0"
    Screen 2 "Screen2" RightOf "Screen1"
    InputDevice "Mouse0" "CorePointer"
    InputDevice "Keyboard0" "CoreKeyboard"
EndSection

Section "Files"
    ModulePath "/usr/lib/xorg/modules"
    FontPath "/usr/share/fonts/X11/misc"
    FontPath "/usr/share/fonts/X11/cyrillic"
    FontPath "/usr/share/fonts/X11/100dpi/:unscaled"
    FontPath "/usr/share/fonts/X11/75dpi/:unscaled"
    FontPath "/usr/share/fonts/X11/Type1"
    FontPath "/usr/share/fonts/X11/100dpi"
    FontPath "/usr/share/fonts/X11/75dpi"
    FontPath "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType"
    FontPath "built-ins"
EndSection

Section "Module"
    Load "extmod"
    Load "dbe"
    Load "glx"
    Load "dri"
    Load "dri2"
    Load "record"
EndSection

Section "InputDevice"
    Identifier "Keyboard0"
    Driver "kbd"
EndSection

Section "InputDevice"
    Identifier "Mouse0"
    Driver "mouse"
    Option "Protocol" "auto"
    Option "Device" "/dev/input/mice"
    Option "ZAxisMapping" "4 5 6 7"
EndSection

Section "Monitor"
    Identifier "Monitor0"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
EndSection

Section "Monitor"
    Identifier "Monitor1"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
EndSection

Section "Monitor"
    Identifier "Monitor2"
    VendorName "Monitor Vendor"
    ModelName "Monitor Model"
EndSection

Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz", <percent>: "<f>%"
    ### [arg]: arg optional
    #Option "NoAccel" # [<bool>]
    #Option "SWcursor" # [<bool>]
    #Option "Dac6Bit" # [<bool>]
    #Option "Dac8Bit" # [<bool>]
    #Option "BusType" # [<str>]
    #Option "CPPIOMode" # [<bool>]
    #Option "CPusecTimeout" # <i>
    #Option "AGPMode" # <i>
    #Option "AGPFastWrite" # [<bool>]
    #Option "AGPSize" # <i>
    #Option "GARTSize" # <i>
    #Option "RingSize" # <i>
    #Option "BufferSize" # <i>
    #Option "EnableDepthMoves" # [<bool>]
    #Option "EnablePageFlip" # [<bool>]
    #Option "NoBackBuffer" # [<bool>]
    #Option "DMAForXv" # [<bool>]
    #Option "FBTexPercent" # <i>
    #Option "DepthBits" # <i>
    #Option "PCIAPERSize" # <i>
    #Option "AccelDFS" # [<bool>]
    #Option "IgnoreEDID" # [<bool>]
    #Option "CustomEDID" # [<str>]
    #Option "DisplayPriority" # [<str>]
    #Option "PanelSize" # [<str>]
    #Option "ForceMinDotClock" # <freq>
    #Option "ColorTiling" # [<bool>]
    #Option "VideoKey" # <i>
    #Option "RageTheatreCrystal" # <i>
    #Option "RageTheatreTunerPort" # <i>
    #Option "RageTheatreCompositePort" # <i>
    #Option "RageTheatreSVideoPort" # <i>
    #Option "TunerType" # <i>
    #Option "RageTheatreMicrocPath" # <str>
    #Option "RageTheatreMicrocType" # <str>
    #Option "ScalerWidth" # <i>
    #Option "RenderAccel" # [<bool>]
    #Option "SubPixelOrder" # [<str>]
    #Option "ClockGating" # [<bool>]
    #Option "VGAAccess" # [<bool>]
    #Option "ReverseDDC" # [<bool>]
    #Option "LVDSProbePLL" # [<bool>]
    #Option "AccelMethod" # <str>
    #Option "DRI" # [<bool>]
    #Option "ConnectorTable" # <str>
    #Option "DefaultConnectorTable" # [<bool>]
    #Option "DefaultTMDSPLL" # [<bool>]
    #Option "TVDACLoadDetect" # [<bool>]
    #Option "ForceTVOut" # [<bool>]
    #Option "TVStandard" # <str>
    #Option "IgnoreLidStatus" # [<bool>]
    #Option "DefaultTVDACAdj" # [<bool>]
    #Option "Int10" # [<bool>]
    #Option "EXAVSync" # [<bool>]
    #Option "ATOMTVOut" # [<bool>]
    #Option "R4xxATOM" # [<bool>]
    #Option "ForceLowPowerMode" # [<bool>]
    #Option "DynamicPM" # [<bool>]
    #Option "NewPLL" # [<bool>]
    #Option "ZaphodHeads" # <str>
    Identifier "Card0"
    Driver "radeon"
    BusID "PCI:2:0:0"
EndSection

Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz", <percent>: "<f>%"
    ### [arg]: arg optional
    #Option "ShadowFB" # [<bool>]
    #Option "Rotate" # <str>
    #Option "fbdev" # <str>
    #Option "debug" # [<bool>]
    Identifier "Card1"
    Driver "fbdev"
    BusID "PCI:2:0:0"
EndSection

Section "Device"
    ### Available Driver options are:-
    ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
    ### <string>: "String", <freq>: "<f> Hz/kHz/MHz", <percent>: "<f>%"
    ### [arg]: arg optional
    #Option "ShadowFB" # [<bool>]
    #Option "DefaultRefresh" # [<bool>]
    #Option "ModeSetClearScreen" # [<bool>]
    Identifier "Card2"
    Driver "vesa"
    BusID "PCI:2:0:0"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "Card0"
    Monitor "Monitor0"
    SubSection "Display"
        Viewport 0 0
        Depth 1
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 4
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 8
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 15
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 16
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 24
    EndSubSection
EndSection

Section "Screen"
    Identifier "Screen1"
    Device "Card1"
    Monitor "Monitor1"
    SubSection "Display"
        Viewport 0 0
        Depth 1
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 4
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 8
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 15
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 16
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 24
    EndSubSection
EndSection

Section "Screen"
    Identifier "Screen2"
    Device "Card2"
    Monitor "Monitor2"
    SubSection "Display"
        Viewport 0 0
        Depth 1
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 4
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 8
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 15
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 16
    EndSubSection
    SubSection "Display"
        Viewport 0 0
        Depth 24
    EndSubSection
EndSection

Can I delete it? The second display (TV) only works when I check the Mirror displays option.

    Read the article

  • How to store Role Based Access rights in web application?

    - by JonH
Currently working on a web-based CRM-type system that deals with various modules such as Companies, Contacts, Projects, Sub Projects, etc. - a typical CRM-type system (ASP.NET Web Forms, C#, SQL Server backend). We plan to implement role-based security so that basically a user can have one or more roles. Roles would be broken down first by module type, such as:

- Company
- Contact

and then by the actions for that module. For instance, each module would end up with a table such as this:

Role1 example:

Module    Create   Edit         Delete   View
Company   Yes      Owner Only   No       Yes
Contact   Yes      Yes          Yes      Yes

In the above case Role1 has two module types (Company and Contact). For Company, the person assigned to this role can create companies, can view companies, can only edit records he/she created, and cannot delete. For this same role, for the Contact module this user can create contacts, edit contacts, delete contacts, and view contacts (full rights basically).

I am wondering: is it best, upon coming into the system, to session the user's roles with something like:

List<Role> roles;

where the Role class would have some sort of List<Module> modules; (which can contain Company, Contact, etc.)? Something to the effect of:

class Role {
    string name;
    string desc;
    List<Module> modules;
}

And the module action class would have a set of actions (Create, Edit, Delete, etc.) for each module:

class ModuleActions {
    List<Action> actions;
}

And the action has a value of whether the user can perform the right:

class Action {
    string right;
}

Just a rough idea - I know the action could be an enum, and the ModuleAction can probably be eliminated with a List<x, y>. My main question is what would be the best way to store this information in this type of application. Should I store it in the user's Session state (I have a session class where I manage things related to the user)? I generally load this during the initial loading of the application (global.asax), so I can simply tack onto this session. Or should this be loaded at the page load event of each module (page load of Company, etc.)? I eventually need to be able to hide/unhide various buttons/divs based on the user's role, and that is what got me thinking to load this via session. Any examples or pointers would be great.
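For illustration, one self-contained way to model the structure sketched above in C# - the enum members and class shapes are assumptions extrapolated from the description, not an established API:

using System.Collections.Generic;
using System.Linq;

public enum ModuleType { Company, Contact, Project, SubProject }
public enum ActionType { Create, Edit, Delete, View }
public enum Permission { Yes, OwnerOnly, No }

public class ModulePermissions
{
    public ModuleType Module { get; set; }
    public Dictionary<ActionType, Permission> Actions { get; set; }
}

public class Role
{
    public string Name { get; set; }
    public string Desc { get; set; }
    public List<ModulePermissions> Modules { get; set; }

    // Deny by default: anything not explicitly granted is Permission.No.
    public Permission Check(ModuleType module, ActionType action)
    {
        var m = Modules.FirstOrDefault(x => x.Module == module);
        if (m == null)
            return Permission.No;
        Permission p;
        if (!m.Actions.TryGetValue(action, out p))
            return Permission.No;
        return p;
    }
}

// e.g. hide the Delete button unless some role of the user grants it:
// deleteButton.Visible = roles.Any(r => r.Check(ModuleType.Contact, ActionType.Delete) == Permission.Yes);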

    Read the article

  • Titanium SDK gives a "file too short" error

    - by Dananjaya
I'm using Ubuntu 11.04 and recently installed Appcelerator Titanium Studio version 1.7. When I load up a demo project and run it, I get an error like this:

Couldn't load file:/home/dananjaya/.titanium/runtime/linux/1.1.0/libkhost.so, error: /home/dananjaya/.titanium/runtime/linux/1.1.0/libwebkittitanium-1.0.so.2: file too short

Am I missing some dependencies here, or is it a bug in the application? Thanks in advance.

    Read the article

  • Please allow us to cancel loading of log in SQL Server Management Studio Log Viewer

    - by simonsabin
The log viewer in Management Studio is really neat; however, if you have large log files or are accessing a remote server over a slow connection, it can take a long time to load the log records. Generally you only need the last x records, so you don’t need to load all of them. It would be great to have a cancel button that allows us to cancel the loading of the log records in SQL Server Management Studio. As an aside, one of the best features in SQL 2005 was the ability to cancel your connection attempt...(read more)

    Read the article

  • Loading and drawing materials using Lib3ds

    - by Dfowj
Hey all, I'm currently using Lib3ds to load models into my C++/OpenGL project. So far I've been following the model loading tutorial found here. The tutorial gives a good example of how to draw the vertices and normals using VBOs, but so far I've been lost as to how to do the same thing with materials. Could I get an explanation/example of how to both load and draw the materials of my meshes using Lib3ds and OpenGL?

    Read the article

  • Insight into GNU/Linux boot process

Musing with GNU/Linux: "In this document we will take a look at the different aspects of the boot process. Such as the BIOS which is the first code which runs, the boot loaders that can load different operating systems, pass arguments to the kernel, load it from different sources like a hard drive, a flash, and network & finally the kernel itself."

    Read the article

  • Error while installation of CHMSee

    - by Anshuman Chakraborty
I have recently migrated from Windows to Ubuntu. My current locale shows the output below:

cha@COMPUTER:~$ locale
LANG=en_IN
LANGUAGE=en_IN:en
LC_CTYPE="en_IN"
LC_NUMERIC="en_IN"
LC_TIME="en_IN"
LC_COLLATE="en_IN"
LC_MONETARY="en_IN"
LC_MESSAGES="en_IN"
LC_PAPER="en_IN"
LC_NAME="en_IN"
LC_ADDRESS="en_IN"
LC_TELEPHONE="en_IN"
LC_MEASUREMENT="en_IN"
LC_IDENTIFICATION="en_IN"
LC_ALL=

When I try to install CHMSee (or any other application) using the Ubuntu Software Center, I get the error below (the perl/locale warning block repeats several times; one instance is shown):

installArchives() failed:
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Selecting previously unselected package libchm1.
(Reading database ... 207053 files and directories currently installed.)
Unpacking libchm1 (from .../libchm1_2%3a0.40a-1_i386.deb) ...
Selecting previously unselected package libjavascriptcoregtk-1.0-0.
Unpacking libjavascriptcoregtk-1.0-0 (from .../libjavascriptcoregtk-1.0-0_1.8.0-0ubuntu2_i386.deb) ...
Selecting previously unselected package libwebkitgtk-1.0-common.
Unpacking libwebkitgtk-1.0-common (from .../libwebkitgtk-1.0-common_1.8.0-0ubuntu2_all.deb) ...
Selecting previously unselected package libwebkitgtk-1.0-0.
Unpacking libwebkitgtk-1.0-0 (from .../libwebkitgtk-1.0-0_1.8.0-0ubuntu2_i386.deb) ...
Selecting previously unselected package chmsee.
Unpacking chmsee (from .../chmsee_1.3.0-2ubuntu2_i386.deb) ...
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
Processing triggers for desktop-file-utils ...
Processing triggers for gnome-menus ...
Processing triggers for hicolor-icon-theme ...
Processing triggers for man-db ...
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Setting up qmail (1.06-4) ...
The hostname -f command returned: $1
Your system needs to have a fully qualified domain name (fqdn) in order to install the var-qmail packages.
Installation aborted.
dpkg: error processing qmail (--configure): subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of qmail-run:
qmail-run depends on qmail (>= 1.06-2.1); however: Package qmail is not configured yet.
dpkg: error processing qmail-run (--configure): dependency problems - leaving unconfigured
Setting up libchm1 (2:0.40a-1) ...
No apport report written because the error message indicates its a followup error from a previous failure.
Setting up libjavascriptcoregtk-1.0-0 (1.8.0-0ubuntu2) ...
Setting up libwebkitgtk-1.0-common (1.8.0-0ubuntu2) ...
Setting up libwebkitgtk-1.0-0 (1.8.0-0ubuntu2) ...
Setting up chmsee (1.3.0-2ubuntu2) ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Errors were encountered while processing: qmail qmail-run
Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)
Setting up qmail (1.06-4) ...
The hostname -f command returned: $1
Your system needs to have a fully qualified domain name (fqdn) in order to install the var-qmail packages.
Installation aborted.
dpkg: error processing qmail (--configure): subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of qmail-run:
qmail-run depends on qmail (>= 1.06-2.1); however: Package qmail is not configured yet.
dpkg: error processing qmail-run (--configure): dependency problems - leaving unconfigured

Can someone please help me resolve this issue? An elaborate answer would be most appreciated, since I am very new to this. Thanks, Anshuman Chakraborty

    Read the article

< Previous Page | 207 208 209 210 211 212 213 214 215 216 217 218  | Next Page >