Search Results

Search found 2779 results on 112 pages for 'yield keyword'.


  • Network Access: I can't access 192.168.1.101 from 192.168.1.102.

    - by takpar
    Hi, I'm running Ubuntu 10.04 on my PC with IP 192.168.1.101. Everything works fine locally; e.g. my web server is running and I can see http://localhost/ or http://192.168.1.101 properly. But the problem is that I cannot reach my PC from my laptop at 192.168.1.102: from the laptop, http://192.168.1.101 gives "Connection timed out" in the browser, and trying to telnet on any port leads to:

    telnet: Unable to connect to remote host: Connection timed out

    The laptop is running a fresh install of Ubuntu as well, and no firewall has been set up on either computer. PS: Both computers can ping each other fine. The router is a Cisco Linksys wireless ADSL modem. Currently, I can connect without problems from 192.168.1.101 to the FTP server on the Windows machine at 192.168.1.102. These are the commands run on my PC, 192.168.1.101:

    ifconfig:

    adp@adp-desktop:~$ ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:26:18:e1:8e:cf
              inet addr:192.168.1.101  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe70::226:18ff:fee1:8ecf/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:1831935 errors:0 dropped:0 overruns:0 frame:0
              TX packets:1493786 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1996855925 (1.9 GB)  TX bytes:215288238 (215.2 MB)
              Interrupt:27 Base address:0xa000

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:951742 errors:0 dropped:0 overruns:0 frame:0
              TX packets:951742 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:494351095 (494.3 MB)  TX bytes:494351095 (494.3 MB)

    vmnet1    Link encap:Ethernet  HWaddr 00:50:46:c0:00:01
              inet addr:192.168.91.1  Bcast:192.168.91.255  Mask:255.255.255.0
              inet6 addr: fe70::250:56ff:fec0:1/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    vmnet8    Link encap:Ethernet  HWaddr 00:50:46:c0:00:08
              inet addr:192.168.156.1  Bcast:192.168.156.255  Mask:255.255.255.0
              inet6 addr: fe70::250:56ff:fec0:8/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Port 80 is bound to 0.0.0.0:

    adp@adp-desktop:~$ netstat -ln | grep 'LISTEN '
    tcp   0   0 127.0.0.1:52815   0.0.0.0:*   LISTEN
    tcp   0   0 0.0.0.0:4559      0.0.0.0:*   LISTEN
    tcp   0   0 0.0.0.0:80        0.0.0.0:*   LISTEN
    tcp   0   0 0.0.0.0:4369      0.0.0.0:*   LISTEN
    tcp   0   0 127.0.0.1:7634    0.0.0.0:*   LISTEN
    tcp   0   0 0.0.0.0:21        0.0.0.0:*   LISTEN
    tcp   0   0 0.0.0.0:5269      0.0.0.0:*   LISTEN
    tcp   0   0 127.0.0.1:631     0.0.0.0:*   LISTEN
    tcp   0   0 0.0.0.0:25        0.0.0.0:*   LISTEN
    tcp   0   0 0.0.0.0:5280      0.0.0.0:*   LISTEN
    tcp   0   0 127.0.1.1:7777    0.0.0.0:*   LISTEN
    tcp   0   0 0.0.0.0:33601     0.0.0.0:*   LISTEN
    tcp   0   0 0.0.0.0:5222      0.0.0.0:*   LISTEN
    tcp   0   0 127.0.0.1:3306    0.0.0.0:*   LISTEN
    tcp6  0   0 :::139            :::*        LISTEN
    tcp6  0   0 ::1:631           :::*        LISTEN
    tcp6  0   0 :::445            :::*        LISTEN

    /etc/hosts.deny is empty (comments only):

    adp@adp-desktop:~$ cat /etc/hosts.deny
    # /etc/hosts.deny: list of hosts that are _not_ allowed to access the system.
    #                  See the manual pages hosts_access(5) and hosts_options(5).
    #
    # Example:    ALL: some.host.name, .some.domain
    #             ALL EXCEPT in.fingerd: other.host.name, .other.domain
    #
    # If you're going to protect the portmapper use the name "portmap" for the
    # daemon name. Remember that you can only use the keyword "ALL" and IP
    # addresses (NOT host or domain names) for the portmapper, as well as for
    # rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8)
    # for further information.
    #
    # The PARANOID wildcard matches any host whose name does not match its
    # address.
    #
    # You may wish to enable this to ensure any programs that don't
    # validate looked up hostnames still leave understandable logs. In past
    # versions of Debian this has been the default.
    # ALL: PARANOID

    netstat -l:

    adp@adp-desktop:~$ netstat -l
    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address             Foreign Address   State
    tcp   0      0      localhost:52815           *:*               LISTEN
    tcp   0      0      *:hylafax                 *:*               LISTEN
    tcp   0      0      *:www                     *:*               LISTEN
    tcp   0      0      *:4369                    *:*               LISTEN
    tcp   0      0      localhost:7634            *:*               LISTEN
    tcp   0      0      *:ftp                     *:*               LISTEN
    tcp   0      0      *:xmpp-server             *:*               LISTEN
    tcp   0      0      localhost:ipp             *:*               LISTEN
    tcp   0      0      *:smtp                    *:*               LISTEN
    tcp   0      0      *:5280                    *:*               LISTEN
    tcp   0      0      adp-desktop:7777          *:*               LISTEN
    tcp   0      0      *:33601                   *:*               LISTEN
    tcp   0      0      *:xmpp-client             *:*               LISTEN
    tcp   0      0      localhost:mysql           *:*               LISTEN
    tcp6  0      0      [::]:netbios-ssn          [::]:*            LISTEN
    tcp6  0      0      localhost:ipp             [::]:*            LISTEN
    tcp6  0      0      [::]:microsoft-ds         [::]:*            LISTEN
    udp   0      0      *:bootpc                  *:*
    udp   0      0      *:mdns                    *:*
    udp   0      0      *:47467                   *:*
    udp   0      0      192.168.1.10:netbios-ns   *:*
    udp   0      0      192.168.91.1:netbios-ns   *:*
    udp   0      0      192.168.156.:netbios-ns   *:*
    udp   0      0      *:netbios-ns              *:*
    udp   0      0      192.168.1.1:netbios-dgm   *:*
    udp   0      0      192.168.91.:netbios-dgm   *:*
    udp   0      0      192.168.156:netbios-dgm   *:*
    udp   0      0      *:netbios-dgm             *:*
    raw   0      0      *:icmp                    *:*               7

    netstat -rn:

    adp@adp-desktop:~$ netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
    192.168.91.0    0.0.0.0         255.255.255.0   U         0 0          0 vmnet1
    192.168.156.0   0.0.0.0         255.255.255.0   U         0 0          0 vmnet8
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
    0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth0

    Commands on the laptop, 192.168.1.102:

    ifconfig:

    root@fakeuser-laptop:~# ifconfig
    eth0      Link encap:Ethernet  HWaddr 00:1c:33:a2:31:15
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:21

    eth1      Link encap:Ethernet  HWaddr 00:2d:d9:3e:1f:6c
              inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe70::21d:d9ff:fe3e:1f6c/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:5681 errors:0 dropped:0 overruns:0 frame:10313
              TX packets:6717 errors:6 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:4055251 (4.0 MB)  TX bytes:779308 (779.3 KB)
              Interrupt:18

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:206 errors:0 dropped:0 overruns:0 frame:0
              TX packets:206 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:15172 (15.1 KB)  TX bytes:15172 (15.1 KB)

    netstat -rn:

    root@fakeuser-laptop:~# netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth1
    169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth1
    0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth1
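    A quick way to narrow this down (an editorial sketch, not from the original post; standard tools, nothing specific to this setup). Since ping works but every TCP port times out, the usual suspects are a packet filter on the PC or a policy on the access point filtering wired/wireless TCP traffic:

    adp@adp-desktop:~$ sudo iptables -L -n -v
    # any chain with a default DROP policy or REJECT rules would explain the timeouts

    root@fakeuser-laptop:~# nc -zv 192.168.1.101 80
    # a plain TCP probe of the web server port; repeat for 21, 22, etc.

    adp@adp-desktop:~$ sudo tcpdump -i eth0 host 192.168.1.102 and tcp port 80
    # if the laptop's SYNs never appear here, the traffic is blocked before it
    # reaches the PC (e.g. isolation on the Linksys); if they arrive unanswered,
    # the PC itself is dropping them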

    Read the article

  • New ways for backup, recovery and restore of Essbase Block Storage databases – part 2 by Bernhard Kinkel

    - by Alexandra Georgescu
    After discussing new options for general backup and restore in the first part of this article, this second part deals with the also rather new feature of Transaction Logging and Replay, released in version 11.1, which enhances the existing restore options. Tip: Transaction logging and replay cannot be used for aggregate storage databases. Please refer to the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1). Even if backups are done on a regular, frequent basis, subsequent data entries, loads or calculations would not be reflected in a restored database. Activating Transaction Logging fills that gap and gives you an option to capture these post-backup transactions for later replay. The original article includes a table of the transaction types that can be logged once Transaction Logging is enabled - data loads and calculations among them.

    To activate it, add corresponding statements to the Essbase.cfg file using the TRANSACTIONLOGLOCATION command. The complete syntax reads:

    TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    where appname and dbname are optional parameters that, in combination with the ENABLE or DISABLE keyword, let you switch Transaction Logging on for certain applications or databases, or exclude them from being logged. If only an appname is specified, the setting applies to all databases in that particular application. If neither appname nor dbname is defined, all applications and databases are covered. LOGLOCATION specifies the directory to which the log is written, e.g. D:\temp\trlogs; this directory must already exist or needs to be created before log information can be written to it. NATIVE is a reserved keyword that shouldn't be changed. The following example first enables logging at a general level for all databases in the application Sample, followed by a disabling statement at a more granular level for only the Basic database in application Sample, hence excluding it from being logged:

    TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
    TRANSACTIONLOGLOCATION Sample Basic Hyperion/trlog/Sample NATIVE DISABLE

    Tip: After applying changes to the configuration file, you must restart the Essbase server in order to initialize the settings.

    Any replay of logged transactions required after restoring a database can be done only by administrators. The following options are available. In Administration Services, select Replay Transactions from the right-click menu on the database. Here you can choose to replay transactions logged after the last replay request was originally executed or after the time of the last restored backup (whichever occurred later), or transactions logged after a specified time. Alternatively, you can replay transactions selectively based on a range of sequence IDs, which can be accessed using Display Transactions on the right-click menu on the database. These sequence IDs (0, 1, 2 … 7 in the article's screenshot) are assigned to each logged transaction, indicating the order in which the transactions were performed. This helps to ensure the integrity of the restored data after a replay, as the replay of transactions is enforced in the same order in which they were originally performed. So, for example, a calculation originally run after a data load cannot be replayed before the data load has been replayed first. After a transaction is replayed, you can replay only transactions with a greater sequence ID. For example, replaying the transaction with sequence ID 4 includes all preceding transactions, and afterwards you can only replay transactions with a sequence ID of 5 or greater.

    Tip: After restoring a database from a backup you should always completely replay all logged transactions that were executed after the backup, before executing new transactions.

    Not only the transaction information itself needs to be logged and stored in the specified directory as described above. During transaction logging, Essbase also creates archive copies of data load and rules files in the following default directory:

    ARBORPATH/app/appname/dbname/Replay

    These files are then used during the replay of a logged transaction. By default, Essbase archives only data load and rules files for client data loads; to specify the type of data to archive when logging transactions, you can use the TRANSACTIONLOGDATALOADARCHIVE command as an additional entry in the Essbase.cfg file. The syntax for the statement is:

    TRANSACTIONLOGDATALOADARCHIVE [appname [dbname]] [OPTION]

    The [appname [dbname]] argument behaves as described above for TRANSACTIONLOGLOCATION. The OPTION argument controls which file copies are archived: CLIENT (the default, covering client data loads), SERVER, SERVER_CLIENT, or NONE - choose according to where data-loading transactions usually originate. Selecting the NONE option prevents Essbase from saving these files, and the data load cannot be replayed; in this case you must first manually load the data before you can replay the remaining transactions.
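    To see both commands side by side, here is a minimal Essbase.cfg sketch (an editorial illustration: the Sample application name and log path come from the article, the SERVER_CLIENT choice is an assumption - verify the exact spellings against the Backup and Recovery Guide for your release):

    TRANSACTIONLOGLOCATION Sample Hyperion/trlog/Sample NATIVE ENABLE
    TRANSACTIONLOGDATALOADARCHIVE Sample SERVER_CLIENT

    With these two entries, all databases in Sample are logged, and both server- and client-initiated data load files are archived for replay; remember that the Essbase server has to be restarted for the entries to take effect.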
    Tip: If you use server or SQL data and the data and rules files are not archived in the Replay directory (for example, because you did not use the SERVER or SERVER_CLIENT option), Essbase replays the data that is actually in the data source at the moment of the replay, which may or may not be the data that was originally loaded.

    You can find more detailed information in the following documents: the Oracle Hyperion Enterprise Performance Management System Backup and Recovery Guide (rel. 11.1.2.1), the Oracle Essbase Online Documentation (rel. 11.1.2.1), and the Enterprise Performance Management System Documentation (including previous releases), or on the Oracle Technology Network. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page), or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Disclaimer: All methods and features mentioned in this article must be considered and tested carefully in relation to your environment, processes and requirements.
    As guidance, please always refer to the available software documentation. This article does not recommend or advise any explicit action or change, hence the author cannot be held responsible for any consequences arising from the use or implementation of these features.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #034

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorites and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2007

    UDF – User Defined Function to Strip HTML – Parse HTML – No Regular Expression
    The UDF used in the blog does a fantastic task – it scans entire HTML text and removes all the HTML tags, keeping only the valid text data. This is one of the most commonly requested tasks developers face every day.

    De-fragmentation of Database at Operating System to Improve Performance
    The operating system skips the MDF file while defragmenting the filesystem. That is absolutely fine, and it has no impact on performance. Read the entire blog post for my conversation with our network engineers.

    Delay Function – WAITFOR clause – Delay Execution of Commands
    How do you delay execution of commands in SQL Server – of course by using the WAITFOR keyword. In this blog post, I explain it with the help of a T-SQL script.

    Find Length of Text Field
    To measure the length of TEXT fields, the function is DATALENGTH(textfield). LEN will not work for text fields. As of SQL Server 2005, developers should migrate all text fields to VARCHAR(MAX), as that is the way forward.

    Retrieve Current Date Time in SQL Server CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}
    There are three ways to retrieve the current datetime in SQL Server: CURRENT_TIMESTAMP, GETDATE(), {fn NOW()}.

    Explanation and Comparison of NULLIF and ISNULL
    An interesting observation: NULLIF returns NULL if its comparison is successful, whereas ISNULL returns a non-NULL value if its comparison is successful. In a way they are opposites of each other. Here is my question to you – how do you create an infinite loop using NULLIF and ISNULL, if that is even possible? (A short illustration of these two functions follows below.)
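    (Editorial illustration, not from the original posts - a few lines of T-SQL showing the NULLIF/ISNULL and DATALENGTH/LEN behavior referenced above:)

    -- NULLIF returns NULL when its two arguments are equal;
    -- ISNULL returns the second argument when the first is NULL.
    SELECT NULLIF(1, 1)     AS nullif_equal,      -- NULL
           NULLIF(1, 2)     AS nullif_different,  -- 1
           ISNULL(NULL, 99) AS isnull_fallback;   -- 99

    -- DATALENGTH works on TEXT columns where LEN does not;
    -- note that LEN also ignores trailing spaces.
    SELECT DATALENGTH(CAST('hello ' AS VARCHAR(10))) AS datalength_bytes, -- 6
           LEN(CAST('hello ' AS VARCHAR(10)))        AS len_chars;        -- 5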
    2008

    Introduction to SERVERPROPERTY and example
    SERVERPROPERTY is a very interesting system function. It returns many system values. I use it frequently to get different server values like Server Collation, Server Name, etc.

    SQL Server Start Time
    We can use a DMV to find the start time of SQL Server in 2008 and later versions. In this blog you can see how to do it.

    Find Current Identity of Table
    Many times we need to know the current identity value of a column. I have found one of my developers using the aggregate function MAX() to find the current identity. However, I prefer the following DBCC command to figure it out.

    Create Check Constraint on Column
    Sometimes we just need to create a simple constraint on a table, but I have noticed that developers do many different things to make a table column follow rules rather than just creating a constraint. Constraints are a very useful concept, and every SQL developer should pay good attention to this subject.

    2009

    List Schema Name and Table Name for Database
    This is one of the blog posts where I simply present a script – the kind of post I still love to read and write.

    Clustered Index on Separate Drive From Table Location
    A table devoid of a primary key index is called a heap; its data is not arranged in any particular order, which gives rise to issues that adversely affect performance. Data must be stored in some kind of order. If we put a clustered index on the table, the order will be enforced by that index and the data will be stored in that particular order.

    Understanding Table Hints with Examples
    Hints are options and strong suggestions specified for enforcement by the SQL Server query processor on DML statements. Hints override any execution plan the query optimizer might select for a query.

    2010

    Data Pages in Buffer Pool – Data Stored in Memory Cache
    One of my articles from an earlier year, which I still read many times and point developers to. It is clear from the resultset that when more than one index is used, the data pages related to each of the indexes are stored in the memory cache separately.

    TRANSACTION, DML and Schema Locks
    Can you create a situation where you can see a schema lock? This is a very simple question; however, during interviews I noticed over 50 candidates failed to come up with the scenario. In this blog post, I demonstrate a situation where we can see a schema lock in the database.

    2011

    Solution – Puzzle – Statistics are not updated but are Created Once
    In this example I created the following situation: create a table, insert 1,000 records, check the statistics, then insert ten times more records (10,000) and check the statistics again – they will NOT be updated, even though Auto Update Statistics and Auto Create Statistics are TRUE for the database. I then asked two things: 1) Why is this happening? 2) How do we fix this issue?

    Selecting Domain from Email Address
    This is a straight-to-script blog post where I explain how to select only the domain name from an entire email address.

    Solution – Generating Zero Without using Any Numbers in T-SQL
    How do you get the digit zero without using any digits? This is indeed a very interesting question, and the answer is even more interesting. Try to come up with the answer in the next 10 minutes, and if you can't, read this post for the solution.

    2012

    Simple Explanation and Puzzle with SOUNDEX Function and DIFFERENCE Function
    In simple words, SOUNDEX converts an alphanumeric string to a four-character code to find similar-sounding words or names. The DIFFERENCE function returns an integer value: the number of characters in the two SOUNDEX values that are the same.

    Read Only Files and SQL Server Management Studio (SSMS)
    I came across a very interesting feature in SSMS related to "read only" files. I believe it is a little-known feature as well, so I decided to write a blog about it.

    Identifying Column Data Type of uniqueidentifier without Querying System Tables
    How do I know whether a table has a uniqueidentifier column, and what its value is, without using any DMVs or system catalogs? The only information you know is the table name, and you are allowed to return any kind of error if the table does not have a uniqueidentifier column. Read the blog post to find the answer.

    Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue
    An interesting question – "When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user-created objects, but when my colleague connects to the server, he can explore the database and see all the user-created tables and other objects. Can you help me fix it?"

    Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video
    Here is an interesting short 60-second video on how to import a CSV file into a database.
    ColumnStore Index – Batch Mode vs Row Mode
    Here is the logic behind when a columnstore index uses batch mode and when it uses row mode. A batch typically represents about 1,000 rows of data. Batch mode processing also uses algorithms that are optimized for multicore CPUs and increased memory throughput.

    Follow up – Usage of $rowguid and $IDENTITY
    This is an excellent follow-up to my earlier blog post explaining where to use $rowguid and $IDENTITY. If you do not know the difference between them, this is a blog post with a script example.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New Features and Changes in OIM11gR2

    - by Abhishek Tripathi
    WEB CONSOLES in OIM 11gR2
    In 11gR1 there were three admin web consoles: the Self Service Console, the Administration Console, and the Advanced Administration Console. In OIM 11gR2, the Self Service and Administration Consoles have been combined into what is now called the Identity Self Service Console (http://host:port/identity). This console has three areas: managing your own profile (My Profile), managing requests such as requesting application instances and approving requests (Requests), and general administration tasks such as creating/managing users, roles, organizations, attestation, etc. (Administration).
    In OIM 11gR2 a new console, http://host:port/sysadmin, has been added for administrators. It includes some of the Design Console functions in addition to general administration features.

    Application Instances
    An application instance is the object that is provisioned to a user. Application instances are published to the catalog, and users can request them via the catalog.
    ·         In OIM 11gR2, resources and entitlements are bundled in an application instance, which users can select and request from the catalog.
    ·         An application instance is a combination of an IT resource and a resource object (RO). You cannot create another application instance with the same RO and IT resource if that combination already exists for some other application instance; at least one of the two must have a different name.
    ·         If you want the users of a particular organization to be able to request an application instance through the catalog, the application instance must be attached to that organization.
    ·         An application instance can be associated with multiple organizations.
    ·         An application instance can also have entitlements associated with it. Entitlements can include roles/groups or responsibilities.
    ·         Application instances are published to the catalog by a scheduled task, the "Catalog Synchronization Job".
    ·         An application instance can have a child/parent relationship, where the child application instance inherits all attributes of the parent application instance.

    An important point to remember about application instances: if you delete an application instance in OIM 11gR2 and create a new one with the same name, OIM will not allow it. It throws an error saying an application instance already exists with the same resource object and IT resource, because some references to the deleted application instance have not been removed. To completely delete an application instance from OIM, you must:
    1. Delete the application instance from the sysadmin console.
    2. Run the App Instance Post Delete Processing Job in Revoke/Delete mode.
    3. Run the Catalog Synchronization Job.
    Once done, you should be able to create a new application instance with the previous RO and IT resource names.

    Catalog
    The catalog allows users to request roles, application instances, and entitlements in an application. Roles, application instances, and entitlements that can be requested via the catalog are called catalog items. Each catalog item carries detailed information (attributes):
    ·         Category – each catalog item is associated with one and only one category. Catalog administrators can provide a value for each catalog item.
    ·         Tags – search keywords that help in searching the catalog. When users search the catalog, the search is performed against the tags. To define a tag, go to Catalog, search for the resource, select it, and update the tag field with a custom search keyword.
    Tags are of three types:
    a) Auto-generated tags: the catalog synchronization process auto-tags the catalog item using the item type, item name, and item display name.
    b) User-defined tags: additional keywords entered by the catalog administrator.
    c) Arbitrary tags: if a metadata field has been marked as searchable, it also becomes part of the tags.

    Sandbox
    The sandbox is a new feature introduced in OIM 11gR2. It serves as a temporary development environment for UI customizations, so that they don't affect other users before they are published and linked to the existing OIM UI. All UI customizations should be done inside a sandbox; this ensures that your changes/modifications don't affect other users until you have finalized them and the customization is complete. Once UI customization is completed, the sandbox must be published for the customizations to be merged into the existing UI and become available to other users. Creating and activating a sandbox is mandatory for customizing the UI: without an active sandbox, OIM does not allow you to customize any page.
    a)      Before you perform any such activity in OIM (like creating/modifying forms, custom attributes, creating application instances, or adding roles/attributes to the catalog) you must create a sandbox and activate it.
    b)      You can create multiple sandboxes in OIM, but only one sandbox can be active at any given time.
    c)      You can export/import a sandbox to move changes from one environment to another.

    Creating a Sandbox
    To create a sandbox, log in to Identity Self Service (/identity) or System Administration (/sysadmin), click the "Sandboxes" link at the top right, and then click Create Sandbox.

    Publishing a Sandbox
    Before you publish a sandbox, it is recommended to back up MDS. Use Enterprise Manager (/em) to back up MDS with the following steps:
    1.      Log in to Oracle Enterprise Manager as the administrator.
    2.      On the landing page, click oracle.iam.console.identity.self-service.ear(V2.0).
    3.      From the Application Deployment menu at the top, select MDS configuration.
    4.      Under Export, select the "Export metadata documents to an archive on the machine where this web browser is running" option, and then click Export. All the metadata is exported in a ZIP file.

    Creating Password Policies through the Admin Console
    In 11gR1 and previous versions, password policies could be created and applied via the OIM Design Console only. From OIM 11gR2 onwards, password policies can also be created and assigned using the admin console.

    Read the article

  • Understanding C# async / await (1) Compilation

    - by Dixin
    Now the async / await keywords are in C#. Like async workflows and the ! operators in F#, this new C# feature provides great convenience. There are many nice documents about how to use async / await in specific scenarios, like using async methods in ASP.NET 4.5 and in ASP.NET MVC 4, etc. In this article we will look at the real code working behind the syntactic sugar. According to MSDN:

    The async modifier indicates that the method, lambda expression, or anonymous method that it modifies is asynchronous.

    Since lambda expressions / anonymous methods are compiled to normal methods, we will focus on normal async methods.

    Preparation

    First of all, some helper methods need to be set up:

    internal class HelperMethods
    {
        internal static int Method(int arg0, int arg1)
        {
            // Do some IO.
            WebClient client = new WebClient();
            Enumerable.Repeat("http://weblogs.asp.net/dixin", 10)
                      .Select(client.DownloadString).ToArray();
            int result = arg0 + arg1;
            return result;
        }

        internal static Task<int> MethodTask(int arg0, int arg1)
        {
            Task<int> task = new Task<int>(() => Method(arg0, arg1));
            task.Start(); // Hot task (started task) should always be returned.
            return task;
        }

        internal static void Before() { }

        internal static void Continuation1(int arg) { }

        internal static void Continuation2(int arg) { }
    }

    Here Method() is a long-running method doing some IO. MethodTask() wraps it into a Task and returns that Task. Nothing special here.

    Await something in an async method

    Since MethodTask() returns a Task, let's try to await it:

    internal class AsyncMethods
    {
        internal static async Task<int> MethodAsync(int arg0, int arg1)
        {
            int result = await HelperMethods.MethodTask(arg0, arg1);
            return result;
        }
    }

    Because we used await in the method, the async modifier must be put on the method. Now we have our first async method. Following the naming convention, it is named MethodAsync. Of course an async method can itself be awaited, so here is a CallMethodAsync() to call MethodAsync():

    internal class AsyncMethods
    {
        internal static async Task<int> CallMethodAsync(int arg0, int arg1)
        {
            int result = await MethodAsync(arg0, arg1);
            return result;
        }
    }

    After compilation, MethodAsync() and CallMethodAsync() become the same logic.
    This is the code of MethodAsync():

    internal class CompiledAsyncMethods
    {
        [DebuggerStepThrough]
        [AsyncStateMachine(typeof(MethodAsyncStateMachine))] // async
        internal static /*async*/ Task<int> MethodAsync(int arg0, int arg1)
        {
            MethodAsyncStateMachine methodAsyncStateMachine = new MethodAsyncStateMachine()
            {
                Arg0 = arg0, Arg1 = arg1, Builder = AsyncTaskMethodBuilder<int>.Create(), State = -1
            };
            methodAsyncStateMachine.Builder.Start(ref methodAsyncStateMachine);
            return methodAsyncStateMachine.Builder.Task;
        }
    }

    It just creates and starts a state machine, MethodAsyncStateMachine:

    [CompilerGenerated]
    [StructLayout(LayoutKind.Auto)]
    internal struct MethodAsyncStateMachine : IAsyncStateMachine
    {
        public int State;
        public AsyncTaskMethodBuilder<int> Builder;
        public int Arg0;
        public int Arg1;
        public int Result;
        private TaskAwaiter<int> awaiter;

        void IAsyncStateMachine.MoveNext()
        {
            try
            {
                if (this.State != 0)
                {
                    this.awaiter = HelperMethods.MethodTask(this.Arg0, this.Arg1).GetAwaiter();
                    if (!this.awaiter.IsCompleted)
                    {
                        this.State = 0;
                        this.Builder.AwaitUnsafeOnCompleted(ref this.awaiter, ref this);
                        return;
                    }
                }
                else
                {
                    this.State = -1;
                }
                this.Result = this.awaiter.GetResult();
            }
            catch (Exception exception)
            {
                this.State = -2;
                this.Builder.SetException(exception);
                return;
            }
            this.State = -2;
            this.Builder.SetResult(this.Result);
        }

        [DebuggerHidden]
        void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine param0)
        {
            this.Builder.SetStateMachine(param0);
        }
    }

    The generated code has been refactored so that it is readable and can be compiled. Several things can be observed here:

    - The async modifier is gone, which shows that, unlike other modifiers (e.g. static), there is no IL/CLR-level "async" construct. It becomes an AsyncStateMachineAttribute. This is similar to the compilation of extension methods.
    - The generated state machine is very similar to the state machine of the C# yield syntactic sugar.
    - The local variables (arg0, arg1, result) are compiled to fields of the state machine.
    - The real code (await HelperMethods.MethodTask(arg0, arg1)) is compiled into MoveNext(): HelperMethods.MethodTask(this.Arg0, this.Arg1).GetAwaiter().

    CallMethodAsync() will create and start its own state machine, CallMethodAsyncStateMachine:

    internal class CompiledAsyncMethods
    {
        [DebuggerStepThrough]
        [AsyncStateMachine(typeof(CallMethodAsyncStateMachine))] // async
        internal static /*async*/ Task<int> CallMethodAsync(int arg0, int arg1)
        {
            CallMethodAsyncStateMachine callMethodAsyncStateMachine = new CallMethodAsyncStateMachine()
            {
                Arg0 = arg0, Arg1 = arg1, Builder = AsyncTaskMethodBuilder<int>.Create(), State = -1
            };
            callMethodAsyncStateMachine.Builder.Start(ref callMethodAsyncStateMachine);
            return callMethodAsyncStateMachine.Builder.Task;
        }
    }

    CallMethodAsyncStateMachine has the same logic as MethodAsyncStateMachine above. The details of the state machine will be discussed soon. For now it is clear that:

    - async / await is a C# language-level syntactic sugar.
    - There is no difference between awaiting an async method and awaiting a normal method. As long as a method returns Task, it is awaitable.
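    For example, here is a minimal sketch (editorial, not from the original article) of awaiting a method that is not marked async but simply returns a Task<int>:

    internal static class AwaitAnything
    {
        // Not marked async, but returns Task<int>, so it is awaitable.
        internal static Task<int> PlainTaskMethod(int arg0, int arg1)
        {
            return Task.Run(() => arg0 + arg1);
        }

        internal static async Task<int> CallerAsync()
        {
            // Awaiting the plain method looks exactly like awaiting an async method:
            // the compiler only needs GetAwaiter() on the returned Task.
            int result = await PlainTaskMethod(1, 2);
            return result;
        }
    }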
    State machine and continuation

    To demonstrate more details of the state machine, a more complex method is created:

    internal class AsyncMethods
    {
        internal static async Task<int> MultiCallMethodAsync(int arg0, int arg1, int arg2, int arg3)
        {
            HelperMethods.Before();
            int resultOfAwait1 = await MethodAsync(arg0, arg1);
            HelperMethods.Continuation1(resultOfAwait1);
            int resultOfAwait2 = await MethodAsync(arg2, arg3);
            HelperMethods.Continuation2(resultOfAwait2);
            int resultToReturn = resultOfAwait1 + resultOfAwait2;
            return resultToReturn;
        }
    }

    In this method:

    - There are multiple awaits.
    - There is code before the awaits, and continuation code after each await.

    After compilation, this multi-await method takes the same shape as the single-await methods above:

    internal class CompiledAsyncMethods
    {
        [DebuggerStepThrough]
        [AsyncStateMachine(typeof(MultiCallMethodAsyncStateMachine))] // async
        internal static /*async*/ Task<int> MultiCallMethodAsync(int arg0, int arg1, int arg2, int arg3)
        {
            MultiCallMethodAsyncStateMachine multiCallMethodAsyncStateMachine = new MultiCallMethodAsyncStateMachine()
            {
                Arg0 = arg0, Arg1 = arg1, Arg2 = arg2, Arg3 = arg3,
                Builder = AsyncTaskMethodBuilder<int>.Create(), State = -1
            };
            multiCallMethodAsyncStateMachine.Builder.Start(ref multiCallMethodAsyncStateMachine);
            return multiCallMethodAsyncStateMachine.Builder.Task;
        }
    }

    It creates and starts one single state machine, MultiCallMethodAsyncStateMachine:

    [CompilerGenerated]
    [StructLayout(LayoutKind.Auto)]
    internal struct MultiCallMethodAsyncStateMachine : IAsyncStateMachine
    {
        public int State;
        public AsyncTaskMethodBuilder<int> Builder;
        public int Arg0;
        public int Arg1;
        public int Arg2;
        public int Arg3;
        public int ResultOfAwait1;
        public int ResultOfAwait2;
        public int ResultToReturn;
        private TaskAwaiter<int> awaiter;

        void IAsyncStateMachine.MoveNext()
        {
            try
            {
                switch (this.State)
                {
                    case -1:
                        HelperMethods.Before();
                        this.awaiter = AsyncMethods.MethodAsync(this.Arg0, this.Arg1).GetAwaiter();
                        if (!this.awaiter.IsCompleted)
                        {
                            this.State = 0;
                            this.Builder.AwaitUnsafeOnCompleted(ref this.awaiter, ref this);
                        }
                        break;
                    case 0:
                        this.ResultOfAwait1 = this.awaiter.GetResult();
                        HelperMethods.Continuation1(this.ResultOfAwait1);
                        this.awaiter = AsyncMethods.MethodAsync(this.Arg2, this.Arg3).GetAwaiter();
                        if (!this.awaiter.IsCompleted)
                        {
                            this.State = 1;
                            this.Builder.AwaitUnsafeOnCompleted(ref this.awaiter, ref this);
                        }
                        break;
                    case 1:
                        this.ResultOfAwait2 = this.awaiter.GetResult();
                        HelperMethods.Continuation2(this.ResultOfAwait2);
                        this.ResultToReturn = this.ResultOfAwait1 + this.ResultOfAwait2;
                        this.State = -2;
                        this.Builder.SetResult(this.ResultToReturn);
                        break;
                }
            }
            catch (Exception exception)
            {
                this.State = -2;
                this.Builder.SetException(exception);
            }
        }

        [DebuggerHidden]
        void IAsyncStateMachine.SetStateMachine(IAsyncStateMachine stateMachine)
        {
            this.Builder.SetStateMachine(stateMachine);
        }
    }

    Once again, the above state machine code has already been refactored, but it still contains a lot of plumbing. If we keep only the core logic, the state machine can become very simple:

    [CompilerGenerated]
    [StructLayout(LayoutKind.Auto)]
    internal struct MultiCallMethodAsyncStateMachine : IAsyncStateMachine
    {
        // State:
        //  -1: Begin
        //   0: 1st await is done
        //   1: 2nd await is done
        //   ...
        //  -2: End
        public int State;
        public TaskCompletionSource<int> ResultToReturn; // int resultToReturn ...
        public int Arg0; // int arg0
        public int Arg1; // int arg1
        public int Arg2; // int arg2
        public int Arg3; // int arg3
        public int ResultOfAwait1; // int resultOfAwait1 ...
        public int ResultOfAwait2; // int resultOfAwait2 ...
        private Task<int> currentTaskToAwait;

        /// <summary>
        /// Moves the state machine to its next state.
        /// </summary>
        public void MoveNext() // IAsyncStateMachine member.
        {
            try
            {
                switch (this.State)
                {
                    // Original code is split by "await"s into "case"s:
                    // case -1:
                    //     HelperMethods.Before();
                    //     MethodAsync(arg0, arg1);
                    // case 0:
                    //     int resultOfAwait1 = await ...
                    //     HelperMethods.Continuation1(resultOfAwait1);
                    //     MethodAsync(arg2, arg3);
                    // case 1:
                    //     int resultOfAwait2 = await ...
                    //     HelperMethods.Continuation2(resultOfAwait2);
                    //     int resultToReturn = resultOfAwait1 + resultOfAwait2;
                    //     return resultToReturn;
                    case -1: // -1 is begin.
                        HelperMethods.Before(); // Code before 1st await.
                        this.currentTaskToAwait = AsyncMethods.MethodAsync(this.Arg0, this.Arg1); // 1st task to await.
                        // When this.currentTaskToAwait is done, run this.MoveNext() and go to case 0.
                        this.State = 0;
                        MultiCallMethodAsyncStateMachine that1 = this; // Cannot use "this" in a lambda, so create a local copy.
                        this.currentTaskToAwait.ContinueWith(_ => that1.MoveNext());
                        break;
                    case 0: // Now 1st await is done.
                        this.ResultOfAwait1 = this.currentTaskToAwait.Result; // Get 1st await's result.
                        HelperMethods.Continuation1(this.ResultOfAwait1); // Code after 1st await and before 2nd await.
                        this.currentTaskToAwait = AsyncMethods.MethodAsync(this.Arg2, this.Arg3); // 2nd task to await.
                        // When this.currentTaskToAwait is done, run this.MoveNext() and go to case 1.
                        this.State = 1;
                        MultiCallMethodAsyncStateMachine that2 = this;
                        this.currentTaskToAwait.ContinueWith(_ => that2.MoveNext());
                        break;
                    case 1: // Now 2nd await is done.
                        this.ResultOfAwait2 = this.currentTaskToAwait.Result; // Get 2nd await's result.
                        HelperMethods.Continuation2(this.ResultOfAwait2); // Code after 2nd await.
                        int resultToReturn = this.ResultOfAwait1 + this.ResultOfAwait2; // Code after 2nd await.
                        // End with resultToReturn.
                        this.State = -2; // -2 is end.
                        this.ResultToReturn.SetResult(resultToReturn);
                        break;
                }
            }
            catch (Exception exception)
            {
                // End with exception.
                this.State = -2; // -2 is end.
                this.ResultToReturn.SetException(exception);
            }
        }

        /// <summary>
        /// Configures the state machine with a heap-allocated replica.
        /// </summary>
        /// <param name="stateMachine">The heap-allocated replica.</param>
        [DebuggerHidden]
        public void SetStateMachine(IAsyncStateMachine stateMachine) // IAsyncStateMachine member.
        {
            // No core logic.
        }
    }

    Only Task and TaskCompletionSource are involved in this version. And MultiCallMethodAsync() can be simplified to:

    [DebuggerStepThrough]
    [AsyncStateMachine(typeof(MultiCallMethodAsyncStateMachine))] // async
    internal static /*async*/ Task<int> MultiCallMethodAsync(int arg0, int arg1, int arg2, int arg3)
    {
        MultiCallMethodAsyncStateMachine multiCallMethodAsyncStateMachine = new MultiCallMethodAsyncStateMachine()
        {
            Arg0 = arg0, Arg1 = arg1, Arg2 = arg2, Arg3 = arg3,
            ResultToReturn = new TaskCompletionSource<int>(),
            // -1: Begin
            //  0: 1st await is done
            //  1: 2nd await is done
            //  ...
            // -2: End
            State = -1
        };
        multiCallMethodAsyncStateMachine.MoveNext(); // The original code has been moved into this method.
        return multiCallMethodAsyncStateMachine.ResultToReturn.Task;
    }

    Now the whole state machine becomes very clean - it is all about callbacks:

    - The original code is split into pieces by the "await"s, and each piece is put into a "case" in the state machine. Here the two awaits split the code into three pieces, so there are three "case"s.
    - The pieces are chained by callbacks, done by Builder.AwaitUnsafeOnCompleted(callback), or by currentTaskToAwait.ContinueWith(callback) in the simplified code.
    - A previous piece ends with a Task (which is to be awaited); when that Task is done, it calls back the next piece.
    - The state machine's state works with the "case"s to ensure the code pieces execute one after another.

    Callback

    If we focus on the callbacks, the simplification can go even further – the entire state machine can be completely purged, and we can just keep the code inside MoveNext(). Now MultiCallMethodAsync() becomes:

    internal static Task<int> MultiCallMethodAsync(int arg0, int arg1, int arg2, int arg3)
    {
        TaskCompletionSource<int> taskCompletionSource = new TaskCompletionSource<int>();
        try
        {
            // Original code begins.
            HelperMethods.Before();
            MethodAsync(arg0, arg1).ContinueWith(await1 =>
            {
                int resultOfAwait1 = await1.Result;
                HelperMethods.Continuation1(resultOfAwait1);
                MethodAsync(arg2, arg3).ContinueWith(await2 =>
                {
                    int resultOfAwait2 = await2.Result;
                    HelperMethods.Continuation2(resultOfAwait2);
                    int resultToReturn = resultOfAwait1 + resultOfAwait2;
                    // Original code ends.
                    taskCompletionSource.SetResult(resultToReturn);
                });
            });
        }
        catch (Exception exception)
        {
            taskCompletionSource.SetException(exception);
        }
        return taskCompletionSource.Task;
    }

    Please compare this with the original async / await code:

    HelperMethods.Before();
    int resultOfAwait1 = await MethodAsync(arg0, arg1);
    HelperMethods.Continuation1(resultOfAwait1);
    int resultOfAwait2 = await MethodAsync(arg2, arg3);
    HelperMethods.Continuation2(resultOfAwait2);
    int resultToReturn = resultOfAwait1 + resultOfAwait2;
    return resultToReturn;

    Yes, that is the magic of C# async / await: await does not mean wait. In an await expression, a Task object is returned immediately, so execution is not blocked; the continuation code is compiled as that Task's callback code. When that task is done, the continuation code executes. Please note that many details inside the real state machine are omitted here for simplicity, like context capturing, etc. If you want the detailed picture, please do check out the source code of AsyncTaskMethodBuilder and TaskAwaiter.

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been leading the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed, it is hard to find a better definition of these systems. Allow me to summarize the idea. It is:

    - Build a compute platform optimized to run your technologies
    - Develop application-aware, intelligently caching storage components
    - Take an impressively fast network technology interconnecting it with the compute nodes
    - Tune the application to scale with the nodes to yet unseen performance
    - Reduce the amount of data moving via compression
    - Provide all this in a pre-integrated single product with a single-pane management interface

    All these ideas have been around in IT for quite some time. The real Oracle advantage is adding the last one, putting them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run them like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

    I. It is the SPARC SuperCluster T4-4. That is, as compute nodes it includes SPARC T4-4 servers that we have learned to appreciate and respect for their features:

    - The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword: demigod.
    - While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar for single-threaded performance too - a humble 5x improvement over their ancestors.
    - Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
    - Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
    - A PCI controller right on the chip, for customers who need I/O performance.
    - Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated and independent of each other within a physical server.

    II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important?

    - They provide the DB backend storage for the Oracle Databases running on the SPARC SuperCluster; that is what they are built and tuned for: DB performance.
    - These storage cells are SQL-aware. That is, when a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it hands the SQL query to the storage node too. The storage cell software speaks SQL; it can pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.
    - They don't only pre-filter, they also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer.
    - They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
    - Of course one can't simply rely on disks for performance; Flash storage is included for caching.

    III. The low-latency, high-speed backbone network InfiniBand, interconnecting all the members, with:

    - Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and really low latency.
    - RDMA, Remote Direct Memory Access. This technology allows the DB nodes to do exactly that: remotely, directly place SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
    - You can also run IP over InfiniBand if you please - that's the way the compute nodes communicate with each other.

    IV. It includes a general-purpose storage too: the ZFS Storage Appliance (ZFSSA), a unified storage providing both NAS and SAN access, with the following features:

    - NFS over RDMA over InfiniBand. Nothing is faster, network-filesystem-wise.
    - All the ZFS features on board: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
    - Storage heads in an HA-cluster configuration, providing availability of the data.
    - DTrace Live Analytics in a web-based administration UI.
    - It acts as general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting and cloning data for them.

    There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked its interior through. As for external scalability: you can start with a half-rack or full-rack SPARC SuperCluster and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

    What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can run other software environments, such as SAP, if you please. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines or even Oracle Solaris Zones, and monitor and manage them from a central UI.
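    As an aside, here is a hedged sketch of what carving out such a domain looks like with the Oracle VM for SPARC ldm CLI (illustrative domain name and sizes, not from the original post):

    # create a logical domain, assign CPU threads and memory, then bind and start it
    ldm add-domain appdom1
    ldm add-vcpu 32 appdom1
    ldm add-memory 32G appdom1
    ldm bind-domain appdom1
    ldm start-domain appdom1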
    Here are the key takeaways once again. The SPARC SuperCluster:

    - Is a pre-integrated Engineered System
    - Contains SPARC T4-4 servers with built-in virtualization, cryptography and dynamic threading
    - Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
    - Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
    - Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
    - Can grow from a single half-rack to several full racks
    - Supports the consolidation of hundreds of applications

    To summarize: all these technologies are great by themselves, but the real value is, as in every other Oracle Engineered System, integration. All these technologies are tuned to perform together; together they are way more than the sum of the parts - and a careful and actually very time-consuming integration process is necessary to orchestrate them all for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

    -- charlie

    Read the article

  • Repeated Squaring - Matrix Multiplication using NEWMAT

    - by Dinakar Kulkarni
    I'm trying to use the repeated squaring algorithm (using recursion) to perform matrix exponentiation. I've included header files from the NEWMAT library instead of using arrays. The original matrix has elements in the range (-5,5), all numbers being of type float.

    # include "C:\User\newmat10\newmat.h"
    # include "C:\User\newmat10\newmatio.h"
    # include "C:\User\newmat10\newmatap.h"
    # include <iostream>
    # include <time.h>
    # include <ctime>
    # include <cstdlib>
    # include <iomanip>

    using namespace std;

    //Recursive function
    Matrix repeated_squaring(Matrix A, int exponent, int n)
    {
        A(n,n);
        IdentityMatrix I(n);
        if (exponent == 0) //Matrix raised to zero returns an Identity Matrix
            return I;
        else
        {
            if (exponent%2 == 1) // if exponent is odd
                return (A * repeated_squaring(A*A, (exponent-1)/2, n));
            else //if exponent is even
                return (A * repeated_squaring(A*A, exponent/2, n));
        }
    }

    //Brute Force Multiplication
    Matrix direct_squaring(Matrix B, int k, int no)
    {
        B(no,no);
        Matrix C = B;
        for (int i = 1; i <= k; i++)
            C = B*C;
        return C;
    }

    //----Creating a matrix with elements b/w (-5,5)----
    float unifRandom()
    {
        int a = -5;
        int b = 5;
        float temp = (float)((b-a)*(rand()/RAND_MAX) + a);
        return temp;
    }

    Matrix initialize_mat(Matrix H, int ord)
    {
        H(ord,ord);
        for (int y = 1; y <= ord; y++)
            for (int z = 1; z <= ord; z++)
                H(y,z) = unifRandom();
        return(H);
    }
    //---------------------------------------------------

    void main()
    {
        int exponent, dimension;
        cout<<"Insert exponent:"<<endl;
        cin>>exponent;
        cout<<"Insert dimension:"<<endl;
        cin>>dimension;
        cout<<"The number of rows/columns in the square matrix is: "<<dimension<<endl;
        cout<<"The exponent is: "<<exponent<<endl;

        Matrix A(dimension,dimension), B(dimension,dimension);
        Matrix C(dimension,dimension), D(dimension,dimension);
        B = initialize_mat(A, dimension);
        cout<<"Initial Matrix: "<<endl;
        cout<<setw(5)<<setprecision(2)<<B<<endl;
        //-------------------------------------------------------------------------
        cout<<"Repeated Squaring Result: "<<endl;
        clock_t time_before1 = clock();
        C = repeated_squaring(B, exponent, dimension);
        cout<<setw(5)<<setprecision(2)<<C;
        clock_t time_after1 = clock();
        float diff1 = ((float) time_after1 - (float) time_before1);
        cout << "It took " << diff1/CLOCKS_PER_SEC << " seconds to complete" << endl<<endl;
        //-------------------------------------------------------------------------
        cout<<"Direct Squaring Result:"<<endl;
        clock_t time_before2 = clock();
        D = direct_squaring(B, exponent, dimension);
        cout<<setw(5)<<setprecision(2)<<D;
        clock_t time_after2 = clock();
        float diff2 = ((float) time_after2 - (float) time_before2);
        cout << "It took " << diff2/CLOCKS_PER_SEC << " seconds to complete" << endl<<endl;
    }

    I face the following problems:
    - The random number generator returns only "-5" for every element in the output.
    - The matrix multiplication yields different results with brute-force multiplication and with the repeated squaring algorithm.
    I'm timing the execution of my code to compare the times taken by brute-force multiplication and by repeated squaring. Could someone please figure out what's wrong with the recursion and with the matrix initialization? NOTE: While compiling this program, make sure you've imported the NEWMAT library. Thanks in advance!
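    (Editorial sketch toward the two questions above, not from the original post. Two things stand out: rand()/RAND_MAX is integer division, so it truncates to 0 and every element collapses to a = -5; and the even-exponent branch multiplies by an extra leading A, since A^(2k) = (A*A)^k with no additional factor. Note also that direct_squaring performs k multiplications starting from B, so it returns B^(k+1) rather than B^k, which contributes to the mismatch. A corrected sketch of the two functions:)

    float unifRandom()
    {
        int a = -5;
        int b = 5;
        // Divide in floating point; the original rand()/RAND_MAX is integer
        // division and yields 0 for every call except rand() == RAND_MAX.
        return a + (b - a) * (rand() / (float)RAND_MAX);
    }

    Matrix repeated_squaring(Matrix A, int exponent, int n)
    {
        IdentityMatrix I(n);
        if (exponent == 0)
            return I;
        if (exponent % 2 == 1) // odd: A^e = A * (A^2)^((e-1)/2)
            return A * repeated_squaring(A * A, (exponent - 1) / 2, n);
        return repeated_squaring(A * A, exponent / 2, n); // even: no leading A*
    }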

    Read the article

  • "The usage of semaphores is subtly wrong"

    - by Hoonose
    This past semester I was taking an OS practicum in C, in which the first project involved making a threads package, then writing a multiple producer-consumer program to demonstrate the functionality. However, after getting grading feedback, I lost points for "The usage of semaphores is subtly wrong" and "The program assumes preemption (e.g. uses yield to change control)" (We started with a non-preemptive threads package then added preemption later. Note that the comment and example contradict each other. I believe it doesn't assume either, and would work in both environments). This has been bugging me for a long time - the course staff was kind of overwhelmed, so I couldn't ask them what's wrong with this over the semester. I've spent a long time thinking about this and I can't see the issues. If anyone could take a look and point out the error, or reassure me that there actually isn't a problem, I'd really appreciate it. I believe the syntax should be pretty standard in terms of the thread package functions (minithreads and semaphores), but let me know if anything is confusing. #include <stdio.h> #include <stdlib.h> #include "minithread.h" #include "synch.h" #define BUFFER_SIZE 16 #define MAXCOUNT 100 int buffer[BUFFER_SIZE]; int size, head, tail; int count = 1; int out = 0; int toadd = 0; int toremove = 0; semaphore_t empty; semaphore_t full; semaphore_t count_lock; // Semaphore to keep a lock on the // global variables for maintaining the counts /* Method to handle the working of a student * The ID of a student is the corresponding minithread_id */ int student(int total_burgers) { int n, i; semaphore_P(count_lock); while ((out+toremove) < arg) { n = genintrand(BUFFER_SIZE); n = (n <= total_burgers - (out + toremove)) ? n : total_burgers - (out + toremove); printf("Student %d wants to get %d burgers ...\n", minithread_id(), n); toremove += n; semaphore_V(count_lock); for (i=0; i<n; i++) { semaphore_P(empty); out = buffer[tail]; printf("Student %d is taking burger %d.\n", minithread_id(), out); tail = (tail + 1) % BUFFER_SIZE; size--; toremove--; semaphore_V(full); } semaphore_P(count_lock); } semaphore_V(count_lock); printf("Student %d is done.\n", minithread_id()); return 0; } /* Method to handle the working of a cook * The ID of a cook is the corresponding minithread_id */ int cook(int total_burgers) { int n, i; printf("Creating Cook %d\n",minithread_id()); semaphore_P(count_lock); while ((count+toadd) <= arg) { n = genintrand(BUFFER_SIZE); n = (n <= total_burgers - (count + toadd) + 1) ? 
n : total_burgers - (count + toadd) + 1; printf("Cook %d wants to put %d burgers into the burger stack ...\n", minithread_id(),n); toadd += n; semaphore_V(count_lock); for (i=0; i<n; i++) { semaphore_P(full); printf("Cook %d is putting burger %d into the burger stack.\n", minithread_id(), count); buffer[head] = count++; head = (head + 1) % BUFFER_SIZE; size++; toadd--; semaphore_V(empty); } semaphore_P(count_lock); } semaphore_V(count_lock); printf("Cook %d is done.\n", minithread_id()); return 0; } /* Method to create our multiple producers and consumers * and start their respective threads by fork */ void starter(int* c){ int i; for (i=0;i<c[2];i++){ minithread_fork(cook, c[0]); } for (i=0;i<c[1];i++){ minithread_fork(student, c[0]); } } /* The arguments are passed as command line parameters * argv[1] is the no of students * argv[2] is the no of cooks */ void main(int argc, char *argv[]) { int pass_args[3]; pass_args[0] = MAXCOUNT; pass_args[1] = atoi(argv[1]); pass_args[2] = atoi(argv[2]); size = head = tail = 0; empty = semaphore_create(); semaphore_initialize(empty, 0); full = semaphore_create(); semaphore_initialize(full, BUFFER_SIZE); count_lock = semaphore_create(); semaphore_initialize(count_lock, 1); minithread_system_initialize(starter, pass_args); }
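
    A hedged guess at the graders' remark, offered as a sketch rather than a confirmed diagnosis: the counting semaphores limit how many threads may proceed, but nothing serializes the updates of head, tail, size, count, toadd and toremove once more than one cook or student is inside the inner loop, so two students can both pass semaphore_P(empty) and then race on tail. A small binary semaphore held only around the buffer manipulation, built from the same synch.h primitives the code already uses, closes that window (the `arg` in the while conditions is assumed to be a transcription of total_burgers):

    semaphore_t mutex;                      /* in main: mutex = semaphore_create();     */
                                            /*          semaphore_initialize(mutex, 1); */

    /* student (consumer), inner loop */
    semaphore_P(empty);                     /* wait until at least one burger exists    */
    semaphore_P(mutex);                     /* serialize access to the shared indices   */
    out = buffer[tail];
    tail = (tail + 1) % BUFFER_SIZE;
    size--;
    toremove--;
    semaphore_V(mutex);
    semaphore_V(full);                      /* one more free slot                       */

    /* cook (producer), inner loop */
    semaphore_P(full);                      /* wait for a free slot                     */
    semaphore_P(mutex);
    buffer[head] = count++;
    head = (head + 1) % BUFFER_SIZE;
    size++;
    toadd--;
    semaphore_V(mutex);
    semaphore_V(empty);                     /* one more burger available                */

    Whether this is what the feedback meant is uncertain; with a single cook and a single student the original ordering is safe, which may be why the program appears to work.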

    Read the article

  • CodePlex Daily Summary for Saturday, September 22, 2012

    CodePlex Daily Summary for Saturday, September 22, 2012Popular ReleasesWPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.8: Version: 2.5.0.8 (Milestone 8): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Mark the class DataModel as serializable. InfoMan: Minor improvements. InfoMan: Add unit tests for all modules. Othe...LogicCircuit: LogicCircuit 2.12.9.20: Logic Circuit - is educational software for designing and simulating logic circuits. Intuitive graphical user interface, allows you to create unrestricted circuit hierarchy with multi bit buses, debug circuits behavior with oscilloscope, and navigate running circuits hierarchy. Changes of this versionToolbars on text note dialog are more flexible now. You can select font face, size, color, and background of text you are typing. RAM now can be initialized to one of the following: random va...$linq - A Javascript LINQ library: Version 1.1: Version 1.1 Implemented batch, equiZip, zipLongest, prepend, pad, padWith, toJQuery, pipe, singleOrFallback, indexOf, indexOfElement, lastIndexOf, lastIndexOfElement, scan, prescan, and aggregate operators.Huo Chess: Huo Chess 0.95: The Huo Chess 0.95 version has an improved chessboard analysis function so as to be able to see which squares are the dangerous squares in the chessboard. This allows the computer to understand better when it is threatened. Two editions are included: Huo Chess 0.95 Console Application (57 KB in size) Huo Chess 0.95 Windows Application with GUI (119 KB in size) See http://harmoniaphilosophica.wordpress.com/2011/09/28/how-to-develop-a-chess-program-for-2jszrulazj6wq-23/ for the infamous How...Symphony Framework: Symphony Framework v2.0.0.2: Symphony Framework version 2.0.0.2. General note: If you install Symphony Framework 2.0.0.2 you must also install CodeGen 4.1.10 because a number of templates now utilise new features added to the tool. Added the user token PROJECTNAMESPACE to the “Symphony_Content.tpl” template to ensure that we can correctly reference the collection classes of the selection lists. Also added the ability to create object references to fields defined as having selection windows assigned. This enhancement ...Community xPress MDS: Initial MDS and DQS Models: Initial MDS & DQS ModelsSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.1.2020.421): New features: Disable a specific part of SiteMap to keep the data without displaying them in the CRM application. It simply comments XML part of the sitemap (thanks to rboyers for this feature request) Right click an item and click on "Disable" to disable it Items disabled are greyed and a suffix "- disabled" is added Right click an item and click on "Enable" to enable it Refresh list of web resources in the web resources pickerAJAX Control Toolkit: September 2012 Release: AJAX Control Toolkit Release Notes - September 2012 Release Version 60919September 2012 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). 
Notes: - The current version of the AJAX Control Toolkit is not compatible with ...Lib.Web.Mvc & Yet another developer blog: Lib.Web.Mvc 6.1.0: Lib.Web.Mvc is a library which contains some helper classes for ASP.NET MVC such as strongly typed jqGrid helper, XSL transformation HtmlHelper/ActionResult, FileResult with range request support, custom attributes and more. Release contains: Lib.Web.Mvc.dll with xml documentation file Standalone documentation in chm file and change log Library source code Sample application for strongly typed jqGrid helper is available here. Sample application for XSL transformation HtmlHelper/ActionRe...Sense/Net CMS - Enterprise Content Management: SenseNet 6.1.2 Community Edition: Sense/Net 6.1.2 Community EditionMain new featuresOur current release brings a lot of bugfixes, including the resolution of js/css editing cache issues, xlsx file handling from Office, expense claim demo workspace fixes and much more. Besides fixes 6.1.2 introduces workflow start options and other minor features like a reusable Reject client button for approval scenarios and resource editor enhancements. We have also fixed an issue with our install package to bring you a flawless installation...WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...Python Tools for Visual Studio: 1.5 RC: PTVS 1.5RC Available! We’re pleased to announce the release of Python Tools for Visual Studio 1.5 RC. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including CPython/IronPython, Edit/Intellisense/Debug/Profile, Cloud, HPC, IPython, etc. support. The primary new feature for the 1.5 release is Django including Azure support! The http://www.djangoproject.com is a pop...Launchbar: Lanchbar 4.0.0: This application requires .NET 4.5 which you can find here: www.microsoft.com/visualstudio/downloadsAssaultCube Reloaded: 2.5.4 -: Linux has Ubuntu 11.10 32-bit precompiled binaries and Ubuntu 10.10 64-bit precompiled binaries, but you can compile your own as it also contains the source. If you are using Mac or other operating systems, please wait while we try to package for those OSes. Try to compile it. If it fails, download a virtual machine. The server pack is ready for both Windows and Linux, but you might need to compile your own for Linux (source included) Changelog: New logo Improved airstrike! Reset nukes...Extended WPF Toolkit: Extended WPF Toolkit - 1.7.0: Want an easier way to install the Extended WPF Toolkit?The Extended WPF Toolkit is available on Nuget. What's new in the 1.7.0 Release?New controls Zoombox Pie New features / bug fixes PropertyGrid.ShowTitle property added to allow showing/hiding the PropertyGrid title. Modifications to the PropertyGrid.EditorDefinitions collection will now automatically be applied to the PropertyGrid. 
Modifications to the PropertyGrid.PropertyDefinitions collection will now be reflected automaticaly...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.2: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6 minutes video Sencha Touch 2 example app using JayData: Netflix browser. What's new in JayData 1.2 For detailed release notes check the release notes. JayData core: all async operations now support promises JayDa...????????API for .Net SDK: SDK for .Net ??? Release 4: 2012?9?17??? ?????,???????????????。 ?????Release 3??????,???????,???,??? ??????????????????SDK,????????。 ??,??????? That's all.VidCoder: 1.4.0 Beta: First Beta release! Catches up to HandBrake nightlies with SVN 4937. Added PGS (Blu-ray) subtitle support. Additional framerates available: 30, 50, 59.94, 60 Additional sample rates available: 8, 11.025, 12 and 16 kHz Additional higher bitrates available for audio. Same as Source Constant Framerate available. Added Apple TV 3 preset. Added new Bob deinterlacing option. Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will keep running and continue pro...DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.01: Stabilization release fixed this issues: Links not worked on FF, Chrome and Safari Modified packaging with own manifest file for install and source package. Moved the user Image on the Login to the left side. Moved h2 font-size to 24px. Note : This release Comes w/o source package about we still work an a solution. Who Needs the Visual Studio source files please go to source and download it from there. Known 16 CSS issues that related to the skin.css. All others are DNN default o...Visual Studio Icon Patcher: Version 1.5.1: This fixes a bug in the 1.5 release where it would crash when no language packs were installed for VS2010.New ProjectsCodePlexDeployment: Please ignore, this project is for testing out some features of the WAWS deployment integrationDotNetNuke Social Dashboard: The DotNetNuke Social Dashboard gives DotNetNuke Administrators and insight into the social statistics of their site.EESTEC LC Trieste: .Event Log Mailer: Mails events from Windows' system Event Log which matches rules in configuration. Runs as Windows service and has super simple configurationflx4432: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam facilisis condimentum nulla. Duis sed quam vitae nunc semper facilisis a eget leo.lanWOLf: Send wake-on-lan packets across subnets by utilizing powered-on machines on each subnet.Micro-Apps Framework: Micro-Apps is a revolutionary piece of software that allows you to have multiple programs running from 1 file under the same process!NJ: NJ Language Learning Helper type Config() = // Just Code Example member x.GetAll() = seq{ yield {Name="Admin" Dictionary = OtfPG: To generate reproducible complex passwords from simple pass phrases, allowing the user to 'remember; a simple phrase, rather that a complex password, without evProject92104: as ppProject92105: ppaProject92107: papaPython intellisense Enhancer: For the python code, the intellisense box will show after you input a character, just like c#.QR Code Reader (By Screen Capture): Reads the QR Codes displayed in webpages. 
(You need to capture the code area) Displays the code information.scenariov1706jabbr: helloSharePoint 2013 REST Test Web Part: A simple web part (placed in a farm solution) that helps SharePoint developers to test every HTTP call to the new REST interface of SharePoint 2013.SharePoint Resources Updater (2010 /2013): Project for SharePoint 2010/2013 IT pro's and dev's to adress App_GlobalResources difficulties when developping SharePoint solutions(or maintaining large farms)Word CustomXML data services: Services to add,change and read metadata embedded into a Word document. Metadata are stored in a custom XML file into the Word document. ??????: ge ren xiang mu

    Read the article

  • Graphics glitch when drawing to a Cairo context obtained from a gtk.DrawingArea inside a gtk.Viewport.

    - by user410023
    I am trying to redraw the part of the DrawingArea that is visible in the Viewport in the expose-event handler. However, it seems that I am doing something wrong with the coordinates that are passed to the event handler because there is garbage at the edge of the Viewport when scrolling. Can anyone tell what I am doing wrong? Here is a small example: import pygtk pygtk.require("2.0") import gtk from numpy import array from math import pi class Circle(object): def init(self, position = [0., 0.], radius = 0., edge = (0., 0., 0.), fill = None): self.position = position self.radius = radius self.edge = edge self.fill = fill def draw(self, ctx): rect = array(ctx.clip_extents()) rect[2] -= rect[0] rect[3] -= rect[1] center = rect[2:4] / 2 ctx.arc(center[0], center[1], self.radius, 0., 2. * pi) if self.fill != None: ctx.set_source_rgb(*self.fill) ctx.fill_preserve() ctx.set_source_rgb(*self.edge) ctx.stroke() class Scene(object): class Proxy(object): directory = {} def init(self, target, layers = set()): self.target = target self.layers = layers Scene.Proxy.directory[target] = self def __init__(self, viewport): self.objects = {} self.layers = [set()] self.viewport = viewport self.signals = {} def draw(self, ctx): x = self.viewport.get_hadjustment().value y = self.viewport.get_vadjustment().value ctx.set_source_rgb(1., 1., 1.) ctx.paint() ctx.translate(x, y) for obj in self: obj.draw(ctx) def add(self, item, layer = 0): item = Scene.Proxy(item, layers = set((layer,))) assert(hasattr(item.target, "draw")) assert(isinstance(layer, int)) item.layers.add(layer) while not layer < len(self.layers): self.layers.append(set()) self.layers[layer].add(item) if not item in self.objects: self.objects[item] = set() self.objects[item].add(layer) def remove(self, item, layers = None): item = Scene.Proxy.directory[item] if layers == None: layers = self.objects[item] for layer in layers: layer.remove(item) item.layers.remove(layer) if len(item.layers) == 0: self.objects.remove(item) def __iter__(self): for layer in self.layers: for item in layer: yield item.target class App(object): def init(self): signals = { "canvas_exposed": self.update_canvas, "gtk_main_quit": gtk.main_quit } self.builder = gtk.Builder() self.builder.add_from_file("graphics_glitch.glade") self.window = self.builder.get_object("window") self.viewport = self.builder.get_object("viewport") self.canvas = self.builder.get_object("canvas") self.scene = Scene(self.viewport) signals.update(self.scene.signals) self.builder.connect_signals(signals) self.window.show() def update_canvas(self, widget, event): ctx = self.canvas.window.cairo_create() self.scene.draw(ctx) ctx.clip() if name == "main": app = App() scene = app.scene scene.add(Circle((0., 0.), 10.)) gtk.main() And the Glade file "graphics_glitch.glade": <?xml version="1.0"?> <interface> <requires lib="gtk+" version="2.16"/> <!-- interface-naming-policy project-wide --> <object class="GtkWindow" id="window"> <property name="width_request">200</property> <property name="height_request">200</property> <property name="visible">True</property> <signal name="destroy" handler="gtk_main_quit"/> <child> <object class="GtkScrolledWindow" id="scrolledwindow1"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="hadjustment">h_adjust</property> <property name="vadjustment">v_adjust</property> <property name="hscrollbar_policy">automatic</property> <property name="vscrollbar_policy">automatic</property> <child> <object class="GtkViewport" id="viewport"> 
<property name="visible">True</property> <property name="resize_mode">queue</property> <child> <object class="GtkDrawingArea" id="canvas"> <property name="width_request">640</property> <property name="height_request">480</property> <property name="visible">True</property> <signal name="expose_event" handler="canvas_exposed"/> </object> </child> </object> </child> </object> </child> </object> <object class="GtkAdjustment" id="h_adjust"> <property name="lower">-1000</property> <property name="upper">1000</property> <property name="step_increment">1</property> <property name="page_increment">25</property> <property name="page_size">25</property> </object> <object class="GtkAdjustment" id="v_adjust"> <property name="lower">-1000</property> <property name="upper">1000</property> <property name="step_increment">1</property> <property name="page_increment">25</property> <property name="page_size">25</property> </object> </interface> Thanks! --Dan
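
    A hedged guess at the glitch (the listing's underscores have clearly been eaten by formatting, so init and name == "main" are read here as __init__ and __name__ == "__main__"): the expose handler draws first and only clips afterwards, and Scene.draw() additionally translates by the adjustment values even though the DrawingArea inside the Viewport is already scrolled as a widget, so the edges get painted from stale, doubly-offset coordinates. The usual PyGTK pattern is to clip to the exposed rectangle before drawing and not to re-apply the scroll offsets:

    def update_canvas(self, widget, event):
        ctx = self.canvas.window.cairo_create()
        # restrict every drawing operation to the region GTK asked us to repaint
        ctx.rectangle(event.area.x, event.area.y,
                      event.area.width, event.area.height)
        ctx.clip()
        self.scene.draw(ctx)

    # and in Scene.draw(), skip the translate entirely:
    def draw(self, ctx):
        ctx.set_source_rgb(1., 1., 1.)
        ctx.paint()
        for obj in self:
            obj.draw(ctx)

    This is only a sketch of the likely fix; if the scene really does need world coordinates, translating by the negative adjustment values (and still clipping first) would be the alternative.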

    Read the article

  • 5 Lessons learnt in localization / multi language support in WPF

    - by MarkPearl
    For the last few months I have been secretly working away at the second version of an application that we initially released a few years ago. It’s called MaxCut and it is a free panel/cut optimizer for the woodwork, glass and metal industry. One of the motivations for writing MaxCut was to get an end to end experience in developing an application for general consumption. From the early days of v1 of MaxCut I would get the odd email thanking me for the software and then listing a few suggestions on how to improve it. Two of the most dominant suggestions that we received were… Support for imperial measurements (the original program only supported the metric system) Multi language support (we had someone who volunteered to translate the program into Japanese for us). I am not going to dive into the Imperial to Metric support in todays blog post, but I would like to cover a few brief lessons we learned in adding support for multi-language functionality in the software. I have sectioned them below under different lessons. Lesson 1 – Build multi-language support in from the start So the first lesson I learnt was if you know you are going to do multi language support – build it in from the very beginning! One of the power points of WPF/Silverlight is data binding in XAML and so while it wasn’t to painful to retro fit multi language support into the programing, it was still time consuming and a bit tedious to go through mounds and mounds of views and would have been a minor job to have implemented this while the form was being designed. Lesson 2 – Accommodate for varying word lengths using Grids The next lesson was a little harder to learn and was learnt a bit further down the road in the development cycle. We developed everything in English, assuming that other languages would have similar character length words for equivalent meanings… don’t!. A word that is short in your language may be of varying character lengths in other languages. Some language like Dutch and German allow for concatenation of nouns which has the potential to create really long words. We picked up a few places where our views had been structured incorrectly so that if a word was to long it would get clipped off or cut out. To get around this we began using the WPF grid extensively with column widths that would automatically expand if they needed to. Generally speaking the grid replacement got round this hurdle, and if in future you have a choice between a stack panel or a grid – think twice before going for the easier option… often the grid will be a bit more work to setup, but will be more flexible. Lesson 3 – Separate the separators Our initial run through moving the words to a resource dictionary led us to make what I thought was one potential mistake. If we had a label like the following… “length : “ In the resource dictionary we put it as a single entry. This is fine until you start using a word more than once. For instance in our scenario we used the word “length’ frequently. with different variations of the word with grammar and separators included in the resource we ended up having what I would consider a bloated dictionary. When we removed the separators from the words and put them as their own resources we saw a dramatic reduction in dictionary size… so something that looked like this… “length : “ “length. 
“ “length?” Was reduced to… “length” “:” “?” “.” While this may not seem like a reduction at first glance, consider that the separators “:?.” are used everywhere and suddenly you see a real reduction in bloat. Lesson 4 – Centralize the Language Dictionary This lesson was learnt at the very end of the project after we had already had a release candidate out in the wild. Because our translations would be done on a volunteer basis and remotely, we wanted it to be really simple for someone to translate our program into another language. As a common design practice we had tiered the application so that we had a business logic layer, a ui layer, etc. The problem was in several of these layers we had resource files specific for that layer. What this resulted in was us having multiple resource files that we would need to send to our translators. To add to our problems, some of the wordings were duplicated in different resource files, which would result in additional frustration from our translators as they felt they were duplicating work. Eventually the workaround was to make a separate project in VS2010 with just the language translations. We then exposed the dictionary as public within this project and made it as a reference to the other projects within the solution. This solved out problem as now we had a central dictionary and could remove any duplication's. Lesson 5 – Make a dummy translation file to test that you haven’t missed anything The final lesson learnt about multi language support in WPF was when checking if you had forgotten to translate anything in the inline code, make a test resource file with dummy data. Ideally you want the data for each word to be identical. In our instance we made one which had all the resource key values pointing to a value of test. This allowed us point the language file to our test resource file and very quickly browse through the program and see if we had missed any linking. The alternative to this approach is to have two language files and swap between the two while running the program to make sure that you haven’t missed anything, but the downside of dual language file approach is that it is much a lot harder spotting a mistake if everything is different – almost like playing Where’s Wally / Waldo. It is much easier spotting variance in uniformity – meaning when you put the “test’ keyword for everything, anything that didn’t say “test” stuck out like a sore thumb. So these are my top five lessons learnt on implementing multi language support in WPF. Feel free to make any suggestions in the comments section if you feel maybe something is more important than one of these or if I got it wrong!
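
    A minimal sketch of what Lessons 3 and 4 can look like in code (the project, class and key names below are invented for illustration, not taken from MaxCut): one shared language project exposes the dictionary, the separators are stored exactly once, and captions are composed at the call site instead of being stored as "length : ", "length?" and so on:

    // shared "Language" project, referenced by every other project in the solution
    public static class Lang
    {
        // Strings.resx holds entries such as "Length" = "Length", "Colon" = ":"
        public static string Get(string key)
        {
            return Strings.ResourceManager.GetString(key);
        }

        // separators live in the dictionary once and are composed here
        public static string Label(string key)
        {
            return Get(key) + " " + Get("Colon") + " ";
        }
    }

    // usage from any layer or view model:
    // lengthCaption = Lang.Label("Length");   // -> "Length : "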

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data dictionary language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements. MySQL before the InnoDB Plugin Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example: CREATE TABLE t(a INT); INSERT INTO t VALUES (1),(2),(3); CREATE INDEX a ON t(a); DROP TABLE t; The CREATE INDEX statement would be executed roughly as follows: CREATE TABLE temp(a INT, INDEX(a)); INSERT INTO temp SELECT * FROM t; RENAME TABLE t TO temp2; RENAME TABLE temp TO t; DROP TABLE temp2; You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time. Fast Index Creation in the InnoDB Plugin of MySQL 5.1 MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=0. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine. The ALTER TABLE interface in MySQL 5.6 The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods: handler::check_if_supported_inplace_alter Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed. handler::prepare_inplace_alter_table InnoDB uses this method to set up the data dictionary cache for upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash. 
handler::inplace_alter_table In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the ‘main’ phase that can be executed online (with concurrent writes to the table). handler::commit_inplace_alter_table This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild. The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine. Online Secondary Index Creation It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases: Set up a stub for the index, for logging changes. Scan the table for index records. Sort the index records. Bulk load the index records. Apply the logged changes. Replace the stub with the actual index. Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion. Native ALTER TABLE Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of table. Threads that modify the table will also write the changes to the log. 
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
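
    To make the new interface concrete, this is roughly what the 5.6 syntax described above looks like in SQL (table and column names invented for the example):

    -- build a secondary index online: concurrent DML remains allowed
    ALTER TABLE orders
        ADD INDEX idx_customer (customer_id),
        ALGORITHM=INPLACE, LOCK=NONE;

    -- force the old table-copying behaviour; a column type change needs it anyway
    ALTER TABLE orders
        MODIFY COLUMN note VARCHAR(1000),
        ALGORITHM=COPY, LOCK=SHARED;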

    Read the article

  • Selling Federal Enterprise Architecture (EA)

    - by TedMcLaughlan
    Selling Federal Enterprise Architecture A taxonomy of subject areas, from which to develop a prioritized marketing and communications plan to evangelize EA activities within and among US Federal Government organizations and constituents. Any and all feedback is appreciated, particularly in developing and extending this discussion as a tool for use – more information and details are also available. "Selling" the discipline of Enterprise Architecture (EA) in the Federal Government (particularly in non-DoD agencies) is difficult, notwithstanding the general availability and use of the Federal Enterprise Architecture Framework (FEAF) for some time now, and the relatively mature use of the reference models in the OMB Capital Planning and Investment (CPIC) cycles. EA in the Federal Government also tends to be a very esoteric and hard to decipher conversation – early apologies to those who agree to continue reading this somewhat lengthy article. Alignment to the FEAF and OMB compliance mandates is long underway across the Federal Departments and Agencies (and visible via tools like PortfolioStat and ITDashboard.gov – but there is still a gap between the top-down compliance directives and enablement programs, and the bottom-up awareness and effective use of EA for either IT investment management or actual mission effectiveness. "EA isn't getting deep enough penetration into programs, components, sub-agencies, etc.", verified a panelist at the most recent EA Government Conference in DC. Newer guidance from OMB may be especially difficult to handle, where bottom-up input can't be accurately aligned, analyzed and reported via standardized EA discipline at the Agency level – for example in addressing the new (for FY13) Exhibit 53D "Agency IT Reductions and Reinvestments" and the information required for "Cloud Computing Alternatives Evaluation" (supporting the new Exhibit 53C, "Agency Cloud Computing Portfolio"). Therefore, EA must be "sold" directly to the communities that matter, from a coordinated, proactive messaging perspective that takes BOTH the Program-level value drivers AND the broader Agency mission and IT maturity context into consideration. Selling EA means persuading others to take additional time and possibly assign additional resources, for a mix of direct and indirect benefits – many of which aren't likely to be realized in the short-term. This means there's probably little current, allocated budget to work with; ergo the challenge of trying to sell an "unfunded mandate". Also, the concept of "Enterprise" in large Departments like Homeland Security tends to cross all kinds of organizational boundaries – as Richard Spires recently indicated by commenting that "...organizational boundaries still trump functional similarities. Most people understand what we're trying to do internally, and at a high level they get it. The problem, of course, is when you get down to them and their system and the fact that you're going to be touching them...there's always that fear factor," Spires said. It is quite clear to the Federal IT Investment community that for EA to meet its objective, understandable, relevant value must be measured and reported using a repeatable method – as described by GAO's recent report "Enterprise Architecture Value Needs To Be Measured and Reported". What's not clear is the method or guidance to sell this value. In fact, the current GAO "Framework for Assessing and Improving Enterprise Architecture Management (Version 2.0)", a.k.a. 
the "EAMMF", does not include words like "sell", "persuade", "market", etc., except in reference ("within Core Element 19: Organization business owner and CXO representatives are actively engaged in architecture development") to a brief section in the CIO Council's 2001 "Practical Guide to Federal Enterprise Architecture", entitled "3.3.1. Develop an EA Marketing Strategy and Communications Plan." Furthermore, Core Element 19 of the EAMMF is advised to be applied in "Stage 3: Developing Initial EA Versions". This kind of EA sales campaign truly should start much earlier in the maturity progress, i.e. in Stages 0 or 1. So, what are the understandable, relevant benefits (or value) to sell, that can find an agreeable, participatory audience, and can pave the way towards success of a longer-term, funded set of EA mechanisms that can be methodically measured and reported? Pragmatic benefits from a useful EA that can help overcome the fear of change? And how should they be sold? Following is a brief taxonomy (it's a taxonomy, to help organize SME support) of benefit-related subjects that might make the most sense, in creating the messages and organizing an initial "engagement plan" for evangelizing EA "from within". An EA "Sales Taxonomy" of sorts. We're not boiling the ocean here; the subjects that are included are ones that currently appear to be urgently relevant to the current Federal IT Investment landscape. Note that successful dialogue in these topics is directly usable as input or guidance for actually developing early-stage, "Fit-for-Purpose" (a DoDAF term) Enterprise Architecture artifacts, as prescribed by common methods found in most EA methodologies, including FEAF, TOGAF, DoDAF and our own Oracle Enterprise Architecture Framework (OEAF). The taxonomy below is organized by (1) Target Community, (2) Benefit or Value, and (3) EA Program Facet - as in: "Let's talk to (1: Community Member) about how and why (3: EA Facet) the EA program can help with (2: Benefit/Value)". Once the initial discussion targets and subjects are approved (that can be measured and reported), a "marketing and communications plan" can be created. A working example follows the Taxonomy. Enterprise Architecture Sales Taxonomy Draft, Summary Version 1. Community 1.1. Budgeted Programs or Portfolios Communities of Purpose (CoPR) 1.1.1. Program/System Owners (Senior Execs) Creating or Executing Acquisition Plans 1.1.2. Program/System Owners Facing Strategic Change 1.1.2.1. Mandated 1.1.2.2. Expected/Anticipated 1.1.3. Program Managers - Creating Employee Performance Plans 1.1.4. CO/COTRs – Creating Contractor Performance Plans, or evaluating Value Engineering Change Proposals (VECP) 1.2. Governance & Communications Communities of Practice (CoP) 1.2.1. Policy Owners 1.2.1.1. OCFO 1.2.1.1.1. Budget/Procurement Office 1.2.1.1.2. Strategic Planning 1.2.1.2. OCIO 1.2.1.2.1. IT Management 1.2.1.2.2. IT Operations 1.2.1.2.3. Information Assurance (Cyber Security) 1.2.1.2.4. IT Innovation 1.2.1.3. Information-Sharing/ Process Collaboration (i.e. policies and procedures regarding Partners, Agreements) 1.2.2. Governing IT Council/SME Peers (i.e. an "Architects Council") 1.2.2.1. Enterprise Architects (assumes others exist; also assumes EA participants aren't buried solely within the CIO shop) 1.2.2.2. Domain, Enclave, Segment Architects – i.e. the right affinity group for a "shared services" EA structure (per the EAMMF), which may be classified as Federated, Segmented, Service-Oriented, or Extended 1.2.2.3. 
External Oversight/Constraints 1.2.2.3.1. GAO/OIG & Legal 1.2.2.3.2. Industry Standards 1.2.2.3.3. Official public notification, response 1.2.3. Mission Constituents Participant & Analyst Community of Interest (CoI) 1.2.3.1. Mission Operators/Users 1.2.3.2. Public Constituents 1.2.3.3. Industry Advisory Groups, Stakeholders 1.2.3.4. Media 2. Benefit/Value (Note the actual benefits may not be discretely attributable to EA alone; EA is a very collaborative, cross-cutting discipline.) 2.1. Program Costs – EA enables sound decisions regarding... 2.1.1. Cost Avoidance – a TCO theme 2.1.2. Sequencing – alignment of capability delivery 2.1.3. Budget Instability – a Federal reality 2.2. Investment Capital – EA illuminates new investment resources via... 2.2.1. Value Engineering – contractor-driven cost savings on existing budgets, direct or collateral 2.2.2. Reuse – reuse of investments between programs can result in savings, chargeback models; avoiding duplication 2.2.3. License Refactoring – IT license & support models may not reflect actual or intended usage 2.3. Contextual Knowledge – EA enables informed decisions by revealing... 2.3.1. Common Operating Picture (COP) – i.e. cross-program impacts and synergy, relative to context 2.3.2. Expertise & Skill – who truly should be involved in architectural decisions, both business and IT 2.3.3. Influence – the impact of politics and relationships can be examined 2.3.4. Disruptive Technologies – new technologies may reduce costs or mitigate risk in unanticipated ways 2.3.5. What-If Scenarios – can become much more refined, current, verifiable; basis for Target Architectures 2.4. Mission Performance – EA enables beneficial decision results regarding... 2.4.1. IT Performance and Optimization – towards 100% effective, available resource utilization 2.4.2. IT Stability – towards 100%, real-time uptime 2.4.3. Agility – responding to rapid changes in mission 2.4.4. Outcomes –measures of mission success, KPIs – vs. only "Outputs" 2.4.5. Constraints – appropriate response to constraints 2.4.6. Personnel Performance – better line-of-sight through performance plans to mission outcome 2.5. Mission Risk Mitigation – EA mitigates decision risks in terms of... 2.5.1. Compliance – all the right boxes are checked 2.5.2. Dependencies –cross-agency, segment, government 2.5.3. Transparency – risks, impact and resource utilization are illuminated quickly, comprehensively 2.5.4. Threats and Vulnerabilities – current, realistic awareness and profiles 2.5.5. Consequences – realization of risk can be mapped as a series of consequences, from earlier decisions or new decisions required for current issues 2.5.5.1. Unanticipated – illuminating signals of future or non-symmetric risk; helping to "future-proof" 2.5.5.2. Anticipated – discovering the level of impact that matters 3. EA Program Facet (What parts of the EA can and should be communicated, using business or mission terms?) 3.1. Architecture Models – the visual tools to be created and used 3.1.1. Operating Architecture – the Business Operating Model/Architecture elements of the EA truly drive all other elements, plus expose communication channels 3.1.2. Use Of – how can the EA models be used, and how are they populated, from a reasonable, pragmatic yet compliant perspective? What are the core/minimal models required? What's the relationship of these models, with existing system models? 3.1.3. 
Scope – what level of granularity within the models, and what level of abstraction across the models, is likely to be most effective and useful? 3.2. Traceability – the maturity, status, completeness of the tools 3.2.1. Status – what in fact is the degree of maturity across the integrated EA model and other relevant governance models, and who may already be benefiting from it? 3.2.2. Visibility – how does the EA visibly and effectively prove IT investment performance goals are being reached, with positive mission outcome? 3.3. Governance – what's the interaction, participation method; how are the tools used? 3.3.1. Contributions – how is the EA program informed, accept submissions, collect data? Who are the experts? 3.3.2. Review – how is the EA validated, against what criteria?  Taxonomy Usage Example:   1. To speak with: a. ...a particular set of System Owners Facing Strategic Change, via mandate (like the "Cloud First" mandate); about... b. ...how the EA program's visible and easily accessible Infrastructure Reference Model (i.e. "IRM" or "TRM"), if updated more completely with current system data, can... c. ...help shed light on ways to mitigate risks and avoid future costs associated with NOT leveraging potentially-available shared services across the enterprise... 2. ....the following Marketing & Communications (Sales) Plan can be constructed: a. Create an easy-to-read "Consequence Model" that illustrates how adoption of a cloud capability (like elastic operational storage) can enable rapid and durable compliance with the mandate – using EA traceability. Traceability might be from the IRM to the ARM (that identifies reusable services invoking the elastic storage), and then to the PRM with performance measures (such as % utilization of purchased storage allocation) included in the OMB Exhibits; and b. Schedule a meeting with the Program Owners, timed during their Acquisition Strategy meetings in response to the mandate, to use the "Consequence Model" for advising them to organize a rapid and relevant RFI solicitation for this cloud capability (regarding alternatives for sourcing elastic operational storage); and c. Schedule a series of short "Discovery" meetings with the system architecture leads (as agreed by the Program Owners), to further populate/validate the "As-Is" models and frame the "To Be" models (via scenarios), to better inform the RFI, obtain the best feedback from the vendor community, and provide potential value for and avoid impact to all other programs and systems. --end example -- Note that communications with the intended audience should take a page out of the standard "Search Engine Optimization" (SEO) playbook, using keywords and phrases relating to "value" and "outcome" vs. "compliance" and "output". Searches in email boxes, internal and external search engines for phrases like "cost avoidance strategies", "mission performance metrics" and "innovation funding" should yield messages and content from the EA team. This targeted, informed, practical sales approach should result in additional buy-in and participation, additional EA information contribution and model validation, development of more SMEs and quick "proof points" (with real-life testing) to bolster the case for EA. The proof point here is a successful, timely procurement that satisfies not only the external mandate and external oversight review, but also meets internal EA compliance/conformance goals and therefore is more transparently useful across the community. 
In short, if sold effectively, the EA will perform and be recognized. EA won’t therefore be used only for compliance, but also (according to a validated, stated purpose) to directly influence decisions and outcomes. The opinions, views and analysis expressed in this document are those of the author and do not necessarily reflect the views of Oracle.

    Read the article

  • Entity Framework 6: Alpha2 Now Available

    - by ScottGu
    The Entity Framework team recently announced the 2nd alpha release of EF6.   The alpha 2 package is available for download from NuGet. Since this is a pre-release package make sure to select “Include Prereleases” in the NuGet package manager, or execute the following from the package manager console to install it: PM> Install-Package EntityFramework -Pre This week’s alpha release includes a bunch of great improvements in the following areas: Async language support is now available for queries and updates when running on .NET 4.5. Custom conventions now provide the ability to override the default conventions that Code First uses for mapping types, properties, etc. to your database. Multi-tenant migrations allow the same database to be used by multiple contexts with full Code First Migrations support for independently evolving the model backing each context. Using Enumerable.Contains in a LINQ query is now handled much more efficiently by EF and the SQL Server provider resulting greatly improved performance. All features of EF6 (except async) are available on both .NET 4 and .NET 4.5. This includes support for enums and spatial types and the performance improvements that were previously only available when using .NET 4.5. Start-up time for many large models has been dramatically improved thanks to improved view generation performance. Below are some additional details about a few of the improvements above: Async Support .NET 4.5 introduced the Task-Based Asynchronous Pattern that uses the async and await keywords to help make writing asynchronous code easier. EF 6 now supports this pattern. This is great for ASP.NET applications as database calls made through EF can now be processed asynchronously – avoiding any blocking of worker threads. This can increase scalability on the server by allowing more requests to be processed while waiting for the database to respond. The following code shows an MVC controller that is querying a database for a list of location entities:     public class HomeController : Controller     {         LocationContext db = new LocationContext();           public async Task<ActionResult> Index()         {             var locations = await db.Locations.ToListAsync();               return View(locations);         }     } Notice above the call to the new ToListAsync method with the await keyword. When the web server reaches this code it initiates the database request, but rather than blocking while waiting for the results to come back, the thread that is processing the request returns to the thread pool, allowing ASP.NET to process another incoming request with the same thread. In other words, a thread is only consumed when there is actual processing work to do, allowing the web server to handle more concurrent requests with the same resources. A more detailed walkthrough covering async in EF is available with additional information and examples. Also a walkthrough is available showing how to use async in an ASP.NET MVC application. Custom Conventions When working with EF Code First, the default behavior is to map .NET classes to tables using a set of conventions baked into EF. For example, Code First will detect properties that end with “ID” and configure them automatically as primary keys. However, sometimes you cannot or do not want to follow those conventions and would rather provide your own. For example, maybe your primary key properties all end in “Key” instead of “Id”. 
Custom conventions allow the default conventions to be overridden or new conventions to be added so that Code First can map by convention using whatever rules make sense for your project. The following code demonstrates using custom conventions to set the precision of all decimals to 5. As with other Code First configuration, this code is placed in the OnModelCreating method which is overridden on your derived DbContext class:         protected override void OnModelCreating(DbModelBuilder modelBuilder)         {             modelBuilder.Properties<decimal>()                 .Configure(x => x.HasPrecision(5));           } But what if there are a couple of places where a decimal property should have a different precision? Just as with all the existing Code First conventions, this new convention can be overridden for a particular property simply by explicitly configuring that property using either the fluent API or a data annotation. A more detailed description of custom code first conventions is available here. Community Involvement I blogged a while ago about EF being released under an open source license.  Since then a number of community members have made contributions and these are included in EF6 alpha 2. Two examples of community contributions are: AlirezaHaghshenas contributed a change that increases the startup performance of EF for larger models by improving the performance of view generation. The change means that it is less often necessary to use of pre-generated views. UnaiZorrilla contributed the first community feature to EF: the ability to load all Code First configuration classes in an assembly with a single method call like the following: protected override void OnModelCreating(DbModelBuilder modelBuilder) {        modelBuilder.Configurations            .AddFromAssembly(typeof(LocationContext).Assembly); } This code will find and load all the classes that inherit from EntityTypeConfiguration<T> or ComplexTypeConfiguration<T> in the assembly where LocationContext is defined. This reduces the amount of coupling between the context and Code First configuration classes, and is also a very convenient shortcut for large models. Other upcoming features coming in EF 6 Lots of information about the development of EF6 can be found on the EF CodePlex site, including a roadmap showing the other features that are planned for EF6. One of of the nice upcoming features is connection resiliency, which will automate the process of retying database operations on transient failures common in cloud environments and with databases such as the Windows Azure SQL Database. Another often requested feature that will be included in EF6 is the ability to map stored procedures to query and update operations on entities when using Code First. Summary EF6 is the first open source release of Entity Framework being developed in CodePlex. The alpha 2 preview release of EF6 is now available on NuGet, and contains some really great features for you to try. The EF team are always looking for feedback from developers - especially on the new features such as custom Code First conventions and async support. To provide feedback you can post a comment on the EF6 alpha 2 announcement post, start a discussion or file a bug on the CodePlex site. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
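
    To make the "override the convention for one property" remark above concrete, here is a small sketch (the Invoice entity and Total property are invented names); explicit fluent configuration still wins over the decimal-precision convention from the post:

    public class Invoice
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // convention from the post: every decimal gets precision 5
        modelBuilder.Properties<decimal>()
                    .Configure(x => x.HasPrecision(5));

        // explicit per-property configuration overrides the convention
        modelBuilder.Entity<Invoice>()
                    .Property(i => i.Total)
                    .HasPrecision(18, 2);
    }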

    Read the article

  • What's the best way to expose a Model object in a ViewModel?

    - by Angel
    In a WPF MVVM application, I exposed my model object into my viewModel by creating an instance of Model class (which cause dependency) into ViewModel. Instead of creating separate VM properties, I wrap the Model properties inside my ViewModel Property. My model is just an entity framework generated proxy class: public partial class TblProduct { public TblProduct() { this.TblPurchaseDetails = new HashSet<TblPurchaseDetail>(); this.TblPurchaseOrderDetails = new HashSet<TblPurchaseOrderDetail>(); this.TblSalesInvoiceDetails = new HashSet<TblSalesInvoiceDetail>(); this.TblSalesOrderDetails = new HashSet<TblSalesOrderDetail>(); } public int ProductId { get; set; } public string ProductCode { get; set; } public string ProductName { get; set; } public int CategoryId { get; set; } public string Color { get; set; } public Nullable<decimal> PurchaseRate { get; set; } public Nullable<decimal> SalesRate { get; set; } public string ImagePath { get; set; } public bool IsActive { get; set; } public virtual TblCompany TblCompany { get; set; } public virtual TblProductCategory TblProductCategory { get; set; } public virtual TblUser TblUser { get; set; } public virtual ICollection<TblPurchaseDetail> TblPurchaseDetails { get; set; } public virtual ICollection<TblPurchaseOrderDetail> TblPurchaseOrderDetails { get; set; } public virtual ICollection<TblSalesInvoiceDetail> TblSalesInvoiceDetails { get; set; } public virtual ICollection<TblSalesOrderDetail> TblSalesOrderDetails { get; set; } } Here is my ViewModel: public class ProductViewModel : WorkspaceViewModel { #region Constructor public ProductViewModel() { StartApp(); } #endregion //Constructor #region Properties private IProductDataService _dataService; public IProductDataService DataService { get { if (_dataService == null) { if (IsInDesignMode) { _dataService = new ProductDataServiceMock(); } else { _dataService = new ProductDataService(); } } return _dataService; } } //Get and set Model object private TblProduct _product; public TblProduct Product { get { return _product ?? 
(_product = new TblProduct()); } set { _product = value; } } #region Public Properties public int ProductId { get { return Product.ProductId; } set { if (Product.ProductId == value) { return; } Product.ProductId = value; RaisePropertyChanged("ProductId"); } } public string ProductName { get { return Product.ProductName; } set { if (Product.ProductName == value) { return; } Product.ProductName = value; RaisePropertyChanged(() => ProductName); } } private ObservableCollection<TblProduct> _productRecords; public ObservableCollection<TblProduct> ProductRecords { get { return _productRecords; } set { _productRecords = value; RaisePropertyChanged("ProductRecords"); } } //Selected Product private TblProduct _selectedProduct; public TblProduct SelectedProduct { get { return _selectedProduct; } set { _selectedProduct = value; if (_selectedProduct != null) { this.ProductId = _selectedProduct.ProductId; this.ProductCode = _selectedProduct.ProductCode; } RaisePropertyChanged("SelectedProduct"); } } #endregion //Public Properties #endregion // Properties #region Commands private ICommand _newCommand; public ICommand NewCommand { get { if (_newCommand == null) { _newCommand = new RelayCommand(() => ResetAll()); } return _newCommand; } } private ICommand _saveCommand; public ICommand SaveCommand { get { if (_saveCommand == null) { _saveCommand = new RelayCommand(() => Save()); } return _saveCommand; } } private ICommand _deleteCommand; public ICommand DeleteCommand { get { if (_deleteCommand == null) { _deleteCommand = new RelayCommand(() => Delete()); } return _deleteCommand; } } #endregion //Commands #region Methods private void StartApp() { LoadProductCollection(); } private void LoadProductCollection() { var q = DataService.GetAllProducts(); this.ProductRecords = new ObservableCollection<TblProduct>(q); } private void Save() { if (SelectedOperateMode == OperateModeEnum.OperateMode.New) { //Pass the Model object into Dataservice for save DataService.SaveProduct(this.Product); } else if (SelectedOperateMode == OperateModeEnum.OperateMode.Edit) { //Pass the Model object into Dataservice for Update DataService.UpdateProduct(this.Product); } ResetAll(); LoadProductCollection(); } #endregion //Methods } Here is my Service class: class ProductDataService:IProductDataService { /// <summary> /// Context object of Entity Framework model /// </summary> private MaizeEntities Context { get; set; } public ProductDataService() { Context = new MaizeEntities(); } public IEnumerable<TblProduct> GetAllProducts() { using(var context=new R_MaizeEntities()) { var q = from p in context.TblProducts where p.IsDel == false select p; return new ObservableCollection<TblProduct>(q); } } public void SaveProduct(TblProduct _product) { using(var context=new R_MaizeEntities()) { _product.LastModUserId = GlobalObjects.LoggedUserID; _product.LastModDttm = DateTime.Now; _product.CompanyId = GlobalObjects.CompanyID; context.TblProducts.Add(_product); context.SaveChanges(); } } public void UpdateProduct(TblProduct _product) { using (var context = new R_MaizeEntities()) { context.TblProducts.Attach(_product); context.Entry(_product).State = EntityState.Modified; _product.LastModUserId = GlobalObjects.LoggedUserID; _product.LastModDttm = DateTime.Now; _product.CompanyId = GlobalObjects.CompanyID; context.SaveChanges(); } } public void DeleteProduct(int _productId) { using (var context = new R_MaizeEntities()) { var product = (from c in context.TblProducts where c.ProductId == _productId select c).First(); product.LastModUserId = 
GlobalObjects.LoggedUserID; product.LastModDttm = DateTime.Now; product.IsDel = true; context.SaveChanges(); } } } I exposed my model object in my ViewModel by creating an instance of it with the new keyword, and I also instantiated my DataService class inside the VM. I know this creates a strong dependency. So: What's the best way to expose a Model object in a ViewModel? What's the best way to use a DataService in the VM?
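One widely used way to address both questions is constructor injection: something outside the ViewModel (a composition root, an IoC container, or a ViewModel locator) chooses the IProductDataService implementation and hands it in. The sketch below is illustrative only — it reuses the interface and mock names from the question, while the wiring code and the isInDesignMode flag are assumptions, not part of the original:

public class ProductViewModel : WorkspaceViewModel
{
    private readonly IProductDataService _dataService;

    // The dependency arrives ready-made; the ViewModel never news it up,
    // so unit tests and the designer can pass ProductDataServiceMock instead.
    public ProductViewModel(IProductDataService dataService)
    {
        if (dataService == null)
            throw new ArgumentNullException("dataService");
        _dataService = dataService;
        StartApp();
    }

    // ...the rest of the ViewModel talks to _dataService as before...
}

// Composition root (e.g. application startup or a ViewModel locator):
IProductDataService service = isInDesignMode
    ? (IProductDataService)new ProductDataServiceMock()
    : new ProductDataService();
var viewModel = new ProductViewModel(service);

The same technique applies to the model itself: the ViewModel can receive a TblProduct (or a factory for one) through its constructor, keeping the wrapping-property pattern while removing the new TblProduct() call from the ViewModel.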

    Read the article

  • Portable class libraries and fetching JSON

    - by Jeff
After much delay, we finally have the Windows Phone 8 SDK to go along with the Windows 8 Store SDK, or whatever ridiculous name they’re giving it these days. (Seriously… that no one could come up with a suitable replacement for “metro” is disappointing in an otherwise exciting set of product launches.) One of the neat-o things is the potential for code reuse, particularly across Windows 8 and Windows Phone 8 apps. This is accomplished in part with portable class libraries, which allow you to share code between different types of projects. With some other techniques and quasi-hacks, you can share some amount of code, and I saw it mentioned in one of the Build videos that they’re seeing as much as 70% code reuse. Not bad. However, I’ve already hit a super annoying snag. It appears that the HttpClient class, with its idiot-proof async goodness, is not included in the Windows Phone 8 class libraries. Shock, gasp, horror, disappointment, etc. The delay in releasing it already caused dismay among developers, and I’m sure this won’t help. So I started refactoring some code I already had for a Windows 8 Store app (ugh) to accommodate the use of HttpWebRequest instead. I haven’t tried it in a Windows Phone 8 project beyond compiling, but it appears to work. I used this StackOverflow answer as a starting point since it’s been a long time since I used HttpWebRequest, and keep in mind that it has no exception handling. It needs refinement. The goal here is to new up the client, and call a method that returns some deserialized JSON objects from the Intertubes. Adding facilities for headers or cookies is probably a good next step. You need to use NuGet for a Json.NET reference. So here’s the start: using System.Net; using System.Threading.Tasks; using Newtonsoft.Json; using System.IO; namespace MahProject {     public class ServiceClient<T> where T : class     {         public ServiceClient(string url)         {             _url = url;         }         private readonly string _url;         public async Task<T> GetResult()         {             var response = await MakeAsyncRequest(_url);             var result = JsonConvert.DeserializeObject<T>(response);             return result;         }         public static Task<string> MakeAsyncRequest(string url)         {             var request = (HttpWebRequest)WebRequest.Create(url);             request.ContentType = "application/json";             Task<WebResponse> task = Task.Factory.FromAsync(                 request.BeginGetResponse,                 asyncResult => request.EndGetResponse(asyncResult),                 null);             return task.ContinueWith(t => ReadStreamFromResponse(t.Result));         }         private static string ReadStreamFromResponse(WebResponse response)         {             using (var responseStream = response.GetResponseStream())                 using (var reader = new StreamReader(responseStream))                 {                     var content = reader.ReadToEnd();                     return content;                 }         }     } } Calling it in some kind of repository class may look like this, if you wanted to return an array of Park objects (Park model class omitted because it doesn’t matter): public class ParkRepo {     public async Task<Park[]> GetAllParks()     {         var client = new ServiceClient<Park[]>("http://superfoo/endpoint");         return await client.GetResult();     } }
(making sure the method uses the async keyword): var parkRepo = new ParkRepo(); var results = await parkRepo.GetAllParks(); // bind results to some UI or observable collection or something Hopefully this saves you a little time.
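The post flags the missing exception handling and names headers as a likely next step; here is one possible shape for that refinement. This is a sketch of my own, not from the original post: the WebException-to-InvalidOperationException translation is a design choice, and the X-Api-Key header is a made-up example (which headers are settable varies by platform):

public static async Task<string> MakeAsyncRequest(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.ContentType = "application/json";
    // Hypothetical custom header, added before the request is issued.
    request.Headers["X-Api-Key"] = "your-key-here";
    try
    {
        // Same Begin/End pair as above, awaited instead of using ContinueWith.
        var response = await Task<WebResponse>.Factory.FromAsync(
            request.BeginGetResponse,
            asyncResult => request.EndGetResponse(asyncResult),
            null);
        return ReadStreamFromResponse(response);
    }
    catch (WebException ex)
    {
        // Collapse transport failures into one exception type so callers
        // don't need to know about HttpWebRequest internals.
        throw new InvalidOperationException("Request to " + url + " failed.", ex);
    }
}

Because only MakeAsyncRequest changes, GetResult and the ParkRepo example can stay exactly as written.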

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude,latitude) pair found in the same record. The code looks like so: def read_csv_zip(path, timezones): with ZipFile(path) as z, z.open(z.namelist()[0]) as input: csv_rows = csv.reader(input) header = csv_rows.next() check,converters = get_aux_stuff(header) for csv_row in csv_rows: if check(csv_row): row = { converter[0]:converter[1](value) for converter, value in zip(converters, csv_row) if allow_field(converter) } ts = row['ts'] lng, lat = row['loc'] found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng-tz_lookup_radius, lat-tz_lookup_radius],[lng+tz_lookup_radius, lat+tz_lookup_radius]]}}})) if found_tz_entry: tz_name = found_tz_entry['tz'] local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None) row['tz'] = tz_name else: local_ts = (ts.astimezone(utc) + timedelta(hours = int(lng/15))).replace(tzinfo = None) row['local_ts'] = local_ts yield row def insert_documents(collection, source, batch_size): while True: items = list(itertools.islice(source, batch_size)) if len(items) == 0: break try: collection.insert(items) except: for item in items: try: collection.insert(item) except Exception as exc: print("Failed to insert record {0} - {1}".format(item['_id'], exc)) def main(zip_path): with Connection() as connection: data = connection.mydb.data timezones = connection.timezones.data insert_documents(data, read_csv_zip(zip_path, timezones), 1000) The code proceeds as follows: Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.) For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful - that timezone is used to convert the record timestamp (pacific time) to the respective local timestamp. If no mapping is found - a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks. EDIT The timezones collection contains 8176040 records, each containing four values: > db.data.findOne() { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" } EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I have created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, it actually works more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task. 
EDIT3 Profile output when using collection.find_one: >>> p.sort_stats('cumulative').print_stats(10) Tue Apr 10 14:28:39 2012 ImportDataIntoMongo.profile 64549590 function calls (64549180 primitive calls) in 1231.257 seconds Ordered by: cumulative time List reduced from 730 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.012 0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>) 1 0.001 0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main) 1 853.558 853.558 853.558 853.558 {raw_input} 1 0.598 0.598 370.510 370.510 ImportDataIntoMongo.py:165(insert_documents) 343407 9.965 0.000 359.034 0.001 ImportDataIntoMongo.py:137(read_csv_zip) 343408 2.927 0.000 287.035 0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one) 343408 1.842 0.000 274.803 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next) 343408 2.542 0.000 271.212 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh) 343408 4.512 0.000 253.673 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message) 343408 0.971 0.000 242.078 0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response) Profile output when using index.intersection: >>> p.sort_stats('cumulative').print_stats(10) Wed Apr 11 16:21:31 2012 ImportDataIntoMongo.profile 41542960 function calls (41542536 primitive calls) in 2889.164 seconds Ordered by: cumulative time List reduced from 778 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.028 0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>) 1 0.017 0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main) 1 2365.526 2365.526 2365.526 2365.526 {raw_input} 1 0.766 0.766 502.817 502.817 ImportDataIntoMongo.py:180(insert_documents) 343407 9.147 0.000 491.433 0.001 ImportDataIntoMongo.py:152(read_csv_zip) 343406 0.571 0.000 391.394 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection) 343406 379.957 0.001 390.824 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj) 686513 22.616 0.000 38.705 0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects) 343406 6.134 0.000 33.326 0.000 ImportDataIntoMongo.py:162(<dictcomp>) 346 0.396 0.001 30.665 0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert) EDIT4 I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.

    Read the article

  • Different behavior for REF CURSOR between Oracle 10g and 11g when unique index present?

    - by wweicker
Description I have an Oracle stored procedure that has been running for 7 or so years both locally on development instances and on multiple client test and production instances running Oracle 8, then 9, then 10, and recently 11. It had worked consistently until the upgrade to Oracle 11g. Basically, the procedure opens a reference cursor, updates a table, then completes. In 10g the cursor will contain the expected results but in 11g the cursor will be empty. No DML or DDL changed after the upgrade to 11g. This behavior is consistent on every 10g or 11g instance I've tried (10.2.0.3, 10.2.0.4, 11.1.0.7, 11.2.0.1 - all running on Windows). The specific code is much more complicated but to explain the issue in a somewhat realistic overview: I have some data in a header table and a bunch of child tables that will be output to PDF. The header table has a boolean (NUMBER(1) where 0 is false and 1 is true) column indicating whether that data has been processed yet. The view is limited to only show rows that have not been processed (the view also joins on some other tables, makes some inline queries and function calls, etc.). So at the time when the cursor is opened, the view shows one or more rows, then after the cursor is opened an update statement runs to flip the flag in the header table, a commit is issued, then the procedure completes. On 10g, the cursor opens, it contains the row, then the update statement flips the flag and running the procedure a second time would yield no data. On 11g, the cursor never contains the row; it's as if the cursor does not open until after the update statement runs. I'm concerned that something may have changed in 11g (hopefully a setting that can be configured) that might affect other procedures and other applications. What I'd like to know is whether anyone knows why the behavior is different between the two database versions and whether the issue can be resolved without code changes. Update 1: I managed to track the issue down to a unique constraint. It seems that when the unique constraint is present in 11g the issue is reproducible 100% of the time regardless of whether I'm running the real world code against the actual objects or the following simple example. Update 2: I was able to completely eliminate the view from the equation. I have updated the simple example to show the problem exists even when querying directly against the table. 
Simple Example CREATE TABLE tbl1 ( col1 VARCHAR2(10), col2 NUMBER(1) ); INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0); /* View is no longer required to demonstrate the problem CREATE OR REPLACE VIEW vw1 (col1, col2) AS SELECT col1, col2 FROM tbl1 WHERE col2 = 0; */ CREATE OR REPLACE PACKAGE pkg1 AS TYPE refWEB_CURSOR IS REF CURSOR; PROCEDURE proc1 (crs OUT refWEB_CURSOR); END pkg1; CREATE OR REPLACE PACKAGE BODY pkg1 IS PROCEDURE proc1 (crs OUT refWEB_CURSOR) IS BEGIN OPEN crs FOR SELECT col1 FROM tbl1 WHERE col1 = 'TEST1' AND col2 = 0; UPDATE tbl1 SET col2 = 1 WHERE col1 = 'TEST1'; COMMIT; END proc1; END pkg1; Anonymous Block Demo DECLARE crs1 pkg1.refWEB_CURSOR; TYPE rectype1 IS RECORD ( col1 tbl1.col1%TYPE ); rec1 rectype1; BEGIN pkg1.proc1 ( crs1 ); DBMS_OUTPUT.PUT_LINE('begin first test'); LOOP FETCH crs1 INTO rec1; EXIT WHEN crs1%NOTFOUND; DBMS_OUTPUT.PUT_LINE(rec1.col1); END LOOP; DBMS_OUTPUT.PUT_LINE('end first test'); END; /* After creating this index, the problem is seen */ CREATE UNIQUE INDEX unique_col1 ON tbl1 (col1); /* Reset data to initial values */ TRUNCATE TABLE tbl1; INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0); DECLARE crs1 pkg1.refWEB_CURSOR; TYPE rectype1 IS RECORD ( col1 tbl1.col1%TYPE ); rec1 rectype1; BEGIN pkg1.proc1 ( crs1 ); DBMS_OUTPUT.PUT_LINE('begin second test'); LOOP FETCH crs1 INTO rec1; EXIT WHEN crs1%NOTFOUND; DBMS_OUTPUT.PUT_LINE(rec1.col1); END LOOP; DBMS_OUTPUT.PUT_LINE('end second test'); END; Example of what the output on 10g would be:   begin first test   TEST1   end first test   begin second test   TEST1   end second test Example of what the output on 11g would be:   begin first test   TEST1   end first test   begin second test   end second test Clarification I can't remove the COMMIT because in the real world scenario the procedure is called from a web application. When the data provider on the front end calls the procedure it will issue an implicit COMMIT when disconnecting from the database anyway. So if I remove the COMMIT in the procedure then yes, the anonymous block demo would work but the real world scenario would not because the COMMIT would still happen. Question Why is 11g behaving differently? Is there anything I can do other than re-write the code?

    Read the article

  • 2-Bay External HDD Enclosure in JBOD mode fails to detect both drives (Linux & Windows)

    - by mgc8888
I recently purchased a couple of USB 3.0 External HDD Enclosures to use for storage and backup; the idea was to have one act as backup to the other, with 4 x 3TB drives in total. However, the second drive in each is not accessible in either Linux or Windows, and I could not determine the reason. 1. Situation The two enclosures are slightly different (couldn't find them in stock at the same time) yet from many little details they appear to be the same Chinese base design with a tweaked outer shell. The models are: Sharkoon 2-Bay RAID Box Fantec MR-35DU3 The drives are Seagate 3TB Barracuda ST33000651AS, firmware CC44, all identical. From reading manuals and online sources, I determined that JBOD would be the optimal setup for my needs -- addressing the two drives separately in each enclosure would be important, making it easy to swap drives and mix&match them if needed; all the other modes implied the controller doing a combination of the drives. The software used was Debian GNU/Linux - testing/wheezy - kernel 2.6.39-2 and Windows 7 Ultimate. 2. Description of the problem Now, here comes the problem: every time I connect either of the enclosures to a PC using the supplied cable (tried a different one as well), only the HDD in the top bay is readable; the one below is detected yet errors out in various ways. According to the manuals, this should not happen: in JBOD, the system should be able to "see" two separate drives upon connection. This happens with both enclosures and any combination of HDDs (i.e. if I swap them, the same thing happens), so the HDDs are good and I think so are the enclosures (two different companies making similar products that failed in an identical fashion would be very unlikely). The top HDD can be used fine every time, I actually tried a speed test from Linux and got about 150MiB/s reads, so all is working as it should; the one below refuses to work every time. So the failure is consistent. To make sure this was not some obscure Linux bug, I tried the same under Windows 7, and the system also only created one drive letter for a drive of 3TB size (so it was only seeing one instead of both). Placing an older, known good, 2TB drive in the top bay made that drive the one recognised, so we have the same issue under Windows as well. Log entries under Linux (tested here with a 3TB and a 2TB drive so I could differentiate them; either one works in the top bay; in the test setup the 3TB one is on top). You can see both being detected; the top one is fine, but the bottom one produces only errors: Jul 19 23:28:15 media kernel: [260150.582436] usb 6-1: New USB device found, idVendor=1ca1, idProduct=18ae Jul 19 23:28:15 media kernel: [260150.582440] usb 6-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3 Jul 19 23:28:15 media kernel: [260150.582442] usb 6-1: Product: Usb Sata Bridge Jul 19 23:28:15 media kernel: [260150.582444] usb 6-1: Manufacturer: SYMWAVE Jul 19 23:28:15 media kernel: [260150.582446] usb 6-1: SerialNumber: 39584B304C4E3441 Jul 19 23:28:15 media kernel: [260150.870412] scsi11 : usb-storage 6-1:1.0 Jul 19 23:28:16 media kernel: [260151.882087] scsi 11:0:0:0: Direct-Access SYMWAVE ST33000651AS CC44 PQ: 0 ANSI: 4 Jul 19 23:28:16 media kernel: [260151.882242] scsi 11:0:0:1: Direct-Access SYMWAVE ST32000641AS CC12 PQ: 0 ANSI: 4 Jul 19 23:28:16 media kernel: [260151.882677] sd 11:0:0:0: Attached scsi generic sg2 type 0 Jul 19 23:28:16 media kernel: [260151.882774] sd 11:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16). 
Jul 19 23:28:16 media kernel: [260151.882857] sd 11:0:0:1: Attached scsi generic sg3 type 0 Jul 19 23:28:16 media kernel: [260151.882893] sd 11:0:0:0: [sdb] 5860533168 512-byte logical blocks: (3.00 TB/2.72 TiB) Jul 19 23:28:16 media kernel: [260151.883085] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.883582] sd 11:0:0:0: [sdb] Write Protect is off Jul 19 23:28:16 media kernel: [260151.883961] sd 11:0:0:1: [sdc] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB) Jul 19 23:28:16 media kernel: [260151.884145] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.884570] sd 11:0:0:1: [sdc] Write Protect is off Jul 19 23:28:16 media kernel: [260151.884855] sd 11:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16). Jul 19 23:28:16 media kernel: [260151.885286] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.885807] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.909595] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.910159] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.910163] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.910167] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.910169] sd 11:0:0:1: [sdc] Add. Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.910172] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 Jul 19 23:28:16 media kernel: [260151.910182] quiet_error: 2 callbacks suppressed Jul 19 23:28:16 media kernel: [260151.910570] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.911153] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.911156] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.911159] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.911161] sd 11:0:0:1: [sdc] Add. Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.911164] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 Jul 19 23:28:16 media kernel: [260151.911385] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.911902] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.911905] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.911908] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.911910] sd 11:0:0:1: [sdc] Add. Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.911913] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 Jul 19 23:28:16 media kernel: [260151.912128] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.912650] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.912653] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.912656] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.912657] sd 11:0:0:1: [sdc] Add. 
Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.912660] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 Jul 19 23:28:16 media kernel: [260151.912876] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.913439] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.913442] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.913445] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.913446] sd 11:0:0:1: [sdc] Add. Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.913449] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 Jul 19 23:28:16 media kernel: [260151.945227] xhci_hcd 0000:03:00.0: WARN: Stalled endpoint Jul 19 23:28:16 media kernel: [260151.945863] sd 11:0:0:1: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Jul 19 23:28:16 media kernel: [260151.945866] sd 11:0:0:1: [sdc] Sense Key : Illegal Request [current] Jul 19 23:28:16 media kernel: [260151.945870] Info fld=0x0 Jul 19 23:28:16 media kernel: [260151.945871] sd 11:0:0:1: [sdc] Add. Sense: Invalid field in cdb Jul 19 23:28:16 media kernel: [260151.945875] sd 11:0:0:1: [sdc] CDB: Read(10): 28 20 00 00 00 00 00 00 08 00 (...) and so on for like 10 seconds until it gives up (...) 3. Question So, my question would be: what is causing this? Am I missing something, should I configure things differently, is this a known limitation? Searching online for more information did not yield any useful results... Thank you in advance for any help!

    Read the article

  • Much Ado About Nothing: Stub Objects

    - by user9154181
The Solaris 11 link-editor (ld) contains support for a new type of object that we call a stub object. A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be executed — the runtime linker will kill any process that attempts to load one. However, you can link to a stub object as a dependency, allowing the stub to act as a proxy for the real version of the object. You may well wonder if there is a point to producing an object that contains nothing but linking interface. As it turns out, stub objects are very useful for building large bodies of code such as Solaris. In the last year, we've had considerable success in applying them to one of our oldest and thorniest build problems. In this discussion, I will describe how we came to invent these objects, and how we apply them to building Solaris. This posting explains where the idea for stub objects came from, and details our long and twisty journey from hallway idea to standard link-editor feature. I expect that these details are mainly of interest to those who work on Solaris and its makefiles, those who have done so in the past, and those who work with other similar bodies of code. A subsequent posting will omit the history and background details, and instead discuss how to build and use stub objects. If you are mainly interested in what stub objects are, and don't care about the underlying software war stories, I encourage you to skip ahead. The Long Road To Stubs This all started for me with an email discussion in May of 2008, regarding a change request that was filed in 2002, entitled: 4631488 lib/Makefile is too patient: .WAITs should be reduced This CR encapsulates a number of chronic issues with Solaris builds: We build Solaris with a parallel make (dmake) that tries to build as much of the code base in parallel as possible. There is a lot of code to build, and we've long made use of parallelized builds to get the job done quicker. This is even more important in today's world of massively multicore hardware. Solaris contains a large number of executables and shared objects. Executables depend on shared objects, and shared objects can depend on each other. Before you can build an object, you need to ensure that the objects it needs have been built. This implies a need for serialization, which is in direct opposition to the desire to build everything in parallel. To accurately build objects in the right order requires an accurate set of make rules defining the things that depend on each other. This sounds simple, but the reality is quite complex. In practice, having programmers explicitly specify these dependencies is a losing strategy: It's really hard to get right. It's really easy to get it wrong and never know it because things build anyway. Even if you get it right, it won't stay that way, because dependencies between objects can change over time, and make cannot help you detect such drifting. You won't know that you got it wrong until the builds break. That can be a long time after the change that triggered the breakage happened, making it hard to connect the cause and the effect. Usually this happens just before a release, when the pressure is on, it's hard to think calmly, and there is no time for deep fixes. 
As a poor compromise, the libraries in core Solaris were built using a set of grossly incomplete hand written rules, supplemented with a number of dmake .WAIT directives used to group the libraries into sets of non-interacting groups that can be built in parallel because we think they don't depend on each other. From time to time, someone will suggest that we could analyze the built objects themselves to determine their dependencies and then generate make rules based on those relationships. This is possible, but there are complications that limit the usefulness of that approach: To analyze an object, you have to build it first. This is a classic chicken and egg scenario. You could analyze the results of a previous build, but then you're not necessarily going to get accurate rules for the current code. It should be possible to build the code without having a built workspace available. The analysis will take time, and remember that we're constantly trying to make builds faster, not slower. By definition, such an approach will always be approximate, and therefore only incrementally more accurate than the hand written rules described above. The hand written rules are fast and cheap, while this idea is slow and complex, so we stayed with the hand written approach. Solaris was built that way, essentially forever, because these are genuinely difficult problems that had no easy answer. The makefiles were full of build races in which the right outcomes happened reliably for years until a new machine or a change in build server workload upset the accidental balance of things. After figuring out what had happened, you'd mutter "How did that ever work?", add another incomplete and soon to be inaccurate make dependency rule to the system, and move on. This was not a satisfying solution, as we tend to be perfectionists in the Solaris group, but we didn't have a better answer. It worked well enough, approximately. And so it went for years. We needed a different approach — a new idea to cut the Gordian Knot. In that discussion from May 2008, my fellow linker-alien Rod Evans had the initial spark that led us to a game-changing series of realizations: The link-editor is used to link objects together, but it only uses the ELF metadata in the object, consisting of symbol tables, ELF versioning sections, and similar data. Notably, it does not look at, or understand, the machine code that makes an object useful at runtime. If you had an object that only contained the ELF metadata for a dependency, but not the code or data, the link-editor would find it equally useful for linking, and would never know the difference. Call it a stub object. In the core Solaris OS, we require all objects to be built with a link-editor mapfile that describes all of its publicly available functions and data. Could we build a stub object using the mapfile for the real object? It ought to be very fast to build stub objects, as there are no input objects to process. Unlike the real object, stub objects would not actually require any dependencies, and so, all of the stubs for the entire system could be built in parallel. When building the real objects, one could link against the stub objects instead of the real dependencies. This means that all the real objects can be built in parallel too, without any serialization. We could replace a system that requires perfect makefile rules with a system that requires no ordering rules whatsoever. The results would be considerably more robust. 
We immediately realized that this idea had potential, but also that there were many details to sort out, lots of work to do, and that perhaps it wouldn't really pan out. As is often the case, it would be necessary to do the work and see how it turned out. Following that conversation, I set about trying to build a stub object. We determined that a faithful stub has to do the following: Present the same set of global symbols, with the same ELF versioning, as the real object. Functions are simple — it suffices to have a symbol of the right type, possibly, but not necessarily, referencing a null function in its text segment. Copy relocations make data more complicated to stub. The possibility of a copy relocation means that when you create a stub, the data symbols must have the actual size of the real data. Any error in this will go uncaught at link time, and will cause tragic failures at runtime that are very hard to diagnose. For reasons too obscure to go into here, involving tentative symbols, it is also important that the data reside in bss, or not, matching its placement in the real object. If the real object has more than one symbol pointing at the same data item, we call these aliased symbols. All data symbols in the stub object must exhibit the same aliasing as the real object. We imagined the stub library feature working as follows: A command line option to ld tells it to produce a stub rather than a real object. In this mode, only mapfiles are examined, and any object or shared libraries on the command line are ignored. The extra information needed (function or data, size, and bss details) would be added to the mapfile. When building the real object instead of the stub, the extra information for building stubs would be validated against the resulting object to ensure that they match. In exploring these ideas, I immediately ran headfirst into the reality of the original mapfile syntax, a subject that I would later write about as The Problem(s) With Solaris SVR4 Link-Editor Mapfiles. The idea of extending that poor language was a non-starter. Until a better mapfile syntax became available, which seemed unlikely in 2008, the solution could not involve extensions to the mapfile syntax. Instead, we cooked up the idea (hack) of augmenting mapfiles with stylized comments that would carry the necessary information. A typical definition might look like: # DATA(i386) __iob 0x3c0 # DATA(amd64,sparcv9) __iob 0xa00 # DATA(sparc) __iob 0x140 iob; A further problem then became clear: If we can't extend the mapfile syntax, then there's no good way to extend ld with an option to produce stub objects, and to validate them against the real objects. The idea of having ld read comments in a mapfile and parse them for content is an unacceptable hack. The entire point of comments is that they are strictly for the human reader, and explicitly ignored by the tool. Taking all of these speed bumps into account, I made a new plan: A perl script reads the mapfiles, generates some small C glue code to produce empty functions and data definitions, compiles and links the stub object from the generated glue code, and then deletes the generated glue code. Another perl script, used after both objects have been built, compares the real and stub objects using data from elfdump, and validates that they present the same linking interface. By June 2008, I had written the above, and generated a stub object for libc. It was a useful prototype process to go through, and it allowed me to explore the ideas at a deep level. 
Ultimately though, the result was unsatisfactory as a basis for real product. There were so many issues: The use of stylized comments was fine for a prototype, but not close to professional enough for shipping product. The idea of having to document and support it was a large concern. The ideal solution for stub objects really does involve having the link-editor accept the same arguments used to build the real object, augmented with a single extra command line option. Any other solution, such as our prototype script, will require makefiles to be modified in deeper ways to support building stubs, and so, will raise barriers to converting existing code. A validation script that rederives what the linker knew when it built an object will always be at a disadvantage relative to the actual linker that did the work. A stub object should be identifiable as such. In the prototype, there was no tag or other metadata that would let you know that they weren't real objects. Being able to identify a stub object in this way means that the file command can tell you what it is, and that the runtime linker can refuse to try and run a program that loads one. At that point, we needed to apply this prototype to building Solaris. As you might imagine, the task of modifying all the makefiles in the core Solaris code base in order to do this is a massive task, and not something you'd enter into lightly. The quality of the prototype just wasn't good enough to justify that sort of time commitment, so I tabled the project, putting it on my list of long term things to think about, and moved on to other work. It would sit there for a couple of years. Semi-coincidentally, one of the projects I tackled after that was to create a new mapfile syntax for the Solaris link-editor. We had wanted to do something about the old mapfile syntax for many years. Others before me had done some paper designs, and a great deal of thought had already gone into the features it should, and should not have, but for various reasons things had never moved beyond the idea stage. When I joined Sun in late 2005, I got involved in reviewing those things and thinking about the problem. Now in 2008, fresh from relearning for the Nth time why the old mapfile syntax was a huge impediment to linker progress, it seemed like the right time to tackle the mapfile issue. Paving the way for proper stub object support was not the driving force behind that effort, but I certainly had them in mind as I moved forward. The new mapfile syntax, which we call version 2, integrated into Nevada build snv_135 in February 2010: 6916788 ld version 2 mapfile syntax PSARC/2009/688 Human readable and extensible ld mapfile syntax In order to prove that the new mapfile syntax was adequate for general purpose use, I had also done an overhaul of the ON consolidation to convert all mapfiles to use the new syntax, and put checks in place that would ensure that no use of the old syntax would creep back in. That work went back into snv_144 in June 2010: 6916796 OSnet mapfiles should use version 2 link-editor syntax That was a big putback, modifying 517 files, adding 18 new files, and removing 110 old ones. I would have done this putback anyway, as the work was already done, and the benefits of human readable syntax are obvious. However, among the justifications listed in CR 6916796 was this: We anticipate adding additional features to the new mapfile language that will be applicable to ON, and which will require all sharable object mapfiles to use the new syntax. 
I never explained what those additional features were, and no one asked. It was premature to say so, but this was a reference to stub objects. By that point, I had already put together a working prototype link-editor with the necessary support for stub objects. I was pleased to find that building stubs was indeed very fast. On my desktop system (Ultra 24), an amd64 stub for libc can be built in a fraction of a second: % ptime ld -64 -z stub -o stubs/libc.so.1 -G -hlibc.so.1 \ -ztext -zdefs -Bdirect ... real 0.019708910 user 0.010101680 sys 0.008528431 In order to go from prototype to integrated link-editor feature, I knew that I would need to prove that stub objects were valuable. And to do that, I knew that I'd have to switch the Solaris ON consolidation to use stub objects and evaluate the outcome. And in order to do that experiment, ON would first need to be converted to version 2 mapfiles. Sub-mission accomplished. Normally when you design a new feature, you can devise reasonably small tests to show it works, and then deploy it incrementally, letting it prove its value as it goes. The entire point of stub objects, however, was to demonstrate that they could be successfully applied to an extremely large and complex code base, and specifically to solve the Solaris build issues detailed above. There was no way to finesse the matter — in order to move ahead, I would have to successfully use stub objects to build the entire ON consolidation and demonstrate their value. In software, the need to boil the ocean can often be a warning sign that things are trending in the wrong direction. Conversely, sometimes progress demands that you build something large and new all at once. A big win, or a big loss — sometimes all you can do is try it and see what happens. And so, I spent some time staring at ON makefiles trying to get a handle on how things work, and how they'd have to change. It's a big and messy world, full of complex interactions, unspecified dependencies, special cases, and knowledge of arcane makefile features... ...and so, I backed away, put it down for a few months and did other work... ...until the fall, when I felt like it was time to stop thinking and pondering (some would say stalling) and get on with it. Without stubs, the following gives a simplified high level view of how Solaris is built: An initially empty directory, known as the proto and referenced via the ROOT makefile macro, is established to receive the files that make up the Solaris distribution. A top level setup rule creates the proto area, and performs operations needed to initialize the workspace so that the main build operations can be launched, such as copying needed header files into the proto area. Parallel builds are launched to build the kernel (usr/src/uts), libraries (usr/src/lib), and commands. The install makefile target builds each item and delivers a copy to the proto area. All libraries and executables link against the objects previously installed in the proto, implying the need to synchronize the order in which things are built. Subsequent passes run lint, and do packaging. Given this structure, the additions to use stub objects are: A new second proto area is established, known as the stub proto and referenced via the STUBROOT makefile macro. The stub proto has the same structure as the real proto, but is used to hold stub objects. All files in the real proto are delivered as part of the Solaris product. In contrast, the stub proto is used to build the product, and then thrown away. 
A new target is added to library Makefiles called stub. This rule builds the stub objects. The ld command is designed so that you can build a stub object using the same ld command line you'd use to build the real object, with the addition of a single -z stub option. This means that the makefile rules for building the stub objects are very similar to those used to build the real objects, and many existing makefile definitions can be shared between them. A new target is added to the Makefiles called stubinstall which delivers the stub objects built by the stub rule into the stub proto. These rules reuse much of the existing plumbing used by the install rule. The setup rule runs stubinstall over the entire lib subtree as part of its initialization. All libraries and executables link against the objects in the stub proto rather than the main proto, and can therefore be built in parallel without any synchronization. There was no small way to try this that would yield meaningful results. I would have to take a leap of faith and edit approximately 1850 makefiles and 300 mapfiles first, trusting that it would all work out. Once the editing was done, I'd type make and see what happened. This took about 6 weeks to do, and there were many dark days when I'd question the entire project, or struggle to understand some of the many twisted and complex situations I'd uncover in the makefiles. I even found a couple of new issues that required changes to the new stub object related code I'd added to ld. With a substantial amount of encouragement and help from some key people in the Solaris group, I eventually got the editing done and stub objects for the entire workspace built. I found that my desktop system could build all the stub objects in the workspace in roughly a minute. This was great news, as it meant that use of the feature is effectively free — no one was likely to notice or care about the cost of building them. After another week of typing make, fixing whatever failed, and doing it again, I succeeded in getting a complete build! The next step was to remove all of the make rules and .WAIT statements dedicated to controlling the order in which libraries under usr/src/lib are built. This came together pretty quickly, and after a few more speed bumps, I had a workspace that built cleanly and looked like something you might actually be able to integrate someday. This was a significant milestone, but there was still much left to do. I turned to doing full nightly builds. Every type of build (open, closed, OpenSolaris, export, domestic) had to be tried. Each type failed in a new and unique way, requiring some thinking and rework. As things came together, I became aware of things that could have been done better, simpler, or cleaner, and those things also required some rethinking, the seeking of wisdom from others, and some rework. After another couple of weeks, it was in close to final form. My focus turned towards the end game and integration. This was a huge workspace, and needed to go back soon, before changes in the gate would make merging increasingly difficult. At this point, I knew that the stub objects had greatly simplified the makefile logic and uncovered a number of race conditions, some of which had been there for years. I assumed that the builds were faster too, so I did some builds intended to quantify the speedup in build time that resulted from this approach. It had never occurred to me that there might not be one. 
And so, I was very surprised to find that the wall clock build times for a stock ON workspace were essentially identical to the times for my stub library enabled version! This is why it is important to always measure, and not just to assume. One can tell from first principles, based on all those removed dependency rules in the library makefile, that the stub object version of ON gives dmake considerably more opportunities to overlap library construction. Some hypotheses were proposed, and shot down: Could we have disabled dmake's parallel feature? No, a quick check showed things being built in parallel. It was suggested that we might be I/O bound, and so, the threads would be mostly idle. That's a plausible explanation, but system stats didn't really support it. Plus, the timings between the stub and non-stub cases were just too suspiciously identical. Are our machines already handling as much parallelism as they are capable of, and unable to exploit these additional opportunities? Once again, we didn't see the evidence to back this up. Eventually, a more plausible and obvious reason emerged: We build the libraries and commands (usr/src/lib, usr/src/cmd) in parallel with the kernel (usr/src/uts). The kernel is the long leg in that race, and so, wall clock measurements of build time are essentially showing how long it takes to build uts. Although it would have been nice to post a huge speedup immediately, we can take solace in knowing that stub objects simplify the makefiles and reduce the possibility of race conditions. The next step in reducing build time should be to find ways to reduce or overlap the uts part of the builds. When that leg of the build becomes shorter, then the increased parallelism in the libs and commands will pay additional dividends. Until then, we'll just have to settle for simpler and more robust. And so, I integrated the link-editor support for creating stub objects into snv_153 (November 2010) with 6993877 ld should produce stub objects PSARC/2010/397 ELF Stub Objects followed by the work to convert the ON consolidation in snv_161 (February 2011) with 7009826 OSnet should use stub objects 4631488 lib/Makefile is too patient: .WAITs should be reduced This was a huge putback, with 2108 modified files, 8 new files, and 2 removed files. Due to the size, I was allowed a window after snv_160 closed in which to do the putback. It went pretty smoothly for something this big; a few more preexisting race conditions would be discovered and addressed over the next few weeks, and things have been quiet since then. Conclusions and Looking Forward Solaris has been built with stub objects since February. The fact that developers no longer specify the order in which libraries are built has been a big success, and we've eliminated an entire class of build error. That's not to say that there are no build races left in the ON makefiles, but we've taken a substantial bite out of the problem while generally simplifying and improving things. The introduction of a stub proto area has also opened some interesting new possibilities for other build improvements. As this article has become quite long, and as those uses do not involve stub objects, I will defer that discussion to a future article.

    Read the article

  • Error compiling GLib in Ubuntu 14.04 (trying to install GimpShop)

    - by Nicolás Salvarrey
I'm kinda new to Linux, so please take it easy on the most complicated stuff. I'm trying to install GimpShop. The installation guide asks me to install GLib first, and when I try to compile it using the make command I get errors. When I run the ./configure --prefix=/usr command, I get this: checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking whether to enable maintainer-specific portions of Makefiles... no checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking for the BeOS... no checking for Win32... no checking whether to enable garbage collector friendliness... no checking whether to disable memory pools... no checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ANSI C... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking for c++... no checking for g++... no checking for gcc... gcc checking whether we are using the GNU C++ compiler... no checking whether gcc accepts -g... no checking dependency style of gcc... gcc3 checking for gcc option to accept ANSI C... none needed checking for a BSD-compatible install... /usr/bin/install -c checking for special C compiler options needed for large files... no checking for _FILE_OFFSET_BITS value needed for large files... no checking for _LARGE_FILES value needed for large files... no checking for pkg-config... /usr/bin/pkg-config checking for gawk... (cached) mawk checking for perl5... no checking for perl... perl checking for indent... no checking for perl... /usr/bin/perl checking for iconv_open... yes checking how to run the C preprocessor... gcc -E checking for egrep... grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking locale.h usability... yes checking locale.h presence... yes checking for locale.h... yes checking for LC_MESSAGES... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking for ngettext in libc... yes checking for dgettext in libc... yes checking for bind_textdomain_codeset... yes checking for msgfmt... /usr/bin/msgfmt checking for dcgettext... yes checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for catalogs to be installed... am ar az be bg bn bs ca cs cy da de el en_CA en_GB eo es et eu fa fi fr ga gl gu he hi hr id is it ja ko lt lv mk mn ms nb ne nl nn no or pa pl pt pt_BR ro ru sk sl sq sr sr@ije sr@Latn sv ta tl tr uk vi wa xh yi zh_CN zh_TW checking for a sed that does not truncate output... /bin/sed checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for /usr/bin/ld option to reload object files... -r checking for BSD-compatible nm... /usr/bin/nm -B checking whether ln -s works... 
yes checking how to recognise dependent libraries... pass_all checking dlfcn.h usability... yes checking dlfcn.h presence... yes checking for dlfcn.h... yes checking for g77... no checking for f77... no checking for xlf... no checking for frt... no checking for pgf77... no checking for fort77... no checking for fl32... no checking for af77... no checking for f90... no checking for xlf90... no checking for pgf90... no checking for epcf90... no checking for f95... no checking for fort... no checking for xlf95... no checking for ifc... no checking for efc... no checking for pgf95... no checking for lf95... no checking for gfortran... no checking whether we are using the GNU Fortran 77 compiler... no checking whether accepts -g... no checking the maximum length of command line arguments... 32768 checking command to parse /usr/bin/nm -B output from gcc object... ok checking for objdir... .libs checking for ar... ar checking for ranlib... ranlib checking for strip... strip checking if gcc static flag works... yes checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC checking if gcc PIC flag -fPIC works... yes checking if gcc supports -c -o file.o... yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... no configure: creating libtool appending configuration tag "CXX" to libtool appending configuration tag "F77" to libtool checking for extra flags to get ANSI library prototypes... none needed checking for extra flags for POSIX compliance... none needed checking for ANSI C header files... (cached) yes checking for vprintf... yes checking for _doprnt... no checking for working alloca.h... yes checking for alloca... yes checking for atexit... yes checking for on_exit... yes checking for char... yes checking size of char... 1 checking for short... yes checking size of short... 2 checking for long... yes checking size of long... 8 checking for int... yes checking size of int... 4 checking for void *... yes checking size of void *... 8 checking for long long... yes checking size of long long... 8 checking for __int64... no checking size of __int64... 0 checking for format to printf and scanf a guint64... %llu checking for an ANSI C-conforming const... yes checking if malloc() and friends prototypes are gmem.h compatible... no checking for growing stack pointer... yes checking for __inline... yes checking for __inline__... yes checking for inline... yes checking if inline functions in headers work... yes checking for ISO C99 varargs macros in C... yes checking for ISO C99 varargs macros in C++... no checking for GNUC varargs macros... yes checking for GNUC visibility attribute... yes checking whether byte ordering is bigendian... no checking dirent.h usability... yes checking dirent.h presence... yes checking for dirent.h... yes checking float.h usability... yes checking float.h presence... yes checking for float.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking pwd.h usability... yes checking pwd.h presence... yes checking for pwd.h... 
yes checking sys/param.h usability... yes checking sys/param.h presence... yes checking for sys/param.h... yes checking sys/poll.h usability... yes checking sys/poll.h presence... yes checking for sys/poll.h... yes checking sys/select.h usability... yes checking sys/select.h presence... yes checking for sys/select.h... yes checking for sys/types.h... (cached) yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking sys/times.h usability... yes checking sys/times.h presence... yes checking for sys/times.h... yes checking for unistd.h... (cached) yes checking values.h usability... yes checking values.h presence... yes checking for values.h... yes checking for stdint.h... (cached) yes checking sched.h usability... yes checking sched.h presence... yes checking for sched.h... yes checking langinfo.h usability... yes checking langinfo.h presence... yes checking for langinfo.h... yes checking for nl_langinfo... yes checking for nl_langinfo and CODESET... yes checking whether we are using the GNU C Library 2.1 or newer... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking for stdlib.h... (cached) yes checking for string.h... (cached) yes checking for setlocale... yes checking for size_t... yes checking size of size_t... 8 checking for the appropriate definition for size_t... unsigned long checking for lstat... yes checking for strerror... yes checking for strsignal... yes checking for memmove... yes checking for mkstemp... yes checking for vsnprintf... yes checking for stpcpy... yes checking for strcasecmp... yes checking for strncasecmp... yes checking for poll... yes checking for getcwd... yes checking for nanosleep... yes checking for vasprintf... yes checking for setenv... yes checking for unsetenv... yes checking for getc_unlocked... yes checking for readlink... yes checking for symlink... yes checking for C99 vsnprintf... yes checking whether printf supports positional parameters... yes checking for signed... yes checking for long long... (cached) yes checking for long double... yes checking for wchar_t... yes checking for wint_t... yes checking for size_t... (cached) yes checking for ptrdiff_t... yes checking for inttypes.h... yes checking for stdint.h... yes checking for snprintf... yes checking for C99 snprintf... yes checking for sys_errlist... yes checking for sys_siglist... yes checking for sys_siglist declaration... yes checking for fd_set... yes, found in sys/types.h checking whether realloc (NULL,) will work... yes checking for nl_langinfo (CODESET)... yes checking for OpenBSD strlcpy/strlcat... no checking for an implementation of va_copy()... yes checking for an implementation of __va_copy()... yes checking whether va_lists can be copied by value... no checking for dlopen... no checking for NSLinkModule... no checking for dlopen in -ldl... yes checking for dlsym in -ldl... yes checking for RTLD_GLOBAL brokenness... no checking for preceeding underscore in symbols... no checking for dlerror... yes checking for the suffix of shared libraries... .so checking for gspawn implementation... gspawn.lo checking for GIOChannel implementation... giounix.lo checking for platform-dependent source... checking whether to compile timeloop... yes checking if building for some Win32 platform... no checking for thread implementation... posix checking thread related cflags... -pthread checking for sched_get_priority_min... yes checking thread related libraries... 
-pthread checking for localtime_r... yes checking for posix getpwuid_r... yes checking size of pthread_t... 8 checking for pthread_attr_setstacksize... yes checking for minimal/maximal thread priority... sched_get_priority_min(SCHED_OTHER)/sched_get_priority_max(SCHED_OTHER) checking for pthread_setschedparam... yes checking for posix yield function... sched_yield checking size of pthread_mutex_t... 40 checking byte contents of PTHREAD_MUTEX_INITIALIZER... 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 checking whether to use assembler code for atomic operations... x86_64 checking value of POLLIN... 1 checking value of POLLOUT... 4 checking value of POLLPRI... 2 checking value of POLLERR... 8 checking value of POLLHUP... 16 checking value of POLLNVAL... 32 checking for EILSEQ... yes configure: creating ./config.status config.status: creating glib-2.0.pc config.status: creating glib-2.0-uninstalled.pc config.status: creating gmodule-2.0.pc config.status: creating gmodule-no-export-2.0.pc config.status: creating gmodule-2.0-uninstalled.pc config.status: creating gthread-2.0.pc config.status: creating gthread-2.0-uninstalled.pc config.status: creating gobject-2.0.pc config.status: creating gobject-2.0-uninstalled.pc config.status: creating glib-zip config.status: creating glib-gettextize config.status: creating Makefile config.status: creating build/Makefile config.status: creating build/win32/Makefile config.status: creating build/win32/dirent/Makefile config.status: creating glib/Makefile config.status: creating glib/libcharset/Makefile config.status: creating glib/gnulib/Makefile config.status: creating gmodule/Makefile config.status: creating gmodule/gmoduleconf.h config.status: creating gobject/Makefile config.status: creating gobject/glib-mkenums config.status: creating gthread/Makefile config.status: creating po/Makefile.in config.status: creating docs/Makefile config.status: creating docs/reference/Makefile config.status: creating docs/reference/glib/Makefile config.status: creating docs/reference/glib/version.xml config.status: creating docs/reference/gobject/Makefile config.status: creating docs/reference/gobject/version.xml config.status: creating tests/Makefile config.status: creating tests/gobject/Makefile config.status: creating m4macros/Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands config.status: executing default-1 commands config.status: executing glibconfig.h commands config.status: glibconfig.h is unchanged config.status: executing chmod-scripts commands nsalvarrey@Delleuze:~/glib-2.6.3$
And then, with the make command, I get this:
galias.h:83:39: error: 'g_ascii_digit_value' aliased to undefined symbol 'IA__g_ascii_digit_value' extern __typeof (g_ascii_digit_value) g_ascii_digit_value __attribute((alias("IA__g_ascii_digit_value"), visibility("default"))); ^ In file included from garray.c:35:0: galias.h:31:35: error: 'g_allocator_new' aliased to undefined symbol 'IA__g_allocator_new' extern __typeof (g_allocator_new) g_allocator_new __attribute((alias("IA__g_allocator_new"), visibility("default"))); ^
make[4]: *** [garray.lo] Error 1
make[4]: Leaving directory '/home/nsalvarrey/glib-2.6.3/glib'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory '/home/nsalvarrey/glib-2.6.3/glib'
make[2]: *** [all] Error 2
make[2]: Leaving directory '/home/nsalvarrey/glib-2.6.3/glib'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory '/home/nsalvarrey/glib-2.6.3'
make: *** [all] Error 2
nsalvarrey@Delleuze:~/glib-2.6.3$
(It's actually a lot longer.) Can somebody help me?
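
    The IA__* "aliased to undefined symbol" errors are a known effect of building a 2005-era GLib (2.6.x) with a much newer gcc, which is stricter about the symbol-visibility aliases that galias.h sets up. Below is a rough sketch of the two commonly suggested ways out, assuming an Ubuntu/Debian system; the package name and the --disable-visibility switch are assumptions to adapt (autoconf-generated configure scripts merely warn about switches they do not recognize):

        # Easiest: skip the source build entirely and install the
        # distribution's prebuilt GLib development package (assumed name):
        sudo apt-get install libglib2.0-dev

        # If glib 2.6.x really must be built from source, disabling the
        # galias.h visibility machinery is the usual workaround:
        ./configure --prefix=/usr --disable-visibility
        make

    GimpShop itself has been unmaintained for years, so the GIMP packages shipped by the distribution are usually a safer route than its bundled build instructions.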

    Read the article

  • Failed to Install Xdebug

    - by burnt1ce
    I've registered xdebug in php.ini (as per http://xdebug.org/docs/install) but it's not showing up when I run "php -m" or when I get a test page to run "phpinfo()". I've just installed the latest version of XAMPP. I've used both "zend_extention" and "zend_extention_ts" to specify the path of the xdebug dll. I made sure my Apache server restarted and picked up the latest php.ini changes by executing "httpd -k restart". Can anyone suggest how to get xdebug to show up? Here are the contents of my php.ini file:
[PHP] ;;;;;;;;;;;;;;;;;;; ; About php.ini ; ;;;;;;;;;;;;;;;;;;; ; PHP's initialization file, generally called php.ini, is responsible for ; configuring many of the aspects of PHP's behavior. ; PHP attempts to find and load this configuration from a number of locations. ; The following is a summary of its search order: ; 1. SAPI module specific location. ; 2. The PHPRC environment variable. (As of PHP 5.2.0) ; 3. A number of predefined registry keys on Windows (As of PHP 5.2.0) ; 4. Current working directory (except CLI) ; 5. The web server's directory (for SAPI modules), or directory of PHP ; (otherwise in Windows) ; 6. The directory from the --with-config-file-path compile time option, or the ; Windows directory (C:\windows or C:\winnt) ; See the PHP docs for more specific information. ; http://php.net/configuration.file ; The syntax of the file is extremely simple. Whitespace and Lines ; beginning with a semicolon are silently ignored (as you probably guessed). ; Section headers (e.g. [Foo]) are also silently ignored, even though ; they might mean something in the future. ; Directives following the section heading [PATH=/www/mysite] only ; apply to PHP files in the /www/mysite directory. Directives ; following the section heading [HOST=www.example.com] only apply to ; PHP files served from www.example.com. Directives set in these ; special sections cannot be overridden by user-defined INI files or ; at runtime. Currently, [PATH=] and [HOST=] sections only work under ; CGI/FastCGI. ; http://php.net/ini.sections ; Directives are specified using the following syntax: ; directive = value ; Directive names are *case sensitive* - foo=bar is different from FOO=bar. ; Directives are variables used to configure PHP or PHP extensions. ; There is no name validation. If PHP can't find an expected ; directive because it is not set or is mistyped, a default value will be used. ; The value can be a string, a number, a PHP constant (e.g. E_ALL or M_PI), one ; of the INI constants (On, Off, True, False, Yes, No and None) or an expression ; (e.g. E_ALL & ~E_NOTICE), a quoted string ("bar"), or a reference to a ; previously set variable or directive (e.g. ${foo}) ; Expressions in the INI file are limited to bitwise operators and parentheses: ; | bitwise OR ; ^ bitwise XOR ; & bitwise AND ; ~ bitwise NOT ; ! boolean NOT ; Boolean flags can be turned on using the values 1, On, True or Yes. ; They can be turned off using the values 0, Off, False or No. ; An empty string can be denoted by simply not writing anything after the equal ; sign, or by using the None keyword: ; foo = ; sets foo to an empty string ; foo = None ; sets foo to an empty string ; foo = "None" ; sets foo to the string 'None' ; If you use constants in your value, and these constants belong to a ; dynamically loaded extension (either a PHP extension or a Zend extension), ; you may only use these constants *after* the line that loads the extension.
;;;;;;;;;;;;;;;;;;; ; About this file ; ;;;;;;;;;;;;;;;;;;; ; PHP comes packaged with two INI files. One that is recommended to be used ; in production environments and one that is recommended to be used in ; development environments. ; php.ini-production contains settings which hold security, performance and ; best practices at its core. But please be aware, these settings may break ; compatibility with older or less security conscience applications. We ; recommending using the production ini in production and testing environments. ; php.ini-development is very similar to its production variant, except it's ; much more verbose when it comes to errors. We recommending using the ; development version only in development environments as errors shown to ; application users can inadvertently leak otherwise secure information. ;;;;;;;;;;;;;;;;;;; ; Quick Reference ; ;;;;;;;;;;;;;;;;;;; ; The following are all the settings which are different in either the production ; or development versions of the INIs with respect to PHP's default behavior. ; Please see the actual settings later in the document for more details as to why ; we recommend these changes in PHP's behavior. ; allow_call_time_pass_reference ; Default Value: On ; Development Value: Off ; Production Value: Off ; display_errors ; Default Value: On ; Development Value: On ; Production Value: Off ; display_startup_errors ; Default Value: Off ; Development Value: On ; Production Value: Off ; error_reporting ; Default Value: E_ALL & ~E_NOTICE ; Development Value: E_ALL | E_STRICT ; Production Value: E_ALL & ~E_DEPRECATED ; html_errors ; Default Value: On ; Development Value: On ; Production value: Off ; log_errors ; Default Value: Off ; Development Value: On ; Production Value: On ; magic_quotes_gpc ; Default Value: On ; Development Value: Off ; Production Value: Off ; max_input_time ; Default Value: -1 (Unlimited) ; Development Value: 60 (60 seconds) ; Production Value: 60 (60 seconds) ; output_buffering ; Default Value: Off ; Development Value: 4096 ; Production Value: 4096 ; register_argc_argv ; Default Value: On ; Development Value: Off ; Production Value: Off ; register_long_arrays ; Default Value: On ; Development Value: Off ; Production Value: Off ; request_order ; Default Value: None ; Development Value: "GP" ; Production Value: "GP" ; session.bug_compat_42 ; Default Value: On ; Development Value: On ; Production Value: Off ; session.bug_compat_warn ; Default Value: On ; Development Value: On ; Production Value: Off ; session.gc_divisor ; Default Value: 100 ; Development Value: 1000 ; Production Value: 1000 ; session.hash_bits_per_character ; Default Value: 4 ; Development Value: 5 ; Production Value: 5 ; short_open_tag ; Default Value: On ; Development Value: Off ; Production Value: Off ; track_errors ; Default Value: Off ; Development Value: On ; Production Value: Off ; url_rewriter.tags ; Default Value: "a=href,area=href,frame=src,form=,fieldset=" ; Development Value: "a=href,area=href,frame=src,input=src,form=fakeentry" ; Production Value: "a=href,area=href,frame=src,input=src,form=fakeentry" ; variables_order ; Default Value: "EGPCS" ; Development Value: "GPCS" ; Production Value: "GPCS" ;;;;;;;;;;;;;;;;;;;; ; php.ini Options ; ;;;;;;;;;;;;;;;;;;;; ; Name for user-defined php.ini (.htaccess) files. Default is ".user.ini" ;user_ini.filename = ".user.ini" ; To disable this feature set this option to empty value ;user_ini.filename = ; TTL for user-defined php.ini files (time-to-live) in seconds. 
Default is 300 seconds (5 minutes) ;user_ini.cache_ttl = 300 ;;;;;;;;;;;;;;;;;;;; ; Language Options ; ;;;;;;;;;;;;;;;;;;;; ; Enable the PHP scripting language engine under Apache. ; http://php.net/engine engine = On ; This directive determines whether or not PHP will recognize code between ; <? and ?> tags as PHP source which should be processed as such. It's been ; recommended for several years that you not use the short tag "short cut" and ; instead to use the full <?php and ?> tag combination. With the wide spread use ; of XML and use of these tags by other languages, the server can become easily ; confused and end up parsing the wrong code in the wrong context. But because ; this short cut has been a feature for such a long time, it's currently still ; supported for backwards compatibility, but we recommend you don't use them. ; Default Value: On ; Development Value: Off ; Production Value: Off ; http://php.net/short-open-tag short_open_tag = Off ; Allow ASP-style <% %> tags. ; http://php.net/asp-tags asp_tags = Off ; The number of significant digits displayed in floating point numbers. ; http://php.net/precision precision = 14 ; Enforce year 2000 compliance (will cause problems with non-compliant browsers) ; http://php.net/y2k-compliance y2k_compliance = On ; Output buffering is a mechanism for controlling how much output data ; (excluding headers and cookies) PHP should keep internally before pushing that ; data to the client. If your application's output exceeds this setting, PHP ; will send that data in chunks of roughly the size you specify. ; Turning on this setting and managing its maximum buffer size can yield some ; interesting side-effects depending on your application and web server. ; You may be able to send headers and cookies after you've already sent output ; through print or echo. You also may see performance benefits if your server is ; emitting less packets due to buffered output versus PHP streaming the output ; as it gets it. On production servers, 4096 bytes is a good setting for performance ; reasons. ; Note: Output buffering can also be controlled via Output Buffering Control ; functions. ; Possible Values: ; On = Enabled and buffer is unlimited. (Use with caution) ; Off = Disabled ; Integer = Enables the buffer and sets its maximum size in bytes. ; Note: This directive is hardcoded to Off for the CLI SAPI ; Default Value: Off ; Development Value: 4096 ; Production Value: 4096 ; http://php.net/output-buffering output_buffering = Off ; You can redirect all of the output of your scripts to a function. For ; example, if you set output_handler to "mb_output_handler", character ; encoding will be transparently converted to the specified encoding. ; Setting any output handler automatically turns on output buffering. ; Note: People who wrote portable scripts should not depend on this ini ; directive. Instead, explicitly set the output handler using ob_start(). ; Using this ini directive may cause problems unless you know what script ; is doing. ; Note: You cannot use both "mb_output_handler" with "ob_iconv_handler" ; and you cannot use both "ob_gzhandler" and "zlib.output_compression". ; Note: output_handler must be empty if this is set 'On' !!!! ; Instead you must use zlib.output_handler. 
; http://php.net/output-handler ;output_handler = ; Transparent output compression using the zlib library ; Valid values for this option are 'off', 'on', or a specific buffer size ; to be used for compression (default is 4KB) ; Note: Resulting chunk size may vary due to nature of compression. PHP ; outputs chunks that are few hundreds bytes each as a result of ; compression. If you prefer a larger chunk size for better ; performance, enable output_buffering in addition. ; Note: You need to use zlib.output_handler instead of the standard ; output_handler, or otherwise the output will be corrupted. ; http://php.net/zlib.output-compression zlib.output_compression = Off ; http://php.net/zlib.output-compression-level ;zlib.output_compression_level = -1 ; You cannot specify additional output handlers if zlib.output_compression ; is activated here. This setting does the same as output_handler but in ; a different order. ; http://php.net/zlib.output-handler ;zlib.output_handler = ; Implicit flush tells PHP to tell the output layer to flush itself ; automatically after every output block. This is equivalent to calling the ; PHP function flush() after each and every call to print() or echo() and each ; and every HTML block. Turning this option on has serious performance ; implications and is generally recommended for debugging purposes only. ; http://php.net/implicit-flush ; Note: This directive is hardcoded to On for the CLI SAPI implicit_flush = Off ; The unserialize callback function will be called (with the undefined class' ; name as parameter), if the unserializer finds an undefined class ; which should be instantiated. A warning appears if the specified function is ; not defined, or if the function doesn't include/implement the missing class. ; So only set this entry, if you really want to implement such a ; callback-function. unserialize_callback_func = ; When floats & doubles are serialized store serialize_precision significant ; digits after the floating point. The default value ensures that when floats ; are decoded with unserialize, the data will remain the same. serialize_precision = 100 ; This directive allows you to enable and disable warnings which PHP will issue ; if you pass a value by reference at function call time. Passing values by ; reference at function call time is a deprecated feature which will be removed ; from PHP at some point in the near future. The acceptable method for passing a ; value by reference to a function is by declaring the reference in the functions ; definition, not at call time. This directive does not disable this feature, it ; only determines whether PHP will warn you about it or not. These warnings ; should enabled in development environments only. ; Default Value: On (Suppress warnings) ; Development Value: Off (Issue warnings) ; Production Value: Off (Issue warnings) ; http://php.net/allow-call-time-pass-reference allow_call_time_pass_reference = On ; Safe Mode ; http://php.net/safe-mode safe_mode = Off ; By default, Safe Mode does a UID compare check when ; opening files. If you want to relax this to a GID compare, ; then turn on safe_mode_gid. ; http://php.net/safe-mode-gid safe_mode_gid = Off ; When safe_mode is on, UID/GID checks are bypassed when ; including files from this directory and its subdirectories. 
; (directory must also be in include_path or full path must ; be used when including) ; http://php.net/safe-mode-include-dir safe_mode_include_dir = ; When safe_mode is on, only executables located in the safe_mode_exec_dir ; will be allowed to be executed via the exec family of functions. ; http://php.net/safe-mode-exec-dir safe_mode_exec_dir = ; Setting certain environment variables may be a potential security breach. ; This directive contains a comma-delimited list of prefixes. In Safe Mode, ; the user may only alter environment variables whose names begin with the ; prefixes supplied here. By default, users will only be able to set ; environment variables that begin with PHP_ (e.g. PHP_FOO=BAR). ; Note: If this directive is empty, PHP will let the user modify ANY ; environment variable! ; http://php.net/safe-mode-allowed-env-vars safe_mode_allowed_env_vars = PHP_ ; This directive contains a comma-delimited list of environment variables that ; the end user won't be able to change using putenv(). These variables will be ; protected even if safe_mode_allowed_env_vars is set to allow to change them. ; http://php.net/safe-mode-protected-env-vars safe_mode_protected_env_vars = LD_LIBRARY_PATH ; open_basedir, if set, limits all file operations to the defined directory ; and below. This directive makes most sense if used in a per-directory ; or per-virtualhost web server configuration file. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/open-basedir ;open_basedir = ; This directive allows you to disable certain functions for security reasons. ; It receives a comma-delimited list of function names. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/disable-functions disable_functions = ; This directive allows you to disable certain classes for security reasons. ; It receives a comma-delimited list of class names. This directive is ; *NOT* affected by whether Safe Mode is turned On or Off. ; http://php.net/disable-classes disable_classes = ; Colors for Syntax Highlighting mode. Anything that's acceptable in ; <span style="color: ???????"> would work. ; http://php.net/syntax-highlighting ;highlight.string = #DD0000 ;highlight.comment = #FF9900 ;highlight.keyword = #007700 ;highlight.bg = #FFFFFF ;highlight.default = #0000BB ;highlight.html = #000000 ; If enabled, the request will be allowed to complete even if the user aborts ; the request. Consider enabling it if executing long requests, which may end up ; being interrupted by the user or a browser timing out. PHP's default behavior ; is to disable this feature. ; http://php.net/ignore-user-abort ;ignore_user_abort = On ; Determines the size of the realpath cache to be used by PHP. This value should ; be increased on systems where PHP opens many files to reflect the quantity of ; the file operations performed. ; http://php.net/realpath-cache-size ;realpath_cache_size = 16k ; Duration of time, in seconds for which to cache realpath information for a given ; file or directory. For systems with rarely changing files, consider increasing this ; value. ; http://php.net/realpath-cache-ttl ;realpath_cache_ttl = 120 ;;;;;;;;;;;;;;;;; ; Miscellaneous ; ;;;;;;;;;;;;;;;;; ; Decides whether PHP may expose the fact that it is installed on the server ; (e.g. by adding its signature to the Web server header). It is no security ; threat in any way, but it makes it possible to determine whether you use PHP ; on your server or not. 
; http://php.net/expose-php expose_php = On ;;;;;;;;;;;;;;;;;;; ; Resource Limits ; ;;;;;;;;;;;;;;;;;;; ; Maximum execution time of each script, in seconds ; http://php.net/max-execution-time ; Note: This directive is hardcoded to 0 for the CLI SAPI max_execution_time = 60 ; Maximum amount of time each script may spend parsing request data. It's a good ; idea to limit this time on productions servers in order to eliminate unexpectedly ; long running scripts. ; Note: This directive is hardcoded to -1 for the CLI SAPI ; Default Value: -1 (Unlimited) ; Development Value: 60 (60 seconds) ; Production Value: 60 (60 seconds) ; http://php.net/max-input-time max_input_time = 60 ; Maximum input variable nesting level ; http://php.net/max-input-nesting-level ;max_input_nesting_level = 64 ; Maximum amount of memory a script may consume (128MB) ; http://php.net/memory-limit memory_limit = 128M ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; Error handling and logging ; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; ; This directive informs PHP of which errors, warnings and notices you would like ; it to take action for. The recommended way of setting values for this ; directive is through the use of the error level constants and bitwise ; operators. The error level constants are below here for convenience as well as ; some common settings and their meanings. ; By default, PHP is set to take action on all errors, notices and warnings EXCEPT ; those related to E_NOTICE and E_STRICT, which together cover best practices and ; recommended coding standards in PHP. For performance reasons, this is the ; recommend error reporting setting. Your production server shouldn't be wasting ; resources complaining about best practices and coding standards. That's what ; development servers and development settings are for. ; Note: The php.ini-development file has this setting as E_ALL | E_STRICT. This ; means it pretty much reports everything which is exactly what you want during ; development and early testing. ; ; Error Level Constants: ; E_ALL - All errors and warnings (includes E_STRICT as of PHP 6.0.0) ; E_ERROR - fatal run-time errors ; E_RECOVERABLE_ERROR - almost fatal run-time errors ; E_WARNING - run-time warnings (non-fatal errors) ; E_PARSE - compile-time parse errors ; E_NOTICE - run-time notices (these are warnings which often result ; from a bug in your code, but it's possible that it was ; intentional (e.g., using an uninitialized variable and ; relying on the fact it's automatically initialized to an ; empty string) ; E_STRICT - run-time notices, enable to have PHP suggest changes ; to your code which will ensure the best interoperability ; and forward compatibility of your code ; E_CORE_ERROR - fatal errors that occur during PHP's initial startup ; E_CORE_WARNING - warnings (non-fatal errors) that occur during PHP's ; initial startup ; E_COMPILE_ERROR - fatal compile-time errors ; E_COMPILE_WARNING - compile-time warnings (non-fatal errors) ; E_USER_ERROR - user-generated error message ; E_USER_WARNING - user-generated warning message ; E_USER_NOTICE - user-generated notice message ; E_DEPRECATED - warn about code that will not work in future versions ; of PHP ; E_USER_DEPRECATED - user-generated deprecation warnings ; ; Common Values: ; E_ALL & ~E_NOTICE (Show all errors, except for notices and coding standards warnings.) 
; E_ALL & ~E_NOTICE | E_STRICT (Show all errors, except for notices) ; E_COMPILE_ERROR|E_RECOVERABLE_ERROR|E_ERROR|E_CORE_ERROR (Show only errors) ; E_ALL | E_STRICT (Show all errors, warnings and notices including coding standards.) ; Default Value: E_ALL & ~E_NOTICE ; Development Value: E_ALL | E_STRICT ; Production Value: E_ALL & ~E_DEPRECATED ; http://php.net/error-reporting error_reporting = E_ALL & ~E_NOTICE & ~E_DEPRECATED ; This directive controls whether or not and where PHP will output errors, ; notices and warnings too. Error output is very useful during development, but ; it could be very dangerous in production environments. Depending on the code ; which is triggering the error, sensitive information could potentially leak ; out of your application such as database usernames and passwords or worse. ; It's recommended that errors be logged on production servers rather than ; having the errors sent to STDOUT. ; Possible Values: ; Off = Do not display any errors ; stderr = Display errors to STDERR (affects only CGI/CLI binaries!) ; On or stdout = Display errors to STDOUT ; Default Value: On ; Development Value: On ; Production Value: Off ; http://php.net/display-errors display_errors = On ; The display of errors which occur during PHP's startup sequence are handled ; separately from display_errors. PHP's default behavior is to suppress those ; errors from clients. Turning the display of startup errors on can be useful in ; debugging configuration problems. But, it's strongly recommended that you ; leave this setting off on production servers. ; Default Value: Off ; Development Value: On ; Production Value: Off ; http://php.net/display-startup-errors display_startup_errors = On ; Besides displaying errors, PHP can also log errors to locations such as a ; server-specific log, STDERR, or a location specified by the error_log ; directive found below. While errors should not be displayed on productions ; servers they should still be monitored and logging is a great way to do that. ; Default Value: Off ; Development Value: On ; Production Value: On ; http://php.net/log-errors log_errors = Off ; Set maximum length of log_errors. In error_log information about the source is ; added. The default is 1024 and 0 allows to not apply any maximum length at all. ; http://php.net/log-errors-max-len log_errors_max_len = 1024 ; Do not log repeated messages. Repeated errors must occur in same file on same ; line unless ignore_repeated_source is set true. ; http://php.net/ignore-repeated-errors ignore_repeated_errors = Off ; Ignore source of message when ignoring repeated messages. When this setting ; is On you will not log errors with repeated messages from different files or ; source lines. ; http://php.net/ignore-repeated-source ignore_repeated_source = Off ; If this parameter is set to Off, then memory leaks will not be shown (on ; stdout or in the log). This has only effect in a debug compile, and if ; error reporting includes E_WARNING in the allowed list ; http://php.net/report-memleaks report_memleaks = On ; This setting is on by default. ;report_zend_debug = 0 ; Store the last error/warning message in $php_errormsg (boolean). Setting this value ; to On can assist in debugging and is appropriate for development servers. It should ; however be disabled on production servers. 
; Default Value: Off ; Development Value: On ; Production Value: Off ; http://php.net/track-errors track_errors = Off ; Turn off normal error reporting and emit XML-RPC error XML ; http://php.net/xmlrpc-errors ;xmlrpc_errors = 0 ; An XML-RPC faultCode ;xmlrpc_error_number = 0 ; When PHP displays or logs an error, it has the capability of inserting html ; links to documentation related to that error. This directive controls whether ; those HTML links appear in error messages or not. For performance and security ; reasons, it's recommended you disable this on production servers. ; Note: This directive is hardcoded to Off for the CLI SAPI ; Default Value: On ; Development Value: On ; Production value: Off ; http://php.net/html-errors html_errors = On ; If html_errors is set On PHP produces clickable error messages that direct ; to a page describing the error or function causing the error in detail. ; You can download a copy of the PHP manual from http://php.net/docs ; and change docref_root to the base URL of your local copy including the ; leading '/'. You must also specify the file extension being used including ; the dot. PHP's default behavior is to leave these settings empty. ; Note: Never use this feature for production boxes. ; http://php.net/docref-root ; Examples ;docref_root = "/phpmanual/" ; http://php.net/docref-ext ;docref_ext = .html ; String to output before an error message. PHP's default behavior is to leave ; this setting blank. ; http://php.net/error-prepend-string ; Example: ;error_prepend_string = "<font color=#ff0000>" ; String to output after an error message. PHP's default behavior is to leave ; this setting blank. ; http://php.net/error-append-string ; Example: ;error_append_string = "</font>" ; Log errors to specified file. PHP's default behavior is to leave this value ; empty. ; http://php.net/error-log ; Example: ;error_log = php_errors.log ; Log errors to syslog (Event Log on NT, not valid in Windows 95). ;error_log = syslog ;error_log = "C:\xampp\apache\logs\php_error.log" ;;;;;;;;;;;;;;;;; ; Data Handling ; ;;;;;;;;;;;;;;;;; ; Note - track_vars is ALWAYS enabled ; The separator used in PHP generated URLs to separate arguments. ; PHP's default setting is "&". ; http://php.net/arg-separator.output ; Example: arg_separator.output = "&amp;" ; List of separator(s) used by PHP to parse input URLs into variables. ; PHP's default setting is "&
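
    For reference, a minimal php.ini block that usually makes xdebug appear under "php -m" looks like the sketch below. The DLL path is an assumption for a default XAMPP-for-Windows layout, and note that PHP only recognizes the spelling zend_extension (or zend_extension_ts for older thread-safe builds), not zend_extention:

        [XDebug]
        ; the path below is an assumption - point it at the real php_xdebug.dll
        zend_extension = "C:\xampp\php\ext\php_xdebug.dll"
        xdebug.remote_enable = 1
        xdebug.remote_host = "localhost"
        xdebug.remote_port = 9000

    It is also worth confirming that the file being edited is the one phpinfo() reports as "Loaded Configuration File", since XAMPP ships several php.ini candidates.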

    Read the article

  • Does boost::asio make excessive small heap allocations, or am I wrong?

    - by Poni
    #include <cstdlib>
    #include <cstring>   // for strstr() - missing in the original
    #include <iostream>
    #include <boost/bind.hpp>
    #include <boost/asio.hpp>

    using boost::asio::ip::tcp;

    class session
    {
    public:
        session(boost::asio::io_service& io_service)
            : socket_(io_service)
        {
        }

        tcp::socket& socket()
        {
            return socket_;
        }

        void start()
        {
            socket_.async_read_some(boost::asio::buffer(data_, max_length - 1),
                boost::bind(&session::handle_read, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
        }

        void handle_read(const boost::system::error_code& error, size_t bytes_transferred)
        {
            if (!error)
            {
                data_[bytes_transferred] = '\0';
                if (NULL != strstr(data_, "quit"))
                {
                    this->socket().shutdown(boost::asio::ip::tcp::socket::shutdown_both);
                    this->socket().close();
                    // how to make this dispatch "handle_read()" with a "disconnected" flag?
                }
                else
                {
                    boost::asio::async_write(socket_,
                        boost::asio::buffer(data_, bytes_transferred),
                        boost::bind(&session::handle_write, this,
                            boost::asio::placeholders::error));
                    socket_.async_read_some(boost::asio::buffer(data_, max_length - 1),
                        boost::bind(&session::handle_read, this,
                            boost::asio::placeholders::error,
                            boost::asio::placeholders::bytes_transferred));
                }
            }
            else
            {
                delete this;
            }
        }

        void handle_write(const boost::system::error_code& error)
        {
            if (!error)
            {
                //
            }
            else
            {
                delete this;
            }
        }

    private:
        tcp::socket socket_;
        enum { max_length = 1024 };
        char data_[max_length];
    };

    class server
    {
    public:
        server(boost::asio::io_service& io_service, short port)
            : io_service_(io_service),
              acceptor_(io_service, tcp::endpoint(tcp::v4(), port))
        {
            session* new_session = new session(io_service_);
            acceptor_.async_accept(new_session->socket(),
                boost::bind(&server::handle_accept, this, new_session,
                    boost::asio::placeholders::error));
        }

        void handle_accept(session* new_session, const boost::system::error_code& error)
        {
            if (!error)
            {
                new_session->start();
                new_session = new session(io_service_);
                acceptor_.async_accept(new_session->socket(),
                    boost::bind(&server::handle_accept, this, new_session,
                        boost::asio::placeholders::error));
            }
            else
            {
                delete new_session;
            }
        }

    private:
        boost::asio::io_service& io_service_;
        tcp::acceptor acceptor_;
    };

    int main(int argc, char* argv[])
    {
        try
        {
            if (argc != 2)
            {
                std::cerr << "Usage: async_tcp_echo_server <port>\n";
                return 1;
            }
            boost::asio::io_service io_service;
            using namespace std; // For atoi.
            server s(io_service, atoi(argv[1]));
            io_service.run();
        }
        catch (std::exception& e)
        {
            std::cerr << "Exception: " << e.what() << "\n";
        }
        return 0;
    }

    While experimenting with boost::asio I've noticed that inside the calls to async_write()/async_read_some() there is a use of the C++ "new" keyword. Also, when stressing this echo server with a client (one connection) that sends, say, 100,000 pieces of data, the memory usage of this program grows higher and higher. What's going on? Will it allocate memory for every call? Or am I wrong? I'm asking because it doesn't seem right for a server app to allocate anything per operation. Can I handle it, say, with a memory pool? Another side question: see the this->socket().close()? I want it, as the comment next to it says, to dispatch that same function one last time, with a disconnection error. I need that to do some clean-up. How do I do that? Thank you all, gurus (:
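
    On the allocation question: asio does allocate small per-operation bookkeeping objects, but it exposes hooks to intercept those allocations. The sketch below condenses the handler-allocator pattern from the Boost.Asio "allocation" example; it is untested here, and the 1024-byte block size is an arbitrary assumption:

        #include <cstddef>
        #include <boost/aligned_storage.hpp>
        #include <boost/system/error_code.hpp>

        // Recycles one small block per session; falls back to the global
        // heap when the block is in use or too small.
        class handler_allocator
        {
        public:
            handler_allocator() : in_use_(false) {}

            void* allocate(std::size_t size)
            {
                if (!in_use_ && size < sizeof(storage_))
                {
                    in_use_ = true;
                    return &storage_;
                }
                return ::operator new(size);
            }

            void deallocate(void* pointer)
            {
                if (pointer == &storage_)
                    in_use_ = false;
                else
                    ::operator delete(pointer);
            }

        private:
            boost::aligned_storage<1024>::type storage_; // size is an assumption
            bool in_use_;
        };

        // Wrapper whose asio_handler_allocate/deallocate hooks asio finds
        // via argument-dependent lookup.
        template <typename Handler>
        class custom_alloc_handler
        {
        public:
            custom_alloc_handler(handler_allocator& a, Handler h)
                : allocator_(a), handler_(h) {}

            void operator()(const boost::system::error_code& ec)
            { handler_(ec); }

            void operator()(const boost::system::error_code& ec, std::size_t n)
            { handler_(ec, n); }

            friend void* asio_handler_allocate(std::size_t size,
                custom_alloc_handler<Handler>* this_handler)
            {
                return this_handler->allocator_.allocate(size);
            }

            friend void asio_handler_deallocate(void* pointer, std::size_t /*size*/,
                custom_alloc_handler<Handler>* this_handler)
            {
                this_handler->allocator_.deallocate(pointer);
            }

        private:
            handler_allocator& allocator_;
            Handler handler_;
        };

        template <typename Handler>
        custom_alloc_handler<Handler> make_custom_alloc_handler(handler_allocator& a, Handler h)
        {
            return custom_alloc_handler<Handler>(a, h);
        }

    Each session would own a handler_allocator member and wrap its handlers, e.g. socket_.async_read_some(boost::asio::buffer(data_, max_length - 1), make_custom_alloc_handler(allocator_, boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred))). For the side question, a commonly used approach is to keep a read outstanding and then close() or cancel() the socket: the pending async_read_some completes with boost::asio::error::operation_aborted, which delivers handle_read() one last time with an error code that can serve as the "disconnected" flag.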

    Read the article

  • JavaFX data bind not working with file read

    - by rjha94
    Hi, I am using a very simple JavaFX client to upload files. I read the file in chunks of 1 MB (using a ByteBuffer) and upload each chunk with a multipart POST to a PHP script. I want to update the client's progress bar after each chunk is uploaded. The calculations for upload progress look correct, but the progress bar is never updated. I am using the bind keyword. I am no JavaFX expert and I am hoping someone can point out my mistake. I am writing this client to fix the issues posted here (http://stackoverflow.com/questions/2447837/upload-1gb-files-using-chunking-in-php).

    /*
     * Main.fx
     *
     * Created on Mar 16, 2010, 1:58:32 PM
     */
    package webgloo;

    import javafx.stage.Stage;
    import javafx.scene.Scene;
    import javafx.scene.layout.VBox;
    import javafx.geometry.VPos;
    import javafx.scene.control.Button;
    import javafx.scene.control.Label;
    import javafx.scene.layout.HBox;
    import javafx.scene.layout.LayoutInfo;
    import javafx.scene.text.Font;
    import javafx.scene.control.ProgressBar;
    import java.io.FileInputStream;

    /**
     * @author rajeev jha
     */
    var totalBytes:Float = 1;
    var bytesWritten:Float = 0;
    var progressUpload:Float;
    var uploadURI = "http://www.test1.com/test/receiver.php";
    var postMax = 1024000;

    function uploadFile(inputFile: java.io.File) {
        totalBytes = inputFile.length();
        bytesWritten = 1;
        println("To-Upload - {totalBytes}");
        var is = new FileInputStream(inputFile);
        var fc = is.getChannel();
        // 1 MB byte buffer
        var chunkCount = 0;
        var bb = java.nio.ByteBuffer.allocate(postMax);
        while (fc.read(bb) >= 0) {
            println("loop:start");
            bb.flip();
            var limit = bb.limit();
            var bytes = GigaFileUploader.getBufferBytes(bb.array(), limit);
            var content = GigaFileUploader.createPostContent(inputFile.getName(), bytes);
            GigaFileUploader.upload(uploadURI, content);
            bytesWritten = bytesWritten + limit;
            progressUpload = 1.0 * bytesWritten / totalBytes;
            println("Progress is - {progressUpload}");
            chunkCount++;
            bb.clear();
            println("loop:end");
        }
    }

    var label = Label {
        font: Font { size: 12 }
        text: bind "Uploaded - {bytesWritten * 100 / (totalBytes)}%"
        layoutInfo: LayoutInfo {
            vpos: VPos.CENTER
            maxWidth: 120 minWidth: 120 width: 120 height: 30
        }
    }

    def jFileChooser = new javax.swing.JFileChooser();
    jFileChooser.setApproveButtonText("Upload");

    var button = Button {
        text: "Upload"
        layoutInfo: LayoutInfo { width: 100 height: 30 }
        action: function () {
            var outputFile = jFileChooser.showOpenDialog(null);
            if (outputFile == javax.swing.JFileChooser.APPROVE_OPTION) {
                uploadFile(jFileChooser.getSelectedFile());
            }
        }
    }

    var hBox = HBox { spacing: 10 content: [label, button] }

    var progressBar = ProgressBar {
        progress: bind progressUpload
        layoutInfo: LayoutInfo { width: 240 height: 30 }
    }

    var vBox = VBox {
        spacing: 10
        content: [hBox, progressBar]
        layoutX: 10
        layoutY: 10
    }

    Stage {
        title: "Upload File"
        width: 270
        height: 120
        scene: Scene { content: [vBox] }
        resizable: false
    }
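
    The usual cause of this symptom is that uploadFile() runs on the event dispatch thread inside the button's action, so the bound variables do change but the scene never gets a chance to repaint until the whole loop finishes. Below is a rough, untested sketch of one fix: move the loop to a background thread and marshal the bound-variable updates back with FX.deferAction(). This is from memory of the JavaFX 1.2-era API, so treat the exact interop syntax as an assumption:

        import javafx.lang.FX;

        // Runs the blocking chunk loop off the EDT; in JavaFX Script a Java
        // interface is implemented with 'extends'.
        class UploadRunner extends java.lang.Runnable {
            var inputFile: java.io.File;
            override function run(): Void {
                var is = new java.io.FileInputStream(inputFile);
                var fc = is.getChannel();
                var bb = java.nio.ByteBuffer.allocate(postMax);
                while (fc.read(bb) >= 0) {
                    bb.flip();
                    var limit = bb.limit();
                    // ... same GigaFileUploader.upload(...) calls as before ...
                    var written = limit;
                    FX.deferAction(function(): Void {
                        // bound script variables must be touched on the EDT
                        bytesWritten = bytesWritten + written;
                        progressUpload = bytesWritten / totalBytes;
                    });
                    bb.clear();
                }
                fc.close();
            }
        }

        // In the Button's action, instead of calling uploadFile() directly:
        // totalBytes = jFileChooser.getSelectedFile().length();
        // new java.lang.Thread(UploadRunner { inputFile: jFileChooser.getSelectedFile() }).start();

    With the loop off the EDT, the ProgressBar's bound progress variable updates as each chunk completes instead of jumping to 100% at the end.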

    Read the article
