Search Results

Search found 20433 results on 818 pages for 'marketing always wins'.

Page 149/818

  • Any way to speed up this hierarchical query?

    - by RenderIn
    I've got a serious performance problem with a hierarchical query that I can't seem to fix. I am modeling several organization charts in my database, each representing a virtual organization within our company. For example, we have several temporary committees that are created from time to time, and there may be a Committee Organizer role at the top of this virtual hierarchy, with several people assigned to the Committee Member role beneath the organizer. Some of our virtual organizations have many levels and several branches at each level. I have a single table in which I represent all the role assignments, i.e. a ROLE_ID column and a PARENT_ROLE_ID column which is a foreign key to the ROLE_ID column. For each assignment we also store, as a column, the location in the company where this person has the assignment. For example, the Committee Organizer would have a company-level/CEO assignment, while the committee members would have department-level assignments such as ACCOUNTING, MARKETING, etc. So to model the organizer/member relationship for two individuals we would have:
        ROLE_ID = 4, PARENT_ROLE_ID = NULL, EMPLOYEE_NUMBER = 213423, COMPANY_LOCATION = CEO
        ROLE_ID = 5, PARENT_ROLE_ID = 4, EMPLOYEE_NUMBER = 838221, COMPANY_LOCATION = ACCOUNTING
    Here's where things get tricky. I have an application that every person in the organization can log in to. When they log in they should be able to view all the virtual organizations in our company, e.g. the committee members should be able to see the committee organizer and vice versa. However, only the committee organizer should be able to edit the committee members. The difficulty is in determining whether an individual (who can have multiple role assignments) has edit access for each other assignment. While this seems simple in the example, consider a virtual organization in which we have a President at the top, 5 departments directly beneath him, and 2 subdepartments below each department. We only want people in the Accounting department to be able to edit individuals in the subdepartments belonging to the Accounting department. They should not have edit access to anybody in the Marketing department or its subdepartments. To determine edit access when a user views a virtual organization in our company, I run a query that executes two inline views: A) hierarchically query all assignments in this virtual organization, using SYS_CONNECT_BY_PATH to store the entire path to each user/role/company_location, and B) hierarchically retrieve all the assignments the logged-in individual has, using SYS_CONNECT_BY_PATH to store the entire path to each of these assignments. The result of the query is all the records from A) plus a boolean, determined by joining with B), which flags whether the logged-in user has edit access for each record. Indexes don't seem to be helping... it simply appears that there is too much processing going on to separate all the records and then determine edit access. One issue is that I can't store the SYS_CONNECT_BY_PATH and index it... determining whether an individual record has edit access consists of comparing whether: test_record_sys_path LIKE individual_record_sys_path || '%' Is a materialized view the answer?
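    For context, here is a minimal sketch (not the poster's actual SQL) of the shape of such a query, assuming a hypothetical ASSIGNMENTS table with the columns described above and a hypothetical :logged_in_emp bind variable for the current user:

        -- All assignments with their role path, plus a flag showing whether any of
        -- the logged-in user's own assignment paths is a prefix of that path
        -- (i.e. the user sits at or above the record in the tree).
        WITH org AS (
          SELECT role_id, employee_number, company_location,
                 SYS_CONNECT_BY_PATH(role_id, '/') AS role_path
            FROM assignments
           START WITH parent_role_id IS NULL
         CONNECT BY PRIOR role_id = parent_role_id
        ),
        mine AS (
          SELECT SYS_CONNECT_BY_PATH(role_id, '/') AS role_path
            FROM assignments
           WHERE employee_number = :logged_in_emp   -- applied after the hierarchy is built
           START WITH parent_role_id IS NULL
         CONNECT BY PRIOR role_id = parent_role_id
        )
        SELECT o.role_id, o.employee_number, o.company_location,
               CASE WHEN EXISTS (SELECT 1 FROM mine m
                                  WHERE o.role_path LIKE m.role_path || '%')
                    THEN 1 ELSE 0 END AS can_edit
          FROM org o;

    The expensive part is exactly the LIKE prefix comparison the poster describes, which a normal index cannot serve; a materialized view that precomputes the paths and is refreshed when assignments change is one common way to move that cost out of the interactive query.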

    Read the article

  • Proving the ROI of a technology?

    - by leeand00
    How does one prove the ROI of a technology to their manager? The closest thing I have found to a document on how to do this is: http://www.agilejournal.com/pdf/Finding-ROI-in-Build-Automation.pdf There are formulas in this document, but I can't really tell if they are just a lot of marketing or if they are accurate formulas for calculating ROI. I'm not really trying to calculate the ROI of the build tool in the above paper; I was just trying to calculate the ROI of a simple build tool like Ant.
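    For a rough sense of the arithmetic (hypothetical numbers, not taken from the linked paper): ROI is usually computed as (gain − cost) / cost. If setting up an Ant-based automated build takes about 8 hours and then saves 10 minutes per build over roughly 200 builds a year, the yearly gain is about 33 hours, so ROI ≈ (33 − 8) / 8 ≈ 3, i.e. a bit over 300% in the first year.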

    Read the article

  • Send email to the planet Jupiter

    - by Chacha102
    My boss has a really strange request. He wants to start marketing to the residents of the planet Jupiter. However, I'm not sure which IP addresses I should send the email to. What are some existing tools that allow me to send emails to other planets?

    Read the article

  • What's your experience with Flash drives?

    - by Jon Ericson
    EMC is marketing Solid State Flash Drives and my project is thinking about moving that direction in the future. Does anyone have any experience with replacing traditional disk storage with flash drives? Besides price, have you experienced any downsides to the technology?

    Read the article

  • Square Peg Web: Gets you the traffic to where it matters most: Your Website!

    - by demetriusalwyn
    Have you decided to start your business online, or is your business not reaching its targeted audience? Come to Square Peg Web, where you will find what you need to make your business reach new heights. The team at Square Peg Web are professionals who understand what you want and make sure you get it right. Our confidence stems from the thousands of satisfied clients who keep referring friends and business associates to us, and we do not let our clients down. Many companies promise the sky, but how far does their work live up to the promises? We do not know about the others; however, we are sure that we strive to put together all our ideas and thoughts to make your website rank among the top. Web hosting is something that needs a personal touch; Square Peg Web customizes everything to suit your requirements so that you do not have to look further. With Square Peg Web you have a host of features to make your business go viral. Some of the product details offered with Square Peg Web are unlimited product options/variants/properties, giving you an option on price modifiers. You get unlimited customized input fields for your products, and you can also let customers define the prices. Square Peg Web provides you the option of using multiple product images with zoom features, and one can also list a particular product in several categories. There are other aspects which make Square Peg Web the best choice for your website needs; every sale of yours is important to you and to us. We make sure that each sale is tracked by product, along with the list of bestsellers that appeal to the audience. Other comprehensive statistics from Square Peg Web include searchable order data, an interface for shipments and order fulfillment, export of sales & customer data for use in a spreadsheet, and the ability to export orders to QuickBooks format. With Square Peg Web, the Admin Panel is a lot simpler. Administrative access is completely password protected and any changes made are applied in real time. You can have absolute control over the cart from anywhere around the world using your web browser, and the topping on the cake is the unlimited number of admin accounts that can be created for you. Square Peg Web offers you a world of experience, with options ranging from marketing websites to e-commerce and from customized applications to community-oriented sites. Some of the projects which appear in the portfolio of Square Peg Web are online marketing web sites, e-commerce web sites, customized web applications, blog design and programming, video sharing and downloadable web sites, online advertisements, Flash animation, customer and product support web sites, web site re-design and planning, and complete information architecture.

    Read the article

  • Scalability comparison between different DBMSs

    - by Björn Lindfors
    By what factor does the performance (read queries/sec) increase when a machine is added to a cluster of machines running either a Bigtable-like database or MySQL? Google's research paper on Bigtable suggests that "near-linear" scaling can be achieved with Bigtable. This page here, featuring MySQL's marketing jargon, suggests that MySQL is capable of scaling linearly. Where does the truth lie?

    Read the article

  • Is the WebAii test automation framework dead?

    - by RyanW
    Is the WebAii framework still available and free? Am I just missing it? After putting it off for too long, I've finally started automated UI testing on my current project. I had WebAii from ArtOfTest on my list to look at, but it looks like it's been killed off by Telerik, and now they're asking $1500 for their new WebUI Test Studio. I can't find anything definitive on Telerik's site; there's too much marketing. But it seems to be pretty clear.

    Read the article

  • What does .NET stand for? Is it an acronym?

    - by shaunmartin
    I've seen pronunciation guides and all sorts of definitions of .NET as a framework, but no definition or explanation of the actual name of the framework. Wikipedia doesn't seem to know. This question didn't cover it. Anybody know? Is it pure marketing-generated nonsense, or does it mean something?

    Read the article

  • Inserting checkbox values

    - by rabeea
    Hey, I have a registration form that has checkboxes along with other fields. I can't insert the selected checkbox values into the database. I have made one field in the database for storing all the checked values. This is the code for the checkbox part of the form: <pre><input type="checkbox" name="expertise[]" value="Websites, IT and Software"> Websites, IT and Software <input type="checkbox" name="expertise[]" value="Writing and Content"> Writing and Content </pre> <pre><input type="checkbox" name="expertise[]" value="Design and Media"> Design and Media <input type="checkbox" name="expertise[]" value="Data entry and Admin"> Data entry and Admin </pre> <pre><input type="checkbox" name="expertise[]" value="Engineering and Skills"> Engineering and Science <input type="checkbox" name="expertise[]" value="Seles and Marketing"> Sales and Marketing </pre> <pre><input type="checkbox" name="expertise[]" value="Business and Accounting"> Business and Accounting <input type="checkbox" name="expertise[]" value="Others"> Others </pre> And this is the corresponding PHP code for inserting the data: $checkusername=mysql_query("SELECT * FROM freelancer WHERE fusername='{$_POST['username']}'"); if (mysql_num_rows($checkusername)==1) { echo "username already exist"; } else { $query = "insert into freelancer(ffname,flname,fgender,femail,fusername,fpwd,fphone,fadd,facc,facc_name,fbank_details,fcity,fcountry,fexpertise,fprofile,fskills,fhourly_rate,fresume) values ('".$_POST['first_name']."','".$_POST['last_name']."','".$_POST['gender']."','".$_POST['email']."','".$_POST['username']."','".$_POST['password']."','".$_POST['phone']."','".$_POST['address']."','".$_POST['acc_num']."','".$_POST['acc_name']."','".$_POST['bank']."','".$_POST['city']."','".$_POST['country']."','".implode(',',$_POST['expertise'])."','".$_POST['profile']."','".$_POST['skills']."','".$_POST['rate']."','".$_POST['resume']."')"; $result = ($query) or die (mysql_error()); This code inserts data for all the other fields, but the checkbox value field remains empty?
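    One thing worth noting about the snippet as quoted: the line $result = ($query) or die (mysql_error()); only assigns the SQL string and never sends it to MySQL. If the real code does call mysql_query() somewhere, the next thing to check is whether $_POST['expertise'] actually arrives. A minimal sketch, using the same field names and the old mysql_* functions the question already uses, of building and executing the insert defensively:

        // Sketch only: join and escape the checkbox values, then actually run the query.
        $expertise = isset($_POST['expertise'])
            ? implode(',', array_map('mysql_real_escape_string', $_POST['expertise']))
            : '';

        $query = "INSERT INTO freelancer (fexpertise /* , ...other columns... */)
                  VALUES ('$expertise' /* , ...other values... */)";

        $result = mysql_query($query) or die(mysql_error());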

    Read the article

  • book on domain knowledge

    - by Newbie
    Is there any book that talks about domains, i.e. financial, marketing, banking, telecom, etc.? I am not talking about Domain Specific Languages (DSLs), only about domains. Thanks

    Read the article

  • Keep track of file access using IIS and ASP.NET

    - by EduardoMello
    I want to put a unique image (1x1) in an e-mail marketing campaign (not spam) for my client. Just like www.spypig.com, I'd like to use it to know how many people have actually read the e-mail. I guess I'll have to check how many times this unique image has been accessed, but I can't figure out how to do it. So my question is: Is there a way to check how many times an image file was accessed using ASP.NET/C#? Thank you
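    One common way to do this is to serve the pixel from an ASP.NET generic handler rather than as a static file, so every request runs code that can record the hit. The sketch below is illustrative only: the handler name, the "rid" query-string parameter, the pixel path, and the LogHit helper are all assumptions.

        using System.Web;

        public class TrackingPixel : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Identify which e-mail/recipient was opened, e.g. TrackingPixel.ashx?rid=123
                string recipientId = context.Request.QueryString["rid"];
                LogHit(recipientId); // assumed helper that records the open event

                // Serve the 1x1 image and disable caching so repeat opens reach the server.
                context.Response.ContentType = "image/gif";
                context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
                context.Response.WriteFile(context.Server.MapPath("~/images/pixel.gif"));
            }

            private void LogHit(string recipientId)
            {
                // Placeholder: persist the open event, e.g. an INSERT/UPDATE via ADO.NET.
            }

            public bool IsReusable
            {
                get { return true; }
            }
        }

    The e-mail would then reference the handler URL (with the recipient id appended) instead of a plain image file, so each open shows up as a handler request you can count.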

    Read the article

  • iPhone to Android Market

    - by BahaiResearch.com
    (I am coming from an iPhone experience) When we create an iPhone app, Apple gives us a http URL that we can put on web pages that when clicked will open iTunes and give the user a chance to buy on their desktop. As Android has no "iTunes" Windows/Mac application on the desktop, what do I link to on my Web pages/email marketing so that users can go and buy the Android apps we are writing? Ian

    Read the article

  • Call to undefined function phpinclude_once()

    - by Alex Zylman
    I'm making a new php page and I get the error Fatal error: Call to undefined function phpinclude_once() in /home8/nuventio/public_html/marketing/pb2/dashboard.php on line 1 I'm not sure why I'm getting this error, I've done previous pages the same way as this with no problems. This is the line in question: <?php include_once("../utils.php"); ?> After that it just goes into regular HTML code. It works fine without that line.
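    A common cause of this exact message (a guess from the error text alone, not from seeing the deployed file) is a missing space after the opening tag, which makes PHP read the whole thing as a call to a single undefined function named phpinclude_once. It is worth checking that the copy of dashboard.php on the server really contains the space:

        Broken:  <?phpinclude_once("../utils.php"); ?>
        Working: <?php include_once("../utils.php"); ?>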

    Read the article

  • Using jQuery to Insert a New Database Record

    - by Stephen Walther
    The goal of this blog entry is to explore the easiest way of inserting a new record into a database using jQuery and .NET. I’m going to explore two approaches: using Generic Handlers and using a WCF service (In a future blog entry I’ll take a look at OData and WCF Data Services). Create the ASP.NET Project I’ll start by creating a new empty ASP.NET application with Visual Studio 2010. Select the menu option File, New Project and select the ASP.NET Empty Web Application project template. Setup the Database and Data Model I’ll use my standard MoviesDB.mdf movies database. This database contains one table named Movies that looks like this: I’ll use the ADO.NET Entity Framework to represent my database data: Select the menu option Project, Add New Item and select the ADO.NET Entity Data Model project item. Name the data model MoviesDB.edmx and click the Add button. In the Choose Model Contents step, select Generate from database and click the Next button. In the Choose Your Data Connection step, leave all of the defaults and click the Next button. In the Choose Your Data Objects step, select the Movies table and click the Finish button. Unfortunately, Visual Studio 2010 cannot spell movie correctly :) You need to click on Movy and change the name of the class to Movie. In the Properties window, change the Entity Set Name to Movies. Using a Generic Handler In this section, we’ll use jQuery with an ASP.NET generic handler to insert a new record into the database. A generic handler is similar to an ASP.NET page, but it does not have any of the overhead. It consists of one method named ProcessRequest(). Select the menu option Project, Add New Item and select the Generic Handler project item. Name your new generic handler InsertMovie.ashx and click the Add button. Modify your handler so it looks like Listing 1: Listing 1 – InsertMovie.ashx using System.Web; namespace WebApplication1 { /// <summary> /// Inserts a new movie into the database /// </summary> public class InsertMovie : IHttpHandler { private MoviesDBEntities _dataContext = new MoviesDBEntities(); public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/plain"; // Extract form fields var title = context.Request["title"]; var director = context.Request["director"]; // Create movie to insert var movieToInsert = new Movie { Title = title, Director = director }; // Save new movie to DB _dataContext.AddToMovies(movieToInsert); _dataContext.SaveChanges(); // Return success context.Response.Write("success"); } public bool IsReusable { get { return true; } } } } In Listing 1, the ProcessRequest() method is used to retrieve a title and director from form parameters. Next, a new Movie is created with the form values. Finally, the new movie is saved to the database and the string “success” is returned. Using jQuery with the Generic Handler We can call the InsertMovie.ashx generic handler from jQuery by using the standard jQuery post() method. 
The following HTML page illustrates how you can retrieve form field values and post the values to the generic handler: Listing 2 – Default.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Add Movie</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input name="title" /> <br /> <label>Director:</label> <input name="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { $.post("InsertMovie.ashx", $("form").serialize(), insertCallback); }); function insertCallback(result) { if (result == "success") { alert("Movie added!"); } else { alert("Could not add movie!"); } } </script> </body> </html>     When you open the page in Listing 2 in a web browser, you get a simple HTML form: Notice that the page in Listing 2 includes the jQuery library. The jQuery library is included with the following SCRIPT tag: <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> The jQuery library is included on the Microsoft Ajax CDN so you can always easily include the jQuery library in your applications. You can learn more about the CDN at this website: http://www.asp.net/ajaxLibrary/cdn.ashx When you click the Add Movie button, the jQuery post() method is called to post the form data to the InsertMovie.ashx generic handler. Notice that the form values are serialized into a URL encoded string by calling the jQuery serialize() method. The serialize() method uses the name attribute of form fields and not the id attribute. Notes on this Approach This is a very low-level approach to interacting with .NET through jQuery – but it is simple and it works! And, you don’t need to use any JavaScript libraries in addition to the jQuery library to use this approach. The signature for the jQuery post() callback method looks like this: callback(data, textStatus, XmlHttpRequest) The second parameter, textStatus, returns the HTTP status code from the server. I tried returning different status codes from the generic handler with an eye towards implementing server validation by returning a status code such as 400 Bad Request when validation fails (see http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html ). I finally figured out that the callback is not invoked when the textStatus has any value other than “success”. Using a WCF Service As an alternative to posting to a generic handler, you can create a WCF service. You create a new WCF service by selecting the menu option Project, Add New Item and selecting the Ajax-enabled WCF Service project item. Name your WCF service InsertMovie.svc and click the Add button. 
Modify the WCF service so that it looks like Listing 3: Listing 3 – InsertMovie.svc using System.ServiceModel; using System.ServiceModel.Activation; namespace WebApplication1 { [ServiceBehavior(IncludeExceptionDetailInFaults=true)] [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class MovieService { private MoviesDBEntities _dataContext = new MoviesDBEntities(); [OperationContract] public bool Insert(string title, string director) { // Create movie to insert var movieToInsert = new Movie { Title = title, Director = director }; // Save new movie to DB _dataContext.AddToMovies(movieToInsert); _dataContext.SaveChanges(); // Return movie (with primary key) return true; } } }   The WCF service in Listing 3 uses the Entity Framework to insert a record into the Movies database table. The service always returns the value true. Notice that the service in Listing 3 includes the following attribute: [ServiceBehavior(IncludeExceptionDetailInFaults=true)] You need to include this attribute if you want to get detailed error information back to the client. When you are building an application, you should always include this attribute. When you are ready to release your application, you should remove this attribute for security reasons. Using jQuery with the WCF Service Calling a WCF service from jQuery requires a little more work than calling a generic handler from jQuery. Here are some good blog posts on some of the issues with using jQuery with WCF: http://encosia.com/2008/06/05/3-mistakes-to-avoid-when-using-jquery-with-aspnet-ajax/ http://encosia.com/2008/03/27/using-jquery-to-consume-aspnet-json-web-services/ http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx http://www.west-wind.com/Weblog/posts/896411.aspx http://www.west-wind.com/weblog/posts/324917.aspx http://professionalaspnet.com/archive/tags/WCF/default.aspx The primary requirement when calling WCF from jQuery is that the request use JSON: The request must include a content-type:application/json header. Any parameters included with the request must be JSON encoded. Unfortunately, jQuery does not include a method for serializing JSON (Although, oddly, jQuery does include a parseJSON() method for deserializing JSON). Therefore, we need to use an additional library to handle the JSON serialization. The page in Listing 4 illustrates how you can call a WCF service from jQuery. 
Listing 4 – Default2.aspx <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Add Movie</title> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> </head> <body> <form> <label>Title:</label> <input id="title" /> <br /> <label>Director:</label> <input id="director" /> </form> <button id="btnAdd">Add Movie</button> <script type="text/javascript"> $("#btnAdd").click(function () { // Convert the form into an object var data = { title: $("#title").val(), director: $("#director").val() }; // JSONify the data data = JSON.stringify(data); // Post it $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: "MovieService.svc/Insert", data: data, dataType: "json", success: insertCallback }); }); function insertCallback(result) { // unwrap result result = result["d"]; if (result === true) { alert("Movie added!"); } else { alert("Could not add movie!"); } } </script> </body> </html> There are several things to notice about Listing 4. First, notice that the page includes both the jQuery library and Douglas Crockford’s JSON2 library: <script src="Scripts/json2.js" type="text/javascript"></script> You need to include the JSON2 library to serialize the form values into JSON. You can download the JSON2 library from the following location: http://www.json.org/js.html When you click the button to submit the form, the form data is converted into a JavaScript object: // Convert the form into an object var data = { title: $("#title").val(), director: $("#director").val() }; Next, the data is serialized into JSON using the JSON2 library: // JSONify the data var data = JSON.stringify(data); Finally, the form data is posted to the WCF service by calling the jQuery ajax() method: // Post it $.ajax({   type: "POST",   contentType: "application/json; charset=utf-8",   url: "MovieService.svc/Insert",   data: data,   dataType: "json",   success: insertCallback }); You can’t use the standard jQuery post() method because you must set the content-type of the request to be application/json. Otherwise, the WCF service will reject the request for security reasons. For details, see the Scott Guthrie blog post: http://weblogs.asp.net/scottgu/archive/2007/04/04/json-hijacking-and-how-asp-net-ajax-1-0-mitigates-these-attacks.aspx The insertCallback() method is called when the WCF service returns a response. This method looks like this: function insertCallback(result) {   // unwrap result   result = result["d"];   if (result === true) {       alert("Movie added!");   } else {     alert("Could not add movie!");   } } When we called the jQuery ajax() method, we set the dataType to JSON. That causes the jQuery ajax() method to deserialize the response from the WCF service from JSON into a JavaScript object automatically. The following value is passed to the insertCallback method: {"d":true} For security reasons, a WCF service always returns a response with a “d” wrapper. 
The following line of code removes the “d” wrapper: // unwrap result result = result["d"]; To learn more about the “d” wrapper, I recommend that you read the following blog posts: http://encosia.com/2009/02/10/a-breaking-change-between-versions-of-aspnet-ajax/ http://encosia.com/2009/06/29/never-worry-about-asp-net-ajaxs-d-again/ Summary In this blog entry, I explored two methods of inserting a database record using jQuery and .NET. First, we created a generic handler and called the handler from jQuery. This is a very low-level approach. However, it is a simple approach that works. Next, we looked at how you can call a WCF service using jQuery. This approach required a little more work because you need to serialize objects into JSON. We used the JSON2 library to perform the serialization. In the next blog post, I want to explore how you can use jQuery with OData and WCF Data Services.

    Read the article

  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights from both events. CLSF - 17th October XFS update (led by Jeff Liu) XFS keeps up rapid progress with a lot of changes, focused especially on infrastructure/performance improvements as well as new feature development. This is reflected in some sample statistics comparing XFS, Ext4+JBD2, and Btrfs via: # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs|fs/ext4+fs/jbd2|fs/btrfs XFS: 141 files changed, 27598 insertions(+), 19113 deletions(-) Ext4+JBD2: 39 files changed, 10487 insertions(+), 5454 deletions(-) Btrfs: 70 files changed, 19875 insertions(+), 8130 deletions(-) What made up those changes in XFS? Self-describing metadata (CRC32c). This is a new feature and it contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock. Transaction log space reservation improvements. With this change, we can calculate the log space reservation at mount time rather than at runtime to reduce the CPU overhead. User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10; thanks to Dwight Engen for his efforts on this. Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock, i.e. with CRC enabled. CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in a production environment. Readahead of log objects during recovery; this change speeds up log replay significantly. Speculative preallocation inode tracking, clearing and throttling. The main purpose is to deal with inodes holding post-EOF space due to speculative preallocation, and to support improved quota management that frees up a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs on a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2). Bitter arguments ensued from this session, especially around comparisons between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays because: Stable - XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors than in other filesystems over the past year or more; I owe that to the XFS upstream code reviewers, who always perform serious code review as well as testing. Good performance for large and small files - "XFS does not work very well for small files" has been an old story for years. Best choice (maybe) for distributed PB-scale filesystems - e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size. Best choice for large storage (>16TB) - Ext4 does not support a single file larger than around 15.95TB. Scalability - any objection that XFS is best on this point? :) XFS deals with transaction concurrency better than Ext4 - why? The maximum size of the log in XFS is 2038MB, compared to 128MB in Ext4. Misc - Ext4 is widely used and has proved fast and stable under various loads and scenarios; XFS just needs more customers, and Btrfs is still on the road to maturity. 
Ceph Introduction (Led by Li Wang) This a hot topic.  Li gave us a nice introduction about the design as well as their current works. Actually, Ceph client has been included in Linux kernel since 2.6.34 and supported by Openstack since Folsom but it seems that it has not yet been widely deployment in production environment. Their major work is focus on the inline data support to separate the metadata and data storage, reduce the file access time, i.e, a file access need communication twice, fetch the metadata from MDS and then get data from OSD, and also, the small file access is limited by the network latency. The solution is, for the small files they would like to store the data at metadata so that when accessing a small file, the metadata server can push both metadata and data to the client at the same time. In this way, they can reduce the overhead of calculating the data offset and save the communication to OSD. For this feature, they have only run some small scale testing but really saw noticeable improvements. Test environment: Intel 2 CPU 12 Core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSD, 1 MDS, 1 MON. The sequence read performance for 1K size files improved about 50%. I have asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable and can be deployed at production environment for large scale PB level storage, but they can not give a positive answer, looks Ceph even does not spread over Dreamhost (subject to confirmation). From Li, they only deployed Ceph for a small scale storage(32 nodes) although they'd like to try 6000 nodes in the future. Improve Linux swap for Flash storage (led by Shaohua Li) Because of high density, low power and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is using SSD as swap. But Linux swap is designed for slow hard disk storage, so there are a lot of challenges to efficiently use SSD for swap. SWAPOUT swap_map scan swap_map is the in-memory data structure to track swap disk usage, but it is a slow linear scan. It will become a bottleneck while finding many adjacent pages in the use of SSD. Shaohua Li have changed it to a cluster(128K) list, resulting in O(1) algorithm. However, this apporoach needs restrictive cluster alignment and only enabled for SSD. IO pattern In most cases, the swap io is in interleaved pattern because of mutiple reclaimers or a free cluster is shared by all reclaimers. Even though block layer can merge interleaved IO to some extent, but we cannot count on it completely. Hence the per-cpu cluster is added base on the previous change, it can help reclaimer do sequential IO and the block layer will be easier to merge IO. TLB flush: If we're reclaiming one active page, we should first move the page from active lru list to inactive lru list, and then reclaim the page from inactive lru to swap it out. During the process, we need to clear PTE twice: first is 'A'(ACCESS) bit, second is 'P'(PRESENT) bit. Processors need to send lots of ipi which make the TLB flush really expensive. Some works have been done to improve this, including rework smp_call_functiom_many() or remove the first TLB flush in x86, but there still have some arguments here and only parts of works have been pushed to mainline. SWAPIN: Page fault does iodepth=1 sync io, but it's a little waste if only issue a page size's IO. The obvious solution is doing swap readahead. 
But the current in-kernel swap readahead is arbitary(always 8 pages), and it always doesn't perform well for both random and sequential access workload. Shaohua introduced a new flag for madvise(MADV_WILLNEED) to do swap prefetch, so the changes happen in userspace API and leave the in-kernel readahead unchanged(but I think some improvement can also be done here). SWAP discard As we know, discard is important for SSD write throughout, but the current swap discard implementation is synchronous. He changed it to async discard which allow discard and write run in the same time. Meanwhile, the unit of discard is also optimized to cluster. Misc: lock contention For many concurrent swapout and swapin , the lock contention such as anon_vma or swap_lock is high, so he changed the swap_lock to a per-swap lock. But there still have some lock contention in very high speed SSD because of swapcache address_space lock. Zproject (led by Bob Liu) Bob gave us a very nice introduction about the current memory compression status. Now there are 3 projects(zswap/zram/zcache) which all aim at smooth swap IO storm and promote performance, but they all have their own pros and cons. ZSWAP It is implemented based on frontswap API and it uses a dynamic allocater named Zbud to allocate free pages. Zbud means pairs of zpages are "buddied" and it can only store at most two compressed pages in one page frame, so the max compress ratio is 50%. Each page frame is lru-linked and can do shink in memory pressure. If the compressed memory pool reach its limitation, shink or reclaim happens. It decompress the page frame into two new allocated pages and then write them to real swap device, but it can fail when allocating the two pages. ZRAM Acts as a compressed ramdisk and used as swap device, and it use zsmalloc as its allocator which has high density but may have fragmentation issues. Besides, page reclaim is hard since it will need more pages to uncompress and free just one page. ZRAM is preferred by embedded system which may not have any real swap device. Now both ZRAM and ZSWAP are in driver/staging tree, and in the mm community there are some disscussions of merging ZRAM into ZSWAP or viceversa, but no agreement yet. ZCACHE Handles file page compression but it is removed out of staging recently. From industry (led by Tang Jie, LSI) An LSI engineer introduced several new produces to us. The first is raid5/6 cards that it use full stripe writes to improve performance. The 2nd one he introduced is SandForce flash controller, who can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes. It's called DuraWrite and typical WA is 0.5. What's more, if enable its Dynamic Logical Capacity function module, the controller can do data compression which is transparent to upper layer. LSI testing shows that with this virtual capacity enables 1x TB drive can support up to 2x TB capacity, but the application must monitor free flash space to maintain optimal performance and to guard against free flash space exhaustion. He said the most useful application is for datebase. Another thing I think it's worth to mention is that a NV-DRAM memory in NMR/Raptor which is directly exposed to host system. Applications can directly access the NV-DRAM via a memory address - using standard system call mmap(). He said that it is very useful for database logging now. 
This kind of NVM produces are beginning to appear in recent years, and it is said that Samsung is building a research center in China for related produces. IMHO, NVM will bring an effect to current os layer especially on file system, e.g. its journaling may need to redesign to fully utilize these nonvolatile memory. OCFS2 (led by Canquan Shen) Without a doubt, HuaWei is the biggest contributor to OCFS2 in the past two years. They have posted 46 upstream patches and 39 patches have been merged. Their current project is based on 32/64 nodes cluster, but they also tried 128 nodes at the experimental stage. The major work they are working is to support ATS (atomic test and set), it can be works with DLM at the same time. Looks this idea is inspired by the vmware VMFS locking, i.e, http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html CLK - 18th October 2013 Improving Linux Development with Better Tools (Andi Kleen) This talk focused on how to find/solve bugs along with the Linux complexity growing. Generally, we can do this with the following kind of tools: Static code checkers tools. e.g, sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. This can help check a lot of things, simple mistakes, complex problems, but the challenges are: some are very slow, false positives, may need a concentrated effort to get false positives down. Especially, no static checker I found can follow indirect calls (“OO in C”, common in kernel): struct foo_ops { int (*do_foo)(struct foo *obj); } foo->do_foo(foo); Dynamic runtime checkers, e.g, thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite, then someone could run all the dynamic checkers. Fuzzers/test suites. e.g, Trinity is a great tool, it finds many bugs, but needs manual model for each syscall. Modern fuzzers around using automatic feedback, but notfor kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf Debuggers/Tracers to understand code, e.g, ftrace, can dump on events/oops/custom triggers, but still too much overhead in many cases to run always during debug. Tools to read/understand source, e.g, grep/cscope work great for many cases, but do not understand indirect pointers (OO in C model used in kernel), give us all “do_foo” instances: struct foo_ops { int (*do_foo)(struct foo *obj); } = { .do_foo = my_foo }; foo>do_foo(foo); That would be great to have a cscope like tool that understands this based on types/initializers XFS: The High Performance Enterprise File System (Jeff Liu) [slides] I gave a talk for introducing the disk layout, unique features, as well as the recent changes.   The slides include some charts to reflect the performances between XFS/Btrfs/Ext4 for small files. About a dozen users raised their hands when I asking who has experienced with XFS. I remembered that when I asked the same question in LinuxCon/Japan, only 3 people raised their hands, but they are Chris Mason, Ric Wheeler, and another attendee. The attendee questions were mainly focused on stability, and comparison with other file systems. Linux Containers (Feng Gao) The speaker introduced us that the purpose for those kind of namespaces, include mount/UTS/IPC/Network/Pid/User, as well as the system API/ABI. For the userspace tools, He mainly focus on the Libvirt LXC rather than us(LXC). 
Libvirt LXC is another userspace container management tool, implemented as one type of libvirt driver, it can manage containers, create namespace, create private filesystem layout for container, Create devices for container and setup resources controller via cgroup. In this talk, Feng also mentioned another two possible new namespaces in the future, the 1st is the audit, but not sure if it should be assigned to user namespace or not. Another is about syslog, but the question is do we really need it? In-memory Compression (Bob Liu) Same as CLSF, a nice introduction that I have already mentioned above. Misc There were some other talks related to ACPI based memory hotplug, smart wake-affinity in scheduler etc., but my head is not big enough to record all those things. -- Jeff Liu

    Read the article

  • How to Run Low-Cost Minecraft on a Raspberry Pi for Block Building on the Cheap

    - by Jason Fitzpatrick
    We’ve shown you how to run your own blocktastic personal Minecraft server on a Windows/OSX box, but what if you crave something lighter weight, more energy efficient, and always ready for your friends? Read on as we turn a tiny Raspberry Pi machine into a low-cost Minecraft server you can leave on 24/7 for around a penny a day. Why Do I Want to Do This? There’s two aspects to this tutorial, running your own Minecraft server and specifically running that Minecraft server on a Raspberry Pi. Why would you want to run your own Minecraft server? It’s a really great way to extend and build upon the Minecraft play experience. You can leave the server running when you’re not playing so friends and family can join and continue building your world. You can mess around with game variables and introduce mods in a way that isn’t possible when you’re playing the stand-alone game. It also gives you the kind of control over your multiplayer experience that using public servers doesn’t, without incurring the cost of hosting a private server on a remote host. While running a Minecraft server on its own is appealing enough to a dedicated Minecraft fan, running it on the Raspberry Pi is even more appealing. The tiny little Pi uses so little resources that you can leave your Minecraft server running 24/7 for a couple bucks a year. Aside from the initial cost outlay of the Pi, an SD card, and a little bit of time setting it up, you’ll have an always-on Minecraft server at a monthly cost of around one gumball. What Do I Need? For this tutorial you’ll need a mix of hardware and software tools; aside from the actual Raspberry Pi and SD card, everything is free. 1 Raspberry Pi (preferably a 512MB model) 1 4GB+ SD card This tutorial assumes that you have already familiarized yourself with the Raspberry Pi and have installed a copy of the Debian-derivative Raspbian on the device. If you have not got your Pi up and running yet, don’t worry! Check out our guide, The HTG Guide to Getting Started with Raspberry Pi, to get up to speed. Optimizing Raspbian for the Minecraft Server Unlike other builds we’ve shared where you can layer multiple projects over one another (e.g. the Pi is more than powerful enough to serve as a weather/email indicator and a Google Cloud Print server at the same time) running a Minecraft server is a pretty intense operation for the little Pi and we’d strongly recommend dedicating the entire Pi to the process. Minecraft seems like a simple game, with all its blocky-ness and what not, but it’s actually a pretty complex game beneath the simple skin and required a lot of processing power. As such, we’re going to tweak the configuration file and other settings to optimize Rasbian for the job. The first thing you’ll need to do is dig into the Raspi-Config application to make a few minor changes. If you’re installing Raspbian fresh, wait for the last step (which is the Raspi-Config), if you already installed it, head to the terminal and type in “sudo raspi-config” to launch it again. One of the first and most important things we need to attend to is cranking up the overclock setting. We need all the power we can get to make our Minecraft experience enjoyable. In Raspi-Config, select option number 7 “Overclock”. Be prepared for some stern warnings about overclocking, but rest easy knowing that overclocking is directly supported by the Raspberry Pi foundation and has been included in the configuration options since late 2012. Once you’re in the actual selection screen, select “Turbo 1000MhHz”. 
Again, you’ll be warned that the degree of overclocking you’ve selected carries risks (specifically, potential corruption of the SD card, but no risk of actual hardware damage). Click OK and wait for the device to reset. Next, make sure you’re set to boot to the command prompt, not the desktop. Select number 3 “Enable Boot to Desktop/Scratch”  and make sure “Console Text console” is selected. Back at the Raspi-Config menu, select number 8 “Advanced Options’. There are two critical changes we need to make in here and one option change. First, the critical changes. Select A3 “Memory Split”: Change the amount of memory available to the GPU to 16MB (down from the default 64MB). Our Minecraft server is going to ruin in a GUI-less environment; there’s no reason to allocate any more than the bare minimum to the GPU. After selecting the GPU memory, you’ll be returned to the main menu. Select “Advanced Options” again and then select A4 “SSH”. Within the sub-menu, enable SSH. There is very little reason to keep this Pi connected to a monitor and keyboard, by enabling SSH we can remotely access the machine from anywhere on the network. Finally (and optionally) return again to the “Advanced Options” menu and select A2 “Hostname”. Here you can change your hostname from “raspberrypi” to a more fitting Minecraft name. We opted for the highly creative hostname “minecraft”, but feel free to spice it up a bit with whatever you feel like: creepertown, minecraft4life, or miner-box are all great minecraft server names. That’s it for the Raspbian configuration tab down to the bottom of the main screen and select “Finish” to reboot. After rebooting you can now SSH into your terminal, or continue working from the keyboard hooked up to your Pi (we strongly recommend switching over to SSH as it allows you to easily cut and paste the commands). If you’ve never used SSH before, check out how to use PuTTY with your Pi here. Installing Java on the Pi The Minecraft server runs on Java, so the first thing we need to do on our freshly configured Pi is install it. Log into your Pi via SSH and then, at the command prompt, enter the following command to make a directory for the installation: sudo mkdir /java/ Now we need to download the newest version of Java. At the time of this publication the newest release is the OCT 2013 update and the link/filename we use will reflect that. Please check for a more current version of the Linux ARMv6/7 Java release on the Java download page and update the link/filename accordingly when following our instructions. At the command prompt, enter the following command: sudo wget --no-check-certificate http://www.java.net/download/jdk8/archive/b111/binaries/jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz Once the download has finished successfully, enter the following command: sudo tar zxvf jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz -C /opt/ Fun fact: the /opt/ directory name scheme is a remnant of early Unix design wherein the /opt/ directory was for “optional” software installed after the main operating system; it was the /Program Files/ of the Unix world. 
After the file has finished extracting, enter: sudo /opt/jdk1.8.0/bin/java -version This command will return the version number of your new Java installation like so: java version "1.8.0-ea" Java(TM) SE Runtime Environment (build 1.8.0-ea-b111) Java HotSpot(TM) Client VM (build 25.0-b53, mixed mode) If you don’t see the above printout (or a variation thereof if you’re using a newer version of Java), try to extract the archive again. If you do see the readout, enter the following command to tidy up after yourself: sudo rm jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz At this point Java is installed and we’re ready to move onto installing our Minecraft server! Installing and Configuring the Minecraft Server Now that we have a foundation for our Minecraft server, it’s time to install the part that matter. We’ll be using SpigotMC a lightweight and stable Minecraft server build that works wonderfully on the Pi. First, grab a copy of the the code with the following command: sudo wget http://ci.md-5.net/job/Spigot/lastSuccessfulBuild/artifact/Spigot-Server/target/spigot.jar This link should remain stable over time, as it points directly to the most current stable release of Spigot, but if you have any issues you can always reference the SpigotMC download page here. After the download finishes successfully, enter the following command: sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui Note: if you’re running the command on a 256MB Pi change the 256 and 496 in the above command to 128 and 256, respectively. Your server will launch and a flurry of on-screen activity will follow. Be prepared to wait around 3-6 minutes or so for the process of setting up the server and generating the map to finish. Future startups will take much less time, around 20-30 seconds. Note: If at any point during the configuration or play process things get really weird (e.g. your new Minecraft server freaks out and starts spawning you in the Nether and killing you instantly), use the “stop” command at the command prompt to gracefully shutdown the server and let you restart and troubleshoot it. After the process has finished, head over to the computer you normally play Minecraft on, fire it up, and click on Multiplayer. You should see your server: If your world doesn’t popup immediately during the network scan, hit the Add button and manually enter the address of your Pi. Once you connect to the server, you’ll see the status change in the server status window: According to the server, we’re in game. According to the actual Minecraft app, we’re also in game but it’s the middle of the night in survival mode: Boo! Spawning in the dead of night, weaponless and without shelter is no way to start things. No worries though, we need to do some more configuration; no time to sit around and get shot at by skeletons. Besides, if you try and play it without some configuration tweaks first, you’ll likely find it quite unstable. We’re just here to confirm the server is up, running, and accepting incoming connections. Once we’ve confirmed the server is running and connectable (albeit not very playable yet), it’s time to shut down the server. Via the server console, enter the command “stop” to shut everything down. 
When you’re returned to the command prompt, enter the following command: sudo nano server.properties When the configuration file opens up, make the following changes (or just cut and paste our config file minus the first two lines with the name and date stamp): #Minecraft server properties #Thu Oct 17 22:53:51 UTC 2013 generator-settings= #Default is true, toggle to false allow-nether=false level-name=world enable-query=false allow-flight=false server-port=25565 level-type=DEFAULT enable-rcon=false force-gamemode=false level-seed= server-ip= max-build-height=256 spawn-npcs=true white-list=false spawn-animals=true texture-pack= snooper-enabled=true hardcore=false online-mode=true pvp=true difficulty=1 player-idle-timeout=0 gamemode=0 #Default 20; you only need to lower this if you're running #a public server and worried about loads. max-players=20 spawn-monsters=true #Default is 10, 3-5 ideal for Pi view-distance=5 generate-structures=true spawn-protection=16 motd=A Minecraft Server In the server status window, seen through your SSH connection to the pi, enter the following command to give yourself operator status on your Minecraft server (so that you can use more powerful commands in game, without always returning to the server status window). op [your minecraft nickname] At this point things are looking better but we still have a little tweaking to do before the server is really enjoyable. To that end, let’s install some plugins. The first plugin, and the one you should install above all others, is NoSpawnChunks. To install the plugin, first visit the NoSpawnChunks webpage and grab the download link for the most current version. As of this writing the current release is v0.3. Back at the command prompt (the command prompt of your Pi, not the server console–if your server is still active shut it down) enter the following commands: cd /home/pi/plugins sudo wget http://dev.bukkit.org/media/files/586/974/NoSpawnChunks.jar Next, visit the ClearLag plugin page, and grab the latest link (as of this tutorial, it’s v2.6.0). Enter the following at the command prompt: sudo wget http://dev.bukkit.org/media/files/743/213/Clearlag.jar Because the files aren’t compressed in a .ZIP or similar container, that’s all there is to it: the plugins are parked in the plugin directory. (Remember this for future plugin downloads, the file needs to be whateverplugin.jar, so if it’s compressed you need to uncompress it in the plugin directory.) Resart the server: sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui Be prepared for a slightly longer startup time (closer to the 3-6 minutes and much longer than the 30 seconds you just experienced) as the plugins affect the world map and need a minute to massage everything. After the spawn process finishes, type the following at the server console: plugins This lists all the plugins currently active on the server. You should see something like this: If the plugins aren’t loaded, you may need to stop and restart the server. After confirming your plugins are loaded, go ahead and join the game. You should notice significantly snappier play. In addition, you’ll get occasional messages from the plugins indicating they are active, as seen below: At this point Java is installed, the server is installed, and we’ve tweaked our settings for for the Pi.  It’s time to start building with friends!     

    Read the article

  • Silverlight Cream for June 10, 2010 -- #879

    - by Dave Campbell
    In this Issue: Emiel Jongerius, Nokola, Christian Schormann, Tim Heuer, David Poll, Mike Snow(-2-), John Papa, and Charles Petzold. Shoutout: Viktor Larsson has a frank look at WP7 based on information from MIX10 and what was said this week in his post: Licking Windows Phone 7... yeah licking, not liking :) .. my guess is even that didn't allow him to keep it! If you haven't already noticed, the CodeProject reader's choice awards are out this week and Telerik won for their RadColorPicker and RadCalendar for Silverlight Telerik also needs congratulations for winning Telerik wins “Best of TechEd” award in the “Components and Middleware” category... check out that trophy... Steven Forte has a picture up of the Telerikers after getting the award. Koen Zwikstra has a new release of Silverlight Spy up that supports the latest release: Silverlight Spy 3.0.0.12 From SilverlightCream.com: Localization of XAML files in Silverlight Emiel Jongerius is back with another post, this time discussing Localizing XAM files... external links and source included. Coolest Silverlight Sound Library for Games I've Seen Yet Nokola talks up a Sound Library for Silverlight 4 Games ... and has links to a great demo, plus the source. SketchFlow: Firing Actions when a Storyboard is Complete Christian Schormann responded to some Twitter questions and demonstrates using the StoryboardCompleted trigger with a Navigate action. Hosting cross-domain Silverlight applications (XAP) Tim Heuer responds to a question from a reader and demonstrates how to host a XAP from a domain other than the one you're working on. Taking Microsoft Silverlight 4 Applications Beyond the Browser (TechEd WEB313) David Poll has all his material up from his TechEd presentation earlier this week on Silverlight OOB... and he covered some pretty extensive material ... check it out! Silverlight Tip of the Day #29 – Configuring Service Reference to Back to LocalHost Mike Snow has a couple new tips up... this first one is quick, but very useful... how to switch your service reference back to localhost without pulling out your hair. Silverlight Tip of the Day #30 – Sending Email from Silverlight In Mike Snow's latest tip, he shows how to send email from your Silverlight app... using a WCF service... and a step-by-step set of instructions. Creating Rich Interactions Using Blend 4: Transition Effects, Fluid Layout and Layout States (Silverlight TV #32) John Papa has Silverlight TV #32 up, and he's talking with Kenny Young of the Expression Blend team while Kenny uses some built-om effects and also creates some impressive examples from scratch -- code included. Simulating Touch Inertia on Windows Phone 7 Charles Petzold has a post up on simulating inertia on WP7... demos in WPF and then moves into WP7... math, source, and external links. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
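    For readers who do not have /usr/include/ar.h in front of them, the fixed member header looks roughly like this (field names and widths as in the traditional SVR4 definition; the header file itself is the authoritative reference):

        #define ARMAG   "!<arch>\n"   /* archive magic number */
        #define SARMAG  8             /* length of the magic number */
        #define ARFMAG  "`\n"         /* member header trailer string */

        struct ar_hdr                 /* one fixed-size header per member */
        {
            char ar_name[16];         /* member name, '/' terminated */
            char ar_date[12];         /* modification time, decimal seconds */
            char ar_uid[6];           /* owning uid, decimal */
            char ar_gid[6];           /* owning gid, decimal */
            char ar_mode[8];          /* file mode, octal */
            char ar_size[10];         /* member size in bytes, decimal */
            char ar_fmag[2];          /* trailer, contains ARFMAG */
        };

    Note that every numeric field is ASCII text, which is why an archive containing only text members is itself readable as text.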
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist.

There are only two possible special members: a symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB.

When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86.

The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise.

The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build a new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow.

Factors That Limit Solaris Archive Size

As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow.

The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs.
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB.

Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers.

The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive.

In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB.

Archive Format Differences Among Unix Variants

While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable de facto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new.

The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another.

In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem.

I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes: SVR4 Unix, BSD Unix, and IBM AIX. Solaris is an SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX).

AIX

AIX is an exception to the rule that Unix archive formats are all based on the original Bell Labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems:

These formats use a different magic number than the standard one used by Solaris and other Unix variants.
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted.
They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though.
Their member headers are doubly linked, containing offsets to both the previous and next members.

Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and is going to be slow to use with large archives.

BSD

BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table.

GNU/Linux

The GNU toolchain uses the SVR4 format, and is compatible with Solaris.

HP-UX

HP-UX seems to follow the SVR4 model, and is compatible with Solaris.

IRIX

IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else.

Tru64 Unix (Digital/Compaq/HP)

Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them.

Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done.

The Implementation of 64-bit Solaris Archives

As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default.
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
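To make the name overflow scheme concrete, here is a small, hypothetical sketch; the helper name member_name and its simplifications are mine, not code from the Solaris sources. It ignores the special members "/" and "//" and glosses over the terminator that real string table entries carry, so treat it purely as an illustration of the "/ddd" encoding that long names use today and that an oversized ar_size could use in the future:

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Same traditional SVR4 header layout sketched earlier. */
struct ar_hdr {
    char ar_name[16];
    char ar_date[12];
    char ar_uid[6];
    char ar_gid[6];
    char ar_mode[8];
    char ar_size[10];
    char ar_fmag[2];
};

/*
 * Resolve a member name field. A short name is stored in place and is
 * terminated with '/'. A name of the form "/123" means "the real name
 * lives at offset 123 in the string table member".
 */
static const char *
member_name(const struct ar_hdr *hdr, const char *strtab, char buf[17])
{
    if (hdr->ar_name[0] == '/' && isdigit((unsigned char)hdr->ar_name[1])) {
        /* Long name: decimal offset into the string table. */
        return strtab + strtol(&hdr->ar_name[1], NULL, 10);
    }

    /* Short name: copy characters up to the terminating '/'. */
    size_t i;
    for (i = 0; i < sizeof (hdr->ar_name) && hdr->ar_name[i] != '/'; i++)
        buf[i] = hdr->ar_name[i];
    buf[i] = '\0';
    return buf;
}

int main(void)
{
    struct ar_hdr h;
    char buf[17];

    memset(&h, ' ', sizeof (h));
    memcpy(h.ar_name, "hello.o/", 8);
    printf("%s\n", member_name(&h, "", buf));   /* prints "hello.o" */
    return 0;
}

Under the proposed extension, a member whose size will not fit the existing size field would carry a "/ddd" reference in ar_size instead, pointing at the real size string in the string table, which is how the plan stays consistent with the existing format.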

    Read the article

  • I still think Twitter is dead … but

    - by Randy Walker
Twitter finally hit the mainstream about 8 months ago, but I’ve been saying for a couple of years now: without a real way for the company to earn money, what’s the future fate of Twitter?  On the personal side, where is the real value for the users?  For the most part, Twitter has replaced most people’s IM (instant messaging), at least in the technology circles I run in.  It still has value for users as a communication tool.  But I see it more as a fad.  My prediction is over the next 6 months we’ll start seeing a usage drop (if we haven’t already started to see it).

On the business side, how does Twitter make money?  It doesn’t.  If you use the text messaging capabilities, you see a few ads.  But most smart phone and PC users won’t ever see them.  I still think Twitter has the best chance to make money by forcing the “collectors” to pay money.  You know what I mean by “collector”: those people that collect tons of followers or friends.  If Twitter caps the number of followers and makes you pay to have more, would you?  The normal Twitter user doesn’t have that many followers, and this is where my title comes in … BUT.

The financial value for Twitter is really seen through businesses connecting with their customers.  I’ve seen 3 effective ways this has been accomplished.

1. Giving your customers a coupon or announcing a sale
My favorite is @amazonmp3. Being a huge music lover, I get notified when they put music on sale.
Various restaurants like @ruthschris_ARK will let their favorite customers know about certain specials.
@BluefinMemphis - I was traveling through Memphis once looking for a sushi restaurant when they had 50% off if we mentioned we saw them on Twitter.  It was their first attempt at trying to encourage customers in the door, and after talking with the management, it was a huge success.

2. Giveaways
@namecheap - Several companies have started huge marketing campaigns, but my favorite is watching companies post trivia questions, and the first person to respond wins a prize.

3. Responding to Customer Complaints
I once posted a complaint about American Express (a company that I have slowly come to really dislike) but they actually had someone contact me to try and resolve the issue.  I give them credit for paying attention, but still dislike them for their horrible credit practices.

    Read the article

  • Tic Tac Toe Winner in Javascript and html [closed]

    - by Yehuda G
I am writing a tic tac toe game using html, css, and JavaScript. I have my JavaScript in an external .js file being referenced into the .html file. Within the .js file, I have a function called playerMove, which allows the player to make his/her move and switches between player 'x' and 'o'. What I am trying to do is determine the winner. Here is what I have: each square, when onclick(this), references playerMove(piece). After each move is made, I want to run an if statement to check for the winner, but am unsure if the parameters would include a reference to 'piece' or a, b, and c. Any suggestions would be greatly appreciated.

Javascript:

var turn = 0;
a = document.getElementById("topLeftSquare").innerHTML;
b = document.getElementById("topMiddleSquare").innerHTML;
c = document.getElementById("topRightSquare").innerHTML;

function playerMove(piece) {
    var win;
    if (piece.innerHTML != 'X' && piece.innerHTML != 'O') {
        if (turn % 2 == 0) {
            document.getElementById('playerDisplay').innerHTML = "X Plays " + printEquation(1);
            piece.innerHTML = 'X';
            window.setInterval("X", 10000);
            piece.style.color = "red";
            if (piece.innerHTML == 'X')
                window.alert("X WINS!");
        } else {
            document.getElementById('playerDisplay').innerHTML = "O Plays " + printEquation(1);
            piece.innerHTML = 'O';
            piece.style.color = "brown";
        }
        turn += 1;
        // the winner check should run here, after the move has been recorded
    }
}

html:

<div id="board">
    <!-- each square carries an id so the getElementById calls above can find it -->
    <div id="topLeftSquare" class="topLeftSquare" onclick="playerMove(this)"> </div>
    <div id="topMiddleSquare" class="topMiddleSquare" onclick="playerMove(this)"> </div>
    <div id="topRightSquare" class="topRightSquare" onclick="playerMove(this)"> </div>
    <div id="middleLeftSquare" class="middleLeftSquare" onclick="playerMove(this)"> </div>
    <div id="middleSquare" class="middleSquare" onclick="playerMove(this)"> </div>
    <div id="middleRightSquare" class="middleRightSquare" onclick="playerMove(this)"> </div>
    <div id="bottomLeftSquare" class="bottomLeftSquare" onclick="playerMove(this)"> </div>
    <div id="bottomMiddleSquare" class="bottomMiddleSquare" onclick="playerMove(this)"> </div>
    <div id="bottomRightSquare" class="bottomRightSquare" onclick="playerMove(this)"> </div>
</div>

    Read the article

  • Lesser Known NHibernate Session Methods

    - by Ricardo Peres
The NHibernate ISession, the core of NHibernate usage, has some methods which are quite misunderstood and underused; to name a few: Merge, Persist, Replicate and SaveOrUpdateCopy. Their purposes are:

Merge: copies properties from a transient entity to a loaded entity with the same id in the first level cache; if there is no loaded entity with the same id, one will be loaded and placed in the first level cache first; if using versioning, the transient entity must have the same version as in the database;
Persist: similar to Save or SaveOrUpdate, attaches a possibly new entity to the session, but does not generate an INSERT or UPDATE immediately and thus the entity does not get a database-generated id; it will only get it at flush time;
Replicate: copies an instance from one session to another session, perhaps from a different session factory;
SaveOrUpdateCopy: attaches a transient entity to the session and tries to save it.

Here are some samples of their use.

ISession session = ...;
AuthorDetails existingDetails = session.Get<AuthorDetails>(1); //loads an entity and places it in the first level cache
AuthorDetails detachedDetails = new AuthorDetails { ID = existingDetails.ID, Name = "Changed Name" }; //a detached entity with the same ID as the existing one
Object mergedDetails = session.Merge(detachedDetails); //merges the Name property from the detached entity into the existing one; the detached entity does not get attached
session.Flush(); //saves the existingDetails entity, since it is now dirty, due to the change in the Name property

AuthorDetails details = ...;
ISession session = ...;
session.Persist(details); //details.ID is still 0
session.Flush(); //saves the details entity now and fetches its id

ISessionFactory factory1 = ...;
ISessionFactory factory2 = ...;
ISession session1 = factory1.OpenSession();
ISession session2 = factory2.OpenSession();
AuthorDetails existingDetails = session1.Get<AuthorDetails>(1); //loads an entity
session2.Replicate(existingDetails, ReplicationMode.Overwrite); //saves it into another session, overwriting any possibly existing one with the same id; other options are Ignore, where any existing record with the same id is left untouched, Exception, where an exception is thrown if there is a record with the same id, and LatestVersion, where the latest version wins

    Read the article

  • Will polishing my current project be a better learning experience than starting a new one?

    - by Alejandro Cámara
I started programming many years ago. Now I'm trying to make games. I have read many recommendations to start cloning some well known games like galaga, tetris, arkanoid, etc. I have also read that I should go for the whole game (including menus, sound, score, etc.). Yesterday I finished the first complete version of my arkanoid clone. But it is far from over. I can still work on it for months (I program as a hobby in my free time) implementing a screen resolution switcher, remap of the control keys, power-ups falling from broken bricks, and a huge etc. But I do not want to be forever learning how to clone ONE game. I have the urge to get to the next clone in order to apply some design ideas I have come upon while developing this arkanoid clone (at the same time I am reading the GoF book and much source code from Ludum Dare 21 game contest).

So the question is: Should I keep improving the arkanoid clone until it has all the features the original game had? or should I move to the next clone (there are almost infinite games to clone) and start mending the things I did wrong with the previous clone? This can be a very subjective question, so please restrain the answers to the most effective way to learn how to make my own games (not cloning someone ideas). Thank you!

CLARIFICATION

In order to clarify what I have implemented I make this list:

Features implemented:
Bouncing capabilities (the ball bounces on walls, on bricks, and on the bar).
Sounds when bouncing on bricks and the bar, and when the player wins or loses.
Basic title menu (new game and exit only). Also in-game menu and win/lose menus.
Only three levels, but the map system is so easy I do not think it will teach me much (am I wrong?).

Features not-implemented:
Power-ups when breaking the bricks.
Complex bricks (with more than one "hit point" and invincible).
Better graphics (I am not really good at it).
Programming polishing (use more intensively the design patterns).

Here's a link to its (minimal) webpage: http://blog.acamara.es/piperine/ I kind of feel ashamed to show it, so please do not hit me too hard :-)

My question was related to the not-implemented features. I wondered what was the fastest (optimal) path to learn. 1) implement the not-implemented features in this project which is getting big, or 2) make a new game which probably will teach me those lessons and new ones.

ANSWER

I choose @ashes999 answer because, in my case, I think I should polish more and try to "ship" the game. I think all the other answers are also important to bear in mind, so if you came here having the same question, before taking a rush decision read all the discussion. Thank you all!

    Read the article

  • PASS: Bylaw Change 2013

    - by Bill Graziano
    PASS launched a Global Growth Initiative in the Summer of 2011 with the appointment of three international Board advisors.  Since then we’ve thought and talked extensively about how we make PASS more relevant to our members outside the US and Canada.  We’ve collected much of that discussion in our Global Growth site.  You can find vision documents, plans, governance proposals, feedback sites, and transcripts of Twitter chats and town hall meetings.  We also address these plans at the Board Q&A during the 2012 Summit. One of the biggest changes coming out of this process is around how we elect Board members.  And that requires a change to the bylaws.  We published the proposed bylaw changes as a red-lined document so you can clearly see the changes.  Our goal in these bylaw changes was to address the changes required by the global growth initiatives, conduct a legal review of the document and address other minor issues in the document.  There are numerous small wording changes throughout the document.  For example, we replaced every reference of “The Corporation” with the word “PASS” so it now reads “PASS is organized…”. Board Composition The biggest change in these bylaw changes is how the Board is composed and elected.  This discussion starts in section VI.2.  This section now says that some elected directors will come from geographic regions.  I think this is the best way to make sure we give all of our members a voice in the leadership of the organization.  The key parts of this section are: The remaining Directors (i.e. the non-Officer Directors and non-Vendor Appointed Directors) shall be elected by the voting membership (“Elected Directors”). Elected Directors shall include representatives of defined PASS regions (“Regions”) as set forth below (“Regional Directors”) and at minimum one (1) additional Director-at-Large whose selection is not limited by region. Regional Directors shall include, but are not limited to, two (2) seats for the Region covering Canada and the United States of America. Additional Regions for the purpose of electing additional Regional Directors and additional Director-at-Large seats for the purpose of expanding the Board shall be defined by a majority vote of the current Board of Directors and must be established prior to the public call for nominations in the general election. Previously defined Regions and seats approved by the Board of Directors shall remain in effect and can only be modified by a 2/3 majority vote by the then current Board of Directors. Currently PASS has six At-Large Directors elected by the members.  These changes allow for a Regional Director position that is elected by the members but must come from a particular region.  It also stipulates that there must always be at least one Director-at-Large who can come from any region. We also understand that PASS is currently a very US-centric organization.  Our Summit is held in America, roughly half our chapters are in the US and Canada and most of the Board members over the last ten years have come from America.  We wanted to reflect that by making sure that our US and Canadian volunteers would continue to play a significant role by ensuring that two Regional seats are reserved specifically for Canada and the US. Other than that, the bylaws don’t create any specific regional seats.  These rules allow us to create Regional Director seats but don’t require it.  
We haven’t fully discussed what the criteria will be in order for a region to have a seat designated for it or how many regions there will be.  In our discussions we’ve broadly discussed regions for:

United States and Canada
Europe, Middle East, and Africa (EMEA)
Australia, New Zealand and Asia (also known as Asia Pacific or APAC)
Mexico, South America, and Central America (LATAM)

As you can see, our thinking is that there will be a few large regions.  I’ve also considered a non-North America region that we can gradually split into the regions above as our membership grows in those areas.  The regions will be defined by a policy document that will be published prior to the elections. I’m hoping that over the next year we can begin to publish more of what we do as Board-approved policy documents.

While the bylaws only require a single non-region specific At-large Director, I would expect we would always have two.  That way we can have one in each election.  I think it’s important that we always have one seat open that anyone who is eligible to run for the Board can contest.  The Board is required to have any regions defined prior to the start of the election process.

Board Elections – Regional Seats

We spent a lot of time discussing how the elections would work for these Regional Director seats.  Ultimately we decided that the simplest solution is that every PASS member should vote for every open seat.  Section VIII.3 reads:

Candidates who are eligible (i.e. eligible to serve in such capacity subject to the criteria set forth herein or adopted by the Board of Directors) shall be designated to fill open Board seats in the following order of priority on the basis of total votes received: (i) full term Regional Director seats, (ii) full term Director-at-Large seats, (iii) not full term (vacated) Regional Director seats, (iv) not full term (vacated) Director-at-Large seats. For the purposes of clarity, because of eligibility requirements, it is contemplated that the candidates designated to the open Board seats may not receive more votes than certain other candidates who are not selected to the Board.

We debated whether to have multiple ballots or one single ballot.  Multiple ballot elections get complicated quickly.  Let’s say we have a ballot for US/Canada and one for Region 2.  After that we’d need a mechanism to merge those two together and come up with the winner of the at-large seat or have another election for the at-large position.  We think the best way to do this is a single ballot and putting the highest vote getters into the most restrictive seats.  Let’s look at an example:

There are seats open for Region 1, Region 2 and at-large.  The election results are as follows:

Candidate A (eligible for Region 1) – 550 votes
Candidate B (eligible for Region 1) – 525 votes
Candidate C (eligible for Region 1) – 475 votes
Candidate D (eligible for Region 2) – 125 votes
Candidate E (eligible for Region 2) – 75 votes

In this case, Candidate A is the winner for Region 1 and is assigned that seat.  Candidate D is the winner for Region 2 and is assigned that seat.  The at-large seat is filled by the highest remaining vote getter, which is Candidate B.

The key point to understand is that we may have a situation where a person with a lower vote total is elected to a regional seat and a person with a higher vote total is excluded.  This will be true whether we had multiple ballots or a single ballot.

Board Elections – Vacant Seats

The other change to the election process is for vacant Board seats.
The actual changes are sprinkled throughout the document. Previously we didn’t have a mechanism that allowed for an election of a Board seat that we knew would be vacant in the future.  The most common case is when a Board member moves to an Officer role in the middle of their term.  One of the key changes is to allow the number of votes members have to match the number of open seats.  This allows each voter to express their preference on all open seats.  This only applies when we know about the opening prior to the call for nominations.  This all means that if there’s a seat that will be open at the start of the next Board term, and we know about it prior to the call for nominations, we can include that seat in the elections.  Ultimately, the aim is to have PASS members decide who sits on the Board in as many situations as possible.

We discussed the option of changing the bylaws to just take the next highest vote-getter in all other cases.  I think that’s wrong for the following reasons:

All voters aren’t able to express an opinion on all candidates.  If there are five people running for three seats, you can only vote for three.  You have no way to express your preference between #4 and #5.
Different candidates may have different information about the number of seats available.  A person may learn that a Board member plans to resign at the end of the year prior to that information being made public. They may understand that the top four vote getters will end up on the Board while the rest of the members believe there are only three openings.  This may affect someone’s decision to run.  I don’t think this creates a transparent, fair election.
Board members may use their knowledge of the election results to decide whether to remain on the Board or not.  Admittedly this one is unlikely but I don’t want to create a situation where this accusation can be leveled.

I think the majority of vacancies in the future will be handled through elections.  The bylaw section quoted above also indicates that partial term vacancies will be filled after the full term seats are filled.

Removing Directors

Section VI.7 on removing directors has always had a clause that allowed members to remove an elected director.  We also had a clause that allowed appointed directors to be removed.  We added a clause that allows the Board to remove for cause any director with a 2/3 majority vote.  The updated text reads:

Any Director may be removed for cause by a 2/3 majority vote of the Board of Directors whenever in its judgment the best interests of PASS would be served thereby. Notwithstanding the foregoing, the authority of any Director to act as in an official capacity as a Director or Officer of PASS may be suspended by the Board of Directors for cause. Cause for suspension or removal of a Director shall include but not be limited to failure to meet any Board-approved performance expectations or the presence of a reason for suspension or dismissal as listed in Addendum B of these Bylaws.

The first paragraph is updated and the second and third are unchanged (except cleaning up language).  If you scroll down and look at Addendum B of these bylaws you find the following:

Cause for suspension or dismissal of a member of the Board of Directors may include:
Inability to attend Board meetings on a regular basis.
Inability or unwillingness to act in a capacity designated by the Board of Directors.
Failure to fulfill the responsibilities of the office.
Inability to represent the Region elected to represent.
Failure to act in a manner consistent with PASS's Bylaws and/or policies.
Misrepresentation of responsibility and/or authority.
Misrepresentation of PASS.
Unresolved conflict of interests with Board responsibilities.
Breach of confidentiality.

The line about the inability to represent your region is what we added to the bylaws in this revision.  We also added a clause to section VII.3 allowing the Board to remove an officer.  That clause is much less restrictive.  It doesn’t require cause and only requires a simple majority.

The Board of Directors may remove any Officer whenever in their judgment the best interests of PASS shall be served by such removal.

Other

There are numerous other small changes throughout the document.

Proxy voting.  The laws around how members and Board members may vote by proxy are specific in Illinois law.  PASS is an Illinois corporation and is subject to Illinois laws.  We changed section IV.5 to come into compliance with those laws.  Specifically this says you can only vote through a proxy if you have a written proxy through your authorized attorney.
English language proficiency.  As we increase our global footprint we come across more members that aren’t native English speakers.  The business of PASS is conducted in English and it’s important that our Board members speak English.  If we get big enough to afford translators, we may be able to relax this but right now we need English language skills for effective Board members.
Committees.  The language around committees in section IX is old and dated.  Our lawyers advised us to clean it up.  This section specifically applies to any committees that the Board may form outside of portfolios.  We removed the term limits, quorum and vacancies clause.  We don’t currently have any committees that this would apply to.  The Nominating Committee is covered elsewhere in the bylaws.
Electronic Votes.  The change allows the Board to vote via email but the results must be unanimous.  This is to conform with Illinois state law.
Immediate Past President.  There was no mechanism to fill the IPP role if an outgoing President chose not to participate.  We changed section VII.8 to allow the Board to invite any previous President to fill the role by majority vote.
Nominations Committee.  We’ve opened the language to allow for the transparent election of the Nominations Committee as outlined by the 2011 Election Review Committee.
Revocation of Charters.  The language surrounding the revocation of charters for local groups was flagged by the lawyers.  We have allowed for the local user group to make all necessary payments before considering the return of items to PASS, if required.
Bylaw notification.  We’ve spent countless meetings working on these bylaws with the intent to not open them again any time in the near future.  Should the bylaws be opened again, we have included a clause ensuring that the PASS membership is involved.  I’m proud that the Board has remained committed to transparency and accountability to members.  This clause will require that same level of commitment in the future even when all the current Board members have rolled off.

I think that covers everything.  I’d encourage you to look through the red-line document and see the changes.  It’s helpful to look at the language that’s being removed and the language that’s being added.  I’m happy to answer any questions here or you can email them to [email protected].

    Read the article

< Previous Page | 145 146 147 148 149 150 151 152 153 154 155 156  | Next Page >