Search Results

  • How do I display data in a table and allow users to copy selected data?

    - by cfouche
    Hi, I have a long list of data that I want to display in table format to users. The data changes when the user performs certain actions in my app, but it is not directly editable. So the user can create a reasonably big table of data, but he can't change individual cells' values. However, I do want the data to be copyable. So I want it to be possible for the user to select some or all of the cells, press Ctrl+C to copy the data to his clipboard, and then Ctrl+V to paste the data into an external text editor. At the moment, I'm displaying the data in a ListView with a GridView, and this works perfectly, except that GridView doesn't allow one to copy data. What other options can I try? Ours is a WPF app, coded in C#.
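
    One option worth trying: WPF's DataGrid (built into .NET 4, or via the WPF Toolkit on 3.5) supports read-only display, cell selection, and clipboard copy out of the box. A minimal sketch, assuming a Rows collection exposed by your view model:

        <DataGrid ItemsSource="{Binding Rows}"
                  IsReadOnly="True"
                  SelectionMode="Extended"
                  SelectionUnit="CellOrRowHeader"
                  ClipboardCopyMode="IncludeHeader" />

    With this, Ctrl+C copies the selected cells (tab-separated, CSV, and HTML formats are placed on the clipboard) so they paste cleanly into a text editor, while IsReadOnly blocks edits without blocking selection.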

  • How to build up a lookup table in a microcontroller?

    - by GiGi
    Hi, I am so confused about how to build up a 3-dimensional lookup table. Is the template as follows: create a 3-dimensional array to store the data, then create a linked list, then create an 'insert' function to put all the data into the array? As some books say the lookup table should be static const, do I need to create another function to expand the list? Because the lookup table will be used in a microcontroller, it only needs to support putting the data into the array, and whenever I want to find data, the search should be fast and easy. Could you help me with this and give me some suggestions? Thank you.
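
    For a typical microcontroller, no linked list is needed at all; a hedged sketch in C (names and dimensions invented for illustration): declare the table static const so the toolchain places it in flash, fill it at compile time, and index it directly.

        #include <stdint.h>

        #define NX 4
        #define NY 4
        #define NZ 4

        /* 3-D lookup table, filled at compile time and stored in ROM */
        static const uint16_t lut[NX][NY][NZ] = {
            /* ... precomputed data ... */
        };

        uint16_t lut_read(uint8_t x, uint8_t y, uint8_t z)
        {
            if (x >= NX || y >= NY || z >= NZ)
                return 0;               /* out-of-range guard */
            return lut[x][y][z];
        }

    If the contents must change at runtime, the table has to live in RAM instead, but direct array indexing still beats walking a list on a small MCU.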

  • How to set alternate row colors for an iterated table in PHP?

    - by chandru_cp
    I am using PHP and I am iterating over a table with a result array. I want to add a row color and an alternate row color to it. How do I do that? Any suggestions?

        <table id="chkbox" cellpadding="0" cellspacing="2" width="100%" class="table_Style_Border">
          <tr>
            <td style="width:150px" class="grid_header" align="center">RackName</td>
            <td style="width:150px" class="grid_header" align="center">LibraryName</td>
            <td style="width:200px" class="grid_header" align="center">LibrarianName</td>
            <td style="width:200px" class="grid_header" align="center">Location</td>
            <td style="width:1%" class="grid_header"></td>
          </tr>
          <? if(isset($comment)) {
               echo '<tr><td class=table_label colspan=5>'.$comment.'</td></tr>';
             } ?>
          <?php foreach($rackData as $row) { ?>
          <tr>
            <td align="left" class="table_label"><?=$row['rack_name']?></td>
            <td align="left" class="table_label"><?=$row['library_name']?></td>
            <td align="center" class="table_label"><?=$row['librarian']?></td>
            <td align="center" class="table_label"><?=$row['location']?></td>
            <td align="center">
              <input type="checkbox" name="group" id="group" value="<?=$row['rack_id']?>" onclick="display(this);" >
            </td>
          </tr>
          <? } ?>
        </table>

    (Note: the closing tag was written as a second opening <table> in the original; it should be </table>.)
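
    A common pattern is to keep a counter and toggle a CSS class on each pass through the loop; a hedged sketch, assuming you define the two classes (row_even / row_odd) yourself:

        <?php $i = 0; foreach ($rackData as $row) { ?>
        <tr class="<?php echo ($i++ % 2 == 0) ? 'row_even' : 'row_odd'; ?>">
            <!-- same <td> cells as above -->
        </tr>
        <?php } ?>

    with, for example:

        .row_even { background-color: #ffffff; }
        .row_odd  { background-color: #e8eef4; }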

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind I need to describe this while abstracting all possibly confidential info. I've been put in charge of a 10-year-old transactional system whose business logic is mostly implemented at the database level (triggers, stored procedures, etc.). Win2000 server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered. :(

    The core process is a program that executes transactions; specifically, it executes a stored procedure with various parameters, let's call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL server (execution is pretty regular, ranging from 0 to 60 times a minute, depending on what items the program instance is responsible for). Performance of the system has dropped considerably with 10 years of data growth; the reason is the deadlocks, and specifically the deadlock wait times. The deadlock is on the Employee table. I have discovered:

    - In sp_ProcessTrans' execution, it selects from the Employee table 7 times (don't ask)
    - The select is done on a field that is NOT the primary key
    - No index exists on this field, so a table scan is performed. 7 times. Per transaction.

    So the reason for the deadlocks is clear. I created a non-unique ordered clustered index on the field (the field looks good: almost unique, NUM(7), very rarely changes). Immediate improvement in the test environment. The problem is that I cannot simulate the deadlocks in a test environment (I'd need 30 workstations, and I'd need to simulate 'realistic' activity on those stations, so virtualization is out). I need to know if I must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements, extra wait time, etc.) in creating this index on the production database while the transactions are still taking place? (Although I can select a time when transactions are fairly quiet across the 30 stations.) Are there any hidden dangers I'm not seeing? (Not looking forward to needing to restore the DB if something goes wrong; restoring would take a lot of time with 10 years of data.)
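
    For reference, the statement itself is small; the risk profile is what matters. A hedged sketch with invented names: SQL Server 2000 has no ONLINE index build (that arrived in 2005), so creating a clustered index takes an exclusive table lock and physically rewrites the table, blocking sp_ProcessTrans for the duration and needing free space on the order of 1.2 times the table size.

        -- Hypothetical object/column names
        CREATE CLUSTERED INDEX IX_Employee_EmpNum
            ON dbo.Employee (EmpNum)

    A plain nonclustered index on the same column would also eliminate the seven table scans, with a much shorter and less disruptive build, which may be the safer first step on a live system.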

  • How to Export a Table to an Excel File at the Client Side with jQuery?

    - by kamaci
    I use Java, jQuery and JSP in my web application. I want to learn how I can export the values in a table on my page to an Excel file with jQuery, on the client side. There are many examples and suggestions, even on stackoverflow.com, but the solutions I saw are server-side, like Apache POI or something similar. There is the DataTables plug-in (http://www.datatables.net/), which I think can export table data to an Excel file on the client side, and I am searching for a solution like that.
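
    One way to stay entirely client-side is to serialize the table to CSV (which Excel opens directly) and hand it to the browser as a data URI. A hedged sketch; the table and button ids are invented, and older IE versions won't navigate to data URIs, which is one reason server-side answers (Apache POI etc.) dominate:

        // Serialize every row of a table into quoted CSV
        function tableToCsv($table) {
            return $table.find('tr').map(function () {
                return $(this).find('th,td').map(function () {
                    return '"' + $(this).text().replace(/"/g, '""') + '"';
                }).get().join(',');
            }).get().join('\r\n');
        }

        $('#export').click(function () {
            var csv = tableToCsv($('#myTable'));
            window.location.href =
                'data:text/csv;charset=utf-8,' + encodeURIComponent(csv);
        });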

  • How do you display a phpMyAdmin table in a web browser/web page?

    - by user1390754
    Just a simple question; I've been searching around and for some reason I cannot find an answer to this. After creating a database/table in phpMyAdmin using XAMPP, what command do I need to put into my web page (PHP) to show the table I made? I know the first step involves connecting to the database, and I think I've done that properly. This is the code I found somewhere about connecting (w3 tutorials, I believe):

        $con = mysql_connect("localhost", "xxxxxx", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
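
    Once the connection works, the missing pieces are selecting the database, running a query, and looping over the rows. A hedged sketch (the database and table names are invented; the mysql_* functions match the tutorial code above but are deprecated in current PHP, where mysqli or PDO is preferred):

        <?php
        $con = mysql_connect("localhost", "xxxxxx", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("my_database", $con);

        $result = mysql_query("SELECT * FROM my_table");
        echo "<table border='1'>";
        while ($row = mysql_fetch_assoc($result)) {
            echo "<tr>";
            foreach ($row as $value) {
                echo "<td>" . htmlspecialchars($value) . "</td>";
            }
            echo "</tr>";
        }
        echo "</table>";
        mysql_close($con);
        ?>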

  • How to Create Views for All Tables with Oracle SQL Developer

    - by thatjeffsmith
    Got this question over the weekend via a friend and Oracle ACE Director, so I thought I would share the answer here. If you want to quickly generate DDL to create VIEWs for all the tables in your system, the easiest way to do that with SQL Developer is to create a data model. Wait, why would I want to do this? StackOverflow has a few things to say on this subject. So, start with importing a data dictionary.

    Step One: Open or Create a Model. In SQL Developer, go to View – Data Modeler – Browser. Then in the browser panel, expand your design and create a new Relational Model.

    Step Two: Import your Data Dictionary. This is a fancy way of saying, 'suck objects out of the database into my model'. This will open a wizard to connect, select your schema(s), objects, etc. Once they're in your model, you're ready to cook with gas. I'm using HR (Human Resources) for this example. You should end up with something that looks like this (screenshot: our favorite HR model). Now we're ready to generate the views!

    Step Three: Auto-generate the Views. Go to Tools – Data Modeler – Table to View Wizard. I don't want all my tables included, and I want to change the naming standard, so decide if you want to change the default generated view names: by default the views will be created as 'V_TABLE_NAME'. If you don't like the 'V_' you can enter your own. You can also reference the object and model name with variables, as shown in the wizard. I'm going to go with something a little more personal. The generated views appear as the little green boxes in the diagram. Can't find your views? They should be grouped together in your diagram. Don't forget to use the Navigator to easily find and navigate to those model diagram objects!

    Step Four: Generate the DDL. OK, let's use the Generate DDL button on the toolbar. Un-check everything but your views. If you used a prefix, take advantage of that to create a filter; you might have existing views in your model that you don't want to include, right? Once you click 'OK' the DDL will be generated.
        -- Generated by Oracle SQL Developer Data Modeler 4.0.0.825
        -- at: 2013-11-04 10:26:39 EST
        -- site: Oracle Database 11g
        -- type: Oracle Database 11g

        CREATE OR REPLACE VIEW HR.TJS_BLOG_COUNTRIES (COUNTRY_ID, COUNTRY_NAME, REGION_ID) AS
        SELECT COUNTRY_ID, COUNTRY_NAME, REGION_ID FROM HR.COUNTRIES;

        CREATE OR REPLACE VIEW HR.TJS_BLOG_EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID) AS
        SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, EMAIL, PHONE_NUMBER, HIRE_DATE, JOB_ID, SALARY, COMMISSION_PCT, MANAGER_ID, DEPARTMENT_ID FROM HR.EMPLOYEES;

        CREATE OR REPLACE VIEW HR.TJS_BLOG_JOBS (JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY) AS
        SELECT JOB_ID, JOB_TITLE, MIN_SALARY, MAX_SALARY FROM HR.JOBS;

        CREATE OR REPLACE VIEW HR.TJS_BLOG_JOB_HISTORY (EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID) AS
        SELECT EMPLOYEE_ID, START_DATE, END_DATE, JOB_ID, DEPARTMENT_ID FROM HR.JOB_HISTORY;

        CREATE OR REPLACE VIEW HR.TJS_BLOG_LOCATIONS (LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID) AS
        SELECT LOCATION_ID, STREET_ADDRESS, POSTAL_CODE, CITY, STATE_PROVINCE, COUNTRY_ID FROM HR.LOCATIONS;

        CREATE OR REPLACE VIEW HR.TJS_BLOG_REGIONS (REGION_ID, REGION_NAME) AS
        SELECT REGION_ID, REGION_NAME FROM HR.REGIONS;

        -- Oracle SQL Developer Data Modeler Summary Report:
        -- CREATE VIEW 6
        -- (all other object counts 0; ERRORS 0; WARNINGS 0)

    You can then choose to save this to a file or not. This has a few steps, but as the number of tables in your system increases, so does the amount of time this feature can save you!
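
    (As a hedged aside, not part of the wizard flow: if all you need is a simple 1:1 view per table and no model, a quick data-dictionary query can emit similar DDL.)

        -- Generate CREATE VIEW statements for every table in the
        -- current schema; spool or run the output as desired.
        SELECT 'CREATE OR REPLACE VIEW V_' || table_name ||
               ' AS SELECT * FROM ' || table_name || ';'
          FROM user_tables
         ORDER BY table_name;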

  • Best Practices for Handing over Legacy Code

    - by PersonalNexus
    In a couple of months a colleague will be moving on to a new project and I will be inheriting one of his projects. To prepare, I have already ordered Michael Feathers' Working Effectively with Legacy Code. But this book, as well as most questions on legacy code I have found so far, is concerned with the case of inheriting code as-is. In this case I actually have access to the original developer and we do have some time for an orderly hand-over.

    Some background on the piece of code I will be inheriting:

    - It's functioning: there are no known bugs, but as performance requirements keep going up, some optimizations will become necessary in the not too distant future.
    - Undocumented: there is pretty much zero documentation at the method and class level. What the code is supposed to do at a higher level, though, is well understood, because I have been writing against its API (as a black box) for years.
    - Only higher-level integration tests: there are only integration tests testing proper interaction with other components via the API (again, black box).
    - Very low-level, optimized for speed: because this code is central to an entire system of applications, a lot of it has been optimized several times over the years and is extremely low-level (one part has its own memory manager for certain structs/records).
    - Concurrent and lock-free: while I am very familiar with concurrent and lock-free programming and have actually contributed a few pieces to this code, this adds another layer of complexity.
    - Large codebase: this particular project is more than ten thousand lines of code, so there is no way I will be able to have everything explained to me.
    - Written in Delphi: I'm just going to put this out there, although I don't believe the language to be germane to the question, as I believe this type of problem to be language-agnostic.

    I was wondering how the time until his departure would best be spent. Here are a couple of ideas:

    - Get everything to build on my machine: even though everything should be checked into source code control, who hasn't forgotten to check in a file once in a while? So this should probably be the first order of business.
    - More tests: while I would like more class-level unit tests, so that any bugs I introduce when making changes can be caught early on, the code as it is now is not testable (huge classes, long methods, too many mutual dependencies).
    - What to document: I think for starters it would be best to focus documentation on those areas of the code that would otherwise be difficult to understand, e.g. because of their low-level/highly optimized nature. I am afraid there are a couple of things in there that might look ugly and in need of refactoring/rewriting, but are actually optimizations that are in there for a good reason that I might miss (cf. Joel Spolsky, Things You Should Never Do, Part I).
    - How to document: I think some class diagrams of the architecture and sequence diagrams of critical functions, accompanied by some prose, would be best.
    - Who documents: I was wondering what would be better, to have him write the documentation or to have him explain it to me so I can write the documentation. I am afraid that things that are obvious to him but not to me would otherwise not be covered properly.
    - Refactoring using pair-programming: this might not be possible due to time constraints, but maybe I could refactor some of his code to make it more maintainable while he is still around to provide input on why things are the way they are.
Please comment on and add to this. Since there isn't enough time to do all of this, I am particularly interested in how you would prioritize.

  • IE9 not rendering box-shadow Elements inside of Table Cells

    - by Rick Strahl
    Ran into an annoying problem today with IE 9. Slowly updating some older sites with CSS 3 tags, and for the most part IE 9 does a reasonably decent job of working with the new CSS 3 features. Not all by a long shot, but at least some of the more useful ones like border-radius and box-shadow are supported. Until today I was happy to see that IE supported box-shadow just fine, but I ran into a problem with some old markup that uses tables for its main layout sections. I found that inside of a table cell IE fails to render a box-shadow. (The original post shows images from Chrome on the left and IE 9 on the right of the same content.) The download and purchase images are rendered with:

        <a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a>

    where the .boxshadow and .roundbox styles look like this:

        .boxshadow {
            -moz-box-shadow: 3px 3px 5px #535353;
            -webkit-box-shadow: 3px 3px 5px #535353;
            box-shadow: 3px 3px 5px #535353;
        }
        .roundbox {
            -moz-border-radius: 6px 6px 6px 6px;
            -webkit-border-radius: 6px;
            border-radius: 6px 6px 6px 6px;
        }

    And the Problem is… collapsed Table Borders

    Now normally these two styles work just fine in IE 9 when applied to elements. But the box-shadow doesn't work inside of this markup, because the parent container is a table cell:

        <td class="sidebar" style="border-collapse: collapse">
        …
        <a href="download.asp" style="display:block;margin: 10px;"><img src="../images/download.gif" class="boxshadow roundbox" /></a>
        …
        </td>

    This HTML causes the image to not show a shadow. In actuality I'm not styling inline, but as part of my browser Reset I have the following in my master .css file:

        table {
            border-collapse: collapse;
            border-spacing: 0;
        }

    which has the same effect as the inline style. border-collapse by default inherits from the parent, and so the TD inherits from table and tr; TD tags are effectively collapsed. You can check out a test document that demonstrates this behavior here in this CodePaste.net snippet or run it here.

    How to work around this Issue

    To get IE 9 to render the shadows inside of the TD tag correctly, I can just change the style explicitly NOT to use border-collapse:

        <td class="sidebar" style="border-collapse: separate; border-width: 0;">

    Or better yet (thanks to David's comment below), you can add border-collapse: separate to the .boxshadow style like this:

        .boxshadow {
            -moz-box-shadow: 3px 3px 5px #535353;
            -webkit-box-shadow: 3px 3px 5px #535353;
            box-shadow: 3px 3px 5px #535353;
            border-collapse: separate;
        }

    With either of these approaches IE renders the shadows correctly.

    Do you really need border-collapse?

    Should you bother with border-collapse? I think so! Collapsed borders render flat as a single fat line if a border-width and border-color are assigned, while separated borders render a thin line with a bunch of weird white space around it, or worse, render an old skool 3D raised border, which is terribly ugly as well. So as a matter of course in any app my browser Reset includes the above code to make sure all tables with borders render the same flat borders. As you probably know, IE has all sorts of rendering issues in tables and on backgrounds (opacity backgrounds or image backgrounds), most of which is caused by the way that IE internally uses ActiveX filters to apply these effects. Apparently collapsed borders are yet one more item that causes problems with rendering. There you have it. Another crappy failure in IE we have to check for now, just one more reason to hate Internet Explorer.
    Luckily this one has a reasonably easy workaround. I hope this helps out somebody and saves them the hour I spent trying to figure out what caused this problem in the first place.

    Resources: Sample HTML document that demonstrates the behavior · Run the Sample

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in HTML, Internet Explorer

  • Code Contracts: validating arrays and collections

    - by DigiMortal
    Validating collections before using them is a very common task when we use the built-in generic types for our collections. In this posting I will show you how to validate collections using code contracts. It is cool how much awful-looking code you can avoid using code contracts.

    Failing code

    Let's suppose we have a method that calculates the sum of all invoices in a collection. We have a class Invoice and one of its properties is Sum. I don't introduce any complex calculations on invoices here because we have another problem to solve in this topic. Here is our code:

        public static decimal CalculateTotal(IList<Invoice> invoices)
        {
            var sum = invoices.Sum(p => p.Sum);
            return sum;
        }

    This method is very simple, but it fails when the invoices list contains at least one null. Of course, we can test if an invoice is null, but having nulls in lists like this is not a good idea; it opens the way for other coding bugs in the system. Our goal is to react to bugs ASAP, at the nearest place they occur. There is one more way to make our method fail: it happens when invoices is null. I think it is also a common bug during development, and it even happens in production environments under conditions that are rarely met. Now let's protect our little calculation method with code contracts. We need two contracts:

    - invoices cannot be null
    - invoices cannot contain any nulls

    Our first contract is easy, but how do we write the second one?

    Solution: Contract.ForAll

    Preconditions in code are checked using the Contract.Requires method. This method takes a boolean value as its argument that says whether the contract holds or not. There is also a method, Contract.ForAll, that takes a collection and a predicate that must hold for that collection. The nice thing is that ForAll returns a boolean. So we have a very simple solution:

        public static decimal CalculateTotal(IList<Invoice> invoices)
        {
            Contract.Requires(invoices != null);
            Contract.Requires(Contract.ForAll<Invoice>(invoices, p => p != null));

            var sum = invoices.Sum(p => p.Sum);
            return sum;
        }

    And here are some lines of code you can use to test the contracts quickly:

        var invoices = new List<Invoice>();
        invoices.Add(new Invoice());
        invoices.Add(null);
        invoices.Add(new Invoice());
        //CalculateTotal(null);
        CalculateTotal(invoices);

    If your code is covered with unit tests, then I suggest you write tests to check that these contracts hold for every code run.

    Conclusion

    Although it seemed at first that checking all elements in a collection might end up in for-loops that do not look so nice, we were able to solve our problem nicely. The ForAll method of the Contract class offers a simple mechanism to check collections, and it does it smoothly, the code-contracts way. P.S. I suggest you also read the devlicio.us blog posting Validating Collections with Code Contracts by Derik Whittaker.
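
    (A hedged practical note: Contract.Requires(bool) is compiled conditionally on the CONTRACTS_FULL symbol, so these checks only actually execute when runtime contract checking, i.e. the ccrewrite binary rewriter, is enabled in the project's Code Contracts settings; otherwise the calls silently drop out of the build.)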

  • EF Code First to SQL Azure

    - by Predrag Pejic
    I am using EF Code First to create a database on local .\SQLEXPRESS. Among others. I have these 2 classes: public class Shop { public int ShopID { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "You must enter a name!")] [MaxLength(25, ErrorMessage = "Name must be 25 characters or less")] public string Name { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "You must enter an address!")] [MaxLength(30, ErrorMessage = "Address must be 30 characters or less")] public string Address { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "You must enter a valid city name!")] [MaxLength(30, ErrorMessage = "City name must be 30 characters or less")] public string City { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "You must enter a phone number!")] [MaxLength(14, ErrorMessage = "Phone number must be 14 characters or less")] public string Phone { get; set; } [MaxLength(100, ErrorMessage = "Description must be 50 characters or less")] public string Description { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "You must enter a WorkTime!")] public DateTime WorkTimeBegin { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "You must enter a WorkTime!")] public DateTime WorkTimeEnd { get; set; } public DateTime? SaturdayWorkTimeBegin { get; set; } public DateTime? SaturdayWorkTimeEnd { get; set; } public DateTime? SundayWorkTimeBegin { get; set; } public DateTime? SundayWorkTimeEnd { get; set; } public int ShoppingPlaceID { get; set; } public virtual ShoppingPlace ShoppingPlace { get; set; } public virtual ICollection<Category> Categories { get; set; } } public class ShoppingPlace { [Key] public int ShopingplaceID { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "You must enter a name!")] [MaxLength(25, ErrorMessage = "Name must be 25 characters or less")] public string Name { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "You must enter an address!")] [MaxLength(50, ErrorMessage = "Address must be 50 characters or less")] public string Address { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "You must enter a city name!")] [MaxLength(30, ErrorMessage = "City must be 30 characters or less")] public string City { get; set; } [Required(AllowEmptyStrings = false, ErrorMessage = "You must enter a valid phone number!")] [MaxLength(14, ErrorMessage = "Phone number must be 14 characters or less")] public string Phone { get; set; } public int ShoppingCenterID { get; set; } public virtual ShoppingCenter ShoppingCenter { get; set; } public virtual ICollection<Shop> Shops { get; set; } } and a method in DbContext: modelBuilder.Entity<Item>() .HasRequired(p => p.Category) .WithMany(a => a.Items) .HasForeignKey(a => a.CategoryID) .WillCascadeOnDelete(false); modelBuilder.Entity<Category>() .HasRequired(a => a.Shop) .WithMany(a => a.Categories) .HasForeignKey(a => a.ShopID) .WillCascadeOnDelete(false); modelBuilder.Entity<Shop>() .HasOptional(a => a.ShoppingPlace) .WithMany(a => a.Shops) .HasForeignKey(a => a.ShoppingPlaceID) .WillCascadeOnDelete(false); modelBuilder.Entity<ShoppingPlace>() .HasOptional(a => a.ShoppingCenter) .WithMany(a => a.ShoppingPlaces) .HasForeignKey(a => a.ShoppingCenterID) .WillCascadeOnDelete(false); Why I can't create Shop without creating and populating ShopingPlace. How to achieve that? 
    EDIT: Tried with:

        modelBuilder.Entity<Shop>()
            .HasOptional(a => a.ShoppingPlace)
            .WithOptionalPrincipal();

        modelBuilder.Entity<ShoppingPlace>()
            .HasOptional(a => a.ShoppingCenter)
            .WithOptionalPrincipal();

    and it passed, but what is the difference? And why in SQL Server am I allowed to see ShoppingPlaceID and ShoppingPlace_ShopingPlaceID when in the case of Item and Category I see only one?
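
    A hedged observation on the code above (not from the original post): EF cannot treat Shop.ShoppingPlaceID as an optional association while the property is a non-nullable int, so one common fix is to declare the foreign key as nullable and keep the HasOptional/WithMany mapping, along the lines of:

        // In Shop: a nullable FK makes the relationship genuinely optional
        public int? ShoppingPlaceID { get; set; }
        public virtual ShoppingPlace ShoppingPlace { get; set; }

        // In OnModelCreating:
        modelBuilder.Entity<Shop>()
            .HasOptional(a => a.ShoppingPlace)
            .WithMany(a => a.Shops)
            .HasForeignKey(a => a.ShoppingPlaceID)
            .WillCascadeOnDelete(false);

    That would also explain the two columns: the WithOptionalPrincipal() variant no longer maps the declared ShoppingPlaceID property as the relationship's foreign key, so EF generates its own independent-association column (ShoppingPlace_ShopingplaceID) while the now-unmapped int column remains; with the nullable-FK mapping there is only one column again.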

  • Code review - PHP syntax error unexpected $end

    - by dtufano
    Hey guys! I keep getting a syntax error (unexpected $end), and I've isolated it to this chunk of code. I can't for the life of me see any closure issues. It's probably something obvious but I'm going nutty trying to find it. Would appreciate an additional set of eyes. function generate_pagination( $base_url, $num_items, $per_page, $start_item, $add_prevnext_text = TRUE ) { global $lang; if ( $num_items == 0 ) { } else { $total_pages = ceil( $num_items / $per_page ); if ( $total_pages == 1 ) { return ""; } $on_page = floor( $start_item / $per_page ) + 1; $page_string = ""; if ( 8 < $total_pages ) { $init_page_max = 2 < $total_pages ? 2 : $total_pages; $i = 1; for ( ; $i < $init_page_max + 1; ++$i ) { $page_string .= $i == $on_page ? "<font face='verdana' size='2'><b>[{$i}]</b></font>" : "<a href=\"".$base_url."&amp;offset=".( $i - 1 ) * $per_page."\">{$i}</a>"; if ( $i < $init_page_max ) { $page_string .= ", "; } } if ( 2 < $total_pages ) { if ( 1 < $on_page && $on_page < $total_pages ) { $page_string .= 4 < $on_page ? " ... " : ", "; $init_page_min = 3 < $on_page ? $on_page : 4; $init_page_max = $on_page < $total_pages - 3 ? $on_page : $total_pages - 3; $i = $init_page_min - 1; for ( ; $i < $init_page_max + 2; ++$i ) { $page_string .= $i == $on_page ? "<font face='verdana' size='2'><b>[{$i}]</b></font>" : "<a href=\"".$base_url."&amp;offset=".( $i - 1 ) * $per_page."\">{$i}</a>"; if ( $i < $init_page_max + 1 ) { $page_string .= ", "; } } $page_string .= $on_page < $total_pages - 3 ? " ... " : ", "; } else { $page_string .= " ... "; } $i = $total_pages - 1; for ( ; $i < $total_pages + 1; ++$i ) { $page_string .= $i == $on_page ? "<font face='verdana' size='2'><b>[{$i}]</b></font>" : "<a href=\"".$base_url."&amp;offset=".( $i - 1 ) * $per_page."\">{$i}</a>"; if ( $i < $total_pages ) { $page_string .= ", "; } } continue; } } else { do { $i = 1; for ( ; $i < $total_pages + 1; ++$i) { $page_string .= $i == $on_page ? "<font face='verdana' size='2'><b>[{$i}]</b></font>" : "<a href=\"".$base_url."&amp;offset=".( $i - 1 ) * $per_page."\">{$i}</a>"; if ( $i < $total_pages ) { $page_string .= ", "; break; } } } while (0); if ( 1 < $on_page ) { $page_string = " <font size='2'><a href=\"".$base_url."&amp;offset=".( $on_page - 2 ) * $per_page."\">"."&laquo;"."</a></font>&nbsp;&nbsp;".$page_string; } if ( $on_page < $total_pages ) { $page_string .= "&nbsp;&nbsp;<font size='2'><a href=\"".$base_url."&amp;offset=".$on_page * $per_page."\">"."&raquo;"."</a></font>"; } $page_string = "Pages ({$total_pages}):"." ".$page_string; return $page_string; } }
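
    (A hedged general tip rather than a fix for this specific function: "unexpected $end" means the parser hit end-of-file while a construct was still open, so the culprit is almost always an unbalanced brace or an unterminated string somewhere above. An editor's brace matching will localize it, and PHP's built-in linter confirms the error without executing anything:)

        php -l pagination.php   # -l = lint; the file name here is hypothetical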

  • Code Access Security and Sharepoint WebParts

    - by Gordon Carpenter-Thompson
    I've got a vague handle on how Code Access Security works in Sharepoint. I have developed a custom webpart and setup a CAS policy in my Manifest <CodeAccessSecurity> <PolicyItem> <PermissionSet class="NamedPermissionSet" version="1" Description="Permission set for Okana"> <IPermission class="Microsoft.SharePoint.Security.SharePointPermission, Microsoft.SharePoint.Security, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" version="1" ObjectModel="True" Impersonate="True" /> <IPermission class="SecurityPermission" version="1" Flags="Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration" /> <IPermission class="AspNetHostingPermission" version="1" Level="Medium" /> <IPermission class="DnsPermission" version="1" Unrestricted="true" /> <IPermission class="EventLogPermission" version="1" Unrestricted="true"> <Machine name="localhost" access="Administer" /> </IPermission> <IPermission class="EnvironmentPermission" version="1" Unrestricted="true" /> <IPermission class="System.Configuration.ConfigurationPermission, System.Configuration, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" version="1" Unrestricted="true"/> <IPermission class="System.Net.WebPermission, System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true" /> <IPermission class="System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" Unrestricted="true" /> <IPermission class="System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" version="1" Unrestricted="true" PathDiscovery="*AllFiles*" /> <IPermission class="IsolatedStorageFilePermission" version="1" Allowed="AssemblyIsolationByUser" UserQuota="9223372036854775807" /> <IPermission class="PrintingPermission" version="1" Level="DefaultPrinting" /> <IPermission class="PerformanceCounterPermission" version="1"> <Machine name="localhost"> <Category name="Enterprise Library Caching Counters" access="Write"/> <Category name="Enterprise Library Cryptography Counters" access="Write"/> <Category name="Enterprise Library Data Counters" access="Write"/> <Category name="Enterprise Library Exception Handling Counters" access="Write"/> <Category name="Enterprise Library Logging Counters" access="Write"/> <Category name="Enterprise Library Security Counters" access="Write"/> </Machine> </IPermission> <IPermission class="ReflectionPermission" version="1" Unrestricted="true"/> <IPermission class="SecurityPermission" version="1" Flags="SerializationFormatter, UnmanagedCode, Infrastructure, Assertion, Execution, ControlThread, ControlPrincipal, RemotingConfiguration, ControlAppDomain,ControlDomainPolicy" /> <IPermission class="SharePointPermission" version="1" ObjectModel="True" /> <IPermission class="SmtpPermission" version="1" Access="Connect" /> <IPermission class="SqlClientPermission" version="1" Unrestricted="true"/> <IPermission class="WebPartPermission" version="1" Connections="True" /> <IPermission class="WebPermission" version="1"> <ConnectAccess> <URI uri="$OriginHost$"/> </ConnectAccess> </IPermission> </PermissionSet> <Assemblies> .... </Assemblies> This is correctly converted into a wss_custom_wss_minimaltrust.config when it is deployed onto the Sharepoint server and mostly works. 
    To get the WebPart working fully, however, I find that I need to modify the wss_custom_wss_minimaltrust.config by hand after deployment and set Unrestricted="true" on the permission set, changing

        <PermissionSet class="NamedPermissionSet" version="1" Description="Permission set for MyApp" Name="mywebparts.wsp-86d8cae1-7db2-4057-8c17-dc551adb17a2-1">

    to

        <PermissionSet class="NamedPermissionSet" version="1" Description="Permission set for MyApp" Name="mywebparts.wsp-86d8cae1-7db2-4057-8c17-dc551adb17a2-1" Unrestricted="true">

    It's all because I'm loading a User Control from the webpart. I don't believe there is a way to enable that using CAS, but I am willing to be proven wrong. Is there a way to set something in the manifest so I don't need to make this fix by hand? Thanks

  • SQL Developer Quick Tip: Reordering Columns

    - by thatjeffsmith
    Do you find yourself always scrolling and scrolling and scrolling to get to the column you want to see when looking at a table or view’s data? Don’t do that! Instead, just right-click on the column headers, select ‘Columns’, and reorder as desired. Access the Manage Columns dialog Then move up the columns you want to see first… Put them in the order you want – it won’t affect the database. Now I see the data I want to see, when I want to see it – no scrolling. This will only change how the data is displayed for you, and SQL Developer will remember this ordering until you ‘Delete Persisted Settings…’ What IS Remembered Via These ‘Persisted Settings?’ Column Widths Column Sorts Column Positions Find/Highlights This means if you manipulate one of these settings, SQL Developer will remember them the next time you open the tool and go to that table or view. Don’t know what I mean by ‘Find/Highlight?’ Find and highlight values in a grid with Ctrl+F

  • SQL SERVER – Removing Leading Zeros From Column in Table

    - by pinaldave
    Some questions surprise me and make me write code which I have never explored before. Today was a similar experience as well. I have always received the question of how to preserve leading zeroes in SQL Server while displaying them in SSMS or another application. I have written articles on this subject over here:

    - SQL SERVER – Pad Right Side of Number with 0 – Fixed Width Number Display
    - SQL SERVER – UDF – Pad Right Side of Number with 0 – Fixed Width Number Display
    - SQL SERVER – Preserve Leading Zero While Copying to Excel from SSMS

    Today I received a very different question, where the user wanted to remove leading zeros and white space. I am using the same sample sent by the user in this example.

        USE tempdb
        GO
        -- Create sample table
        CREATE TABLE Table1 (Col1 VARCHAR(100))
        INSERT INTO Table1 (Col1)
        SELECT '0001'
        UNION ALL SELECT '000100'
        UNION ALL SELECT '100100'
        UNION ALL SELECT '000 0001'
        UNION ALL SELECT '00.001'
        UNION ALL SELECT '01.001'
        GO
        -- Original data
        SELECT * FROM Table1
        GO
        -- Remove leading zeros
        SELECT SUBSTRING(Col1, PATINDEX('%[^0 ]%', Col1 + ' '), LEN(Col1))
        FROM Table1
        GO
        -- Clean up
        DROP TABLE Table1
        GO

    Here is the resultset of the above script. It will remove any leading zero or space and will display the number accordingly. This is a very generic problem, and I am confident there are alternate solutions to it as well. If you have an alternate solution, or can suggest sample data which does not satisfy the SUBSTRING solution proposed, I will be glad to include them in a follow-up blog post with due credit.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
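
    To see why the expression works, a quick walk-through on one of the sample values (easy to verify by running it):

        -- In '000 0001' + ' ', the pattern [^0 ] matches the first
        -- character that is neither '0' nor a space: the '1' at position 8.
        SELECT PATINDEX('%[^0 ]%', '000 0001' + ' ');      -- returns 8
        SELECT SUBSTRING('000 0001', 8, LEN('000 0001'));  -- returns '1'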

  • SQL SERVER – Removing Leading Zeros From Column in Table – Part 2

    - by pinaldave
    Earlier I wrote a blog post about Removing Leading Zeros from Column in Table. It was a great coincidence that my friend Madhivanan (no need of an introduction for him) also posted a similar article over on BeyondRelational.com. I strongly suggest reading his blog as well, as he has suggested some cool solutions to the same problem. The original blog post asked two questions: 1) whether my sample for testing was correct, and 2) whether there is any better method to achieve the same. The response was amazing. I am proud of our SQL community; we all keep improving on each other's contributions. There are some really good suggestions in the comments. Let us go over them right now.

    Improving the Resultset

    I had missed including all-zero values in my sample set, which was an oversight. Here is the new sample, which includes all-zero values as well:

        USE tempdb
        GO
        -- Create sample table
        CREATE TABLE Table1 (Col1 VARCHAR(100))
        INSERT INTO Table1 (Col1)
        SELECT '0001'
        UNION ALL SELECT '000100'
        UNION ALL SELECT '100100'
        UNION ALL SELECT '000 0001'
        UNION ALL SELECT '00.001'
        UNION ALL SELECT '01.001'
        UNION ALL SELECT '0000'
        GO

    Now let us go over some of the fantastic solutions we have received.

    Response from Rainmaker:

        SELECT CASE PATINDEX('%[^0 ]%', Col1 + ' ')
               WHEN 0 THEN ''
               ELSE SUBSTRING(Col1, PATINDEX('%[^0 ]%', Col1 + ' '), LEN(Col1))
               END
        FROM Table1

    Response from Harsh, solution 1:

        SELECT SUBSTRING(Col1, PATINDEX('%[^0 ]%', Col1 + 'a'), LEN(Col1))
        FROM Table1

    Response from Harsh, solution 2:

        SELECT RIGHT(Col1, LEN(Col1) + 1 - PATINDEX('%[^0 ]%', Col1 + 'a'))
        FROM Table1

    Response from lucazav:

        SELECT T.Col1,
               label = CAST(CAST(REPLACE(T.Col1, ' ', '') AS FLOAT) AS VARCHAR(10))
        FROM Table1 AS T

    Response from iamAkashSingh:

        SELECT REPLACE(LTRIM(REPLACE(col1, '0', ' ')), ' ', '0')
        FROM table1

    Here is the resultset of the above scripts. Each will remove any leading zeros or spaces and display the number accordingly. If you believe there is a better solution, please leave a comment. I am just glad to see so many varied responses, and all of them teach us something new.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
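
    A small illustration of why Harsh's solutions append a sentinel character such as 'a' (and why Rainmaker needs the CASE): with an all-zero value the pattern otherwise never matches:

        SELECT PATINDEX('%[^0 ]%', '0000' + ' ');  -- 0: no match at all
        SELECT PATINDEX('%[^0 ]%', '0000' + 'a');  -- 5: matches the sentinel
        SELECT SUBSTRING('0000', 5, LEN('0000'));  -- '': clean empty result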

  • ODI 11g - Dynamic and Flexible Code Generation

    - by David Allan
    ODI supports conditional branching at execution time in its code generation framework. This is a little-used, little-known, but very powerful capability: it lets one piece of template code behave dynamically based on a runtime variable's value, for example. Generally, knowledge modules are free of any variable dependency. Using variables within a knowledge module for this kind of dynamic capability is a valid use case, definitely in the highly specialized area.

    The example I will illustrate is much simpler: how to define a filter (based on a mapping here) that may or may not be included depending on whether a certain value is defined for a variable at runtime. I define a variable V_COND; if I set this variable's value to 1, then I will include the filter condition 'EMP.SAL > 1', otherwise I will just use '1=1' as the filter condition. I use ODI's substitution tags, specifically the special tag '<$', which is processed just prior to execution in the runtime code; this code is included in the ODI scenario code, and it is processed after variables are substituted (unlike the '<?' tag). So the lines below are not equal:

        <$ if ( "#V_COND".equals("1") ) { $> EMP.SAL > 1 <$ } else { $> 1 = 1 <$ } $>

        <? if ( "#V_COND".equals("1") ) { ?> EMP.SAL > 1 <? } else { ?> 1 = 1 <? } ?>

    When the '<?' code is evaluated, it is executed without variable substitution, so we do not get the desired semantics; we must use the '<$' code. The Jython (Java) code is the conditional if statement that drives whether 'EMP.SAL > 1' or '1=1' is included in the generated code.

    For this illustration you need at least the ODI 11.1.1.6 release; with the vanilla 11.1.1.5 release it didn't work for me (maybe patches?). As I mentioned, normally KMs don't have dependencies on variables, since any users must then have these variables defined, etc., but it does afford a lot of runtime flexibility if such capabilities are required. Something to keep in mind, definitely.

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desirable features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mapping; 2) a background thread to auto-commit long-running transactions; 3) enhancements in binlog performance. Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support multiple table mapping

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Thus, being able to support access to different tables at run time, and having different mappings for different connections, becomes a very desirable feature; in this GA release, we allow users to do both. We will discuss the key concepts and key steps in using this feature.

    1) "mapping name" in the "get" and "set" commands. In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in their get or set commands, in the form "get @@new_mapping_name.key"; the prefix "@@" is required for signaling a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter; by default, it is ".". So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user does:

        get @@setup_3.key_a

    (this will also output the value corresponding to key "key_a") or simply:

        get @@setup_3

    Once this is done, the connection switches to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue to issue regular commands like:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter. For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

        INSERT INTO config_options VALUES("table_map_delimiter", ".");

    So if users want a different delimiter, they can change it in the config_options table.

    3) Default mapping. Once we have multiple table mappings, there should always be a "default" mapping. For this, we decided that if there exists a mapping name of "default", it will be chosen as the default mapping; otherwise, the first row of the containers table is chosen as the default. Please note, user tables can be repeated in the "containers" table (for example, if users want to access different columns of the same table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) bind command. In addition, we also extended the protocol and added a bind command. Its usage is fairly straightforward: to switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This switches this connection's InnoDB table to "my_database/my_demo". In summary, with this feature you can now direct access to different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long running transactions

    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature lets us batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view uncommitted operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition, certain row locks will be held for extended periods of time, which reduces concurrency.

    To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If a transaction has been idle longer than a certain limit and is not being used, it commits the transaction. The limit is configurable via "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you no longer need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to large numbers. This also reduces the number of locks held due to long-running transactions, further increasing concurrency.

    Enhancement in binlog performance

    As you may know, binlog operations are not done by the InnoDB storage engine; they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs (such as THD) in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs for each operation. This required us to set the "batch" size to 1 whenever binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and there was also quite a bit of overhead in creating and destroying these constructs that bogged performance down.

    With this release, we made the necessary changes to keep the MySQL constructs for as long as they are valid for a particular connection, so there are no repeated, redundant open and close (table) calls. Now, even with the binlog option enabled (innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. There is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, but it is much better compared to the previous release, and we are continuing to optimize this area to improve performance as much as possible.

    Performance Study

    Amerandra of our System QA team has conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, and they show some interesting results. The test was conducted on a Linux 2.6.32-300.7.1.el6uek.x86_64 machine with 16 GB memory, two 4-core Intel Xeon 2.0 GHz x86_64 CPUs, and 2 RAID disks (1027 GB, 733.9 GB). Results are described in the following tables.

    Table 1: Performance comparison on Set operations (QPS)

        Connections | Memcached plugin, 8 threads*** | 5.6.7-RC* SQL set** | X faster
        8           | 30,000                         | 5,600               | 5.36
        32          | 59,000                         | 13,000              | 4.54
        128         | 68,000                         | 8,000               | 8.50
        512         | 63,000                         | 6,800               | 9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, when implemented over SQL, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted; if it does, the "value" field of the matching row (by key) is updated. In the test above it is a precompiled stored procedure, and the query just executes the procedure.
    *** added "–daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on Get operations (QPS)

        Connections | Memcached plugin, 8 threads | 5.6.7-RC SQL get | X faster
        8           | 42,000                      | 27,000           | 1.56
        32          | 101,000                     | 55,000           | 1.83
        128         | 117,000                     | 52,000           | 2.25
        512         | 109,000                     | 52,000           | 2.10

    With the "get" query (or the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary

    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We now also provide a background commit thread to commit long-running idle transactions, thus allowing users to configure large batch writes/reads without worrying about a large number of rows being locked or uncommitted data being invisible. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance enhancement and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs.

    Jimmy Yang, September 29, 2012
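
    Putting the pieces together, a hedged sample session over the standard memcached text protocol (assuming the three containers rows registered above; server responses omitted, and the # comments are for the reader, not part of the protocol):

        get @@setup_3.key_a      # switch this connection to my_database/my_demo, fetch key_a
        set key_c 0 0 5          # still my_database/my_demo; 5-byte value follows
        world
        bind setup_1             # switch to test/demo_test
        get key_c                # should now miss: key_c lives in the other table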

  • AJI Report #15 – Zac Harlan Talks About Iowa Code Camp

    - by Jeff Julian
    We sit down with Zac Harlan and talk about Iowa Code Camp, what makes up a Code Camp, and how to start your own Code Camp. Zac has been part of the leadership team for Iowa Code Camp for a few years and is the Development Manager for JP Cycles. We also get into what it takes to speak at a Code Camp if you are interested in growing beyond the user group as a speaker.

    Listen to the Show
    Site: LinkedIn Profile
    Blog: Zac Harlan
    Twitter: @ZacHarlan
