Search Results

Search found 14737 results on 590 pages for 'dynamic tables'.


  • Making dynamic images have static filenames

    - by michaeltk
    My website currently has various links to a php script that generates the images dynamically. For example, the link may say "img source="/dynamic_images.php?type=pie-chart&color=red" Obviously, this is not great for SEO. I'd like to somehow make the filenames of these links appear to be static, and use a solution (like Mod-Rewrite) to ensure that the images can still be dynamically created. I suppose I could have something like "img src="average-profits-in-scuba-diving-industry.png?type=pie-chart&color=red" (and use Mod-Rewrite to take care of changing the filename prefix to dynamic_images.php), but I'm afraid that the search engines would shy away from the querystring on the end of the image filename. Any solutions? Thanks in advance.
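
    A rough sketch of the mod_rewrite idea described above (assuming Apache; the URL pattern, parameter names, and paths are illustrative only): encode the parameters into the path itself so no query string is left on the image URL, and rewrite back to the PHP script internally.

        # .htaccess sketch: /charts/pie-chart/red/average-profits-in-scuba-diving-industry.png
        # is served by dynamic_images.php without exposing a query string to crawlers.
        RewriteEngine On
        RewriteRule ^charts/([a-z-]+)/([a-z]+)/([a-z0-9-]+)\.png$ /dynamic_images.php?type=$1&color=$2&slug=$3 [L]

    The browser and search engines only ever see the .png URL; the rewrite is internal, so the script still generates the image dynamically.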

    Read the article

  • How to use jQuery to remove dynamic content?

    - by ed.talmadge
    I have an html page that is generated by a CMS. I cannot modify the page, but I can add JavaScript. Each time the page loads, a JavaScript function (that I cannot modify) dynamically inserts a paragraph onto the page. How can I use jQuery to .remove() that paragraph whenever it is loaded? For example, when the page first loads, it looks like this (blank): <div></div> Then, a few seconds later, a JavaScript function (that I have no control over) adds a paragraph to the page. The page then looks like this: <div><p id="foo">bar</p></div> How can I use jQuery to remove the paragraph with id=foo each time it is dynamically loaded onto the page?
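
    One possible approach, sketched under the assumption that the injected paragraph always arrives with id="foo" and that the target browsers support MutationObserver: watch for DOM changes and remove the element whenever it appears.

        // Remove #foo every time something is added to the document.
        var observer = new MutationObserver(function () {
            $('#foo').remove();
        });
        observer.observe(document.body, { childList: true, subtree: true });

    On older browsers, a short setInterval poll that calls the same $('#foo').remove() achieves the same effect, at the cost of the paragraph possibly flashing briefly.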

    Read the article

  • Android WebView returns a blank page when loading a dynamic HTML page

    - by user2962555
    I am trying to click one button to load a page into a div block dynamically. To test it, I try to append a list item with text "abc" into the loaded page. However, I always get a blank page. load function works fine because if I try to load a static page, it works. Following is my main html page code. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>LoadPageTest</title> <link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Open+Sans:300,400,700"> <link rel="stylesheet" href="./css/customizedstyle.css"> <link rel="stylesheet" href="./css/themes/default/jquery.mobile-1.4.3.min.css"> <link rel="stylesheet" href="./css/jqm-demos.css"> <script src="./js/jquery.js"></script> <script scr="./js/customizedjs.js"></script> <script src="./js/jquery.mobile-1.4.3.min.js"></script> <script> $( document ).on( "pagecreate", "#demo-page", function() { $( document ).on( "swipeleft swiperight", "#demo-page", function( e ) { if ( $( ".ui-page-active" ).jqmData( "panel" ) !== "open" ) { if ( e.type === "swipeleft" ) { $( "#right-panel" ).panel( "open" ); } } }); }); </script> <style type="text/css"> body { overflow:hidden; } </style> </head> <body style= "overflow:hidden" scrolling="no"> <style type="text/css"> body { overflow:hidden; } </style> <div data-role="page" id="main-page" style= "overflow:hidden" scrolling="no"> <div role="main" class="ui-content" id ="maindiv" style= "overflow: auto"> Will load diff pages here. </div><!-- /content --> <div data-role="panel" id="left-panel" data-theme="b"> <ul data-role="listview" data-icon="false" id="menu"> <li> <a href="#" id = "btnA" data-rel="close">Go Page A <img src="./images/icona.png" class="ui-li-thumb"/> </li> <li> <a href="#" id = "btnB" data-rel="close">Go Page B <img src="./images/iconb.png" class="ui-li-thumb"/> </li> </ul> </div><!-- /panel --> <script type="text/javascript"> $("#btnA").on("click", function(){ $("#maindiv").empty(); $("#maindiv").load("pageA.html"); }); $("#btnB").on("click", function(){ $("#maindiv").empty(); $("#maindiv").load("pageB.html"); }); </script> </div><!-- /page --> </body> </html> Next is code for the page I try to load dynamically. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Page should be loaded</title> <link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Open+Sans:300,400,700"> <link rel="stylesheet" href="./css/customizedstyle.css"> <link rel="stylesheet" href="./css/themes/default/jquery.mobile-1.4.3.min.css"> <link rel="stylesheet" href="./css/jqm-demos.css"> <script src="./js/jquery.js"></script> <script scr="./js/customizedjs.js"></script> <script src="./js/jquery.mobile-1.4.3.min.js"></script> <script> $(document).on('pagebeforeshow', function () { $('#postlist').append('<li> abc </li>'); $('#postlist').listview('refresh'); }); </script> </head> <body > <div data-role="page" id="posthome"> <div data-role = "content"> <ul data-role='listview' id = "postlist"> </ul> </div> </div> </body> </html> I doubt if it is because my javascript in the page doesn't work, cause the swipe js code in the main page seems not work either. Is that possible? I have enabled javascript in the onCreate() function of the activity file as below. 
protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_message); new LongRunningGetIO().execute(); mWebView = (WebView) findViewById(R.id.webview); mWebView.setWebViewClient(new AppClient()); mWebView.setVerticalScrollBarEnabled(false); mWebView.getSettings().setJavaScriptEnabled(true); mWebView.getSettings().setDomStorageEnabled(true); mWebView.loadUrl("file:///android_asset/index.html"); } I noticed there is a warning on the statement that enables JavaScript: "Using setJavaScriptEnabled can introduce XSS vulnerabilities into your application, review carefully". Could that be the reason? Then I added @SuppressLint("SetJavaScriptEnabled") on top of the activity. The warning is gone, but the JS code in the pages still does not seem to work.

    Read the article

  • Drupal Webform textfield dynamic growing list

    - by Bob Crowley
    Just curious... I have a project where people can input their cooking recipes. I would like to build a webform that has a textfield, and when it is filled in a new textfield appears below it: a "growing textfield list". Let me try to show it here: Ingredient #1 _________________________________ [add] When you type an ingredient and click "add" you then see: Ingredient #1 Potatoes_________________________ Ingredient #2 _________________________________ [add] Sorry for not knowing the proper markup. However, if anyone knows: a) what is the proper term for this (I call it a growing textfield list)? b) how do I do it with Webform in Drupal?
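
    For what it's worth, the generic pattern is usually called a "multi-value", "repeatable", or "dynamically added" field. Outside of Webform configuration, a bare-bones jQuery sketch of the behaviour looks like this (container id, button id, and markup are invented for illustration):

        var count = 1;
        // Each click appends one more ingredient textfield to the list.
        $('#add-ingredient').click(function () {
            count += 1;
            $('#ingredients').append(
                '<label>Ingredient #' + count + '</label> <input type="text" name="ingredient_' + count + '" />'
            );
        });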

    Read the article

  • ExtJS (4.0) dynamic / lazy loading

    - by Paul
    Given a border layout with a west (navigation) region and a center region. Let's say I click on topic A in the west region; I want to replace (replace as in 'delete the last topic') the center region with ExtJS program code named topic_a.js. I succeed in loading with this code: dynamicPanel = new Ext.Component({ loader: { url: '/www/file.htm', renderer: 'html', autoLoad: true, scripts: true } }); var oMainContainer = Ext.getCmp('maincontainer'); oMainContainer.show(); oMainContainer.add(dynamicPanel); But calling this a second time 'adds' things up in the center region and of course falls short of 'deleting'. What would be a good approach?
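
    A sketch of the usual fix (assuming 'maincontainer' is an Ext container): clear the region before adding the new component, so each topic replaces the previous one instead of stacking up. The url shown is a placeholder; the same loader config from the question applies.

        var oMainContainer = Ext.getCmp('maincontainer');
        oMainContainer.removeAll();   // destroys the previously loaded topic's component
        oMainContainer.add(new Ext.Component({
            loader: { url: '/www/topic_a.htm', renderer: 'html', autoLoad: true, scripts: true }
        }));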

    Read the article

  • Dynamic columns in C# rdlc report

    - by Mugume David
    Suppose I have a report that lists employees (as rows) with their respective taxes charged (in columns). It is possible for a new tax to come up. My rdlc report file is currently designed (from XML, of course) to generate the columns statically, so a future change will require me to alter the rdlc file and add in a new column. How can I do this dynamically? I intend to avoid opening the rdlc file and adding XML code.

    Read the article

  • Rails - Dynamic name routes namespace

    - by Kuro
    Hi, using Rails 2.3, I'm trying to configure my routes but I encounter some difficulties. I would like to have something like: http:// mydomain.com/mycontroller/myaction/myid That should respond with controllers in the :front namespace. http:// mydomain.com/aname/mycontroller/myaction/myid That should respond with controllers in the :custom namespace. I tried something like this, but I'm totally wrong: map.namespace :front, :path_prefix => "" do |f| f.root :controller => :home, :action => :index f.resources :home ... end map.namespace :custom, :path_prefix => "" do |m| m.root :controller => :home, :action => :index m.resources :home ... m.match ':sub_url/site/:controller/:action/:id' m.match ':sub_url/site/:controller/:action/:id' m.match ':sub_url/site/:controller/:action/:id.:format' m.match ':sub_url/site/:controller/:action.:format' end I put the matching instructions in the :custom namespace, but I'm not sure that's the right place for them. I think I really don't get how to customize fields in URL matching, and I don't know how to find documentation for Rails 2.3; most of my research drove me to Rails 3 docs on the topic... Can somebody help me?

    Read the article

  • how to create a dynamic class at runtime in Java

    - by Mrityunjay
    Hi, is it possible to create a new Java file from an existing Java file after changing some of its attributes at runtime? Suppose I have a Java file: public class Student { private int rollNo; private String name; // getters and setters // constructor } Is it possible to create something like this, provided that rollNo is the key element for the table? public class Student { private StudentKey key; private String name; // getters and setters // constructor } public class StudentKey { private int rollNo; // getters and setters // constructors } Please help.
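
    For the narrower task of producing a new .java file and compiling it while the program runs, a minimal sketch using the JDK compiler API is below (the generated class body is hard-coded here for illustration; a JDK, not just a JRE, must be present at runtime):

        import javax.tools.JavaCompiler;
        import javax.tools.ToolProvider;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class GenerateStudentKey {
            public static void main(String[] args) throws Exception {
                // Build the source text for the new class and write it to disk.
                String source = "public class StudentKey {\n"
                        + "    private int rollNo;\n"
                        + "    public int getRollNo() { return rollNo; }\n"
                        + "    public void setRollNo(int rollNo) { this.rollNo = rollNo; }\n"
                        + "}\n";
                Files.write(Paths.get("StudentKey.java"), source.getBytes("UTF-8"));

                // Compile the generated file; a return value of 0 means success.
                JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
                int result = compiler.run(null, null, null, "StudentKey.java");
                System.out.println(result == 0 ? "compiled" : "compilation failed");
            }
        }

    In practice the source string would be assembled from the attributes read off the existing class (for example via reflection) rather than hard-coded.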

    Read the article

  • C++ dynamic array sizing problem

    - by Peter
    The basic pseudo code looks like this: void myFunction() { int size = 10; int * MyArray; MyArray = new int[size]; cout << size << endl; cout << sizeof(MyArray) << endl; } The first cout returns 10, as expected, while the second cout returns 4. Anyone have an explanation?
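
    For context: sizeof applied to a pointer yields the size of the pointer itself (4 bytes on a 32-bit build, 8 on 64-bit), not the number of elements it points to, so the length of a new[] allocation has to be tracked separately or handed to a container. A small sketch of the distinction:

        #include <iostream>
        #include <vector>

        int main() {
            int size = 10;
            int *myArray = new int[size];
            std::cout << sizeof(myArray) << '\n'; // size of the pointer (4 or 8), not 10
            delete[] myArray;

            std::vector<int> v(size);
            std::cout << v.size() << '\n';        // 10: the container knows its own length
            return 0;
        }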

    Read the article

  • Dynamic SQL and Functions

    - by Unlimited071
    Hi all, is there any way of accomplishing something like the following: CREATE FUNCTION GetQtyFromID ( @oricod varchar(15), @ccocod varchar(15), @ocmnum int, @oinnum int, @acmnum int, @acttip char(2), @unisim varchar(15) ) AS BEGIN DECLARE @Result decimal(18,8) DECLARE @SQLString nvarchar(max); DECLARE @ParmDefinition nvarchar(max); --I need to execute a query stored in a cell which returns the calculated qty. --i.e of AcuQry: select @cant = sum(smt) from table where oricod = @oricod and ... SELECT @SQLString = AcuQry FROM OinActUni WHERE (OriCod = @oricod) AND (ActTipCod = @acttip) AND (UniSim = @unisim) AND (AcuEst > 0) SET @ParmDefinition = N' @oricod varchar(15), @ccocod varchar(15), @ocmnum int, @oinnum int, @acmnum int, @cant decimal(18,8) output'; EXECUTE sp_executesql @SQLString, @ParmDefinition, @oricod = @oricod, @ccocod = @ccocod, @ocmnum = @ocmnum, @oinnum = @oinnum, @acmnum = @acmnum, @cant = @result OUTPUT; RETURN @Result END The problem with this approach is that it is prohibited to execute sp_executesql in a function... What I need is to do something like: select id, getQtyFromID(id) as qty from table The main idea is to execute a query stored in a table cell; this is because the qty of something depends on its unit. The unit can be days or it can be metric tons, so there is no relation between the units, hence the need for a specific query for each unit.
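
    A common workaround, sketched here with an abbreviated parameter list (not a drop-in replacement), is to move the dynamic part into a stored procedure with an OUTPUT parameter, since sp_executesql is allowed in procedures but not in functions:

        CREATE PROCEDURE GetQtyFromID
            @oricod varchar(15),
            @acttip char(2),
            @unisim varchar(15),
            @Result decimal(18,8) OUTPUT
        AS
        BEGIN
            DECLARE @SQLString nvarchar(max);
            SELECT @SQLString = AcuQry FROM OinActUni
            WHERE OriCod = @oricod AND ActTipCod = @acttip AND UniSim = @unisim AND AcuEst > 0;

            EXECUTE sp_executesql @SQLString,
                N'@oricod varchar(15), @cant decimal(18,8) OUTPUT',
                @oricod = @oricod, @cant = @Result OUTPUT;
        END

    The trade-off is that a procedure cannot be used inline in a SELECT list the way getQtyFromID(id) could, so the per-row quantities would have to be collected into a temp table or computed row by row instead.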

    Read the article

  • Dynamic programming Approach- Knapsack Puzzle

    - by idalsin
    I'm trying to solve the Knapsack problem with the dynamical programming(DP) approach, with Python 3.x. My TA pointed us towards this code for a head start. I've tried to implement it, as below: def take_input(infile): f_open = open(infile, 'r') lines = [] for line in f_open: lines.append(line.strip()) f_open.close() return lines def create_list(jewel_lines): #turns the jewels into a list of lists jewels_list = [] for x in jewel_lines: weight = x.split()[0] value = x.split()[1] jewels_list.append((int(value), int(weight))) jewels_list = sorted(jewels_list, key = lambda x : (-x[0], x[1])) return jewels_list def dynamic_grab(items, max_weight): table = [[0 for weight in range(max_weight+1)] for j in range(len(items)+1)] for j in range(1,len(items)+1): val= items[j-1][0] wt= items[j-1][1] for weight in range(1, max_weight+1): if wt > weight: table[j][weight] = table[j-1][weight] else: table[j][weight] = max(table[j-1][weight],table[j-1][weight-wt] + val) result = [] weight = max_weight for j in range(len(items),0,-1): was_added = table[j][weight] != table[j-1][weight] if was_added: val = items[j-1][0] wt = items[j-1][1] result.append(items[j-1]) weight -= wt return result def totalvalue(comb): #total of a combo of items totwt = totval = 0 for val, wt in comb: totwt += wt totval += val return (totval, -totwt) if totwt <= max_weight else (0,0) #required setup of variables infile = "JT_test1.txt" given_input = take_input(infile) max_weight = int(given_input[0]) given_input.pop(0) jewels_list = create_list(given_input) #test lines print(jewels_list) print(greedy_grab(jewels_list, max_weight)) bagged = dynamic_grab(jewels_list, max_weight) print(totalvalue(bagged)) The sample case is below. It is in the format line[0] = bag_max, line[1:] is in form(weight, value): 575 125 3000 50 100 500 6000 25 30 I'm confused as to the logic of this code in that it returns me a tuple and I'm not sure what the output tuple represents. I've been looking at this for a while and just don't understand what the code is pointing me at. Any help would be appreciated.

    Read the article

  • Oracle 11gR2 exp does not export some tables

    - by Tilo Prütz
    I have an Oracle 11g (11.2.0.1) Database running on Linux (x64). Within the database I have a schema and 33 tables for it. When I log in via sqlplus I can list all the tables via SELECT OBJECT_NAME FROM USER_OBJECTS WHERE OBJECT_TYPE = 'TABLE'; But when I export the Tablespace using exp ... BUFFER=65536 FULL=N COMPRESS=N CONSISTENT=Y TABLESPACES=... FILE=... Then it only exports 24 of the 33 tables. I have tried to export the missing tables via exp ... TABLES=<missing_table> ... But then I get an error: EXP-00011: NPSMIGRO2_CM.DEFAULT_USR_ATTR_VALUES does not exist How can I find out what's wrong here? How can I export all the tables?

    Read the article

  • MySQL Privileges required to GRANT EVENT, EXECUTE, LOCK TABLES, and TRIGGER

    - by Brad
    I have an account, user_a, and I would like to grant all available permissions on some_db to user_b. I have tried the following query: GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `some_db`.* TO 'user_b'@'%' WITH GRANT OPTION The result: Access denied for user 'user_a'@'%' to database 'some_db' Some experimentation has shown me that the only permissions my account (user_a) is unable to grant are EVENT, EXECUTE, LOCK TABLES, and TRIGGER. What privileges are required for my account to GRANT these privileges to another user? If I run SHOW GRANTS, I get this output: "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER ON *.* TO 'user_a'@'%' IDENTIFIED BY PASSWORD '1234567890abcdef' WITH GRANT OPTION" "GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON `some_other_unrelated_db`.* TO 'user_a'@'%'" "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE ROUTINE, ALTER ROUTINE ON `another_unrelated_db`.* TO 'user_a'@'%' WITH GRANT OPTION"
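
    MySQL only lets an account grant privileges that the account itself holds (with GRANT OPTION) at the relevant scope, and the SHOW GRANTS output above shows user_a has no EVENT, EXECUTE, LOCK TABLES, or TRIGGER privilege covering some_db. So an account that does hold them (typically root) would first have to run something along these lines (a sketch; adjust host and scope as needed):

        GRANT EVENT, EXECUTE, LOCK TABLES, TRIGGER
            ON `some_db`.* TO 'user_a'@'%' WITH GRANT OPTION;

    After that, user_a can pass the same privileges on to user_b.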

    Read the article

  • Existing tables with binaries to use filestream

    - by user1098487
    I've got a few tables for which I want to use filestream storage. These tables already contain binary data and have rowguids. However, at the time they were created, the tables were not added to a filestream-enabled filegroup. What is the best way to have these tables use filestream at this point? Do I need to drop + recreate the tables and migrate the data? Is there an easier way? The database already has filestream enabled and there are other tables which are using it.
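
    One commonly used migration pattern, sketched with made-up table and column names (test on a copy first, and note the new column needs the database's FILESTREAM filegroup available): add a new FILESTREAM column, copy the data across, then drop the old column and rename.

        ALTER TABLE dbo.Documents ADD Content2 varbinary(max) FILESTREAM NULL;
        UPDATE dbo.Documents SET Content2 = Content;      -- copy existing binary data
        ALTER TABLE dbo.Documents DROP COLUMN Content;
        EXEC sp_rename 'dbo.Documents.Content2', 'Content', 'COLUMN';

    Whether this beats a full drop-and-recreate depends on table size and how much downtime the copy step can be given.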

    Read the article

  • Managing many draw calls for dynamic objects

    - by codetiger
    We are developing a game (cross-platform) using Irrlicht. The game has many (around 200 - 500) dynamic objects flying around during the game. Most of these objects are static meshes built from 20 - 50 unique meshes. We created separate scene nodes for each object, each referring to its mesh instance. But the output was very much unexpected. Menu screen: (150 tris - just to show you the full-speed rendering performance of 2 test computers) a) NVidia Quadro FX 3800 with 1GB: 1600 FPS DirectX and 2600 FPS on OpenGL b) Mac Mini with Geforce 9400M 256mb: 260 FPS in OpenGL Now inside the game in a test level (160 dynamic objects counting around 10K tris): a) NVidia Quadro FX 3800 with 1GB: 45 FPS DirectX and 50 FPS on OpenGL b) Mac Mini with Geforce 9400M 256mb: 45 FPS in OpenGL Obviously we don't have the option of mesh batch rendering as most of the objects are dynamic, and the one big static terrain is already in a single mesh buffer. To add more information, we use one 2048 PNG texture for most of the dynamic objects. Our collision detection and other calculations hardly make any impact on FPS. So we understand it's the draw calls we make that eat up all the FPS. Is there a way we can optimize the rendering, or are we missing something?

    Read the article

  • Achieve Named Criteria with multiple tables in EJB Data control

    - by Deepak Siddappa
    In EJB create a named criteria using sparse xml and in named criteria wizard, only attributes related to the that particular entities will be displayed.  So here we can filter results only on particular entity bean. Take a scenario where we need to create Named Criteria based on multiple tables using EJB. In BC4J we can achieve this by creating view object based on multiple tables. So in this article, we will try to achieve named criteria based on multiple tables using EJB.Implementation StepsCreate Java EE Web Application with entity based on Departments and Employees, then create a session bean and data control for the session bean.Create a Java Bean, name as CustomBean and add below code to the file. Here in java bean from both Departments and Employees tables three fields are taken. public class CustomBean { private BigDecimal departmentId; private String departmentName; private BigDecimal locationId; private BigDecimal employeeId; private String firstName; private String lastName; public CustomBean() { super(); } public void setDepartmentId(BigDecimal departmentId) { this.departmentId = departmentId; } public BigDecimal getDepartmentId() { return departmentId; } public void setDepartmentName(String departmentName) { this.departmentName = departmentName; } public String getDepartmentName() { return departmentName; } public void setLocationId(BigDecimal locationId) { this.locationId = locationId; } public BigDecimal getLocationId() { return locationId; } public void setEmployeeId(BigDecimal employeeId) { this.employeeId = employeeId; } public BigDecimal getEmployeeId() { return employeeId; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getFirstName() { return firstName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getLastName() { return lastName; } } Open the sessionEJb file and add the below code to the session bean and expose the method in local/remote interface and generate a data control for that. Note:- Here in the below code "em" is a EntityManager. public List<CustomBean> getCustomBeanFindAll() { String queryString = "select d.department_id, d.department_name, d.location_id, e.employee_id, e.first_name, e.last_name from departments d, employees e\n" + "where e.department_id = d.department_id"; Query genericSearchQuery = em.createNativeQuery(queryString, "CustomQuery"); List resultList = genericSearchQuery.getResultList(); Iterator resultListIterator = resultList.iterator(); List<CustomBean> customList = new ArrayList(); while (resultListIterator.hasNext()) { Object col[] = (Object[])resultListIterator.next(); CustomBean custom = new CustomBean(); custom.setDepartmentId((BigDecimal)col[0]); custom.setDepartmentName((String)col[1]); custom.setLocationId((BigDecimal)col[2]); custom.setEmployeeId((BigDecimal)col[3]); custom.setFirstName((String)col[4]); custom.setLastName((String)col[5]); customList.add(custom); } return customList; } Open the DataControls.dcx file and create sparse xml for customBean. In sparse xml navigate to Named criteria tab -> Bind Variable section, create two binding variables deptId,fName. In sparse xml navigate to Named criteria tab ->Named criteria, create a named criteria and map the query attributes to the bind variables. In the ViewController create a file jspx page, from data control palette drop customBeanFindAll->Named Criteria->CustomBeanCriteria->Query as ADF Query Panel with Table. 
Run the jspx page and enter values in the search form with departmentId as 50 and firstName as "M". The named criteria will filter the query of the data source and display the filtered result.

    Read the article

  • A ToDynamic() Extension Method For Fluent Reflection

    - by Dixin
    Recently I needed to demonstrate some code with reflection, but I felt it inconvenient and tedious. To simplify the reflection coding, I created a ToDynamic() extension method. The source code can be downloaded from here. Problem One example for complex reflection is in LINQ to SQL. The DataContext class has a property Privider, and this Provider has an Execute() method, which executes the query expression and returns the result. Assume this Execute() needs to be invoked to query SQL Server database, then the following code will be expected: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // Executes the query. Here reflection is required, // because Provider, Execute(), and ReturnValue are not public members. IEnumerable<Product> results = database.Provider.Execute(query.Expression).ReturnValue; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } Of course, this code cannot compile. And, no one wants to write code like this. Again, this is just an example of complex reflection. using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider PropertyInfo providerProperty = database.GetType().GetProperty( "Provider", BindingFlags.NonPublic | BindingFlags.GetProperty | BindingFlags.Instance); object provider = providerProperty.GetValue(database, null); // database.Provider.Execute(query.Expression) // Here GetMethod() cannot be directly used, // because Execute() is a explicitly implemented interface method. Assembly assembly = Assembly.Load("System.Data.Linq"); Type providerType = assembly.GetTypes().SingleOrDefault( type => type.FullName == "System.Data.Linq.Provider.IProvider"); InterfaceMapping mapping = provider.GetType().GetInterfaceMap(providerType); MethodInfo executeMethod = mapping.InterfaceMethods.Single(method => method.Name == "Execute"); IExecuteResult executeResult = executeMethod.Invoke(provider, new object[] { query.Expression }) as IExecuteResult; // database.Provider.Execute(query.Expression).ReturnValue IEnumerable<Product> results = executeResult.ReturnValue as IEnumerable<Product>; // Processes the results. foreach (Product product in results) { Console.WriteLine("{0}, {1}", product.ProductID, product.ProductName); } } This may be not straight forward enough. So here a solution will implement fluent reflection with a ToDynamic() extension method: IEnumerable<Product> results = database.ToDynamic() // Starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue; C# 4.0 dynamic In this kind of scenarios, it is easy to have dynamic in mind, which enables developer to write whatever code after a dot: using (NorthwindDataContext database = new NorthwindDataContext()) { // Constructs the query. IQueryable<Product> query = database.Products.Where(product => product.ProductID > 0) .OrderBy(product => product.ProductName) .Take(2); // database.Provider dynamic dynamicDatabase = database; dynamic results = dynamicDatabase.Provider.Execute(query).ReturnValue; } This throws a RuntimeBinderException at runtime: 'System.Data.Linq.DataContext.Provider' is inaccessible due to its protection level. 
Here dynamic is able find the specified member. So the next thing is just writing some custom code to access the found member. .NET 4.0 DynamicObject, and DynamicWrapper<T> Where to put the custom code for dynamic? The answer is DynamicObject’s derived class. I first heard of DynamicObject from Anders Hejlsberg's video in PDC2008. It is very powerful, providing useful virtual methods to be overridden, like: TryGetMember() TrySetMember() TryInvokeMember() etc.  (In 2008 they are called GetMember, SetMember, etc., with different signature.) For example, if dynamicDatabase is a DynamicObject, then the following code: dynamicDatabase.Provider will invoke dynamicDatabase.TryGetMember() to do the actual work, where custom code can be put into. Now create a type to inherit DynamicObject: public class DynamicWrapper<T> : DynamicObject { private readonly bool _isValueType; private readonly Type _type; private T _value; // Not readonly, for value type scenarios. public DynamicWrapper(ref T value) // Uses ref in case of value type. { if (value == null) { throw new ArgumentNullException("value"); } this._value = value; this._type = value.GetType(); this._isValueType = this._type.IsValueType; } public override bool TryGetMember(GetMemberBinder binder, out object result) { // Searches in current type's public and non-public properties. PropertyInfo property = this._type.GetTypeProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in explicitly implemented properties for interface. MethodInfo method = this._type.GetInterfaceMethod(string.Concat("get_", binder.Name), null); if (method != null) { result = method.Invoke(this._value, null).ToDynamic(); return true; } // Searches in current type's public and non-public fields. FieldInfo field = this._type.GetTypeField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // Searches in base type's public and non-public properties. property = this._type.GetBaseProperty(binder.Name); if (property != null) { result = property.GetValue(this._value, null).ToDynamic(); return true; } // Searches in base type's public and non-public fields. field = this._type.GetBaseField(binder.Name); if (field != null) { result = field.GetValue(this._value).ToDynamic(); return true; } // The specified member is not found. result = null; return false; } // Other overridden methods are not listed. } In the above code, GetTypeProperty(), GetInterfaceMethod(), GetTypeField(), GetBaseProperty(), and GetBaseField() are extension methods for Type class. For example: internal static class TypeExtensions { internal static FieldInfo GetBaseField(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeField(name) ?? @base.GetBaseField(name); } internal static PropertyInfo GetBaseProperty(this Type type, string name) { Type @base = type.BaseType; if (@base == null) { return null; } return @base.GetTypeProperty(name) ?? 
@base.GetBaseProperty(name); } internal static MethodInfo GetInterfaceMethod(this Type type, string name, params object[] args) { return type.GetInterfaces().Select(type.GetInterfaceMap).SelectMany(mapping => mapping.TargetMethods) .FirstOrDefault( method => method.Name.Split('.').Last().Equals(name, StringComparison.Ordinal) && method.GetParameters().Count() == args.Length && method.GetParameters().Select( (parameter, index) => parameter.ParameterType.IsAssignableFrom(args[index].GetType())).Aggregate( true, (a, b) => a && b)); } internal static FieldInfo GetTypeField(this Type type, string name) { return type.GetFields( BindingFlags.GetField | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( field => field.Name.Equals(name, StringComparison.Ordinal)); } internal static PropertyInfo GetTypeProperty(this Type type, string name) { return type.GetProperties( BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public | BindingFlags.NonPublic).FirstOrDefault( property => property.Name.Equals(name, StringComparison.Ordinal)); } // Other extension methods are not listed. } So now, when invoked, TryGetMember() searches the specified member and invoke it. The code can be written like this: dynamic dynamicDatabase = new DynamicWrapper<NorthwindDataContext>(ref database); dynamic dynamicReturnValue = dynamicDatabase.Provider.Execute(query.Expression).ReturnValue; This greatly simplified reflection. ToDynamic() and fluent reflection To make it even more straight forward, A ToDynamic() method is provided: public static class DynamicWrapperExtensions { public static dynamic ToDynamic<T>(this T value) { return new DynamicWrapper<T>(ref value); } } and a ToStatic() method is provided to unwrap the value: public class DynamicWrapper<T> : DynamicObject { public T ToStatic() { return this._value; } } In the above TryGetMember() method, please notice it does not output the member’s value, but output a wrapped member value (that is, memberValue.ToDynamic()). This is very important to make the reflection fluent. Now the code becomes: IEnumerable<Product> results = database.ToDynamic() // Here starts fluent reflection. .Provider.Execute(query.Expression).ReturnValue .ToStatic(); // Unwraps to get the static value. With the help of TryConvert(): public class DynamicWrapper<T> : DynamicObject { public override bool TryConvert(ConvertBinder binder, out object result) { result = this._value; return true; } } ToStatic() can be omitted: IEnumerable<Product> results = database.ToDynamic() .Provider.Execute(query.Expression).ReturnValue; // Automatically converts to expected static value. Take a look at the reflection code at the beginning of this post again. Now it is much much simplified! Special scenarios In 90% of the scenarios ToDynamic() is enough. But there are some special scenarios. Access static members Using extension method ToDynamic() for accessing static members does not make sense. Instead, DynamicWrapper<T> has a parameterless constructor to handle these scenarios: public class DynamicWrapper<T> : DynamicObject { public DynamicWrapper() // For static. { this._type = typeof(T); this._isValueType = this._type.IsValueType; } } The reflection code should be like this: dynamic wrapper = new DynamicWrapper<StaticClass>(); int value = wrapper._value; int result = wrapper.PrivateMethod(); So accessing static member is also simple, and fluent of course. Change instances of value types Value type is much more complex. 
The main problem is, value type is copied when passing to a method as a parameter. This is why ref keyword is used for the constructor. That is, if a value type instance is passed to DynamicWrapper<T>, the instance itself will be stored in this._value of DynamicWrapper<T>. Without the ref keyword, when this._value is changed, the value type instance itself does not change. Consider FieldInfo.SetValue(). In the value type scenarios, invoking FieldInfo.SetValue(this._value, value) does not change this._value, because it changes the copy of this._value. I searched the Web and found a solution for setting the value of field: internal static class FieldInfoExtensions { internal static void SetValue<T>(this FieldInfo field, ref T obj, object value) { if (typeof(T).IsValueType) { field.SetValueDirect(__makeref(obj), value); // For value type. } else { field.SetValue(obj, value); // For reference type. } } } Here __makeref is a undocumented keyword of C#. But method invocation has problem. This is the source code of TryInvokeMember(): public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result) { if (binder == null) { throw new ArgumentNullException("binder"); } MethodInfo method = this._type.GetTypeMethod(binder.Name, args) ?? this._type.GetInterfaceMethod(binder.Name, args) ?? this._type.GetBaseMethod(binder.Name, args); if (method != null) { // Oops! // If the returnValue is a struct, it is copied to heap. object resultValue = method.Invoke(this._value, args); // And result is a wrapper of that copied struct. result = new DynamicWrapper<object>(ref resultValue); return true; } result = null; return false; } If the returned value is of value type, it will definitely copied, because MethodInfo.Invoke() does return object. If changing the value of the result, the copied struct is changed instead of the original struct. And so is the property and index accessing. They are both actually method invocation. For less confusion, setting property and index are not allowed on struct. Conclusions The DynamicWrapper<T> provides a simplified solution for reflection programming. It works for normal classes (reference types), accessing both instance and static members. In most of the scenarios, just remember to invoke ToDynamic() method, and access whatever you want: StaticType result = someValue.ToDynamic()._field.Method().Property[index]; In some special scenarios which requires changing the value of a struct (value type), this DynamicWrapper<T> does not work perfectly. Only changing struct’s field value is supported. The source code can be downloaded from here, including a few unit test code.

    Read the article

  • Best Practices - Dynamic Reconfiguration

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains) Overview of dynamic Reconfiguration Oracle VM Server for SPARC supports Dynamic Reconfiguration (DR), making it possible to add or remove resources to or from a domain (virtual machine) while it is running. This is extremely useful because resources can be shifted to or from virtual machines in response to load conditions without having to reboot or interrupt running applications. For example, if an application requires more CPU capacity, you can add CPUs to improve performance, and remove them when they are no longer needed. You can use even use Dynamic Resource Management (DRM) policies that automatically add and remove CPUs to domains based on load. How it works (in broad general terms) Dynamic Reconfiguration is done in coordination with Solaris, which recognises a hypervisor request to change its virtual machine configuration and responds appropriately. In essence, Solaris receives a message saying "you now have 16 more CPUs numbered 16 to 31" or "8GB more RAM starting at address X" or "here's a new network or disk device - have fun with it". These actions take very little time. Solaris then can start using the new resource. In the case of added CPUs, that means dispatching processes and potentially binding interrupts to the new CPUs. For memory, Solaris adds the new memory pages to its "free" list and starts using them. Comparable actions occur with network and disk devices: they are recognised by Solaris and then used. Removing is the reverse process: after receiving the DR message to free specific CPUs, Solaris unbinds interrupts assigned to the CPUs and stops dispatching process threads. That takes very little time. primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 1.0% 6d 22h 29m ldom1 active -n---- 5000 16 8G 0.9% 6h 59m primary # ldm set-core 5 ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.2% 6d 22h 29m ldom1 active -n---- 5000 40 8G 0.1% 6h 59m primary # ldm set-core 2 ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 1.0% 6d 22h 29m ldom1 active -n---- 5000 16 8G 0.9% 6h 59m Memory pages are vacated by copying their contents to other memory locations and wiping them clean. Solaris may have to swap memory contents to disk if the remaining RAM isn't enough to hold all the contents. For this reason, deallocating memory can take longer on a loaded system. Even on a lightly loaded system it took several 7 or 8 seconds to switch the domain below between 8GB and 24GB of RAM. primary # ldm set-mem 24g ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.1% 6d 22h 36m ldom1 active -n---- 5000 16 24G 0.2% 7h 6m primary # ldm set-mem 8g ldom1 primary # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- SP 16 4G 0.7% 6d 22h 37m ldom1 active -n---- 5000 16 8G 0.3% 7h 7m What if the device is in use? (this is the anecdote that inspired this blog post) If CPU or memory is being removed, releasing it pretty straightforward, using the method described above. The resources are released, and Solaris continues with less capacity. It's not as simple with a network or I/O device: you don't want to yank a device out from underneath an application that might be using it. 
In the following example, I've added a virtual network device to ldom1 and want to take it away, even though it's been plumbed. primary # ldm rm-vnet vnet19 ldom1 Guest LDom returned the following reason for failing the operation: Resource Information ---------------------------------------------------------- ----------------------- /devices/virtual-devices@100/channel-devices@200/network@1 Network interface net1 VIO operation failed because device is being used in LDom ldom1 Failed to remove VNET instance That's what I call a helpful error message - telling me exactly what was wrong. In this case the problem is easily solved. I know this NIC is seen in the guest as net1 so: ldom1 # ifconfig net1 down unplumb Now I can dispose of it, and even the virtual switch I had created for it: primary # ldm rm-vnet vnet19 ldom1 primary # ldm rm-vsw primary-vsw9 If I had to take away the device disruptively, I could have used ldm rm-vnet -f but that could disrupt whoever was using it. It's better if that can be avoided. Summary Oracle VM Server for SPARC provides dynamic reconfiguration, which lets you modify a guest domain's CPU, memory and I/O configuration on the fly without reboot. You can add and remove resources as needed, and even automate this for CPUs by setting up resource policies. Taking things away can be more complicated than giving, especially for devices like disks and networks that may contain application and system state or be involved in a transaction. LDoms and Solaris cooperative work together to coordinate resource allocation and de-allocation in a safe and effective way. For best practices, use dynamic reconfiguration to make the best use of your system's resources.

    Read the article

  • Efficiently representing a dynamic transform hierarchy

    - by Mattia
    I'm looking for a way to represent a dynamic transform hierarchy (i.e. one where nodes can be inserted and removed arbitrarily) that's a bit more efficient than using a standard tree of pointers. I saw the answers to this question (Efficient structure for representing a transform hierarchy), but as far as I can determine the tree-as-array approach only works for static hierarchies or dynamic ones where nodes have a fixed number of children (both deal-breakers for me). I'm probably wrong about that, but could anyone point out how? If I'm not wrong, are there other alternatives that work for dynamic hierarchies?

    Read the article

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4 Using OSCH: Beyond Hello World In the previous post we discussed a “Hello World” example for OSCH focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end to end OSCH external tables working and now you want to ramp up the size of the loads. Using OSCH External Tables for Access and Loading OSCH external tables are no different from any other Oracle external tables.  They can be used to access HDFS content using Oracle SQL: SELECT * FROM my_hdfs_external_table; or use the same SQL access to load a table in Oracle. INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table; To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints. ALTER SESSION FORCE PARALLEL DML PARALLEL  8; ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8; INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table SELECT * FROM my_hdfs_external_table; There are various ways of either hinting at what level of DOP you want to use.  The ALTER SESSION statements above force the issue assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section).  Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clause respectively. /*+ parallel(my_oracle_table,8) *//*+ parallel(my_hdfs_external_table,8) */ Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load.  In other doesn't try to fill blocks that are already allocated and partially filled. It uses unallocated blocks.  It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts.  The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes.   The combination of the "NOLOGGING" and use of the "append" hint disables REDO logging, and its overhead.  The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table. Determine Your DOP It might feel natural to build your datasets in Hadoop, then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want to Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user. 
The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances. And suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when your are first seeding tables in a new Oracle database, or there is a time where normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible. Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system Determining the Number of Location Files Let’s assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file.) Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool. Rule 3: The number of location files chosen should be a small multiple of the DOP Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files, and assuming they have equal work loads, will finish up about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end to end processing time. So for this DOP using 8, 16, or 32 location files would be a good idea. 
Determining the Number of HDFS Files Let’s start with the next rule and then explain it: Rule 4: The number of HDFS files should try to be a multiple of the number of location files and try to be relatively the same size In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load.  It will generate N number of location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts in some other location file, and so on). The tools ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same.  Applying Rule 4 above to our DOP of 8, we could divide the workload into160 files that were approximately 10 GB in size.  For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file.) As a rule, when the OSCH External Table tool has to deal with more and smaller files it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files where each HDFS file was about 8GB. I then did the same load with 12800 files where each HDFS file was about 80MB size. The end to end load time was virtually the same. However when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian, everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop them up into the right number of files roughly the same size, derived from the DOP that you expect to use for loading. Next Steps So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story. 
They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH, and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

    Read the article

  • Alignment requirements: converting basic disk to dynamic disk in order to set up software RAID?

    - by 0xC0000022L
    On Windows 7 x64 Professional I am struggling to convert a basic disk to a dynamic one. Under Disk Management in the MMC the conversion is supposed to be initiated automatically, but it doesn't. My guess: because of using third-party partitioning tools there isn't enough space in front and after the partitions (system-reserved/boot + system volume) to store the required meta-data. When demoting a dynamic disk to a basic disk manually, I noticed that some space seems to be required before and after the partitions. What are the exact alignment requirements that allow the on-board tools in Windows to do the conversion?

    Read the article

  • Finding out if an IP address is static or dynamic?

    - by Joshua
    I run a large bulletin board and I get spammers every now and again. My moderation team does a good job filtering them out, but every time I IP-ban them they seem to come back (I'm pretty sure it's the same person on some occasions, as the post patterns are exactly the same, as are the usernames), and I'm afraid to ban them by IP address every time. If they are on a dynamic IP address, I could be banning innocent users later down the line when they try to get to my forum through SERPs, but if I ban only via static IPs I know that I'm only banning that one person. So, is there a way to properly determine if an IP address is static or dynamic? Thanks.

    Read the article

  • Is this the right way to organize my database tables?

    - by Moss
    So I'm making a website that allows users to build contact lists. So there are users, the users have lists, and the lists have contacts. It seems to me that I need 3 tables for this, but I just want to make sure. There would be a User table of course, and then a "List of Lists" table that has the username and listname as the primary key, along with whatever other info we want to attach to the lists as a whole. Finally, for lack of a better word, the List table, which would again have the username/listname p.k., then the contact ID and notes and such that the user attaches to that contact on that specific list. I hope that is a clear explanation. For some reason I feel unsure about this arrangement. For one thing, if the website becomes popular the List table could swell to billions of rows. And it also feels a little weird that everybody's list info is all jumbled up in the same table. I suppose I could create separate tables for each user and even for each list, but that seems like a bad idea for other reasons. My db explanation assumes I can use foreign keys on my tables, which at the moment isn't actually an option. If I can't get InnoDB tables enabled I will probably use IDs for the lists instead of depending on a compound key. Maybe I should do this anyway?
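
    A sketch of that three-table layout in MySQL (names are illustrative; using surrogate ids instead of the username/listname compound key keeps the big membership table narrow, and the FOREIGN KEY clauses assume InnoDB is available, as noted above):

        CREATE TABLE users (
            user_id   INT PRIMARY KEY AUTO_INCREMENT,
            username  VARCHAR(50) NOT NULL UNIQUE
        );

        CREATE TABLE lists (
            list_id   INT PRIMARY KEY AUTO_INCREMENT,
            user_id   INT NOT NULL,
            list_name VARCHAR(100) NOT NULL,
            FOREIGN KEY (user_id) REFERENCES users (user_id)
        );

        CREATE TABLE list_contacts (
            list_id    INT NOT NULL,
            contact_id INT NOT NULL,
            notes      TEXT,
            PRIMARY KEY (list_id, contact_id),
            FOREIGN KEY (list_id) REFERENCES lists (list_id)
        );

    An index on list_contacts(list_id) comes for free from the primary key, which is what matters for "show me everything on this list" queries even when the table grows large.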

    Read the article

  • why does mysql have so many more open and fragmented tables than tables in the DB?

    - by kswift
    I've been working on making our database run a little smoother and have had good results over the past week. But there are still some things I don't understand. For one thing, the database has 25 tables. But mysql status shows 512 are open: mysqladmin status Uptime: 212854 Threads: 1 Questions: 43041 Slow queries: 7 Opens: 2605 Flush tables: 1 Open tables: 512 Queries per second avg: 0.202 I've read that MyISAM opens extra file descriptors and that there are a few other reasons why the number of open tables might be higher than 25, but I am guessing that 512 is not a good thing. Any suggestions on why this might be or what I should be looking into? I've also been using mysqltuner and it's been helpful. But it has consistently listed the number of fragmented tables at 207. In phpMyAdmin I've selected all the tables and optimized them several times. It hasn't reduced the number of fragmented tables that mysqltuner reports. I think I am missing some important concept about how this all works. Does anyone have any suggestions to point me in the right direction, narrow down Google searches, or just generally help me be less clueless? Thanks!

    Read the article

  • Mysqld increases the load on the CPU and drops after flush-tables

    - by mirage
    Please help me with advice on this issue. Normal load on the CPU is 20-30% us + sy. After restoring the database files from the slave server (same version), a periodic problem began: mysqld starts to load the CPU at 100% (us + sy grows proportionally). The queue grows and everything slows down. But with mysqladmin flush-tables things are normalized for a few hours. Dedicated Linux server running MySQL, 2 x E5506, 24GB RAM, database size 50GB. [OK] Currently running supported MySQL version 5.0.51a-24 + lenny4-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics --------------------------------------- ---- [-] Status: + Archive-BDB-Federated + InnoDB-ISAM-NDBCluster [-] Data in MyISAM tables: 33G (Tables: 1474) [-] Data in InnoDB tables: 1G (Tables: 4) [-] Data in MEMORY tables: 120K (Tables: 3) [-] Reads / Writes: 91% / 9% [-] Total buffers: 12.8M per thread and 7.1G global [OK] Maximum possible memory usage: 15.8G (66% of installed RAM) 4000 - 5500 rps key_buffer = 1536M max_allowed_packet = 2M table_cache = 4096 sort_buffer_size = 409584 read_buffer_size = 128K read_rnd_buffer_size = 8M myisam_sort_buffer_size = 64M thread_cache_size = 500 query_cache_size = 100M thread_concurrency = 24 max_connections = 700 tmp_table_size = 4096M join_buffer_size = 4M max_heap_table_size = 4096M query_cache_limit = 1M low_priority_updates = 1 concurrent_insert = 2 wait_timeout = 30 server-id = 1 log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M innodb_buffer_pool_size = 1536M innodb_log_buffer_size = 4M innodb_flush_log_at_trx_commit = 2 How to solve the problem?

    Read the article
