Search Results

Search found 7127 results on 286 pages for 'calculated columns'.

Page 123 of 286

  • MySQL Online Database

    - by Marian
    Can anyone suggest a good free online MySQL database? I've tried four so far: db4free, FreeMySQL, onPhP, and 000webhost. Each of them either gave me a timeout error in my connect file or actively restricted the connection, meaning the host won't allow a remote connection to the database. If there isn't any good online database, can I create my own server using my computer? It is rarely turned off, and when my server is offline I could return an error message saying that the server is currently offline. My final objective is to have a simple comment box for a webpage, which I believe won't need massive data storage: just 3 columns (id, name, comment). NOTE: I can't post more than two links yet.

    Read the article

  • how to do database updates in each release

    - by Manoj R
    Our application uses a database (mostly Oracle), and the database is at its core. Each customer has its own database, with its own copy of the application. With each new release of our product, we also need to update the database schema. These changes include adding new tables, removing columns, manipulating data, etc. How do people handle this? Are there any standard processes for it? EDIT: The main issue is that the databases are huge, with many tables and a large amount of data. We provide scripts and some utilities to manipulate the data. How do we handle failures and false negatives? I'm mostly looking for articles like this one: http://thedailywtf.com/Articles/Database-Changes-Done-Right.aspx
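
    One widely used pattern for this is versioned migration scripts: numbered .sql files applied in order, with each applied script recorded in a bookkeeping table so every customer database can be upgraded and audited independently. Here is a minimal sketch, assuming a schema_version table and ADO.NET-style @version parameter markers (both are assumptions, not part of the question):

    using System;
    using System.Data.Common;
    using System.IO;
    using System.Linq;

    public class MigrationRunner
    {
        private readonly DbConnection conn;

        public MigrationRunner(DbConnection conn) { this.conn = conn; }

        public void Upgrade(string scriptDirectory)
        {
            // Scripts sort lexically, e.g. "0042_add_invoice_table.sql".
            foreach (string file in Directory.GetFiles(scriptDirectory, "*.sql").OrderBy(f => f))
            {
                string version = Path.GetFileNameWithoutExtension(file);
                if (AlreadyApplied(version)) continue;

                using (DbTransaction tx = conn.BeginTransaction())
                {
                    try
                    {
                        Exec(File.ReadAllText(file), tx);
                        Exec("INSERT INTO schema_version (version, applied_at) " +
                             "VALUES (@version, CURRENT_TIMESTAMP)", tx, version);
                        tx.Commit();   // script and bookkeeping succeed or fail together
                    }
                    catch
                    {
                        tx.Rollback(); // on failure, leave no half-recorded upgrade behind
                        throw;
                    }
                }
            }
        }

        private bool AlreadyApplied(string version)
        {
            using (DbCommand cmd = conn.CreateCommand())
            {
                cmd.CommandText = "SELECT COUNT(*) FROM schema_version WHERE version = @version";
                AddParam(cmd, version);
                return Convert.ToInt64(cmd.ExecuteScalar()) > 0;
            }
        }

        private void Exec(string sql, DbTransaction tx, string version = null)
        {
            using (DbCommand cmd = conn.CreateCommand())
            {
                cmd.Transaction = tx;
                cmd.CommandText = sql;
                if (version != null) AddParam(cmd, version);
                cmd.ExecuteNonQuery();
            }
        }

        private static void AddParam(DbCommand cmd, string version)
        {
            DbParameter p = cmd.CreateParameter();
            p.ParameterName = "@version";
            p.Value = version;
            cmd.Parameters.Add(p);
        }
    }

    Note that Oracle commits DDL implicitly, so there the transaction only protects data changes and the bookkeeping row; a failed DDL script still needs a compensating script.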

    Read the article

  • Sesame Data Browser: filtering, sorting, selecting and linking

    I have deferred the post about how Sesame is built in favor of publishing a new update. This new release offers major features such as the ability to quickly filter and sort data, select columns, and create hyperlinks to OData. Filtering, sorting, selecting: in order to filter data, you just have to use the filter row, which becomes available when you click on the funnel button. You can then type some text and select an operator; the data grid will be refreshed immediately after you apply a filter. It...

    Read the article

  • FAQ: GridView Calculation with JavaScript - Displaying Quantity Total

    - by Vincent Maverick Durano
    Previously we've talked about how to calculate the sub-totals and grand total in a GridView here, how to format the numbers into a currency format, and how to validate the quantity to accept only whole numbers using JavaScript here. One of the users in the forum (http://forums.asp.net) asked how to modify the script to display the quantity total in the footer. In this post I'm going to show you how to do it. Basically we just need to modify the JavaScript CalculateTotals function and add the code for calculating the quantity total and displaying it in the footer. Here are the code blocks:

    <html xmlns="http://www.w3.org/1999/xhtml">
    <head runat="server">
        <title></title>
        <script type="text/javascript">
            function CalculateTotals() {
                var gv = document.getElementById("<%= GridView1.ClientID %>");
                var tb = gv.getElementsByTagName("input");
                var lb = gv.getElementsByTagName("span");
                var sub = 0;
                var total = 0;
                var indexQ = 1;
                var indexP = 0;
                var price = 0;
                var qty = 0;
                var totalQty = 0;
                for (var i = 0; i < tb.length; i++) {
                    if (tb[i].type == "text") {
                        ValidateNumber(tb[i]);
                        price = lb[indexP].innerHTML.replace("$", "").replace(",", "");
                        sub = parseFloat(price) * parseFloat(tb[i].value);
                        if (isNaN(sub)) {
                            lb[i + indexQ].innerHTML = "0.00";
                            sub = 0;
                        }
                        else {
                            lb[i + indexQ].innerHTML = FormatToMoney(sub, "$", ",", ".");
                        }
                        indexQ++;
                        indexP = indexP + 2;
                        if (isNaN(tb[i].value) || tb[i].value == "") {
                            qty = 0;
                        }
                        else {
                            qty = tb[i].value;
                        }
                        totalQty += parseInt(qty);
                        total += parseFloat(sub);
                    }
                }
                lb[lb.length - 2].innerHTML = totalQty;
                lb[lb.length - 1].innerHTML = FormatToMoney(total, "$", ",", ".");
            }
            function ValidateNumber(o) {
                if (o.value.length > 0) {
                    o.value = o.value.replace(/[^\d]+/g, ''); // Allow only whole numbers
                }
            }
            function isThousands(position) {
                return Math.floor(position / 3) * 3 == position;
            }
            function FormatToMoney(theNumber, theCurrency, theThousands, theDecimal) {
                var theDecimalDigits = Math.round((theNumber * 100) - (Math.floor(theNumber) * 100));
                theDecimalDigits = "" + (theDecimalDigits + "0").substring(0, 2);
                theNumber = "" + Math.floor(theNumber);
                var theOutput = theCurrency;
                for (var x = 0; x < theNumber.length; x++) {
                    theOutput += theNumber.substring(x, x + 1);
                    if (isThousands(theNumber.length - x - 1) && (theNumber.length - x - 1 != 0)) {
                        theOutput += theThousands;
                    }
                }
                theOutput += theDecimal + theDecimalDigits;
                return theOutput;
            }
        </script>
    </head>
    <body>
        <form id="form1" runat="server">
            <asp:GridView ID="GridView1" runat="server" ShowFooter="true" AutoGenerateColumns="false">
                <Columns>
                    <asp:BoundField DataField="RowNumber" HeaderText="Row Number" />
                    <asp:BoundField DataField="Description" HeaderText="Item Description" />
                    <asp:TemplateField HeaderText="Item Price">
                        <ItemTemplate>
                            <asp:Label ID="LBLPrice" runat="server" Text='<%# Eval("Price","{0:C}") %>'></asp:Label>
                        </ItemTemplate>
                        <FooterTemplate>
                            <b>Total Qty:</b>
                        </FooterTemplate>
                    </asp:TemplateField>
                    <asp:TemplateField HeaderText="Quantity">
                        <ItemTemplate>
                            <asp:TextBox ID="TXTQty" runat="server" onkeyup="CalculateTotals();"></asp:TextBox>
                        </ItemTemplate>
                        <FooterTemplate>
                            <asp:Label ID="LBLQtyTotal" runat="server" Font-Bold="true" ForeColor="Blue" Text="0"></asp:Label>
                            &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<b>Total Amount:</b>
                        </FooterTemplate>
                    </asp:TemplateField>
                    <asp:TemplateField HeaderText="Sub-Total">
                        <ItemTemplate>
                            <asp:Label ID="LBLSubTotal" runat="server" ForeColor="Green" Text="0.00"></asp:Label>
                        </ItemTemplate>
                        <FooterTemplate>
                            <asp:Label ID="LBLTotal" runat="server" ForeColor="Green" Font-Bold="true" Text="0.00"></asp:Label>
                        </FooterTemplate>
                    </asp:TemplateField>
                </Columns>
            </asp:GridView>
        </form>
    </body>
    </html>

    Here's the output when you run the page. I hope someone finds this post useful! Technorati Tags: ASP.NET,C#,JavaScript,GridView

    Read the article

  • Finding rows that intersect with a date period

    - by DavidWimbush
    This one is mainly a personal reminder, but I hope it helps somebody else too. Let's say you have a table that covers something like currency exchange rates, with columns for the start and end dates of the period each rate was applicable. Now you need to list the rates that applied during the year 2009. For some reason this always fazes me and I have to work it out with a diagram. So here's the recipe so I never have to do that again:

    select *
    from   ExchangeRate
    where  StartDate < '31-DEC-2009'
           and EndDate > '01-JAN-2009'

    That is all!

    Read the article

  • Configuration data: single-row table vs. name-value-pair table

    - by Heinzi
    Let's say you write an application that can be configured by the user. For storing this "configuration data" in a database, two patterns are commonly used.

    The single-row table:

    CompanyName | StartFullScreen | RefreshSeconds | ...
    ------------+-----------------+----------------+----
    ACME Inc.   | true            | 20             | ...

    The name-value-pair table:

    ConfigOption    | Value
    ----------------+------------------------
    CompanyName     | ACME Inc.
    StartFullScreen | true (or 1, or Y, ...)
    RefreshSeconds  | 20
    ...             | ...

    I've seen both options in the wild, and both have obvious advantages and disadvantages. For example: the single-row table limits the number of configuration options you can have (since the number of columns in a row is usually limited), and every additional configuration option requires a DB schema change. In a name-value-pair table everything is "stringly typed" (you have to encode/decode your Boolean/Date/etc. parameters). (And many more.) Is there some consensus within the development community about which option is preferable?
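
    One common way to soften the "stringly typed" drawback of the second pattern is to funnel all encoding/decoding through one typed accessor per option, so the conversion rules live in exactly one place. A minimal sketch, assuming the ConfigOption/Value rows have already been read into a dictionary (the option names and defaults below are just the ones from the example tables):

    using System;
    using System.Collections.Generic;
    using System.Globalization;

    public class AppConfig
    {
        private readonly IDictionary<string, string> raw; // ConfigOption -> Value

        public AppConfig(IDictionary<string, string> rows) { raw = rows; }

        public string CompanyName     => Get("CompanyName", s => s, "");
        public bool   StartFullScreen => Get("StartFullScreen", s => s == "true" || s == "1" || s == "Y", false);
        public int    RefreshSeconds  => Get("RefreshSeconds", s => int.Parse(s, CultureInfo.InvariantCulture), 20);

        private T Get<T>(string key, Func<string, T> decode, T fallback)
        {
            // Missing rows fall back to a default, so adding an option
            // needs no schema change and no data migration.
            string value;
            return raw.TryGetValue(key, out value) ? decode(value) : fallback;
        }
    }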

    Read the article

  • What should I use (controls, methods) to make a 2D tile based map editor?

    - by user1306322
    I'm making a 2D game where each tile is a square viewed at a straight angle: no skewing, no rotation, it's pretty simple. Two weeks ago I tried using a DataGridView, but as the number of rows and columns increased it became frustratingly slow. Then I read that I should have expected this, because the control is not designed to work with a large number of cells, and I have at least 7,500 cells in my smallest level, which made it unbearable to use. This is what I expect from my new editor: most importantly, tile type; tile images or their color codes are fine (seeing the map as it is in-game is cool, but the faster, the better). Secondly, all tile parameters (in text, preferably editable in a popup or sidebar). I'm using my own format, so I'm most probably not going to use a third-party product. Besides, I'm trying to learn how to do it myself.
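
    Since DataGridView builds a full-featured grid cell for every tile, a common alternative is a custom control that paints only the tiles intersecting the visible clip region. A minimal WinForms sketch of that idea; the tile size, color palette, and int[,] tile store are assumptions for illustration:

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class TileMapPanel : Panel
    {
        private const int TileSize = 16;          // assumed tile size in pixels
        private readonly int[,] tiles;            // tile type codes
        private readonly Color[] palette = { Color.White, Color.DarkGreen, Color.Gray };

        public TileMapPanel(int cols, int rows)
        {
            tiles = new int[cols, rows];
            AutoScrollMinSize = new Size(cols * TileSize, rows * TileSize);
            // Double-buffer to avoid flicker while painting many rectangles.
            SetStyle(ControlStyles.AllPaintingInWmPaint |
                     ControlStyles.OptimizedDoubleBuffer |
                     ControlStyles.UserPaint, true);
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            e.Graphics.TranslateTransform(AutoScrollPosition.X, AutoScrollPosition.Y);
            // Convert the dirty client rectangle into map coordinates and
            // paint only the tiles that fall inside it.
            Rectangle clip = e.ClipRectangle;
            clip.Offset(-AutoScrollPosition.X, -AutoScrollPosition.Y);
            int x0 = Math.Max(0, clip.Left / TileSize);
            int y0 = Math.Max(0, clip.Top / TileSize);
            int x1 = Math.Min(tiles.GetLength(0) - 1, clip.Right / TileSize);
            int y1 = Math.Min(tiles.GetLength(1) - 1, clip.Bottom / TileSize);
            for (int x = x0; x <= x1; x++)
                for (int y = y0; y <= y1; y++)
                    using (var b = new SolidBrush(palette[tiles[x, y]]))
                        e.Graphics.FillRectangle(b, x * TileSize, y * TileSize, TileSize, TileSize);
        }

        protected override void OnMouseClick(MouseEventArgs e)
        {
            base.OnMouseClick(e);
            int x = (e.X - AutoScrollPosition.X) / TileSize;
            int y = (e.Y - AutoScrollPosition.Y) / TileSize;
            if (x >= 0 && x < tiles.GetLength(0) && y >= 0 && y < tiles.GetLength(1))
            {
                tiles[x, y] = (tiles[x, y] + 1) % palette.Length;  // cycle tile type
                Invalidate(new Rectangle(x * TileSize + AutoScrollPosition.X,
                                         y * TileSize + AutoScrollPosition.Y,
                                         TileSize, TileSize));
            }
        }
    }

    Painting only the clip rectangle keeps redraw cost proportional to the visible area rather than to the 7,500+ cells of the whole level.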

    Read the article

  • How to create a PeopleCode Application Package/Application Class using PeopleTools Tables

    - by Andreea Vaduva
    This article describes how, in PeopleCode (PeopleTools release 8.50), to enable or disable a whole grid without setting each static column, using a dynamic Application Class. The goal is to disable a grid with three columns ("Effort Date", "Effort Amount" and "Charge Back") when the check box "Finished with task" is selected, without referencing each static column; this PeopleCode can be used dynamically with any grid. If the check box "Finished with task" is cleared, the content of the grid columns is editable (and the "+" and "-" buttons are available). So, you create an Application Package "CLASS_EXTENSIONS" that contains an Application Class "EWK_ROWSET". This Application Class is defined with Class extends "Rowset", and you add two new properties, "Enabled" and "Visible". After creating this Application Class, you use it in two PeopleCode events: RowInit and FieldChange. The code is very 'simple': you write only one command, "&ERS2.Enabled = False", and the entire grid is disabled... and you can use this code with any grid! So, the complete PeopleCode to create the Application Package is (with explanations in [...]):

    ****** Package CLASS_EXTENSIONS [Name of the Package: CLASS_EXTENSIONS]

    -- Beginning of the declaration part --------------------------------------
    class EWK_ROWSET extends Rowset;                 [Definition of Class EWK_ROWSET as a subclass of Class Rowset]
       method EWK_ROWSET(&RS As Rowset);             [The Constructor is the Method with the same name as the Class]
       property boolean Visible get set;
       property boolean Enabled get set;             [Definition of the property "Enabled" in read/write]
    private                                          [Before the word "private", all the declarations are public]
       method SetDisplay(&DisplaySW As boolean, &PropName As string, &ChildSW As boolean);
       instance boolean &EnSW;
       instance boolean &VisSW;
       instance Rowset &NextChildRS;
       instance Row &NextRow;
       instance Record &NextRec;
       instance Field &NextFld;
       instance integer &RowCnt, &RecCnt, &FldCnt, &ChildRSCnt;
       instance integer &i, &j, &k;
       instance CLASS_EXTENSIONS:EWK_ROWSET &ERSChild;   [For recursion]
       Constant &VisibleProperty = "VISIBLE";
       Constant &EnabledProperty = "ENABLED";
    end-class;
    -- End of the declaration part --------------------------------------------

    method EWK_ROWSET                                [The Constructor]
       /+ &RS as Rowset +/
       %Super = &RS;
    end-method;

    get Enabled
       /+ Returns Boolean +/
       Return &EnSW;
    end-get;

    set Enabled
       /+ &NewValue as Boolean +/
       &EnSW = &NewValue;
       %This.InsertEnabled = &EnSW;
       %This.DeleteEnabled = &EnSW;
       %This.SetDisplay(&EnSW, &EnabledProperty, False);   [This method is called when you set this property]
    end-set;

    get Visible
       /+ Returns Boolean +/
       Return &VisSW;
    end-get;

    set Visible
       /+ &NewValue as Boolean +/
       &VisSW = &NewValue;
       %This.SetDisplay(&VisSW, &VisibleProperty, False);
    end-set;

    method SetDisplay                                [The most important PeopleCode Method]
       /+ &DisplaySW as Boolean, +/
       /+ &PropName as String, +/
       /+ &ChildSW as Boolean +/
       &RowCnt = %This.ActiveRowCount;
       &NextRow = %This.GetRow(1);                   [To know the structure of a row]
       &RecCnt = &NextRow.RecordCount;
       For &i = 1 To &RowCnt                         [Loop over each Row]
          &NextRow = %This.GetRow(&i);
          For &j = 1 To &RecCnt                      [Loop over each Record]
             &NextRec = &NextRow.GetRecord(&j);
             &FldCnt = &NextRec.FieldCount;
             For &k = 1 To &FldCnt                   [Loop over each Field of the Record]
                &NextFld = &NextRec.GetField(&k);
                Evaluate Upper(&PropName)
                When = &VisibleProperty
                   &NextFld.Visible = &DisplaySW;
                   Break;
                When = &EnabledProperty
                   &NextFld.Enabled = &DisplaySW;    [Enable or disable each Field of the Record]
                   Break;
                When-Other
                   Error "Invalid display property; Must be either VISIBLE or ENABLED"
                End-Evaluate;
             End-For;
          End-For;
          If &ChildSW = True Then                    [If recursion]
             &ChildRSCnt = &NextRow.ChildCount;
             For &j = 1 To &ChildRSCnt               [Loop over each child Rowset]
                &NextChildRS = &NextRow.GetRowset(&j);
                &ERSChild = create CLASS_EXTENSIONS:EWK_ROWSET(&NextChildRS);
                &ERSChild.SetDisplay(&DisplaySW, &PropName, &ChildSW);
                [For each child Rowset, call Method SetDisplay with the same parameters used for the parent Rowset]
             End-For;
          End-If;
       End-For;
    end-method;

    ****** End of the Package CLASS_EXTENSIONS

    About the Author: Pascal Thaler joined Oracle University in 2005, where he is a Senior Instructor. His area of expertise is Oracle PeopleSoft technology, and he delivers the following courses. For Developers: PeopleTools Overview, PeopleTools I & II, Batch Application Engine, Object-Oriented PeopleCode Language, Administration Security. For Administrators: Server Administration & Installation, Database Upgrade & Data Management Tools. For Interface Users: Integration Broker (Web Service).

    Read the article

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to AUTOMATIC by default in DB2 v9.7 so that DB2 can adjust their values as needed. Most work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance. DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 allocates the small page size (64KB) for all memory allocations, and expands and shrinks the memory as needed. In order to take advantage of the large page sizes (up to 256MB) supported by the SPARC T3, we need to set the size of DATABASE_MEMORY manually so that DB2 can use 256MB pages for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process. NUM_IOCLEANERS: This parameter defines the number of page cleaners. Its default value is AUTOMATIC, calculated from the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. That leads to too many page cleaners competing to flush to disk and causes aio mutex lock contention, so we need to decrease the value. A good practice is to set it to the number of physical devices used by the database table space containers.

    Read the article

  • Securing data inside Azure SQL? Any good libraries or DIY?

    - by Sid
    Azure SQL doesn't support many of the encryption features found in SQL Server (table and column encryption). We need to store some sensitive information encrypted, and we've rolled our own using AesCryptoServiceProvider to encrypt/decrypt data to/from the database. This solves the immediate issue (no cleartext in the db) but poses other problems, like key rotation (we have to roll our own code for this, walking through the db converting old ciphertext into new ciphertext) and metadata mapping of which tables and which columns are encrypted (this is simple when there are a few, but it quickly gets out of hand). So are there any libraries out there that do this well? Any other resources or design patterns I can be pointed to?
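
    Not a library recommendation, but for reference, a minimal sketch of the roll-your-own approach with a one-byte key-version header prepended to each value; the header lets a rotation job find rows encrypted under an old key and re-encrypt them incrementally. The key registry here is a placeholder, assuming real key material would come from a vault or protected config:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Security.Cryptography;
    using System.Text;

    public static class ColumnCrypto
    {
        // Blob layout: [0] key version, [1..16] IV, [17..] AES-CBC ciphertext.
        public static byte[] Encrypt(string plaintext, byte keyVersion)
        {
            using (var aes = new AesCryptoServiceProvider())
            {
                aes.Key = GetKey(keyVersion);
                aes.GenerateIV(); // fresh IV per value, stored alongside the ciphertext
                using (var enc = aes.CreateEncryptor())
                {
                    byte[] data = Encoding.UTF8.GetBytes(plaintext);
                    byte[] cipher = enc.TransformFinalBlock(data, 0, data.Length);
                    using (var ms = new MemoryStream())
                    {
                        ms.WriteByte(keyVersion);
                        ms.Write(aes.IV, 0, aes.IV.Length);
                        ms.Write(cipher, 0, cipher.Length);
                        return ms.ToArray();
                    }
                }
            }
        }

        public static string Decrypt(byte[] blob)
        {
            byte keyVersion = blob[0]; // old rows decrypt with their original key
            using (var aes = new AesCryptoServiceProvider())
            {
                aes.Key = GetKey(keyVersion);
                byte[] iv = new byte[16];
                Array.Copy(blob, 1, iv, 0, 16);
                aes.IV = iv;
                using (var dec = aes.CreateDecryptor())
                {
                    byte[] plain = dec.TransformFinalBlock(blob, 17, blob.Length - 17);
                    return Encoding.UTF8.GetString(plain);
                }
            }
        }

        // Hypothetical key registry: in production, resolve key material by
        // version from a key vault; the hard-coded value is a placeholder only.
        private static readonly Dictionary<byte, byte[]> Keys = new Dictionary<byte, byte[]>
        {
            { 1, Convert.FromBase64String("q83vEjRWeJCrze8SNFZ4kKvN7xI0VniQq83vEjRWeJA=") }
        };
        private static byte[] GetKey(byte version) => Keys[version];
    }

    A rotation job then only has to scan for rows whose first byte is an old version number, rather than tracking rotation state elsewhere.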

    Read the article

  • Why isn't my lighting working properly? Are my normals messed up?

    - by Radek Slupik
    I'm relatively new to OpenGL and I am trying to draw a 3D model (loaded from a 3ds file using lib3ds) using OpenGL with lighting, but about half of it is drawn in black. I set up the light as such:

    glEnable(GL_LIGHTING);
    glShadeModel(GL_SMOOTH);
    GLfloat ambientColor[] = {0.2f, 0.2f, 0.2f, 1.0f};
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, ambientColor);
    glEnable(GL_LIGHT0);
    GLfloat lightColor0[] = {1.0f, 1.0f, 1.0f, 1.0f};
    GLfloat lightPos0[] = {4.0f, 0.0f, 8.0f, 0.0f};
    glLightfv(GL_LIGHT0, GL_DIFFUSE, lightColor0);
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos0);

    The model is in a VBO and drawn using glDrawArrays. The normals are in a separate VBO, and they are calculated using lib3ds_mesh_calculate_vertex_normals:

    std::vector<std::array<float, 3>> normals;
    for (std::size_t i = 0; i < model->nmeshes; ++i) {
        auto& mesh = *model->meshes[i];
        std::vector<float[3]> vertex_normals(mesh.nfaces * 3);
        lib3ds_mesh_calculate_vertex_normals(&mesh, vertex_normals.data());
        for (std::size_t j = 0; j < mesh.nfaces; ++j) {
            auto& face = mesh.faces[j];
            normals.push_back(make_array(vertex_normals[j]));
        }
    }
    glBindBuffer(GL_ARRAY_BUFFER, normal_vbo_);
    glBufferData(GL_ARRAY_BUFFER, normals.size() * sizeof(decltype(normals)::value_type),
                 normals.data(), GL_STATIC_DRAW);

    The problem isn't the vertices; the model is drawn correctly in wireframe mode. I also fixed the normals in Blender using Ctrl+N. What could be the problem? Should I store the normals in a different order?

    Read the article

  • Tips for adapting Date table to Power View forecasting #powerview #powerbi

    - by Marco Russo (SQLBI)
    During the keynote of the PASS Business Analytics Conference, Amir Netz presented the new forecasting capabilities in Power View for Office 365. I tried the new feature right away (it was immediately available, a welcome surprise for a Microsoft announcement of a new release) and I ran into several issues trying to use existing data models. Forecasting has a few requirements that are not compatible with the "best practices" commonly used for a calendar table until this announcement. For example, if you have a Year-Month-Day hierarchy and you want to display a line chart aggregating data at the month level, you use a column containing month and year as a string (e.g. May 2014) sorted by a numeric column (such as 201405). Such a column cannot be used on the x-axis of a line chart for forecasting, because you need a date or numeric column. There are other requirements as well, and I wrote the article Prepare Data for Power View Forecasting in Power BI on SQLBI, describing how to create columns that can be used with the new forecasting capabilities in Power View for Office 365.

    Read the article

  • Logic to create common Servlet 3 Login

    - by user3696143
    I am using Servlet 3 login to authenticate users on a website. I have these login options: normal login (fill in the signup form), Facebook login (from the Facebook id), and Twitter login (from Twitter). I already authenticate users with the code below:

    HttpServletRequest request = (HttpServletRequest) FacesContext.getCurrentInstance().getExternalContext().getRequest();
    request.login(username, password);

    This works fine for the website login: the user gives an e-mail id and password, and they are stored in the DB. Now I have modified the table and added more columns to save the Facebook id in the same user table; for Facebook login, the FacebookId works as the password as well. I will do the same for Twitter. But I want the same Servlet 3 mechanism to authenticate the user. How can I achieve it? I have also added a context.xml file inside the META-INF folder:

    <Realm className="org.apache.catalina.realm.JDBCRealm"
           localDataSource="true"
           debug="99"
           connectionName="user"
           connectionPassword="password"
           connectionURL="jdbc:mysql://localhost:3306/ccc"
           digest="md5"
           driverName="com.mysql.jdbc.Driver"
           roleNameCol="role_name"
           userCredCol="password"
           userNameCol="email_id"
           userRoleTable="users_list"
           userTable="user_list_view" />

    Also, is it possible to check which query is fired by the realm entry?

    Read the article

  • Building a Store Locator ASP.NET Application Using Google Maps API (Part 2)

    Last week's article, Building a Store Locator ASP.NET Application Using Google Maps API (Part 1) (http://www.4guysfromrolla.com/articles/051910-1.aspx), was the first in a multi-part article series exploring how to add store locator-type functionality to your ASP.NET website using the free Google Maps API (http://code.google.com/apis/maps/). Part 1 started with an examination of the database used to power the store locator, which contains a single table named Stores with columns capturing the store number, its address and its latitude and ...

    Read the article

  • Best Practice - XML To Excel

    - by MemLeak
    I have to read a big XML file with a lot of information. Afterwards I extract the needed information (~20 points (columns) / ~80 relevant records (rows), some of them with sub-datasets) and write it out to an Excel file. My question is how to handle the extraction of the unused data: should I copy the whole file and delete the unused parts before writing it to Excel, or is it a good approach to create objects for each column? Should I write the whole XML to Excel and start deleting rows in Excel? What would be a performant and acceptable solution?
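
    For comparison, the streaming variant of the "create objects" approach looks roughly like this: read the file forward-only with XmlReader, keep just the wanted fields per record, and never hold the whole document in memory. This sketch assumes .NET (the platform wasn't specified), and the element name "record" plus the column names are placeholders:

    using System.Collections.Generic;
    using System.Xml;

    public static class XmlExtractor
    {
        public static List<Dictionary<string, string>> Extract(string path, ISet<string> wantedColumns)
        {
            var rows = new List<Dictionary<string, string>>();
            using (var reader = XmlReader.Create(path))
            {
                while (reader.Read())
                {
                    if (reader.NodeType != XmlNodeType.Element || reader.Name != "record")
                        continue;
                    var row = new Dictionary<string, string>();
                    using (XmlReader sub = reader.ReadSubtree())
                    {
                        sub.Read();   // the <record> element itself
                        sub.Read();   // its first child
                        while (!sub.EOF)
                        {
                            if (sub.NodeType == XmlNodeType.Element && wantedColumns.Contains(sub.Name))
                                row[sub.Name] = sub.ReadElementContentAsString(); // also advances the reader
                            else
                                sub.Read();
                        }
                    }
                    rows.Add(row);    // one finished row, ready for the Excel writer
                }
            }
            return rows;
        }
    }

    Memory then scales with the ~80 extracted rows rather than with the size of the input file, and the Excel writer only ever sees data it will actually emit.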

    Read the article

  • Discovering path through unknown territory

    - by TravisG
    Let's say all the AI knows about its surroundings is a pixel map which clearly shows walkable terrain and obstacles. I want the AI to be able to traverse this terrain until it finds an exit point. There are some restrictions: There is always a way to the exit somewhere in the map, but there may be dead ends. The path to the exit is always fairly random, meaning that if you stand at a crossroads, nothing indicates which direction is the right one to go. It doesn't matter if the AI reaches a dead end, but it has to be able to walk back out of it to a previously uninspected location and continue its search there. Initially, the AI knows only the starting area of the whole map. As it walks around, new points are added to the pixel map corresponding to the AI's range of sight (think of it as the AI clearing the fog of war). The problem is in 2D space, and all I have is the pixel map. There are no paths in the pixel map which are "too narrow"; the AI fits through everything. It shouldn't be a brute-force solution. E.g. it would be possible to simply find a path (with A*, for example) to each yet-undiscovered pixel in the pixel map, which would lead to the AI discovering new pixels; this could be repeated until the end is reached. The path doesn't have to be the shortest path (that is impossible without knowing the entire map beforehand), but when movements within the visible area are calculated, the shortest and, from a human standpoint, most logical path should be taken (e.g. if you can see a way out of your room into a hallway, you would obviously go there instead of exploring the corner of your current room). What kinds of approaches to this problem are there?
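
    One family of answers is frontier-based exploration: repeatedly walk to the nearest known-walkable cell that borders unexplored territory, which by construction backs out of dead ends on its own and prefers nearby openings over distant ones. A minimal sketch, assuming the pixel map has been coarsened into a walkable/obstacle/unknown grid; a single BFS both finds the nearest frontier and yields the path to it:

    using System.Collections.Generic;

    public static class Explorer
    {
        public enum Cell { Unknown, Walkable, Obstacle }

        // Returns a path from 'start' to the nearest cell bordering unknown
        // territory, or null when everything reachable has been explored.
        public static List<(int x, int y)> PathToNearestFrontier(Cell[,] map, (int x, int y) start)
        {
            int w = map.GetLength(0), h = map.GetLength(1);
            var parent = new Dictionary<(int, int), (int, int)>();
            var queue = new Queue<(int x, int y)>();
            queue.Enqueue(start);
            parent[start] = start;
            var dirs = new[] { (1, 0), (-1, 0), (0, 1), (0, -1) };

            while (queue.Count > 0)
            {
                var cur = queue.Dequeue();
                foreach (var (dx, dy) in dirs)
                {
                    var next = (x: cur.x + dx, y: cur.y + dy);
                    if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h) continue;
                    if (parent.ContainsKey(next)) continue;
                    if (map[next.x, next.y] == Cell.Unknown)
                    {
                        // 'cur' borders unexplored territory: rebuild the path to it.
                        var path = new List<(int x, int y)> { cur };
                        while (cur != start) { cur = parent[cur]; path.Add(cur); }
                        path.Reverse();
                        return path;
                    }
                    if (map[next.x, next.y] == Cell.Walkable)
                    {
                        parent[next] = cur;
                        queue.Enqueue(next);
                    }
                }
            }
            return null; // fully explored; the exit should have been seen by now
        }
    }

    The agent walks the returned path, updates the map from its sight radius, and repeats until the exit enters the known region; because BFS expands by distance, the room-before-hallway behaviour described above falls out naturally.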

    Read the article

  • Rails Easy Data Dumping

    - by Madhan ayyasamy
    Hi friends, with the following useful snippets you can find the easiest way of dumping data in a Ruby on Rails environment. You'll often need to get data from production to dev, or dev to your local machine, or your local to another developer's local. One plug-in we use over and over is yaml_db. This nifty little plug-in enables you to dump or load data by issuing a Rake command. The data is persisted in a yaml file located in db/data.yml. This is very portable and easy to read if you need to examine the data.

    rake db:data:dump

    Example data found in db/data.yml:

    ---
    campaigns:
      columns:
      - id
      - client_id
      - name
      - created_at
      - updated_at
      - token
      records:
      - - "1"
        - "1"
        - First push
        - 2008-11-03 18:23:53
        - 2008-11-03 18:23:53
        - 3f2523f6a665
      - - "2"
        - "2"
        - First push
        - 2008-11-03 18:26:57
        - 2008-11-03 18:26:57
        - 9ee8bc427d94

    Read the article

  • Demystifying "chunked level of detail"

    - by Caius Eugene
    I've just recently been trying to make sense of implementing a chunked level-of-detail system in Unity. I'm going to be generating four mesh planes, each with a height map, but I guess that isn't too important at the moment. I have a lot of questions after reading up on this technique; I hope this isn't too much to ask all in one go, but I would be extremely grateful for someone to help me make sense of it. 1: I can't understand at which point down the chunked LOD pipeline the mesh gets split into chunks. Is this during the initial mesh generation, or is there a separate algorithm which does this? 2: I understand that a quadtree data structure is used to store the chunked LOD data. I think I'm missing the point a bit, but is the quadtree storing vertex and triangle data for each subdivision level? 3a: How is the camera distance usually calculated? When reading up on quadtrees, axis-aligned bounding boxes are mentioned a lot. In this case would each chunk have a collision bounding box to detect when the camera or player is nearby? Or is there a better way of doing this? (A raycast maybe?) 3b: Do the chunks calculate the camera distance themselves? 4: Does each chunk have the same "resolution"? For example, at the top level the mesh will be 32x32; will each subdivided node also be 32x32? (A sketch touching on 3a and 4 follows below.)
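
    Regarding 3a and 4, a common arrangement is that every quadtree node carries a fixed-resolution chunk (say 32x32 vertices) plus its ground-plane bounds, and a traversal picks the level by plain point-to-bounds distance, with no colliders or raycasts involved. A minimal sketch, where the split threshold and the mesh representation are assumptions for illustration:

    using System;

    public class QuadNode
    {
        public float MinX, MinZ, MaxX, MaxZ;   // node bounds on the ground plane
        public QuadNode[] Children;            // null at the deepest level
        public object Mesh;                    // 32x32 chunk mesh at this node's detail

        // Distance from the camera to the nearest point of the node's bounds
        // (zero when the camera is inside them).
        float DistanceTo(float camX, float camZ)
        {
            float dx = Math.Max(Math.Max(MinX - camX, 0), camX - MaxX);
            float dz = Math.Max(Math.Max(MinZ - camZ, 0), camZ - MaxZ);
            return (float)Math.Sqrt(dx * dx + dz * dz);
        }

        // Draw this node if it is far enough away for its detail level;
        // otherwise descend so nearer terrain gets finer chunks.
        public void SelectLod(float camX, float camZ, float splitFactor, Action<object> draw)
        {
            float size = MaxX - MinX;
            bool closeEnoughToSplit = DistanceTo(camX, camZ) < size * splitFactor;
            if (Children != null && closeEnoughToSplit)
                foreach (var child in Children)
                    child.SelectLod(camX, camZ, splitFactor, draw);
            else
                draw(Mesh);   // render the whole square at this node's resolution
        }
    }

    Because every node's chunk has the same vertex count but covers a quarter of its parent's area, descending one level doubles the effective resolution, which is what makes the distance test alone sufficient.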

    Read the article

  • Use TV (hdmi) with non-square pixels

    - by labsin
    I am having a problem when I connect my LG plasma TV (with a native resolution of 1024x768 pixels) to my 12.04 laptop. The pixels of my TV (the actual pixels, not the signal) are stretched so it gets its 16:9 ratio; the pixels are rectangular (1.3333x1). Everything I display from my laptop obviously gets stretched (4:3 stretched to 16:9). A different dpi in X and Y is needed for it to display properly (some kind of anamorphic mode). By default Ubuntu uses a dpi of 96x96. I can change it using xrandr, but only to square values, e.g. 100x100 or 70x70. I already looked here, but it seems Ubuntu totally ignores the DisplaySize in xorg.conf. When I use the code below to see the dpi, nothing I do changes it; the DisplaySize also stays the same (calculated using 96 dpi and the resolution):

    xdpyinfo | grep -B2 resolution

    I use the proprietary ATI drivers for my ATI Mobility Radeon HD 50xx, but it is the same with the Radeon drivers. My temporary solution is to use:

    xrandr --output DFP1 --mode 1024x768 --scale 1.333333333333x1 --output LVDS --off

    But with this the right side of the screen is not accessible. This is a known problem with xrandr --scale and Ubuntu, caused by a patch that keeps the mouse/windows from going outside the screen. I'm searching for a way to change the DisplaySize or the dpi (to something non-square like 128x96) when I connect the display.

    Read the article

  • Calendar like tool for managing multiple clients simultaneously?

    - by Yuji Tomita
    I've tried several systems to keep my clients' requests and work organized, but somehow things still fall through the cracks. I don't like the idea of a bug-tracker-like site for every client (I would not check them all). Ideally, it's all on one page but separated by client. What systems do you use? I'm about to just use a spreadsheet with columns for clients. Just curious if there's something better out there :) I've seen Smartsheet in action, which is basically a really nice spreadsheet that shows bars of time between things that are due. It looks promising.

    Read the article

  • Working with packed dates in SSIS

    - by Jim Giercyk
    One of the challenges recently thrown my way was to read an EBCDIC flat file, decode packed dates, and insert the dates into a SQL table. For those unfamiliar with packed data, it is a way to store data at the nibble level (half a byte), and was often used by mainframe programmers to conserve storage space. In the case of my input file, the dates were 2 bytes long and represented the number of days that have passed since 01/01/1950. My first thought was, in the words of Scooby, Hmmmmph? But I love a good challenge, so I dove in. Reading in the flat file was rather simple. The only difference between reading an EBCDIC and an ASCII file is the Code Page option in the connection manager. In my case, I needed to use Code Page 1140 for EBCDIC (I could have also used Code Page 37). Once the code page is set correctly, SSIS can understand what it is reading and will convert the output to the default code page, 1252. However, packed data is either unreadable or produces non-alphabetic characters, as we can see in the preview window. Column 1 is actually the packed date; columns 0 and 2 are the values in the rest of the file. We are only interested in Column 1, which is a 2-byte field representing a packed date. Since packed data stores two digits per byte, we are working with 4 packed digits in 2 bytes. If you are confused, stay tuned... this will make sense in a minute. Right-click on your Flat File Source shape and select "Show Advanced Editor". Here is where the magic begins. By changing the properties of the output columns, we can access the packed digits from each byte. By default, the Output Column data type is DT_STR. Since we want to look at the bytes individually and not the entire string, change the data type to DT_BYTES. Next, and most important, set UseBinaryFormat to TRUE. This will write the HEX VALUES of the output string instead of the character values. Now we are getting somewhere! Next, you will need to use a Data Conversion shape in your Data Flow to transform the 2-position byte stream into a 4-position Unicode string containing the packed data. You need the string to be 4 bytes long because it will contain the 4 packed digits. Direct the output of your data flow to a test table or file to see the results; in my case, I created a test table. Hold on a second! That doesn't look like a date at all. No, of course not. It is a hex number which represents the days which have passed between 01/01/1950 and the date. We have to convert the hex value to a decimal value, and use the DATEADD function to get a date value.
    Luckily, I have created a function to convert Hex to Decimal:

    -- =============================================
    -- Author:      Jim Giercyk
    -- Create date: March, 2012
    -- Description: Converts a Hex string to a decimal value
    -- =============================================
    CREATE FUNCTION [dbo].[ftn_HexToDec]
    (
        @hexValue NVARCHAR(6)
    )
    RETURNS DECIMAL
    AS
    BEGIN
        DECLARE @decValue DECIMAL

        IF @hexValue LIKE '0x%' SET @hexValue = SUBSTRING(@hexValue, 3, 4)

        DECLARE @decTab TABLE
        (
            decPos1 VARCHAR(2),
            decPos2 VARCHAR(2),
            decPos3 VARCHAR(2),
            decPos4 VARCHAR(2)
        )

        DECLARE @pos1 VARCHAR(1) = SUBSTRING(@hexValue, 1, 1)
        DECLARE @pos2 VARCHAR(1) = SUBSTRING(@hexValue, 2, 1)
        DECLARE @pos3 VARCHAR(1) = SUBSTRING(@hexValue, 3, 1)
        DECLARE @pos4 VARCHAR(1) = SUBSTRING(@hexValue, 4, 1)

        INSERT @decTab
        VALUES (CASE WHEN @pos1 = 'A' THEN '10' WHEN @pos1 = 'B' THEN '11'
                     WHEN @pos1 = 'C' THEN '12' WHEN @pos1 = 'D' THEN '13'
                     WHEN @pos1 = 'E' THEN '14' WHEN @pos1 = 'F' THEN '15'
                     ELSE @pos1 END,
                CASE WHEN @pos2 = 'A' THEN '10' WHEN @pos2 = 'B' THEN '11'
                     WHEN @pos2 = 'C' THEN '12' WHEN @pos2 = 'D' THEN '13'
                     WHEN @pos2 = 'E' THEN '14' WHEN @pos2 = 'F' THEN '15'
                     ELSE @pos2 END,
                CASE WHEN @pos3 = 'A' THEN '10' WHEN @pos3 = 'B' THEN '11'
                     WHEN @pos3 = 'C' THEN '12' WHEN @pos3 = 'D' THEN '13'
                     WHEN @pos3 = 'E' THEN '14' WHEN @pos3 = 'F' THEN '15'
                     ELSE @pos3 END,
                CASE WHEN @pos4 = 'A' THEN '10' WHEN @pos4 = 'B' THEN '11'
                     WHEN @pos4 = 'C' THEN '12' WHEN @pos4 = 'D' THEN '13'
                     WHEN @pos4 = 'E' THEN '14' WHEN @pos4 = 'F' THEN '15'
                     ELSE @pos4 END)

        SET @decValue = (CONVERT(INT, (SELECT decPos4 FROM @decTab)))
                      + (CONVERT(INT, (SELECT decPos3 FROM @decTab)) * 16)
                      + (CONVERT(INT, (SELECT decPos2 FROM @decTab)) * (16 * 16))
                      + (CONVERT(INT, (SELECT decPos1 FROM @decTab)) * (16 * 16 * 16))

        RETURN @decValue
    END
    GO

    Making use of the function, I found the decimal conversion, added that number of days to 01/01/1950, and FINALLY arrived at my "unpacked relative date". Here is the query I used to retrieve the formatted date, and the result set which was returned:

    SELECT [packedDate] AS 'Hex Value',
           dbo.ftn_HexToDec([packedDate]) AS 'Decimal Value',
           CONVERT(DATE, DATEADD(day, dbo.ftn_HexToDec([packedDate]), '01/01/1950'), 101) AS 'Relative String Date'
    FROM   [dbo].[Output Table]

    This technique can be used any time you need to retrieve the hex value of a character string in SSIS. The date example may be a bit difficult to understand at first, but with SSIS becoming the preferred tool for enterprise-level integration for many companies, there is no doubt that developers will encounter these types of requirements with regularity in the future. Please feel free to contact me if you have any questions.

    Read the article

  • 2 min video about the SQL_Compare

    - by CatherineRussell
    It is nice to start blogging again! I am working on a new project in a small company now. We do not have a full-time database admin, so I have to cover multiple roles: gathering requirements, writing docs and creating diagrams, designing the app, writing code, testing, and playing the DBA role. I am not a DBA, but I have to make day-to-day database changes: adding new columns and tables. Check out the 2-minute video about SQL Compare. This tool saves time by automatically comparing and synchronizing database schemas; it eliminates mistakes when migrating database changes from dev, to test, to production; speeds up the deployment of new database schema updates; generates T-SQL scripts to update one database to match the schema of another; finds and fixes errors caused by differences between databases; and keeps an accurate history of all previous database records. http://www.red-gate.com/products/SQL_Compare/index.htm

    Read the article

  • Columnstore Case Study #2: Columnstore faster than SSAS Cube at DevCon Security

    - by aspiringgeek
    Preamble: This is the second in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014. Many of these can be found in my big deck along with details such as internals, best practices, caveats, etc. The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. See also Columnstore Case Study #1: MSIT SONAR Aggregations.

    Why Columnstore? As stated previously, if we're looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we're asking a question which by design needs to hit lots of rows (DW, reporting, aggregations, grouping, scans, etc.), SQL Server has never had a good mechanism until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps. In SQL Server 2014, potential blockers have been largely removed and they're going to profoundly change the way we interact with our data. The purpose of this series is to share the performance benefits of columnstore; documenting columnstore is a compelling reason to upgrade to SQL Server 2014.

    The Customer: DevCon Security provides home & business security services & has been in business for 135 years. I met DevCon personnel while speaking to the Utah County SQL User Group on 20 February 2012. (Thanks to TJ Belt (b|@tjaybelt) & Ben Miller (b|@DBADuck) for the invitation, which serendipitously coincided with the height of ski season.)

    The App: DevCon Security Reporting: Optimized & Ad Hoc Queries. DevCon users interrogate a SQL Server 2012 Analysis Services cube via SSRS. In addition, the SQL Server 2012 relational back end is the target of ad hoc queries; this DW back end is refreshed nightly during a brief maintenance window via conventional table partition switching.

    SSRS, SSAS, & MDX: Conventional relational structures were unable to provide adequate performance for user interaction for the SSRS reports. An SSAS solution was implemented, requiring personnel to ramp up technically, including learning enough MDX to satisfy requirements.

    Ad Hoc Queries: Even though the fact table is relatively small (only 22 million rows & 33GB), the table was a typical DW table in terms of its width: 137 columns, any of which could be the target of ad hoc interrogation. As is common in DW reporting scenarios such as this, it is often nearly impossible to optimize for such queries using conventional indexing. DevCon DBAs & developers attended PASS 2012 & were introduced to the marvels of columnstore in a session presented by Klaus Aschenbrenner (b|@Aschenbrenner).

    The Details: Classic vs. columnstore before-&-after metrics are impressive:

    Scenario      | Conventional Structures           | Columnstore   | Improvement
    --------------+-----------------------------------+---------------+------------
    SSRS via SSAS | 10 - 12 seconds                   | 1 second      | >10x
    Ad Hoc        | 5 - 7 minutes (300 - 420 seconds) | 1 - 2 seconds | >100x

    Here are two charts characterizing this data graphically. The first is a linear representation of Report Duration (in seconds) for Conventional Structures vs. Columnstore Indexes. As is so often the case when we chart such significant deltas, the linear scale doesn't expose some of the dramatically improved values corresponding to the columnstore metrics. Just to make it fair, here's the same data represented logarithmically; yet even here the values corresponding to 1 - 2 seconds aren't visible.
    The Wins. Performance: Even prior to the columnstore implementation, at 10 - 12 seconds, canned report performance against the SSAS cube was tolerable; yet the 1-second performance afterward is clearly better. As significant as that is, imagine the user experience re: ad hoc interrogation. The difference between several minutes vs. one or two seconds is a game changer, literally changing the way users interact with their data: no mental context switching, no wondering when the results will appear, no preoccupation with the spinning mind-numbing hurry-up-&-wait indicators. As we've commonly found elsewhere, columnstore indexes here provided performance improvements of one, two, or more orders of magnitude.

    Simplified Infrastructure: Because in this case a nonclustered columnstore index on a conventional DW table was faster than an Analysis Services cube, the entire SSAS infrastructure was rendered superfluous & was retired.

    PASS Rocks: Once again, the value of attending PASS is proven out. The trip to Charlotte combined with eager & enquiring minds led directly to this success story. Find out more about the next PASS Summit here, hosted this year in Seattle on November 4 - 7, 2014. DevCon BI Team Lead Nathan Allan provided this unsolicited feedback: "What we found was pretty awesome. It has been a game changer for us in terms of the flexibility we can offer people that would like to get to the data in different ways."

    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing. I have documented here, in the second of a series of reports on columnstore implementations, results from DevCon Security, a live customer production app for which performance increased by factors of 10x to 100x for all report queries, including canned queries, as well as reducing the time to results for ad hoc queries from 5 - 7 minutes to 1 - 2 seconds. As a result of columnstore performance, the customer retired their SSAS infrastructure. I invite you to consider leveraging columnstore in your own environment. Let me know if you have any questions.

    Read the article

  • Logarithmic spacing of FFT bins

    - by Mykel Stone
    I'm trying to do the examples from the GameDev.net beat detection article (http://archive.gamedev.net/archive/reference/programming/features/beatdetection/index.html). I have no issue performing an FFT and getting the frequency data, and I can do most of the article, but I'm running into trouble in section 2.B, "Enhancements and beat decision factors". In this section the author gives three equations, numbered R10-R12, to be used to determine how many bins go into each subband: R10 - linear increase of the width of the subband with its index; R11 - we can choose, for example, the width of the first subband; R12 - the sum of all the widths must not exceed 1024. He says the following in the article: "Once you have equations (R11) and (R12) it is fairly easy to extract 'a' and 'b', and thus to find the law of the 'wi'. This calculus of 'a' and 'b' must be made manually and 'a' and 'b' defined as constants in the source; indeed they do not vary during the song." However, I cannot seem to understand how these values are calculated. I'm probably missing something simple, but learning Fourier analysis in a couple of weeks has left me Decimated-in-Mind and I cannot seem to see it.
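
    For what it's worth, here is one way to work the algebra, assuming (as the article implies) a linear width law over N subbands and a 1024-bin spectrum; the particular N and w_1 at the end are only an example:

    \begin{aligned}
    w_i &= a\,i + b && \text{(R10: linear increase with index)} \\
    w_1 &= a + b && \text{(R11: chosen first width)} \\
    \sum_{i=1}^{N} w_i &= a\,\frac{N(N+1)}{2} + bN = 1024 && \text{(R12: total bin budget)}
    \end{aligned}

    Substituting $b = w_1 - a$ into (R12):

    \[
    a\,\frac{N(N-1)}{2} + N w_1 = 1024
    \quad\Longrightarrow\quad
    a = \frac{2\,(1024 - N w_1)}{N(N-1)}, \qquad b = w_1 - a
    \]

    For example, with $N = 32$ subbands and a first width of $w_1 = 2$ bins: $a = \frac{2\,(1024 - 64)}{32 \cdot 31} = \frac{1920}{992} \approx 1.935$ and $b \approx 0.065$, so the widths grow linearly and sum to 1024 as required.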

    Read the article

  • FAQ: Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
    A couple of months ago I wrote a simple demo about "Highlighting a GridView Row on MouseOver". I've noticed many members in the forums (http://forums.asp.net) asking how to highlight a row in a GridView and retain the selected row across postbacks, so I've decided to write this post to demonstrate how to implement it, as a reference for others who might need it. In this demo I'm going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid with data, because I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.Net way, or this one: GridView Custom Paging with LINQ. To get started, let's implement the highlighting of a GridView row on row click and retain the selected row on postback. For simplicity I set up the page like this:

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2>
        <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField>
        <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField>
        <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" />
        <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound">
            <Columns>
                <asp:BoundField DataField="Company" HeaderText="Company" />
                <asp:BoundField DataField="Name" HeaderText="Name" />
                <asp:BoundField DataField="Title" HeaderText="Title" />
                <asp:BoundField DataField="Address" HeaderText="Address" />
            </Columns>
        </asp:GridView>
    </asp:Content>

    Note: Since the action is done at the client side, when we do a postback (like clicking on a button) the page will be re-created and you will lose the highlighted row. This is normal, because the server doesn't know anything about the client/browser unless you do something to notify it that something has changed. To persist the settings we will use HiddenField controls to store the data, so that on postback we can reference the values from there.

    Now here are the JavaScript functions:

    <asp:content id="Content1" runat="server" contentplaceholderid="HeadContent">
        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
        <script type="text/javascript">
            var prevRowIndex;

            function ChangeRowColor(row, rowIndex) {
                var parent = document.getElementById(row);
                var currentRowIndex = parseInt(rowIndex) + 1;

                if (prevRowIndex == currentRowIndex) {
                    return;
                }
                else if (prevRowIndex != null) {
                    parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF";
                }

                parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6";
                prevRowIndex = currentRowIndex;

                $('#<%= Label1.ClientID %>').text(currentRowIndex);
                $('#<%= hfParentContainer.ClientID %>').val(row);
                $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex);
            }

            $(function () {
                RetainSelectedRow();
            });

            function RetainSelectedRow() {
                var parent = $('#<%= hfParentContainer.ClientID %>').val();
                var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val();
                if (parent != null) {
                    ChangeRowColor(parent, currentIndex);
                }
            }
        </script>
    </asp:content>

    The ChangeRowColor() function sets the background color of the selected row. It is also where we store the previous row and rowIndex values in the HiddenFields. The $(function(){}); is shorthand for the jQuery document.ready event. This event fires once the page is posted back to the server, which is why we call the RetainSelectedRow() function there. The RetainSelectedRow() function reads the currently selected values stored in the HiddenFields and passes them to the ChangeRowColor() function to retain the highlighted row. Finally, here's the code-behind part:

    protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e)
    {
        if (e.Row.RowType == DataControlRowType.DataRow)
        {
            e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex));
        }
    }

    The code above attaches the JavaScript onclick event to each row and calls the ChangeRowColor() function, passing e.Row.ClientID and e.Row.RowIndex to it. That's it! I hope someone finds this post useful! Technorati Tags: jQuery,GridView,JavaScript,TipTricks

    Read the article
