Search Results

Search found 1041 results on 42 pages for 'formula'.


  • Python: need to get energies of charge pairs.

    - by Two786
    I am new to Python. I have to make a program for a project that takes a PDB-format file as input and returns a list of all the intra-chain and inter-chain charge pairs and their energies (using Coulomb's law, assuming a dielectric constant (ε) of 40.0). For simplicity, the charged residues for this program are just Arg (CZ), Lys (NZ), Asp (CG) and Glu (CD), with the charge-bearing atom for each indicated in parentheses. The program should report any attractive or repulsive interactions within 8.0 Å. Here is some additional information needed for the program:
    Eij = energy of interaction between atoms i and j in kilocalories/mole (kcal/mol)
    qi = charge for atom i (+1 for Lys or Arg, -1 for Glu or Asp)
    rij = distance between atoms i and j in angstroms, using the distance formula
    The output should adhere to the following format:
    First residue          Second residue         Distance      Energy
    Lys 10 Chain A         ASP 46 Chain A         D= 4.76 ang   E= -2.32 kcals/mol
    (The first row should be labels, and below it the corresponding values.) I really have no idea how to tackle this problem; any and all help is greatly appreciated. I hope this is the right place to ask. Thank you in advance. Using Python 2.5.
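    A minimal sketch of the core calculation (not from the original post), assuming the charge-bearing atoms have already been parsed out of the PDB file, and assuming the usual 332 kcal·Å/mol Coulomb conversion constant for unit charges; both the constant and the data layout are my assumptions:

        import math

        COULOMB_K = 332.0    # assumed kcal*A/mol conversion constant for unit charges
        DIELECTRIC = 40.0    # dielectric constant given in the assignment
        CUTOFF = 8.0         # report interactions within 8.0 angstroms

        def distance(a, b):
            # a and b are (x, y, z) tuples in angstroms
            return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

        def pair_energy(qi, qj, rij):
            # Coulomb's law with a constant dielectric
            return COULOMB_K * qi * qj / (DIELECTRIC * rij)

        def report(charges):
            # charges: list of (label, charge, (x, y, z)) built while reading the PDB;
            # charge is +1 for Lys/Arg and -1 for Glu/Asp
            print "%-22s %-22s %-13s %s" % ("First residue", "Second residue", "Distance", "Energy")
            for i in range(len(charges)):
                for j in range(i + 1, len(charges)):
                    label_i, qi, ci = charges[i]
                    label_j, qj, cj = charges[j]
                    r = distance(ci, cj)
                    if r <= CUTOFF:
                        print "%-22s %-22s D= %.2f ang E= %.2f kcals/mol" % (
                            label_i, label_j, r, pair_energy(qi, qj, r))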


  • How come JFrame window size in Java does not produce the size of window specified?

    - by typoknig
    Hi all, I am just messing around trying to make a game right now, but I have had this problem before too. When I specify a specific window size (1024 x 768, for instance), the window produced is just a little larger than what I specified. Very annoying. Is there a reason for this? How do I correct it so the window created is actually the size I want, instead of being just a little bit bigger? Up till now I have always just gone back and manually adjusted the size a few pixels at a time until I got the result I wanted, but that is getting old. If there were even a formula I could use that would tell me how many pixels I needed to add/subtract from my variable, that would be excellent! P.S. I don't know if my OS could be a factor in this, but I am using W7X64.

        private int windowWidth = 1024;
        private int windowHeight = 768;

        public SomeWindow() {
            this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            this.setSize(windowWidth, windowHeight);
            this.setResizable(false);
            this.setLocation(0, 0);
            this.setVisible(true);
        }


  • How can I determine the "correct" number of steps in a questionnaire where branching is used?

    - by Mike Kingscott
    I have a potential maths/formula/algorithm question which I would like help on. I've written a questionnaire application in ASP.NET that takes people through a series of pages. I've introduced conditional processing, or branching, so that some pages can be skipped depending on an answer; e.g., if you're over a certain age, you will skip the page that has Teen Music Choice and go straight to the Golden Oldies page. I wish to display how far along the questionnaire someone is (as a percentage). Let's say I have 10 pages to go through and my first answer takes me straight to page 9. Technically, I'm now 90% of the way through the questionnaire, but the user can think of themselves as being on page 2 of 3: the start page (with the branching question), page 9, then the end page (page 10). How can I show that I'm on 66% and not 90% when I'm on page 9 of 10? For further information: each page can have a number of questions on it, each of which can have one or more conditions that will send the user to another page. By default, the next page will be the next one in the collection, but that can be overridden (e.g., entire sets of pages can be skipped). Any thoughts? :-s
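    A sketch of one way to compute this (not from the original post): treat progress as position along the path the user is actually on, i.e. the pages already answered plus the default path from the current page to the end. The next_page helper here is hypothetical; it would follow a page's default "next" pointer.

        def remaining_count(page, next_page):
            # Count the current page plus every page on the default path to the end.
            count = 0
            while page is not None:
                count += 1
                page = next_page(page)
            return count

        def percent_complete(pages_answered, current_page, next_page):
            done = len(pages_answered)                       # pages behind the user
            left = remaining_count(current_page, next_page)  # current page through the end
            return 100.0 * (done + 1) / (done + left)

        # The example from the question: start page answered, now on page 9,
        # default path 9 -> 10, so done = 1, left = 2, giving 66.7% rather than 90%.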


  • finding the maximum in series

    - by peril brain
    I need code that will automatically:
    1. search for a specific word in Excel
    2. note its row or column number (depends on the data arrangement)
    3. search for numeric values in the respective row or column (suppose a[7][0] or a[0][7])
    4. compare all other values of the respective row or column (i.e. a[i][0] or a[0][i])
    5. set that value to the highest value, only if IT HAS GOT NO FORMULA FOR DERIVATION
    I know most of the coding, but at a few places I got stuck. I'm writing the part of my program up to which I know:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.IO;
        using System.Threading;
        using Excel = Microsoft.Office.Interop.Excel;

        namespace A
        {
            class B
            {
                static Excel.Application oExcelApp;

                static void Main()
                {
                    try
                    {
                        oExcelApp = (Excel.Application)System.Runtime.InteropServices.Marshal.GetActiveObject("Excel.Application");
                        if (oExcelApp.ActiveWorkbook != null)
                        {
                            Excel.Workbook xlwkbook = (Excel.Workbook)oExcelApp.ActiveWorkbook;
                            Excel.Worksheet ws = (Excel.Worksheet)xlwkbook.ActiveSheet;
                            Excel.Range rn = ws.Cells.Find("maximum", Type.Missing,
                                Excel.XlFindLookIn.xlValues, Excel.XlLookAt.xlPart,
                                Excel.XlSearchOrder.xlByRows, Excel.XlSearchDirection.xlNext,
                                false, Type.Missing, Type.Missing);
                        }
                    }
                    catch (Exception ex)
                    {
                        // catch added to complete the original try block
                        Console.WriteLine(ex.Message);
                    }
                }
            }
        }

    Ahead of this, I only know that I have to use the Cell.Value2 and Cell.HasFormula members; beyond that I have no idea. Can anyone help me with this?


  • model.matrix() with na.action=NULL?

    - by Vincent
    I have a formula and a data frame, and I want to extract the model.matrix(). However, I need the resulting matrix to include the NAs that were found in the original dataset. If I were to use model.frame() to do this, I would simply pass it na.action=NULL. However, the output I need is of the model.matrix() format. Specifically, I need only the right-hand side variables, I need the output to be a matrix (not a data frame), and I need factors to be converted to a series of dummy variables. I'm sure I could hack something together using loops or something, but I was wondering if anyone could suggest a cleaner and more efficient workaround. Thanks a lot for your time! And here's an example:

        dat <- data.frame(matrix(rnorm(20), 5, 4), gl(5, 2))
        dat[3, 5] <- NA
        names(dat) <- c(letters[1:4], 'fact')
        ff <- a ~ b + fact

        # This omits the row with a missing observation on the factor
        model.matrix(ff, dat)

        # This keeps the NA, but it gives me a data frame and does not dichotomize the factor
        model.frame(ff, dat, na.action=NULL)

    Here is what I would like to obtain:

           (Intercept)          b fact2 fact3 fact4 fact5
        1            1  0.7266086     0     0     0     0
        2            1 -0.6088697     0     0     0     0
        3           NA  0.4643360    NA    NA    NA    NA
        4            1 -1.1666248     1     0     0     0
        5            1 -0.7577394     0     1     0     0
        6            1  0.7266086     0     1     0     0
        7            1 -0.6088697     0     0     1     0
        8            1  0.4643360     0     0     1     0
        9            1 -1.1666248     0     0     0     1
        10           1 -0.7577394     0     0     0     1



  • Custom Tile Layer Problem

    - by Myra
    Hi, I'm currently implementing logic on custom tile layers via OpenLayers:

        function getTiles() {
            var res = this.map.getResolution();
            var x = Math.round((bounds.left - this.maxExtent.left) / (res * this.tileSize.w));
            var y = Math.round((this.maxExtent.top - bounds.top) / (res * this.tileSize.h));
            var z = this.map.getZoom();
            return ......;
        }

    What I need is to carry this code over to the Google Maps API v3. Searching the documentation, I found this code to work with:

        var customtile = new google.maps.ImageMapType({
            getTileUrl: function (coord, zoom) {
                ....
                ....
            }
        });

    Unfortunately, I cannot convert the logic in the OpenLayers code to Google. As far as I know, the resolution is 180 * tileSize.w / Math.pow(2, zoom) // where tileSize is 256x256. Since the Google projection is the same as my tiles, the WGS84 boundary should be -180,-90,180,90. I need to calculate the extent coordinates, but the function getTileUrl has two arguments. One of them is zoom, but the other, coord, is some x,y pair which I don't understand. What is that exactly? How can I generalize a formula for calculating tile numbers in Google Maps? Thank you, Myra
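    For what it's worth, the coord argument in getTileUrl is the tile's (x, y) index in a 2^zoom by 2^zoom grid of 256-pixel tiles, with (0, 0) at the north-west corner of the map. A sketch of the standard Web Mercator tile math, written in Python purely for illustration:

        import math

        def lat_lng_to_tile(lat_deg, lng_deg, zoom):
            # Valid for latitudes inside the Web Mercator range (about +/-85.05 deg).
            n = 2 ** zoom                         # tiles per side at this zoom level
            x = int((lng_deg + 180.0) / 360.0 * n)
            lat = math.radians(lat_deg)
            y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
            return x, y

    The tile's bounding box in degrees follows by inverting the same equations, which is the piece the OpenLayers getTiles() function was computing from the resolution and maxExtent.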


  • Complexity of subset product

    - by threenplusone
    I have a set of numbers produced using the following formula, with integers 0 < x < a:

        f(x) = f(x-1)^2 % a

    For example, starting at 2 with a = 649:

        {2, 4, 16, 256, 636, 169, 5, 25, 625, 576, 137, ...}

    I am after a subset of these numbers that, when multiplied together, equals 1 mod N. I believe this problem by itself to be NP-complete (based on similarities to the subset-sum problem). However, starting with any integer x gives the same solution pattern. E.g., with a = 649:

        {2, 4, 16, 256, 636, 169, 5, 25, 625, 576, 137, ...}  : 16 * 5 * 576 = 1 % 649
        {3, 9, 81, 71, 498, 86, 257, 500, 135, 53, 213, ...}  : 81 * 257 * 53 = 1 % 649
        {4, 16, 256, 636, 169, 5, 25, 625, 576, 137, 597, ...}: 256 * 25 * 137 = 1 % 649

    I am wondering if this additional fact makes this problem solvable faster? Has anyone run into this problem previously, or does anyone have any advice?
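    A brute-force sketch for experimenting with small cases (not a fast algorithm: it is exponential in the sequence length; a meet-in-the-middle split would roughly halve the exponent):

        from itertools import combinations

        def squares_mod(x, a, count):
            # f(0) = x, f(n) = f(n-1)^2 mod a
            seq = [x % a]
            for _ in range(count - 1):
                seq.append(seq[-1] ** 2 % a)
            return seq

        def subset_product_one(seq, n):
            # Try every subset, smallest first; return the first whose product is 1 mod n.
            for size in range(1, len(seq) + 1):
                for combo in combinations(seq, size):
                    prod = 1
                    for v in combo:
                        prod = prod * v % n
                    if prod == 1:
                        return combo
            return None

        # May find a smaller subset than the one quoted in the question.
        print(subset_product_one(squares_mod(2, 649, 11), 649))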


  • The sign of zero with float2

    - by JackOLantern
    Consider the following code performing operations on complex numbers with C/C++'s float:

        float real_part = log(3.f);
        float imag_part = 0.f;
        float real_part2 = (imag_part)*(imag_part) - (real_part*real_part);
        float imag_part2 = (imag_part)*(real_part) + (real_part*imag_part);

    The result will be:

        real_part2 = -1.20695
        imag_part2 = 0
        angle = 3.14159

    where angle is the phase of the complex number and, in this case, is pi. Now consider the following code:

        float real_part = log(3.f);
        float imag_part = 0.f;
        float real_part2 = (-imag_part)*(-imag_part) - (real_part)*(real_part);
        float imag_part2 = (-imag_part)*(real_part) + (real_part)*(-imag_part);

    The result will be:

        real_part2 = -1.20695
        imag_part2 = 0
        angle = -3.14159

    The imaginary part of the result is -0, which makes the phase of the result -pi. Although still consistent with the principal argument of a complex number and with the signed property of floating-point 0, this change is a problem when one is defining functions of complex numbers. For example, if one defines the sqrt of a complex number by the de Moivre formula, this will change the sign of the imaginary part of the result to a wrong value. How does one deal with this effect?
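    The effect is inherent to IEEE 754 signed zeros rather than to C/C++; Python, whose floats are IEEE 754 doubles, reproduces it, and it also illustrates one possible workaround: if your convention is that a genuinely zero imaginary part should always yield the principal argument +pi, normalize -0.0 to +0.0 before taking the phase.

        import math

        real_part2 = -1.20695

        print(math.atan2(0.0, real_part2))   # 3.14159... (pi)
        print(math.atan2(-0.0, real_part2))  # -3.14159... (-pi)

        # Possible fix: -0.0 == 0.0 compares True, so this drops the sign bit
        # only when the imaginary part is exactly zero.
        imag_part2 = -0.0
        if imag_part2 == 0.0:
            imag_part2 = 0.0
        print(math.atan2(imag_part2, real_part2))  # 3.14159... (pi)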


  • How do I delete duplicates between two excel sheets quickly vba

    - by MainTank
    I am using VBA and I have two sheets: one is named "Do Not Call" and has about 800,000 rows of data in column A. I want to use this data to check column I in the second sheet, named "Sheet1". If it finds a match, I want it to delete the whole row in "Sheet1". I have tailored code I found in a similar question here: Excel formula to Cross reference 2 sheets, remove duplicates from one sheet, and ran it, but nothing happens. I am not getting any errors, but it is not functioning. Here is the code I am currently trying:

        Option Explicit

        Sub CleanDupes()
            Dim wsA As Worksheet
            Dim wsB As Worksheet
            Dim keyColA As String
            Dim keyColB As String
            Dim rngA As Range
            Dim rngB As Range
            ' Long rather than the original Integer: Integer overflows above 32,767
            ' and "Do Not Call" has about 800,000 rows
            Dim intRowCounterA As Long
            Dim intRowCounterB As Long
            Dim strValueA As String

            keyColA = "A"
            keyColB = "I"
            intRowCounterA = 1
            intRowCounterB = 1

            Set wsA = Worksheets("Do Not Call")
            Set wsB = Worksheets("Sheet1")

            Dim dict As Object
            Set dict = CreateObject("Scripting.Dictionary")

            Do While Not IsEmpty(wsA.Range(keyColA & intRowCounterA).Value)
                Set rngA = wsA.Range(keyColA & intRowCounterA)
                strValueA = rngA.Value
                If Not dict.Exists(strValueA) Then
                    dict.Add strValueA, 1
                End If
                intRowCounterA = intRowCounterA + 1
            Loop

            intRowCounterB = 1
            Do While Not IsEmpty(wsB.Range(keyColB & intRowCounterB).Value)
                Set rngB = wsB.Range(keyColB & intRowCounterB)
                If dict.Exists(rngB.Value) Then
                    wsB.Rows(intRowCounterB).Delete
                    intRowCounterB = intRowCounterB - 1
                End If
                intRowCounterB = intRowCounterB + 1
            Loop
        End Sub

    I apologize if the above code is not in a code tag. This is my first time posting code online and I have no idea if I did it correctly.


  • Excel: Automating the Selection of an Unknown Number of Cells

    - by user1905080
    I'm trying to automate the formatting of an Excel file with a macro and am seeking a solution. I have two columns titled "Last Name" and "First Name" which I would like to concatenate into a separate column titled "Last Name, First Name". This is simple enough when done by hand: create one cell which does this, then drag that cell to include all cells within the range. The problem appears when trying to automate this. Because I can't know the number of names that need to be concatenated ahead of time, I can't automate the selection of cells by dragging. Can you help me automate this? I've tried a process of copying the initial concatenated cell, highlighting the column, and then pasting. I've also tried to use a formula which returns the concatenation only if there is text in the "Last Name" and "First Name" columns. However, in both cases I end up with some 100,000 rows, putting a serious cramp on my ability to manipulate the worksheet. The best solution I can think of is to create concatenations within a fixed range of cells. Although this would create useless cells, at least there wouldn't be 99,900 of them.
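    In VBA the usual approach is to find the last used row (e.g. with End(xlUp) from the bottom of the sheet) and fill the formula down to exactly that row, rather than a fixed range. As a sketch of the same "measure the data, then fill only that range" idea in Python with openpyxl (a swapped-in tool, shown only to illustrate the logic; the file name and column positions are assumptions):

        from openpyxl import load_workbook

        wb = load_workbook("names.xlsx")  # hypothetical file
        ws = wb.active

        # Assumed layout: column A = Last Name, B = First Name, C = combined result
        for row in range(2, ws.max_row + 1):       # row 1 is the header
            last = ws.cell(row=row, column=1).value
            first = ws.cell(row=row, column=2).value
            if last and first:                     # only fill rows that have both names
                ws.cell(row=row, column=3).value = "%s, %s" % (last, first)

        wb.save("names_out.xlsx")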


  • Maximum length of a std::basic_string<_CharT> string

    - by themoondothshine
    Hey all, I was wondering how one can fix an upper limit for the length of a string (in C++) for a given platform. I scrutinized a lot of libraries, and most of them define it arbitrarily. The GNU C++ STL (the one with experimental C++0x features) has quite a definition:

        size_t npos = size_t(-1);
        /* The maximum value that can be stored in a variable of type size_t */

        size_t _S_max_len = ((npos - sizeof(_Rep_base)) / sizeof(_CharT) - 1) / 4;
        /* Where _CharT is a template parameter; _Rep_base is a structure which
           encapsulates the allocated memory */

    Here's how I understand the formula:
    1. The size_t type must hold the count of units allocated to the string (where each unit is of type _CharT).
    2. Theoretically, the maximum value that a variable of type size_t can take on is the total number of units of 1 byte (i.e., of type char) that may be allocated.
    3. The previous value minus the overhead required to keep track of the allocated memory (_Rep_base) is therefore the maximum number of units in a string.
    4. Divide this value by sizeof(_CharT), as _CharT may require more than a byte.
    5. Subtract 1 from the previous value to account for a terminating character.
    6. Finally, that leaves the division by 4. I have absolutely no idea why!
    I looked at a lot of places for an explanation, but couldn't find a satisfactory one anywhere (that's why I've been trying to make up something for it! Please correct me if I'm wrong!!).
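    To make the arithmetic concrete, here is the formula evaluated with assumed sizes (illustrative numbers only; the real sizeof(_Rep_base) is implementation-specific, and the final division by 4 is simply taken from the libstdc++ source):

        npos = 2 ** 32 - 1       # assumed 32-bit size_t
        sizeof_rep_base = 12     # assumed bookkeeping header size
        sizeof_char_t = 1        # char; wide character types would be 2 or 4

        s_max_len = ((npos - sizeof_rep_base) // sizeof_char_t - 1) // 4
        print(s_max_len)         # ~1.07e9, about a quarter of a 4 GiB address space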


  • Jquery image slider window resize scaling issue

    - by eb_Dev
    Hi, I have created an image gallery slider, the images of which scale with the window size. My problem is I can't seem to find the right formula to ensure that the slider is at the right position when the window is scaled. If I am at the first image and the slider 'left' offset is 0, then the scaling works fine and my offset is correct. However, when I am on image two and my slider is offset by, say, -1500, and then the window is resized, the slider ends up at the wrong offset. Taking this into account, I figured I could just add/subtract the new window size difference from the slider offset and therefore have everything in the right position. I was wrong. It works for the first two images, but if I am at the end of the slider on the last image, I get a nice big gap between the end of the last image and the border of the page. Can someone please show me where I am going wrong? I've spent days on this :( The relevant code is as follows:

        $(window).bind('resize', function(event, delta) {
            /* Resize work images - resizes all to document width */
            resizeWorkImages('div.page.work div#work ul');

            /* Position slider controls */
            positionSliderControls();

            /* Get the difference between the old window width and the new */
            var windowDiff = (gLastWindowWidth - $(window).width()) * -1;

            /* Apply difference to slider left position */
            var newSliderLeftPos = stripPx($('div.page.work div#work ul').css('left')) - windowDiff;

            /* Apply to slider */
            $('div.page.work div#work ul').css('left', newSliderLeftPos + 'px');

            /* Update gSlider settings */
            gSlider.update();

            /* Record new window width */
            gLastWindowWidth = $(window).width();
        });

    Thanks, eb_dev
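    A hedged observation on the math: the slider's left offset exists to line a given image up with the viewport, so it is proportional to the image width; the resize adjustment should therefore be multiplicative, not additive:

        newLeft = oldLeft * (newWidth / oldWidth)

    or, equivalently, recomputed from the index of the image currently in view (currentIndex and newImageWidth are hypothetical names):

        newLeft = -(currentIndex * newImageWidth)

    Subtracting windowDiff corrects by exactly one width change regardless of how many widths the slider is offset, so the error grows with the image index, which is consistent with the gap appearing on the last image.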


  • determining the starting speed for an accelerated animation (in flash/actionscript but it's a math question)

    - by vulkanino
    This question burns my brain. I have an object on a plane, but for the sake of simplicity let's work in just one dimension, so the object has a starting position xs. I know the ending position xe. The object has to move from the starting to the ending position with an accelerated (acceleration = a) movement. I know the velocity the object has to have at the ending position (ve). In my special case the ending speed is zero, but of course I need a general formula. The only unknown is the starting velocity vs. The object starts with vs at xs and ends with ve at xe, moving along a space x with an acceleration a in a time t. Since I'm working with Flash, space is expressed in pixels and time in frames (but you can reason in terms of seconds; it's easy to convert knowing the frames per second). In the animation loop (think onEnterFrame) I compute the new velocity and the new position with (a = 0.4, for example):

        vx *= a;  // same for vy
        x += vx;  // same for y

    I want the entire animation to last, say, 2 seconds, which at 30 fps is 60 frames. Now you know that in 60 frames my object has to move from xs to xe with a constant deceleration so that the ending speed is 0. How do I compute the starting speed vs? Maybe there's a simpler way to do this in Flash, but I am now interested in the math/physics behind this.
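    For the constant-acceleration reading of the problem (per-frame update x += vx; vx += a), the average velocity over the move is (vs + ve) / 2, so:

        xe - xs = (vs + ve) * t / 2
        vs = 2 * (xe - xs) / t - ve
        a = (ve - vs) / t

    With ve = 0 this reduces to vs = 2 * (xe - xs) / t. Note, though, that the loop shown in the question (vx *= a with a = 0.4) is a geometric decay, not a constant acceleration: the distance covered in n frames is vs * a * (1 - a^n) / (1 - a), so for that update vs = (xe - xs) * (1 - a) / (a * (1 - a^n)), and the speed only approaches zero asymptotically.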


  • How to set 2 conditions / criterias for VLOOKUP / LOOKUP / etc in OpenOffice Calc (or Excel)

    - by MestreLion
    I have this spreadsheet that started as a silly aid for a game (Mafia Wars 2), but grew into a tricky spreadsheet question. In the game your character has 9 "slots" for weapons and armor, 1 for each "type": Light Weapon, Heavy Weapon, Body Armor, Head Armor, etc. So I made a list of all weapons and armor available in the game, 1 item per row. Example:

        SHOP         ITEM TYPE       ITEM NAME       ATK  DEF  PRICE   EQUIPPED?
        Marketplace  Weapon Light    Konrad Knife    16   5    5.500
        Marketplace  Weapon Light    Ice Queen       19   6    8.200
        Marketplace  Armor Body Up   Layered Polym   0    31   8.600
        Marketplace  Armor Body Up   Full Shield     7    42   17.650
        Marketplace  Weapon Heavy    Konrad Bullpup  53   25   24.500
        Marketplace  Weapon Heavy    Full Moon Blow  73   12   24.500  x
        Marketplace  Armor Body Low  Knee Pads       17   26   14.200  x
        Marketplace  Armor Body Low  Army Boots      15   55   24.500
        Bone Yard    Weapon Light    Bone Launcher   41   2    9.400   x
        Neon Strip   Vehicle Ground  Supercharged    41   34   24.500
        Dead End     Weapon Heavy    Sharp Sickle    21   5    24.500
        Dead End     Armor Body Low  Unholy Boots    5    36   15.000
        Dead End     Armor Head      Hockey Mask     5    18   15.900  x

    The last column is an indication of the items I have already bought and equipped (marked with "x"). What I need is a formula that, for each "slot" (item type), returns info related to the item of that kind that I am using. That would be:

        ITEM TYPE       SHOP NAME    ITEM NAME       ATK  DEF  PRICE
        Weapon Light    Bone Yard    Bone Launcher   41   2    9.400
        Weapon Heavy    Marketplace  Full Moon Blow  73   12   24.500
        Weapon Special  --           --              --   --   --
        Armor Body Up   --           --              --   --   --
        Armor Body Low  Marketplace  Knee Pads       17   26   14.200
        Armor Head      Dead End     Hockey Mask     5    18   15.900
        Vehicle Ground  --           --              --   --   --
        Vehicle Water   --           --              --   --   --
        Vehicle Air     --           --              --   --   --

    The item types are fixed, so they can be hard-coded. Each row is for an item type. So, for the first result line, it would return data from the row where both the 2nd column is "Weapon Light" and the last column is "x". Basically I need a LOOKUP (or VLOOKUP, or anything else) that uses 2 criteria to find a given row: the item type and the "x" marker. Question is: HOW? I am using OpenOffice Calc 3.2.1, but since it shares so many functions with MS Excel, answers for Excel are also fine (as long as it only uses regular formulas; no VBScript or Macros or VBA etc.). Last but not least, suggestions/solutions for rearranging the data so it makes this problem easier to solve are also welcome. Thanks!
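    In Calc or Excel the usual trick for a two-criteria lookup is an array formula that multiplies the two condition tests, along the lines of INDEX over MATCH(1; (type range = slot) * (equipped range = "x"); 0) entered with Ctrl+Shift+Enter; treat that as a pointer rather than an exact recipe, since the syntax differs between Calc and Excel. Outside the spreadsheet the same logic is just a compound filter; a sketch with Python and pandas (a swapped-in tool; the CSV export is hypothetical and the column names are taken from the layout above):

        import pandas as pd

        items = pd.read_csv("items.csv")  # hypothetical export of the item sheet

        equipped = items[items["EQUIPPED?"] == "x"]

        slots = ["Weapon Light", "Weapon Heavy", "Weapon Special",
                 "Armor Body Up", "Armor Body Low", "Armor Head",
                 "Vehicle Ground", "Vehicle Water", "Vehicle Air"]

        for slot in slots:
            match = equipped[equipped["ITEM TYPE"] == slot]
            if match.empty:
                print("%-15s --" % slot)
            else:
                row = match.iloc[0]
                print("%-15s %-12s %-15s %s %s %s" % (slot, row["SHOP"],
                      row["ITEM NAME"], row["ATK"], row["DEF"], row["PRICE"]))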


  • Snmpd update interface counters slowly or something like this

    - by Korjavin Ivan
    I updated one of my FreeBSD boxes to 9-stable (a totally new installation) and installed net-snmp for monitoring.

        uname -r
        9.1-PRERELEASE

        pkg_info
        net-snmp-5.7.1_7
        Information for net-snmp-5.7.1_7:
        Comment: An extendable SNMP implementation
        ....

        cat /var/db/ports/net-snmp/options
        # This file is auto-generated by 'make config'.
        # Options for net-snmp-5.7.1_7
        _OPTIONS_READ=net-snmp-5.7.1_7
        _FILE_COMPLETE_OPTIONS_LIST= IPV6 MFD_REWRITES PERL PERL_EMBEDDED PYTHON DUMMY TKMIB DMALLOC MYSQL AX_SOCKONLY UNPRIVILEGED
        OPTIONS_FILE_UNSET+=IPV6
        OPTIONS_FILE_UNSET+=MFD_REWRITES
        OPTIONS_FILE_SET+=PERL
        OPTIONS_FILE_SET+=PERL_EMBEDDED
        OPTIONS_FILE_UNSET+=PYTHON
        OPTIONS_FILE_SET+=DUMMY
        OPTIONS_FILE_UNSET+=TKMIB
        OPTIONS_FILE_SET+=DMALLOC
        OPTIONS_FILE_UNSET+=MYSQL
        OPTIONS_FILE_UNSET+=AX_SOCKONLY
        OPTIONS_FILE_UNSET+=UNPRIVILEGED

    I have about 500 VLANs on this machine and collect info about the interfaces through snmpd into two different programs, Zabbix and Cacti, and both of them plot graphs with blank fields. I tried changing the polling time in Zabbix from 15 sec to 30, 60, 90, 120 and 10, and I still get blank fields. snmpd.conf is empty, only access controls. This configuration worked fine on FreeBSD 8. Where is my fault? How do I fix these graphs?

    UPD: Changing the polling time and switching off one of the agents doesn't help. I looked at the Zabbix log (data received from snmpd) and saw this (sorry for the Russian locale, just look at the numbers): the reported value is not true, as my iftop showed a speed of about 90 Mbit/s, but snmpd returned 2 Mbit/s. I understand that snmpd doesn't return a speed, it returns just a counter. But how is that possible? Why 2 Mbit/s? I tried recompiling snmpd with 64-bit counters and without them; in both variants the blank fields are present. So I think my OS (FreeBSD) doesn't update the interface counters well. I am still collecting tcpdump output to find the request/response, but I have a problem with that: too much noise.

    UPD2: I decrypted the tcpdump-ed file and published it as a Google doc at gdocfile. The time differences look strange, like Zabbix sometimes "forgets" to do a request and then does it twice in a row.

    UPD3: I parsed the log from the command "while true; do netstat -bin -I vlan4008 >> /var/log/netstat; sleep 300; done" and uploaded it as a Google doc, adding a formula for the speed: link. It looks like all the counters in the OS are good. Now I think the problem is either: 1. Zabbix requests twice in a row (and what about Cacti?), or 2. snmpd uses Counter32.
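    One hypothesis worth checking before blaming the OS: a 32-bit ifInOctets/ifOutOctets counter wraps at 2^32, and at ~90 Mbit/s that happens roughly every 2^32 * 8 / 90e6 ≈ 382 seconds, uncomfortably close to a 300-second polling interval. A single wrap between polls is recoverable with modular subtraction; more than one is not, and mishandled wraps produce exactly this kind of undersized rate. A sketch of the wrap-aware calculation; polling the 64-bit ifHCInOctets/ifHCOutOctets counters instead avoids the problem entirely:

        def rate_bps(c1, c2, seconds, counter_bits=32):
            # Octet counters are modular: if the counter wrapped once between
            # the two polls, adding the modulus back restores the true delta.
            modulus = 2 ** counter_bits
            delta = (c2 - c1) % modulus
            return delta * 8 / float(seconds)  # bits per second

        # Example: true traffic of 90 Mbit/s that wrapped the counter once in 300 s
        print(rate_bps(2000000000, 1080032704, 300))  # 90000000.0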


  • What is some good lossless video codec for recording gameplay?

    - by Don Salva
    I'm an avid gamer and I like to record my gameplay. Usually I've been using Fraps to do it; however, I'm thinking of switching to Dxtory, as it allows writing to multiple HDDs at once. Say I have 3 HDDs with the following write speeds: HDD1 at 50 MB/s, HDD2 at 22 MB/s and HDD3 at 45 MB/s; the combined write speed would be 117 MB/s, and Dxtory can utilize all 3 HDDs at once while recording your gameplay. It uses this formula:

        RGB24 / YUV24: width x height x 3 x fps     = bitrate (bytes/sec)
        YUV420:        width x height x 3 / 2 x fps = bitrate (bytes/sec)
        YUV410:        width x height x 9 / 8 x fps = bitrate (bytes/sec)

    Recording in the YUV420 colorspace at 1920x1080 and 30 fps, I'd need about 95 MB/s of write speed. Dxtory is good because it allows me to play at a constant 60 fps while recording at 30 fps. Fraps does not (even though they say it does); once you start recording with Fraps, the game's fps drops. So I'm looking for a codec that doesn't need a very high write speed (bitrate) yet records in good (lossless) quality. Dxtory comes with its own codec, the Dxtory codec, which allows me some experimentation. Fraps has its own codec, which I can use in Dxtory to experiment with. I also came across http://lags.leetcode.net/codec.html . Are there more lossless codecs out there (besides Fraps' and Dxtory's) which are good for what I want to do?

    Edit: To clarify: yes, I'm aware a lossless codec always has "good" quality. But that's not what I'm looking for. Let me take the Fraps codec and the Dxtory codec to clarify what I'm looking for. When I record with the Dxtory codec in the RGB colorspace at 1920x1080 with a target of 30 fps, I can play the game at 60 fps BUT I'm recording at 10-15 fps; that's because RGB with Dxtory needs much, much more write speed than my HDDs can handle. When recording with the Dxtory codec in the YUV410 colorspace at 1920x1080 with a target of 30 fps, I can play at 60 fps and record at 30 fps; again, that's because YUV410 in Dxtory's codec takes much, much less write speed than RGB. When recording with the Fraps codec in ??? (I don't know the colorspace Fraps records in; I guess YUV420), I can play at 60 fps and record at 30 fps. What I'm looking for is a lossless codec that can record in YUV420 (or even RGB??) which does not exceed a write speed (or bitrate, if you will) of 100 MB/s at 1920x1080, or in other words, which will allow me to record at a constant 30 fps. Obviously the best solution would be to buy an SSD, but that's not what I'm after.
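    The formula as runnable arithmetic (these are uncompressed input rates; the codec's actual write rate after lossless compression will be somewhat lower):

        def write_speed_mb_per_s(width, height, bytes_per_pixel, fps):
            # bytes_per_pixel: 3 for RGB24/YUV24, 1.5 for YUV420, 1.125 for YUV410
            return width * height * bytes_per_pixel * fps / 1e6

        print(write_speed_mb_per_s(1920, 1080, 3, 30))      # ~186.6 MB/s, RGB24
        print(write_speed_mb_per_s(1920, 1080, 1.5, 30))    # ~93.3 MB/s, YUV420
        print(write_speed_mb_per_s(1920, 1080, 1.125, 30))  # ~70.0 MB/s, YUV410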


  • Data capture from other sheet into Summary sheet

    - by Hemant
    I have an Excel workbook which has a Summary sheet, a Pending sheet and a Master sheet. My requirement is below; I am trying to develop macro or VB logic for Excel. I want to control this workbook from the Summary sheet.

    Generate Fault Summary:
    - I have set the logic, but it doesn't give a warning if the sheet name already exists, so I need to add this logic.
    - When we press the Fault Report Summary command button, it copies the Master sheet, names it from cell A6, and hides the Master sheet. When you select another month name, it generates the sheet for that month name.

    Generate Toll System Uptime:
    - When I select the sheet name and week, then press the Enter command button, it should get the result from that sheet. Each sheet has the month detail in cell B2.
    - The week-wise uptime formulas are:
        Week-01: =(1680-SUMIFS(L5:L23,B5:B23,">="&B2,B5:B23,"<="&(B2+6)))/1680
        Week-02: =(1680-SUMIFS(L5:L23,B5:B23,">="&(B2+7),B5:B23,"<="&(B2+13)))/1680
        Week-03: =(1680-SUMIFS(L5:L23,B5:B23,">="&(B2+14),B5:B23,"<="&(B2+20)))/1680
        Week-04: =(1680-SUMIFS(L5:L23,B5:B23,">="&(B2+21),B5:B23,"<="&(B2+27)))/1680
        Month:   =(1680-SUMIFS(L5:L23,B5:B23,">="&B2,B5:B23,"<="&(DATE(YEAR(B2),1+MONTH(B2),1)-1)))/1680
    - The result should be reflected in the Summary sheet in cell B18.

    Pending Fault Report Summary:
    - Reports are segregated by status: OPEN means a pending fault; Close means it is closed.
    - Any fault with OPEN status in any month sheet (Jan-13, Feb-13, Mar-13, etc.) should also appear in the Pending sheet, in ascending date order.
    - When a fault's status changes to Close, it should move to its month sheet (near the fault's created date) and no longer be present in the Pending sheet, as its status is Closed.
    - Each fault has a reported date, and we monitor all faults according to the reported date.
    - When we press the Update Fault Report Summary command button, it should update as per the above logic.
    - Sometimes we export the pending fault report, so a date calendar should be present to choose the start and end dates. When we press the Export command, it should export the pending fault report so it can be saved as Excel or PDF.


  • Difference between LASTDATE and MAX for semi-additive measures in #DAX

    - by Marco Russo (SQLBI)
    I recently wrote an article on SQLBI about semi-additive measures in DAX. I included the formulas for common calculations, and there is an interesting point that is worth a longer digression: the difference between LASTDATE and MAX (the same applies to FIRSTDATE and MIN; I just describe the former, and for the latter you replace the corresponding names). LASTDATE is a DAX function that receives an argument that has to be a date column and returns the last date active in the current filter context. Apparently, it is the same value returned by MAX, which returns the maximum value of the argument in the current filter context. Of course, MAX can receive any numeric type (including date), whereas LASTDATE only accepts a column of type date. But overall, they seem identical in the result. However, the difference is a semantic one. In fact, this expression:

        LASTDATE ( 'Date'[Date] )

    could also be rewritten as:

        FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) )

    LASTDATE is a function that returns a table with a single column and one row, whereas MAX returns a scalar value. In DAX, any expression with one row and one column can be automatically converted into the corresponding scalar value of the single cell returned. The opposite is not true. So you can use LASTDATE in any expression where a table or a scalar is required, but MAX can be used only where a scalar expression is expected. Since LASTDATE returns a table, you can use it in any expression that expects a table as an argument, such as COUNTROWS. In fact, you can write this expression:

        COUNTROWS ( LASTDATE ( 'Date'[Date] ) )

    which will always return 1 or BLANK (if there are no dates active in the current filter context). You cannot pass MAX as an argument of COUNTROWS. You can pass to LASTDATE a reference to a column or any table expression that returns a column. The following two syntaxes are semantically identical:

        LASTDATE ( 'Date'[Date] )
        LASTDATE ( VALUES ( 'Date'[Date] ) )

    The result is the same, and the use of VALUES is not required because it is implicit in the first syntax, unless you have a row context active. In that case, be careful: using LASTDATE in a row context with a direct column reference will produce a context transition (the row context is transformed into a filter context) that hides the external filter context, whereas using VALUES in the argument preserves the existing filter context without applying the context transition of the row context (see the columns LastDate and Values in the following query and result). You can use any other table expression (including a FILTER) as the LASTDATE argument. For example, the following expression will always return the last date available in the Date table, regardless of the current filter context:

        LASTDATE ( ALL ( 'Date'[Date] ) )

    The following query recaps the results produced by the different syntaxes described:

        EVALUATE
        CALCULATETABLE (
            ADDCOLUMNS (
                VALUES ( 'Date'[Date] ),
                "LastDate", LASTDATE ( 'Date'[Date] ),
                "Values", LASTDATE ( VALUES ( 'Date'[Date] ) ),
                "Filter", LASTDATE ( FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) ) ),
                "All", LASTDATE ( ALL ( 'Date'[Date] ) ),
                "Max", MAX ( 'Date'[Date] )
            ),
            'Date'[Calendar Year] = 2008
        )
        ORDER BY 'Date'[Date]

    The LastDate column repeats the current date, because the context transition happens within the ADDCOLUMNS. The Values column preserves the existing filter context from being replaced by the context transition, so the result corresponds to the last day in year 2008 (which is filtered in the external CALCULATETABLE). The Filter column works like the Values one, even though we use the FILTER instead of the LASTDATE approach. The All column shows the result of LASTDATE ( ALL ( 'Date'[Date] ) ), which ignores the filter on Calendar Year (in fact, the date returned is in year 2010). Finally, the Max column shows the result of the MAX formula, which is the easiest to use; its only limitation is that it does not return a table when you need one (such as in a filter argument of CALCULATE or CALCULATETABLE, where using LASTDATE is shorter). I know that using LASTDATE in complex expressions might create some issues. In my experience, the fact that a context transition happens automatically in the presence of a row context is the main source of confusion and unexpected results in DAX formulas using this function. For a reference of DAX formulas using MAX and LASTDATE, read my article about semi-additive measures in DAX.


  • Guest Post: Christian Finn: Is Facebook About to Become a Victim of its Own Success?

    - by Michael Snow
    Since we have a number of new members of the WebCenter Evangelist team - I thought it would be appropriate to close the week with the newest hire and leader of the global WebCenter Evangelists, Christian Finn, who has just joined the Red team after many years with the small technology company up in Redmond, WA. He gave an intro to himself in an earlier post this morning but his post below is a great example of how customer engagement takes on a life of its own in our global online connected and social digital ecosystem.

    Is Facebook About to Become a Victim of its Own Success?

    What if I told you that your brand could advertise so successfully, you wouldn't have to pay for the ads? A recent campaign by Ford Motor Company for the Ford Focus featuring Doug the spokespuppet (I am not making this up) did just that, and it raises some interesting issues for marketers and social media alike in the brave new world of customer engagement that is the Social Web. Allow me to elaborate. An article in the Wall Street Journal last week, "Big Brands Like Facebook, But They Don't Like to Pay", tells the story of Ford's recently concluded online campaign for the 2012 Ford Focus. (Ford, by the way, under the leadership of people such as Scott Monty, has been a pioneer of effective social campaigns.) The centerpiece of the campaign was the aforementioned Doug, who appeared as a character on Facebook in videos and via chat. (If you are not familiar with Doug, you can see him in action here, and read the WSJ story here.) You may be thinking puppet ads are a sign of Internet Bubble 2.0 and want to stop now, but bear with me. The Journal reported that Ford spent about $95M on its overall Ford Focus campaign, with TV accounting for over $60M of that spend. The Internet buy for the campaign was just over $10M, which included ad buys to drive traffic to Facebook for people to meet and 'Like' Doug and some amount on Facebook ads, too, to promote Doug and by extension, the Ford Focus. So far, a fairly straightforward consumer marketing story in the Internet Era. Yet here's the curious thing: once Doug reached 10,000 fans on Facebook, Ford stopped paying for Facebook ads. Doug had gone viral with people sharing his videos with one another; once critical mass was reached there was no need to buy more ads on Facebook. Doug went on to be Liked by over 43,000 people, and 61% of his fans said they would be more likely to consider buying a Focus. According to the article, Ford says Focus sales are up this year, and increasing sales is every marketer's goal. And so in effect, Ford found its Facebook campaign so successful that it could stop paying for it, instead letting its target consumers communicate its messages for fun, and for free. Not only did they get a 3X increase in fans beyond their paid campaign, they had thousands of customers sharing their messages in video form for months.
    Since free advertising is the Holy Grail of marketing both old and new, and it appears social networks have an advantage in generating that buzz, it seems reasonable to ask: what would happen to brands' advertising strategies, and the media they use to engage customers, if this success were repeated at scale? It seems logical to conclude that, at least initially, more ad dollars would be spent with social networks like Facebook as brands attempt to replicate Ford's success. Certainly Facebook ad revenues are on the rise: eMarketer expects Facebook's ad revenues to quintuple by 2012 compared with 2009 levels, to nearly $2.9B. That's bad news for TV and the already battered print media, and good news for Facebook. But perhaps not so over the longer run. With TV buys, you have to keep paying to generate impressions. If Doug the spokespuppet is any guide, however, that may not be true for social media campaigns. After an initial outlay, if a social campaign takes off, the audience will generate more impressions on its own. Thus a social medium like Facebook could be the victim of its own success when it comes to ad revenue. It may be that there is an inherent limiting factor in the ad spend they can capture, as exemplified by Ford's experience with Doug and the Focus. And brands may spend much less overall on advertising, with as good or better results, than they ever have in the past. How will these trends evolve? Can brands create social campaigns that repeat Ford's formula for the Focus with effective results? Can social networks find ways to capture more spend and overcome their potential tendency to make further spend unnecessary? And will consumers become tired of and insulated from social campaigns, much as they have with traditional advertising channels? These are the questions CMOs and Facebook execs alike will be asking themselves in the brave new world of customer engagement. As always, your thoughts and comments are most welcome.


  • ACORD LOMA Session Highlights Policy Administration Trends

    - by [email protected]
    Helen Pitts, senior product marketing manager for Oracle Insurance, attended and is blogging from the ACORD LOMA Insurance Forum this week. Above: Paul Vancheri, Chief Information Officer, Fidelity Investments Life Insurance Company. Vancheri gave a presentation during the ACORD LOMA Insurance Systems Forum about the key elements of modern policy administration systems and how insurers can mitigate risk during legacy system migrations to safely introduce new technologies.

    When I had a few particularly challenging honors courses in college my father, a long-time technology industry veteran, used to say, "If you don't know how to do something go ask the experts. Find someone who has been there and done that, don't be afraid to ask the tough questions, and apply and build upon what you learn." (Actually he still offers this same advice today.) That's probably why my favorite sessions at industry events, like the ACORD LOMA Insurance Forum this week, are those that include insight on industry trends and case studies from carriers who share their experiences and offer best practices based upon their own lessons learned. I had the opportunity to attend a particularly insightful session Wednesday as Craig Weber, senior vice president of Celent's Insurance practice, and Paul Vancheri, CIO of Fidelity Life Investments, presented "Managing the Dynamic Insurance Landscape: Enabling Growth and Profitability with a Modern Policy Administration System."

    Policy Administration Trends

    Growing the business is the top issue when it comes to IT among both life and annuity and property and casualty carriers, according to Weber. To drive growth and capture market share from competitors, carriers are looking to modernize their core insurance systems, with 65 percent of those CIOs participating in recent Celent research citing plans to replace their policy administration systems. Weber noted that there has been continued focus and investment, particularly in the last three years, by software and technology vendors to offer modern, rules-based, configurable policy administration solutions. He added that these solutions are continuing to evolve with the ongoing aim of helping carriers rapidly meet shifting business needs, whether it is to launch new products to market faster than the competition, adapt existing products to meet shifting consumer and/or regulatory demands, or to exit unprofitable markets. He closed by noting the top four trends for policy administration, either in the process of being adopted today or on the not-so-distant horizon:

    1. Underwriting and service desktops
    2. New business automation
    3. Convergence of ultra-configurable and domain content-rich systems
    4. Better usability and screen design

    Mitigating the Risk When Making the Decision to Modernize

    Third-party analyst research from advisory firms like Celent was a key part of the due diligence process for Fidelity as it sought a replacement for its legacy policy administration system back in 2005, according to Vancheri. The company's business opportunities were outrunning system capability. Its legacy system had not been upgraded in several years and was deficient from a functionality and currency standpoint. This was constraining the carrier's ability to rapidly configure and bring new and complex products to market. The company sought a new, modern policy administration system, one that would enable it to keep pace with rapid and often unexpected industry changes and stay ahead of the competition.
    A cross-functional team that included representatives from finance, actuarial, operations, client services and IT conducted an extensive selection process. This process included deep documentation review, pilot evaluations, demonstrations of required functionality and complex problem-solving, infrastructure integration capability, and the ability to meet the company's desired cost model. The company ultimately selected an adaptive policy administration system that met its requirements to:

    - Deliver ease of use - eliminating paper and rework, while easing the burden on representatives to sell and service annuities
    - Provide customer parity - offering Web-based capabilities in alignment with the company's focus on delivering a consistent customer experience across its business
    - Deliver scalability and efficiency - enabling automation, while simplifying and standardizing systems across its technology stack
    - Offer desired functionality - supporting Fidelity's product configuration / rules management philosophy, focus on customer service and technology upgrade requirements
    - Meet cost requirements - including implementation, professional services and license fees and ongoing maintenance
    - Deliver upon business requirements - enabling the ability to drive time to market for new products and flexibility to make changes

    Best Practices for Addressing Implementation Challenges

    Based upon lessons learned during the company's implementation, Vancheri advised carriers to evaluate staffing capabilities and cultural impacts, review business requirements to avoid rebuilding legacy processes, factor in dependent systems, and review policies and practices to secure customer data. His formula for success: upfront planning + clear requirements = precision execution.

    Achieving a Return on Investment

    Vancheri said the decision to replace their legacy policy administration system and deploy a modern, rules-based system, before the economic downturn occurred, has been integral in helping the company adapt to shifting market conditions, while enabling growth in its direct channel sales of variable annuities. Since deploying its new policy admin system, the company has reduced its average time to market for new products from 12-15 months to 4.5 months. The company has since migrated its other products to the new system and retired its legacy system, significantly decreasing its overall product development cycle. From a processing standpoint, Vancheri noted the company has achieved gains in automation, information, and ease of use, resulting in improved real-time data edits, controls for better quality, and tax handling capability. Plus, by having only one platform to manage, the company has simplified its IT environment and is well positioned to deliver system enhancements for greater efficiencies.

    Commitment to Continuing the Investment

    In the short and longer term future, Vancheri said the company plans to enhance business functionality to support money movement, wire automation, divorce processing on payout contracts and cost-based tracking improvements. It also plans to continue system upgrades to remain current, as well as focus on further reducing cycle time, driving down maintenance costs, and integrating with other products. Helen Pitts is senior product marketing manager for Oracle Insurance focused on life/annuities and enterprise document automation.


  • SQL SERVER – Number-Crunching with SQL Server – Exceed the Functionality of Excel

    - by Pinal Dave
    Imagine this. Your users have developed an Excel spreadsheet that extracts data from your SQL Server database, manipulates that data through the use of Excel formulas and, possibly, some VBA code, which is then used to calculate P&L, hedging requirements or even risk numbers. Management comes to you and tells you that they need to get rid of the spreadsheet and that the results of the spreadsheet calculations need to be persisted on the database. SQL Server has a very small set of functions for analyzing data. Excel has hundreds of functions for analyzing data, with many of them focused on specific financial and statistical calculations. Is it even remotely possible that you can use SQL Server to replace the complex calculations being done in a spreadsheet? Westclintech has developed a library of functions that match or exceed the functionality of Excel's functions and contains many functions that are not available in Excel. Their XLeratorDB library contains over 700 functions that can be incorporated into T-SQL statements. XLeratorDB takes advantage of the SQL CLR architecture introduced in SQL Server 2005. SQL CLR permits managed code to be compiled into the database and run alongside built-in SQL Server functions like COUNT or SUM. The Westclintech developers have taken advantage of this architecture to bring robust analytical functions to the database. In our hypothetical spreadsheet, let's assume that our users are using the YIELD function and that the data are extracted from a table in our database called BONDS. Here's what the spreadsheet might look like. We go to column G and see that it contains the following formula. Obviously, SQL Server does not offer a native YIELD function. However, with XLeratorDB we can replicate this calculation in SQL Server with the following statement:

        SELECT *,
            wct.YIELD(CAST(GETDATE() AS date), Maturity, Rate, Price, 100, Frequency, Basis) AS YIELD
        FROM BONDS

    This produces the following result. This illustrates one of the best features about XLeratorDB: it is so easy to use. Since I knew that the spreadsheet was using the YIELD function, I could use the same function with the same calling structure to do the calculation in SQL Server. I didn't need to know anything at all about the mechanics of calculating the yield on a bond. It was pretty close to cut and paste. In fact, that's one way to construct the SQL. Just copy the function call from the cell in the spreadsheet, paste it into SSMS, and change the cell references to column names. I built the SQL for this query by starting with this:

        SELECT *
            ,YIELD(TODAY(),B2,C2,D2,100,E2,F2)
        FROM BONDS

    I then changed the cell references to column names:

        SELECT *
            --,YIELD(TODAY(),B2,C2,D2,100,E2,F2)
            ,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis)
        FROM BONDS

    Finally, I replicated the TODAY() function using GETDATE() and added the schema name to the function name:

        SELECT *
            --,YIELD(TODAY(),B2,C2,D2,100,E2,F2)
            --,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis)
            ,wct.YIELD(GETDATE(),Maturity,Rate,Price,100,Frequency,Basis)
        FROM BONDS

    Then I am able to execute the statement, returning the results seen above. The XLeratorDB libraries are heavy on financial, statistical, and mathematical functions. Where there is an analog to an Excel function, the XLeratorDB function uses the same naming conventions and calling structure as the Excel function, but there are also hundreds of additional functions for SQL Server that are not found in Excel.
    You can find the functions by opening Object Explorer in SQL Server Management Studio (SSMS) and expanding the Programmability folder under the database where the functions have been installed. The Functions folder expands to show four sub-folders: Table-valued Functions, Scalar-valued Functions, Aggregate Functions, and System Functions. You can expand any of the first three folders to see the XLeratorDB functions. Since the wct.YIELD function is a scalar function, we will open the Scalar-valued Functions folder, scroll down to the wct.YIELD function, and click the plus sign (+) to display the input parameters. The functions are also IntelliSense-enabled, with the input parameters displayed directly in the query tab. The Westclintech website contains documentation for all the functions, including examples that can be copied directly into a query window and executed. There are also more than one hundred articles on the site which go into more detail about how some of the functions work and demonstrate some of the extensive business processes that can be done in SQL Server using XLeratorDB functions and some T-SQL. XLeratorDB is organized into libraries: finance, statistics, math, strings, engineering, and financial options. There is also a windowing library for SQL Server 2005, 2008, and 2012 which provides functions for calculating things like running and moving averages (which were introduced in SQL Server 2012), FIFO inventory calculations, financial ratios and more, without having to use triangular joins. To get started, you can download the XLeratorDB 15-day free trial from the Westclintech web site. It is a fully-functioning, unrestricted version of the software. If you need more than 15 days to evaluate the software, you can simply download another 15-day free trial. XLeratorDB is an easy and cost-effective way to start adding sophisticated data analysis to your SQL Server database without having to know anything more than T-SQL. Get XLeratorDB Today and Now! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Excel


  • SQLAuthority News – A Conversation with an Old Friend – Sri Sridharan

    - by pinaldave
    Sri Sridharan is my old friend and we often talk on GTalk. The subjects vary from life in India/USA, movies, music, and of course SQL. We have our differences when we talk about food or movies, but we always agree when we talk about SQL. Yesterday while chatting with him we talked about SQLPASS and the conversation lasted for a long time. Here is the conversation between us on GTalk. I have removed a few of the personal talks and formatted it into paragraphs, as GTalk often shows stuff out of formatting.

    Pinal: Sri, congrats on running for the PASS BoD again. You were so close last year. What made you decide to run again this year?

    Sri: Thank you, Pinal, for your leadership in the PASS India Community and all the things you do out there. After coming so close last year, there was no doubt in my mind that I would run again. I was truly humbled by the support I got from the community. Growing up in India for over 25 years, you are brought up in a very competitive part of the world. Right from the pressure of staying at the top of the class from kindergarten to your graduation, the relentless push from your parents about studying and getting good grades (and nothing else matters), you land up essentially living in a pressure cooker. To survive that relentless pressure, you need to have a thick skin, the ability to stand up for who you really are and what you want to accomplish, and in the process stay true to those values. I am striving for a greater cause: to make PASS an organization that can help people with their SQL Server careers, to make PASS relevant to its chapter members, to make PASS an organization that every SQL professional in the world wants to be connected with. Just because I did not get elected or appointed last year does not mean that these causes are not worth fighting for. Giving up upon failing the first time is simply not in me. If I did that, what message would I send to those who voted for me? What message would I send to my kids?

    Pinal: As someone who has such strong roots in India, what can the Indian PASS Community expect from you?

    Sri: First of all, I think fostering regional leadership is something PASS must encourage as part of its global growth plan. For PASS global, being able to understand all the issues in a region of the world and make sound decisions will be a tough thing to do on a continuous basis. I expect people like you, chapter leaders, regional mentors, and MVPs of the region to start playing a bigger role in shaping the next generation of PASS. That is something I said in my campaign and I still stand by it. I would like to see growth in the number of chapters in India. The current count does not truly represent the full potential of that region. I was pretty thrilled to see the Bangalore SQLSaturday happen early this year. I would like to see more SQLSaturday events, at least in the major metro cities. I know the issues in India are very different from the rest of the world, so the formula needs to be tweaked a little for it to work better in India. Once the SQLSaturday model is vetted out, maybe there could be enough justification to have SQLRally India. PASS needs to have a premier SQL event in that region. Going to the USA or Europe, for that matter, is incredibly hard due to VISA issues etc. So this could be a case where PASS comes closer to where the community is.

    Pinal: What portfolio would you take on if you are elected to the PASS Board?

    Sri: There are some very strong folks on the PASS Board today.
The President discusses the portfolios with the group and makes the final call on the portfolios. I am also a fan of having a team associated with the portfolios. In that case, one person is the primary for a portfolio but secondary on a couple of other portfolios. This way people on the board have a direct vested interest in a few portfolios. Personally, I know I would these portfolios good justice – Chapters, Global Growth and Events (SQLSat, SQLRally). I would try to see if we can get a director to focus on Volunteers.  To me that is very critical for growth in the international regions. Pinal: This is an interesting conversation with you Sri. I know you so long time but this is indeed inspiring to many. India is a big country and we appreciate your thoughts. Sri: Thank you very much for taking time to chat with me today. Cheers. There are pretty strong candidates for SQLPASS Board of Elections this year. I know all of them in person and honestly it is going to be extremely difficult to not to vote for anybody. I am indeed in a crunch right now how to pick one over another. Though the choice is difficult, I encourage you to vote for them. I am extremely confident that the new board of directors will all have the same goal – Better SQL Server Community. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, DBA, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Extending Blend for Visual Studio 2013

    - by Chris Skardon
    Originally posted on: http://geekswithblogs.net/cskardon/archive/2013/11/01/extending-blend-for-visual-studio-2013.aspx

    So, I got a comment yesterday on my post about Extending Blend 4 and Blend for Visual Studio 2012, asking if I knew how to get it working for Blend for Visual Studio 2013. My initial thought was: just change the location you get the Blend DLLs from, from Visual Studio 11.0 to 12.0, and you're all set. So I went to do that, only to discover that the DLLs I normally reference don't exist. I presumed that the actual process of using MEF etc. was still the same. I was wrong.

    So, the route to discovery required DotPeek and opening a few of Blend's DLLs. Browsing through the Blend install directory (./Microsoft Visual Studio 12.0/Blend/), I noticed the .addin files. I decided to peek into the SketchFlow DLL, then promptly remembered SketchFlow is quite a big thing and hunting through it is not ideal. Luckily there is another DLL using an .addin file, 'Microsoft.Expression.Importers.Host', so we'll go for that instead. We can see it's still using the 'IPackage' formula, but where is that sucker? Well, we just press F12 on the 'IPackage' bit and DotPeek takes us there, with a very handy comment at the top:

        // Type: Microsoft.Expression.Framework.IPackage
        // Assembly: Microsoft.Expression.Framework, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
        // MVID: E092EA54-4941-463C-BD74-283FD36478E2
        // Assembly location: C:\Program Files (x86)\Microsoft Visual Studio 12.0\Blend\Microsoft.Expression.Framework.dll

    Now we know where the IPackage interface is defined, so let's just try writing a control. Last time I put the control in a separate DLL; this time I'm not, but it still works if you want to do it that way. Let's build a control!

    STEP 1
    Create a new WPF application. Naming doesn't matter any more! I have gone with 'Hello2013' (see what I did there?).

    STEP 2
    Delete App.config, App.xaml, and MainWindow.xaml. We won't be needing them.

    STEP 3
    Change your application to be a Class Library instead. (You might also want to delete the 'vshost' files in your output directory now, as they only exist for hosting the WPF app and just cause clutter.)

    STEP 4
    Add a reference to 'Microsoft.Expression.Framework.dll', which you can find in 'C:\Program Files\Microsoft Visual Studio 12.0\Blend' (that's 'Program Files (x86)' if you're on an x64 machine!).
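    Since STEP 6 below implements IPackage, it is worth seeing its shape up front. The following is a sketch inferred from the assembly info above and the Load/Unload members used later in this post; it is not the verbatim decompiled source, and the IServices shape in particular is an assumption based purely on the GetService<IWindowService>() call in STEP 6:

        // Sketch of the decompiled shapes, inferred from usage in this post.
        namespace Microsoft.Expression.Framework
        {
            // Hands out Blend's services; shape assumed from the
            // services.GetService<IWindowService>() call in STEP 6.
            public interface IServices
            {
                T GetService<T>();
            }

            // The contract Blend looks for when it loads an add-in.
            public interface IPackage
            {
                // Called when Blend loads the add-in.
                void Load(IServices services);

                // Called when the add-in is unloaded.
                void Unload();
            }
        }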
    STEP 5
    Add a User Control. I'm going with 'Hello2013Control', and following on from last time, it's just a TextBlock in a Grid:

        <UserControl x:Class="Hello2013.Hello2013Control"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
                     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
                     mc:Ignorable="d"
                     d:DesignHeight="300" d:DesignWidth="300">
            <Grid>
                <TextBlock>Hello Blend for VS 2013</TextBlock>
            </Grid>
        </UserControl>

    STEP 6
    Add a class to load the package. I've called it (yes, you guessed it) Hello2013Package, which looks like this:

        namespace Hello2013
        {
            using Microsoft.Expression.Framework;
            using Microsoft.Expression.Framework.UserInterface;

            public class Hello2013Package : IPackage
            {
                private Hello2013Control _hello2013Control;
                private IWindowService _windowService;

                public void Load(IServices services)
                {
                    _windowService = services.GetService<IWindowService>();
                    Initialize();
                }

                private void Initialize()
                {
                    _hello2013Control = new Hello2013Control();
                    if (_windowService.PaletteRegistry["HelloPanel"] == null)
                        _windowService.RegisterPalette("HelloPanel", _hello2013Control, "Hello Window");
                }

                public void Unload() { }
            }
        }

    You might note that, compared to the 2012 version, we're no longer [Exporting(typeof(IPackage))]. The file we create in STEP 7 covers this for us.

    STEP 7
    Add a new file called '<PROJECT_OUTPUT_NAME>.addin'. In reality you can call it anything and it will still be read in just fine; it's just nicer if it all matches up, so I have 'Hello2013.addin'. Content-wise, we need:

        <?xml version="1.0" encoding="utf-8"?>
        <AddIn AssemblyFile="Hello2013.dll" />

    obviously replacing 'Hello2013.dll' with whatever your DLL is called.

    STEP 8
    Set the '.addin' file to be copied to the output directory (via the file's 'Copy to Output Directory' property).

    STEP 9
    Build!

    STEP 10
    Go to your output directory (./bin/debug), copy the three files (Hello2013.dll, Hello2013.pdb, Hello2013.addin), and paste them into the 'Addins' folder in your Blend directory (C:\Program Files\Microsoft Visual Studio 12.0\Blend\Addins).

    STEP 11
    Start Blend for Visual Studio 2013.

    STEP 12
    Go to the 'Window' menu and select 'Hello Window'.

    STEP 13
    Marvel at your new control!

    Feel free to email me / comment with any problems!
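    A quick postscript: STEP 5 only shows the XAML half of the user control. The matching code-behind is just the standard template output Visual Studio generates when you add a User Control; a minimal sketch of it, assuming the default template, looks like this. Nothing in the control itself knows about Blend; all of the add-in plumbing lives in Hello2013Package and the .addin file.

        namespace Hello2013
        {
            using System.Windows.Controls;

            public partial class Hello2013Control : UserControl
            {
                public Hello2013Control()
                {
                    // Wires up the XAML from STEP 5; nothing add-in specific here.
                    InitializeComponent();
                }
            }
        }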

    Read the article
