Search Results

Search found 1330 results on 54 pages for 'jonathan jones'.


  • Converting table columns to key value pairs

    - by TomD1
    I am writing a PL/SQL procedure that loads some data from Schema A into Schema B. They are both very different schemas and I can't change the structure of Schema B. Columns in various tables in Schema A (joined together in a view) need to be inserted into Schema B as key=value pairs in 2 columns of a table, each on a separate row. For example, an employee's first name might be present as employee.firstname in Schema A, but would need to be entered in Schema B as: id=>1, key=>'A123', value=>'Smith'. There are almost 100 keys, with the potential for more to be added in the future, which means I don't really want to hardcode any of these keys. Sample code:

        create table schema_a_employees ( emp_id number(8,0), firstname varchar2(50), surname varchar2(50) );
        insert into schema_a_employees values ( 1, 'James', 'Smith' );
        insert into schema_a_employees values ( 2, 'Fred', 'Jones' );
        create table schema_b_values ( emp_id number(8,0), the_key varchar2(5), the_value varchar2(200) );

    I thought an elegant solution would most likely involve a lookup table to determine what value to insert for each key, and wouldn't involve effectively hardcoding dozens of similar statements like:

        insert into schema_b_values ( 1, 'A123', v_firstname );
        insert into schema_b_values ( 1, 'B123', v_surname );

    What I'd like to be able to do is have a local lookup table in Schema A that lists all the keys from Schema B, along with a column that gives the name of the column in Schema A that should be used to populate each key, e.g. key "A123" in Schema B should be populated with the value of the column "firstname" in Schema A:

        create table schema_a_lookup ( the_key varchar2(5), the_local_field_name varchar2(50) );
        insert into schema_a_lookup values ( 'A123', 'firstname' );
        insert into schema_a_lookup values ( 'B123', 'surname' );

    But I'm not sure how I could dynamically use values from the lookup table to tell Oracle which columns to use. So my question is, is there an elegant solution to populate the schema_b_values table with the data from schema_a_employees without hardcoding for every possible key (i.e. A123, B123, etc.)? Cheers.
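    One possible approach (a sketch, not from the original thread): drive a single dynamic INSERT ... SELECT per key from the lookup table with EXECUTE IMMEDIATE. Only the column name is concatenated, and it comes from your own lookup data rather than user input; the key itself can stay a bind variable:

        declare
          v_sql varchar2(4000);
        begin
          for k in ( select the_key, the_local_field_name
                       from schema_a_lookup )
          loop
            -- the column name must be concatenated; the key can be a bind
            v_sql := 'insert into schema_b_values ( emp_id, the_key, the_value ) '
                  || 'select emp_id, :the_key, ' || k.the_local_field_name
                  || '  from schema_a_employees';
            execute immediate v_sql using k.the_key;
          end loop;
          commit;
        end;
        /

    One pass per key keeps the PL/SQL trivial, and adding a key later is just one more row in schema_a_lookup.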

    Read the article

  • Attempted to read or write protected memory

    - by Interfector
    I have a sample ASP.NET MVC 3 web application that follows Jonathan McCracken's Test-Drive ASP.NET MVC (great book, by the way) and I have stumbled upon a problem. Note that I'm using MVCContrib, Rhino and NUnit.

        [Test]
        public void ShouldSetLoggedInUserToViewBag()
        {
            var todoController = new TodoController();
            var builder = new TestControllerBuilder();
            builder.InitializeController(todoController);
            builder.HttpContext.User = new GenericPrincipal(new GenericIdentity("John Doe"), null);
            Assert.That(todoController.Index().AssertViewRendered().ViewData["UserName"], Is.EqualTo("John Doe"));
        }

    The code above always throws this error: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. The controller action code is the following:

        [HttpGet]
        public ActionResult Index()
        {
            ViewData.Model = Todo.ThingsToBeDone;
            ViewBag.UserName = HttpContext.User.Identity.Name;
            return View();
        }

    From what I have figured out, the app seems to crash because of the two assignments in the controller action. However, I cannot see how they are wrong!? Can anyone help me pinpoint the solution to this problem? Thank you.
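    In similar reports the usual suspect is a version mismatch: a TestControllerBuilder/MVCContrib build compiled against an older MVC runtime intercepting an MVC 3 controller. A hedged workaround sketch that sidesteps the builder entirely and stubs HttpContextBase with Rhino Mocks directly (the names match the question; the stubbing approach itself is an assumption, not the book's method):

        // needs System.Web, System.Web.Mvc, System.Web.Routing, Rhino.Mocks
        var httpContext = MockRepository.GenerateStub<HttpContextBase>();
        httpContext.User = new GenericPrincipal(new GenericIdentity("John Doe"), null);

        var todoController = new TodoController();
        // Hand the controller a context backed by the stub, so
        // HttpContext.User resolves without the builder's interception.
        todoController.ControllerContext =
            new ControllerContext(httpContext, new RouteData(), todoController);

    If the builder is kept, checking that the MVCContrib.TestHelper assembly referenced is the one built for MVC 3 is worth a try first.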

    Read the article

  • Select XML nodes as rows

    - by Bjørn
    I am selecting from a table that has an XML column using T-SQL. I would like to select a certain type of node and have a row created for each one. For instance, suppose I am selecting from a people table. This table has an XML column for addresses. The XML is formatted similar to the following:

        <address>
          <street>Street 1</street>
          <city>City 1</city>
          <state>State 1</state>
          <zipcode>Zip Code 1</zipcode>
        </address>
        <address>
          <street>Street 2</street>
          <city>City 2</city>
          <state>State 2</state>
          <zipcode>Zip Code 2</zipcode>
        </address>

    How can I get results like this:

        Name        City       State
        Joe Baker   Seattle    WA
        Joe Baker   Tacoma     WA
        Fred Jones  Vancouver  BC
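    The usual tool for this is the xml type's nodes() method with CROSS APPLY, which produces one row per matched node. A sketch (the table and column names here are assumptions based on the question):

        SELECT p.Name,
               a.n.value('(city/text())[1]',  'varchar(50)') AS City,
               a.n.value('(state/text())[1]', 'varchar(50)') AS [State]
        FROM   people AS p
        CROSS APPLY p.Addresses.nodes('/address') AS a(n);

    Each <address> element becomes its own row, repeated against the person's relational columns.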

    Read the article

  • Floating point precision in Visual C++

    - by Luigi Giaccari
    Hi, I am trying to use the robust predicates for computational geometry from Jonathan Richard Shewchuk. I am not a programmer, so I am not even sure of what I am saying; I may be making some basic mistake. The point is that the predicates should allow for exact arithmetic with adaptive floating-point precision. On my computer, an Asus Pro31/S (Core Duo Centrino processor), they do not work. The problem may lie in the fact that my computer uses some improvement in floating-point precision that conflicts with the one used by Shewchuk. The author says:

        /* On some machines, the exact arithmetic routines might be defeated by the
           use of internal extended precision floating-point registers. Sometimes
           this problem can be fixed by defining certain values to be volatile,
           thus forcing them to be stored to memory and rounded off. This isn't
           a great solution, though, as it slows the arithmetic down. */

    Now what I would like to know is whether there is a way, maybe some compiler option, to turn off the internal extended precision floating-point registers. I really appreciate your help.
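    On 32-bit Visual C++ the extended precision comes from the x87 FPU's 80-bit registers, and two remedies are commonly suggested (a hedged sketch; note the precision-control mask applies only to x86 builds, since x64 uses SSE and is unaffected): set the FPU's precision control to 53 bits at program start, or compile with /arch:SSE2 so arithmetic happens in true 64-bit doubles.

        #include <float.h>

        int main(void)
        {
            unsigned int ctrl = 0;
            /* Round every x87 operation to 53-bit double precision, so no
               extended-precision intermediates survive in registers. */
            _controlfp_s(&ctrl, _PC_53, _MCW_PC);

            /* exactinit();  ...call Shewchuk's predicates from here on... */
            return 0;
        }

    This mirrors the platform-specific FPU setup Shewchuk's documentation describes for other compilers.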

    Read the article

  • PDO bindparam not working.

    - by jim
    I am trying to save data into a database using PDO. All columns save correctly with the exception of one. No matter what I try, I cannot get the data to go in.

        function myfunc($db, $data) {
            echo $data; // <----- Outputs my data. example: 'jim jones'
            $stmt = $db->prepare("CALL test(:id, :data, :ip, :expires)");
            $stmt->bindParam(':id', $id, PDO::PARAM_STR);
            $stmt->bindParam(':data', $data, PDO::PARAM_STR);
            $stmt->bindParam(':ip', $ip, PDO::PARAM_STR);
            $stmt->bindParam(':expires', $expires, PDO::PARAM_STR);
            ...
        }

    So even after verifying that the data variable in fact holds my data, the bindParam method will not bind. When I echo the data variable, I can see the data is there. It will not save, though. If I copy the echoed output of the data variable to screen and paste it into a new variable, it WILL save. I've been at this now for a couple of hours. Can someone please have a look? EDIT: I want to also mention that I have tried using bindValue() in place of bindParam() and the data for the data variable will still not save.
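    A diagnostic sketch (hypothetical, not from the original post): the copy/paste behaviour suggests invisible bytes riding along in $data, which an echo hides. Dumping the raw bytes before binding makes them visible:

        // Show the exact length and hex of what is about to be bound.
        var_dump(strlen($data), bin2hex($data));

        // If stray control bytes or a BOM appear, normalise before binding:
        $clean = trim($data);
        $stmt->bindParam(':data', $clean, PDO::PARAM_STR);

    Comparing bin2hex() of the failing value with bin2hex() of the pasted value that works should show exactly which bytes differ.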

    Read the article

  • Problem Displaying XML in Grid View - newbie

    - by Dean
    I am trying to do something in Visual Web Developer 2008 Express that I thought would be simple, but it is not working. I want to display data from an XML file, so I added the XmlDataSource to my page, pointed it to the XML file, and then added the GridView and connected it to the data source. I am getting the following error on GridView 'GridView1': "There was an error rendering the control. The data source for GridView with id 'GridView1' did not have any properties or attributes from which to generate columns. Ensure that your data source has content." Could someone please tell me what I might be doing wrong? TIA, Dean. A snippet from my XML is as follows: 6019 - Renaissance MS - New School Renaissance MS 7155 Hall Road Fairburn, GA 30213 NS-6019200-LA-01 New School Close-out NS-6019200 0.000000000000000e+000 The construction of the new Renaissance MS will be at the intersection of Jones/Hall Road, in the districts 7th & 9F and Land Lots 117, 143 & 146 of Fulton County, GA. The work includes the construction of the 180,500 square foot building that will house 34 standard classrooms, 12 standard science labs, 20 special purpose classrooms, cafeteria and kitchen, gymnasium, media center and administrative offices. The site will also have multi-purpose playfields with track, softball field, tennis courts and basketball/volleyball court. Terry O'Brien Parsons Stevens Wilkinson Stang Newdow Barton Malow -84.62242 33.61497
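    One thing worth knowing (a hedged sketch; the root and element names below are assumptions, not from the post): XmlDataSource auto-generates GridView columns only from XML attributes, not from child-element text. If every field is an element, the grid sees nothing to bind. Either set an XPath to the repeating element and restructure the data attribute-centric, or apply an XSL transform:

        <asp:XmlDataSource ID="XmlDataSource1" runat="server"
            DataFile="~/App_Data/schools.xml"
            XPath="/schools/school" />
        <asp:GridView ID="GridView1" runat="server"
            DataSourceID="XmlDataSource1"
            AutoGenerateColumns="true" />

    with records shaped like <school name="Renaissance MS" city="Fairburn" />, so each attribute becomes a column.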

    Read the article

  • Can I use POSIX signals in my Perl program to create event-driven programming?

    - by Shiftbit
    Are there any POSIX signals that I could utilize in my Perl program to create event-driven programming? Currently, I have a multi-process program that is able to cross-communicate, but my parent process is only able to listen to one child at a time.

        foreach (@proc) {
            sysread(${$_}{'read'}, my $line, 100); #problem here
            chomp($line);
            print "Parent hears: $line\n";
        }

    The problem is that the parent sits in a continual wait state until it receives a signal from the first child before it can continue on. I am relying on 'pipe' for my intercommunication. My current solution is very similar to: http://stackoverflow.com/questions/2558098/how-can-i-use-pipe-to-facilitate-interprocess-communication-in-perl If possible I would like to rely on a $SIG{...} event or any non-CPAN solution. Update: As Jonathan Leffler mentioned, kill can be used to send a signal:

        kill USR1 => $$;   # send myself a SIGUSR1

    My solution will be to have the child send a USR1 signal to the parent. This event tells the parent to listen to the particular child. Child:

        kill USR1 => $parentPID if ($customEvent);
        syswrite($parentPipe, $msg, $buffer);
        #select $parentPipe; print $parentPipe $msg;

    Parent:

        $SIG{USR1} = sub {
            # get child pid?
            sysread($array[$pid]{'childPipe'}, $msg, $buffer);
        };

    But how do I get the source/child pid that signaled the parent? Have the child identify itself in its message? What happens if two children signal USR1 at the same time?
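    A signal-free alternative that stays in core Perl (a sketch; it assumes @proc holds hashrefs with the 'read' pipe handles, as in the snippet above) is to multiplex the pipes with IO::Select, so the parent wakes whenever any child writes:

        use IO::Select;

        my $sel = IO::Select->new( map { $_->{'read'} } @proc );

        while ( $sel->count ) {
            for my $fh ( $sel->can_read ) {
                if ( sysread( $fh, my $line, 100 ) ) {
                    chomp $line;
                    print "Parent hears: $line\n";
                }
                else {
                    $sel->remove($fh);   # child closed its end of the pipe
                }
            }
        }

    This also sidesteps the two-children-signal-at-once race, since pending data simply sits in each pipe until the loop reads it.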

    Read the article

  • Legitimate uses of the Function constructor

    - by Marcel Korpel
    As repeatedly said, it is considered bad practice to use the Function constructor (also see the ECMAScript Language Specification, 5th edition, § 15.3.2.1): new Function ([arg1[, arg2[, … argN]],] functionBody) (where all arguments are strings containing argument names and the last (or only) string contains the function body). To recapitulate, it is said to be slow, as explained by the Opera team: Each time […] the Function constructor is called on a string representing source code, the script engine must start the machinery that converts the source code to executable code. This is usually expensive for performance – easily a hundred times more expensive than a simple function call, for example. (Mark ‘Tarquin’ Wilton-Jones) Though it's not that bad, according to this post on MDC (I didn't test this myself using the current version of Firefox, though). Crockford adds that [t]he quoting conventions of the language make it very difficult to correctly express a function body as a string. In the string form, early error checking cannot be done. […] And it is wasteful of memory because each function requires its own independent implementation. Another difference is that a function defined by a Function constructor does not inherit any scope other than the global scope (which all functions inherit). (MDC) Apart from this, you have to be attentive to avoid injection of malicious code, when you create a new Function using dynamic contents. Lots of disadvantages and it is intelligible that ECMAScript 5 discourages the use of the Function constructor by throwing an exception when using it in strict mode (§ 13.1). That said, T.J. Crowder says in an answer that [t]here's almost never any need for the similar […] new Function(...), either, again except for some advanced edge cases. So, now I am wondering: what are these “advanced edge cases”? Are there legitimate uses of the Function constructor?
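    One concrete example of such an edge case (a sketch of my own, not from the cited authors): metaprogramming where a hot-path function is compiled once from a validated string, trading one expensive parse for many fast calls, which is essentially what template engines do:

        function makeAccessor(field) {
            // validate to avoid the injection hazard mentioned above
            if (!/^[A-Za-z_$][\w$]*$/.test(field)) {
                throw new Error("unsafe field name: " + field);
            }
            return new Function("obj", "return obj." + field + ";");
        }

        var getName = makeAccessor("name");
        getName({ name: "Marcel" }); // "Marcel"

    The other frequently mentioned case is deliberately escaping closure scope: a Function-constructed function sees only the global scope, which sandbox-style loaders sometimes want.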

    Read the article

  • PHP programmer wanting to learn Flash game development

    - by grokker
    Hi! I'm currently a PHP programmer and one of my childhood dreams is to create a game. A game to show my friends or my children when the time comes :) The problem is I don't know Flash. I'm not great at drawing stuff or even artistic. I can program a little with JavaScript and I consider myself intermediate with jQuery. So my question is, how do I get started? What books do I read first? What are the steps I should take on this journey? Thank you SO friends. I hope you all can help me create something I can be proud of :) P.S. I'm just assuming that Flash is the easiest way to create my game and show it to other people. Anyway I'm very open to other suggestions! My game in mind is a side scroller about an Indiana Jones type of character, and the setting is in the jungle with trees and snakes and a lot of animals!

    Read the article

  • C: 8x8 -> 16 bit multiply precision guaranteed by integer promotions?

    - by craig-blome
    I'm trying to figure out if the C Standard (C90, though I'm working off Derek Jones' annotated C99 book) guarantees that I will not lose precision multiplying two unsigned 8-bit values and storing the result in 16 bits. An example statement is as follows:

        unsigned char foo;
        unsigned int foo_u16 = foo * 10;

    Our Keil 8051 compiler (v7.50 at present) will generate a MUL AB instruction which stores the MSB in the B register and the LSB in the accumulator. If I cast foo to an unsigned int first:

        unsigned int foo_u16 = (unsigned int)foo * 10;

    then the compiler correctly decides I want an unsigned int there and generates an expensive call to a 16x16-bit integer multiply routine. I would like to argue beyond reasonable doubt that this defensive measure is not necessary. As I read the integer promotions described in 6.3.1.1, the effect of the first line shall be as if foo and 10 were promoted to unsigned int, the multiplication performed, and the result stored as unsigned int in foo_u16. If the compiler knows an instruction that does 8x8->16-bit multiplication without loss of precision, so much the better; but the precision is guaranteed. Am I reading this correctly? Best regards, Craig Blome
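    A sketch of the guarantee in action (a hosted-C demo of 6.3.1.1, not Keil-specific): both unsigned char operands are promoted before the multiply, so the product is computed at full int width and no 8-bit wraparound can occur:

        #include <stdio.h>

        int main(void)
        {
            unsigned char foo = 200;
            unsigned int foo_u16 = foo * 10;  /* computed at int width */

            printf("%u\n", foo_u16);          /* 2000, not 2000 % 256 = 208 */
            return 0;
        }

    So the cast-free form is safe as long as int is at least 16 bits (which the standard requires); the cast only starts to matter once the operands' promoted type could not hold the product, e.g. a 16x16->32 multiply.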

    Read the article

  • JavaFx 2.1, 2.2 TableView update issue

    - by Lewis Liu
    My application uses JPA to read data into a TableView, then modifies and displays it. The table refreshed modified records under JavaFX 2.0.3. Under JavaFX 2.1 and 2.2, the table won't refresh the update anymore. I found other people have a similar issue. My plan was to continue using 2.0.3 until someone fixed the issue under 2.1 and 2.2. Now I know it is not considered a bug and won't be fixed. Well, I don't know how to deal with this. The following code is modified from the sample demo to show the issue. If I add a new record or delete an old record from the table, the table refreshes fine. If I modify a record, the table won't refresh the change until an add, delete or sort action is taken. If I remove the modified record and add it again, the table refreshes, but the modified record is put at the bottom of the table. And if I remove the modified record, add the same record, then move the record back to the original spot, the table won't refresh anymore. Below is the complete code; please shed some light on this.

import javafx.application.Application; import javafx.beans.property.SimpleStringProperty; import javafx.collections.FXCollections; import javafx.collections.ObservableList; import javafx.event.ActionEvent; import javafx.event.EventHandler; import javafx.geometry.HPos; import javafx.geometry.Insets; import javafx.geometry.Pos; import javafx.scene.Group; import javafx.scene.Scene; import javafx.scene.control.*; import javafx.scene.control.cell.PropertyValueFactory; import javafx.scene.layout.GridPane; import javafx.scene.layout.HBox; import javafx.scene.layout.VBox; import javafx.scene.text.Font; import javafx.stage.Modality; import javafx.stage.Stage; import javafx.stage.StageStyle; public class Main extends Application { private TextField firtNameField = new TextField(); private TextField lastNameField = new TextField(); private TextField emailField = new TextField(); private Stage editView; private Person fPerson; public static class Person { private final SimpleStringProperty firstName; private final SimpleStringProperty lastName; private final SimpleStringProperty email; private Person(String fName, String lName, String email) { this.firstName = new SimpleStringProperty(fName); this.lastName = new SimpleStringProperty(lName); this.email = new SimpleStringProperty(email); } public String getFirstName() { return firstName.get(); } public void setFirstName(String fName) { firstName.set(fName); } public String getLastName() { return lastName.get(); } public void setLastName(String fName) { lastName.set(fName); } public String getEmail() { return email.get(); } public void setEmail(String fName) { email.set(fName); } } private TableView<Person> table = new TableView<Person>(); private final ObservableList<Person> data = FXCollections.observableArrayList( new Person("Jacob", "Smith", "[email protected]"), new Person("Isabella", "Johnson", "[email protected]"), new Person("Ethan", "Williams", "[email protected]"), new Person("Emma", "Jones", "[email protected]"), new Person("Michael", "Brown", "[email protected]")); public static void main(String[] args) { launch(args); } @Override public void start(Stage stage) { Scene scene = new Scene(new Group()); stage.setTitle("Table View Sample"); stage.setWidth(535); stage.setHeight(535); editView = new Stage(); final Label label = new Label("Address Book"); label.setFont(new Font("Arial", 20)); TableColumn firstNameCol = new TableColumn("First Name"); firstNameCol.setCellValueFactory( new PropertyValueFactory<Person, String>("firstName")); firstNameCol.setMinWidth(150);
TableColumn lastNameCol = new TableColumn("Last Name"); lastNameCol.setCellValueFactory( new PropertyValueFactory<Person, String>("lastName")); lastNameCol.setMinWidth(150); TableColumn emailCol = new TableColumn("Email"); emailCol.setMinWidth(200); emailCol.setCellValueFactory( new PropertyValueFactory<Person, String>("email")); table.setItems(data); table.getColumns().addAll(firstNameCol, lastNameCol, emailCol); //--- create a edit button and a editPane to edit person Button addButton = new Button("Add"); addButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { fPerson = null; firtNameField.setText(""); lastNameField.setText(""); emailField.setText(""); editView.show(); } }); Button editButton = new Button("Edit"); editButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { if (table.getSelectionModel().getSelectedItem() != null) { fPerson = table.getSelectionModel().getSelectedItem(); firtNameField.setText(fPerson.getFirstName()); lastNameField.setText(fPerson.getLastName()); emailField.setText(fPerson.getEmail()); editView.show(); } } }); Button deleteButton = new Button("Delete"); deleteButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { if (table.getSelectionModel().getSelectedItem() != null) { data.remove(table.getSelectionModel().getSelectedItem()); } } }); HBox addEditDeleteButtonBox = new HBox(); addEditDeleteButtonBox.getChildren().addAll(addButton, editButton, deleteButton); addEditDeleteButtonBox.setAlignment(Pos.CENTER_RIGHT); addEditDeleteButtonBox.setSpacing(3); GridPane editPane = new GridPane(); editPane.getStyleClass().add("editView"); editPane.setPadding(new Insets(3)); editPane.setHgap(5); editPane.setVgap(5); Label personLbl = new Label("Person:"); editPane.add(personLbl, 0, 1); GridPane.setHalignment(personLbl, HPos.LEFT); firtNameField.setPrefWidth(250); lastNameField.setPrefWidth(250); emailField.setPrefWidth(250); Label firstNameLabel = new Label("First Name:"); Label lastNameLabel = new Label("Last Name:"); Label emailLabel = new Label("Email:"); editPane.add(firstNameLabel, 0, 3); editPane.add(firtNameField, 1, 3); editPane.add(lastNameLabel, 0, 4); editPane.add(lastNameField, 1, 4); editPane.add(emailLabel, 0, 5); editPane.add(emailField, 1, 5); GridPane.setHalignment(firstNameLabel, HPos.RIGHT); GridPane.setHalignment(lastNameLabel, HPos.RIGHT); GridPane.setHalignment(emailLabel, HPos.RIGHT); Button saveButton = new Button("Save"); saveButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { if (fPerson == null) { fPerson = new Person( firtNameField.getText(), lastNameField.getText(), emailField.getText()); data.add(fPerson); } else { int k = -1; if (data.size() > 0) { for (int i = 0; i < data.size(); i++) { if (data.get(i) == fPerson) { k = i; } } } fPerson.setFirstName(firtNameField.getText()); fPerson.setLastName(lastNameField.getText()); fPerson.setEmail(emailField.getText()); data.set(k, fPerson); table.setItems(data); // The following will work, but edited person has to be added to the button // // data.remove(fPerson); // data.add(fPerson); // add and remove refresh the table, but now move edited person to original spot, // it failed again with the following code // while (data.indexOf(fPerson) != k) { // int i = data.indexOf(fPerson); // Collections.swap(data, i, i - 1); // } } editView.close(); } }); Button cancelButton = new Button("Cancel"); 
cancelButton.setOnAction(new EventHandler<ActionEvent>() { @Override public void handle(ActionEvent event) { editView.close(); } }); HBox saveCancelButtonBox = new HBox(); saveCancelButtonBox.getChildren().addAll(saveButton, cancelButton); saveCancelButtonBox.setAlignment(Pos.CENTER_RIGHT); saveCancelButtonBox.setSpacing(3); VBox editBox = new VBox(); editBox.getChildren().addAll(editPane, saveCancelButtonBox); Scene editScene = new Scene(editBox); editView.setTitle("Person"); editView.initStyle(StageStyle.UTILITY); editView.initModality(Modality.APPLICATION_MODAL); editView.setScene(editScene); editView.close(); final VBox vbox = new VBox(); vbox.setSpacing(5); vbox.getChildren().addAll(label, table, addEditDeleteButtonBox); vbox.setPadding(new Insets(10, 0, 0, 10)); ((Group) scene.getRoot()).getChildren().addAll(vbox); stage.setScene(scene); stage.show(); } }
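    For what it's worth, the behaviour change usually cited for 2.1+ is that PropertyValueFactory only observes a cell's value if the item class exposes the JavaFX property itself via a <name>Property() accessor; with only plain getters, the cell reads the value once and never hears about later set() calls. A sketch of the commonly suggested fix for the Person class above:

        public static class Person {
            private final SimpleStringProperty firstName = new SimpleStringProperty();

            public String getFirstName() { return firstName.get(); }
            public void setFirstName(String fName) { firstName.set(fName); }

            // Exposing the property lets the TableView attach a listener,
            // so edits show up without the add/remove/sort workaround.
            public SimpleStringProperty firstNameProperty() { return firstName; }

            // ...add lastNameProperty() and emailProperty() the same way...
        }

    With the property accessors in place, the data.set(k, fPerson) and table.setItems(data) workarounds in the save handler shouldn't be needed at all.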

    Read the article

  • OpenCL - incremental summation during compute

    - by user1721997
    I'm an absolute novice in OpenCL programming. For my app (a molecular simulation) I wrote a kernel to calculate the intermolecular potential of a Lennard-Jones liquid. In this kernel I need to compute the cumulative value of the potential of all particles against one:

        __kernel void Molsim(__global const float* inmatrix, __global float* fi, const int c, const float r1, const float r2, const float r3, const float rc, const float epsilon, const float sigma, const float h1, const float h23)
        {
            float fi0;
            float fi1;
            float d;
            unsigned int i = get_global_id(0); // number of particles (typically 2000)
            if(c!=i)
            {
                // potential before particle movement
                d=sqrt(pow((0.5*h1-fabs(0.5*h1-fabs(inmatrix[c*3]-inmatrix[i*3]))),2.0)+pow((0.5*h23-fabs(0.5*h23-fabs(inmatrix[c*3+1]-inmatrix[i*3+1]))),2.0)+pow((0.5*h23-fabs(0.5*h23-fabs(inmatrix[c*3+2]-inmatrix[i*3+2]))),2.0));
                if(d<rc) { fi0=4.0*epsilon*(pow(sigma/d,12.0)-pow(sigma/d,6.0)); } else { fi0=0; }
                // potential after particle movement
                d=sqrt(pow((0.5*h1-fabs(0.5*h1-fabs(r1-inmatrix[i*3]))),2.0)+pow((0.5*h23-fabs(0.5*h23-fabs(r2-inmatrix[i*3+1]))),2.0)+pow((0.5*h23-fabs(0.5*h23-fabs(r3-inmatrix[i*3+2]))),2.0));
                if(d<rc) { fi1=4.0*epsilon*(pow(sigma/d,12.0)-pow(sigma/d,6.0)); } else { fi1=0; }
                // cumulative difference of potentials
                fi[0]+=fi1-fi0;
            }
        }

    My problem is in the line fi[0]+=fi1-fi0;. The one-element vector fi[0] contains wrong results. I read something about sum reduction, but I do not know how to do it during the calculation. Is there any simple solution to my problem?
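    The wrong results are expected: fi[0] += ... is a read-modify-write race, since every work item updates the same global float with no synchronization, and core OpenCL 1.x has no atomic float add. The simplest fix (a sketch; the host-side sum is an assumption about the surrounding app) is to give each work item its own output slot and reduce afterwards:

        // Minimal sketch of the pattern: each work item stores its own
        // partial difference, so no two items touch the same memory.
        __kernel void partial_diffs(__global const float* diffs_in,
                                    __global float* fi_partial,
                                    const int c)
        {
            unsigned int i = get_global_id(0);
            fi_partial[i] = (i == (unsigned int)c) ? 0.0f : diffs_in[i];
        }

    In the real kernel, the last line simply becomes fi_partial[i] = fi1 - fi0; the host then reads the buffer back and adds the ~2000 floats (or runs a standard reduction kernel over them).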

    Read the article

  • How can I change the tags/css class of some dynamic text based on output from a helper?

    - by Angela
    I have a repeated line which outputs something like: Call John Jones in -3 days (status) I have a helper called show_status(contact,email) which will output whether that particular email had been sent to that particular contact. If it is "sent," then that entire line should show up as "strike out." Similarly, if the number of days is -3 (<0), the line should be formatted in red. Here's my hack, but there must be a cleaner way to put the logic into the controller? I hard-code a value that wraps around the lines I want formatted, and assign the value based on a separate call to the same helper: <% for call in @campaign.calls %> <% if !show_call_status(@contact,call).blank? %> <%= strike_start = '<s>'%> <%= strike_end = '</s>' %> <% end %> <p> <%= strike_start %> <%= link_to call.title, call_path(call) %> on <%= (@contact.date_entered + call.days).to_s(:long) %> in <%= interval_email(@contact,call) %> days <%= make_call(@contact,call) %> <span class='status'><%= show_call_status(@contact,call) %></span> <%= strike_end %> </p> <% end %> I guess what I'd like to do is not have the if statement in the View. Not sure how to do this.
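    One way to get the branching out of the view (a sketch with hypothetical helper and class names, reusing the helpers the question already has) is a helper that returns CSS classes, with the stylesheet doing the striking and the red:

        # app/helpers/calls_helper.rb
        def call_row_class(contact, call)
          classes = []
          classes << "sent"    unless show_call_status(contact, call).blank?
          classes << "overdue" if interval_email(contact, call) < 0
          classes.join(" ")
        end

    The view then collapses to <p class="<%= call_row_class(@contact, call) %>"> ... </p>, and the CSS carries the presentation: .sent { text-decoration: line-through; } .overdue { color: red; }. The logic lives in one testable helper instead of inline <s> tags.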

    Read the article

  • HTML, CSS: overbar matching square root symbol

    - by Pindatjuh
    Is there a way in HTML and/or CSS to do the following, but then correctly: √¯¯¯¯¯¯φ·(2π−γ) Such that there is an overbar above the expression, which neatly aligns with the &radic;? I know there is the Unicode &macr;, that looks like the overbar I need (as used in the above example, though as you can see – it doesn't align well with the root symbol). The solution I'm looking for works at least for one standard font, on most sizes, and all modern browsers. I can't use images; I'd like to have a pure HTML4/CSS way, without client scripting. Here is my current code, thank you Matthew Jones (+1) for the text-decoration: overline! Still some problems <div style="font-family: Georgia; font-size: 200%"> <span style="vertical-align: -15%;">&radic;</span><span style="text-decoration: overline;">&nbsp;x&nbsp;+&nbsp;1&nbsp;</span> </div> The line doesn't match the &radic; because I lowered it with 15% baseline height. (Because the default placement is not nice) The line thickness doesn't match the thickness of the &radic;. Thanks!
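    A variant worth trying (a sketch; the exact em values need per-font tuning, which is an assumption on my part): draw the bar with border-top instead of text-decoration: overline, since a border's position and thickness can be nudged to meet the radical glyph:

        <div style="font-family: Georgia; font-size: 200%;">
          <span style="vertical-align: -15%;">&radic;</span><span style="border-top: 0.08em solid; padding-top: 0.02em;">&nbsp;x&nbsp;+&nbsp;1&nbsp;</span>
        </div>

    text-decoration's position and weight are fixed by the font, which is why the overline neither meets the &radic; nor matches its stroke; a border puts both under your control.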

    Read the article

  • Lync 2010, Kamailio, & Trixbox 2.6.23 (Asterisk 1.4)

    - by slashp
    I'm having an issue trying to connect Lync 2010 phone calls with our trixbox PBX. I've gotten to the point where Kamailio seems to be functioning properly and acting as a bridge between TCP traffic (from Lync) & UDP traffic (to the trixbox, as Asterisk 1.4 does not support SIP over TCP). Our Lync box IP: 10.100.10.41 Our Kamailio box IP: 10.100.10.44 Our trixbox IP: 10.100.10.2 The issue I'm running into is as follows when enabling SIP debugging for the Kamailio box: <--- SIP read from 10.100.10.44:5060 ---> PRACK sip:TNECLTSLY01.contoso.com:5068;transport=Tcp;maddr=10.100.10.41 SIP/2.0 FROM: <sip:9121;[email protected];user=phone>;epid=CF2380792B;tag=4852bab430 TO: <sip:[email protected];user=phone>;epid=CF2380792B;tag=3684a6a24e CSEQ: 24 PRACK CALL-ID: 192daae6-00e1-4140-bddd-0394b35d475b MAX-FORWARDS: 70 Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d VIA: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK159fc989 CONTACT: <sip:TNECLTSLY01.contoso.com:5068;transport=Tcp;maddr=10.100.10.41> CONTENT-LENGTH: 0 USER-AGENT: RTCC/4.0.0.0 MediationServer RAck: 1 23 INVITE <-------------> --- (12 headers 0 lines) --- Sending to 10.100.10.44 : 5060 (NAT) <--- Transmitting (NAT) to 10.100.10.44:5060 ---> SIP/2.0 481 Call leg/transaction does not exist Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d;received=10.100.10.44 Via: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK159fc989 From: <sip:9121;[email protected];user=phone>;epid=CF2380792B;tag=4852bab430 To: <sip:[email protected];user=phone>;epid=CF2380792B;tag=3684a6a24e Call-ID: 192daae6-00e1-4140-bddd-0394b35d475b CSeq: 24 PRACK User-Agent: Asterisk PBX Allow: INVITE, ACK, CANCEL, OPTIONS, BYE, REFER, SUBSCRIBE, NOTIFY Supported: replaces Content-Length: 0 <------------> trixbox1*CLI> <--- SIP read from 10.100.10.44:5060 ---> ACK sip:[email protected];user=phone SIP/2.0 FROM: "John Jones"<sip:9121;[email protected];user=phone>;tag=4852bab430;epid=CF2380792B TO: <sip:[email protected];user=phone>;tag=3684a6a24e;epid=CF2380792B CSEQ: 23 ACK CALL-ID: 192daae6-00e1-4140-bddd-0394b35d475b MAX-FORWARDS: 70 Via: SIP/2.0/UDP 10.100.10.44;branch=z9hG4bKcydzigwkX;i=d VIA: SIP/2.0/TCP 10.100.10.41:51677;branch=z9hG4bK79a21c CONTENT-LENGTH: 0 My SIP trunk on the trixbox looks like this: [from-lync] exten => _+4XXX!,1,Noop(Stripping + from start of number) exten => _+4XXX!,n,Goto(from-internal,${EXTEN:1}) Though I am still having no luck getting the + stripped or the call to go through. Any ideas would be greatly appreciated. Thank you! -slashp
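    One detail that stands out in the trace (an observation plus a hedged sketch, not a verified fix): the called number arrives as 94401 with no leading + at all, so a pattern of _+4XXX! can never match and the [from-lync] context never fires. A dialplan shaped around what the INVITE actually carries would look like:

        ; assumption: Lync sends 5-digit numbers with a leading 9
        [from-lync]
        exten => _9XXXX,1,Noop(Call from Lync for ${EXTEN})
        exten => _9XXXX,n,Goto(from-internal,${EXTEN},1)

    Separately, the 481 on the PRACK suggests Asterisk 1.4's lack of 100rel/PRACK support is in play; disabling reliable provisional responses for this trunk on the Lync/Kamailio side may be needed regardless of the dialplan.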

    Read the article

  • SQLAuthority News – Meeting SQL Friends – SQLPASS 2011 Event Log

    - by pinaldave
    One of the biggest reasons I go to SQLPASS is that my friends are going there too. There are so many friends with whom I often talk on Facebook and Twitter, but I rarely get time to meet and talk with them in person. One thing I am usually sure of is that many of them will attend SQLPASS. This is one event which every SQL Server enthusiast should attend. Just like everybody, I had a pleasant time meeting many of my SQL friends. There were so many friends I met without clicking a photo, and so many friends who clicked a photo on their camera which I do not have. Here is 1% of the photos which I have. If you are not in a photo, it does not mean I have less respect for our friendship. Please post a link to our photo together :) I was very fortunate that I was able to snap a quick photograph with Dr. David DeWitt. I stood outside of the hall waiting for him to show up, and when he was heading down from the convention center I asked him if I could have one photo for my memory lane; very politely, he agreed. It indeed made my day! Pinal Dave with Dr. David DeWitt. Every single time I meet Steve, I make sure I have one photo for my memory. Steve is so kind every single time. If you know SQL and do not know Steve Jones, you do not know SQL (IMHO). Following is the photograph with Michael McLean. More details about this photo in a future blog post! Pinal Dave, Michael McLean, and Rick Morelan. Arnie always shares his wisdom with me. I still remember the very first time I visited the USA: I was standing alone in a corner, and Arnie walked up to me and introduced me to every single person he knew. Talking to Arnie is always a pleasure and inspiring. Arnie Rowland and Pinal Dave. I am now a published author and have written two books so far. I am fortunate to have Rick Morelan as co-author of both of my books. He is a great guy and very easy to become friends with. I am very much impressed by him and his kindness during book co-authoring. Here is the very first photograph of us together at SQLPASS. Rick Morelan and Pinal Dave. Diego Nogare and I have been talking for a long time on Twitter and on various social media channels. I finally got the chance to meet my friend from Brazil. It was an excellent experience to meet a friend whom one has wanted to meet for a long time and never got the chance to earlier. Buck Woody – who does not know Buck? He is funny, kind and, most important, a friend to everyone. Buck is so kind that he does not hesitate to approach people, even though he is famous and well known in the community. Every time I meet him I learn something. He is always smiling and approachable. Pinal Dave and Buck Woody. Rushabh Mehta is the current SQL PASS president and a personal friend. He always has a smiling face and tremendous love for the SQL community. I often wonder where he gets the time for all the effort he puts in for the community. I never miss a chance to meet and greet him. Even though he is a renowned SQL guru and an extremely busy person, every single time I meet him he asks me, "How are Nupur and Shaivi?" He even remembers my wife's and daughter's names. I am touched. Rushabh Mehta and Pinal Dave. Nigel Sammy has an excellent sense of humor and passion for the community. We have excellent synergy when we are together. The attached photo was taken while I was talking to him about SQL on the Seattle shoreline. Pinal Dave and Nigel Sammy. Rick Morelan wanted this trip of mine to be memorable. I am vegetarian and I told him that I do not like seafood.
    Well, to prove a point, he took me to a fantastic seafood restaurant in Seattle and treated me to mouth-watering vegetarian dishes. I think when I go to Seattle next time, I am going to make him take me to the same place again. Rick, Rushabh, Pinal and Paras. Well, this is a short summary of a few of the friends I met in Seattle. What is life without friends, eh? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • Going to the Score Cards - Exceptional DBA Awards 2011

    - by Rodney
    This year marks my 4th year as a judge for the Exceptional DBA Awards, founded by Red Gate in 2008 to "recognize the essential but often overlooked contributions of DBAs, the unsung heroes of the IT community." As a professional DBA myself I have been honored to participate as a judge. It is not an easy job because there is a voluminous amount of nominees from all over the world. Each judge has to read through every word of the nominee's answers, deciding what makes each person special and stand out amongst their peers. What drives them? What single element of their submission will shine above all others? It is my hope that what I am about to divulge to you as a judge will prompt you to think about yourself or someone you know and decide that you may be the exceptional DBA who can take home the gold at this year's award ceremony in Seattle. We are more than a few weeks into the nomination process and there are quite a number of submissions already. I can not tell you how many as that would not be fair. I can say it is not 1 million or more. I can also say that it is not 100,000. But that is all I can say about that. However, I can tell you that it is enough this year that we are breaking records on the number of people who have been influenced, inspired or intrigued by the awards in the past. I remember them all like it were yesterday. fuzzy thought cloud here. It was a rainy day in Seattle (all memories for each award ceremony will start thusly) and I was in the hotel going over my notes on what I wanted to say about the winner of the 2008 Red Gate Exceptional DBA Award. The notes were on index cards that I had either bought or stolen from my wife, I do not recall, but I was nervous which was unlike me. This was, after all, a big night for the winner. Of course, we, the judges and the SQL community, had already decided the winner and now all that remained was to present the award. The room was packed. It was Casino night, sponsored by sqlservercentral.com. Money (fake), drinks (not fake) and camaraderie flowed through the room. Dan McClain won the award that year. He worked for Anheuser-Busch at the time. I promise that did not influence my decision. We presented Dan with the award. He was very proud of this achievement, rightfully so, as was the SQL community for him. I spoke with Dan throughout the conference and realized how huge this award was for him, not just personally but professionally. It was a rainy day in Seattle in 2009 and I was nervous. I was asked to speak to a group of people again as a judge for the Exceptional DBA Awards. This year, Josef Richberg would be the recipient of the award, but he would not be able to attend. We all prayed for him as he fought through an illness and congratulated him for his accomplishments as a DBA for his company. He got better and sallied forth and continued to give back to the SQL community that he saw as one big family. In 2010, and I am getting ahead of myself, he was asked to be a judge himself for the very award he had just received the year before. It was a sunny day in Seattle and I missed it, because it was in July and I was not there. It was a rainy day in Seattle and it is 2010 and Tracy Hamlin enters a submission that blows this judge away. She is managing a 50 Terabyte distributed database ("50 Gigabytes! Are you kidding me!!!", Rodney jokes.)  and loves her daily job as a DBA working with developers, mentoring them and teaching them best practices with kindness and patience. 
She is a people person who just happens to have 10+ years' experience with RDBMSs. She wins the award and goes on to be recognized as famous at PASS. It will be a rainy day in Seattle this year when I sit amongst my old constituent judges and friends, Brad McGehee, http://www.simple-talk.com/books/sql-books/how-to-become-an-exceptional-dba,-2nd-edition/, Steve Jones, whom we all know and love at http://www.sqlservercentral.com, and a young upstart to the SQL community, this cat named Brent Ozar, to announce the 2011 winner. I personally have not heard of Brent, but I am told I interviewed him for a DBA position several years ago and turned him down, http://www.brentozar.com/archive/2011/05/exceptional-dba-contest/. I hope that did not jeopardize his future in the SQL world. I am a big-hearted oaf and would feel horrible. Hopefully I will meet him at PASS and we can work this all out and I can help him get a DBA job. The rain has stopped and a new year is upon us. The stakes are high...the competition is fierce...the rewards are incredible. The entry form awaits you. http://www.exceptionaldba.com/ I very much look forward to meeting you and presenting the award to you in front of hundreds of your envious but proud peers as the new Exceptional DBA for 2011 at the PASS Summit. Here is what you could win: The Exceptional DBA of the Year receives full conference registration for the 2011 PASS Summit in Seattle, where the awards ceremony will take place, four nights' hotel accommodation, and $300 towards travel expenses. They will also be featured on Simple-Talk. Are you ready? Are you nervous?

    Read the article

  • Goodbye jQuery Templates, Hello JsRender

    - by SGWellens
    A funny thing happened on my way to the jQuery website, I blinked and a feature was dropped: jQuery Templates have been discontinued. The new pretender to the throne is JsRender. jQuery Templates looked pretty useful when they first came out. Several articles were written about them but I stayed away because being on the bleeding edge of technology is not a productive place to be. I wanted to wait until it stabilized…in retrospect, it was a serendipitous decision. This time however, I threw all caution to the wind and took a close look at JSRender. Why? Maybe I'm having a midlife crisis; I'll go motorcycle shopping tomorrow. Caveat, here is a message from the site: Warning: JsRender is not yet Beta, and there may be frequent changes to APIs and features in the coming period. Fair enough, we've been warned. The first thing we need is some data to render. Below is some JSON formatted data. Typically this will come from an asynchronous call to a web service. For simplicity, I hard coded a variable:     var Golfers = [         { ID: "1", "Name": "Bobby Jones", "Birthday": "1902-03-17" },         { ID: "2", "Name": "Sam Snead", "Birthday": "1912-05-27" },         { ID: "3", "Name": "Tiger Woods", "Birthday": "1975-12-30" }         ]; We also need some templates, I created two. Note: The script blocks have the id property set. They are needed so JsRender can locate them.     <script id="GolferTemplate1" type="text/html">         {{=ID}}: <b>{{=Name}}</b> <i>{{=Birthday}}</i> <br />     </script>       <script id="GolferTemplate2" type="text/html">         <tr>             <td>{{=ID}}</td>             <td><b>{{=Name}}</b></td>             <td><i>{{=Birthday}}</i> </td>         </tr>     </script> Including the correct JavaScript files is trivial:     <script src="Scripts/jquery-1.7.js" type="text/javascript"></script>     <script src="Scripts/jsrender.js" type="text/javascript"></script> Of course we need some place to render the output:     <div id="GolferDiv"></div><br />     <table id="GolferTable"></table> The code is also trivial:     function Test()     {         $("#GolferDiv").html($("#GolferTemplate1").render(Golfers));         $("#GolferTable").html($("#GolferTemplate2").render(Golfers));           // you can inspect the rendered html if there are poblems.         // var html = $("#GolferTemplate2").render(Golfers);     } And here's what it looks like with some random CSS formatting that I had laying around.    Not bad, I hope JsRender lasts longer than jQuery Templates. One final warning, a lot of jQuery code is ugly, butt-ugly. If you do look inside the jQuery files, you may want to cover your keyboard with some plastic in case you get vertigo and blow chunks. I hope someone finds this useful. Steve Wellens CodeProject

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET has to be the most popular of all wait stats. I have commonly seen this wait stat as one of the top 5 wait stats in most systems with more than one CPU. Books On-Line: "Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem." CXPACKET explanation: When a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait. There is an organizer/coordinator thread (thread 0), which waits for all the threads to complete and gathers the results together to present on the client's side. The organizer thread has to wait for all the threads to finish before it can move ahead. The wait by this organizer thread for slow threads to complete is called the CXPACKET wait. Note that not all CXPACKET waits are bad. You might experience a case when it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query. Reducing CXPACKET wait: We cannot discuss reducing the CXPACKET wait without talking about the server workload type. OLTP: On a pure OLTP system, where the transactions are small and the queries are usually very quick, set the "Maximum Degree of Parallelism" to 1 (one). This makes sure that a query never goes parallel and does not incur more engine overhead:

        EXEC sys.sp_configure N'max degree of parallelism', N'1'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Data-warehousing / reporting server: As queries will be running for a long time, it is advised to set the "Maximum Degree of Parallelism" to 0 (zero). This way most of the queries will utilize the parallel processors, and long running queries get a boost in their performance due to multiple processors:

        EXEC sys.sp_configure N'max degree of parallelism', N'0'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Mixed system (OLTP & OLAP): Here is the challenge. The right balance has to be found. I have taken a very simple approach. I set the "Maximum Degree of Parallelism" to 2, which means the query still uses parallelism but only on 2 CPUs. However, I keep the "Cost Threshold for Parallelism" very high. This way, not all the queries will qualify for parallelism; only the queries with a higher cost will go parallel. I have found this to work best for a system that has OLTP queries and also has a reporting workload. Here, I am setting 'Cost Threshold for Parallelism' to 25 (which is just for illustration); you can choose any value, and you can find it out only by experimenting with the system. In the following script, I am setting the 'Max Degree of Parallelism' to 2, which indicates that a query with a higher cost (here, more than 25) will qualify as a parallel query to run on 2 CPUs. This implies that regardless of the number of CPUs, the query will select any two CPUs to execute itself.

        EXEC sys.sp_configure N'cost threshold for parallelism', N'25'
        GO
        EXEC sys.sp_configure N'max degree of parallelism', N'2'
        GO
        RECONFIGURE WITH OVERRIDE
        GO

    Read all the posts in the Wait Types and Queue series. Additionally, a must-read comment from Jonathan Kehayias.
    Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest you all read Books On-Line for further clarification. All the discussion of wait stats here is generic and it varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
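    For readers who want to see where their own system stands before touching these settings, a quick look at the wait stats DMV (a generic sketch, not from the original post):

        -- How much has CXPACKET accumulated since the last service restart?
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type = 'CXPACKET';

    Comparing wait_time_ms against the other top wait types gives a sense of whether parallelism is actually the bottleneck worth tuning.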

    Read the article

  • SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff)

    - by jamiet
    Recently I noticed a tweet from notable SQL Server author and community dude-at-large Steve Jones in which he asked how many SQL Server developers were putting their SQL Server source code (i.e. DDL) under source control (I’m paraphrasing because I can’t remember the exact tweet and Twitter’s search functionality is useless). The question surprised me slightly as I thought a more pertinent question would be “how many SQL Server developers are not using source control?” because I have been doing just that for many years now and I simply assumed that use of source control is a given in this day and age. Then I started thinking about it. “Perhaps I’m wrong” I pondered, “perhaps the SQL Server folks that do use source control in their day-to-day jobs are in the minority”. So, dear reader, I’m interested to know a little bit more about your use of source control. Are you putting your SQL Server code into a source control system? If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? Why did you make those particular software choices? Any interesting anecdotes to share in regard to your use of source control and SQL Server? To encourage you to contribute I have five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect to give away to what I consider to be the five best replies (“best” is totally subjective of course but this is my blog so my decision is final ), if you want to be considered don’t forget to leave contact details; email address, Twitter handle or similar will do. To start you off and to perhaps get the brain cells whirring, here are my answers to the questions above: Are you putting your SQL Server code into a source control system? As I think I’ve already said…yes. Always. If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? I move around a lot between many clients so it changes on a fairly regular basis; my current client uses Team Foundation Server (aka TFS) and as part of a separate project is trialing the use of Team Foundation Service. I have used SVN extensively in the past which I am a fan of (I generally prefer it to TFS) and am trying to get my head around Git by using it for ObjectStorageHelper. What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? On my current project, Team Explorer. In the past I have used Tortoise to connect to SVN. Why did you make those particular software choices? I generally use whatever the client uses and given that I work with SQL Server I find that the majority of my clients use TFS, I guess simply because they are Microsoft development shops. Any interesting anecdotes to share in regard to your use of source control and SQL Server? Not an anecdote as such but I am going to share some frustrations about TFS. In many ways TFS is a great product because it integrates many separate functions (source control, work item tracking, build agents) into one whole and I’m firmly of the opinion that that is a good thing if for no reason other than being able to associate your check-ins with a work-item. However, like many people there are aspects to TFS source control that annoy me day-in, day-out. 
Chief among them has to be the fact that it uses a file’s read-only property to determine if a file should be checked-out or not and, if it determines that it should, it will happily do that check-out on your behalf without you even asking it to. I didn’t realise how ridiculous this was until I first used SVN about three years ago – with SVN you make any changes you wish and then use your source control client to determine which files have changed and thus be checked-in; the notion of “check-out” doesn’t even exist. That sounds like a small thing but you don’t realise how liberating it is until you actually start working that way. Hoping to hear some more anecdotes and opinions in the comments. Remember….free software is up for grabs! @jamiet 

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones ( Blog | Twitter) – “What the Business Says Is Not What the  Business Wants.” Steve posed the question, “What issues have you had in interacting with the business to get your job done?” My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do technological-savvy barriers exist, but there are also linguistic barriers between the two worlds. So how do I cope? The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture: When we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right. When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh. I prefer to take a right-to-left approach. Preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we’re back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it’s a painful project for everyone – not because of me, but because the approach just doesn’t get to what the business wants in the most effective way.) By using a right to left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explaining the decision-making process that relates to these reports. Sometimes I have them explain to me their business processes, or better yet show me their business processes in action because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above. My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube. 
Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build. So right to left design is not truly moving from the right to the left. But it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that’s how I get past what the business says to what the business wants.

    Read the article

  • Oracle WebCenter at the Enterprise 2.0 Conference

    - by Brian Dirking
    We had a great week at the E20 Conference, presenting in four sessions – Andy MacMillan gave a session titled Today's Successful Enterprises are Social Enterprises and was on a panel that Tony Byrne moderated; Christian Finn spoke on a unified communications panel, Unified Communications + Social Computing = Best of Both Worlds?; Mark Bennett spoke on a panel on The Evolution of Talent Management. The key areas of focus this year were sentiment analysis, adoption and community building, the benefits of failure, and social's role in process applications. Sentiment analysis. This was focused not on external audiences but more on employee sentiment. Tim Young showed his internal "NikoNiko" project, where employees use smilies to report their current mood. The result was a dashboard that showed the company mood by department. Since the goal is to improve productivity, people can see which departments are running into issues and try to address them. A company might otherwise wait until the end-of-quarter financials to find out that there was a problem and product didn't ship. This is a way to identify issues immediately. Tim is great – he had the crowd laughing as soon as he hit the stage, with his proposed hashtag for his session: by making it 138 characters long, people couldn't say much behind his back. And as I tweeted during his session, I loved his comment that complexity diffuses energy - it sounds like something Sun Tzu would say. Another example of employee sentiment analysis was CubeVibe. Founder and CEO Aaron Aycock, in his 3 minute pitch-or-die session, talked about how engaged employees perform better. It was too bad he got gonged, he was just picking up speed, but CubeVibe did win the vote – congratulations to them. Internal adoption, community building, and involvement. On this topic I spoke to Terri Griffith, and she said there is some good work going on at University of Indiana regarding this, and hinted that she might be blogging about it in the near future. This area holds lots of interest for me. Amongst our customers, CPAC stands out as an organization that has successfully built a community. So, I wonder - what are the building blocks? A strong leader? A common or unifying purpose? A certain level of engagement? I imagine someone has created an equation that says "for a community to grow at 30% per month, there must be an engagement level x to the square root of y, where x equals current community size, and y equals the expected growth rate, and the result is how many engagements the average user must contribute to maintain that growth." Does anyone have a framework like that? The net result of everyone's experience is that there is nothing to do but start early and fail often. Kevin Jones made this the focus of his keynote. He talked about the types of failure and what they mean. And he showed his famous kids-at-work video: Kevin's blog also has this post: Social Business Failure #8: Workflow Integration. This is something that we've been working on at Oracle. Since so much of business is based in enterprise applications such as ERP and CRM (and since Oracle offers e-Business Suite, Siebel, PeopleSoft, and JD Edwards, as well as Fusion Applications), it makes sense that the social capabilities of Oracle WebCenter are built right into these applications. There are two types of social collaboration – ad-hoc, and exception handling.
When you are in a business process and encounter an exception, you immediately look for 1) the document that tells you how to handle it, or 2) the person who can tell you how to handle it. With WebCenter built into these processes, people either search their content management system or engage in expertise location and conversation. The great thing is, THEY DON'T HAVE TO LEAVE THE APPLICATION TO DO IT. Oracle has built the social capabilities right into the applications and business processes. I don't think enough folks were able to see that at the event, but I expect that over the next six months folks will become very aware of it. WebCenter also provides the ad-hoc collaboration, search, and expertise location that folks need when they are innovating or collaborating. We demonstrated Oracle Social Network. It's built on our Oracle WebCenter product to provide social collaboration inside and outside of your company. When we showed it to people, a number of areas stood out as different from the other products being shown at the conference:
    - Screenshots from within the product
    - Many authors working on documents simultaneously
    - Flagging people for follow-up
    - Direct ability to call out to people
    - Ability to see presence: not just whether someone is online, but which conversation they are actively in
Great stuff. The conference was full of smart people that we enjoy spending time with. We'll keep in touch in the meantime, but we look forward to seeing you in Boston.

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name, though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO, because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post. SQLIO works by slamming your disk. It performs as many reads as it can, or as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software. My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system: the disk for sure, but also memory and CPU. How to stress the system? SQLIO, of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong, our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning, and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up? Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with the character 0x0. I never got a computer science degree; I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right: you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk subsystem. But when you want to test compression and decompression, that can be an issue. I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of; I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes? Well, the issue is: what does SQLIO write? I don't have access to the code, but I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file, then ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896 KB compressed to 275,871 KB; after SQLIO it compressed to 608 KB). So, what does SQLIO write? It writes air. If you're trying to test it with compression, or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.
Should I find some other mechanism for testing? If all I'm interested in is establishing performance to my own satisfaction, yes. But I want to be able to compare my results with other people's results, and we all need to be using the same tool for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit: measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
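To make the NULL-padding effect concrete, here is a quick sketch (mine, not Grant's; plain Python, with zlib standing in for whatever compression engine you are testing) comparing a NULL-filled buffer against random data standing in for real database pages:

    import os
    import zlib

    SIZE = 4 * 1024 * 1024  # 4 MB test buffers

    null_data = b"\x00" * SIZE    # like SQLIO's default NULL-padded .DAT file
    real_data = os.urandom(SIZE)  # stands in for hard-to-compress database pages

    for label, data in (("NULL-filled", null_data), ("random", real_data)):
        out = zlib.compress(data)
        pct = 100.0 * (1 - float(len(out)) / len(data))
        print("%s: %d -> %d bytes (%.1f%% smaller)" % (label, len(data), len(out), pct))

The NULL-filled buffer shrinks to a fraction of a percent of its original size, which is exactly the better-than-99% figure described above, while the random buffer barely compresses at all.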

    Read the article

  • Which functions in the C standard library commonly encourage bad practice?

    - by Ninefingers
Hello all, this is inspired by this question and the comments on one particular answer, through which I learnt that strncpy is not a very safe string handling function in C and that it pads with zeros until it reaches n, something I was unaware of. Specifically, to quote R..: strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower. By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length). And from Jonathan Leffler: Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove(), because you always know all the sizes ahead of time and can make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy(). So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question: which C standard library functions are used inappropriately, in ways that may lead to security problems, code defects, or inefficiencies? In the interests of objectivity, I have a number of criteria for an answer:
    - Please, if you can, cite the design reasons behind the function in question, i.e. its intended purpose.
    - Please highlight the misuse to which the code is currently put.
    - Please state why that misuse may lead towards a problem. I know that should be obvious, but it prevents soft answers.
Please avoid:
    - Debates over naming conventions of functions (except where this unequivocally causes confusion).
    - "I prefer x over y" - preference is OK, we all have them, but I'm interested in actual unexpected side effects and how to guard against them.
As this is likely to be considered subjective and has no definite answer, I'm flagging for community wiki straight away. I am also working as per C99.
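Not part of the original question, but a minimal C99 sketch of the two behaviors quoted above (strncpy leaving dest unterminated on truncation, and the snprintf idiom that both terminates and reports truncation) may make the discussion concrete:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *src = "Jonathan";
        char dest[4];

        /* strncpy: if src doesn't fit, dest gets NO null terminator;
           if src is shorter than n, the remainder is zero-padded.
           Printing dest here would be undefined behavior. */
        strncpy(dest, src, sizeof dest);   /* dest holds "Jona", unterminated */

        /* snprintf: always terminates (for n > 0) and returns how many
           characters the full string needed, so truncation is detectable. */
        int needed = snprintf(dest, sizeof dest, "%s", src);
        if (needed >= (int)sizeof dest)
            printf("truncated: needed %d bytes\n", needed + 1);
        printf("dest = \"%s\"\n", dest);   /* prints "Jon" */
        return 0;
    }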

    Read the article

  • how to use SQL wildcard % with Queryset extra>select?

    - by tylias
I'm trying to add weights to search terms I'm using to filter a queryset, and using the '%' wildcard is causing me some problems. I'm using the extra() modifier to add a weight parameter to the queryset, which I will be using to inform a sort ordering. (See http://docs.djangoproject.com/en/1.1/ref/models/querysets/#extra-select-none-where-none-params-none-tables-none-order-by-none-select-params-none ) Here's the gist of the code:

    def viewname(request):
        ...
        exact_matchstrings = []
        exact_matchstrings.append("(accountprofile.first_name LIKE '" + term + "')")
        exact_matchstrings.append("(accountprofile.first_name LIKE '" + term + '\%' + "')")
        extraquerystring = " + ".join(exact_matchstrings)
        return_queryset = return_queryset.extra( select = { 'match_weight': extraquerystring }, )

The effect I'm going for is that if the search term matches exactly, the weight associated with the record is 2, but if the term merely starts with the search term and isn't an exact match, the weight is 1. (For example, if term = 'Jon', an entry with first_name = 'Jon' gets a weight of 2, but an entry with first_name = 'Jonathan' gets a weight of 1.) I can test the statement in SQL and it seems to work well enough. If I make this SQL query from the mysql shell, no problem:

    select (first_name like "Carl") + (first_name like "Car%") from accountprofile;

But trying to run it via the extra() modifier in my view code and evaluating the resulting queryset gives me the following error:

    Traceback (most recent call last):
      File "<console>", line 1, in <module>
      File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 68, in __repr__
        data = list(self[:REPR_OUTPUT_SIZE + 1])
      File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 83, in __len__
        self._result_cache.extend(list(self._iter))
      File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 238, in iterator
        for row in self.query.results_iter():
      File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 287, in results_iter
        for rows in self.execute_sql(MULTI):
      File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/query.py", line 2369, in execute_sql
        cursor.execute(sql, params)
      File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 22, in execute
        sql = self.db.ops.last_executed_query(self.cursor, sql, params)
      File "/usr/local/lib/python2.6/dist-packages/django/db/backends/__init__.py", line 217, in last_executed_query
        return smart_unicode(sql) % u_params
    ValueError: unsupported format character ''' (0x27) at index 309

I've tried escaping and not escaping the % wildcard, but that doesn't solve the problem; it doesn't seem to affect it at all, really. Any ideas?
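The excerpt leaves the question open, but one common workaround (a sketch only, reusing the names from the gist above rather than a verified fix from the original thread) is to pass the terms through select_params, so the database driver handles the quoting and the raw '%' never reaches Django's internal string formatting:

    # Hypothetical rework of the gist: %s placeholders plus select_params
    # keep the literal '%' wildcard out of the SQL string that Django
    # later runs through Python %-formatting in last_executed_query().
    return_queryset = return_queryset.extra(
        select={
            'match_weight': "(accountprofile.first_name LIKE %s) + "
                            "(accountprofile.first_name LIKE %s)",
        },
        select_params=(term, term + '%'),
    )

If a literal percent sign must remain in the SQL passed to extra(), it generally needs to be doubled as %%, since Django interpolates the parameters with Python's %-style formatting.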

    Read the article

< Previous Page | 48 49 50 51 52 53 54  | Next Page >