Search Results

Search found 1365 results on 55 pages for 'jeff fl'.

Page 53/55

  • 2011 The Year of Awesomesauce

    - by MOSSLover
    So I was talking to one of my friends, Cathy Dew, and I'm wondering how to start out this post.  What kind of title should I put?  Somehow we were just randomly throwing things out and this title popped into my head, the one you see above. I woke up today to the buzz of a text message.  I spent New Year's lying around until 3 am watching Warehouse 13 episodes and drinking champagne.  It was one of the best New Year's I have spent, with my boyfriend and my cat.  I figured I would sleep in until noon, but ended up waking up around 11:15 to that text message buzz.  I guess my DE, Rachel Appel, had texted me "Happy New Years", because Rachel is that kind of person.  I immediately proceeded to check my email.  I noticed my live account had a hit.  The account I rarely ever use had an email.  I sort of had that sinking suspicion I was going to get Silverlight MVP, right?  So I open the email and something out of the blue happens: it says "blah blah blah SharePoint Server MVP blah blah…".  I'm sitting here a little confused.  What?  Really?  Just about when you give up on something, the unexplained happens.  I am grateful for what I have every day. So let me tell you a story.  I was a senior in high school and it was December 31st, 1999.  A couple of days prior my grandmother had been complaining she had a cold and her assisted living facility was not going to let her see a doctor.  She claimed to be very sick.  On New Year's Eve Day 1999 my grandmother was rushed to the hospital sometime very early in the morning.  My uncle, my little brother, and I were sitting in the waiting room eagerly awaiting news.  The Sydney Opera House was playing in the background as New Year's 2000 for Australia was ringing in.  They came out and told us my grandmother had pneumonia.  She was in the ICU in critical condition.  Eventually time passed in the day and my parents took my brother and me home.  In the car we had a huge fight that ended in the worst New Year's of my life.  The next 30 days were the worst 30 days of my life.  I went to the hospital every single day to do my homework and watch my grandmother.  Each day was a challenge mentally and physically as my grandmother berated me in her demented state.  On the 30th day my grandmother ended up in critical condition in the ICU, maxed out on painkillers.  At approximately 3 am I heard my parents telling me they didn't want to wake me up and that my grandmother had passed away.  I must have cried more that day than any other day in my life.  Every New Year's Eve since, I have cried thinking about who she was and what she represented.  She was human; looking back she wasn't anything great, but she was one of the positive lights in my life.  She, my dad, and my other grandmother constantly tried to make me feel great when my mother was telling me the opposite.  I'd like to think the past 11 years since 2000 have been the best 11 years of my life.  I got out of a bad situation by using the tools that I had in front of me: good grades and getting into a college so I could aspire to be the person that I wanted to be.  I had some great people along the way to help me out. So, getting to the point: I like to help people further their lives somehow, in the best way I can possibly help out.  This New Year's was one of the great ones that helped me forget the past and focus on the present.  It makes me realize how far I've come since high school and even since college.  
    The one thing I've been grappling with over the years is how you feel good about making money while helping others out.  I'd like to think I try really hard to give back to my community.  I could not have done what I did without other people's help.  I sent out an email prior to even announcing I got the award today.  I can't say I did everything on my own.  It's not possible.  I had the help of others every step of the way.  I'm not sure if this makes sense, but the award can't just be mine.  This award is really owned by each and every one who helped me get here.  From my dad to my grandmother to Rachel Appel to Bob Hunt to Jason Gallicchio to Cathy Dew to Mark Rackley to Johnny Ennion to Lee Brandt to Jeff Julian to John Alexander to Lori Gowin and to many others.  Thank you guys for all the help and support. Technorati Tags: SharePoint Community, MVP Award, Microsoft Community

    Read the article

  • Updated Agenda for OTN Architect Day Los Angeles (Oct 25)

    - by Bob Rhubart
    Here's the latest information on the session schedule and content for Oracle Technology Network Architect Day in Los Angeles on October 25, 2012. Registration is open, but seating is limited.
    When: Thursday, October 25, 2012, 8:30am – 5:00pm
    Where: Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048
    Agenda
    8:30 am - 9:00 am | Registration and Continental Breakfast
    9:00 am - 9:15 am | Welcome and Opening Comments | Bob Rhubart | Beverly Ballroom
    9:15 am - 10:00 am | Engineered Systems: Oracle's Vision for the Future | Ralf Dossmann | Beverly Ballroom
      Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance, with up to a 30x increase over existing legacy systems and the lowest cost of ownership over a 3- or 5-year basis of any hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO.
    10:00 am - 10:30 am | Monitoring and Managing Applications in the Cloud | Basheer Khan | Beverly Ballroom
      Oracle offers a broad portfolio of software and hardware products and services to enable public, private and hybrid clouds to power the enterprise. However, enterprise cloud computing presents new management challenges that need to be addressed to realize the economic benefits of cloud computing. In this session you will learn about the methods and tools you can use to proactively monitor your end-to-end Oracle Applications environment in the cloud, define service-level objectives, gain insight into your end users, and troubleshoot performance problems from a single console.
    10:30 am - 10:45 am | Break
    10:45 am - 11:30 am | Breakout Sessions (pick one)
      - Cloud Computing - Making IT Simple | Dr. James Baty | Beverly Ballroom
        The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds.
      - Innovations in Grid Computing with Oracle Coherence | Ashok Aletty | Hollywood Room
        Learn how Oracle Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers have implemented, and how Oracle is integrating Coherence into its Enterprise Java stack.
    11:30 am - 12:15 pm | Breakout Sessions (pick one)
      - Enterprise Strategy for Cloud Security | Dave Chappelle | Beverly Ballroom
        Security is high on the list of concerns for many organizations as they evaluate their cloud computing options. This session will examine security in the context of the various forms of cloud computing. We'll consider technical and non-technical aspects of security, and discuss several strategies for cloud computing, from both the consumer and producer perspectives.
      - Oracle Enterprise Manager | Perren Walker | Hollywood Room
        This session examines new Oracle Enterprise Manager monitoring, administration, and management features for Oracle Exalogic. It focuses on two management themes: cloud management related to virtualization, and applications-to-disk management. For private cloud management, it discusses virtualization management features providing an enhanced set of application deployment capabilities enabling IaaS as well as PaaS interactions. Then, from an end-to-end perspective, it covers the specific capabilities and, where applicable, best practices for machine, cloud, middleware, and application administration.
    12:15 pm - 1:15 pm | Lunch | Beverly Ballroom Lounge
    1:15 pm - 2:00 pm | Panel Discussion: Q&A with session speakers | Beverly Ballroom
    2:00 pm - 2:45 pm | Breakout Sessions (pick one)
      - Oracle Cloud Reference Architecture | Anbu Krishnaswamy | Beverly Ballroom
        Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation will answer the important questions: what exactly a Cloud is, why you need it, what changes it will bring to the enterprise, and what the key capabilities of a Cloud infrastructure are, using Oracle's Cloud Reference Architecture, which is part of the IT Strategies from Oracle (ITSO) Cloud Enterprise Technology Strategy (ETS).
      - 21st Century SOA | Jeff Davies | Hollywood Room
        Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission-critical business and system applications. We'll also take a special look at the convergence of SOA and BPM using Oracle's unified technology stack.
    2:45 pm - 3:00 pm | Break
    3:00 pm - 4:00 pm | Roundtable Discussion | Beverly Ballroom
    4:00 pm - 4:15 pm | Closing Comments & Readouts from Roundtables | Beverly Ballroom
    4:15 pm - 5:00 pm | Networking / Reception | Beverly Ballroom Lounge
    Note: Session schedule and content subject to change.

    Read the article

  • Does ModSecurity 2.7.1 work with ASP.NET MVC 3?

    - by autonomatt
    I'm trying to get ModSecurity 2.7.1 to work with an ASP.NET MVC 3 website. The installation ran without errors and looking at the event log, ModSecurity is starting up successfully. I am using the modsecurity.conf-recommended file to set the basic rules. The problem I'm having is that whenever I am POSTing some form data, it doesn't get through to the controller action (or model binder). I have SecRuleEngine set to DetectionOnly. I have SecRequestBodyAccess set to On. With these settings, the body of the POST never reaches the controller action. If I set SecRequestBodyAccess to Off it works, so it's definitely something to do with how ModSecurity forwards the body data. The ModSecurity debug shows the following (looks to me as if all passed through): Second phase starting (dcfg 94b750). Input filter: Reading request body. Adding request argument (BODY): name "[0].IsSelected", value "on" Adding request argument (BODY): name "[0].Quantity", value "1" Adding request argument (BODY): name "[0].VariantSku", value "047861" Adding request argument (BODY): name "[1].Quantity", value "0" Adding request argument (BODY): name "[1].VariantSku", value "047862" Input filter: Completed receiving request body (length 115). Starting phase REQUEST_BODY. Recipe: Invoking rule 94c620; [file "*********************"] [line "54"] [id "200001"]. Rule 94c620: SecRule "REQBODY_ERROR" "!@eq 0" "phase:2,auditlog,id:200001,t:none,log,deny,status:400,msg:'Failed to parse request body.',logdata:%{reqbody_error_msg},severity:2" Transformation completed in 0 usec. Executing operator "!eq" with param "0" against REQBODY_ERROR. Operator completed in 0 usec. Rule returned 0. Recipe: Invoking rule 5549c38; [file "*********************"] [line "75"] [id "200002"]. Rule 5549c38: SecRule "MULTIPART_STRICT_ERROR" "!@eq 0" "phase:2,auditlog,id:200002,t:none,log,deny,status:44,msg:'Multipart request body failed strict validation: PE %{REQBODY_PROCESSOR_ERROR}, BQ %{MULTIPART_BOUNDARY_QUOTED}, BW %{MULTIPART_BOUNDARY_WHITESPACE}, DB %{MULTIPART_DATA_BEFORE}, DA %{MULTIPART_DATA_AFTER}, HF %{MULTIPART_HEADER_FOLDING}, LF %{MULTIPART_LF_LINE}, SM %{MULTIPART_MISSING_SEMICOLON}, IQ %{MULTIPART_INVALID_QUOTING}, IP %{MULTIPART_INVALID_PART}, IH %{MULTIPART_INVALID_HEADER_FOLDING}, FL %{MULTIPART_FILE_LIMIT_EXCEEDED}'" Transformation completed in 0 usec. Executing operator "!eq" with param "0" against MULTIPART_STRICT_ERROR. Operator completed in 0 usec. Rule returned 0. Recipe: Invoking rule 554bd70; [file "********************"] [line "80"] [id "200003"]. Rule 554bd70: SecRule "MULTIPART_UNMATCHED_BOUNDARY" "!@eq 0" "phase:2,auditlog,id:200003,t:none,log,deny,status:44,msg:'Multipart parser detected a possible unmatched boundary.'" Transformation completed in 0 usec. Executing operator "!eq" with param "0" against MULTIPART_UNMATCHED_BOUNDARY. Operator completed in 0 usec. Rule returned 0. Recipe: Invoking rule 554cbe0; [file "*********************************"] [line "94"] [id "200004"]. Rule 554cbe0: SecRule "TX:/^MSC_/" "!@streq 0" "phase:2,log,auditlog,id:200004,t:none,deny,msg:'ModSecurity internal error flagged: %{MATCHED_VAR_NAME}'" Rule returned 0. Hook insert_filter: Adding input forwarding filter (r 5541fc0). Hook insert_filter: Adding output filter (r 5541fc0). Initialising logging. Starting phase LOGGING. Recording persistent data took 0 microseconds. Audit log: Ignoring a non-relevant request. I can't see anything unusual in Fiddler. I'm using a ViewModel in the parameters of my action. 
    No data is bound if SecRequestBodyAccess is set to On. I'm even logging all the Request.Form.Keys and values via log4net, but I'm not getting any values there either. I'm starting to wonder whether ModSecurity actually works with ASP.NET MVC at all, or whether there is some conflict between the ModSecurity HTTP module and the model binder kicking in. Does anyone have any suggestions, or can anyone confirm they have ModSecurity working with an ASP.NET MVC website?
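    One way to narrow this down is to keep the engine in DetectionOnly but exempt the affected URLs from request body buffering, so the model binder keeps seeing the POST data while the rest of the site stays inspected. This is only a diagnostic sketch, not a fix for the root cause, and the URI prefix and rule id below are placeholders rather than values from the setup above:

        # Leave body access on globally so the recommended rules keep working
        SecRequestBodyAccess On

        # Hypothetical exemption: skip request body buffering for one MVC path
        SecRule REQUEST_URI "@beginsWith /Cart/Update" \
            "id:1000100,phase:1,pass,nolog,ctl:requestBodyAccess=Off"

    If form binding comes back with that exemption in place, the problem lies in how the body is buffered and handed back to IIS rather than in the MVC model binder itself.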

    Read the article

  • Texture and Lighting Issue in 3D world

    - by noah
    Im using OpenGL ES 1.1 for iPhone. I'm attempting to implement a skybox in my 3d world and started out by following one of Jeff Lamarches tutorials on creating textures. Heres the tutorial: iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-part-6_25.html Ive successfully added the image to my 3d world but am not sure why the lighting on the other shapes has changed so much. I want the shapes to be the original color and have the image in the background. Before: https://www.dropbox.com/s/ojmb8793vj514h0/Screen%20Shot%202012-10-01%20at%205.34.44%20PM.png After: https://www.dropbox.com/s/8v6yvur8amgudia/Screen%20Shot%202012-10-01%20at%205.35.31%20PM.png Heres the init OpenGL: - (void)initOpenGLES1 { glShadeModel(GL_SMOOTH); // Enable lighting glEnable(GL_LIGHTING); // Turn the first light on glEnable(GL_LIGHT0); const GLfloat lightAmbient[] = {0.2, 0.2, 0.2, 1.0}; const GLfloat lightDiffuse[] = {0.8, 0.8, 0.8, 1.0}; const GLfloat matAmbient[] = {0.3, 0.3, 0.3, 0.5}; const GLfloat matDiffuse[] = {1.0, 1.0, 1.0, 1.0}; const GLfloat matSpecular[] = {1.0, 1.0, 1.0, 1.0}; const GLfloat lightPosition[] = {0.0, 0.0, 1.0, 0.0}; const GLfloat lightShininess = 100.0; //Configure OpenGL lighting glEnable(GL_LIGHTING); glEnable(GL_LIGHT0); glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, matAmbient); glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, matDiffuse); glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, matSpecular); glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, lightShininess); glLightfv(GL_LIGHT0, GL_AMBIENT, lightAmbient); glLightfv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse); glLightfv(GL_LIGHT0, GL_POSITION, lightPosition); // Define a cutoff angle glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 40.0); // Set the clear color glClearColor(0, 0, 0, 1.0f); // Projection Matrix config glMatrixMode(GL_PROJECTION); glLoadIdentity(); CGSize layerSize = self.view.layer.frame.size; // Swapped height and width for landscape mode gluPerspective(45.0f, (GLfloat)layerSize.height / (GLfloat)layerSize.width, 0.1f, 750.0f); [self initSkyBox]; // Modelview Matrix config glMatrixMode(GL_MODELVIEW); glLoadIdentity(); // This next line is not really needed as it is the default for OpenGL ES glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glDisable(GL_BLEND); // Enable depth testing glEnable(GL_DEPTH_TEST); glDepthFunc(GL_LESS); glDepthMask(GL_TRUE); } Heres the drawSkybox that gets called in the drawFrame method: -(void)drawSkyBox { glDisable(GL_LIGHTING); glDisable(GL_DEPTH_TEST); glEnableClientState(GL_VERTEX_ARRAY); glEnableClientState(GL_NORMAL_ARRAY); glEnableClientState(GL_TEXTURE_COORD_ARRAY); static const SSVertex3D vertices[] = { {-1.0, 1.0, -0.0}, { 1.0, 1.0, -0.0}, {-1.0, -1.0, -0.0}, { 1.0, -1.0, -0.0} }; static const SSVertex3D normals[] = { {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0}, {0.0, 0.0, 1.0} }; static const GLfloat texCoords[] = { 0.0, 0.5, 0.5, 0.5, 0.0, 0.0, 0.5, 0.0 }; glLoadIdentity(); glTranslatef(0.0, 0.0, -3.0); glBindTexture(GL_TEXTURE_2D, texture[0]); glVertexPointer(3, GL_FLOAT, 0, vertices); glNormalPointer(GL_FLOAT, 0, normals); glTexCoordPointer(2, GL_FLOAT, 0, texCoords); glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); glDisableClientState(GL_VERTEX_ARRAY); glDisableClientState(GL_NORMAL_ARRAY); glDisableClientState(GL_TEXTURE_COORD_ARRAY); glEnable(GL_LIGHTING); glEnable(GL_DEPTH_TEST); } Heres the init Skybox: -(void)initSkyBox { // Turn necessary features on glEnable(GL_TEXTURE_2D); glEnable(GL_BLEND); glBlendFunc(GL_ONE, 
GL_SRC_COLOR); // Bind the number of textures we need, in this case one. glGenTextures(1, &texture[0]); // create a texture obj, give unique ID glBindTexture(GL_TEXTURE_2D, texture[0]); // load our new texture name into the current texture glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR); glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR); NSString *path = [[NSBundle mainBundle] pathForResource:@"space" ofType:@"jpg"]; NSData *texData = [[NSData alloc] initWithContentsOfFile:path]; UIImage *image = [[UIImage alloc] initWithData:texData]; GLuint width = CGImageGetWidth(image.CGImage); GLuint height = CGImageGetHeight(image.CGImage); CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); void *imageData = malloc( height * width * 4 ); // times 4 because will write one byte for rgb and alpha CGContextRef cgContext = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big ); // Flip the Y-axis CGContextTranslateCTM (cgContext, 0, height); CGContextScaleCTM (cgContext, 1.0, -1.0); CGColorSpaceRelease( colorSpace ); CGContextClearRect( cgContext, CGRectMake( 0, 0, width, height ) ); CGContextDrawImage( cgContext, CGRectMake( 0, 0, width, height ), image.CGImage ); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData); CGContextRelease(cgContext); free(imageData); [image release]; [texData release]; } Any help is greatly appreciated.
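    Judging only from the code above, a likely culprit is that initSkyBox enables GL_TEXTURE_2D (and a GL_ONE / GL_SRC_COLOR blend) and nothing ever disables them, so the untextured shapes are drawn with the skybox texture still bound and active, which changes their shading. A sketch of scoping that state to the skybox draw, using the same calls already shown above, just rearranged:

        // inside drawSkyBox, around the existing draw call
        glEnable(GL_TEXTURE_2D);                    // texturing on only while the skybox is drawn
        glBindTexture(GL_TEXTURE_2D, texture[0]);
        glVertexPointer(3, GL_FLOAT, 0, vertices);
        glNormalPointer(GL_FLOAT, 0, normals);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisable(GL_TEXTURE_2D);                   // untextured shapes get their material colors back
        glDisable(GL_BLEND);                        // and are no longer blended with GL_ONE / GL_SRC_COLOR

    with the matching glEnable(GL_TEXTURE_2D) and glEnable(GL_BLEND) calls removed from initSkyBox, so the state never leaks into the rest of the scene.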

    Read the article

  • Android - Create a custom multi-line ListView bound to an ArrayList

    - by Bill Osuch
    The Android HelloListView tutorial shows how to bind a ListView to an array of string objects, but you'll probably outgrow that pretty quickly. This post will show you how to bind the ListView to an ArrayList of custom objects, as well as create a multi-line ListView. Let's say you have some sort of search functionality that returns a list of people, along with addresses and phone numbers. We're going to display that data in three formatted lines for each result, and make it clickable. First, create your new Android project, and create two layout files. Main.xml will probably already be created by default, so paste this in: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"  android:orientation="vertical"  android:layout_width="fill_parent"   android:layout_height="fill_parent">  <TextView   android:layout_height="wrap_content"   android:text="Custom ListView Contents"   android:gravity="center_vertical|center_horizontal"   android:layout_width="fill_parent" />   <ListView    android:id="@+id/ListView01"    android:layout_height="wrap_content"    android:layout_width="fill_parent"/> </LinearLayout> Next, create a layout file called custom_row_view.xml. This layout will be the template for each individual row in the ListView. You can use pretty much any type of layout - Relative, Table, etc., but for this we'll just use Linear: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"  android:orientation="vertical"  android:layout_width="fill_parent"   android:layout_height="fill_parent">   <TextView android:id="@+id/name"   android:textSize="14sp"   android:textStyle="bold"   android:textColor="#FFFF00"   android:layout_width="wrap_content"   android:layout_height="wrap_content"/>  <TextView android:id="@+id/cityState"   android:layout_width="wrap_content"   android:layout_height="wrap_content"/>  <TextView android:id="@+id/phone"   android:layout_width="wrap_content"   android:layout_height="wrap_content"/> </LinearLayout> Now, add an object called SearchResults. Paste this code in: public class SearchResults {  private String name = "";  private String cityState = "";  private String phone = "";  public void setName(String name) {   this.name = name;  }  public String getName() {   return name;  }  public void setCityState(String cityState) {   this.cityState = cityState;  }  public String getCityState() {   return cityState;  }  public void setPhone(String phone) {   this.phone = phone;  }  public String getPhone() {   return phone;  } } This is the class that we'll be filling with our data, and loading into an ArrayList. Next, you'll need a custom adapter. This one just extends the BaseAdapter, but you could extend the ArrayAdapter if you prefer. 
public class MyCustomBaseAdapter extends BaseAdapter {  private static ArrayList<SearchResults> searchArrayList;    private LayoutInflater mInflater;  public MyCustomBaseAdapter(Context context, ArrayList<SearchResults> results) {   searchArrayList = results;   mInflater = LayoutInflater.from(context);  }  public int getCount() {   return searchArrayList.size();  }  public Object getItem(int position) {   return searchArrayList.get(position);  }  public long getItemId(int position) {   return position;  }  public View getView(int position, View convertView, ViewGroup parent) {   ViewHolder holder;   if (convertView == null) {    convertView = mInflater.inflate(R.layout.custom_row_view, null);    holder = new ViewHolder();    holder.txtName = (TextView) convertView.findViewById(R.id.name);    holder.txtCityState = (TextView) convertView.findViewById(R.id.cityState);    holder.txtPhone = (TextView) convertView.findViewById(R.id.phone);    convertView.setTag(holder);   } else {    holder = (ViewHolder) convertView.getTag();   }      holder.txtName.setText(searchArrayList.get(position).getName());   holder.txtCityState.setText(searchArrayList.get(position).getCityState());   holder.txtPhone.setText(searchArrayList.get(position).getPhone());   return convertView;  }  static class ViewHolder {   TextView txtName;   TextView txtCityState;   TextView txtPhone;  } } (This is basically the same as the List14.java API demo) Finally, we'll wire it all up in the main class file: public class CustomListView extends Activity {     @Override     public void onCreate(Bundle savedInstanceState) {         super.onCreate(savedInstanceState);         setContentView(R.layout.main);                 ArrayList<SearchResults> searchResults = GetSearchResults();                 final ListView lv1 = (ListView) findViewById(R.id.ListView01);         lv1.setAdapter(new MyCustomBaseAdapter(this, searchResults));                 lv1.setOnItemClickListener(new OnItemClickListener() {          @Override          public void onItemClick(AdapterView<?> a, View v, int position, long id) {           Object o = lv1.getItemAtPosition(position);           SearchResults fullObject = (SearchResults)o;           Toast.makeText(ListViewBlogPost.this, "You have chosen: " + " " + fullObject.getName(), Toast.LENGTH_LONG).show();          }          });     }         private ArrayList<SearchResults> GetSearchResults(){      ArrayList<SearchResults> results = new ArrayList<SearchResults>();            SearchResults sr1 = new SearchResults();      sr1.setName("John Smith");      sr1.setCityState("Dallas, TX");      sr1.setPhone("214-555-1234");      results.add(sr1);            sr1 = new SearchResults();      sr1.setName("Jane Doe");      sr1.setCityState("Atlanta, GA");      sr1.setPhone("469-555-2587");      results.add(sr1);            sr1 = new SearchResults();      sr1.setName("Steve Young");      sr1.setCityState("Miami, FL");      sr1.setPhone("305-555-7895");      results.add(sr1);            sr1 = new SearchResults();      sr1.setName("Fred Jones");      sr1.setCityState("Las Vegas, NV");      sr1.setPhone("612-555-8214");      results.add(sr1);            return results;     } } Notice that we first get an ArrayList of SearchResults objects (normally this would be from an external data source...), pass it to the custom adapter, then set up a click listener. The listener gets the item that was clicked, converts it back to a SearchResults object, and does whatever it needs to do. 
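    One small thing to watch before running it: the activity in the last listing is named CustomListView, but the Toast inside the click listener references ListViewBlogPost.this (presumably a leftover from another project name), so that line needs to read:

        Toast.makeText(CustomListView.this, "You have chosen: " + " " + fullObject.getName(),
                Toast.LENGTH_LONG).show();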
Fire it up in the emulator, and you should wind up with something like this:

    Read the article

  • Why is the Flashback Log so much smaller than the Redo Log?

    - by Liu Maclean
    ????????????????????redo log?   RVWR( Recovery Writer)?3s??flashback generate buffer??block before image?????????? ?????block change???RVWR??block before image ?flashback log? ?????????,Oracle???????????before image????????,????????flashback database logs?????   ???????????,????? ??????????????????,???????????before image?????shared pool??flashback log buffer?,RVWR??????flashback log buffer??????????? ?DBWR???????????????,DBWR?????buffer header??FBA(Flashback Byte Address)?flashback log buffer?????????? ???? ?????? ??? ????????????? , RVWR???????????(flashback markers)?flashback database logs?? ????(flashback markers)?????????????Oracle??flashback ??????????  ??????????, Oracle ??????(flashback markers)????????????flashback database log???????????block image; ??Oracle ???????(forward recovery)?????????????????SCN?????? flashback markers for example: **** Record at fba: (lno 1 thr 1 seq 1 bno 4 bof 8184) **** RECORD HEADER: Type: 3 (Skip) Size: 8132 RECORD DATA (Skip): **** Record at fba: (lno 1 thr 1 seq 1 bno 4 bof 52) **** RECORD HEADER: Type: 7 (Begin Crash Recovery Record) Size: 36 RECORD DATA (Begin Crash Recovery Record): Previous logical record fba: (lno 1 thr 1 seq 1 bno 3 bof 316) Record scn: 0x0000.00000000 [0.0] **** Record at fba: (lno 1 thr 1 seq 1 bno 3 bof 8184) **** RECORD HEADER: Type: 3 (Skip) Size: 7868 RECORD DATA (Skip): **** Record at fba: (lno 1 thr 1 seq 1 bno 3 bof 316) **** RECORD HEADER: Type: 2 (Marker) Size: 300 RECORD DATA (Marker): Previous logical record fba: (lno 0 thr 0 seq 0 bno 0 bof 0) Record scn: 0x0000.00000000 [0.0] Marker scn: 0x0000.0060e024 [0.6348836] 06/13/2012 15:56:35 Flag 0x0 Flashback threads: 1, Enabled redo threads 1 Recovery Start Checkpoint: scn: 0x0000.0060e024 [0.6348836] 06/13/2012 15:56:12 thread:1 rba:(0x80.180.10) Flashback thread Markers: Thread:1 status:0 fba: (lno 1 thr 1 seq 1 bno 2 bof 8184) Redo Thread Checkpoint Info: Thread:1 rba:(0x80.180.10) **** Record at fba: (lno 1 thr 1 seq 1 bno 2 bof 8184) **** RECORD HEADER: Type: 3 (Skip) Size: 8168 RECORD DATA (Skip): End-Of-Thread reached ????????????????block change ????before image????????flashback log?? ?????block change???flashback log record ????????? redo log???!????flashback log ???????before image ? redo log??? change vector ?  Oracle?????????????????????????????????????,??????I/O??????????????: ??hot block??,Oracle???????????block image?????; Oracle ?????????(flashback barriers)???????????????,flashback barriers???????(???15??),??????????(flashback barriers)????(flashback markers)????????? ????, ??????change?????, ???????????????????????????, ?15????????????????????flashback log????????before image?????????????,?????????????????????,?????????????? ????????,??????????????(flashback barriers), flashback barriers???????,?????15????? ?????flashback barriers????????(flashback markers)???????????????,???????????????????(????barriers?????)??????block image ,????????????????????????????????? ??????????flashback log????redo log????! ????,????????????????, ?????????? 
SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production CORE 11.2.0.3.0 Production TNS for Linux: Version 11.2.0.3.0 - Production NLSRTL Version 11.2.0.3.0 - Production SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com SQL> create table flash_maclean (t1 varchar2(200)) tablespace users; Table created. SQL> insert into flash_maclean values('MACLEAN LOVE HANNA'); 1 row created. SQL> commit; Commit complete. SQL> startup force; ORACLE instance started. Total System Global Area 939495424 bytes Fixed Size 2233960 bytes Variable Size 713034136 bytes Database Buffers 218103808 bytes Redo Buffers 6123520 bytes Database mounted. Database opened. SQL> update flash_maclean set t1='HANNA LOVE MACLEAN'; 1 row updated. commit; Commit complete. SQL> alter system checkpoint; System altered. SQL> select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from flash_maclean; DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) ------------------------------------ ------------------------------------ 140431 4 datafile 4 block 140431 ??RDBA rdba: 0x0102248f (4/140431) SQL> ! ps -ef|grep rvwr|grep -v grep oracle 26695 1 0 15:56 ? 00:00:00 ora_rvwr_G11R23 SQL> oradebug setospid 26695 Oracle pid: 20, Unix process pid: 26695, image: [email protected] (RVWR) SQL> ORADEBUG DUMP FBTAIL 1; Statement processed. To dump the last 2000 flashback records , ??ORADEBUG DUMP FBTAIL 1????????2000?????? SQL> oradebug tracefile_name /s01/orabase/diag/rdbms/g11r23/G11R23/trace/G11R23_rvwr_26695.trc ? TRACE?????????block? before image **** Record at fba: (lno 1 thr 1 seq 1 bno 55 bof 2564) **** RECORD HEADER: Type: 1 (Block Image) Size: 28 RECORD DATA (Block Image): file#: 4 rdba: 0x0102248f Next scn: 0x0000.00000000 [0.0] Flag: 0x0 Block Size: 8192 BLOCK IMAGE: buffer rdba: 0x0102248f scn: 0x0000.00609044 seq: 0x01 flg: 0x06 tail: 0x90440601 frmt: 0x02 chkval: 0xc626 type: 0x06=trans data Hex dump of block: st=0, typ_found=1 Dump of memory from 0x00002B1D94183C00 to 0x00002B1D94185C00 2B1D94183C00 0000A206 0102248F 00609044 06010000 [.....$..D.`.....] 2B1D94183C10 0000C626 00000001 00014AD4 0060903A [&........J..:.`.] 2B1D94183C20 00000000 00320002 01022488 00090006 [......2..$......] 2B1D94183C30 00000CC8 00C00340 000D0542 00008000 [[email protected].......] 2B1D94183C40 006040BC 000F000A 00000920 00C002E4 [.@`..... .......] 2B1D94183C50 0017048F 00002001 00609044 00000000 [..... ..D.`.....] 2B1D94183C60 00000000 00010100 0014FFFF 1F6E1F77 [............w.n.] 2B1D94183C70 00001F6E 1F770001 00000000 00000000 [n.....w.........] 2B1D94183C80 00000000 00000000 00000000 00000000 [................] Repeat 500 times 2B1D94185BD0 00000000 00000000 2C000000 4D120102 [...........,...M] 2B1D94185BE0 454C4341 4C204E41 2045564F 4E4E4148 [ACLEAN LOVE HANN] 2B1D94185BF0 01002C41 43414D07 4E41454C 90440601 [A,...MACLEAN..D.] Block header dump: 0x0102248f Object id on Block? 
Y seg/obj: 0x14ad4 csc: 0x00.60903a itc: 2 flg: E typ: 1 - DATA brn: 0 bdba: 0x1022488 ver: 0x01 opc: 0 inc: 0 exflg: 0 Itl Xid Uba Flag Lck Scn/Fsc 0x01 0x0006.009.00000cc8 0x00c00340.0542.0d C--- 0 scn 0x0000.006040bc 0x02 0x000a.00f.00000920 0x00c002e4.048f.17 --U- 1 fsc 0x0000.00609044 bdba: 0x0102248f data_block_dump,data header at 0x2b1d94183c64 =============== tsiz: 0x1f98 hsiz: 0x14 pbl: 0x2b1d94183c64 76543210 flag=-------- ntab=1 nrow=1 frre=-1 fsbo=0x14 fseo=0x1f77 avsp=0x1f6e tosp=0x1f6e 0xe:pti[0] nrow=1 offs=0 0x12:pri[0] offs=0x1f77 block_row_dump: tab 0, row 0, @0x1f77 tl: 22 fb: --H-FL-- lb: 0x2 cc: 1 col 0: [18] 4d 41 43 4c 45 41 4e 20 4c 4f 56 45 20 48 41 4e 4e 41 end_of_block_dump SQL> select dump('MACLEAN LOVE HANNA',16) from dual; DUMP('MACLEANLOVEHANNA',16) -------------------------------------------------------------------- Typ=96 Len=18: 4d,41,43,4c,45,41,4e,20,4c,4f,56,45,20,48,41,4e,4e,41 ???????????????????????,??flashback log??before image????????? create table flash_maclean1 (t1 int) tablespace users; SQL> select vs.name, ms.value 2 from v$mystat ms, v$sysstat vs 3 where vs.statistic# = ms.statistic# 4 and vs.name in ('redo size','db block changes'); NAME VALUE ---------------------------------------------------------------- ---------- db block changes 0 redo size 0 SQL> select name,value from v$sysstat where name like 'flashback log%'; NAME VALUE ---------------------------------------------------------------- ---------- flashback log writes 49 flashback log write bytes 9306112 SQL> begin 2 for i in 1..5000 loop 3 update flash_maclean1 set t1=t1+1; 4 commit; 5 end loop; 6 end; 7 / PL/SQL procedure successfully completed. SQL> select vs.name, ms.value 2 from v$mystat ms, v$sysstat vs 3 where vs.statistic# = ms.statistic# 4 and vs.name in ('redo size','db block changes'); NAME VALUE ---------------------------------------------------------------- ---------- db block changes 20006 redo size 3071288 SQL> select name,value from v$sysstat where name like 'flashback log%'; NAME VALUE ---------------------------------------------------------------- ---------- flashback log writes 52 flashback log write bytes 10338304 ??????????? ??hot block,???20006 ?block changes???? ??? 3000k ?redo log ? ??1000k? flashback log ?

    Read the article

  • Controller Action Methods with different signatures

    - by Narsil
    I am trying to get my URLs in files/id format. I am guessing I should have two Index methods in my controller, one with a parameter and one with not. But I get this error message in browser below. Anyway here is my controller methods: public ActionResult Index() { return Content("Index "); } // // GET: /Files/5 public ActionResult Index(int id) { File file = fileRepository.GetFile(id); if (file == null) return Content("Not Found"); else return Content(file.FileID.ToString()); } Error: Server Error in '/' Application. The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Reflection.AmbiguousMatchException: The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [AmbiguousMatchException: The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController] System.Web.Mvc.ActionMethodSelector.FindActionMethod(ControllerContext controllerContext, String actionName) +396292 System.Web.Mvc.ReflectedControllerDescriptor.FindAction(ControllerContext controllerContext, String actionName) +62 System.Web.Mvc.ControllerActionInvoker.FindAction(ControllerContext controllerContext, ControllerDescriptor controllerDescriptor, String actionName) +13 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +99 System.Web.Mvc.Controller.ExecuteCore() +105 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +39 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +7 System.Web.Mvc.<c_DisplayClass8.b_4() +34 System.Web.Mvc.Async.<c_DisplayClass1.b_0() +21 System.Web.Mvc.Async.<c__DisplayClass81.<BeginSynchronous>b__7(IAsyncResult _) +12 System.Web.Mvc.Async.WrappedAsyncResult1.End() +59 System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +44 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +7 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8677678 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Updated Code: public ActionResult Index(int? id) { if (id.HasValue) { File file = fileRepository.GetFile(id.Value); if (file == null) return Content("Not Found"); else return Content(file.FileID.ToString()); } else return Content("Index"); } It's still not the thing I want. 
URLs have to be in files?id=3 format. I want files/3 routes from global.asax routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); //files/3 //it's the one I wrote routes.MapRoute("Files", "{controller}/{id}", new { controller = "Files", action = "Index", id = UrlParameter.Optional} ); I tried adding a new route after reading Jeff's post but I can't get it working. It still works with files?id=2 though.
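    For what it's worth, route order is usually the catch here: routes are evaluated top to bottom, so with the Default route registered first, files/3 is interpreted as controller=files, action=3 and never reaches the custom route below it. A sketch of one arrangement that tends to work, with the specific route registered first and a numeric constraint added (the constraint is my addition, not from the code above):

        // register the specific route before the catch-all Default route
        routes.MapRoute(
            "Files",
            "Files/{id}",
            new { controller = "Files", action = "Index" },
            new { id = @"\d+" }   // only numeric ids, so other /Files URLs still fall through
        );

        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );

    A plain /files request (no id) then still falls through to the Default route and lands on the nullable-int Index action shown above.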

    Read the article

  • Doing TDD Silverlight 4 RC using Visual Studio 2010 RC

    - by user133992
    First I am glad to see better TDD support in VS2010. Support for generating code stubs from my tests is ok - not as good as more mature TDD plug-ins but a good start. I am looking for some best Silverlight 4.0 TDD practices. First Question: Anyone have links, recommendations? I know the new Silverlight Unit Test capabilities are much better (Jeff Wilcox's Mix Presentation). What I am focusing on right now is using TDD to develop pure Silverlight 4.0 Class Library projects - projects without a Silverlight UI project. I've been able to get it to work but not as cleanly as it should be. I can create an Empty VS project. Add A Silverlight 4 Class Library Project. Add a TestProject (not a silverlight Unit Test Project but a plain Test Project). Add a simple test in the Test Project such as: namespace Calculator.Test { [TestClass] public class CalculatorTests { [TestMethod] public void CalulatorAddTest() { Calc c = new Calc(); int expected = 10; int actual = c.Add(6, 4); Assert.AreEqual<int>(expected, actual); } } } Using the new Generate Type and Method from Test feature it will generate the following code in the Silverlight Project: namespace Calculator { public class Calc { public int Add(int p, int p_2) { throw new NotImplementedException(); } } } When I run the tests the first time it says the target assembly is Silverlight and not able to run test - Not exact text but the same general idea. When I change the implementation to: namespace Calculator { public class Calc { public int Add(int p, int p_2) { return p + p_2; } } } and re-run the test, it works fine and the test goes green. It also works for all other TDD code I generate after. I also get a warning Mark in the Test Project's reference to the Calculator Silverlight Class Library Assembly. Second Question: Any comments ideas if this just a bug in VS2010 RC or is Silverlight Class Library TDD not really supported. I have not created a Silverlight UI project or changed and build or debug settings so I have no idea what is hosting the silverlight DLL. Finally, some of the Silverlight Class Libraries I need to write will provide functionality that requires elevated Out-Of-Browser rights. Based on the above, it looks like I can use TDD Test Projects against regular Silverlight 4.0 Class Libraries, but I have no idea how I can TDD the elevated OOB functionality without also creating the UI component that gets installed. The UI piece is not really needed for the Library development and gets in the way of what I actually want to TDD. I know I can (and will) mock some of that functionality but at some point I will also need the real thing in my tests. Third Question: Any ideas how to TDD Silverlight 4.0 Class Library project that requires OOB elevated rights? Thanks!

    Read the article

  • Haskell: Left-biased/short-circuiting function

    - by user2967411
    Two classes ago, our professor presented to us a Parser module. Here is the code: module Parser (Parser,parser,runParser,satisfy,char,string,many,many1,(+++)) where import Data.Char import Control.Monad import Control.Monad.State type Parser = StateT String [] runParser :: Parser a -> String -> [(a,String)] runParser = runStateT parser :: (String -> [(a,String)]) -> Parser a parser = StateT satisfy :: (Char -> Bool) -> Parser Char satisfy f = parser $ \s -> case s of [] -> [] a:as -> [(a,as) | f a] char :: Char -> Parser Char char = satisfy . (==) alpha,digit :: Parser Char alpha = satisfy isAlpha digit = satisfy isDigit string :: String -> Parser String string = mapM char infixr 5 +++ (+++) :: Parser a -> Parser a -> Parser a (+++) = mplus many, many1 :: Parser a -> Parser [a] many p = return [] +++ many1 p many1 p = liftM2 (:) p (many p) Today he gave us an assignment to introduce "a left-biased, or short-circuiting version of (+++)", called (<++). His hint was for us to consider the original implementation of (+++). When he first introduced +++ to us, this was the code he wrote, which I am going to call the original implementation: infixr 5 +++ (+++) :: Parser a -> Parser a -> Parser a p +++ q = Parser $ \s -> runParser p s ++ runParser q s I have been having tons of trouble since we were introduced to parsing and so it continues. I have tried/am considering two approaches. 1) Use the "original" implementation, as in p +++ q = Parser $ \s - runParser p s ++ runParser q s 2) Use the final implementation, as in (+++) = mplus Here are my questions: 1) The module will not compile if I use the original implementation. The error: Not in scope: data constructor 'Parser'. It compiles fine using (+++) = mplus. What is wrong with using the original implementation that is avoided by using the final implementation? 2) How do I check if the first Parser returns anything? Is something like (not (isNothing (Parser $ \s - runParser p s) on the right track? It seems like it should be easy but I have no idea. 3) Once I figure out how to check if the first Parser returns anything, if I am to base my code on the final implementation, would it be as easy as this?: -- if p returns something then p <++ q = mplus (Parser $ \s -> runParser p s) mzero -- else (<++) = mplus Best, Jeff
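    For what it's worth, here is one way (<++) can be written against the module above, which also touches on question 1: in this module Parser is a type synonym for StateT String [], so there is no Parser data constructor to apply (the original implementation was presumably written against a newtype), and the lowercase parser function (parser = StateT) plays that role instead. And rather than isNothing, the thing to inspect is whether the result list from runParser p s is empty, which makes question 2 a simple pattern match. A sketch:

        infixr 5 <++
        (<++) :: Parser a -> Parser a -> Parser a
        p <++ q = parser $ \s ->
          case runParser p s of
            []      -> runParser q s   -- p produced nothing, so fall back to q
            results -> results         -- p succeeded, so q is never consulted

    So p <++ q behaves like (+++) when p fails, but never offers q's parses once p has succeeded.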

    Read the article

  • Why is two-way binding in silverlight not working?

    - by Edward Tanguay
    According to how Silverlight TwoWay binding works, when I change the data in the FirstName field, it should change the value in CheckFirstName field. Why is this not the case? ANSWER: Thank you Jeff, that was it, for others: here is the full solution with downloadable code. XAML: <StackPanel> <Grid x:Name="GridCustomerDetails"> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> <RowDefinition Height="*"/> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto"/> <ColumnDefinition Width="300"/> </Grid.ColumnDefinitions> <TextBlock VerticalAlignment="Center" Margin="10" Grid.Row="0" Grid.Column="0">First Name:</TextBlock> <TextBox Margin="10" Grid.Row="0" Grid.Column="1" Text="{Binding FirstName, Mode=TwoWay}"/> <TextBlock VerticalAlignment="Center" Margin="10" Grid.Row="1" Grid.Column="0">Last Name:</TextBlock> <TextBox Margin="10" Grid.Row="1" Grid.Column="1" Text="{Binding LastName}"/> <TextBlock VerticalAlignment="Center" Margin="10" Grid.Row="2" Grid.Column="0">Address:</TextBlock> <TextBox Margin="10" Grid.Row="2" Grid.Column="1" Text="{Binding Address}"/> </Grid> <Border Background="Tan" Margin="10"> <TextBlock x:Name="CheckFirstName"/> </Border> </StackPanel> Code behind: public Page() { InitializeComponent(); Customer customer = new Customer(); customer.FirstName = "Jim"; customer.LastName = "Taylor"; customer.Address = "72384 South Northern Blvd."; GridCustomerDetails.DataContext = customer; Customer customerOutput = (Customer)GridCustomerDetails.DataContext; CheckFirstName.Text = customer.FirstName; }
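    The linked solution isn't reproduced here, but the usual missing pieces in this situation are change notification on the source object and an actual binding on the output TextBlock: assigning CheckFirstName.Text in the constructor copies the value exactly once, and a plain CLR property never tells the binding engine that anything changed. A minimal sketch, assuming that is roughly what the downloadable fix amounts to (INotifyPropertyChanged lives in System.ComponentModel):

        // Customer raises PropertyChanged so the TwoWay bindings can propagate edits
        public class Customer : INotifyPropertyChanged
        {
            private string firstName;
            public string FirstName
            {
                get { return firstName; }
                set { firstName = value; OnPropertyChanged("FirstName"); }
            }

            // LastName and Address would follow the same pattern

            public event PropertyChangedEventHandler PropertyChanged;
            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

    and in the XAML, bind the TextBlock instead of setting it once from code-behind (giving it the same DataContext, for example by setting the DataContext on the outer StackPanel rather than only on the Grid):

        <TextBlock x:Name="CheckFirstName" Text="{Binding FirstName}" />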

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this: drop table Data create table Data ( Id BIGINT NOT NULL, Date BIGINT NOT NULL, Value BIGINT, constraint Cx primary key (Date, Id) ) create nonclustered index Ix on Data (Id, Date) There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this: select top 1 Date, Value from Data where Id = @p0 order by Date desc because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return: More than one row, even though I asked for TOP 1? No rows, even though one exists and has been committed? Worst of all, a row that doesn't satisfy the WHERE clause? I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend: Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether! Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?
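    For reference, the row-versioning alternative mentioned at the top is a database-level switch (the database name below is a placeholder). Read-committed snapshot keeps the SELECT's normal semantics but serves the last committed version of each row instead of blocking or reading dirty data:

        -- Statement-level row versioning for ordinary READ COMMITTED queries
        ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

        -- Optionally allow full SNAPSHOT isolation for sessions that request it
        ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

        -- A session opting in to snapshot isolation explicitly
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

    Both options consume tempdb for the version store, which is the overhead being weighed against NOLOCK in the question.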

    Read the article

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an HTML input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of Wikipedia page titles. For this service speed is obviously of utmost importance, as responsiveness of the web page is important to the user experience. The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on each user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It is currently running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution, as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations? Edits: @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself. @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data. @Jason Day: My original implementation was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to the large number of object references needed. I'll read up on ternary search trees to see if they could be of use.
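    For concreteness, the in-memory approach described above boils down to something like the following sketch (class and method names are illustrative, and the concept-type pairs are reduced to plain strings); the open question is what to replace it with once the TreeSet no longer fits in the heap:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.NavigableSet;
        import java.util.TreeSet;

        public class PrefixCompleter {
            private final TreeSet<String> concepts = new TreeSet<String>();

            public void add(String concept) {
                concepts.add(concept.toLowerCase());
            }

            public List<String> complete(String prefix, int maxResults) {
                String p = prefix.toLowerCase();
                List<String> matches = new ArrayList<String>();
                NavigableSet<String> tail = concepts.tailSet(p, true);   // everything >= the prefix
                for (String candidate : tail) {
                    if (!candidate.startsWith(p) || matches.size() >= maxResults) {
                        break;   // walked past the prefix range, or collected enough
                    }
                    matches.add(candidate);
                }
                return matches;
            }
        }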

    Read the article

  • Why does the following Java Script fail to load XML?

    - by Pavitar
    I have taken an example taught to us in class,wherein a javascript is used to retrieve data from the XML,but it doesn't work.Please help I have also added the XML file below. <html> <head> <title>Customer Info</title> <script language="javascript"> var xmlDoc = 0; var xmlObj = 0; function loadCustomers(){ xmlDoc = new ActiveXObject("Microsoft.XMLDOM"); xmlDoc.async = "false"; xmlDoc.onreadystatechange = displayCustomers; xmlDoc.load("customers.xml"); } function displayCustomers(){ if(xmlDoc.readyState == 4){ xmlObj = xmlDoc.documentElement; var len = xmlObj.childNodes.length; for(i = 0; i < len; i++){ var nodeElement = xmlObj.childNodes[i]; document.write(nodeElement.attributes[0].value); for(j = 0; j < nodeElement.childNodes.length; j++){ document.write(" " + nodeElement.childNodes[j].firstChild.nodeValue); } document.write("<br/>"); } } } </script> </head> <body> <form> <input type="button" value="Load XML" onClick="loadCustomers()"> </form> </body> </html> XML(customers.xml) <?xml version="1.0" encoding="UTF-8"?> <customers> <customer custid="CU101"> <pwd>PW101</pwd> <email>[email protected]</email> </customer> <customer custid="CU102"> <pwd>PW102</pwd> <email>[email protected]</email> </customer> <customer custid="CU103"> <pwd>PW103</pwd> <email>[email protected]</email> </customer> <customer custid="CU104"> <pwd>PW104</pwd> <email>[email protected]</email> </customer> </customers>
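    One point worth flagging: ActiveXObject("Microsoft.XMLDOM") only exists in Internet Explorer, and document.write called after the page has finished loading replaces the entire document. A cross-browser sketch of the same idea using XMLHttpRequest instead; the output div is an assumption (any element would do), and the page generally needs to be served over HTTP rather than opened straight from the file system for the request to succeed:

        var xmlDoc = null;

        function loadCustomers() {
            var xhr = new XMLHttpRequest();              // available in Firefox, Chrome, Safari, IE7+
            xhr.open("GET", "customers.xml", true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    xmlDoc = xhr.responseXML;            // the parsed XML document
                    displayCustomers();
                }
            };
            xhr.send(null);
        }

        function displayCustomers() {
            var customers = xmlDoc.getElementsByTagName("customer");
            var html = "";
            for (var i = 0; i < customers.length; i++) {
                var c = customers[i];
                html += c.getAttribute("custid") + " " +
                        c.getElementsByTagName("pwd")[0].firstChild.nodeValue + " " +
                        c.getElementsByTagName("email")[0].firstChild.nodeValue + "<br/>";
            }
            document.getElementById("output").innerHTML = html;   // assumes a <div id="output"> in the body
        }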

    Read the article

  • Multiple column subselect in mysql 5 (5.1.42)

    - by rubber boots
    This one seems to be a simple problem, but I can't make it work in a single select or nested select. Retrieve the authors and (if any) advisers of a paper (article) into one row. In order to explain the problem, here are the two data tables (pseudo): papers (id, title, c_year) and persons (id, firstname, lastname), plus a link table with one extra attribute (pseudo): paper_person_roles( paper_id person_id act_role ENUM ('AUTHOR', 'ADVISER') ) This is basically a list of written papers (table: papers) and a list of staff and/or students (table: persons). An article may have (1,N) authors. An article may have (0,N) advisers. A person can be in the 'AUTHOR' or 'ADVISER' role (but not both at the same time). The application eventually puts out table rows containing the following entries: TH: || Paper_ID | Author(s) | Title | Adviser(s) | TD: || 21334 | John Doe, Jeff Tucker | Why the moon looks yellow | Brown, Rayleigh | ... My first approach was to select/extract a full list of articles into the application, e.g. SELECT q.id, q.title FROM papers AS q ORDER BY q.c_year and save the results of the query into an array (in the application). After this step, loop over the array of the returned information and retrieve authors and advisers (if any) via a prepared statement (? is the paper's id) from the link table, like: APPLICATION_LOOP(paper_ids in array) SELECT p.lastname, p.firstname, r.act_role FROM persons AS p, paper_person_roles AS r WHERE p.id=r.person_id AND r.paper_id = ? # The application does further processing from here (pseudo): foreach record from resulting records if record.act_role eq 'AUTHOR' then join to author_column if record.act_role eq 'ADVISER' then join to adviser_column end print id, author_column, title, adviser_column APPLICATION_LOOP This works so far and gives the desired output. Would it make sense to put the computation back into the DB? I'm not very proficient in nontrivial SQL and can't find a solution with a single (combined or nested) select call. I tried something like SELECT q.title (CONCAT_WS(' ', (SELECT p.firstname, p.lastname AS aunames FROM persons AS p, paper_person_roles AS r WHERE q.id=r.paper_id AND r.act_role='AUTHOR') ) ) AS aulist FROM papers AS q, persons AS p, paper_person_roles AS r in several variations, but no luck ... Maybe there is some chance? Thanks in advance, r.b.
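    If the goal is to push the string-joining back into the database, MySQL's GROUP_CONCAT does essentially what the application loop does now. A sketch against the pseudo-tables above, assuming the real column names match the pseudo definitions:

        SELECT q.id,
               q.title,
               GROUP_CONCAT(DISTINCT CASE WHEN r.act_role = 'AUTHOR'
                                          THEN CONCAT(p.firstname, ' ', p.lastname) END
                            SEPARATOR ', ') AS authors,
               GROUP_CONCAT(DISTINCT CASE WHEN r.act_role = 'ADVISER'
                                          THEN CONCAT(p.firstname, ' ', p.lastname) END
                            SEPARATOR ', ') AS advisers
        FROM papers AS q
        LEFT JOIN paper_person_roles AS r ON r.paper_id = q.id
        LEFT JOIN persons AS p ON p.id = r.person_id
        GROUP BY q.id, q.title, q.c_year
        ORDER BY q.c_year;

    Papers with no advisers come back with NULL in that column; wrapping the GROUP_CONCAT in COALESCE(..., '') gives an empty string instead.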

    Read the article

  • Actionscript multiple file upload, with parameter passing is not working

    - by Metropolis
    Hey everyone, First off, I am very bad at flash/actionscript, it is not my main programming language. I have created my own file upload flash app that has been working great for me up until this point. It uses PHP to upload the files and sends back a status message which gets displayed in a status box to the user. Now I have run into a situation where I need the HTML to pass a parameter to the Actionscript, and then to the PHP file using POST. I have tried to set this up just like adobe has it on http://livedocs.adobe.com/flex/3/html/help.html?content=17_Networking_and_communications_7.html without success. Here is my Actionscript code import fl.controls.TextArea; //Set filters var imageTypes:FileFilter = new FileFilter("Images (*.jpg, *.jpeg, *.gif, *.png)", "*.jpg; *.jpeg; *.gif; *.png"); var textTypes:FileFilter = new FileFilter("Documents (*.txt, *.rtf, *.pdf, *.doc)", "*.txt; *.rtf; *.pdf; *.doc"); var allTypes:Array = new Array(textTypes, imageTypes); var fileRefList:FileReferenceList = new FileReferenceList(); //Add event listeners for its various fileRefList functions below upload_buttn.addEventListener(MouseEvent.CLICK, browseBox); fileRefList.addEventListener(Event.SELECT, selectHandler); function browseBox(event:MouseEvent):void { fileRefList.browse(allTypes); } function selectHandler(event:Event):void { var phpRequest:URLRequest = new URLRequest("ajax/upload.ajax.php"); var flashVars:URLVariables = objectToURLVariables(this.root.loaderInfo); phpRequest.method = URLRequestMethod.POST; phpRequest.data = flashVars; var file:FileReference; var files:FileReferenceList = FileReferenceList(event.target); var selectedFileArray:Array = files.fileList; var listener:Object = new Object(); for (var i:uint = 0; i < selectedFileArray.length; i++) { file = FileReference(selectedFileArray[i]); try { file.addEventListener(DataEvent.UPLOAD_COMPLETE_DATA, phpResponse); file.upload(phpRequest); } catch (error:Error) { status_txt.text = file.name + " Was not uploaded correctly (" + error.message + ")"; } } } function phpResponse(event:DataEvent):void { var file:FileReference = FileReference(event.target); status_txt.htmlText += event.data; } function objectToURLVariables(parameters:Object):URLVariables { var paramsToSend:URLVariables = new URLVariables(); for(var i:String in parameters) { if(i!=null) { if(parameters[i] is Array) paramsToSend[i] = parameters[i]; else paramsToSend[i] = parameters[i].toString(); } } return paramsToSend; } The flashVars variable is the one that should contain the values from the HTML file. But whenever I run the program and output the variables in the PHP file I receive the following. //Using this command on the PHP page print_r($_POST); //I get this for output Array ( [Filename] => testfile.txt [Upload] => Submit Query ) Its almost like the parameters are getting over written or are just not working at all. Thanks for any help, Metropolis
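    One detail that stands out in the code above: objectToURLVariables is handed this.root.loaderInfo, which is the LoaderInfo object itself rather than the plain object that actually carries the FlashVars, and a for..in loop over LoaderInfo finds nothing enumerable to copy, which would leave the POST with only Flash's own Filename and Upload fields. A sketch of the change, assuming the HTML parameters are passed as FlashVars on the embed/object tag:

        // LoaderInfo.parameters is the dynamic object holding the FlashVars from the HTML page
        var flashVars:URLVariables = objectToURLVariables(this.root.loaderInfo.parameters);

        var phpRequest:URLRequest = new URLRequest("ajax/upload.ajax.php");
        phpRequest.method = URLRequestMethod.POST;
        phpRequest.data = flashVars;   // file.upload(phpRequest) then POSTs these alongside the file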

    Read the article

  • Ad Server does not serve ads in Firefox, but works fine in Chrome, IE, & Safari!?

    - by HipHop-opatamus
    I'm having a strange (likely JavaScript) related issue. I'm running Open X Ad Server ( http://www.openx.org ) which serves ads to the website http://upsidedowndogs.com . The ads load fine every time when visiting the site via Chrome, IE, or Safari, but sometimes don't load at all in Firefox - Hence, it is a client side issue, which leads me to believe it's something up with the JavaScript. The fact that the problem is intermittent, and does not throw any errors in Firebug, also doesn't make it any easier to diagnose and address. Any ideas how to diagnose / address this issue? Thanks! Here is the code generated by OpenX (it goes in the page header - additional code is then used in each ad unit, as seen on the page) if (typeof(OA_zones) != 'undefined') { var OA_zoneids = ''; for (var zonename in OA_zones) OA_zoneids += escape(zonename+'=' + OA_zones[zonename] + "|"); OA_zoneids += '&amp;nz=1'; } else { var OA_zoneids = escape('1|2|3|4'); } if (typeof(OA_source) == 'undefined') { OA_source = ''; } var OA_p=location.protocol=='https:'?'https://ads.offleashmedia.com/server/www/delivery/spc.php':'http://ads.offleashmedia.com/server/www/delivery/spc.php'; var OA_r=Math.floor(Math.random()*99999999); OA_output = new Array(); var OA_spc="<"+"script type='text/javascript' "; OA_spc+="src='"+OA_p+"?zones="+OA_zoneids; OA_spc+="&amp;source="+escape(OA_source)+"&amp;r="+OA_r; OA_spc+=(document.charset ? '&amp;charset='+document.charset : (document.characterSet ? '&amp;charset='+document.characterSet : '')); if (window.location) OA_spc+="&amp;loc="+escape(window.location); if (document.referrer) OA_spc+="&amp;referer="+escape(document.referrer); OA_spc+="'><"+"/script>"; document.write(OA_spc); function OA_show(name) { if (typeof(OA_output[name]) == 'undefined') { return; } else { document.write(OA_output[name]); } } function OA_showpop(name) { zones = window.OA_zones ? window.OA_zones : false; var zoneid = name; if (typeof(window.OA_zones) != 'undefined') { if (typeof(zones[name]) == 'undefined') { return; } zoneid = zones[name]; } OA_p=location.protocol=='https:'?'https://ads.offleashmedia.com/server/www/delivery/apu.php':'http://ads.offleashmedia.com/server/www/delivery/apu.php'; var OA_pop="<"+"script type='text/javascript' "; OA_pop+="src='"+OA_p+"?zoneid="+zoneid; OA_pop+="&amp;source="+escape(OA_source)+"&amp;r="+OA_r; if (window.location) OA_pop+="&amp;loc="+escape(window.location); if (document.referrer) OA_pop+="&amp;referer="+escape(document.referrer); OA_pop+="'><"+"/script>"; document.write(OA_pop); } var OA_fo = ''; OA_fo += "<"+"script type=\'text/javascript\' src=\'http://ads.offleashmedia.com/server/www/delivery/fl.js\'><"+"/script>\n"; document.write(OA_fo);

    Read the article

  • PHP - Code Sample - Polymorphism Implementation - How to allow for expansion?

    - by darga33
    I've read numerous SO posts about Polymorphism, and also the other really good one at http://net.tutsplus.com/tutorials/php/understanding-and-applying-polymorphism-in-php/ Good stuff!!! I'm trying to figure out how a seasoned PHP developer that follows all the best practices would accomplish the following. Please be as specific and detailed as possible. I'm sure your answer is going to help a lot of people!!! :-) While learning Polymorphism, I came across a little stumbling block. Inside of the PDFFormatter class, I had to use (instanceof) in order to figure out if some code should be included in the returned data. I am trying to be able to pass in two different kinds of profiles to the formatter. (needs to be able to handle multiple kinds of formatters but display the data specific to the Profile class that is being passed to it). It doesn't look bad now, but imagine 10 more kinds of Profiles!! How would you do this? The best answer would also include the changes you would make. Thanks sooooooo much in advance!!!!! Please PHP only! Thx!!! File 1. FormatterInterface.php interface FormatterInterface { public function format(Profile $Profile); } File 2. PDFFormatter.php class PDFFormatter implements FormatterInterface { public function format(Profile $Profile) { $format = "PDF Format<br /><br />"; $format .= "This is a profile formatted as a PDF.<br />"; $format .= 'Name: ' . $Profile->name . '<br />'; if ($Profile instanceof StudentProfile) { $format .= "Graduation Date: " . $Profile->graduationDate . "<br />"; } $format .= "<br />End of PDF file"; return $format; } } File 3. Profile.php class Profile { public $name; public function __construct($name) { $this->name = $name; } public function format(FormatterInterface $Formatter) { return $Formatter->format($this); } } File 4. StudentProfile.php class StudentProfile extends Profile { public $graduationDate; public function __construct($name, $graduationDate) { $this->name = $name; $this->graduationDate = $graduationDate; } } File 5. index.php //Assuming all files are included...... $StudentProfile = new StudentProfile('Michael Conner', 55, 'Unknown, FL', 'Graduate', '1975', 'Business Management'); $Profile = new Profile('Brandy Smith', 44, 'Houston, TX'); $PDFFormatter = new PDFFormatter(); echo '<hr />'; echo $StudentProfile->format($PDFFormatter); echo '<hr />'; echo $Profile->format($PDFFormatter);
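    One common way to get rid of the instanceof check (a sketch under the assumption that the formatter should stay profile-agnostic; the extraFields() name is illustrative, not an established API) is to let each profile class describe its own extra data and have the formatter simply loop over whatever it gets back:

    ```php
    <?php
    // Sketch only: reworked versions of the question's classes, reusing the
    // FormatterInterface defined in FormatterInterface.php.
    class Profile {
        public $name;
        public function __construct($name) {
            $this->name = $name;
        }
        // Subclasses override this to expose their own data as label => value pairs.
        public function extraFields() {
            return array();
        }
        public function format(FormatterInterface $formatter) {
            return $formatter->format($this);
        }
    }

    class StudentProfile extends Profile {
        public $graduationDate;
        public function __construct($name, $graduationDate) {
            parent::__construct($name);
            $this->graduationDate = $graduationDate;
        }
        public function extraFields() {
            return array('Graduation Date' => $this->graduationDate);
        }
    }

    class PDFFormatter implements FormatterInterface {
        public function format(Profile $profile) {
            $format = "PDF Format<br /><br />";
            $format .= "This is a profile formatted as a PDF.<br />";
            $format .= 'Name: ' . $profile->name . '<br />';
            // No instanceof needed: each profile reports its own extra fields.
            foreach ($profile->extraFields() as $label => $value) {
                $format .= $label . ': ' . $value . '<br />';
            }
            $format .= "<br />End of PDF file";
            return $format;
        }
    }
    ```

    With that shape, adding an eleventh profile type means writing one new subclass and touching none of the formatters; a visitor-style pair of formatProfile()/formatStudentProfile() methods would achieve the same decoupling, but the label/value version keeps PDFFormatter short.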

    Read the article

  • not able to show the file from the array...

    - by sjl
    Hi, I'm trying to build an image gallery. I was not able to display the thumbnails stored in the array; instead it keeps showing the same thumbnail. Can anyone help? Very much appreciated. // Arrays that store the filenames of the text, images and thumbnails var thumbnails:Array = [ "images/image1_thumb.jpg", "images/image2_thumb.jpg", "images/image3_thumb.jpg", "images/image4_thumb.jpg", "images/image5_thumb.jpg", "images/image6_thumb.jpg", "images/image7_thumb.jpg" ]; var images:Array = [ "images/image1.jpg", "images/image2.jpg", "images/image3.jpg", "images/image4.jpg", "images/image5.jpg", "images/image6.jpg", "images/image7.jpg" ]; var textFiles:Array = [ "text/picture1.txt", "text/picture2.txt", "text/picture3.txt", "text/picture4.txt", "text/picture5.txt", "text/picture6.txt", "text/picture7.txt" ]; // NOTE: thumbnail images are 60 x 43 pixels in size // loop through the array and create the thumbnails and position them vertically for (var i:int = 0; i < 7; i++) { // create an instance of MyUIThumbnail var thumbs:MyUIThumbnail = new MyUIThumbnail(); // position the thumbnails on the left vertically thumbs.y = 43 * i;/*** FILL IN THE BLANK HERE ***/ // store the corresponding full size image name in the thumbnail // use the "images" array thumbs.image = "images/image1.jpg"; thumbs.image = "images/image2.jpg"; thumbs.image = "images/image3.jpg"; thumbs.image = "images/image4.jpg"; thumbs.image = "images/image5.jpg"; thumbs.image = "images/image6.jpg"; thumbs.image = "images/image7.jpg"; // store the corresponding text file name in the thumbnail // use the "textFiles" array thumbs.textFile = "text/picture1.txt"; thumbs.textFile = "text/picture2.txt"; thumbs.textFile = "text/picture3.txt"; thumbs.textFile = "text/picture4.txt"; thumbs.textFile = "text/picture5.txt"; thumbs.textFile = "text/picture6.txt"; thumbs.textFile = "text/picture7.txt"; thumbs.imageFullSize = full_image_mc ; // store the reference to the textfield in the thumbnail thumbs.infoText = info; // load and display the thumbnails // use the "thumbnails" array thumbs.loadThumbnail("images/image1_thumb.jpg"); thumbs.loadThumbnail("images/image2_thumb.jpg"); thumbs.loadThumbnail("images/image3_thumb.jpg"); thumbs.loadThumbnail("images/image4_thumb.jpg"); thumbs.loadThumbnail("images/image5_thumb.jpg"); thumbs.loadThumbnail("images/image6_thumb.jpg"); thumbs.loadThumbnail("images/image7_thumb.jpg"); addChild(thumbs); } Below is my thumbnail class (the .as file):
    package{ import fl.containers.*; import flash.events.*; import flash.text.*; import flash.net.*; import flash.display.*; public class MyUIThumbnail extends MovieClip { public var image:String = ""; public var textFile:String = ""; var loader:UILoader = new UILoader(); public var imageFullSize:MyFullSizeImage; // reference to loader to show the full size image public var infoText:TextField; // reference to textfield that displays image infoText var txtLoader:URLLoader = new URLLoader(); // loader to load the text file public function MyUIThumbnail() { buttonMode = true; addChild(loader); this.addEventListener(MouseEvent.CLICK, myClick); txtLoader.addEventListener(Event.COMPLETE, displayText); loader.addEventListener(Event.COMPLETE, thumbnailLoaded); } public function loadThumbnail(filename:String):void { loader.source = filename; preloader.trackLoading(loader); } function myClick(e:MouseEvent):void { // load and display the full size image imageFullSize.loadImage("my_full_mc"); // load and display the text textLoad("info"); } function textLoad(file:String) { // load the text description from the text file loader.load(new URLRequest(textFile)); } function displayText(e:Event) { //display the text when loading is completed addChild(infoText); } function thumbnailLoaded(e:Event) { loader.width = 60; loader.height = 43; } } }
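    A sketch of what the fill-in-the-blanks loop body is presumably meant to look like: index each array with the loop counter instead of overwriting the same property seven times, so every MyUIThumbnail instance gets its own image, text file and thumbnail graphic. (The click handlers in MyUIThumbnail would likewise need to pass this.image and this.textFile rather than the hard-coded "my_full_mc" and "info".)

    ```actionscript
    // Sketch: body of the for loop, using the i-th entry of each array.
    var thumbs:MyUIThumbnail = new MyUIThumbnail();
    thumbs.y = 43 * i;                    // stack the 43-pixel-high thumbnails vertically
    thumbs.image = images[i];             // full-size image that belongs to this thumbnail
    thumbs.textFile = textFiles[i];       // matching description text file
    thumbs.imageFullSize = full_image_mc; // loader that shows the full-size image
    thumbs.infoText = info;               // textfield that shows the description
    thumbs.loadThumbnail(thumbnails[i]);  // load the i-th thumbnail graphic
    addChild(thumbs);
    ```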

    Read the article

  • Solaris 11 Launch Blog Carnival Roundup

    - by constant
    Solaris 11 is here! And together with the official launch activities, a lot of Oracle and non-Oracle bloggers contributed helpful and informative blog articles to help your datacenter go to eleven. Here are some notable blog postings, sorted by category for your Solaris 11 blog-reading pleasure: Getting Started/Overview A lot of people speculated that the official launch of Solaris 11 would be on 11/11 (whatever way you want to turn it), but it actually happened two days earlier. Larry Wake himself offers 11 Reasons Why Oracle Solaris 11 11/11 Isn't Being Released on 11/11/11. Then, Larry goes on with a summary: Oracle Solaris 11: The First Cloud OS gives you a short and sweet rundown of what the major new features of Solaris 11 are. Jeff Victor has his own list of What's New in Oracle Solaris 11. A popular Solaris 11 meme is to write a blog post about 11 favourite features: Jim Laurent's 11 Reasons to Love Solaris 11, Darren Moffat's 11 Favourite Solaris 11 Features, Mike Gerdt's 11 of My Favourite Things! are just three examples of "11 Favourite Things..." type blog posts, I'm sure many more will follow... More official overview content for Solaris 11 is available from the Oracle Tech Network Solaris 11 Portal. Also, check out Rick Ramsey's blog post Solaris 11 Resources for System Administrators on the OTN Blog and his secret 5 Commands That Make Solaris Administration Easier post from the OTN Garage. (Automatic) Installation and the Image Packaging System (IPS) The brand new Image Packaging System (IPS) and the Automatic Installer (AI), together with numerous other install/packaging/boot/patching features, are among the most significant improvements in Solaris 11. But before installing, you may wonder whether Solaris 11 will support your particular set of hardware devices. Again, the OTN Garage comes to the rescue with Rick Ramsey's post How to Find Out Which Devices Are Supported By Solaris 11. Included is a useful guide to all the first steps to get your Solaris 11 system up and running. Tim Foster had a whole handful of blog posts lined up for the launch, teaching you everything you wanted to know about IPS but didn't dare to ask: The IPS System Repository, IPS Self-assembly - Part 1: Overlays and Part 2: Multiple Packages Delivering Configuration. Watch out for more IPS posts from Tim! If installing packages or upgrading your system from the net makes you uneasy, then you're not alone: Jim Laurent will teach you how Building a Solaris 11 Repository Without Network Connection will make your life easier. Many of you have already peeked into the future by installing Solaris 11 Express. If you're now wondering whether you can upgrade or whether a fresh install is necessary, then check out Alan Hargreaves's post Upgrading Solaris 11 Express b151a with support to Solaris 11. The trick is in upgrading your pkg(1M) first. Networking One of the first things to do after installing Solaris 11 (or any operating system for that matter), is to set it up for networking. Solaris 11 comes with the brand new "Network Auto-Magic" feature which can figure out everything by itself. For those cases where you want to exercise a little more control, Solaris 11 left a few people scratching their heads. Fortunately, Tschokko wrote up this cool blog post: Solaris 11 manual IPv4 & IPv6 configuration right after the launch ceremony. Thanks, Tschokko!
    And Milek points out a long awaited networking feature in Solaris 11 called Solaris 11 - hostmodel, which I know for a fact that many customers have looked forward to: How to "bind" a Solaris 11 system to a specific gateway for a specific IP address it is using. Steffen Weiberle teaches us how to tune the Solaris 11 networking stack the proper way: ipadm(1M). No more fiddling with ndd(1M)! Check out his tutorial on Solaris 11 Network Tunables. And if you want to get even deeper into the networking stack, there's nothing better than DTrace. Alan Maguire teaches you in: DTracing TCP Congestion Control how to probe deeply into the Solaris 11 TCP/IP stack, the TCP congestion control part in particular. Don't miss his other DTrace and TCP related blog posts! DTrace And there we are: DTrace, the king of all observability tools. Long-time DTrace veteran and co-author of The DTrace book*, Brendan Gregg blogged about Solaris 11 DTrace syscall provider changes. BTW, after you install Solaris 11, check out the DTrace toolkit which is installed by default in /usr/dtrace/DTT. It is chock full of handy DTrace scripts, many of which were contributed by Brendan himself! Security Another big theme in Solaris 11, and one that is crucial for the success of any operating system in the Cloud, is Security. Here are some notable posts in this category: Darren Moffat starts by showing us how to completely get rid of root: Completely Disabling Root Logins on Solaris 11. With no root user, there's one major entry point less to worry about. But that's only the start. In Immutable Zones on Encrypted ZFS, Darren shows us how to double the security of your services: First by locking them into the new Immutable Zones feature, then by encrypting their data using the new ZFS encryption feature. And if you're still missing sudo from your Linux days, Darren again has a solution: Password (PAM) caching for Solaris su - "a la sudo". If you're wondering how much compute power all this encryption will cost you, you're in luck: The Solaris X86 AESNI OpenSSL Engine will make sure you'll use your Intel's embedded crypto support to its fullest. And if you own a brand new SPARC T4 machine you're even luckier: It comes with its own SPARC T4 OpenSSL Engine. Dan Anderson's posts show how there really is no excuse not to encrypt any more... Developers Solaris 11 has a lot to offer to developers as well. Ali Bahrami has a series of blog posts that cover diverse developer topics: elffile: ELF Specific File Identification Utility, Using Stub Objects and The Stub Proto: Not Just For Stub Objects Anymore to name a few. BTW, if you're a developer and want to shape the future of Solaris 11, then Vijay Tatkar has a hint for you: Oracle (Sun Systems Group) is hiring! Desktop and Graphics Yes, Solaris 11 is a 100% server OS, but it can also offer a decent desktop environment, especially if you are a developer. Alan Coopersmith starts by discussing S11 X11: ye olde window system in today's new operating system, then Calum Benson shows us around What's new on the Solaris 11 Desktop. Even accessibility is a first-class citizen in the Solaris 11 user interface. Peter Korn celebrates: Accessible Oracle Solaris 11 - released! Performance Gone are the days of "Slowaris", when Solaris was among the few OSes that "did the right thing" while others cut corners just to win benchmarks. Today, Solaris continues doing the right thing, and it delivers the right performance at the same time. Need proof?
    Check out Brian's BestPerf blog with continuous updates from the benchmarking lab, including Recent Benchmarks Using Oracle Solaris 11! Send Me More Solaris 11 Launch Articles! These are just a few of the more interesting blog articles that came out around the Solaris 11 launch, I'm sure there are many more! Feel free to post a comment below if you find a particularly interesting blog post that hasn't been listed so far and share your enthusiasm for Solaris 11! *Affiliate link: Buy cool stuff and support this blog at no extra cost. We both win!
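    As a small taste of the ipadm(1M)-style tuning mentioned in the Networking section above (a hedged sketch; the property names are the usual TCP buffer tunables and should be confirmed against Steffen Weiberle's Solaris 11 Network Tunables post before relying on them):

    ```sh
    # List the tunable properties of the TCP stack, then raise one of them
    # persistently -- no more poking at ndd(1M) on every boot.
    ipadm show-prop tcp
    ipadm set-prop -p max_buf=2097152 tcp
    ipadm show-prop -p max_buf tcp   # verify the new value
    ```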

    Read the article

  • Interesting articles and blogs on SPARC T4

    - by mv
    Interesting articles and blogs on SPARC T4 processor   I have consolidated all the interesting information I could get on the SPARC T4 processor and its hardware cryptographic capabilities. Hope it's useful. 1. Advantages of SPARC T4 processor  The most important points in this T4 announcement are: "The SPARC T4 processor was designed from the ground up for high speed security and has a cryptographic stream processing unit (SPU) integrated directly into each processor core. These accelerators support 16 industry standard security ciphers and enable high speed encryption at rates 3 to 5 times that of competing processors. By integrating encryption capabilities directly inside the instruction pipeline, the SPARC T4 processor eliminates the performance and cost barriers typically associated with secure computing and makes it possible to deliver high security levels without impacting the user experience." The Data Sheet has more details on these: "New on-chip Encryption Instruction Accelerators with direct non-privileged support for 16 industry-standard cryptographic algorithms plus random number generation in each of the eight cores: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, Kasumi, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512" I ran the "isainfo -v" command on a Solaris 11 SPARC T4-1 system. It shows the new instructions as expected: $ isainfo -v 64-bit sparcv9 applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc 32-bit sparc applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc v8plus div32 mul32  2.  Dan Anderson's blog has some interesting points about how these can be used: "New T4 crypto instructions include: aes_kexpand0, aes_kexpand1, aes_kexpand2, aes_eround01, aes_eround23, aes_eround01_l, aes_eround_23_l, aes_dround01, aes_dround23, aes_dround01_l, aes_dround_23_l. Having SPARC T4 hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris on a SPARC T4. It is used internally in the kernel through kernel crypto modules. It is available in user space through the PKCS#11 library." 3. Dan's blog on Where's the Crypto Libraries? Although this was written in 2009, it is still very useful: "Here's a brief tour of the major crypto libraries shown in the digraph: The libpkcs11 library contains the PKCS#11 API (C_*() functions, such as C_Initialize()). That in turn calls library pkcs11_softtoken or pkcs11_kernel, for userland or kernel crypto providers. The latter is used mostly for hardware-assisted cryptography (such as n2cp for Niagara2 SPARC processors), as that is performed more efficiently in kernel space with the "kCF" module (Kernel Crypto Framework). Additionally, for Solaris 10, strong crypto algorithms were split off in separate libraries, pkcs11_softtoken_extra libcryptoutil contains low-level utility functions to help implement cryptography. libsoftcrypto (OpenSolaris and Solaris Nevada only) implements several symmetric-key crypto algorithms in software, such as AES, RC4, and DES3, and the bignum library (used for RSA). libmd implements MD5, SHA, and SHA2 message digest algorithms" 4. Difference between T3 and T4 The diagram in this blog is good and self-explanatory.
    Jeff's blog also highlights the differences: "The T4 servers have improved crypto acceleration, described at https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine. It is "just built in" so administrators no longer have to assign crypto accelerator units to domains - it "just happens". Every physical or virtual CPU on a SPARC-T4 has full access to hardware based crypto acceleration at all times. .... For completeness sake, it's worth noting that the T4 adds more crypto algorithms, and accelerates Camellia, CRC32c, and more SHA-x." 5. About performance counters In this blog, performance counters are explained: "Note that unlike T3 and before, T4 crypto doesn't require kernel modules like ncp or n2cp, there is no visibility of crypto hardware with kstats or cryptoadm. T4 does provide hardware counters for crypto operations.  You can see these using cpustat: cpustat -c pic0=Instr_FGU_crypto 5 You can check the general crypto support of the hardware and OS with the command "isainfo -v". Since T4 crypto's implementation now allows direct userland access, there are no "crypto units" visible to cryptoadm.  " For more details, refer to Martin's blog as well. 6. How to turn off SPARC T4 or Intel AES-NI crypto acceleration  I found this interesting blog from Darren about how to turn off SPARC T4 or Intel AES-NI crypto acceleration. "One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine.   The alternate to this is having the application coded to call getisax(2) system call and make the choice itself.  We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and libsoftcrypto.so) The Solaris linker/loader allows control of a lot of its functionality via environment variables, we can use that to control the version of the cryptographic functions we run.  To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 to not select the HWCAP section matching certain features even if isainfo says they are present.  This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly.  For SPARC T4 : export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpmul" .. For Intel systems with AES-NI support: export LD_HWCAP="-aes"" Note that LD_HWCAP is explained in  http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html "LD_HWCAP, LD_HWCAP_32, and LD_HWCAP_64 -  Identifies an alternative hardware capabilities value... A “-” prefix results in the capabilities that follow being removed from the alternative capabilities." 7. Whitepaper on SPARC T4 Servers—Optimized for End-to-End Data Center Computing This Whitepaper on SPARC T4 Servers—Optimized for End-to-End Data Center Computing explains more details.  It has DTrace scripts which may come in handy: "To ensure the hardware-assisted cryptographic acceleration is configured to use and working with the security scenarios, it is recommended to use the following Solaris DTrace script.
    #!/usr/sbin/dtrace -s pid$1:libsoftcrypto:yf*:entry, pid$1:libsoftcrypto:rsa*:entry, pid$1:libmd:yf*:entry { @ops[probefunc] = count(); } tick-1sec { printa(@ops); trunc(@ops); }" Note that I have slightly modified the D script to probe RSA ("libsoftcrypto:rsa*:entry") as well, as per recommendations from Chi-Chang Lin. 8. References http://www.oracle.com/us/corporate/features/sparc-t4-announcement-494846.html http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-1-ds-487858.pdf https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine https://blogs.oracle.com/DanX/entry/where_s_the_crypto_libraries https://blogs.oracle.com/darren/entry/howto_turn_off_sparc_t4 http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html https://blogs.oracle.com/hardware/entry/unleash_the_power_of_cryptography https://blogs.oracle.com/cmt/entry/t4_crypto_cheat_sheet https://blogs.oracle.com/martinm/entry/t4_performance_counters_explained https://blogs.oracle.com/jsavit/entry/no_mau_required_on_a http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-business-wp-524472.pdf
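    A quick way to sanity-check the LD_HWCAP trick quoted in section 6 (a hedged sketch: digest(1) is a stock Solaris consumer of the Cryptographic Framework, but whether the timing difference is visible depends on the file size and on which libraries your binary actually links):

    ```sh
    # Confirm the T4 crypto capabilities are advertised at all.
    isainfo -v
    # Time a SHA-256 digest with the hardware capability sections enabled (default)...
    time digest -a sha256 /path/to/largefile
    # ...then mask the sha256 HWCAP section, as described in Darren's post, and compare.
    export LD_HWCAP="-sha256"
    time digest -a sha256 /path/to/largefile
    unset LD_HWCAP
    ```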

    Read the article

  • Time Tracking on an Agile Team

    - by Stephen.Walther
    What’s the best way to handle time-tracking on an Agile team? Your gut reaction to this question might be to resist any type of time-tracking at all. After all, one of the principles of the Agile Manifesto is “Individuals and interactions over processes and tools”.  Forcing the developers on your team to track the amount of time that they devote to completing stories or tasks might seem like useless bureaucratic red tape: an impediment to getting real work done. I completely understand this reaction. I’ve been required to use time-tracking software in the past to account for each hour of my workday. It made me feel like Fred Flintstone punching in at the quarry mine and not like a professional. Why You Really Do Need Time-Tracking There are, however, legitimate reasons to track time spent on stories even when you are a member of an Agile team.  First, if you are working with an outside client, you might need to track the number of hours spent on different stories for the purposes of billing. There might be no way to avoid time-tracking if you want to get paid. Second, the Product Owner needs to know when the work on a story has gone over the original time estimated for the story. The Product Owner is concerned with Return On Investment. If the team has gone massively overtime on a story, then the Product Owner has a legitimate reason to halt work on the story and reconsider the story’s business value. Finally, you might want to track how much time your team spends on different types of stories or tasks. For example, if your team is spending 75% of their time doing testing then you might need to bring in more testers. Or, if 10% of your team’s time is expended performing a software build at the end of each iteration then it is time to consider better ways of automating the build process. Time-Tracking in SonicAgile For these reasons, we added time-tracking as a feature to SonicAgile which is our free Agile Project Management tool. We were heavily influenced by Jeff Sutherland (one of the founders of Scrum) in the way that we implemented time-tracking (see his article http://scrum.jeffsutherland.com/2007/03/time-tracking-is-anti-scrum-what-do-you.html). In SonicAgile, time-tracking is disabled by default. If you want to use this feature then the project owner must enable time-tracking in Project Settings. You can choose to estimate using either days or hours. If you are estimating at the level of stories then it makes more sense to choose days. Otherwise, if you are estimating at the level of tasks then it makes more sense to use hours. After you enable time-tracking then you can assign three estimates to a story: Original Estimate – This is the estimate that you enter when you first create a story. You don’t change this estimate. Time Spent – This is the amount of time that you have already devoted to the story. You update the time spent on each story during your daily standup meeting. Time Left – This is the amount of time remaining to complete the story. Again, you update the time left during your daily standup meeting. So when you first create a story, you enter an original estimate that becomes the time left. During each daily standup meeting, you update the time spent and time left for each story on the Kanban. If you had perfect predicative power, then the original estimate would always be the same as the sum of the time spent and the time left. For example, if you predict that a story will take 5 days to complete then on day 3, the story should have 3 days spent and 2 days left. 
Unfortunately, never in the history of mankind has anyone accurately predicted the exact amount of time that it takes to complete a story. For this reason, SonicAgile does not update the time spent and time left automatically. Each day, during the daily standup, your team should update the time spent and time left for each story. For example, the following table shows the history of the time estimates for a story that was originally estimated to take 3 days but, eventually, takes 5 days to complete: Day Original Estimate Time Spent Time Left Day 1 3 days 0 days 3 days Day 2 3 days 1 day 2 days Day 3 3 days 2 days 2 days Day 4 3 days 3 days 2 days Day 5 3 days 4 days 0 days In the table above, everything goes as predicted until you reach day 3. On day 3, the team realizes that the work will require an additional two days. The situation does not improve on day 4. All of the sudden, on day 5, all of the remaining work gets done. Real work often follows this pattern. There are long periods when nothing gets done punctuated by occasional and unpredictable bursts of progress. We designed SonicAgile to make it as easy as possible to track the time spent and time left on a story. Detecting when a Story Goes Over the Original Estimate Sometimes, stories take much longer than originally estimated. There’s a surprise. For example, you discover that a new software component is incompatible with existing software components. Or, you discover that you have to go through a month-long certification process to finish a story. In those cases, the Product Owner has a legitimate reason to halt work on a story and re-evaluate the business value of the story. For example, the Product Owner discovers that a story will require weeks to implement instead of days, then the story might not be worth the expense. SonicAgile displays a warning on both the Backlog and the Kanban when the time spent on a story goes over the original estimate. An icon of a clock is displayed. Time-Tracking and Tasks Another optional feature of SonicAgile is tasks. If you enable Tasks in Project Settings then you can break stories into one or more tasks. You can perform time-tracking at the level of a story or at the level of a task. If you don’t break a story into tasks then you can enter the time left and time spent for the story. As soon as you break a story into tasks, then you can no longer enter the time left and time spent at the level of the story. Instead, the time left and time spent for a story is rolled up from its tasks. On the Kanban, you can see how the time left and time spent for each task gets rolled up into each story. The progress bar for the story is rolled up from the progress bars for each task. The original estimate is never rolled up – even when you break a story into tasks. A story’s original estimate is entered separately from the original estimates of each of the story’s tasks. Summary Not every Agile team can avoid time-tracking. You might be forced to track time to get paid, to detect when you are spending too much time on a particular story, or to track the amount of time that you are devoting to different types of tasks. We designed time-tracking in SonicAgile to require the least amount of work to track the information that you need. Time-tracking is an optional feature. If you enable time-tracking then you can track the original estimate, time left, and time spent for each story and task. You can use time-tracking with SonicAgile for free. Register at http://SonicAgile.com.

    Read the article

  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
    Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia.  At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions.  Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL. There were roughly 250 attendees at the event, about half of which were vendors and consultants and the rest financial reporting staff from corporate filers.  Event sponsors included Ernst & Young, SWIFT and Fujitsu.  There were also a number of XBRL technology and service providers exhibiting at the conference.  On Monday Nov. 8th, the XBRL US Steering Committee meetings and Annual Members meeting and reception were held.  At the Annual Members meeting the big news was that current XBRL US President, Mark Bolgiano, is moving to a new position at Howard Hughes Medical Center.  Campbell Pryde, who had led the Taxonomy Development for XBRL US, is taking over as XBRL US President. Other items that were highlighted at the members meeting included: The US GAAP XBRL taxonomy is being used by over 1500 SEC filers and has now been handed over to the FASB to maintain and enhance 16 filer training events were held in 2010 XBRL Global Magazine was launched Corporate Actions proposal was submitted to the SEC with SWIFT in May XBRL Labs for iPhone, XBRL US Consistency Suite launched ISO 2022 Corporate Actions Alignment with XBRL achieved The XBRL Credit Rating taxonomy was accepted Tuesday Nov. 9th included Keynotes, General Sessions, Innovation Workshop for Governments and Securities Professionals, and an Opening Reception.  General sessions included: Lessons Learned from the SEC's rollout of XBRL.  More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010.  Most of these related to negative values being used where they shouldn't have.  Also, the SEC feels there are too many taxonomy extensions being created - mostly in the Cash Flow Statements.  They emphasize using existing elements in the US GAAP taxonomy and advise filers not to  create extensions to improve the visual formatting of XBRL filings. Investors and XBRL - Setting the Standard for Data Quality.  In this panel discussion, the key learning was that CFA's, academics and the financial community are not using XBRL as expected.  The issues raised include the  accuracy and completeness of filings, number of taxonomy extensions, and limited number of tools available to help analyze XBRL data.  Another big issue that was raised is the lack of historic results in XBRL - most analysts need 10 quarters of historic data.  On the positive side, XBRL has the potential to eliminate re-keying of data and errors here and can improve analytic capabilities for financial analysts once more historic data is available and more companies are providing detailed tagging of their filings. A US Roadmap for XBRL Financial Reporting.  This was a panel discussion featuring Jeff Neumann(SEC), Campbell Pryde(XBRL US), and Louis Matherne(FASB).  Key points included the fact that XBRL is currently used by 1500 companies, with 8000 more companies coming in 2011.  XBRL for Mutual Fund Reporting will start in 2011 for 8000 funds, and a Credit Rating Taxonomy has now been submitted for review.  The XBRL tagging/filing process is improving each quarter - more education is helping here.  
The FASB is looking at extensions to date, and potential additions to US GAAP taxonomy, while the SEC is evaluating filings for accuracy, consistency in tagging, and tools for analyzing data.  The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by SEC.  The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (i.e airlines) and the elimination of 500 concepts.  (meaning they can't be used going forward but are still supported for historical comparison)  The 2011 US GAAP Taxonomy will be available for usage with Q2 2011 SEC filings.  More information about this can be found on the FASB web site.  http://www.fasb.org/home Accounting Firms and XBRL.  This session covered the Role of Audit Firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning.  The main advice provided was that organizations should document XBRL mapping process, perform peer comparisons, and risk assessments on a regular basis. Wednesday Nov. 10th included more Keynotes, General Sessions on Corporate Actions, and XBRL Essentials Workshop Training for corporate filers.  The XBRL Essentials Training included: Getting Started Once you Have the Basics Detailed Footnote Tagging and Handling Tables Quality Control and Trust in the XBRL Process Bringing XBRL In-House:  What are the Options, What should you consider? The US GAAP Financial Reporting Taxonomy - Overview of the 2011 release The XBRL Essentials Training was well-attended with about 80 people.  This included a good overview of the SEC's XBRL mandate, limited liability issue, tagging levels, recommended planning process, internal vs. outsourced approach, and how to manage service providers.  I learned a lot from the session on detailed tagging.  This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information).  The review of the Linkbase model, or dimensional table structure, was very interesting and can be complex to understand.  The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required.  The slides from this session are posted on the XBRL US web site. (http://xbrl.us/events/Pages/archive.aspx) For me, the main summary points and takeaways from the XBRL US conference are: XBRL for financial reporting has turned the corner and gone mainstream - with 1500 companies currently using it and 8000 more coming in 2011 The expected value is not being achieved by filers or consumers of XBRL data - this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle) XBRL is becoming the global standard for all business communications beyond just the financials - i.e. adoption for mutual funds, corporate actions and others planned for the future If you would like to learn more about XBRL and the various training programs, services and software tools that are available check out the XBRL US web site and even better - become a member.  Here's a link:  http://xbrl.us/Pages/default.aspx

    Read the article

  • Building LMMS: "Configuring incomplete, errors occurred!"

    - by fridojet
    I tried to build Linux MultiMedia studio from the source of the SourceForge git:// repository under Ubuntu 12.04 LTS 32b: git clone git://lmms.git.sourceforge.net/gitroot/lmms/lmms cd lmms git checkout First I tried to install all the required libraries and then I cmaked. - That's what happened on cmake (errors occurred!): [DIR]lmms/build$ cmake .. -- The C compiler identification is GNU -- The CXX compiler identification is GNU -- Check for working C compiler: /usr/bin/gcc -- Check for working C compiler: /usr/bin/gcc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done PROCESSOR: i686 Machine: i686-linux-gnu -- Target host is 32 bit -- Looking for include files LMMS_HAVE_STDINT_H -- Looking for include files LMMS_HAVE_STDINT_H - found -- Looking for include files LMMS_HAVE_STDBOOL_H -- Looking for include files LMMS_HAVE_STDBOOL_H - found -- Looking for include files LMMS_HAVE_STDLIB_H -- Looking for include files LMMS_HAVE_STDLIB_H - found -- Looking for include files LMMS_HAVE_PTHREAD_H -- Looking for include files LMMS_HAVE_PTHREAD_H - found -- Looking for include files LMMS_HAVE_SEMAPHORE_H -- Looking for include files LMMS_HAVE_SEMAPHORE_H - found -- Looking for include files LMMS_HAVE_UNISTD_H -- Looking for include files LMMS_HAVE_UNISTD_H - found -- Looking for include files LMMS_HAVE_SYS_TYPES_H -- Looking for include files LMMS_HAVE_SYS_TYPES_H - found -- Looking for include files LMMS_HAVE_SYS_IPC_H -- Looking for include files LMMS_HAVE_SYS_IPC_H - found -- Looking for include files LMMS_HAVE_SYS_SHM_H -- Looking for include files LMMS_HAVE_SYS_SHM_H - found -- Looking for include files LMMS_HAVE_SYS_TIME_H -- Looking for include files LMMS_HAVE_SYS_TIME_H - found -- Looking for include files LMMS_HAVE_SYS_WAIT_H -- Looking for include files LMMS_HAVE_SYS_WAIT_H - found -- Looking for include files LMMS_HAVE_SYS_SELECT_H -- Looking for include files LMMS_HAVE_SYS_SELECT_H - found -- Looking for include files LMMS_HAVE_STDARG_H -- Looking for include files LMMS_HAVE_STDARG_H - found -- Looking for include files LMMS_HAVE_SIGNAL_H -- Looking for include files LMMS_HAVE_SIGNAL_H - found -- Looking for include files LMMS_HAVE_SCHED_H -- Looking for include files LMMS_HAVE_SCHED_H - found -- Looking for include files LMMS_HAVE_SYS_SOUNDCARD_H -- Looking for include files LMMS_HAVE_SYS_SOUNDCARD_H - found -- Looking for include files LMMS_HAVE_SOUNDCARD_H -- Looking for include files LMMS_HAVE_SOUNDCARD_H - not found. -- Looking for include files LMMS_HAVE_FCNTL_H -- Looking for include files LMMS_HAVE_FCNTL_H - found -- Looking for include files LMMS_HAVE_SYS_IOCTL_H -- Looking for include files LMMS_HAVE_SYS_IOCTL_H - found -- Looking for include files LMMS_HAVE_CTYPE_H -- Looking for include files LMMS_HAVE_CTYPE_H - found -- Looking for include files LMMS_HAVE_STRING_H -- Looking for include files LMMS_HAVE_STRING_H - found -- Looking for include files LMMS_HAVE_PROCESS_H -- Looking for include files LMMS_HAVE_PROCESS_H - not found. -- Looking for include files LMMS_HAVE_LOCALE_H -- Looking for include files LMMS_HAVE_LOCALE_H - found -- Looking for Q_WS_X11 -- Looking for Q_WS_X11 - found -- Looking for Q_WS_WIN -- Looking for Q_WS_WIN - not found. -- Looking for Q_WS_QWS -- Looking for Q_WS_QWS - not found. 
-- Looking for Q_WS_MAC -- Looking for Q_WS_MAC - not found. -- Found Qt4: /usr/bin/qmake (found suitable version "4.8.1", required is "4.6.0;COMPONENTS;QtCore;QtGui;QtXml;QtNetwork") -- Found Qt translations in /usr/share/qt4/translations -- checking for module 'sndfile>=1.0.11' -- found sndfile, version 1.0.25 -- Looking for include files CMAKE_HAVE_PTHREAD_H -- Looking for include files CMAKE_HAVE_PTHREAD_H - found -- Looking for pthread_create in pthreads -- Looking for pthread_create in pthreads - not found -- Looking for pthread_create in pthread -- Looking for pthread_create in pthread - found -- Found Threads: TRUE -- Found libzip: /usr/lib/libzip.so -- Found libflac++: /usr/lib/i386-linux-gnu/libFLAC.so;/usr/lib/i386-linux-gnu/libFLAC++.so -- Found STK: /usr/lib/i386-linux-gnu/libstk.so -- checking for module 'portaudio-2.0' -- found portaudio-2.0, version 19 -- Found Portaudio: portaudio;asound;m;pthread -- checking for module 'libpulse' -- found libpulse, version 1.1 -- Found PulseAudio Simple: /usr/lib/i386-linux-gnu/libpulse.so -- Looking for vorbis_bitrate_addblock in vorbis -- Looking for vorbis_bitrate_addblock in vorbis - found -- Found OggVorbis: /usr/lib/i386-linux-gnu/libogg.so;/usr/lib/i386-linux-gnu/libvorbis.so;/usr/lib/i386-linux-gnu/libvorbisfile.so;/usr/lib/i386-linux-gnu/libvorbisenc.so -- Looking for snd_seq_create_simple_port in asound -- Looking for snd_seq_create_simple_port in asound - found -- Found ALSA: /usr/lib/i386-linux-gnu/libasound.so -- Looking for include files LMMS_HAVE_MACHINE_SOUNDCARD_H -- Looking for include files LMMS_HAVE_MACHINE_SOUNDCARD_H - not found. -- Looking for include files LMMS_HAVE_LINUX_AWE_VOICE_H -- Looking for include files LMMS_HAVE_LINUX_AWE_VOICE_H - not found. -- Looking for include files LMMS_HAVE_AWE_VOICE_H -- Looking for include files LMMS_HAVE_AWE_VOICE_H - not found. -- Looking for include files LMMS_HAVE__USR_SRC_SYS_I386_ISA_SOUND_AWE_VOICE_H -- Looking for include files LMMS_HAVE__USR_SRC_SYS_I386_ISA_SOUND_AWE_VOICE_H - not found. -- Looking for include files LMMS_HAVE__USR_SRC_SYS_GNU_I386_ISA_SOUND_AWE_VOICE_H -- Looking for include files LMMS_HAVE__USR_SRC_SYS_GNU_I386_ISA_SOUND_AWE_VOICE_H - not found. -- Looking for C++ include sys/asoundlib.h -- Looking for C++ include sys/asoundlib.h - found -- Looking for C++ include alsa/asoundlib.h -- Looking for C++ include alsa/asoundlib.h - found -- Looking for snd_pcm_resume in asound -- Looking for snd_pcm_resume in asound - found -- checking for module 'jack>=0.77' -- found jack, version 0.121.2 -- checking for module 'fftw3f>=3.0.0' -- package 'fftw3f>=3.0.0' not found CMake Error at /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:266 (message): A required package was not found Call Stack (most recent call first): /usr/share/cmake-2.8/Modules/FindPkgConfig.cmake:320 (_pkg_check_modules_internal) CMakeLists.txt:309 (PKG_CHECK_MODULES) -- checking for module 'fluidsynth>=1.0.7' -- found fluidsynth, version 1.1.5 -- Looking for include files LMMS_HAVE_LV2CORE -- Looking for include files LMMS_HAVE_LV2CORE - found -- Looking for include files LMMS_HAVE_SLV2_SCALEPOINTS_H -- Looking for include files LMMS_HAVE_SLV2_SCALEPOINTS_H - not found. 
-- Looking for slv2_world_new in slv2 -- Looking for slv2_world_new in slv2 - found -- Looking for librdf_new_world in rdf -- Looking for librdf_new_world in rdf - found -- Looking for wine_init in wine -- Looking for wine_init in wine - found -- Looking for C++ include windows.h -- Looking for C++ include windows.h - found -- checking for module 'samplerate>=0.1.7' -- package 'samplerate>=0.1.7' not found -- Performing Test HAVE_LRINT -- Performing Test HAVE_LRINT - Success -- Performing Test HAVE_LRINTF -- Performing Test HAVE_LRINTF - Success -- Performing Test CPU_CLIPS_POSITIVE -- Performing Test CPU_CLIPS_POSITIVE - Failed -- Performing Test CPU_CLIPS_NEGATIVE -- Performing Test CPU_CLIPS_NEGATIVE - Success -- Looking for XOpenDisplay in /usr/lib/i386-linux-gnu/libX11.so;/usr/lib/i386-linux-gnu/libXext.so -- Looking for XOpenDisplay in /usr/lib/i386-linux-gnu/libX11.so;/usr/lib/i386-linux-gnu/libXext.so - found -- Looking for gethostbyname -- Looking for gethostbyname - found -- Looking for connect -- Looking for connect - found -- Looking for remove -- Looking for remove - found -- Looking for shmat -- Looking for shmat - found -- Looking for IceConnectionNumber in ICE -- Looking for IceConnectionNumber in ICE - found -- Found X11: /usr/lib/i386-linux-gnu/libX11.so -- Found Freetype: /usr/lib/i386-linux-gnu/libfreetype.so Installation Summary -------------------- * Install Directory : /usr/local * Use system's libsamplerate : Supported audio interfaces -------------------------- * ALSA : OK * JACK : OK * OSS : OK * PortAudio : OK * PulseAudio : OK * SDL : OK Supported MIDI interfaces ------------------------- * ALSA : OK * OSS : OK * WinMM : <not supported on this platform> Supported file formats for project export ----------------------------------------- * WAVE : OK * OGG/VORBIS : OK * FLAC : OK Optional plugins ---------------- * SoundFont2 player : OK * Stk Mallets : OK * VST-instrument hoster : OK * VST-effect hoster : OK * LV2 hoster : OK * CALF LADSPA plugins : OK * CAPS LADSPA plugins : OK * CMT LADSPA plugins : OK * TAP LADSPA plugins : OK * SWH LADSPA plugins : OK * FL .zip import : OK ----------------------------------------------------------------- IMPORTANT: after installing missing packages, remove CMakeCache.txt before running cmake again! ----------------------------------------------------------------- -- Configuring incomplete, errors occurred! Here are some parts the contents of my lmms/build/CMakeCache.txt file: # This is the CMakeCache file. # For build in directory: /home/jk/Downloads/lmms-git/lmms/build # It was generated by CMake: /usr/bin/cmake # You can edit this file to change values found and used by cmake. # If you do not want to change any of the values, simply exit the editor. # If you do want to change a value, simply edit, save, and exit the editor. # The syntax for the file is as follows: # KEY:TYPE=VALUE # KEY is the name of a variable in the cache. # TYPE is a hint to GUI's for the type of VALUE, DO NOT EDIT TYPE!. # VALUE is the current value for the KEY. ######################## # EXTERNAL cache entries ######################## //Path to a file. ALSA_INCLUDES:PATH=/usr/include //Path to a library. ASOUND_LIBRARY:FILEPATH=/usr/lib/i386-linux-gnu/libasound.so //Path to a program. CMAKE_AR:FILEPATH=/usr/bin/ar //Choose the type of build, options are: None(CMAKE_CXX_FLAGS or // CMAKE_C_FLAGS used) Debug Release RelWithDebInfo MinSizeRel. CMAKE_BUILD_TYPE:STRING= //Enable/Disable color output during build. CMAKE_COLOR_MAKEFILE:BOOL=ON //CXX compiler. 
CMAKE_CXX_COMPILER:FILEPATH=/usr/bin/c++ //Flags used by the compiler during all build types. CMAKE_CXX_FLAGS:STRING= //Flags used by the compiler during debug builds. CMAKE_CXX_FLAGS_DEBUG:STRING=-g //Flags used by the compiler during release minsize builds. CMAKE_CXX_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG //Flags used by the compiler during release builds (/MD /Ob1 /Oi // /Ot /Oy /Gs will produce slightly less optimized but smaller // files). CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG //Flags used by the compiler during Release with Debug Info builds. CMAKE_CXX_FLAGS_RELWITHDEBINFO:STRING=-O2 -g //C compiler. CMAKE_C_COMPILER:FILEPATH=/usr/bin/gcc //Flags used by the compiler during all build types. CMAKE_C_FLAGS:STRING= //Flags used by the compiler during debug builds. CMAKE_C_FLAGS_DEBUG:STRING=-g //Flags used by the compiler during release minsize builds. CMAKE_C_FLAGS_MINSIZEREL:STRING=-Os -DNDEBUG //Flags used by the compiler during release builds (/MD /Ob1 /Oi // /Ot /Oy /Gs will produce slightly less optimized but smaller // files). CMAKE_C_FLAGS_RELEASE:STRING=-O3 -DNDEBUG //Flags used by the compiler during Release with Debug Info builds. CMAKE_C_FLAGS_RELWITHDEBINFO:STRING=-O2 -g //Flags used by the linker. CMAKE_EXE_LINKER_FLAGS:STRING=' ' //Flags used by the linker during debug builds. CMAKE_EXE_LINKER_FLAGS_DEBUG:STRING= //Flags used by the linker during release minsize builds. CMAKE_EXE_LINKER_FLAGS_MINSIZEREL:STRING= //Flags used by the linker during release builds. CMAKE_EXE_LINKER_FLAGS_RELEASE:STRING= //Flags used by the linker during Release with Debug Info builds. CMAKE_EXE_LINKER_FLAGS_RELWITHDEBINFO:STRING= //Enable/Disable output of compile commands during generation. CMAKE_EXPORT_COMPILE_COMMANDS:BOOL=OFF //Install path prefix, prepended onto install directories. CMAKE_INSTALL_PREFIX:PATH=/usr/local //Path to a program. CMAKE_LINKER:FILEPATH=/usr/bin/ld //Path to a program. CMAKE_MAKE_PROGRAM:FILEPATH=/usr/bin/make //Flags used by the linker during the creation of modules. CMAKE_MODULE_LINKER_FLAGS:STRING=' ' //Flags used by the linker during debug builds. CMAKE_MODULE_LINKER_FLAGS_DEBUG:STRING= //Flags used by the linker during release minsize builds. CMAKE_MODULE_LINKER_FLAGS_MINSIZEREL:STRING= //Flags used by the linker during release builds. CMAKE_MODULE_LINKER_FLAGS_RELEASE:STRING= //Flags used by the linker during Release with Debug Info builds. CMAKE_MODULE_LINKER_FLAGS_RELWITHDEBINFO:STRING= ==[...]== That's a list of the contents of my lmms/build folder: [DIR]lmms/build$ dir CMakeCache.txt CPackSourceConfig.cmake lmmsconfig.h plugins CMakeFiles data lmms.rc CPackConfig.cmake include lmmsversion.h My Question: It just tells me that that "errors" occurred, but I can't see any error message. It seems like everything went fine. - So: Any idea what the problem could be? - Thanks.
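    For what it's worth, only two of those checks actually fail: the pkg-config probes for fftw3f >= 3.0.0 and samplerate >= 0.1.7 (the other "not found" lines, such as the OSS headers or Q_WS_MAC, are informational). A hedged sketch of the usual fix on Ubuntu 12.04 — the package names below are the standard Ubuntu ones that ship fftw3f.pc and samplerate.pc, so double-check with apt-cache search if they have moved:

    ```sh
    # Install the -dev packages that provide the missing pkg-config files.
    sudo apt-get install libfftw3-dev libsamplerate0-dev
    # As the cmake output itself insists: clear the cache before re-running.
    cd ~/Downloads/lmms-git/lmms/build
    rm CMakeCache.txt
    cmake ..
    ```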

    Read the article

  • The Faces in the Crowdsourcing

    - by Applications User Experience
    By Jeff Sauro, Principal Usability Engineer, Oracle Imagine having access to a global workforce of hundreds of thousands of people who can perform tasks or provide feedback on a design quickly and almost immediately. Distributing simple tasks not easily done by computers to the masses is called "crowdsourcing" and until recently was an interesting concept, but due to practical constraints wasn't used often. Enter Amazon.com. For five years, Amazon has hosted a service called Mechanical Turk, which provides an easy interface to the crowds. The service has almost half a million registered, global users performing a quarter of a million human intelligence tasks (HITs). HITs are submitted by individuals and companies in the U.S. and pay from $.01 for simple tasks (such as determining if a picture is offensive) to several dollars (for tasks like transcribing audio). What do we know about the people who toil away in this digital crowd? Can we rely on the work done in this anonymous marketplace? A rendering of the actual Mechanical Turk (from Wikipedia) Knowing who is behind Amazon's Mechanical Turk is fitting, considering the history of the actual Mechanical Turk. In the late 1800's, a mechanical chess-playing machine awed crowds as it beat master chess players in what was thought to be a mechanical miracle. It turned out that the creator, Wolfgang von Kempelen, had a small person (also a chess master) hiding inside the machine operating the arms to provide the illusion of automation. The field of human computer interaction (HCI) is quite familiar with gathering user input and incorporating it into all stages of the design process. It makes sense then that Mechanical Turk was a popular discussion topic at the recent Computer Human Interaction usability conference sponsored by the Association for Computing Machinery in Atlanta. It is already being used as a source for input on Web sites (for example, Feedbackarmy.com) and behavioral research studies. Two papers shed some light on the faces in this crowd. One paper tells us about the shifting demographics from mostly stay-at-home moms to young men in India. The second paper discusses the reliability and quality of work from the workers. Just who exactly would spend time doing tasks for pennies? In "Who are the crowdworkers?" University of California researchers Ross, Silberman, Zaldivar and Tomlinson conducted a survey of Mechanical Turk worker demographics and compared it to a similar survey done two years before. The initial survey reported workers consisting largely of young, well-educated women living in the U.S. with annual household incomes above $40,000. The more recent survey reveals a shift in demographics largely driven by an influx of workers from India. Indian workers went from 5% to over 30% of the crowd, and this block is largely male (two-thirds) with a higher average education than U.S. workers, and 64% report an annual income of less than $10,000 (keeping in mind $1 has a lot more purchasing power in India). This shifting demographic certainly has implications as language and culture can play critical roles in the outcome of HITs. Of course, the demographic data came from paying Turkers $.10 to fill out a survey, so there is some question about both a self-selection bias (characteristics which cause Turks to take this survey may be unrepresentative of the larger population), not to mention whether we can really trust the data we get from the crowd. 
Crowds can perform tasks or provide feedback on a design quickly and almost immediately for usability testing. (Photo attributed to victoriapeckham Flikr While having immediate access to a global workforce is nice, one major problem with Mechanical Turk is the incentive structure. Individuals and companies that deploy HITs want quality responses for a low price. Workers, on the other hand, want to complete the task and get paid as quickly as possible, so that they can get on to the next task. Since many HITs on Mechanical Turk are surveys, how valid and reliable are these results? How do we know whether workers are just rushing through the multiple-choice responses haphazardly answering? In "Are your participants gaming the system?" researchers at Carnegie Mellon (Downs, Holbrook, Sheng and Cranor) set up an experiment to find out what percentage of their workers were just in it for the money. The authors set up a 30-minute HIT (one of the more lengthy ones for Mechanical Turk) and offered a very high $4 to those who qualified and $.20 to those who did not. As part of the HIT, workers were asked to read an email and respond to two questions that determined whether workers were likely rushing through the HIT and not answering conscientiously. One question was simple and took little effort, while the second question required a bit more work to find the answer. Workers were led to believe other factors than these two questions were the qualifying aspect of the HIT. Of the 2000 participants, roughly 1200 (or 61%) answered both questions correctly. Eighty-eight percent answered the easy question correctly, and 64% answered the difficult question correctly. In other words, about 12% of the crowd were gaming the system, not paying enough attention to the question or making careless errors. Up to about 40% won't put in more than a modest effort to get paid for a HIT. Young men and those that considered themselves in the financial industry tended to be the most likely to try to game the system. There wasn't a breakdown by country, but given the demographic information from the first article, we could infer that many of these young men come from India, which makes language and other cultural differences a factor. These articles raise questions about the role of crowdsourcing as a means for getting quick user input at low cost. While compensating users for their time is nothing new, the incentive structure and anonymity of Mechanical Turk raises some interesting questions. How complex of a task can we ask of the crowd, and how much should these workers be paid? Can we rely on the information we get from these professional users, and if so, how can we best incorporate it into designing more usable products? Traditional usability testing will still play a central role in enterprise software. Crowdsourcing doesn't replace testing; instead, it makes certain parts of gathering user feedback easier. One can turn to the crowd for simple tasks that don't require specialized skills and get a lot of data fast. As more studies are conducted on Mechanical Turk, I suspect we will see crowdsourcing playing an increasing role in human computer interaction and enterprise computing. References: Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. Are your participants gaming the system?: screening mechanical turk workers. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15, 2010). CHI '10. ACM, New York, NY, 2399-2402. 
These articles raise questions about the role of crowdsourcing as a means of getting quick user input at low cost. While compensating users for their time is nothing new, the incentive structure and anonymity of Mechanical Turk raise some interesting questions. How complex a task can we ask of the crowd, and how much should these workers be paid? Can we rely on the information we get from these professional users, and if so, how can we best incorporate it into designing more usable products?

Traditional usability testing will still play a central role in enterprise software. Crowdsourcing doesn't replace testing; instead, it makes certain parts of gathering user feedback easier. One can turn to the crowd for simple tasks that don't require specialized skills and get a lot of data fast. As more studies are conducted on Mechanical Turk, I suspect we will see crowdsourcing playing an increasing role in human-computer interaction and enterprise computing.

References:

Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. Are your participants gaming the system?: Screening Mechanical Turk workers. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010). CHI '10. ACM, New York, NY, 2399-2402. Link: http://doi.acm.org/10.1145/1753326.1753688

Ross, J., Irani, L., Silberman, M. S., Zaldivar, A., and Tomlinson, B. 2010. Who are the crowdworkers?: Shifting demographics in Mechanical Turk. In Proceedings of the 28th International Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010). CHI EA '10. ACM, New York, NY, 2863-2872. Link: http://doi.acm.org/10.1145/1753846.1753873

    Read the article

  • CodePlex Daily Summary for Monday, August 11, 2014

CodePlex Daily Summary for Monday, August 11, 2014

Popular Releases

Space Engineers Server Manager: SESM V1.15 - Updated Quartz library. Corrected a bug in the new mod management. Added a warning if you have backup enabled on a server but no static map configured.

Aspose for Apache POI: Missing Features of Apache POI SS - v 1.2 - Release contains the missing features in the Apache POI SS SDK in comparison with Aspose.Cells. What's new: examples for Create Pivot Charts, Detect Merged Cells, Sort Data, and Printing Workbooks. Many more examples are available at Aspose Docs; raise your queries and suggest more examples via the Aspose Forums or via this social coding site.

AngularGo (SPA Project Template): AngularGo.VS2013.vsix - First release.

Touchmote: Touchmote 1.0 beta 13 - Less GPU usage. Works together with other Xbox 360 controls. Bug fixes.

Public Key Infrastructure PowerShell module: PowerShell PKI Module v3.0 - Important: I would like to hear more about what you are thinking about the project. I appreciate that you like it (2,000 downloads over the past 6 months), but maybe you have something to say? What do you dislike in the module? Maybe you would love to see some new functionality? Tell me what you think! Installation guide: use the default installation path to install this module for the current user only. To install this module for all users, enable the "Install for all users" check-box in the installation UI ...

Modern UI for WPF: Modern UI 1.0.6 - The ModernUI assembly, including a demo app demonstrating the various features of Modern UI for WPF. BREAKING CHANGE: LinkGroup.GroupName renamed to GroupKey. New features: improved rendering on high-DPI screens, including support for per-monitor DPI awareness available in Windows 8.1 (see also Per-monitor DPI awareness); new ModernProgressRing control with 8 built-in styles; new LinkCommands.NavigateLink routed command; new Visual Studio project templates 'Modern UI WPF App' and 'Modern UI W...

ClosedXML - The easy way to OpenXML: ClosedXML 0.74.0 - Multiple thread-safe improvements, including AdjustToContents, XLHelper, XLColor_Static, IntergerExtensions.ToStringLookup. An exception is now thrown when saving a workbook with no sheets, instead of creating a corrupt workbook. Fix for hyperlinks with non-ASCII characters. Added basic workbook protection. Fix for an error thrown when a spreadsheet contained comments and images. Fix to the Trim function. Fix for an invalid operation exception thrown when the formula functions MAX, MIN, and AVG referenc...

SEToolbox: SEToolbox 01.042.019 Release 1 - Added RadioAntenna broadcast name to ship name detail. Added two additional columns for asteroid material generation for asteroid fields. Added Mass and Block number columns to the main display. Added ellipsis to some columns on the main display to reduce name confusion. Added correct SE version number in file when saving. Re-added reattaching Motor when drag/dropping or importing ships (KeenSH have added RotorEntityId back in after removing it months ago). Added option to export and r...

jQuery List DragSort: jQuery List DragSort 0.5.2 - Fixed scrollContainer by removing deprecated use of $.browser, so it should now work with the latest version of jQuery. Added the ability to return false in dragEnd to revert sort order. Project changes: added NuGet package for dragsort (https://www.nuget.org/packages/dragsort); converted repository from SVN to Mercurial.

Braintree Client Library: Braintree 2.32.0 - Allow credit card verification options to be passed outside of the nonce for PaymentMethod.create. Allow billing_address parameters and billing_address_id to be passed outside of the nonce for PaymentMethod.create. Add subscriptions to PayPal accounts. Add PaymentMethod.update. Add fail_on_duplicate_payment_method option to PaymentMethod.create. Add support for dispute webhooks.

The Mario Kart 8 App: V1.0.2.1 - First CodePlex release. WINDOWS INSTALLER ONLY.

Aspose Java for Docx4j: Aspose.Words vs Docx4j - v 1.0 - Release contains the code comparison for features in the Docx4j SDK and Aspose.Words. What's new: examples for Accessing Document Properties, Add Bookmarks, Convert to Formats, Delete Bookmarks, and Working with Comments. Many more examples are available at Aspose Docs; raise your queries and suggest more examples via the Aspose Forums or via this social coding site.

File System Security PowerShell Module: NTFSSecurity 2.4.1 - Add-Access and Remove-Access now take multiple accounts.

YourSqlDba: YourSqlDba 5.2.1 - This version improves the alert message that appears a while after you install the script. First, it says to get it from YourSqlDba.CodePlex.com. If you don't want to update now, just re-run the script from your installed version. To see the actual running version, just execute install.PrintVersionInfo. You can go to source code / history and click on change set 72957 to see the changes in the script.

Manipulator: Manipulator - manipulator.

XNB filetype plugin for Paint.NET: Paint.NET XNB plugin v0.4.0.0 - Changelog: reverted old incomplete changes; updated library for compatibility with Paint.NET 4; updated project to .NET 4.5; updated version to 0.4.0.0. Installation instructions: extract the ZIP file to your Paint.NET\FileTypes folder.

EdiFabric: Release 4.1 - Changed MessageContext.

Wix# (WixSharp) - managed interface for WiX: Release 1.0.0.0 - Custom UI, custom MSI dialog, custom CLR dialog, external UI.

Math.NET Numerics: Math.NET Numerics v3.2.0 - Linear Algebra: Vector.Map2 (map2 in F#), storage-optimized. Linear Algebra: fix RemoveColumn/Row early index bound check (was not strict enough). Statistics: Entropy ~Jeff Mastry. Interpolation: use Array.BinarySearch instead of local implementation ~Candy Chiu. Resources: fix a corrupted exception message string. Portable Build: support .Net 4.0 as well by using profile 328 instead of 344. .Net 3.5: F# extensions now support .Net 3.5 as well. .Net 3.5: NuGet package now contains pro...

babelua: 1.6.5.1 - V1.6.5.1 - 2014.8.7. New feature: code formatting. Stability improvement: fixed a bug that popped up the error "System.Net.WebResponse EndGetResponse".

New Projects

DouDou: a little project.
Dynamic MVC: Dynamically generate views from your model objects for a data-centric MVC application.
EasyDb - Simple Data Access: EasyDb is a simple library for data access that allows you to write less code.
ExpressToAbroad: just go!!!!!
Full Silverlight Web Video/Voice Conferencing: The goal of this project is to provide complete open source (voice/video chatting client/server) modules using Silverlight.
Gaia: Gaia is an app for the Windows platform; Gaia is like Siri, Google Now, or Betty, but Gaia uses only text commands.
pxctest: pxctest
STACS: Career Management System for MIT by Team "STACS".
StrongWorld: StrongWorld.WebSuite
Xevas Tools: Xevas is a professional coders group of 'Nimbuzz'. We make all tools for worldwide users of Nimbuzz at no cost.

    Read the article
