Search Results


  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
    SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full blown Help Desks can be crafted up using SharePoint from just customizing Lists. No programming necessary. However there are a few tricks I’ve painfully learned over the years that you can use for your own solutions. DO What’s In A Name? When you create a new list, column, or view you’ll commonly name it something like “Expense Reports”. However this has the ugly effect of creating a url to the list as “Expense%20Reports”. Or worse, an internal field name of “Expense_x0x0020_Reports” which is not only cryptic but hard to remember when you’re trying to find the column by internal name. While “Expense Reports 2011” is user friendly, “ExpenseReports2011” is not (unless you’re a programmer). So that’s not the solution. Well, not entirely. Instead when you create your column or list or view use the scrunched up name (I can’t think of the technical term for it right now) of “ExpenseReports2011”, “WomenAtTheOfficeThatAreMen” or “KoalaMeatIsGoodWhenBroiled”. After you’ve created it, go back and change the name to the more friendly “Silly Expense Reports That Nobody Reads”. The original internal name will be the url and code friendly one without spaces while the one used on data entry forms and view headers will be the human version. Smart Columns When building a view include columns that make sense. By default when you add a column the “Add to default view” is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense to what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what’s important to them and why. If they’re doing something repetitively as part of their job, try to make their life easier by including what’s most important to them. Do they really need to see the Created *and* Modified date of a document or do they just need the title and author? You’ll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don’t ask). Gotta Keep it Separated Hey, views are there for a reason. Use them. While “All Items” is a fine way to present a list of well, all items, it’s hardly sufficient to present a list of servers built before the Y2K bug hit. You’ll be scrolling the list for hours finally arriving at Page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They’re the way you can slice and dice the data up so that you’re not trying to consume 3,000 years of human evolution on a single web page. Remember views can be filtered so feel free to create a view for each status or one for each operating system or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it’ll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view. 
    The discoverability of the Views menu isn’t overly obvious and if you violate the rule of columns (see Horizontally Scrolling below) the view menu doesn’t even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing “CLICK ME NOW” will help your users find their way. Sort It! Views are great so we’re building nice, rich views for the user. Awesomesauce. However sort is not very discoverable by the user. For example when you’re looking at a view how do you know if it’s ascending or descending, and what is it sorted on? Maybe it’s sorted using two fields so what’s that all about? Help your users by letting them know the information they’re looking at is sorted. Maybe you name the view something appropriate like “Bogus Expense Claims Sorted By Deadbeats”. If you use the naming strategy just make sure you keep the name consistent with the description. In the previous example there had better be a Deadbeat column so I can see the sort in action. Having a “Loser” column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don’t use acronyms and even if they knew how to, it’s not immediately obvious to them that’s what you’re trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly what view they’re looking at. Each view is its own page so one CEWP won’t be used across the board. Be descriptive in what the user is seeing but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list. DO NOT Useless Attachments The attachments column is, in a word, useless. For the most part. Sure it indicates there’s an attachment on the list item but in the grand scheme of things that’s not overly informative. Maybe it is and by all means, if it makes sense to you include it. Colour it. Make it shine and stand out like the Return of Clippy on every SharePoint list. Without it being functional it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful so feel free to head over and check out this blog post by Paul Grenier on the task (Warning: code ahead! Danger Will Robinson!) In any case, I would suggest you remove it from your views. Again if it’s important then include it but consider the jQuery solution above to make it functional. It’s added by default to views and is one of the things that people forget to clean up. Horizontal Scrolling Screen real estate is at a premium so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn’t the most user friendly experience. Most users can’t figure out how to scroll vertically let alone horizontally so don’t make it even more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again views are your friend. Consider splitting up the data into views where one view contains 10 columns and the other view contains the other 10. Okay, maybe your information doesn’t work that way but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don’t want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user “So what do you do with all this information?”. 
    The response is usually “With this data [the first 10 columns] I decide if I’m going to fire everyone, and with this data [the next 10 columns] I decide if I’m going to set the building on fire and collect the insurance”. It’s at that point I show them how to create two new views “People Who Are About To Get The Axe” and “Beach Time For The Executives”. Again, talk to your users and try to reason with them on cutting down the number of columns they see at once. Vertical Scrolling Another big faux pas I find is the use of multi-line comment fields in views. It’s not so bad when you have a statement like this in your view: “I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now.” It’s fine if that’s the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents. Duplicate Information Duplication is never good. Views are great as you can group data together. For example create a view of project status reports grouped by author. Then you can see which project manager is being a dip and not submitting their report. However if you group by author do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author do you need both? Horizontal real estate is always at a premium so try not to clutter up the view with duplicate data like this. Oh yeah, if you’re scratching your head saying “But Bil, if I don’t include the Project name in the view and I have a lot of items then how do I know which one I’m looking at?”. That’s a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again it’s just managing the information you have. Redundant, See Redundant This partially relates to duplicate information and smart columns but basically remember to not include the obvious in a view. Remember, don’t make me think. If you’ve gone to the trouble (and it was a lot of trouble, wasn’t it?) to create separate views of your data by creating a “September Zombie Brain Sales”, “October Zombie Brain Sales”, etc. then please for the love of all that is holy do not include the Month and Product columns in your view. Similarly if you create a “My” view of anything (“My Favourite Brands of Spandex”, “My Co-Workers I Find The Urge To Disinfect”) then again, do not include the owner or author field (or whatever field you use to identify “My”). That’s just silly. Hope that helps! Happy customizing!
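    A quick illustration of the “What’s In A Name?” tip above: the no-spaces name you create first sticks around as the internal name, while the friendly name you rename it to afterwards is only the display title. Here is a hedged sketch using the SharePoint server object model (the site URL and list title are placeholders, and this has to run on the SharePoint server itself):

    using System;
    using Microsoft.SharePoint;

    class InternalNameDemo
    {
        static void Main()
        {
            using (SPSite site = new SPSite("http://intranet"))   // placeholder URL
            using (SPWeb web = site.RootWeb)
            {
                SPList list = web.Lists["Silly Expense Reports That Nobody Reads"];
                foreach (SPField field in list.Fields)
                {
                    // InternalName keeps the original scrunched-up name (e.g. ExpenseReports2011);
                    // Title is the friendly name shown on data entry forms and view headers.
                    Console.WriteLine("{0} -> {1}", field.InternalName, field.Title);
                }
            }
        }
    }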

    Read the article

  • Exam 70-480 Study Material: Programming in HTML5 with JavaScript and CSS3

    - by Stacy Vicknair
    Here’s a list of sources of information for the different elements that comprise the 70-480 exam: General Resources http://www.w3schools.com (As pointed out in David Pallmann’s blog some of this content is unverified, but it is a decent source of information. For more about when it isn’t decent, see http://www.w3fools.com ) http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/ (A guy who did a lot of what I did already, sadly I found this halfway through finishing my resources list. This list is expertly put together so I would recommend checking it out.) http://davidpallmann.blogspot.com/2012/08/microsoft-certification-exam-70-480.html http://pluralsight.com/training/Courses (Yes, this isn’t free, but if you look at the course listing there is an entire section on HTML5, CSS3 and Javascript. You can always try the trial!)   Some of the links I put below will overlap with the other resources above, but I tried to find explanations that looked beneficial to me on links outside those already mentioned.   Test Breakdown Implement and Manipulate Document Structures and Objects (24%) Create the document structure. o This objective may include but is not limited to: structure the UI by using semantic markup, including for search engines and screen readers (Section, Article, Nav, Header, Footer, and Aside); create a layout container in HTML http://www.w3schools.com/html/html5_new_elements.asp   Write code that interacts with UI controls. o This objective may include but is not limited to: programmatically add and modify HTML elements; implement media controls; implement HTML5 canvas and SVG graphics http://www.w3schools.com/html/html5_canvas.asp http://www.w3schools.com/html/html5_svg.asp   Apply styling to HTML elements programmatically. o This objective may include but is not limited to: change the location of an element; apply a transform; show and hide elements   Implement HTML5 APIs. o This objective may include but is not limited to: implement storage APIs, AppCache API, and Geolocation API http://www.w3schools.com/html/html5_geolocation.asp http://www.w3schools.com/html/html5_webstorage.asp http://www.w3schools.com/html/html5_app_cache.asp   Establish the scope of objects and variables. o This objective may include but is not limited to: define the lifetime of variables; keep objects out of the global namespace; use the “this” keyword to reference an object that fired an event; scope variables locally and globally http://robertnyman.com/2008/10/09/explaining-javascript-scope-and-closures/ http://www.quirksmode.org/js/this.html   Create and implement objects and methods. o This objective may include but is not limited to: implement native objects; create custom objects and custom properties for native objects using prototypes and functions; inherit from an object; implement native methods and create custom methods http://www.javascriptkit.com/javatutors/object.shtml http://www.crockford.com/javascript/inheritance.html http://stackoverflow.com/questions/1635116/javascript-class-method-vs-class-prototype-method http://www.javascriptkit.com/javatutors/proto.shtml     Implement Program Flow (25%) Implement program flow. 
o This objective may include but is not limited to: iterate across collections and array items; manage program decisions by using switch statements, if/then, and operators; evaluate expressions http://www.javascriptkit.com/jsref/looping.shtml http://www.javascriptkit.com/javatutors/varshort.shtml http://www.javascriptkit.com/javatutors/switch.shtml   Raise and handle an event. o This objective may include but is not limited to: handle common events exposed by DOM (OnBlur, OnFocus, OnClick); declare and handle bubbled events; handle an event by using an anonymous function http://dev.w3.org/2006/webapi/DOM-Level-3-Events/html/DOM3-Events.html http://javascript.info/tutorial/bubbling-and-capturing   Implement exception handling. o This objective may include but is not limited to: set and respond to error codes; throw an exception; request for null checks; implement try-catch-finally blocks http://www.javascriptkit.com/javatutors/trycatch.shtml   Implement a callback. o This objective may include but is not limited to: receive messages from the HTML5 WebSocket API; use jQuery to make an AJAX call; wire up an event; implement a callback by using anonymous functions; handle the “this” pointer http://www.w3.org/TR/2011/WD-websockets-20110419/ http://www.html5rocks.com/en/tutorials/websockets/basics/ http://api.jquery.com/jQuery.ajax/   Create a web worker process. o This objective may include but is not limited to: start and stop a web worker; pass data to a web worker; configure timeouts and intervals on the web worker; register an event listener for the web worker; limitations of a web worker https://developer.mozilla.org/en-US/docs/DOM/Using_web_workers http://www.html5rocks.com/en/tutorials/workers/basics/   Access and Secure Data (26%) Validate user input by using HTML5 elements. o This objective may include but is not limited to: choose the appropriate controls based on requirements; implement HTML input types and content attributes (for example, required) to collect user input http://diveintohtml5.info/forms.html   Validate user input by using JavaScript. o This objective may include but is not limited to: evaluate a regular expression to validate the input format; validate that you are getting the right kind of data type by using built-in functions; prevent code injection http://www.regular-expressions.info/javascript.html http://msdn.microsoft.com/en-us/library/66ztdbe6(v=vs.94).aspx https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Operators/typeof http://blog.stackoverflow.com/2008/06/safe-html-and-xss/ http://stackoverflow.com/questions/942011/how-to-prevent-javascript-injection-attacks-within-user-generated-html   Consume data. o This objective may include but is not limited to: consume JSON and XML data; retrieve data by using web services; load data or get data from other sources by using XMLHTTPRequest http://www.erichynds.com/jquery/working-with-xml-jquery-and-javascript/ http://www.webdevstuff.com/86/javascript-xmlhttprequest-object.html http://www.json.org/ http://stackoverflow.com/questions/4935632/how-to-parse-json-in-javascript   Serialize, deserialize, and transmit data. 
o This objective may include but is not limited to: binary data; text data (JSON, XML); implement the jQuery serialize method; Form.Submit; parse data; send data by using XMLHTTPRequest; sanitize input by using URI/form encoding http://api.jquery.com/serialize/ http://www.javascript-coder.com/javascript-form/javascript-form-submit.phtml http://stackoverflow.com/questions/327685/is-there-a-way-to-read-binary-data-into-javascript https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/encodeURI     Use CSS3 in Applications (25%) Style HTML text properties. o This objective may include but is not limited to: apply styles to text appearance (color, bold, italics); apply styles to text font (WOFF and @font-face, size); apply styles to text alignment, spacing, and indentation; apply styles to text hyphenation; apply styles for a text drop shadow http://www.w3schools.com/css/css_text.asp http://www.w3schools.com/css/css_font.asp http://nicewebtype.com/notes/2009/10/30/how-to-use-css-font-face/ http://webdesign.about.com/od/beginningcss/p/aacss5text.htm http://www.w3.org/TR/css3-text/ http://www.css3.info/preview/box-shadow/   Style HTML box properties. o This objective may include but is not limited to: apply styles to alter appearance attributes (size, border and rounding border corners, outline, padding, margin); apply styles to alter graphic effects (transparency, opacity, background image, gradients, shadow, clipping); apply styles to establish and change an element’s position (static, relative, absolute, fixed) http://net.tutsplus.com/tutorials/html-css-techniques/10-css3-properties-you-need-to-be-familiar-with/ http://www.w3schools.com/css/css_image_transparency.asp http://www.w3schools.com/cssref/pr_background-image.asp http://ie.microsoft.com/testdrive/graphics/cssgradientbackgroundmaker/default.html http://www.w3.org/TR/CSS21/visufx.html http://www.barelyfitz.com/screencast/html-training/css/positioning/ http://davidwalsh.name/css-fixed-position   Create a flexible content layout. o This objective may include but is not limited to: implement a layout using a flexible box model; implement a layout using multi-column; implement a layout using position floating and exclusions; implement a layout using grid alignment; implement a layout using regions, grouping, and nesting http://www.html5rocks.com/en/tutorials/flexbox/quick/ http://www.css3.info/preview/multi-column-layout/ http://msdn.microsoft.com/en-us/library/ie/hh673558(v=vs.85).aspx http://dev.w3.org/csswg/css3-grid-layout/ http://dev.w3.org/csswg/css3-regions/   Create an animated and adaptive UI. o This objective may include but is not limited to: animate objects by applying CSS transitions; apply 3-D and 2-D transformations; adjust UI based on media queries (device adaptations for output formats, displays, and representations); hide or disable controls http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/   Find elements by using CSS selectors and jQuery. o This objective may include but is not limited to: choose the correct selector to reference an element; define element, style, and attribute selectors; find elements by using pseudo-elements and pseudo-classes (for example, :before, :first-line, :first-letter, :target, :lang, :checked, :first-child) http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/   Structure a CSS file by using CSS selectors. 
o This objective may include but is not limited to: reference elements correctly; implement inheritance; override inheritance by using !important; style an element based on pseudo-elements and pseudo-classes (for example, :before, :first-line, :first-letter, :target, :lang, :checked, :first-child) http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/   Technorati Tags: 70-480,CSS3,HTML5,HTML,CSS,JavaScript,Certification
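    Several of the objectives above are easier to retain with a tiny example in front of you. Here is a hedged sketch of the web worker objective (the worker.js file name and the message payload are just illustrative):

    // main.js – start a worker, send it data, listen for the result, stop it when done.
    var worker = new Worker('worker.js');

    worker.addEventListener('message', function (e) {
        console.log('Result from worker: ' + e.data);
        worker.terminate();   // stop the worker once we have what we need
    });

    worker.postMessage({ numbers: [1, 2, 3, 4] });

    // worker.js – runs off the UI thread, so no DOM access in here.
    self.addEventListener('message', function (e) {
        var sum = e.data.numbers.reduce(function (a, b) { return a + b; }, 0);
        self.postMessage(sum);
    });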

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
    In SQL Server codenamed “Denali” we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration which is always a bit of a hassle. I’m obviously motivated to see everyone move their diagnostic tracing systems over to the new extended events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials. In my initial post on Denali CTP 1 I described a couple of tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I’ll describe the tables in a bit more detail, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Event objects (Events & Actions) and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (e.g. a trace that you can see in sys.traces) and convert it into event session DDL. Can you relate? In some ways, SQL Trace and Extended Events are kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It’s often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they’re comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other. First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine, for example, SQL:BatchCompleted fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it and those Columns contain the data that is interesting about the Event Class, such as the duration or database name. In Extended Events we have objects named Events, EventData fields and Actions. The Event (some people call this an xEvent but I’ll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An EventData field (from now on I’ll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional “action” to occur whenever the parent Event occurs. Actions can do a number of different things; for example, there are Actions that collect additional data and Actions that take memory dumps. When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice. Let me draw you a map. Event Mapping The table dbo.trace_xe_event_map exists in the master database with the following structure:

    Column_name       Type
    trace_event_id    smallint
    package_name      nvarchar
    xe_event_name     nvarchar

    By joining this table to sys.trace_events using trace_event_id and to sys.dm_xe_objects using xe_event_name you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event. 
    SELECT
        t.trace_event_id,
        t.name [event_class],
        e.package_name,
        e.xe_event_name
    FROM sys.trace_events t
    INNER JOIN dbo.trace_xe_event_map e
        ON t.trace_event_id = e.trace_event_id

    There are a couple of things you’ll notice as you peruse the output of this query: For the most part, the names of Events are fairly close to the original Event Class; e.g. SP:CacheMiss == sp_cache_miss, and so on. We’ve mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a “smarter” implementation for this type of event: you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures. There are some Event Classes that did not make the cut and were not migrated. These fall into two categories: there were a few Event Classes that had been deprecated, or that just did not make sense, so we didn’t migrate them. (You won’t find an Event related to mounting a tape – sorry.) The second class is bigger; with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. This is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing so you can assign these tasks to different people if you choose. (If you’re wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.) Action Mapping The table dbo.trace_xe_action_map exists in the master database with the following structure:

    Column_name        Type
    trace_column_id    smallint
    package_name       nvarchar
    xe_action_name     nvarchar

    You can find more details by joining this to sys.trace_columns on the trace_column_id field.

    SELECT
        c.trace_column_id,
        c.name [column_name],
        a.package_name,
        a.xe_action_name
    FROM sys.trace_columns c
    INNER JOIN dbo.trace_xe_action_map a
        ON c.trace_column_id = a.trace_column_id

    If you examine this list, you’ll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events so there is no need to specify them as Actions. Putting it all together If you’ve spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn’t, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We’ve used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes. 
    USE MASTER;
    GO
    SELECT DISTINCT
       tb.trace_event_id,
       te.name AS 'Event Class',
       em.package_name AS 'Package',
       em.xe_event_name AS 'XEvent Name',
       tb.trace_column_id,
       tc.name AS 'SQL Trace Column',
       am.xe_action_name AS 'Extended Events action'
    FROM (sys.trace_events te LEFT OUTER JOIN dbo.trace_xe_event_map em
       ON te.trace_event_id = em.trace_event_id)
    LEFT OUTER JOIN sys.trace_event_bindings tb
       ON em.trace_event_id = tb.trace_event_id
    LEFT OUTER JOIN sys.trace_columns tc
       ON tb.trace_column_id = tc.trace_column_id
    LEFT OUTER JOIN dbo.trace_xe_action_map am
       ON tc.trace_column_id = am.trace_column_id
    ORDER BY te.name, tc.name

    As you might imagine, it’s also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.

    USE MASTER;
    GO
    DECLARE @trace_id int
    SET @trace_id = 1
    SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'
       , el.columnid, ec.xe_action_name AS 'action'
    FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
       LEFT OUTER JOIN dbo.trace_xe_event_map AS em
          ON el.eventid = em.trace_event_id)
    LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
       ON el.columnid = ec.trace_column_id
    WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL

    You’ll notice in the output that the list doesn’t include any of the security audit Event Classes; as I wrote earlier, those were not migrated. But wait…there’s more! If this were an infomercial there’d be some obnoxious guy next to me blogging “Well Mike…that’s pretty neat, but I’m sure you can do more. Can’t you make it even easier to migrate from SQL Trace?”  Needless to say, I’d blog back, in an overly excited way, “You bet I can, obnoxious blogger side-kick!” What I’ve got for you here is an Extended Events Team Blog only special – this tool will not be sold in any store; it’s a special offer for those of you reading the blog. I’ve wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn’t have to be running) to evaluate; there is no functionality to parse T-SQL scripts. I’m not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail. 
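    To give a rough feel for the shape of a procedure like this (a sketch only, not the downloadable sample; the real implementation also handles Actions, predicates and the corner cases mentioned above), a CLR procedure can read the trace definition over the context connection, reuse the mapping join shown earlier, and push the generated DDL back through SqlContext.Pipe:

    using System.Data.SqlClient;
    using System.Text;
    using Microsoft.SqlServer.Server;

    public partial class StoredProcedures
    {
        [SqlProcedure]
        public static void ConvertTraceToExtendedEvent(int traceId, string sessionName)
        {
            StringBuilder ddl = new StringBuilder();
            ddl.AppendFormat("CREATE EVENT SESSION [{0}] ON SERVER\n", sessionName);

            using (SqlConnection conn = new SqlConnection("context connection=true"))
            {
                conn.Open();
                // Reuse the event mapping join from earlier to find the equivalent Events.
                SqlCommand cmd = new SqlCommand(
                    @"SELECT DISTINCT em.package_name, em.xe_event_name
                      FROM sys.fn_trace_geteventinfo(@trace_id) el
                      INNER JOIN dbo.trace_xe_event_map em ON el.eventid = em.trace_event_id",
                    conn);
                cmd.Parameters.AddWithValue("@trace_id", traceId);

                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        ddl.AppendFormat("ADD EVENT {0}.{1},\n",
                            reader.GetString(0), reader.GetString(1));
                    }
                }
            }

            // Emit the DDL to the Messages tab rather than executing it.
            SqlContext.Pipe.Send(ddl.ToString().TrimEnd(',', '\n') + ";");
        }
    }

    Once the assembly is registered (see the install script below), the call is simply EXEC CreateEventSessionFromTrace @trace_id = 1, @session_name = N'converted_default_trace'.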
    Sample code: TraceToExtendedEventDDL

    Installing the procedure: Just in case you’re not familiar with installing CLR procedures…once you’ve compiled the assembly you can load it using a script like this:

    -- Context to master
    USE master
    GO
    -- Create the assembly from a shared location.
    CREATE ASSEMBLY TraceToXESessionConverter
    FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
    WITH PERMISSION_SET = SAFE
    GO
    -- Create a stored procedure from the assembly.
    CREATE PROCEDURE CreateEventSessionFromTrace
    @trace_id int,
    @session_name nvarchar(max)
    AS
    EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
    GO

    Enjoy! -Mike

    Read the article

  • Building the Ultimate SharePoint 2010 Development Environment

    - by Manesh Karunakaran
    It’s been more than a month since SharePoint 2010 RTMed. And a lot of people have downloaded and set up their very own SharePoint 2010 development rigs. And quite a few people have written blogs about setting up good development environments, there is even an MSDN article on it. Two of the blogs worth noting are from MVPs Sahil Malik and Wictor Wilén. Make sure that you check these out as well. Part of the bad side-effects of being a geek is the need to do the technical stuff the best way possible (pragmatic or otherwise), but the problem with this is that what is considered “best” is relative. Precisely the reason why you are reading this post now. Most of the posts that I read are out dated/need updations or are using the wrong OS’es or virtualization solutions (again, opinions vary) or using them the wrong way. Here’s a developer’s view of Building the Ultimate SharePoint 2010 Development Rig. If you are a sales guy, it’s time to close this window. Confusion 1: Which Host Operating System and Virtualization Solution to use? This point has been beaten to death in numerous blog posts in the past, if you have time to invest, read this excellent post by our very own SharePoint Joel on this subject. But if you are planning to build the Ultimate Development Rig, then Windows Server 2008 R2 with Hyper-V is the option that you should be looking at. I have been using this as my primary OS for about 6-7 months now, and I haven’t had any Driver issue or Application compatibility issue. In my experience all the Windows 7 drivers work fine with WIN2008 R2 also. You can enable Aero for eye candy (and the Windows 7 look and feel) and except for a few things like the Hibernation support (which a can be enabled if you really want it), Windows Server 2008 R2, is the best Workstation OS that I have used till date. But frankly the answer to this question of which OS to use depends primarily on one question - Are you willing to change your primary OS? If the answer to that is ‘Yes’, then Windows 2008 R2 with Hyper-V is the best option, if not look at vmWare or VirtualBox, both are equally good. Those who are familiar with a Virtual PC background might prefer Sun VirtualBox. Besides, these provide support for running 64 bit guest machines on 32 bit hosts if the underlying hardware is truly 64 bit. See my earlier post on this. Since we are going to make the ultimate rig, we will use Windows Server 2008 R2 with Hyper-V, for reasons mentioned above. Confusion 2: Should I use a multi-(virtual) server set up? A lot of people use multiple servers for their development environments - like Wictor Wilén is suggesting - one server hosting the Active directory, one hosting SharePoint Server and another one for SQL Server. True, this mimics the production environment the best possible way, but as somebody who has fallen for this set up earlier, I can tell you that you don’t really get anything by doing this. Microsoft has done well to ensure that if you can do it on one machine, you can do it in a farm environment as well. Besides, when you run multiple Server class machine instances in parallel, there are a lot of unwanted processor cycles wasted for no good use. In my personal experience, as somebody who needs to switch between MOSS 2007/SharePoint 2010 environments from time to time, the best possible solution is to Make the host Windows Server 2008 R2 machine your Domain Controller (AD Server) Make all your Virtual Guest OS’es join this domain. 
Have each Individual Guest OS Image have it’s own local SQL Server instance. The advantages are that you can reuse the users and groups in each of the Guest operating systems, you can manage the users in one place, AD is light weight and doesn't take too much resources on your host machine and also having separate SQL instances for each of the Development images gives you maximum flexibility in terms of configuration, for example your SharePoint rigs can have simpler DB configurations, compared to your MS BI blast pits. Confusion 3: Which Operating System should I use to run SharePoint 2010 Now that’s a no brainer. Use Windows 2008 R2 as your Guest OS. When you are building the ultimate rig, why compromise? If you are planning to run Windows Server 2008 as your Guest OS, there are a few patches that you need to install at different times during the installation, for that follow the steps mentioned here Okay now that we have made our choices, let’s get to the interesting part of building the rig, Step 1: Prepare the host machine – Install Windows Server 2008 R2 Install Windows Server 2008 R2 on your best Desktop/Laptop. If you have read this far, I am quite sure that you are somebody who can install an OS on your own, so go ahead and do that. Make sure that you run the compatibility wizard before you go ahead and nuke your current OS. There are plenty of blogs telling you how to make a good Windows 2008 R2 Workstation that feels and behaves like a Windows 7 machine, follow one and once you are done, head to Step 2. Step 2: Configure the host machine as a Domain Controller Before we begin this, let me tell you, this step is completely optional, you don’t really need to do this, you can simply use the local users on the Guest machines instead, but if this is a much cleaner approach to manage users and groups if you run multiple guest operating systems.  This post neatly explains how to configure your Windows Server 2008 R2 host machine as a Domain Controller. Follow those simple steps and you are good to go. If you are not able to get it to work, try this. Step 3: Prepare the guest machine – Install Windows Server 2008 R2 Open Hyper-V Manager Choose to Create a new Guest Operating system Allocate at least 2 GB of Memory to the Guest OS Choose the Windows 2008 R2 Installation Media Start the Virtual Machine to commence installation. Once the Installation is done, Activate the OS. Step 4: Make the Guest operating systems Join the Domain This step is quite simple, just follow these steps below, Fire up Hyper-V Manager, open your Guest OS Click on Start, and Right click on ‘Computer’ and choose ‘Properties’ On the window that pops-up, click on ‘Change Settings’ On the ‘System Properties’ Window that comes up, Click on the ‘Change’ button Now a window named ‘Computer Name/Domain Changes’ opens up, In the text box titled Domain, type in the Domain name from Step 2. Click Ok and windows will show you the welcome to domain message and ask you to restart the machine, click OK to restart. If the addition to domain fails, that means that you have not set up networking in Hyper-V for the Guest OS to communicate with the Host. To enable it, follow the steps I had mentioned in this post earlier. Step 5: Install SQL Server 2008 R2 on the Guest Machine SQL Server 2008 R2 gets installed with out hassle on Windows Server 2008 R2. SQL Server 2008 needs SP2 to work properly on WIN2008 R2. Also SQL Server 2008 R2 allows you to directly add PowerPivot support to SharePoint. 
Choose to install in SharePoint Integrated Mode in Reporting Server Configuration. Step 6: Install KB971831 and SharePoint 2010 Pre-requisites Now install the WCF Hotfix for Microsoft Windows (KB971831) from this location, and SharePoint 2010 Pre-requisites from the SP2010 Installation media. Step 7: Install and Configure SharePoint 2010 Install SharePoint 2010 from the installation media, after the installation is complete, you are prompted to start the SharePoint Products and Technologies Configuration Wizard. If you are using a local instance of Microsoft SQL Server 2008, install the Microsoft SQL Server 2008 KB 970315 x64 before starting the wizard. If your development environment uses a remote instance of Microsoft SQL Server 2008 or if it has a pre-existing installation of Microsoft SQL Server 2008 on which KB 970315 x64 has already been applied, this step is not necessary. With the wizard open, do the following: Install SQL Server 2008 KB 970315 x64. After the Microsoft SQL Server 2008 KB 970315 x64 installation is finished, complete the wizard. Alternatively, you can choose not to run the wizard by clearing the SharePoint Products and Technologies Configuration Wizard check box and closing the completed installation dialog box. Install SQL Server 2008 KB 970315 x64, and then manually start the SharePoint Products and Technologies Configuration Wizard by opening a Command Prompt window and executing the following command: C:\Program Files\Common Files\Microsoft Shared Debug\Web Server Extensions\14\BIN\psconfigui.exe The SharePoint Products and Technologies Configuration Wizard may fail if you are using a computer that is joined to a domain but that is not connected to a domain controller. Step 8: Install Visual Studio 2010 and SharePoint 2010 SDK Install Visual Studio 2010 Download and Install the Microsoft SharePoint 2010 SDK Step 9: Install PowerPivot for SharePoint and Configure Reporting Services Pop-In the SQLServer 2008 R2 installation media once again and install PowerPivot for SharePoint. This will get added as another instance named POWERPIVOT. Configure Reporting Services by following the steps mentioned here, if you need to get down to the details on how the integration between SharePoint 2010 and SQL Server 2008 R2 works, see Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010 an excellent article by Alan Le Marquand Step 10: Download and Install Sample Databases for Microsoft SQL Server 2008R2 SharePoint 2010 comes with a lot of cool stuff like PerformancePoint Services and BCS, if you need to try these out, you need to have data in your databases. So if you want to save yourself the trouble of creating sample data for your PerformancePoint and BCS experiments, download and install Sample Databases for Microsoft SQL Server 2008R2 from CodePlex. And you are done! Fire up your Visual Studio 2010 and Start Coding away!!
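    With the rig built, a two-minute smoke test from a new console project is a nice way to confirm the object model is reachable. A minimal sketch (the URL is whatever web application you provisioned, and the project needs to target .NET 3.5 and x64 and run on the SharePoint box itself):

    using System;
    using Microsoft.SharePoint;

    class RigSmokeTest
    {
        static void Main()
        {
            // Open the root site collection and list what's in it.
            using (SPSite site = new SPSite("http://localhost"))   // placeholder URL
            using (SPWeb web = site.RootWeb)
            {
                Console.WriteLine("Connected to: " + web.Title);
                foreach (SPList list in web.Lists)
                {
                    Console.WriteLine(" - " + list.Title);
                }
            }
        }
    }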

    Read the article

  • Doing CRUD on XML using id attributes in C# ASP.NET

    - by Brandon G
    I'm a LAMP guy and ended up working this small news module for an asp.net site, which I am having some difficulty with. I basically am adding and deleting elements via AJAX based on the id. Before, I had it working based on the the index of a set of elements, but would have issues deleting, since the index would change in the xml file and not on the page (since I am using ajax). Here is the rundown news.xml <?xml version="1.0" encoding="utf-8"?> <news> <article id="1"> <title>Red Shield Environmental implements the PARCSuite system</title> <story>Add stuff here</story> </article> <article id="2"> <title>Catalyst Paper selects PARCSuite for its Mill-Wide Process...</title> <story>Add stuff here</story> </article> <article id="3"> <title>Weyerhaeuser uses Capstone Technology to provide Control...</title> <story>Add stuff here</story> </article> </news> Page sending del request: <script type="text/javascript"> $(document).ready(function () { $('.del').click(function () { var obj = $(this); var id = obj.attr('rel'); $.post('add-news-item.aspx', { id: id }, function () { obj.parent().next().remove(); obj.parent().remove(); } ); }); }); </script> <a class="del" rel="1">...</a> <a class="del" rel="1">...</a> <a class="del" rel="1">...</a> My functions protected void addEntry(string title, string story) { XmlDocument news = new XmlDocument(); news.Load(Server.MapPath("../news.xml")); XmlAttributeCollection ids = news.Attributes; //Create a new node XmlElement newelement = news.CreateElement("article"); XmlElement xmlTitle = news.CreateElement("title"); XmlElement xmlStory = news.CreateElement("story"); XmlAttribute id = ids[0]; int myId = int.Parse(id.Value + 1); id.Value = ""+myId; newelement.SetAttributeNode(id); xmlTitle.InnerText = this.TitleBox.Text.Trim(); xmlStory.InnerText = this.StoryBox.Text.Trim(); newelement.AppendChild(xmlTitle); newelement.AppendChild(xmlStory); news.DocumentElement.AppendChild(newelement); news.Save(Server.MapPath("../news.xml")); } protected void deleteEntry(int selectIndex) { XmlDocument news = new XmlDocument(); news.Load(Server.MapPath("../news.xml")); XmlNode xmlnode = news.DocumentElement.ChildNodes.Item(selectIndex); xmlnode.ParentNode.RemoveChild(xmlnode); news.Save(Server.MapPath("../news.xml")); } I haven't updated deleteEntry() and you can see, I was using the array index but need to delete the article element based on the article id being passed. And when adding an entry, I need to set the id to the last elements id + 1. Yes, I know SQL would be 100 times easier, but I don't have access so... help?
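    For what it's worth, the id-based variant being asked about usually comes down to an XPath lookup on the attribute instead of a positional index. A sketch along those lines (same file layout and code-behind context as above; the helper name is just illustrative):

    protected void deleteEntry(int id)
    {
        XmlDocument news = new XmlDocument();
        news.Load(Server.MapPath("../news.xml"));

        // Find the <article> whose id attribute matches, instead of trusting its position.
        XmlNode article = news.SelectSingleNode(
            string.Format("/news/article[@id='{0}']", id));

        if (article != null)
        {
            article.ParentNode.RemoveChild(article);
            news.Save(Server.MapPath("../news.xml"));
        }
    }

    protected int nextArticleId(XmlDocument news)
    {
        // Next id = highest existing id + 1, or 1 for an empty file.
        int next = 1;
        foreach (XmlElement article in news.SelectNodes("/news/article"))
        {
            int current;
            if (int.TryParse(article.GetAttribute("id"), out current) && current >= next)
            {
                next = current + 1;
            }
        }
        return next;
    }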

    Read the article

  • Lightning talk: Coderetreat

    - by Michael Williamson
    In the spirit of trying to encourage more deliberate practice amongst coders in Red Gate, Lauri Pesonen had the idea of running a coderetreat in Red Gate. Lauri and I ran the first one a few weeks ago: given that neither of us had even been to a coderetreat before, let alone run one, I think it turned out quite well. The participants gave positive feedback, saying that they enjoyed the day, wrote some thought-provoking code and would do it again. Sam Blackburn was one of the attendees, and gave a lightning talk to the other developers in one of our regular lightning talk sessions: In case you can’t watch the video, I’ve transcribed the talk below, although I’d recommend watching the video if you can — I didn’t have much time to do the transcribing! So, what is a coderetreat? So it’s not just something in Red Gate, there’s a website and everything, although it’s not a very big website. It calls itself a community network. The basic ideas behind coderetreat are: you’ve got one day, and you split it into one hour sections. You spend three quarters of that coding, and do a little retrospective at the end. You’re supposed to start fresh each time; we were told to delete our code after every session. We were in pairs, swapping after each session, and we did the same task every time. In fact, Conway’s Game of Life is the only task mentioned anywhere that I can find for coderetreat. So I don’t know what we’ll do next time, or if we’re meant to do the same thing again. There are some guiding principles which felt to us like restrictions, that you have to code in crazy ways to encourage better code. Final thing is that it’s supposed to be free for outsiders to join. It’s meant to be a kind of networking thing, where you link up with people from other companies. We had a pilot day with Michael and Lauri. Since it was basically the first time any of us had done anything like this, everybody was from Red Gate. We didn’t chat to anybody else for the initial one. The task was Conway’s Game of Life, which most of you have probably heard of; all but one of us knew about it when we did the coderetreat. I won’t go into the details of what it is, but it felt like the right size of task, basically one or two groups actually produced something working by the end of the day, and of course that doesn’t mean it’s necessarily a day’s work to produce that because we were starting again every hour. The task really drives you more than trying to create good code, I found. It was really tempting to try and get it working rather than stick to the rules. But it’s really good to stop and try again because there are so many what-ifs when you’ve finished writing something, “what if I’d done it this way?”. You can answer all those questions at a coderetreat because it’s not about getting a product out the door, it’s about learning and playing with ideas. So we had all these different practices we were trying. I’ll try and go through most of these. Single responsibility is this idea that everything should do just one thing. It was the very first session, we were still trying to figure out how do you go about the Game of Life? So by the end of forty-five minutes we hadn’t produced very much for that first session. We were still thinking, “Do we start with a board, how do we represent all these squares? It can be infinitely big, help, this is getting really difficult!”. So, most of us didn’t really get anywhere on the first one. 
    Although it was interesting that some people started with the board, one group started with the FateDecider class that decides whether things live or die. A sort of god class, but in a good way. They managed to implement all of the rules without even defining how the squares were arranged or anything like that. Another thing we tried was TDD (test-driven development). I’m sure most of you know what TDD is: write a test, watch it fail for the right reason; write code to pass the test, watch it pass; refactor, check the test still passes; repeat! It basically worked, we were able to produce code, but we often found the tests defined the direction that code went, which is obviously the idea of TDD. But you tend to find that by the time you’ve even written your first assertion, which is supposed to be the very first thing you write, because you write your tests backwards from the assertions back to the initial conditions, you’ve already constrained the logic of the code in some way by the time you’ve done that. You then get to this situation of, “Well, we actually want to go in a slightly different direction. Can we do this?”. Can we write tests that don’t constrain the architecture? Wrapping up all primitives: it’s kind of turtles all the way down. We had a Size, which has a Width and Height, which both derive from Dimension. You’ve got pages of code before you’ve even done anything. No getters and setters (use tell don’t ask instead): mocks and stubs for tests are required if you want to assert that your results are what you think they should be. You can’t just check the internal state of the code. And people found that really challenging and it made them think in a different way which I think is really good. Not having mutable state: that was kind of confusing because we weren’t quite sure what fitted within that rule and what didn’t, and I think we were trying too hard to follow the rule rather than the guideline. No if-statements: supposed to use polymorphism instead, but polymorphism still requires a factory with conditional behaviour. We did something really crazy to get around this:

    public T If<T>(bool condition, Func<T> left, Func<T> right)
    {
        var dict = new Dictionary<bool, Func<T>> { { true, left }, { false, right } };
        return dict[condition].Invoke();
    }

    That is not really polymorphism, is it? For-loops: you can always replace a for-loop with recursion, but it doesn’t tend to make it any more readable unless it’s the kind of task that really lends itself to that. So it was interesting, it was good practice, but it wouldn’t make it easier unless it’s the kind of tree-structure algorithm where that would help. Having a limit on the number of levels of indentation: again, I think it does produce very nice, clean code, but it wasn’t actually a challenge because you just extract methods. That’s quite a useful thing because you can apply that to real code and say, “Okay, should this method really be going crazy like this?” No talking: we hated that. It’s like there’s two of you at a computer, and one of you is doing the typing, so what does the other guy do if they’re not allowed to talk? The answer is TDD ping-pong – one person writes the tests, and then the other person writes the code to pass the test. And that creates communication without actually having to have discussion about things which is kind of cool. No code comments: just makes no difference to anything. It’s a forty-five minute exercise, so what are you going to put comments in code for? Finally, this is my fault. 
    I discovered an entertaining way of doing the calculation that was kind of cool (using convolutions over the state of the board). Unfortunately, it turns out to be really hard to implement in C#, so I didn’t even manage to work out how to do that convolution in C#. It’s trivial in some high-level languages, but you need something matrix-orientated for it to really work. That’s most of it, really. The thoughts that people went away with: we put down our answers to questions like “What have you learnt?” and “What surprised you?”, “How are you going to do things differently?”, and most people said redoing the problem is really, really good for understanding it properly. People hate having a massive legacy codebase that they can’t change, so being able to attack something three different ways in an environment where the end-product isn’t important: that’s something people really enjoyed. Pair-programming: also people said that they wanted to do more of that, especially with TDD ping-pong, where you write the test and somebody else writes the code. Various people thought different things about immutables, but most people thought they were good, they promote functional programming. And TDD people found really hard. “Tell, don’t ask” people found really, really hard and really, really, really hard to do well. And the recursion just made things trickier to debug. But most people agreed that coderetreats are really cool, and we should do more of them.
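    As an aside, the “no if-statements” rule from the talk is usually illustrated with something like the following sketch (not code from the retreat): the conditional does not disappear, it just moves inside the state classes, which is exactly the factory problem Sam mentions.

    interface ICellState
    {
        ICellState Next(int liveNeighbours);
    }

    class Alive : ICellState
    {
        public ICellState Next(int liveNeighbours)
        {
            // A live cell survives with two or three neighbours, otherwise it dies.
            return (liveNeighbours == 2 || liveNeighbours == 3)
                ? (ICellState)this
                : new Dead();
        }
    }

    class Dead : ICellState
    {
        public ICellState Next(int liveNeighbours)
        {
            // A dead cell comes to life with exactly three neighbours.
            return liveNeighbours == 3 ? (ICellState)new Alive() : this;
        }
    }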

    Read the article

  • Reasons why I'm using PHP rather than ASP.NET [closed]

    - by spirytus
    I have a basic idea of how ASP.NET works, but I find the whole framework hard to use if you are a newbie. I found compiling, web applications vs. websites and all that stuff you should know to program in ASP.NET a bit tedious, and so personally I go with PHP to create small to medium applications for my clients. There are a couple of reasons for it: PHP is an easy scripting language: top to bottom and you're done. You can still create objects and classes, and if you have an idea of MVC it's fairly easy to create a basic structure yourself so you can keep your presentation layer "relatively" clean. Although I find myself still mixing basic logic into my views, I am trying to stick to booleans and foreach loops. ASP.NET keeps it cleaner as far as I know, and I agree that this is great. Heaps of free stuff for PHP and lots of help everywhere. Although the choice of IDEs for PHP is very limited, I still don't have to be stuck with Visual Studio. Let's be honest... you can program in whatever you like, but does anyone use anything else other than VS? For the basic applications I create, Visual Studio doesn't come even close to a notepad :) / phpEdit (or similar) combination. It lacks many features I constantly use, although armies of developers are using it and it must be for good reason. Personally I'm not a big fan of VS though. Being on the market for that long should make editing much easier. I know .NET comes with an awesome set of controls, validators etc., which is truly awesome. For me the problem starts if I want my validator to behave in a slightly different way and, let's say, fade in/out error messages. I know it's possible to extend its behavior, plug into the lifecycle and output different JS to the client and so on. I just never see it happen in the places I work, and honestly, I don't even think most of the .NET developers I worked with during the last couple of years would know how to do that. In PHP I have to grab some plugin for jQuery and use it for validation, which is a fairly easy task once you have done it before. Again I'm sure it's easy for .NET gurus, but for a newbie like me it's almost impossible. I found that many ASP.NET programmers are very limited in what they are able to do and basically whack together .NET applications using the same lame set of controls, not even bothering to look into how it works and ask "what if?" Now I don't want to anger anyone :) I know there is a huge number of excellent .NET developers who know their stuff and are able to extend controls and do all that magic in no time. I found it a bit annoying though that many of them stick to what is provided without even trying to make it better. Nothing against .NET here, just a thought really :) I remember when ASP.NET came out, the idea was that front-end people would not be able to screw anything up and could do their front-end stuff without worrying about what happens behind. Now it's never that easy and I always tend to get server-side people to fix this and that during development. Things like IDs assigned to controls can very easily make your application break, and if someone is a pure HTML guy using VS it's easy to break something. Those are my thoughts on PHP and .NET and the reasons why for my work I go with PHP. I know that once learned, ASP.NET is awesome technology and, summing it all up, PHP doesn't even come close to it. For someone like me however, individually developing small basic applications for clients, PHP seems to work much better. Please let me know your thoughts on the above :)

    Read the article

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is a powerful tool for integrating on-demand and interactive applications for publishing with the Oracle Documaker framework.  It's also no secret there are many details with respect to the configuration of Docupresentment that can elude even the most erudite of techies.  To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file.  But, where's the adventure in that?   With this inaugural post to That's The Way, I'm going to introduce myself, and what my aim is with this blog.  If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998.  Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology.  I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud!  Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was a kid!  This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A.  Exciting times.  So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing working for the best (IMHO) software company on the planet.  With That's The Way I hope to shed a little light and peek under the covers of some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration to name a few.  I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind) but I hope you'll glean at least a tidbit of usefulness with each post.  Feel free to comment as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part... So, back to our regularly-scheduled post, already in progress.  By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker.  Wait -- you didn't find it?  Understandable -- navigating the voluminous download library within Oracle can be a daunting task.  It's pretty simple once you've done it a few times.  Log in to the e-delivery site, and accept the license terms and restrictions.  Then, you'll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you'll see a list of applicable products, and you'll click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4): Finally, click the Download button next to Docupresentment (again, version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe. At this time, I'd like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms.  One of the reasons Skywire Software was appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning.  I apologize in advance and will try to point these out along the way.  
Here’s your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment Once you’ve completed the installation, you’ll have a shiny new Docupresentment server and client, and if you installed the default location it will be living in c:\docserv. Unix users, I’m one of you!  You’ll find it by default in  ~/docupresentment/docserv.  Forging onward with the meat of this post is learning about some special configuration options.  By now you’ve read the documentation included with the download (specifically ids_book.pdf) which goes into some detail of the rubric of the configuration file and in fact there’s even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation).  But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand! I shall now proceed with the standard Information Technology Under the Hood Disclaimer: Please remember to back up any files before you make changes.  I am not responsible for any havoc you may wreak! Go to your installation directory, and locate your docserv.xml file.  Open it in your favorite XML editor.  I happen to be fond of Notepad++ with the XML Tools plugin.  Almost immediately you will behold the splendor of the configuration file.  Just take a moment and let that sink in.  Ok – moving on.  If you reviewed the documentation you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings.  Let’s take a look at <section name=”DocumentServer”>: There are a few entries I’d like to discuss.  First, <entry name=”StartCommand”>. This should be pretty self-explanatory; it’s the name of the executable that’s run when you fire up Docupresentment.  Immediately following that is <entry name=”StartArguments”> and as you might imagine these are the arguments passed to the executable.  A few things to point out: The –Dids.configuration=docserv.xml parameter specifies the name of your configuration file. The –Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j so bone up on that before you delve here). The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd party Java libraries can be located for use with Docupresentment.  More on that in another post. The <entry name=”Instances”> allows you to specify the number of instances of Docupresentment that will be started.  By default this is two, and generally two instances per CPU is adequate, however you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions.  You may have many, many more instances than 2. Time for a sidebar on instances.  An instance is nothing more than a separate process of Docupresentment.  The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes.  Each of these act independently from one another, so if one crashes, it does not affect any others.  In the case of a crashed process, the watchdog will start up another instance so the number of configured instances are always running.  Bottom line: instance = Docupresentment process. And now, finally, to the settings which gave me pause on an not-too-long-ago implementation!  
Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes.  You can configure the time that Docupresentment waits to check these files using the setting <entry name=”FileWatchTimeMillis”>.  By default the number is 12000ms, or 12 seconds.  You can save yourself a few CPU cycles by extending this time, or by disabling  the check altogether by setting the value to 0.  This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don’t want to bring down an entire set of processes just to accept a new configuration value, so it’s best to leave this somewhere between 12 seconds to a few minutes.  Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment the Master Resource Library (MRL) files and INI options are cached, and if you need to affect a change, you’ll have to “restart” Docupresentment.  Touching the docserv.xml file is an easy way to do this (other methods including using the RSS request, but that’s another post). The next item up: <entry name=”FilePurgeTimeSeconds”>.  You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system.  What you may not know is how those files are cleaned up.  There are many rules in Docupresentment that cause the creation of temporary files.  When these files are created, Docupresentment writes an entry into a properties file called the file cache.  This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment.  Periodically Docupresentment will check the file cache to determine if there are files that are past the expiration time, not unlike that block of cheese festering away in the back of my refrigerator.  However, unlike my ‘fridge cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time.  You, my friend, have the power to control how often Docupresentment inspects the file cache.  Simply set the value for <entry name=”FilePurgeTimeSeconds”> to the number of seconds appropriate for your requirements and you’re set.  Note that file purging happens on a separate thread from normal request processing, so this shouldn’t interfere with response times unless the CPU happens to be really taxed at the point of cache processing.  Finally, after all of this, we get to the final setting I’m going to address in this post: <entry name=”FilePurgeList”>.  The default is “filecache.properties”.  This establishes the root name for the Docupresentment file cache that I mentioned previously.  Docupresentment creates a separate cache file for each instance based on this setting.  If you have two instances, you’ll see two files created: filecache.properties.1 and filecache.properties.2.  Feel free to open these up and check them out. I hope you’ve enjoyed this first foray into the configuration file of Docupresentment.  If you did enjoy it, feel free to drop a comment, I welcome feedback.  If you have ideas for other posts you’d like to see, please do let me know.  You can reach me at [email protected]. ‘Til next time! ###
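As a quick recap of the entries covered above, here's an illustrative sketch of how that DocumentServer section of docserv.xml might look. The entry names are the ones discussed in this post; the StartCommand/StartArguments values and the purge interval are placeholders rather than shipped defaults, so compare against your own generated file before changing anything.

    <section name="DocumentServer">
      <!-- Executable and arguments the watchdog uses to launch each instance (placeholder values) -->
      <entry name="StartCommand">java</entry>
      <entry name="StartArguments">-Dids.configuration=docserv.xml -Dlogging.configuration=logconf.xml -Djava.endorsed.dirs=lib/endorsed</entry>
      <!-- Number of Docupresentment processes kept running; two per CPU is a reasonable starting point -->
      <entry name="Instances">2</entry>
      <!-- How often (milliseconds) configuration files are checked for changes; 0 disables the check -->
      <entry name="FileWatchTimeMillis">12000</entry>
      <!-- How often (seconds) the file cache is scanned for expired temporary files (illustrative value) -->
      <entry name="FilePurgeTimeSeconds">60</entry>
      <!-- Root name of the per-instance file cache: filecache.properties.1, filecache.properties.2, ... -->
      <entry name="FilePurgeList">filecache.properties</entry>
    </section>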

    Read the article

  • How did I get here? My route to Android, iPhone, Windows Phone 7, and interest in Mobile Devices

    - by Wallym
    I get asked all the time how and why I got interested in mobile and jumped on this fairly early.  I tend to give half answers because it wasn't just one thing that took me to mobile, but a whole host of separate events culminating in a specific event when I was doing market research in May/June 2008.  Let me throw out the events and the facts about me: I tend to like new, different, cool stuff.  I jumped on .NET early on.  I jumped on Ajax early on.  I don't jump on every new technology that comes down the road; I'm probably the only person on the planet that doesn't "get" MVC, though I acknowledge that a lot of people do and it solves a number of problems in the default settings of ASP.NET WebForms. I remember buying an early Windows CE device. It was interesting, but dang, this stylus thing sucks. After I lost my third stylus, I just gave up.  I got my first mobile phone in early 1999.  Reception was crappy, but I could see the value in being mobile. In 1999, I worked on a manufacturing systems project.  One piece of the project was a set of handheld devices on the shop floor.  While the UI was a crappy DOS-based one (yes, I said DOS, as in Disk Operating System version 6.22), I could see that the wireless world was a direction I wanted to be in. In 2000, Microsoft released the first public alpha of .NET.  Very cool stuff indeed.  One piece of the puzzle was a set of mobile controls for ASP.NET.  I built numerous test apps as well as mobile versions using these mobile controls.  Now, the mobile UIs of the time were based on WML, which was crap. I could read all the analysis of mobile and read all about growth rates.  Now, you have to realize that growth rates can be impressive when dealing with small numbers, but I knew it was a comer. In our first book, I got talked out of mobile because of the line from the publisher: "Wally, mobile doesn't sell." BlackBerry was the dominant device of the mid-2000s.  Its users were referred to as "Crackberry addicts."  Unfortunately, the mobile development experience for native apps was crap and the web experience was fairly rough as well, but if they could get the ecosystem started, other phones and better BlackBerrys would come out.  I finally jumped into using a BlackBerry. Sometime around 2006, I heard "Wally, mobile doesn't sell" again.  Now, anyone that knows me knows that someone saying something like this to me means I'll keep trying it. The phones of the mid-2000s were moving to be more graphical, but there were too many that had this idea that they had to use a stylus.  Styluses suck.  They get lost too easily. I worked on a project in 2007 and 2008 for a startup trying to answer the question of "What is there to do where I am at?"  For some reason, they wanted to be tied to PCs.  As it became obvious that they were having problems, their investor asked us to do some market research and to figure out what the marketplace did want.  One of the important things that I figured out was that we lived in a mobile world, and if you had a mobile app, it needed to be on a mobile device, not tied to a desktop/laptop/netbook device.  If there was any single event, this was it - I was doing some market research and sat and talked to people in a bar/restaurant in Atlanta called "The Grove" on Lavista.  The consensus of the people that I talked to was that they wanted their data wherever they were: laptop, PC, mobile, wherever. In 2007, Apple released the iPhone.  Wow, what an impressive device, even with all the problems of a first-generation device.  
I bought a first-generation iPod Touch to understand touch better, one of the best decisions I ever made. I decided in late 2008 to make a move into the cloud, for a number of reasons.  I was working on an example app.  In April 2009, one of my friends at Microsoft said, "Don't mention my name with this, but you need an iPhone front end for this app."  How do you get on the iPhone?  Well, there are a number of ways, including: Objective-C.  It's hard to teach an old dog new tricks, and this dog knows .NET, not Objective-C. An HTML/web/JavaScript-optimized interface.  Yeah, this is possible. PhoneGap.  Now this is interesting: take an HTML interface and get it to run on the iPhone, Android, BlackBerry, and other platforms.  I thought that this way made the most sense for me until... MonoTouch.  In May/June 2009, Novell announced a way for .NET/C# developers to write apps for the iPhone.  This is the way that made the most sense to me. Titanium by Appcelerator.  This is similar in concept to PhoneGap.  I haven't played with this much but do want to learn more about it. In July 2009, I emailed one of my contacts at Wrox to see if they would be interested in a short MonoTouch ebook in their Wrox Blox format.  I fully expected another response along the lines of "Wally, mobile doesn't sell."  The response I got was "Wally, iPhone is H O T, get started immediately, can you have this to me before Labor Day?"  Not quite the response I expected.  Thankfully, we didn't make the Labor Day first-draft date. I kept pushing back because I had a feeling that things were not going to be quite as polished and feature-rich as necessary.  After all, Novell doesn't have the resources of Microsoft's developer division. The ebook shipped on November 30, 2009. On about December 15, 2009, my editor emailed and said, "Your ebook is selling really well, let's do a full book and have it by March 1, so get started."  Thankfully, guys like Craig Dunn and Chris Hardy were interested, along with Martin, and Rory joined us later on. I bought my wife an iPhone 3GS in early 2010 to go along with all my iPod Touch devices. I tried to pretend in 2010 that I wasn't that interested in mobile and still had interest in the desktop technologies.  I love the technologies and continue to use them today, but that isn't where my interest is right now.  I'm just about all mobile, all the time with my energies.  Our book shipped at the beginning of July 2010, right in the middle of the Apple FUD. I've been looking at the mobile web as a way around the app stores and Apple FUD problems of 2010. With all the Apple self-FUD, we became interested in Android. I went up to Dino Esposito at DevConnections in Las Vegas and introduced myself. I've always tried to keep up with what Dino has been doing. I was shocked; he wanted to meet me.  We must have talked for 1.5 hours. It was way more time than I deserved. If you get a chance, go and introduce yourself to Dino. He's a great guy. Microsoft released Windows Phone 7 in the fall of 2010.  I'm not doing development on that platform at this time.  I think they have a very interesting user interface.  The devices are being positively reviewed.  For my purposes, the devices are limited at this point in time.  We'll see what 2011 brings as far as updates to the operating system.  I need multitasking/background processing and HTML5 in the browser. Add that, as well as acceptance in the marketplace, and I'll be more interested in the device. Obviously, I'm now working on a MonoDroid book. 
I own Android and iPhone/iOS devices.  I am currently working on some startup ideas and am exploring as much in that area as I can. For 2011, I'm planning on speaking at the Android Developer Conference (AnDevCon) and Mobile Connections.  I'm really excited about this. I have a couple of magazine articles coming out in 2011 on Android and iPhone development with the Mono technologies. Is Mono "the answer"? What's "the question"? I think it will work for me.  It might work for you, it might not; it depends on your situation.  It's the current horse that I am riding. I might find a better horse tomorrow. So, that's how I got here.  I'm in love with mobile.  Native apps on the device as well as the mobile web.  I'm into all this cool stuff.  Where are you at?

    Read the article

  • How I understood monads, part 1/2: sleepless and self-loathing in Seattle

    - by Bertrand Le Roy
    For some time now, I had been noticing some interest for monads, mostly in the form of unintelligible (to me) blog posts and comments saying “oh, yeah, that’s a monad” about random stuff as if it were absolutely obvious and if I didn’t know what they were talking about, I was probably an uneducated idiot, ignorant about the simplest and most fundamental concepts of functional programming. Fair enough, I am pretty much exactly that. Being the kind of guy who can spend eight years in college just to understand a few interesting concepts about the universe, I had to check it out and try to understand monads so that I too can say “oh, yeah, that’s a monad”. Man, was I hit hard in the face with the limitations of my own abstract thinking abilities. All the articles I could find about the subject seemed to be vaguely understandable at first but very quickly overloaded the very few concept slots I have available in my brain. They also seemed to be consistently using arcane notation that I was entirely unfamiliar with. It finally all clicked together one Friday afternoon during the team’s beer symposium when Louis was patient enough to break it down for me in a language I could understand (C#). I don’t know if being intoxicated helped. Feel free to read this with or without a drink in hand. So here it is in a nutshell: a monad allows you to manipulate stuff in interesting ways. Oh, OK, you might say. Yeah. Exactly. Let’s start with a trivial case: public static class Trivial { public static TResult Execute<T, TResult>( this T argument, Func<T, TResult> operation) { return operation(argument); } } This is not a monad. I removed most concepts here to start with something very simple. There is only one concept here: the idea of executing an operation on an object. This is of course trivial and it would actually be simpler to just apply that operation directly on the object. But please bear with me, this is our first baby step. Here’s how you use that thing: "some string" .Execute(s => s + " processed by trivial proto-monad.") .Execute(s => s + " And it's chainable!"); What we’re doing here is analogous to having an assembly chain in a factory: you can feed it raw material (the string here) and a number of machines that each implement a step in the manufacturing process and you can start building stuff. The Trivial class here represents the empty assembly chain, the conveyor belt if you will, but it doesn’t care what kind of raw material gets in, what gets out or what each machine is doing. It is pure process. A real monad will need a couple of additional concepts. Let’s say the conveyor belt needs the material to be processed to be contained in standardized boxes, just so that it can safely and efficiently be transported from machine to machine or so that tracking information can be attached to it. Each machine knows how to treat raw material or partly processed material, but it doesn’t know how to treat the boxes so the conveyor belt will have to extract the material from the box before feeding it into each machine, and it will have to box it back afterwards. This conveyor belt with boxes is essentially what a monad is. It has one method to box stuff, one to extract stuff from its box and one to feed stuff into a machine. So let’s reformulate the previous example but this time with the boxes, which will do nothing for the moment except containing stuff. 
public class Identity<T> { public Identity(T value) { Value = value; } public T Value { get; private set;} public static Identity<T> Unit(T value) { return new Identity<T>(value); } public static Identity<U> Bind<U>( Identity<T> argument, Func<T, Identity<U>> operation) { return operation(argument.Value); } } Now this is a true to the definition Monad, including the weird naming of the methods. It is the simplest monad, called the identity monad and of course it does nothing useful. Here’s how you use it: Identity<string>.Bind( Identity<string>.Unit("some string"), s => Identity<string>.Unit( s + " was processed by identity monad.")).Value That of course is seriously ugly. Note that the operation is responsible for re-boxing its result. That is a part of strict monads that I don’t quite get and I’ll take the liberty to lift that strange constraint in the next examples. To make this more readable and easier to use, let’s build a few extension methods: public static class IdentityExtensions { public static Identity<T> ToIdentity<T>(this T value) { return new Identity<T>(value); } public static Identity<U> Bind<T, U>( this Identity<T> argument, Func<T, U> operation) { return operation(argument.Value).ToIdentity(); } } With those, we can rewrite our code as follows: "some string".ToIdentity() .Bind(s => s + " was processed by monad extensions.") .Bind(s => s + " And it's chainable...") .Value; This is considerably simpler but still retains the qualities of a monad. But it is still pointless. Let’s look at a more useful example, the state monad, which is basically a monad where the boxes have a label. It’s useful to perform operations on arbitrary objects that have been enriched with an attached state object. public class Stateful<TValue, TState> { public Stateful(TValue value, TState state) { Value = value; State = state; } public TValue Value { get; private set; } public TState State { get; set; } } public static class StateExtensions { public static Stateful<TValue, TState> ToStateful<TValue, TState>( this TValue value, TState state) { return new Stateful<TValue, TState>(value, state); } public static Stateful<TResult, TState> Execute<TValue, TState, TResult>( this Stateful<TValue, TState> argument, Func<TValue, TResult> operation) { return operation(argument.Value) .ToStateful(argument.State); } } You can get a stateful version of any object by calling the ToStateful extension method, passing the state object in. You can then execute ordinary operations on the values while retaining the state: var statefulInt = 3.ToStateful("This is the state"); var processedStatefulInt = statefulInt .Execute(i => ++i) .Execute(i => i * 10) .Execute(i => i + 2); Console.WriteLine("Value: {0}; state: {1}", processedStatefulInt.Value, processedStatefulInt.State); This monad differs from the identity by enriching the boxes. There is another way to give value to the monad, which is to enrich the processing. An example of that is the writer monad, which can be typically used to log the operations that are being performed by the monad. Of course, the richest monads enrich both the boxes and the processing. That’s all for today. I hope with this you won’t have to go through the same process that I did to understand monads and that you haven’t gone into concept overload like I did. Next time, we’ll examine some examples that you already know but we will shine the monadic light, hopefully illuminating them in a whole new way. 
Realizing that this pattern is actually in many places but mostly unnoticed is what will enable the truly casual “oh, yes, that’s a monad” comments. Here’s the code for this article: http://weblogs.asp.net/blogs/bleroy/Samples/Monads.zip The Wikipedia article on monads: http://en.wikipedia.org/wiki/Monads_in_functional_programming This article was invaluable for me in understanding how to express the canonical monads in C# (interesting Linq stuff in there): http://blogs.msdn.com/b/wesdyer/archive/2008/01/11/the-marvels-of-monads.aspx
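As a companion to the state monad above, here is a minimal sketch of the writer monad the article mentions but doesn't show. The class and method names here are mine, written in the same extension-method style, so treat it as an illustration rather than a canonical implementation; it assumes a using System; directive for Func and Environment.

    public class Logged<T> {
        public Logged(T value, string log) { Value = value; Log = log; }
        public T Value { get; private set; }
        public string Log { get; private set; }
    }

    public static class WriterExtensions {
        // Box a value with an empty log.
        public static Logged<T> ToLogged<T>(this T value) {
            return new Logged<T>(value, "");
        }

        // Run the operation and append a line describing it to the accumulated log.
        public static Logged<U> Bind<T, U>(
            this Logged<T> argument, Func<T, U> operation, string description) {
            return new Logged<U>(
                operation(argument.Value),
                argument.Log + description + Environment.NewLine);
        }
    }

Using it reads just like the other monads: var result = 3.ToLogged().Bind(i => i + 1, "incremented").Bind(i => i * 10, "multiplied by ten"); leaves result.Value at 40 and result.Log with one line per step, which is the "enrich the processing" idea described above.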

    Read the article

  • Should we hire someone who writes C in Perl?

    - by paxdiablo
    One of my colleagues recently interviewed some candidates for a job and one said they had very good Perl experience. Since my colleague didn't know Perl, he asked me for a critique of some code written (off-site) by that potential hire, so I had a look and told him my concerns (the main one was that it originally had no comments and it's not like we gave them enough time). However, the code works so I'm loathe to say no-go without some more input. Another concern is that this code basically looks exactly how I'd code it in C. It's been a while since I did Perl (and I didn't do a lot, I'm more a Python bod for quick scripts) but I seem to recall that it was a much more expressive language than what this guy used. I'm looking for input from real Perl coders, and suggestions for how it could be improved (and why a Perl coder should know that method of improvement). You can also wax lyrical about whether people who write one language in a totally different language should (or shouldn't be hired). I'm interested in your arguments but this question is primarily for a critique of the code. The spec was to successfully process a CSV file as follows and output the individual fields: User ID,Name , Level,Numeric ID pax, Pax Morgan ,admin,0 gt," Turner, George" rubbish,user,1 ms,"Mark \"X-Men\" Spencer","guest user",2 ab,, "user","3" The output was to be something like this (the potential hire's code actually output this): User ID,Name , Level,Numeric ID: [User ID] [Name] [Level] [Numeric ID] pax, Pax Morgan ,admin,0: [pax] [Pax Morgan] [admin] [0] gt," Turner, George " rubbish,user,1: [gt] [ Turner, George ] [user] [1] ms,"Mark \"X-Men\" Spencer","guest user",2: [ms] [Mark "X-Men" Spencer] [guest user] [2] ab,, "user","3": [ab] [] [user] [3] Here is the code they submitted: #!/usr/bin/perl # Open file. open (IN, "qq.in") || die "Cannot open qq.in"; # Process every line. while (<IN>) { chomp; $line = $_; print "$line:\n"; # Process every field in line. while ($line ne "") { # Skip spaces and start with empty field. if (substr ($line,0,1) eq " ") { $line = substr ($line,1); next; } $field = ""; $minlen = 0; # Detect quoted field or otherwise. if (substr ($line,0,1) eq "\"") { $line = substr ($line,1); $pastquote = 0; while ($line ne "") { # Special handling for quotes (\\ and \"). if (length ($line) >= 2) { if (substr ($line,0,2) eq "\\\"") { $field = $field . "\""; $line = substr ($line,2); next; } if (substr ($line,0,2) eq "\\\\") { $field = $field . "\\"; $line = substr ($line,2); next; } } # Detect closing quote. if (($pastquote == 0) && (substr ($line,0,1) eq "\"")) { $pastquote = 1; $line = substr ($line,1); $minlen = length ($field); next; } # Only worry about comma if past closing quote. if (($pastquote == 1) && (substr ($line,0,1) eq ",")) { $line = substr ($line,1); last; } $field = $field . substr ($line,0,1); $line = substr ($line,1); } } else { while ($line ne "") { if (substr ($line,0,1) eq ",") { $line = substr ($line,1); last; } if ($pastquote == 0) { $field = $field . substr ($line,0,1); } $line = substr ($line,1); } } # Strip trailing space. while ($field ne "") { if (length ($field) == $minlen) { last; } if (substr ($field,length ($field)-1,1) eq " ") { $field = substr ($field,0, length ($field)-1); next; } last; } print " [$field]\n"; } } close (IN);
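For contrast, one more Perl-idiomatic direction (shown only as a hedged sketch: it leans on the core Text::ParseWords module, and the deliberately malformed third row of the sample, with text after the closing quote, plus the exact trimming rules would still need extra handling) is to let a parsing routine deal with quotes and backslash escapes instead of walking the string with substr:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::ParseWords qw(parse_line);

    open my $in, '<', 'qq.in' or die "Cannot open qq.in: $!";
    while (my $line = <$in>) {
        chomp $line;
        print "$line:\n";
        # parse_line() splits on the delimiter while honouring double quotes
        # and backslash escapes such as \" and \\.
        my @fields = parse_line(',', 0, $line);
        for my $field (@fields) {
            $field = '' unless defined $field;   # empty fields come back as undef
            print " [$field]\n";
        }
    }
    close $in;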

    Read the article

  • What precautions should you take when a senior employee leaves?

    - by Mahin
    EDIT: I agree one should check the reasons why a senior-level employee is leaving. But I am interested in knowing the official/management/technical/legal steps one should take after it's decided that he is leaving, so that life after him is smooth. What steps should management take when a senior programmer/team lead leaves your company? Some that I have thought about are: 1) If he used to manage hosting and domains, change the passwords of the domain control panels and hosting panels. 2) If your published web sites have a maintenance account and he knows the credentials of that account, change those details as well. 3) Suspend his mail account for some time and forward all emails from that account to a designated ex-employee account. After some time, close that account. What are the other things one should check? I am expecting the answer to be a general checklist one should follow. It should include both technical scenarios and management scenarios. Notable suggestions so far: Effectively transfer the responsibilities of that employee to another one without causing any potential delay in your work. Protect your source code; if possible, make them sign something to say that they don't have copies of the source code. You can also consider an NDA here. Use the notice period to train his replacement. Any new code for the project should now be written by the replacement with the help of the person who is leaving. Ask him to create a document of things he thinks you should know. Make sure he checks everything in now, and that any checkout from then on is done only by the replacement. Emails: copy his email account off to a .pst file (this assumes Outlook) and make this file available to his replacement. The employee should probably be given a chance to scrub the email. If you are going to keep his account open for whatever reason, check that no rules have been created that forward incoming emails to an alternate address. Copy the hard drive of his computer to a network location and have someone senior go through it to see if there are any files (drafts of performance reviews or other sensitive material) on it that someone else might need. Get clearance from the Accounts, Finance, Security, Library, etc. departments. Obtain all company property: laptops, keys, and so on. If there is no reason not to, you should reward a departing person for their many years of service. Write them letters of recommendation (even if they already have a new job lined up). Say goodbye, and keep the door open. Make sure any outside clients know that the departing employee is not their main contact anymore. Never neglect the exit interview/debriefing. Confirm the last day of employment so that there is no misunderstanding. Inform HR if the employee is on H-1B status; there is paperwork required to notify the government when an H-1B employee leaves. Depending on how senior he is and what position he holds, you might spend some time convincing him not to take the rest of the engineering staff with him. Make sure he spends his last days on a good note, because if he is not leaving on a good note, he can easily pollute the minds of his colleagues. Best regards, Mahin Gupta. EDIT: I've now offered a bounty on it to get more detailed responses and practical suggestions.

    Read the article

  • SQL Cruise Alaska 2011

    - by Grant Fritchey
    I had the extreme good fortune to get sent on the last SQL Cruise to Alaska. I love my job. In case you don't know what this is, SQL Cruise is a trip on a cruise ship during which you get to attend classes while on the boat, learning all about SQL Server and related topics, as well as networking with the instructors and the other Cruisers. Frankly, it's amazing. Classes ran from Monday, 5/30, to Saturday, 6/4. The networking was constant: between classes, at night on the cruise ship, out on excursions in Alaskan rainforests and while snorkeling in ocean waters. Here's a rundown of the experience from my point of view. Because I couldn't travel out 2 days early, I missed the BBQ that occurred the day before the cruise, when many of the Cruisers received their swag bags. Some of that swag came from Red Gate. I researched what was useful on a cruise like this and purchased small flashlights and binoculars for all the Cruisers. The flashlights were because, depending on your cabin, ships can be very dark. The binoculars were so that the Cruisers could watch all the beautiful landscape as it flowed by. I would have liked to have been there when the bags were opened, but I heard from several people that they appreciated the gifts. Cruisers "in" the hot tub. Pictured: Marjory Woody, Michele Grondin, Kyle Brandt, Grant Fritchey, John Halunen. Sunday I went to board the ship with my wife. We had a bit of an adventure because I messed up our documents. It all worked out and we got on board to meet up at the back of the boat at one of the outdoor bars with the other Cruisers, thanks to tweets letting everyone know where to go. That was the end of electronic coordination on the trip (connectivity in Alaska was horrible for everyone except AT&T). The Cruisers were a great bunch of people and it was a real honor to meet them and get to spend time with them. After everyone settled into their cabins, our very first activity was a contest, sponsored by Red Gate. The Cruisers, in an effort to get to know each other and the ship, were required to go all over taking various photographs, some of them hilarious. The winning team of three would all win prizes. Some of the significant others helped out and I tagged along with a team that tied for first but lost the coin toss. The winning team consisted of Christina Leo (blog|twitter), Ryan Malcolm (twitter) and Neil Hambly (blog|twitter). They then had to do math and identify the cabin with the lowest prime number, oh, and get a picture of it and be the first to get back up to the bar where we were waiting. Christina came in first and very happily carried home an iPad 2. Ryan won a 1TB portable hard drive and Neil won a wireless mouse (picture below; note my special SQL Server Central Friday shirt. Thanks Steve (blog|twitter)). Winners: Christina Leo, Neil Hambly, Ryan Malcolm. Just Lucky: Grant Fritchey. Monday morning classes started. Buck Woody (blog|twitter) was a special guest speaker on this cruise. His theme was "Three C's on the High Seas: Career, Communication and Cloud." The first session was all on Career. I'm not going to type out all my notes from the session, but let's just say, if you get the chance to hear Buck talk about how to manage your career, I suggest you attend. I have a ton of blog posts that I'll be putting together over the next several months (yes, months), both here and over on ScaryDBA. I also have a bunch of work I'm going to be doing to get my career performance bumped up a notch or two (and let's face it, that won't be easy). 
Later on Monday, Tim Ford (blog|twitter) did a session on DMOs. Specifically, the session was on Tim's Periodic Table of DMOs that he has put together, and how to use some of the more interesting DMOs in your day-to-day job. It was a great session, packed with good information. Next, Brent Ozar (blog|twitter) did a session on how to monitor and guide SAN configuration for the DBA that doesn't have access to the SAN. That was some seriously useful information. Tuesday morning we only had a single class. Kendra Little (blog|twitter) taught us all about "No Lock for Yes Fun". It was all about the different transaction isolation levels and how they work. There is so often confusion in this area, and Kendra does a great job in clarifying the information. Also, she tosses in her excellent drawings to liven up the presentation. Then it was excursion time in Juneau. My wife and I, along with several other Cruisers, took a hike up around the Mendenhall Glacier. It was absolutely beautiful weather and walking through the Alaskan rain forest was a treat. Our guide, Jason, was a great guy and it was a good day of hiking. Wednesday was an all-day excursion in Skagway. My wife and I took the "Ghost and Good Time Girls" walking tour that ended up at a bar that used to be a brothel, the Red Onion. It was a great history of the town. We went back out and hit a few museums and exhibits. We also hiked up the side of the mountain to see Dewey Lake and some great views of the town. Finally we hiked out to the far side of town to see the Gold Rush cemetery. Hiking done, we went back to the boat and had a quiet dinner on our own. Thursday we cruised through Glacier Bay and saw at least four different glaciers, including sitting next to the Marjory Glacier for about an hour. It was amazing. Then it got better. We went into class with Buck again, this time to talk about Communication. Again, I've got pages of notes that I'm going to be referring back to for some time to come. This was an excellent opportunity to learn. Snorkelers: Nicole Bertrand, Aaron Bertrand, Grant Fritchey, Neil Hambly, Christina Leo, John Robel, Yanni Robel, Tim Ford. Friday we pulled into Ketchikan. A bunch of us went snorkeling. Yes, snorkeling. Yes, in Alaska. Yes, snorkeling in the ocean in Alaska. It was fantastic. They had us put on 7mm-thick wet suits (an adventure all by itself), so it was basically warm the entire time we were in the water (except for the occasional squirt of cold water down my back). Before we got in the water a bald eagle flew up and landed about 15 feet in front of us, which was just an incredible event. Then our guide pointed out about 14 other eagles in the area, hanging out in the trees. Wow! The water was pretty clear and there was a ton of things to see. That was absolutely a blast. Back on the boat I presented a session called Execution Plans: The Deep Dive (note the nautical theme). It seemed to go over well and I had several good questions come out of the session that will lead to new blog posts. After I presented, it was Aaron Bertrand's (blog|twitter) turn. He did a session on "What's New in Denali" that provided a lot of great information. He was able to incorporate new things straight out of Tech-Ed, so this was expanded beyond his usual presentation. The man really knows what he's talking about and communicates it well. Saturday we were travelling, so there was time for a bunch of classes. 
Jeremiah Peschka (blog|twitter) did a great overview of some of the NoSQL databases and what they should be used for. The session was called "The Database is Dead" but it was really about how there are specific uses for these databases that SQL Server doesn't fill, but also that these databases can't replace SQL Server in other areas. Again, good material. Brent Ozar presented again with a session on Defensive Indexing. It was an overview of how indexes work and a deep dive into how to apply them appropriately in your databases to better support access. A good session, as you would expect. Then we pulled into Victoria, BC, in Canada and had a nice dinner with several of the Cruisers, including Denny Cherry (blog|twitter). After that it was back to Seattle on Sunday. By the way, the Science Fiction Museum in Seattle isn't a Science Fiction Museum any more. I was very disappointed to discover this. Overall, it was a great experience. I'm extremely appreciative of Red Gate for sending me and for Tim, Brent, Kendra and Jeremiah for having me. The other Cruisers were all amazing people and it was an honor & privilege to meet them and spend time with them. While this was a seriously fun time, it was also a very serious training opportunity with solid information coming from seasoned industry pros.

    Read the article

  • How do I use connect to DB2 with DBI and mod_perl?

    - by Matthew
    I'm having issues getting DBI's IBM DB2 driver to work with mod_perl. My test script is: #!/usr/bin/perl use strict; use CGI; use Data::Dumper; use DBI; { my $q; my $dsn; my $username; my $password; my $sth; my $dbc; my $row; $q = CGI->new; print $q->header; print $q->start_html(); $dsn = "DBI:DB2:SAMPLE"; $username = "username"; $password = "password"; print "<pre>".$q->escapeHTML(Dumper(\%ENV))."</pre>"; $dbc = DBI->connect($dsn, $username, $password); $sth = $dbc->prepare("SELECT * FROM SOME_TABLE WHERE FIELD='SOMETHING'"); $sth->execute(); $row = $sth->fetchrow_hashref(); print "<pre>".$q->escapeHTML(Dumper($row))."</pre>"; print $q->end_html; } This script works as CGI but not under mod_perl. I get this error in Apache's error log: DBD::DB2::dr connect warning: [unixODBC][Driver Manager]Data source name not found, and no default driver specified at /usr/lib/perl5/site_perl/5.8.8/Apache/DBI.pm line 190. DBI connect('SAMPLE','username',...) failed: [unixODBC][Driver Manager]Data source name not found, and no default driver specified at /data/www/perl/test.pl line 15 First of all, why is it using ODBC? The native DB2 driver is installed (hence it works as CGI). Running Apache 2.2.3, mod_perl 2.0.4 under RHEL5. This guy had the same problem as me: http://www.mail-archive.com/[email protected]/msg22909.html But I have no idea how he fixed it. What does mod_php4 have to do with mod_perl? Any help would be greatly appreciated; I'm having no luck with Google. Update: As james2vegas pointed out, the problem has something to do with PHP: if I disable PHP altogether I get a different error: Total Environment allocation failure! Did you set up your DB2 client environment? I believe this error is to do with environment variables not being set up correctly, namely DB2INSTANCE. However, I'm not able to turn off PHP to resolve this problem (I need it for some legacy applications). So I now have two questions: How can I fix the original issue without disabling PHP altogether? How can I fix the environment issue? I've set the DB2INSTANCE, DB2_PATH and SQLLIB variables correctly using SetEnv and PerlSetEnv in httpd.conf, but with no luck. Note: I've edited the code to determine if the problem was to do with global variable persistence.
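One direction that is often suggested for this kind of "works as CGI, fails under mod_perl" problem (offered strictly as a hedged sketch, since the question above is unresolved and the instance name and path here are placeholders) is to get the DB2 environment into %ENV before DBI and DBD::DB2 are ever loaded, for example in a startup script pulled in with PerlRequire, rather than relying on per-request SetEnv/PerlSetEnv:

    # startup.pl -- loaded via "PerlRequire /path/to/startup.pl" in httpd.conf
    BEGIN {
        $ENV{DB2INSTANCE} = 'db2inst1';          # placeholder instance name
        $ENV{DB2DIR}      = '/opt/ibm/db2/V9.5'; # placeholder install path
    }
    use DBI ();
    use DBD::DB2 ();   # preload the native driver while the environment is in place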

    Read the article

  • Mimicking a HBox / VBox with CSS

    - by Daniel Hai
    I'm an old school tables guy, and am pretty baffled when it comes to modern HTML. I'm trying to something as simple as vertical / horizontal layouts (i.e. Flex's hbox/vbox), but am having major difficulty replicating them. An old table would look something like this for an HBox: <table width="100%" height="100"> <tr valign="middle"> <td nowrap style="background-color:#CCC">I am text on grey</td> <td width="50%" valign="top">displays top</td> <td width="50%" align="right">Autosize Fill (displays bottom right)</td> </tr> </table> Now I'm trying to do this with div's, but to no avail. When using display:inline, I cannot set a percentage width -- it only takes explicit widths. When using float:left, settings 100% percentage width causes it to really be 100% and pushes the next div down. Here's the code I've been playing with: <html> <head> </head> <style type="text/css"> div.test { background-color: #EE9; padding:5px;} body { font-family: Arial; } ul { list-style-type:none; } ul li { float:left; } .hboxinline div { display: inline; } .hboxfloat div { float:left; } .cellA { background-color:#CCC; width:100%; } .cellB { background-color:#DDD; min-width:100; } .cellC { background-color:#EEE; min-width:200; } </style> <body> A = 100%, b = 100, c = 200 <div class="test">old school table <table cellpadding="0" cellspacing="0"> <tr> <td class="cellA">A</td> <td class="cellB">B</td> <td class="cellC">C</td> </tr> </table></div> <br/> <div class="test"> float:left <div class="hboxinline"> <div class="cellA">A</div> <div class="cellB">B</div> <div class="cellC">C</div> </div> </div> <br/> <div class="test">ul / li <ul class="ulli"> <li class="cellA">A</li> <li class="cellB">B</li> <li class="cellC">C</li> </ul> </div> <br/> <div class="test"> display:inline <div class="hboxfloat"> <div class="cellA">A</div> <div class="cellB">B</div> <div class="cellC">C</div> </div> </div> </body> </html>
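For what it's worth, one commonly suggested way to get the old td sizing model without a table (a sketch only; it relies on display:table-cell, which needs IE8 or later) is the CSS table display modes, where a fluid 100%-width cell soaks up the slack and the fixed cells keep their widths:

    <div class="hbox">
      <div class="cellA">A</div>
      <div class="cellB">B</div>
      <div class="cellC">C</div>
    </div>

    <style type="text/css">
      .hbox       { display: table; width: 100%; }
      .hbox > div { display: table-cell; vertical-align: middle; }
      /* Same idea as the table version: A is the elastic cell, B and C hold their widths */
      .cellA { background-color: #CCC; width: 100%; }
      .cellB { background-color: #DDD; width: 100px; white-space: nowrap; }
      .cellC { background-color: #EEE; width: 200px; white-space: nowrap; }
    </style>

One thing worth double-checking in the snippet above: min-width:100 and min-width:200 have no units, so in standards mode those declarations are simply dropped, which adds to the confusion when comparing the float and inline attempts.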

    Read the article

  • Adopting Technologies for the Sake of Technologies

    - by shiju
    Unlike other engineering industries, the software engineering industry really lacks maturity. That lack of maturity shows in different aspects of the entire software development life cycle. I think other engineering industries are well organised and structured, with common, proven engineering practices. The software engineering industry is a greatly diverse industry with different operating systems and a variety of development platforms, programming languages, frameworks and tools. These days, people chase hype and intellectual fashions without understanding their core business problems, adopting technologies and practices for the sake of technologies and practices and simply becoming a "poster child" of those technologies and practices. Understanding the core business problem and providing the best, solid solution with a platform-neutral approach will give you more business value and ROI than blindly adopting technologies and tailoring your applications around them. People have been migrating their solutions to new technologies and new versions of frameworks without any business need. The "Pepsi Challenge" in Software Development: the Pepsi Challenge marketing campaign of the 1980s was a popular and very interesting promotion in which people tasted one cup of Pepsi and another cup of Coca-Cola. In the taste test, more than 50% of people preferred Pepsi over Coca-Cola. The success story behind Pepsi was the extra sweetness it contained: they simply added more sugar, and more people preferred the sweeter flavour. But you can't identify the better drink after sipping one cup of cola based only on sweetness. The same thing happens in the software industry when choosing development frameworks and technologies. People choose frameworks based on the initial sugary feeling without understanding their core strengths and weaknesses. The sugary framework might be far more harmful when you develop real-world systems. There is no silver bullet for solving every kind of problem; frameworks and tools have strengths and weaknesses, so it is better to understand both. And please keep in mind that you have to develop real apps to understand the real capabilities and weaknesses of a framework. Evaluating a technology based on a few blog posts will harm your projects, and those bloggers might lack real-world experience with the framework. The Problem with Aligning a Development Practice with Tools: recently I observed a discussion in a group where someone asked for suggestions for practicing Continuous Delivery (CD) as part of agile-based application engineering. The discussion quickly turned to using and choosing a Continuous Integration (CI) tool, and different people suggested different CI tools simply for practicing Continuous Delivery. If you have worked with core agile engineering practices, you know that the real essence of agile is neither choosing a tool nor choosing a process. Simply choosing a CI tool from a particular vendor will not ensure that you are delivering evolving software based on customer feedback. You have to understand the real essence of an engineering practice and choose the right tool for practicing it, instead of focusing on a particular tool as the practice itself.
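If you want to adopt a practice, you need a solid understanding of its real essence; tools just help with better automation. Adopting New Technologies for the Sake of Technologies: another problem is that developers have a tendency to adopt new technologies and simply migrate their existing apps to them. It is fine if your existing system has a problem with its technology stack or a maintainability challenge and you move to a new technology to solve those problems. We also adopt new technologies to solve new challenges, such as scalability when the application or user base is growing unpredictably. Keep in mind that every new technology becomes old after you have worked with it for a few years. For example, Node.js has become one of the hottest buzzwords in the software industry and many developers are trying to adopt it for their apps. The important thing is that Node.js is a minimalist framework that does some great things for some problems, but it is not a silver bullet. I have been working with Node.js as well; it is good for some problems, but a really bad choice for all kinds of problems. Adopting a new technology for a new project is good if we get real business value from it, because a newer framework may solve well-known existing problems and offer better answers to the latest challenges. But adopting a new technology for the sake of new technology is a really bad idea. Another example: JavaScript is getting a lot of attention, so many developers are building heavy JavaScript-centric web apps. They adopt a client-side JavaScript MV* framework such as AngularJS, Ember or Backbone and build a Single Page App (SPA), repeating on the client side the mistakes we made in the past on the server side. The problem is that people are adopting new technologies without improving their solutions; I predict that many Single Page Apps will suck in the future. We need a hybrid approach that leverages both server side and client side for next-generation web apps. A related problem is liking a particular framework and using it for every kind of app. In the past, some Silverlight enthusiasts tried to use that framework for everything, including large line-of-business apps, and these days developers are migrating those same Silverlight apps in favour of the HTML5 buzzword. So the real question is what business value we get from these apps when we develop them for the sake of a particular technology instead of a business need. Solution consultants can fall into the same trap by pushing unnecessary solutions for the sake of a technology or a hype. Big Data solutions, for example, are great for the three Vs (volume, velocity and variety), but forcing them onto every application causes problems. If a small web site with a limited budget is told it needs a recommendation engine backed by a 16-node Hadoop cluster, that is horrible advice; if you genuinely need a Hadoop-based solution, go for it, but applying it everywhere would be a disaster. It would be far better to understand the core business problem first and then choose the right framework to solve that actual problem, instead of piling on solutions. The Problem with Being Tied to a Platform Vendor: some organizations and teams are tied to a particular platform vendor and refuse to use any product from anyone else, accepting whatever the vendor provides regardless of its capability. This gives you some benefits around integration and collaboration between products from the same vendor, but it costs you the opportunity to provide better solutions for your business problems. As a real-world example, many companies use SAP for their ERP solutions; when they think about mobility or hybrid mobile apps, they can easily find a framework from SAP, such as the HTML5-based UI framework SAPUI5. If you adopt that framework only because of your existing platform vendor preference, you may lose other opportunities to provide a better solution. Initially you might enjoy the sugary feeling provided by the platform vendor, but you have to build apps capable of solving future challenges. I am not saying that any particular framework is bad; every framework is better than another at solving at least one problem. My point is that we should not be tied to any specific platform vendor unless the organization has resource-availability constraints. Being Polyglot to Provide the Right Solutions: the modern software engineering industry is greatly diverse, with different tools and platforms, and a stream of open source frameworks and new programming languages, so choosing the right platform without a biased opinion is genuinely difficult. It would be great if we could develop a platform-neutral mindset and become polyglot developers who provide better solutions based on the actual business problems. In my opinion, we should learn a new programming language and a new framework every year; it improves the quality of our skills, including our primary programming language. Being polyglot, both as individual developers and as teams, gives greater opportunities to your developer experience and to your applications: organizations can analyse their business problem without being tied to any technology and then provide solutions by choosing among different platforms and tools. Summary: in this blog post I have tried to say that we should not be tied to or biased toward any development platform, technology, vendor or programming language, and that we should not adopt technologies and practices for their own sake. If we do, we simply become a "poster child" of the technology or practice. We should not become poster children of other people's intellectual fashions and theories; instead we should become solution developers and solution consultants who can provide better answers to business problems. Being a polyglot developer is a good way to improve your skills so you can do exactly that.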

    Read the article

  • multiple droppable

    - by Sune
    I have multiple droppable divs (which all have the same size) and one draggable div. The draggable div is 3 times bigger than one droppable. When I drag the draggable div onto the droppable divs, I want to find which droppables have been affected. My code looks like this: $(function () { $(".draggable").draggable({ drag: function (event, ui) { } }); $(".droppable").droppable({ drop: function (event, ui) { alert(this.id); } }); }); The HTML: <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div0" class="droppable"> drop in me1! </div> <div style="height:100px; width:200px; border-bottom:1px solid red;" id="Div1" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div2" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div3" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div4" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div5" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div6" class="droppable"> drop in me2! </div> <div style="height:100px; width:200px; border-bottom:1px solid red; " id="Div7" class="droppable"> drop in me2! </div> <div class="draggable" id="drag" style="height:300px; width:50px; border:1px solid black;"><span>Drag</span></div> But my alert only shows the first droppable that my draggable div hits (Div0); how can I find the missing ones (Div1 and Div2), which are also affected? Here's a guy with the same problem: http://forum.jquery.com/topic/drop-onto-multiple-droppable-elements-at-once
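One workaround (a sketch, not the only answer; the overlaps flag and the use of the stop handler are my own choices) is to stop relying on each droppable's drop callback firing on its own, and instead walk every .droppable when dragging stops, testing its box against the helper's box:

    $(function () {
        $(".draggable").draggable({
            stop: function (event, ui) {
                var dragOffset = ui.helper.offset(),
                    dragWidth  = ui.helper.outerWidth(),
                    dragHeight = ui.helper.outerHeight();

                $(".droppable").each(function () {
                    var $drop  = $(this),
                        offset = $drop.offset(),
                        // standard rectangle-intersection test between helper and droppable
                        overlaps =
                            dragOffset.left < offset.left + $drop.outerWidth()  &&
                            dragOffset.left + dragWidth   > offset.left         &&
                            dragOffset.top  < offset.top  + $drop.outerHeight() &&
                            dragOffset.top  + dragHeight  > offset.top;

                    if (overlaps) {
                        alert(this.id);   // reports Div0, Div1 and Div2 together
                    }
                });
            }
        });
    });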

    Read the article

  • Vista/7: How to get glass color?

    - by Ian Boyd
    How do you use DwmGetColorizationColor? The documentation says it returns two values: a 32-bit 0xAARRGGBB containing the color used for glass composition a boolean parameter that is true "if the color is an opaque blend" (whatever that means) Here's a color that i like, a nice puke green: You can notice the color is greeny, and the translucent title bar (against a white background) shows the snot color very clearly: i try to get the color from Windows: DwmGetColorizationColor(dwCcolorization, bIsOpaqueBlend); And i get dwColorization: 0x0D0A0F04 bIsOpaqueBlend: false According to the documentation this value is of the format AARRGGBB, and so contains: AA: 0x0D (13) RR: 0x0A (10) GG: 0x0F (15) BB: 0x04 (4) This supposedly means that the color is (10, 15, 4), with an opacity of ~5.1%. But if you actually look at this RGB value, it's nowhere near my desired snot green. Here is (10, 15, 4) with zero opacity (the original color), and (10,15,4) with 5% opacity against a white/checkerboard background: So the question is: How to get glass color in Windows Vista/7? i tried using DwmGetColorizationColor, but that doesn't work very well. A person with same problem, but a nicer shiny picture to attract you squirrels: So, it boils down to – DwmGetColorizationColor is completely unusable for applications attempting to apply the current color onto an opaque surface. i love this guy's screenshots much better than mine. Using his screenshots as a template, i made up a few more sparklies: For the last two screenshots, the alpha blended chip is a true partially transparent PNG, blending to your browser's background. Cool! (i'm such a geek) Edit 2: Had to arrange them in rainbow color. (i'm such a geek) Edit 3: Well now i of course have to add Yellow. Undocumented/Unsupported/Fragile Workarounds There is an undocumented export from DwmApi.dll at entry point 137, which we'll call DwmGetColorizationParameters: HRESULT GetColorizationParameters_Undocumented(out DWMCOLORIZATIONPARAMS params); struct DWMCOLORIZATIONPARAMS { public UInt32 ColorizationColor; public UInt32 ColorizationAfterglow; public UInt32 ColorizationColorBalance; public UInt32 ColorizationAfterglowBalance; public UInt32 ColorizationBlurBalance; public UInt32 ColorizationGlassReflectionIntensity; public UInt32 ColorizationOpaqueBlend; } We're interested in the first parameter: ColorizationColor. We can also read the value out of the registry: HKEY_CURRENT_USER\Software\Microsoft\Windows\DWM ColorizationColor: REG_DWORD = 0x6614A600 So you pick your poison of creating appcompat issues. You can rely on an undocumented API (which is bad, bad, bad, and can go away at any time) use an undocumented registry key (which is also bad, and can go away at any time) See also Is there a list of valid parameter combinations for GetThemeColor / Visual Styles API How does Windows change Aero Glass color? DWM - Colorization Color Handling Using DWMGetColorizationColor Retrieving Aero Glass base color for opaque surface rendering i've been wanting to ask this question for over a year now. i always knew that it's impossible to answer, and that the only way to get anyone to actually pay attention is to have colorful screenshots; developers are attracted to shiny things. But on the downside it means i had to put all kinds of work into making the lures.
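If you do end up falling back to the registry value quoted above, a minimal C# sketch of reading it could look like this (the key and value names are the ones from the post; being undocumented, they can change between Windows releases, so treat this as fragile by design):

    using System;
    using Microsoft.Win32;

    class GlassColor
    {
        static void Main()
        {
            // The REG_DWORD comes back boxed as an int, laid out as 0xAARRGGBB.
            object raw = Registry.GetValue(
                @"HKEY_CURRENT_USER\Software\Microsoft\Windows\DWM",
                "ColorizationColor",
                null);

            if (raw is int)
            {
                uint argb = unchecked((uint)(int)raw);
                Console.WriteLine("A={0} R={1} G={2} B={3}",
                    (argb >> 24) & 0xFF, (argb >> 16) & 0xFF,
                    (argb >> 8) & 0xFF, argb & 0xFF);
            }
        }
    }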

    Read the article

  • Is the Unix Philosophy still relevant in the Web 2.0 world?

    - by David Titarenco
    Introduction Hello, let me give you some background before I begin. I started programming when I was 5 or 6 on my dad's PSION II (some primitive BASIC-like language), then I learned more and more, eventually inching my way up to C, C++, Java, PHP, JS, etc. I think I'm a pretty decent coder. I think most people would agree. I'm not a complete social recluse, but I do stuff like write a virtual machine for fun. I've never taken a computer course in college because I've been in and out for the past couple of years and have only been taking core classes; never having been particularly amazing at school, perhaps I'm missing some basic tenet that most learn in CS101. I'm currently reading Coders at Work and this question is based on some ideas I read in there. A Brief (Fictionalized) Example So a certain sunny day I get an idea. I hire a designer and hammer away at some C/C++ code for a couple of months, soon thereafter releasing silvr.com, a website that transmutes lead into silver. Yep, I started my very own start-up and even gave it a clever web 2.0 name with a vowel missing. Mom and dad are proud. I come up with some numbers I should be seeing after 1, 2, 3, 6, 9, 12 months and set sail. Obviously, my transmuting server isn't perfect, sometimes it segfaults, sometimes it leaks memory. I fix it and keep truckin'. After all, gdb is my best friend. Eventually, I'm at a position where a very small community of people are happily transmuting lead into silver on a semi-regular basis, but they want to let their friends on MySpace know how many grams of lead they transmuted today. And they want to post images of their lead and silver nuggets on flickr. I'm losing out on potential traffic unless I let them log in with their Yahoo, Google, and Facebook accounts. They want webcam support and live cock fighting, merry-go-rounds and Jabberwockies. All these things seem necessary. The Aftermath Of course, I have to re-write the transmuting server! After all, I've been losing money all these months. I need OAuth libraries and OpenID libraries, JSON support, and the only stable Jabberwocky API is for Java. C++ isn't even an option anymore. I'm just one guy! The Java binary just grows and grows since I need some legacy Apache include for the JSON library, and some antiquated Sun dependency for OAuth support. Then I pick up a book like Coders at Work and read what people like jwz say about complexity... I think to myself.. Keep it simple, stupid. I like simple things. I've always loved the Unix Philosophy but even after trying to keep the new server source modular and sleek, I loathe having to write one more line of code. It feels that I'm just piling crap on top of other crap. Maybe I'm naive thinking every piece of software can be simple and clever. Maybe it's just a phase.. or is the Unix Philosophy basically dead when it comes to the current state of (web) development? I'm just kind of disheartened :(

    Read the article

  • Problem with date formatters in the iPhone SDK

    - by monish
    Hi guys, I have a problem with date formatters. I have an option to search events based on a date; for this I'm comparing the search date with the date stored in the database and fetching the events that match. I wrote the code as follows: -(NSMutableArray*)getSearchAllLists:(EventsList*)aEvent { [searchList removeAllObjects]; EventsList *searchEvent = nil; const char* sql; NSString *conditionStr = @" where "; if([aEvent.eventName length] > 0) { NSString *str = @"'%"; str = [str stringByAppendingString:aEvent.eventName]; str = [str stringByAppendingString:@"%'"]; conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@" eventName like %s",[str UTF8String]]]; } if([aEvent.eventWineName length]>0) { NSString *str = @"'%"; str = [str stringByAppendingString:aEvent.eventWineName]; str = [str stringByAppendingString:@"%'"]; if([aEvent.eventName length]>0) { conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@" and (wineName like %s)",[str UTF8String]]]; } else { conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@"wineName like %s",[str UTF8String]]]; } } if([aEvent.eventVariety length]>0) { NSString *str = @"'%"; str = [str stringByAppendingString:aEvent.eventVariety]; str = [str stringByAppendingString:@"%'"]; if([aEvent.eventName length]>0 || [aEvent.eventWineName length]>0) { conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@" and (variety like %s)",[str UTF8String]]]; } else { conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@"variety like %s" ,[str UTF8String]]]; } } if([aEvent.eventWinery length]>0) { NSString *str = @"'%"; str = [str stringByAppendingString:aEvent.eventWinery]; str = [str stringByAppendingString:@"%'"]; if([aEvent.eventName length]>0 || [aEvent.eventWineName length]>0 || [aEvent.eventVariety length]>0) { conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@" and (winery like %s)",[str UTF8String]]]; } else { conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@"winery like %s" ,[str UTF8String]]]; } } if(aEvent.eventRatings >0) { if([aEvent.eventName length]>0 || [aEvent.eventWineName length]>0 || [aEvent.eventWinery length]>0 || [aEvent.eventVariety length]>0) { conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@" and ratings = %d",aEvent.eventRatings]]; } else { conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@" ratings = %d",aEvent.eventRatings]]; } } if(aEvent.eventDate >0) { NSDateFormatter *dateFormatter = [[NSDateFormatter alloc]init]; [dateFormatter setDateFormat:@"dd-MMM-yy 23:59:59"]; NSString *dateStr = [dateFormatter stringFromDate:aEvent.eventDate]; NSDate *date = [dateFormatter dateFromString:dateStr]; [dateFormatter release]; NSTimeInterval interval = [date timeIntervalSinceReferenceDate]; printf("\n Interval in advance search:%f",interval); if([aEvent.eventName length]>0 || [aEvent.eventWineName length]>0 || [aEvent.eventWinery length]>0 || aEvent.eventRatings>0 || [aEvent.eventVariety length]>0) { conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@" and eventDate = %f",interval]]; } else { conditionStr = [conditionStr stringByAppendingString:[NSString stringWithFormat:@" eventDate = %f",interval]]; } } NSString* queryString= @" select * from event"; queryString = [queryString stringByAppendingString:conditionStr]; sqlite3_stmt* statement; sql = (char*)[queryString UTF8String]; if (sqlite3_prepare_v2(database, sql, -1, &statement, NULL) != SQLITE_OK) { NSAssert1(0, @"Error: failed to prepare statement with message '%s'.", sqlite3_errmsg(database)); } sqlite3_bind_text(statement, 1, [aEvent.eventName UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text(statement, 2, [aEvent.eventWineName UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text(statement, 3, [aEvent.eventVariety UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text(statement, 4, [aEvent.eventWinery UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_int(statement, 5,aEvent.eventRatings); sqlite3_bind_double(statement,6, [[aEvent eventDate] timeIntervalSinceReferenceDate]); while (sqlite3_step(statement) == SQLITE_ROW) { primaryKey = sqlite3_column_int(statement, 0); searchEvent = [[EventsList alloc] initWithPrimaryKey:primaryKey database:database]; [searchList addObject:searchEvent]; [searchEvent release]; } sqlite3_finalize(statement); return searchList; } Here I compare the interval value stored in the database with the interval of the date being searched for; the two values come out different and no results are found. Please help me get rid of this problem. Anyone's help will be much appreciated. Thank you, Monish Calapatapu.

    Read the article

  • CodePlex Daily Summary for Thursday, June 07, 2012

    CodePlex Daily Summary for Thursday, June 07, 2012Popular Releases????SDK for .Net 2.0/3.5/4.0 (OAuth2.0+??V2?API): ??VS2008?.net2.0、3.5、4.0????????: ??Upload???“?????IComparer”??????,????????。 ???????.net????VS2010??,?????????。 ????VS2008?.net2.0/3.5?!??Entities???? ???????.net???N????? ??JSON.net??????????? ?.net4.0??API?????dynamic??class ???alpha??,???? ?????????????JSON.net????,??????。 VS2005???????,?????var???,??????????! ????.net4.0???????????????,?????????LINQ to Twitter: LINQ to Twitter Beta v2.0.26: Supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, Client Profile, and Windows 8. 100% Twitter API coverage. Also available via NuGet! Follow @JoeMayo.Python Tools for Visual Studio: 1.5 Beta 1: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 Beta. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: • Supports CPython, IronPython, Jython and PyPy • Python editor with advanced member, signature intellisense and refactoring • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging •...Circuit Diagram: Circuit Diagram 2.0 Beta 1: New in this release: Automatically flip components when placing Delete components using keyboard delete key Resize document Document properties window Print document Recent files list Confirm when exiting with unsaved changes Thumbnail previews in Windows Explorer for CDDX files Show shortcut keys in toolbox Highlight selected item in toolbox Zoom using mouse scroll wheel while holding down ctrl key Plugin support for: Custom export formats Custom import formats Open...Umbraco CMS: Umbraco CMS 5.2 Beta: The future of Umbracov5 represents the future architecture of Umbraco, so please be aware that while it's technically superior to v4 it's not yet on a par feature or performance-wise. What's new? For full details see our http://progress.umbraco.org task tracking page showing all items complete for 5.2. In a nutshellPackage Builder Starter Kits Dynamic Extension Methods Querying / IsHelpers Friendly alt template URLs Localization Various bug fixes / performance enhancements Gett...JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0.5: JayData is a unified data access library for JavaScript developers to query and update data from different sources like WebSQL, IndexedDB, OData, Facebook or YQL. See it in action in this 6 minutes video New features in JayData 1.0.5http://jaydata.org/blog/jaydata-1.0.5-is-here-with-authentication-support-and-more http://jaydata.org/blog/release-notes Sencha Touch 2 module (read-only)This module can be used to bind data retrieved by JayData to Sencha Touch 2 generated user interface. (exam...32feet.NET: 3.5: This version changes the 32feet.NET library (both desktop and NETCF) to use .NET Framework version 3.5. Previously we compiled for .NET v2.0. There are no code changes from our version 3.4. See the 3.4 release for more information. Changes due to compiling for .NET 3.5Applications should be changed to use NET/NETCF v3.5. Removal of class InTheHand.Net.Bluetooth.AsyncCompletedEventArgs, which we provided on NETCF. 
We now just use the standard .NET System.ComponentModel.AsyncCompletedEvent...DotNetNuke® Links: 06.02.01: Added new DNN 6.2.0 beta social feature "friends" BugfixesApplication Architecture Guidelines: Application Architecture Guidelines 3.0.7: 3.0.7Jolt Environment: Jolt v2 Stable: Many new features. Follow development here for more information: http://www.rune-server.org/runescape-development/rs-503-client-server/projects/298763-jolt-environment-v2.html Setup instructions in downloadSharePoint Euro 2012 - UEFA European Football Predictor: havivi.euro2012.wsp (1.5): New fetures:Multilingual Support Max users property in Standings Web Part Games time zone change (UTC +1) bug fix - Version 1.4 locking problem http://euro2012.codeplex.com/discussions/358262 bug fix - Field Title not found (v.1.3) German SP http://euro2012.codeplex.com/discussions/358189#post844228 Bug fix - Access is denied.for users with contribute rights Bug fix - Installing on non-English version of SharePoint Bug fix - Title Rules Installing SharePoint Euro 2012 PredictorSharePoint E...xNet: xNet 2.1.1: Release xNet 2.1.1Command Line Parser Library: 1.9.2.4 stable: This is the first stable of 1.9.* branch. Added tests for HelpText::AutoBuild. Fixed minor formatting error in HelpText::DefaultParsingErrorsHandler.myManga: myManga v1.0.0.4: ChangeLogUpdating from Previous Version: Extract contents of Release - myManga v1.0.0.4.zip to previous version's folder. Replaces: myManga.exe BakaBox.dll CoreMangaClasses.dll Manga.dll Plugins/MangaReader.manga.dll Plugins/MangaFox.manga.dll Plugins/MangaHere.manga.dll Plugins/MangaPanda.manga.dllMVVM Light Toolkit: V4RC (binaries only) including Windows 8 RP: This package contains all the latest DLLs for MVVM Light V4 RC. It includes the DLLs for Windows 8 Release Preview. An updated Nuget package is also available at http://nuget.org/packages/MvvmLightLibsPreviewExtAspNet: ExtAspNet v3.1.7: +2012-06-03 v3.1.7 -?????????BUG,??????RadioButtonList?,AJAX????????BUG(swtseaman、????)。 +?Grid?BoundField、HyperLinkField、LinkButtonField、WindowField??HtmlEncode?HtmlEncodeFormatString(TiDi)。 -HtmlEncode?HtmlEncodeFormatString??????true,??????HTML????????。 -??????Asp.Net??GridView?BoundField?????????。 -http://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.boundfield.htmlencode -?Grid?HyperLinkField、WindowField??UrlEncode??,????URL??(???true)。 -?????????????,?????????????...LiveChat Starter Kit: LCSK v1.5.2: New features: Visitor location (City - Country) from geo-location Pass configuration via javascript for the chat box New visitor identification (no more using the IP address as visitor identification) To update from 1.5.1 Run the /src/1.5.2-sql-updates.txt SQL script to update your database tables. If you have it installed via NuGet, simply update your package and the file will be included so you can run the update script. New installation The easiest way to add LCSK to your app is by...Kendo UI ASP.NET Sample Applications: Sample Applications (2012-06-01): Sample application(s) demonstrating the use of Kendo UI in ASP.NET applications.Better Explorer: Better Explorer Beta 1: Finally, the first Beta is here! There were a lot of changes, including: Translations into 10 different languages (the translations are not complete and will be updated soon) Conditional Select new tools for managing archives Folder Tools tab new search bar and Search Tab new image editing tools update function many bug fixes, stability fixes, and memory leak fixes other new features as well! 
Please check it out and if there are any problems, let us know. :) Also, do not for...New ProjectsActiveAttributes: Create attributes that execute code when their target members are called.Blogger Access Library: This is a .NET library that make it easy to post your blog article to Blogger when you want to handle the blogger with .NET (C#, VB, F#...etc) code.cookie.js - Simple JavaScript Cookie Processor: Simple JavaScript Cookie ProcessorDeberPrueba: deber de tareaDNN User Redirect: Allows you to redirect users that are members of specific roles or groups to landing pages within your DotNetNuke website. This is perfect for scenarios such as the following: - redirect unauthenticated users to a Login page - redirect authenticated users to a Welcome page after successful login - redirect users to a page where they need to view/accept Terms of Service - redirect employees who are part of a specific department in your organization to a Departmental landing page When ...Evento-Pro: O Aplicativo contará com uma área de manutenção e cadastro das informações necessárias para seu funcionamento, bem como uma tela para visualização, criação/manutenção dos eventos.Exchange Mailbox Permission Reverse Lookup: Ever wondered a user in which mailboxes has full-access or send-as rights? One common strategy is to use groups to grant permissions on shared mailboxes, where querying the user which groups is member of would do the job. But in case you grant mailbox permissions directly to users (maybe because you are using the Exchange 2010 shared mailbox automapping feature), this tool can come quite handy.Find and Replace word in the sentences: This program used Java Development Kid 6.0 and i were using HighLighter class. It was completed code with source code and then everybody can use in everything. I use it for my assignment of NCC Education on IAD(International Advanced Diploma). GIMS: Graphical ImageMagick ScripterGoogleMaps .NET API: This is a wrapper to use the Google Maps API in a .NET windows application (WinForms and WPF). It works by using a browser control (either WinForms or WPF), and interacting with a JavaScript implementation of Google Maps.Grid.Mvc: Grid.Mvc - is a component that allows you easy construction of HTML tables for displaying, paging and sorting data from a collection of Model objects.HgReleaseNotesGen: A cmd line utility for automatically creating a Release Notes document from a Mercurial repository - currently used by StyleCop.HS FB: Fizz BuzzJaySvcUtil - generate JavaScript context from OData metadata: This tool generates client-side metadata for OData service endpoints, so OData services can be consumed from JavaScript using JayData. Visit http://jaydata.org for detailed documentation, example apps and tutorials. You can download JayData from the [url:JayData CodePlex project|http://jaydata.codeplex.com] Knockout Serializer: Knockout SerializerMarsExplorer: This is a personal projectMulti camera snapshot taker: **this c# winform project is based on DXSnap-2008 sample project ** A few days ago, someone told me that it was not possible to take 3 snapshots , from 3 different webcams , at the same time (e.g. this guy wanted to take snapshots of an item on 3 different axes (X,Y,Z) at the same time. 
so I 've tried to make it work..mvcEticaret: Ticaret sitesiPush Notification: Push Notification service sample using F# and Duplex NetTcpBinding Reproductor: ReproductorStudyMate: A Internet and mobile based website, where user can make and share Exam revision notes to read them from mobile, attempt objective type questions and chat between online users.SugarSync Folder Provider for DotNetNuke: The SugarSync Folder Provider for DotNetNuke allows you to have direct integration between your cloud-hosted files and the file system in your DotNetNuke website. In using this extension, you will be able to enjoy the management of files in a CMS with the power of cloud file hosting.The SharePoint 2010 Tag Cloud web part for blog web template: The SharePoint 2010 Tag Cloud web part for blog web templateTimeClearWinFreeTime: ?????????IT???????,????????????,????????????(??:?????,??????)。???????????????????????,???????????????,??????,????????,?????55?????Windows????,????????????。 ???????c#,Windows7 ???Windows???????????, Windows XP??????.Netframework3.5????. Netframework3.5????: http://msdn.microsoft.com/zh-cn/netframework/cc378097VoIP based Call Management System (VBCMS): This project intends to focus on new ways of providing road side assistance service and many more using VoIP based systemXaml FlowDocument to PDF Converter: Simple FlowDocument to PDF Converter with paging, header and footer

    Read the article

  • When was sys.dm_os_wait_stats last cleared?

    - by SQLOS Team
    The sys.dm_os_wait_stats DMV provides essential metrics for diagnosing SQL Server performance problems. Because it returns incrementally accumulating information about all the completed waits encountered by executing threads, it is a useful way to identify bottlenecks such as IO latency issues or waits on locks. The counters are reset each time SQL Server is restarted, or when the following command is run: DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR); To make sense of these wait values you need to know how they change over time. Suppose you are asked to troubleshoot a system and you don't know when the wait stats were last zeroed. Is there any way to find the elapsed time since this happened? If the wait stats were not cleared using the DBCC SQLPERF command then you can simply correlate the stats with the time SQL Server was started, using the sqlserver_start_time column introduced in SQL Server 2008 R2: SELECT sqlserver_start_time from sys.dm_os_sys_info However, how do you tell if someone has run DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) since the server was started, and if they did, when? Without this information the initial, or historical, wait stats have less value until you can measure deltas over time. There is a way to at least estimate when the stats were last cleared, by using the wait stats themselves and choosing a thread that spends most of its time sleeping. A good candidate is the SQL Trace incremental flush task, which mostly sleeps (in 4-second intervals); in between it attempts to flush (if there are new events – which is rare when only the default trace is running), so it pretty much sleeps all the time. Hence the time it has spent waiting is very close to the elapsed time since the counter was reset. Credit goes to Ivan Penkov in the SQLOS dev team for suggesting this. Here's an example. 144 seconds after the server was started:

    select top 10 wait_type, wait_time_ms from sys.dm_os_wait_stats order by wait_time_ms desc

    wait_type                              wait_time_ms
    -------------------------------------  ------------
    XE_DISPATCHER_WAIT                           242273
    LAZYWRITER_SLEEP                             146010
    LOGMGR_QUEUE                                 145412
    DIRTY_PAGE_POLL                              145411
    XE_TIMER_EVENT                               145216
    REQUEST_FOR_DEADLOCK_SEARCH                  145194
    SQLTRACE_INCREMENTAL_FLUSH_SLEEP             144325
    SLEEP_TASK                                    73359
    BROKER_TO_FLUSH                               73113
    PREEMPTIVE_OS_AUTHENTICATIONOPS                 143

    (10 rows affected)

    Reset: DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR) DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    After 8 seconds:

    select top 10 wait_type, wait_time_ms from sys.dm_os_wait_stats order by wait_time_ms desc

    wait_type                              wait_time_ms
    -------------------------------------  ------------
    REQUEST_FOR_DEADLOCK_SEARCH                   10013
    LAZYWRITER_SLEEP                               8124
    SQLTRACE_INCREMENTAL_FLUSH_SLEEP               8017
    LOGMGR_QUEUE                                   7579
    DIRTY_PAGE_POLL                                7532
    XE_TIMER_EVENT                                 5007
    BROKER_TO_FLUSH                                4118
    SLEEP_TASK                                     3089
    PREEMPTIVE_OS_AUTHENTICATIONOPS                  28
    SOS_SCHEDULER_YIELD                              27

    (10 rows affected)

    After 12 seconds:

    select top 10 wait_type, wait_time_ms from sys.dm_os_wait_stats order by wait_time_ms desc

    wait_type                              wait_time_ms
    -------------------------------------  ------------
    REQUEST_FOR_DEADLOCK_SEARCH                   15020
    LAZYWRITER_SLEEP                              14206
    LOGMGR_QUEUE                                  14036
    DIRTY_PAGE_POLL                               13973
    SQLTRACE_INCREMENTAL_FLUSH_SLEEP              12026
    XE_TIMER_EVENT                                10014
    SLEEP_TASK                                     7207
    BROKER_TO_FLUSH                                7207
    PREEMPTIVE_OS_AUTHENTICATIONOPS                  57
    SOS_SCHEDULER_YIELD                              28

    (10 rows affected)

    It may not be accurate to the millisecond, but it can provide a useful data point, and give an indication of whether the wait stats were manually cleared after startup, and if so approximately when. - Guy     Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Internet Explorer Automation: how to suppress Open/Save dialog?

    - by Vladimir Dyuzhev
    When controlling an IE instance via MSHTML, how do you suppress the Open/Save dialog for non-HTML content? I need to get data from another system and import it into ours. Due to budget constraints no development (e.g. a web service) can be done on the other side for some time, so my only option for now is to do web scraping. The remote site is ASP.NET-based, so simple HTML requests won't work -- too much JS. I wrote a simple C# application that uses MSHTML and SHDocVw to control an IE instance. So far so good: I can perform the login, navigate to the desired page, populate the required fields and submit. Then I face a couple of problems: First, the report opens in another window. I suspect I can attach to that window too by enumerating the IE windows in the system. Second, and more troublesome, the report itself is a CSV file, which triggers the Open/Save dialog. I'd like to avoid the dialog and make IE save the file into a given location, OR I'm fine with programmatically clicking the dialog buttons (how?). I'm actually a totally non-Windows guy (unix/J2EE), and hope someone with better knowledge can give me a hint how to do those tasks. Thanks! UPDATE I've found a promising document on MSDN: http://msdn.microsoft.com/en-ca/library/aa770041.aspx Control the kinds of content that are downloaded and what the WebBrowser Control does with them once they are downloaded. For example, you can prevent videos from playing, script from running, or new windows from opening when users click on links, or prevent Microsoft ActiveX controls from downloading or executing. Slowly reading through... UPDATE 2: MADE IT WORK, SORT OF... Finally I made it work, but in an ugly way. Essentially, I register a "before navigate" handler; then, in the handler, if the URL matches my target file, I cancel the navigation but remember the URL, and use the WebClient class to access and download that temporary URL directly. I cannot copy the whole code here, it contains a lot of garbage, but here are the essential parts: Installing the handlers: _IE2.FileDownload += new DWebBrowserEvents2_FileDownloadEventHandler(IE2_FileDownload); _IE.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(IE_OnBeforeNavigate2); Recording the URL and then cancelling the download (thus preventing the Save dialog from appearing): public string downloadUrl; void IE_OnBeforeNavigate2(Object ob1, ref Object URL, ref Object Flags, ref Object Name, ref Object da, ref Object Head, ref bool Cancel) { Console.WriteLine("Before Navigate2 "+URL); if (URL.ToString().EndsWith(".csv")) { Console.WriteLine("CSV file"); downloadUrl = URL.ToString(); } Cancel = false; } void IE2_FileDownload(bool activeDocument, ref bool cancel) { Console.WriteLine("FileDownload, downloading "+downloadUrl+" instead"); cancel = true; } void IE_OnNewWindow2(ref Object o, ref bool cancel) { Console.WriteLine("OnNewWindow2"); _IE2 = new SHDocVw.InternetExplorer(); _IE2.BeforeNavigate2 += new DWebBrowserEvents2_BeforeNavigate2EventHandler(IE_OnBeforeNavigate2); _IE2.Visible = true; o = _IE2; _IE2.FileDownload += new DWebBrowserEvents2_FileDownloadEventHandler(IE2_FileDownload); _IE2.Silent = true; cancel = false; return; } And in the calling code, using the recorded URL for a direct download: ... driver.ClickButton(".*_btnRunReport"); driver.WaitForComplete(); Thread.Sleep(10000); WebClient Client = new WebClient(); Client.DownloadFile(driver.downloadUrl, "C:\\affinity.dump"); (driver is a simple wrapper over the IE instance = _IE) Hope that helps someone.
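    The "grab the URL and fetch it directly" step above does not have to go through .NET's WebClient. As a rough, hedged illustration, the same direct download can be done from C++ with URLDownloadToFileW from urlmon; the URL below is a placeholder, the output path mirrors the one in the question, and whether a direct request succeeds without the browser session's cookies depends on the remote site, exactly as it does for WebClient.

    // Minimal sketch: download a URL straight to disk via URLMon.
    // Link with urlmon.lib. The URL below is a placeholder.
    #include <windows.h>
    #include <urlmon.h>
    #include <cstdio>

    int main()
    {
        const wchar_t* url  = L"https://example.invalid/report.csv"; // hypothetical report URL
        const wchar_t* path = L"C:\\affinity.dump";                  // target file, as in the question

        HRESULT hr = URLDownloadToFileW(nullptr, url, path, 0, nullptr);
        if (FAILED(hr))
        {
            std::printf("Download failed, HRESULT = 0x%08lX\n", static_cast<unsigned long>(hr));
            return 1;
        }
        std::printf("Saved report to C:\\affinity.dump\n");
        return 0;
    }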

    Read the article

  • How do you remember the default functionality/class names etc. of a platform?

    - by piemesons
    Hello everyone, I'm a guy with 8 months of experience (B.Tech in computer science). In my college time I used to write simple programs in C/C++/Java - simple things like linked lists and binary trees; frankly speaking, those bullshit college exercises. (I am from India, and engineering colleges in India suck, except a few like the IITs.) In my college time, apart from the college exercises, I created some better programs/games like Arkanoid and Snake. We had a 6-month internship in our college curriculum, where I worked on ASP.NET; basically the work was to create a website with some random functionality. After that, in my job, I worked on PHP and successfully delivered 4 projects, all with a lot of functionality, and I was the only team member on all of them. Now I'm learning Ruby on Rails as I've switched to a new firm. I also have to work on Android or iPhone, depending on which mobile technology I choose, or I can work on both. My project manager says: take your time to learn things, we are not in a hurry to place you on any project, work on things by yourself, take 3-4 months to learn. But I'm not getting up to a good pace. I'm quite confident with PHP/ASP etc., but I'm not able to grasp things in Android, even though my C/C++ background is quite good and I have a good logical mind. Even learning some basics of Rails I found baffling: why do I have to make the model name singular and the table name plural? Why, by default, are the action name and the name of the file in the view the same? I just hate the word MAGIC being mentioned more than 100 times in the book (Agile Web Development with Rails). (I am talking about default functionality; I know I can override it, so please don't debate that.) I'm not saying I'm not getting the concepts - my point is that remembering the default functionality is wearing me out. Lots of classes, lots of files; specify this thing here, that thing there. Remembering which class does what takes time, or I am missing something. From my point of view I'm having all these problems because previously I never used an object-oriented programming approach in PHP. (I NEVER USED it; I AM NOT SAYING PHP DOESN'T HAVE one.) How do you people deal with it? What do you suggest I do? I am looking for suggestions from some seniors, including the seniors in my office. They say I'm doing fine, but I don't know - I'm not gaining confidence in these things. When they ask me anything about the topics I've covered, I give them good answers. So when I discuss this problem with them, they say there is no problem, just keep working. And sorry for my poor English.

    Read the article

  • C++ Serial Port Question

    - by Pfeffer
    Problem: I have a hand-held device that scans those graphic color barcodes on all packaging. There is a track device that I can use that will slide the scanner automatically. This track device works by taking ASCII commands over a serial port. I need to get this thing to work in FileMaker on a Mac, so no terminal programs, etc... What I've got so far: I bought a Keyspan USB/serial adapter. Using a program called ZTerm I was successful in sending commands to the device. Example: "C,7^M^J" I was also able to do the same thing in Terminal using this command: screen /dev/tty.KeySerial1 57600 and then typing in the same command as above (but when I typed it in I just hit Control-M and Control-J for the carriage return and line feed). Now I'm writing a plug-in for FileMaker (in C++ of course). I want to reproduce what I did above in C++, so that when I install the plug-in in FileMaker I can just call one of those functions and have the whole process take place right there. I'm able to connect to the device, but I can't talk to it; it is not responding to anything. I've tried connecting to the device (successfully) using these: FILE *comport; if ((comport = fopen("/dev/tty.KeySerial1", "w")) == NULL){...} and int fd; fd = open("/dev/tty.KeySerial1", O_RDWR | O_NOCTTY | O_NDELAY); This is what I've tried so far by way of talking to the device: fputs ("C,7^M^J",comport); or fprintf(comport,"C,7^M^J"); or char buffer[] = { 'C' , ',' , '7' , '^' , 'M' , '^' , 'J' }; fwrite (buffer , 1 , sizeof(buffer) , comport ); or fwrite('C,7^M^J', 1, 1, comport); Questions: When I connected to the device from Terminal and from ZTerm, I was able to set my baud rate of 57600. I think that may be why it isn't responding here, but I don't know how to set the baud rate in my code... Does anyone know how to do that? I tried this, but it didn't work: comport->BaudRate = 57600; There are a lot of class solutions out there, but they all pull in include files like termios.h and stdio.h. I don't have these and, for whatever reason, I can't find them to download. I've downloaded a few examples, but there are like 20 files in them and they're all calling other files I can't find (like the ones listed above). Do I need to find these, and if so, where? I just don't know enough about C++. Is there a website where I can download libraries? Another solution might be to put those terminal commands in C++. Is there a way to do that? So this has been driving me crazy. I'm not a C++ guy, I only know basic programming concepts. Is anyone out there a C++ expert? Ideally I'd like this to just work using functions I already have, like fwrite and fputs. Thanks!
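    For what it's worth, termios.h is not something you download: it is a standard POSIX header that ships with OS X, and it is exactly what is normally used to set the 57600 baud rate before writing to the port. Below is a minimal, hedged C++ sketch of that approach; the device path and the "C,7" command come from the question above, while the raw-mode, 8N1, no-flow-control settings are an assumption about what the track device expects.

    // Minimal sketch: open /dev/tty.KeySerial1 at 57600 8N1 and send "C,7\r\n".
    // termios.h, fcntl.h and unistd.h are standard system headers on OS X.
    #include <termios.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstring>
    #include <cstdio>

    int main()
    {
        // Open the serial port (not as controlling terminal, non-blocking open).
        int fd = open("/dev/tty.KeySerial1", O_RDWR | O_NOCTTY | O_NDELAY);
        if (fd < 0) { std::perror("open"); return 1; }

        // Switch back to blocking I/O once the port is open.
        fcntl(fd, F_SETFL, 0);

        termios tty;
        if (tcgetattr(fd, &tty) != 0) { std::perror("tcgetattr"); return 1; }

        cfmakeraw(&tty);                 // raw mode: no echo, no CR/LF translation
        cfsetispeed(&tty, B57600);       // 57600 baud in
        cfsetospeed(&tty, B57600);       // 57600 baud out
        tty.c_cflag |= (CLOCAL | CREAD); // ignore modem control lines, enable receiver
        tty.c_cflag &= ~PARENB;          // 8N1: no parity...
        tty.c_cflag &= ~CSTOPB;          // ...one stop bit...
        tty.c_cflag &= ~CSIZE;
        tty.c_cflag |= CS8;              // ...8 data bits

        if (tcsetattr(fd, TCSANOW, &tty) != 0) { std::perror("tcsetattr"); return 1; }

        // "^M^J" in ZTerm is just CR LF, so send real \r and \n bytes, not caret characters.
        const char cmd[] = "C,7\r\n";
        if (write(fd, cmd, std::strlen(cmd)) < 0) { std::perror("write"); return 1; }

        close(fd);
        return 0;
    }

    Inside a FileMaker plug-in the same calls would live in the plug-in function rather than in main(), but the port setup is identical. Note also that writing the literal string "C,7^M^J" sends caret characters; per the question's own Terminal test, the device wants the actual carriage-return and line-feed bytes.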

    Read the article

< Previous Page | 93 94 95 96 97 98 99  | Next Page >