Search Results

Search found 1053 results on 43 pages for 'thomas kohl'.

Page 41/43

  • BizTalk: Instance Subscription: Details

    - by Leonid Ganeline
    It has interesting behavior, and it is not always what we are waiting for. An orchestration can be enlisted with many subscriptions; in other words, it can have several Receive shapes. Usually the first Receive uses the Activation subscription, but the other Receives create Instance subscriptions. [See “Publish and Subscribe Architecture” in MSDN] Here is a sample process. This orchestration has two Receives. It is a typical Sequential Convoy. [See "BizTalk Server 2004 Convoy Deep Dive" in MSDN by Stephen W. Thomas]. Let's start experimenting. There are three typical scenarios. First scenario: everything is OK. The Activation subscription for the Sample message is created when the SampleProcess orchestration is enlisted. The Instance subscription is created only when the SampleProcess orchestration instance is started, and it is removed when the orchestration instance is ended. So far so good: the Sample_2 message was delivered exactly in this time interval and was consumed. Second scenario: no consumers. Three Sample_2 messages were delivered. One was delivered before the SampleProcess was started and before the instance subscription was created. The second message was delivered in the correct time interval. The third one was delivered after the SampleProcess orchestration was ended and the instance subscription was removed. Note: it was not the first Sample_2 that was consumed. It was first in the queue, but it was not waiting; it was suspended when it was delivered to the Message Box because it didn’t have any subscribers at that moment. The first and the last Sample_2 messages were Suspended (Nonresumable) in the Message Box. For each of these messages we got two (!) service instances associated with the suspended message. One service instance has the ServiceClass of Messaging, and we can see its Error Description. The second service instance has the ServiceClass of RoutingFailureReport, and we can see its Error Description. Third scenario: something goes wrong. Two Sample_2 messages were delivered. Both were delivered in the same interval, when the SampleProcess orchestration was working and the instance subscription was created and working too. The first Sample_2 was consumed. The second Sample_2 has the subscription, but the subscriber, the SampleProcess orchestration, will not consume it. After the SampleProcess orchestration is ended (and only after! I will discuss this in the next article), it is suspended (Nonresumable). This time only one service instance associated with this scenario is suspended. This service instance has the ServiceClass of Orchestration, and we can see its Error Description. In the Messages tab we will see the Sample_2 message in the Suspended (Resumable) status. Note: this behavior looks ambiguous. We see here that the orchestration consumes the extra message(s) and gets suspended together with those extra messages. These messages are not consumed in terms of “processed by the orchestration”, but they are consumed in terms of “delivered to the subscriber”. The Receive shape in the orchestration did not receive these extra messages, but the messages were routed to the orchestration. Unified Sequential Convoy. Now one more scenario: the unified sequential convoy. That means the activation subscription is for the same message type as the instance subscription. The Sample_2 message is now the Sample message. For simplicity, the SampleProcess orchestration consumes only two Sample messages.
Usually the orchestration consumes many messages inside a loop, but here it is only two of them. The first message starts the orchestration; the second message goes into this same orchestration instance. Then the next pair of messages follows, and so on. But if the input messages arrive at shorter intervals, we have a problem: we lose messages in an unpredictable manner. Note: maybe the better behavior would be for the orchestration to remove the instance subscription right after the message is consumed, not at the end of the orchestration. Right now this is a “feature” of the BizTalk subscription mechanism.

    Read the article

  • Process Power to the People that Create Engagement

    - by Michael Snow
    Organizations often speak about their engagement problems as if the problem is the people they are trying to engage - employees, partners, customers and citizens. The reality of most engagement problems is that the processes put in place to engage are impersonal, inflexible, unintuitive, and often completely ignorant of the population they are trying to serve. Life, Liberty and the Pursuit of Delight? How appropriate during this short week of the US Independence Day holiday that we're focusing on People, Process and Engagement. As we celebrate this holiday in the US and the historic independence we gained (sorry Brits!), it's interesting to think back to 1776 and the creation of that pivotal document, the Declaration of Independence. What tremendous pressure to create an engaging document and founding experience they must have felt. "On June 11, 1776, in anticipation of the impending vote for independence from Great Britain, the Continental Congress appointed five men — Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman, and Robert Livingston — to write a declaration that would make clear to people everywhere why this break from Great Britain was both necessary and inevitable. The committee then appointed Jefferson to draft a statement. Jefferson produced a "fair copy" of his draft declaration, which became the basic text of his "original Rough draught." The text was first submitted to Adams, then Franklin, and finally to the other two members of the committee. Before the committee submitted the declaration to Congress on June 28, they made forty-seven emendations to the document. During the ensuing congressional debates of July 1-4, 1776, Congress adopted thirty-nine further revisions to the committee draft." (http://www.constitution.org) If anything was an attempt at engaging the hearts and minds of the 13 Colonies at the time, this document certainly succeeded in its mission. ...Their tools at the time were pen and ink and parchment. Although the final document would later be typeset with lead type for a printing press to distribute to the colonies, all of the original drafts were handwritten. And today's enterprise complains about using "Review and Track Changes" at times. Can you imagine the manual revision control process? Or lack thereof? Collaborative process? Time delays? Would implementing a better process have helped our founding fathers collaborate better? Declaration of Independence rough draft below - one of many during the creation process. Great comparison across multiple versions of the document here (from http://www.ushistory.org/). While you may not be creating a new independent nation, getting your employees to engage is crucial to your success as a company in today's world. Oracle WebCenter provides the tools that power engagement. Employees that have better tools for communication, collaboration and getting their job done are more engaged employees. Better engaged employees create more engaged customers and partners.

    Read the article

  • OPN Exchange General Sessions –Fowler, Kurian & More!

    - by Kristin Rose
    With so much excitement about to take place at OPN Exchange @ OpenWorld, it’s hard to decide what to attend, where to go, who to meet and what to eat! Let us help you decide by first asking a question… How often do you get to choose between seven key Oracle Executives as they address the five biggest topics facing the industry today? After the Partner Keynote with Judson Althoff, join us for the OPN Exchange General Sessions: DATE: Sunday September 30th TIME: 3:30-4:30 pm LOCATION: Moscone South, Esplanade Level. John Fowler & Tom LaRocca (Technology for Partners: Room 306): Learn how to grow your top and bottom line by selling Oracle on Oracle. Chris Leone (Applications for Partners: Room 303): Catch the partner-only value prop, selling secrets and competitive compares to win with the Fusion Applications product family. Angelo Pruscino & Sohan DeMel (Engineered Systems for Partners: Room 301): Get the secrets to selling and implementing Oracle’s transformational Engineered Systems products. You won’t want to miss the Oracle Database Appliance Unplugged demonstration! Sonny Singh (Industry Solutions: Room 302): Develop profitable practices answering the challenges faced by companies operating in discrete industries and the opportunity represented by Machine2Machine Java. Thomas Kurian (Cloud for Partners: Room 304): Today it is all about the Cloud. Oracle offers both traditional cloud infrastructure solutions, as well as cloud platform and software services. Attend this session to learn more about Oracle’s Platform, Application, and Social cloud services. Put on your thinking caps, because these speakers are ready to blow your mind with five tracks of exclusive content catered to you, our partners. Boom! The OPN Communication Team

    Read the article

  • Mastering snow and Java development at jDays in Gothenburg

    - by JavaCecilia
    Last weekend, I took the train from Stockholm to Gothenburg to attend and present at the new Java developer conference jDays. It was professionally arranged in the Swedish exhibition hall close to the amusement park Liseberg, and we got a great deal out of the top-level presenters and hallway discussions. Understanding and Improving Your Java Process: Our main purpose was to spread information on the JVM and our monitoring tools for Java processes, so I held a crash course in the most important terms and concepts if you want to affect the performance of your Java process - from the beginning, the JVM specification, to the interpretation of heap usage graphs. For correct analysis, you also need to understand something about process memory: you need space for the Java heap (-Xms for initial size and -Xmx for max heap size), but the process memory also contains the thread stacks (sized by -Xss), JVM internal data structures used for keeping track of Java objects on the heap, method compilation/optimization, native libraries, etc. If you get long pause times, make sure to monitor your application and look at the allocation rate and the frequency of pause times. My colleague Klara Ward then held a presentation on the Java Mission Control product, the profiling and diagnostics tools suite for HotSpot, coming soon. The room was packed and the session much appreciated; Klara demonstrated four different scenarios, e.g. how to diagnose and fix latencies due to lock contention for logging. My German colleague, OpenJDK ambassador Dalibor Topic, travelled to Sweden to do the second keynote on "Make the Future Java". He let us in on the coming features and roadmaps of Java, now delivering major versions on a two-year schedule (Java 7 in 2011, Java 8 in 2013, etc.), and also let us in on where to download early versions of 8, to report problems early on. Software Development in Teams: Being a scout leader, I'm drilled in different team building and workshop techniques for creating strong groups - of course, I had to attend Henrik Berglund's session on building successful teams. He spoke about the importance of clear goals, autonomy and agreed processes. Thomas Sundberg ended the conference by doing live remote pair programming with Alex in Romania, with concrete tips for people wanting to try it out (for local collaboration, remember to wash and change clothes). Memory Master Keynote: The conference keynote was delivered by the Swedish memory master Mattias Ribbing, showing off by remembering the order of a deck of cards he'd seen once. He made it interactive by forcing the audience to learn a memory mastering technique of remembering ten ordered things by heart, asking us to shout out the order backwards - and we made it! I desperately need this; I bought the book and will get back on the subject. Continuous Delivery: The most impressive presenter was Axel Fontaine on Continuous Delivery - very well prepared slides with key images of his message, and he moved about the stage like a rock star. The topic is of course highly interesting: how to create an infrastructure enabling immediate feedback to developers and the ability to release your product several times per day. Tomek Kaczanowski delivered a funny and useful presentation on good and bad tests, providing comic relief with poorly written tests and useful rules of thumb for how to rewrite them. To conclude, we had a great time and hope to see you at jDays next year :)
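
    To make the memory flags above concrete, a launch line might look like the following sketch (the jar name and the sizes are invented placeholders, not recommendations; tune them against what your monitoring shows):

        # illustrative only - placeholder sizes and jar name
        java -Xms512m -Xmx2g -Xss1m -verbose:gc -jar myapp.jar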

    Read the article

  • Fast Data - Big Data's achilles heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves 3 key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement. Oracle Event Processing enables continuous queries to filter the Big Data fire hose, enables intelligent chained events to drive real-time service invocation, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time, so SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream. Oracle Coherence is a distributed grid caching solution used to provide very low latency access to cached data when the data is too big to fit into a single process; the data is spread around a grid architecture to provide memory-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing. The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low-latency reads. For example, large sensor networks may be generating data that needs to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, also leveraging that data using holistic MapReduce operations with your Hadoop cluster to do segmented site analysis. Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low-latency and scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service. The question then becomes how these things work together to deliver an end-to-end Fast Data solution. The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, when it comes to Big Data you often need to use them together. You may need the memory latencies of the Coherence cache, but have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream processing side, it is similar to the Coherence case.
As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to help it continue to perform in the real-time loop while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.
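
    As a rough illustration of the sliding-window idea mentioned above, a CQL-style continuous query might look something like the following sketch (the stream and column names are invented, and the exact syntax should be checked against the OEP documentation):

        -- illustrative sketch only: average price per symbol over the last 10 seconds of the stream
        SELECT symbol, AVG(price) AS avgPrice
        FROM priceStream [RANGE 10 SECONDS]
        GROUP BY symbol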

    Read the article

  • Come up with a real-world problem in which only the best solution will do (a problem from Introduction to algorithms) [closed]

    - by Mike
    EDITED (I realized that the question certainly needs a context). Problem 1.1-5 in the book by Thomas Cormen et al., Introduction to Algorithms, is: "Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is “approximately” the best is good enough." I'm interested in its first statement. And (from my understanding) it asks to name a real-world problem where only the exact solution will work, as opposed to a real-world problem where a good-enough solution will be OK. So what is the difference between an exact and a good-enough solution? Consider some physics problem, for example the simulation of fluid flow in a permeable medium. To make this simulation happen, some simplifying assumptions have to be made when deriving a mathematical model. Otherwise the model becomes overly complex and unsolvable. Virtually any particle in the universe has its influence on the fluid flow. But not all particles are equal. Those that form the permeable medium are much more influential than the ones located light years away. Then when the mathematical model needs to be solved, an exact solution can rarely be found unless the mathematical model is simple enough (which probably means the model isn't close to reality). We take an approximate numerical method and, after hours of coding and days of verification, come up with the program or algorithm which is a solution. And if the model and the algorithm give results close to the real problem by some degree, that is a good-enough solution. It's worth noting the difference between an exact solution algorithm and an exact computation result. When considering real-world problems and real-world computation machines, I believe that no solution of a physical problem that involves any calculation can be exact, because universal physical constants are represented approximately in the computer. Any numbers are represented with limited precision, at least limited by the amount of memory available to the computing machine. I can imagine plenty of problems where a good-enough, good-to-some-degree solution will work, like train scheduling, automated trading, satellite orbit calculation, health care expert systems. In those cases exact solutions can't be derived due to constraints on computation time, limitations in computer memory, or due to the nature of the problems. I googled this question and like what this guy suggests: there are kinds of mathematical problems that need exact solutions (little note here: because the question is taken from the book "Introduction to Algorithms", the term "solution" means an algorithm or a program, which in this case gives an exact answer on each input). But that's probably more of theoretical interest. So I would like to narrow down the question to: What are the real-world practical problems where only the best (exact) solution algorithm or program will do (but not the good-enough solution)? There are problems like breaking of cryptographic ciphers where only an exact solution matters in practice, and again in practice the process of deciphering without knowing a secret should take a reasonable amount of time. Returning to the original question, this is a problem where a good-enough (fast-enough) solution will do; there's no practical need for an instant crack, though it would be desirable. So the quality of "best" can be understood in any sense: exact, fastest, requiring the least memory, having minimal possible network traffic, etc. And still I want this question to be theoretical if possible.
In the sense that there may be an example of a computer X that has a limited resource R of amount Y, where the best solution to problem P is the one that takes no more than the available Y for inputs of size N*Y. But that's the problem of finding a solution for P on computer X, which is... well, good enough. My final thought is that we live in a world where programming solutions to practical problems are only required to be good enough. In rare cases really very, very good, but still not the best ones. Isn't it? :) If it's not, can you provide an example? Or can you name any such unsolved problem of practical interest?
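
    To make the exact-versus-good-enough contrast a bit more concrete, here is a small toy sketch (not from the book or from the question; the numbers are invented): an exhaustive search that is guaranteed optimal next to a greedy heuristic that is fast but can miss the optimum.

        // Toy problem: pick numbers from a set so their sum gets as close to a
        // target as possible without exceeding it.
        using System;
        using System.Linq;

        class ExactVsGoodEnough
        {
            // Exact: try every subset (exponential time, but guaranteed optimal).
            static int BestSumExact(int[] items, int target)
            {
                int best = 0;
                for (int mask = 0; mask < (1 << items.Length); mask++)
                {
                    int sum = 0;
                    for (int i = 0; i < items.Length; i++)
                        if ((mask & (1 << i)) != 0) sum += items[i];
                    if (sum <= target && sum > best) best = sum;
                }
                return best;
            }

            // Good enough: greedy, largest items first (fast, may miss the optimum).
            static int BestSumGreedy(int[] items, int target)
            {
                int sum = 0;
                foreach (int item in items.OrderByDescending(x => x))
                    if (sum + item <= target) sum += item;
                return sum;
            }

            static void Main()
            {
                int[] items = { 6, 5, 5 };
                int target = 10;
                Console.WriteLine(BestSumExact(items, target));  // 10 (5 + 5) - optimal
                Console.WriteLine(BestSumGreedy(items, target)); // 6 - fast, but not optimal
            }
        }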

    Read the article

  • Cannot get .NET 4.5 RC to work

    - by ThomasD
    I have installed .NET 4.5 RC from http://msdn.microsoft.com/en-us/netframework/hh854779.aspx because I would like to use the new spatial features when developing with Visual Web Developer 2010 Express. But when I want to change the target framework to .NET 4.5 in the project properties, it is not there. I have checked the directory in C:\Windows\Microsoft.NET\Framework64 and can see that there is no v4.5 folder, but the v4.0 directory has been updated with a timestamp corresponding to when I installed v4.5. The version of the v4.0 directory is v4.0.30319. I am running Windows 7 on my computer. Any ideas why I cannot find v4.5? Thanks, Thomas. UPDATE: Based on the comment below I have found out that I am running 4.5. For others reading this post: .NET 4.5 replaces the files in the .NET 4.0 directory but without renaming the directory to .NET 4.5 (a bit confusing). To check whether your assemblies have been updated, check the product version of e.g. system.dll (right-click - Details). According to this post http://www.west-wind.com/weblog/posts/2012/Mar/13/NET-45-is-an-inplace-replacement-for-NET-40 if the product version's build number is above 17000, it is running 4.5.
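
    For anyone who prefers to check programmatically rather than through Explorer, a small sketch like the one below should work. The registry path is the standard .NET 4.x setup key; whether the RC build already writes the Release value is an assumption worth verifying on your own machine.

        // Sketch: the "Release" value under the v4 Full key is only written when
        // .NET 4.5 (or later) has been installed over the in-place 4.0 directory.
        using System;
        using Microsoft.Win32;

        class NetVersionCheck
        {
            static void Main()
            {
                RegistryKey key = Registry.LocalMachine.OpenSubKey(
                    @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full");
                object release = (key != null) ? key.GetValue("Release") : null;
                if (release != null)
                    Console.WriteLine("Release = " + release + " -> 4.5 or later is installed");
                else
                    Console.WriteLine("Only .NET 4.0 (or nothing) found under the v4 key");
                if (key != null) key.Close();
            }
        }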

    Read the article

  • How to prepopulate an Outlook MailItem and avoiding a com Exception from the object model guard

    - by torrente
    Hi, I work for a company that develops a CRM tool and offers integration with MS Office (2003 & 2007) from Windows XP to 7. (I'm working on Win7.) My task is to call an Outlook instance (using C#) from this CRM tool when the user wants to send an email, and prepopulate it with data from the CRM tool (email, recipient, etc.). All of this already works just fine. The problem I'm having is that Outlook's "object model guard" is throwing a COM exception (Operation aborted (Exception from HRESULT: 0x80004004 (E_ABORT))) the moment I try to read a protected value from the MailItem (such as mail.HTMLBody). Example snippet: using MSOutlook = Microsoft.Office.Interop.Outlook; //untrusted instance _outlook = new MSOutlook.Application(); MSOutlook.MailItem mail = (MSOutlook.MailItem)_outlook.CreateItem(MSOutlook.OlItemType.olMailItem); //this is where the exception occurs string outlookStdHTMLBody = mail.HTMLBody; I've done quite a bit of reading and know that my Outlook instance (created by using new Application) is considered untrusted and therefore the "OMG" (object model guard) kicks in. I do have a workaround for development: I'm running VS2010 as Administrator, and if I run Outlook as Administrator as well, all is good. I suppose this is due to them having the same integrity level (high) and the UAC(?) not complaining. But that just ain't the way to go for deployment. Now the question is: is there a way to obtain a trusted instance of Outlook so that I can avoid this exception? I've already read that when developing an Office Add-In using VSTO one can obtain a trusted instance from the OnComplete event and/or using "ThisAddIn", but I "merely" want to start an Outlook instance and prepopulate it, and do not want to develop an Add-In since this is not the requirement. And to make it clear - I have no problem with pop-ups informing the user that Outlook is being accessed - I just want to get rid of the exception! So how can I get around this problem using code? Any help is highly appreciated! Thomas

    Read the article

  • using ontouch to zoom in

    - by user357032
    I have used some sample code and am trying to tweak it to allow the user to touch the screen and zoom in. The code runs fine with no errors, but when I touch the screen nothing happens.

package com.thomas.zoom;

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.drawable.Drawable;
import android.view.KeyEvent;
import android.view.MotionEvent;
import android.view.View;

public class Zoom extends View {
    private Drawable image;
    private int zoomControler = 20;

    public Zoom(Context context) {
        super(context);
        image = context.getResources().getDrawable(R.drawable.icon);
        setFocusable(true);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // TODO Auto-generated method stub
        super.onDraw(canvas);
        // here u can control the width and height of the images........ this line is very important
        image.setBounds((getWidth()/2)-zoomControler, (getHeight()/2)-zoomControler,
                        (getWidth()/2)+zoomControler, (getHeight()/2)+zoomControler);
        image.draw(canvas);
    }

    public boolean onTouch(int action, MotionEvent event) {
        action = event.getAction();
        if (action == MotionEvent.ACTION_DOWN) {
            zoomControler += 10;
        }
        invalidate();
        return true;
    }
}
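
    For what it's worth, a likely reason nothing happens is the touch method's signature: android.view.View never calls an onTouch(int, MotionEvent) method, so the code above is never invoked. A minimal sketch of the usual fix, replacing that method inside the same Zoom class, is to override onTouchEvent (or, alternatively, register an OnTouchListener):

        // sketch: override the framework callback instead of defining a custom onTouch method
        @Override
        public boolean onTouchEvent(MotionEvent event) {
            if (event.getAction() == MotionEvent.ACTION_DOWN) {
                zoomControler += 10;   // grow the drawn bounds on each touch
                invalidate();          // redraw with the new size
            }
            return true;
        }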

    Read the article

  • Getting DirectoryNotFoundException when trying to Connect to Device with CoreCon API

    - by ageektrapped
    I'm trying to use the CoreCon API in Visual Studio 2008 to programmatically launch device emulators. When I call device.Connect(), I inexplicably get a DirectoryNotFoundException. I get it if I try it in PowerShell or in C# Console Application. Here's the code I'm using: static void Main(string[] args) { DatastoreManager dm = new DatastoreManager(1033); Collection<Platform> platforms = dm.GetPlatforms(); foreach (var p in platforms) { Console.WriteLine("{0} {1}", p.Name, p.Id); } Platform platform = platforms[3]; Console.WriteLine("Selected {0}", platform.Name); Device device = platform.GetDevices()[0]; device.Connect(); Console.WriteLine("Device Connected"); SystemInfo info = device.GetSystemInfo(); Console.WriteLine("System OS Version:{0}.{1}.{2}", info.OSMajor, info.OSMinor, info.OSBuildNo); Console.ReadLine(); } My question: Does anyone know why I'm getting this error? I'm running this on WinXP 32-bit, plain jane Visual Studio 2008 Pro. I imagine it's some config issue since I can't do it from a Console app or PowerShell. Here's the stack trace as requested: System.IO.DirectoryNotFoundException was unhandled Message="The system cannot find the path specified.\r\n" Source="Device Connection Manager" StackTrace: at Microsoft.VisualStudio.DeviceConnectivity.Interop.ConManServerClass.ConnectDevice() at Microsoft.SmartDevice.Connectivity.Device.Connect() at ConsoleApplication1.Program.Main(String[] args) in C:\Documents and Settings\Thomas\Local Settings\Application Data\Temporary Projects\ConsoleApplication1\Program.cs:line 23 at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args) at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args) at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() at System.Threading.ThreadHelper.ThreadStart_Context(Object state) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at System.Threading.ThreadHelper.ThreadStart() InnerException:

    Read the article

  • How to get a volume measurement of iPhone recording in dB, with a limit of at least 120dB

    - by Cyber
    Hello, I am trying to make a simple volume meter for the iPhone. I want the volume displayed in dB. When using this tutorial, I am only getting measurements up to 78 dB. I've read that that is because the dBFS range for 16-bit audio recordings is only 96 dB. I tried modifying this piece of code in the init function: dataFormat.mSampleRate = 44100.0f; dataFormat.mFormatID = kAudioFormatLinearPCM; dataFormat.mFramesPerPacket = 1; dataFormat.mChannelsPerFrame = 1; dataFormat.mBytesPerFrame = 2; dataFormat.mBytesPerPacket = 2; dataFormat.mBitsPerChannel = 16; dataFormat.mReserved = 0; I changed the value of mBitsPerChannel, hoping to increase the bit depth of the recording: dataFormat.mBitsPerChannel = 32; With that variable set to 32, the "mAveragePower" function returns only 0. So, how can I measure more decibels? All my code is practically the same as in the tutorial I posted above. Thanks in advance, Thomas
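
    A note on the 96 dB ceiling mentioned above: the dynamic range of linear PCM works out to roughly 6.02 dB per bit, so 16-bit samples top out near 96 dB no matter how the meter is scaled. (Also worth checking, as an assumption on my part: raising mBitsPerChannel to 32 without also raising mBytesPerFrame and mBytesPerPacket leaves the stream description internally inconsistent, which could explain the zero readings.) A tiny sketch of the arithmetic:

        /* Rough sketch: why 16-bit linear PCM tops out near 96 dBFS.
           Dynamic range ~ 20 * log10(2^bits) ~ 6.02 dB per bit. */
        #include <math.h>
        #include <stdio.h>

        int main(void) {
            int bits = 16;
            double range_db = 20.0 * log10(pow(2.0, bits));
            printf("%d-bit PCM dynamic range: %.1f dB\n", bits, range_db); /* ~96.3 */
            return 0;
        }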

    Read the article

  • String method crashes program...

    - by TimothyTech
    Alright, so I have two identical string methods... string CreateCust() { string nameArray[] = {"Tom","Timo","Sally","Kelly","Bob","Thomas","Samantha","Maria"}; int d = rand() % (8 - 1 + 1) + 1; string e = nameArray[d]; return e; } string CreateFood() { string nameArray[] = {"spagetti", "ChickenSoup", "Menudo"}; int d = rand() % (3 - 1 + 1) + 1; string f = nameArray[d]; return f; } However, no matter what I do in the guts of CreateFood, it will always crash. I created a test chassis for it, and it always fails at the cMeal = CreateFood(); line: Customer Cnow; cout << "test1" << endl; cMeal = Cnow.CreateFood(); cout << "test1" << endl; cCustomer = Cnow.CreateCust(); cout << "test1" << endl; I even switched CreateCust with CreateFood and it still fails at the CreateFood function... NOTE: if I make CreateFood an int method, it does work...
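
    A guess at the culprit, for what it's worth: rand() % (3 - 1 + 1) + 1 produces 1, 2 or 3, but the only valid indices into a 3-element array are 0-2, so nameArray[3] reads past the end of the array and the string copy blows up (which would also explain why the int version "works": copying a garbage int touches no pointers). CreateCust has the same off-by-one (1-8 against valid indices 0-7) but only fails when 8 happens to come up. A corrected sketch of CreateFood:

        #include <cstdlib>
        #include <string>
        using std::string;

        // Index with rand() % size so the result stays in the valid range 0..size-1.
        string CreateFood() {
            static const string nameArray[] = {"spagetti", "ChickenSoup", "Menudo"};
            return nameArray[rand() % 3];   // index is 0, 1 or 2
        }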

    Read the article

  • Perform Grouping of Resultsets in Code, not on Database Level

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of: Category Column2 Column3 A 2 3.50 A 3 2 B 3 2 B 1 5 ... I need to group the resultset based on the Category column and sum the values for Column2 and Column3. I have to do it in code because I cannot perform the grouping in the SQL query that gets the data, due to the complexity of the query (long story). This grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that would handle any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it? EDIT I have attempted to group the result in SQL using the exact same grouping query suggested by Thomas Levesque, and both times our entire RDBMS crashed trying to process the query. I was under the impression that LINQ was not available until .NET 3.5. This is a .NET 2.0 web application, so I did not think it was an option. Am I wrong in thinking that? EDIT Starting a bounty because I believe this would be a good technique to have in the toolbox, no matter where the different resultsets are coming from. I believe knowing the most concise way to group any 2 somewhat similar sets of data in code (without .NET LINQ) would be beneficial to more people than just me.
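
    Since the post ends up asking for a LINQ-free (.NET 2.0) way to group in code, here is one minimal sketch of the usual pattern: walk the rows once and accumulate per-category totals in a Dictionary. The column names and types are assumptions based on the sample rows above.

        // Sketch only: group a DataTable by Category and sum two columns (no LINQ, .NET 2.0 friendly).
        using System;
        using System.Collections.Generic;
        using System.Data;

        class Totals { public int Column2; public decimal Column3; }

        static class Grouping
        {
            public static Dictionary<string, Totals> GroupByCategory(DataTable table)
            {
                Dictionary<string, Totals> groups = new Dictionary<string, Totals>();
                foreach (DataRow row in table.Rows)
                {
                    string category = (string)row["Category"];
                    Totals t;
                    if (!groups.TryGetValue(category, out t))
                    {
                        t = new Totals();       // first time we see this category
                        groups.Add(category, t);
                    }
                    t.Column2 += Convert.ToInt32(row["Column2"]);
                    t.Column3 += Convert.ToDecimal(row["Column3"]);
                }
                return groups;
            }
        }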

    Read the article

  • should variable be released or not? iphone-sdk

    - by psebos
    Hi, in the following piece of code (from a book), data is an NSDictionary *data; defined in the header (no property). In the viewDidLoad of the controller the following occurs: - (void)viewDidLoad { [super viewDidLoad]; NSArray *keys = [NSArray arrayWithObjects:@"home", @"work", nil]; NSArray *homeDVDs = [NSArray arrayWithObjects:@"Thomas the Builder", nil]; NSArray *workDVDs = [NSArray arrayWithObjects:@"Intro to Blender", nil]; NSArray *values = [NSArray arrayWithObjects:homeDVDs, workDVDs, nil]; data = [[NSDictionary alloc] initWithObjects:values forKeys:keys]; } Since I am really new to Objective-C, can someone explain to me why I do not have to retain the variables keys, homeDVDs, workDVDs and values prior to exiting the function? I would expect, prior to the data allocation, something like: [keys retain]; [homeDVDs retain]; [workDVDs retain]; [values retain]; or not? Does initWithObjects copy (recursively) all objects into a new table? Assuming we did not have the last line (the data allocation), should we release all the NSArrays prior to exiting the function (or can we safely assume that all the NSArrays will be autoreleased, since there is no alloc for each one)? Thanks!!!!

    Read the article

  • Making RDoc Ruby Gem Default on Mac OS X

    - by jkale
    Hey all, I've recently installed RDoc version (2.4.3) through Ruby gems to replace the one shipped with Mac OS X (version 1.0.1). Unfortunately, I can still only use RDoc 1.0.1 when I call run "rdoc" at the command line. rdoc -v returns: RDoc V1.0.1 - 20041108 I tried amending the $PATH variable to point the first entry to the RDoc 2.4.3 folder but no luck. I couldn't find anything about this online either, so I thought I'd ask here. Cheers! Update: Running "gem list -d --version 1.0.1 rdoc" returns: *** LOCAL GEMS *** rdoc (2.4.3) Authors: Eric Hodel, Dave Thomas, Phil Hagelberg, Tony Strauss Rubyforge: http://rubyforge.org/projects/rdoc Homepage: http://rdoc.rubyforge.org Installed at: /usr/local/lib/ruby/gems/1.8 RDoc is an application that produces documentation for one or more Ruby source files Therefore, it's definitely the Mac OSX version of RDoc that's interfering with the Gems version. Update 2: I found out, using: `bash --debugger rdoc` that the old version of RDoc was in /opt/local/bin. I deleted it and added my gems directory to my $PATH `export PATH=/usr/local/lib/ruby/gems/1.8/gems/` I now have a fresh working copy of the latest RDoc!

    Read the article

  • How do I [legally] get the current first responder on the screen on an iPhone?

    - by Anthony D
    I submitted my app a little over a week ago and got the dreaded rejection email today. It reads as follows: Dear -----------, Thank you for submitting --------- to the App Store. Unfortunately it cannot be added to the App Store because it is using a private API. Use of non-public APIs, which as outlined in the iPhone Developer Program License Agreement section 3.3.1 is prohibited: "3.3.1 Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs." The non-public API that is included in your application is firstResponder. Regards, iPhone Developer Program Now, the offending API call is actually a solution I found here on SO: UIWindow *keyWindow = [[UIApplication sharedApplication] keyWindow]; UIView *firstResponder = [keyWindow performSelector:@selector(firstResponder)]; So this is my question; How do I get the current first responder on the screen? I'm looking for a legal way that won't get my app rejected. Thanks. I figured this out based on the solution provided by Thomas below. Here is what the final code looks like: @implementation UIView (FindFirstResponder) - (UIView *)findFirstResonder { if (self.isFirstResponder) { return self; } for (UIView *subView in self.subviews) { UIView *firstResponder = [subView findFirstResonder]; if (firstResponder != nil) { return firstResponder; } } return nil; } @end

    Read the article

  • sorting names in a linked list

    - by sil3nt
    Hi there, I'm trying to sort names into alphabetical order inside a linked list but am getting a run time error. what have I done wrong here? #include <iostream> #include <string> using namespace std; struct node{ string name; node *next; }; node *A; void addnode(node *&listpointer,string newname){ node *temp; temp = new node; if (listpointer == NULL){ temp->name = newname; temp->next = listpointer; listpointer = temp; }else{ node *add; add = new node; while (true){ if(listpointer->name > newname){ add->name = newname; add->next = listpointer->next; break; } listpointer = listpointer->next; } } } int main(){ A = NULL; string name1 = "bob"; string name2 = "tod"; string name3 = "thomas"; string name4 = "kate"; string name5 = "alex"; string name6 = "jimmy"; addnode(A,name1); addnode(A,name2); addnode(A,name3); addnode(A,name4); addnode(A,name5); addnode(A,name6); while(true){ if(A == NULL){break;} cout<< "name is: " << A->name << endl; A = A->next; } return 0; }
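
    For what it's worth, the crash most likely comes from the else branch above: the while (true) loop walks listpointer itself off the end of the list (so listpointer->name eventually dereferences NULL when newname sorts after everything), and the new node is never linked into the chain. A sketch of a more conventional ordered insert, assuming the same node struct, might look like this:

        // Sketch of an ordered insert: the list head only changes when inserting at
        // the front, traversal uses a local pointer, and the new node is always linked in.
        void addnode(node *&listpointer, string newname) {
            node *temp = new node;
            temp->name = newname;

            if (listpointer == NULL || listpointer->name > newname) {
                temp->next = listpointer;   // insert at the front (or into an empty list)
                listpointer = temp;
                return;
            }
            node *cur = listpointer;
            while (cur->next != NULL && cur->next->name <= newname) {
                cur = cur->next;            // find the last node that sorts before newname
            }
            temp->next = cur->next;         // splice the new node in after it
            cur->next = temp;
        }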

    Read the article

  • pdfmark for docinfo metadata in pdf is not accepting accented characters in Keywords or Subject

    - by rpilkey
    I am inserting metadata into PostScript files with a program, to be distilled to PDF with Adobe Distiller. I am using this code that I grabbed from Thomas Merz's "Web Publishing with Acrobat-PDF": /pdfmark where {pop} {userdict /pdfmark /cleartomark load put} ifelse [ /Title (mot accenté) /Author (mot accenté) /Subject (mot accenté) /Keywords (mot accenté) /DOCINFO pdfmark When you look at the metadata, the accented characters turn into "?" in the Subject and Keywords fields, but not in the Title and Author fields. The characters are the same, ASCII 233. I tried replacing them with octal encoding (\351), which came out the same (Title and Author okay, Subject and Keywords messed up). The file encoding is Latin-1, Unix EOL. I found a mention on the Adobe forums, but the answer didn't make sense to me: http://forums.adobe.com/message/1165593 I changed the encoding to UTF-8, inserted the characters binarily (in Vim: <Ctrl-v>u00e9), no change. I tried inserting the BOM in a few places; it didn't work. This is with Acrobat Pro 9; I didn't notice this problem with Acrobat Pro 7. Does anybody know of a workaround to get the accented characters into ALL the metadata fields when modifying a PostScript file, or tell me if I'm doing it wrong? It seems weird that different fields would not accept the same bytes.

    Read the article

  • Encapsulate update method inside of object or have method which accepts an object to update

    - by Tom
    Hi, I actually have 2 questions related to each other: I have an object (class) called, say, MyClass which holds data from my database. Currently I have a list of these objects (List<MyClass>) that resides in a singleton in a "communal area". I feel it's easier to manage the data this way, and I fail to see how passing a class around from object to object is beneficial over a singleton (I would be happy if someone can tell me why). Anyway, the data may change in the database from outside my program, so I have to update the data every so often. To update the list of MyClass I have a method called, say, Update, written in another class, which accepts a list of MyClass and updates all the instances of MyClass in the list. However, would it be better instead to encapsulate the Update() method inside the MyClass object, so that I would say foreach(MyClass obj in MyClassList) { obj.Update(); } What is the better implementation, and why? The Update method requires an XML reader. I have written an XML reader class which is basically a wrapper over the standard XML reader the language natively provides, and which adds application-specific data collection. Should the XML reader class be in any way in the "inheritance path" of the MyClass object - should MyClass inherit from the XML reader just because it uses a few of its methods? I can't see why it should. I don't like the idea of declaring an instance of the XML reader class inside of MyClass either; a MyClass object is meant to be a simple "record" from the database, and I feel giving it loads of methods and other object instances is a bit messy. Perhaps my XML reader class should be static, but C#'s native XmlReader isn't static? Any comments would be greatly appreciated. Thanks, Thomas

    Read the article

  • making clean page via page.tpl.php

    - by user360051
    I have a Drupal module creating a page via hook_menu(). I am trying to make it so the page has no extraneous html output, only what I want. You can view the page here, http://www.thomashansen.me/chat/thomas. If you look at the source, you can see a strange script tag at the end. My page-chat.tpl.php looks like this, <?php // $Id$ ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="<?php print $language->language ?>" lang="<?php print $language->language ?>" dir="<?php print $language->dir ?>"> <head> </head> <body> <?php print $content; ?> </body> </html> Where is that script tag coming from? and how do I get rid of it? If you need more information just ask.

    Read the article

  • MySQL count and sum from two different tables

    - by Agent_x
    Hi all, I have a problem with some queries in PHP and MySQL. I have 2 different tables with one field in common:

table 1
id | hits | num_g | cats | usr_id | active
1  | 10   | 11    | 1    | 53     | 1
2  | 13   | 16    | 3    | 53     | 1
1  | 10   | 22    | 1    | 22     | 1
1  | 10   | 21    | 3    | 22     | 1
1  | 2    | 6     | 2    | 11     | 1
1  | 11   | 1     | 1    | 11     | 1

table 2
id | usr_id | points
1  | 53     | 300

Now I use this statement to sum the totals from table1 (every id counts +1 too):

SELECT usr_id, COUNT( id ) + SUM( num_g + hits ) AS tot_h FROM table1 WHERE usr_id != '0' GROUP BY usr_id ASC LIMIT 0 , 15

and I get the total for each usr_id:

usr_id | tot_h
53     | 50
22     | 63
11     | 20

Up to here all is OK. Now I have a second table with extra points (table2), and I try this:

SELECT usr_id, COUNT( id ) + SUM( num_g + hits ) + (SELECT points FROM table2 WHERE usr_id != '0' ) AS tot_h FROM table1 WHERE usr_id != '0' GROUP BY usr_id ASC LIMIT 0 , 15

but it seems to add the 300 extra points to all users:

usr_id | tot_h
53     | 350
22     | 363
11     | 320

Now, how can I get the total like in the first try, but plus the second table, in one statement? Right now I have just one entry in the second table, but there can be more. Thanks for all the help.

===============================================================================

Hi Thomas, thanks for your reply. I think it is in the right direction, but I'm getting weird results, like:

usr_id | tot_h
22     | NULL   <== I think the NULL is because that usr_id has no value in table2
53     | 1033

It's like the second user is getting all the values. Then I try this one:

SELECT table1.usr_id, COUNT( table1.id ) + SUM( table1.num_g + table1.hits + table2.points ) AS tot_h FROM table1 LEFT JOIN table2 ON table2.usr_id = table1.usr_id WHERE table1.usr_id != '0' AND table2.usr_id = table1.usr_id GROUP BY table1.usr_id ASC

Same result: I just get the sum of all values and not a total per user. I need something like this result:

usr_id | tot_h
53     | 53    <==== plus 300 points in table2
22     | 56    <==== plus 100 points in table2

///////// the result I need ////////////

usr_id | tot_h
53     | 353   <==== plus 300 points from table2
22     | 156   <==== plus 100 points from table2

I think the structure needs to be something like this (pseudo statements ;)): from table1, count all ids to get the number of records per usr_id, then sum hits + num_g; and from table2, select the extra points where the usr_id is the same as in table1, and get the result:

usr_id | tot_h
53     | 353
22     | 156
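
    One way to structure this in a single statement (a sketch, not taken from the thread) is to aggregate each table on its own and then join the two aggregates, so the extra points are added once per user rather than once per table1 row, and users with no row in table2 still keep their base total:

        SELECT t1.usr_id,
               t1.tot_h + COALESCE(t2.extra, 0) AS tot_h
        FROM  (SELECT usr_id, COUNT(id) + SUM(num_g + hits) AS tot_h
               FROM table1
               WHERE usr_id != '0'
               GROUP BY usr_id) AS t1
        LEFT JOIN
              (SELECT usr_id, SUM(points) AS extra
               FROM table2
               GROUP BY usr_id) AS t2
          ON t2.usr_id = t1.usr_id
        ORDER BY t1.usr_id ASC
        LIMIT 0, 15;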

    Read the article

  • Emails not being delivered

    - by Tomtiger11
    Comment pointed out that this may fix my problem, and it did: Why don't mails show up in the recipient's mailspool? I use Postfix with Dovecot, and when I send an email from my gmail to my server, it is received at the server, but not at my email client using POP3. I can verify it being received at the server using the mail command. This is my main.cf: queue_directory = /var/spool/postfix command_directory = /usr/sbin daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix mail_owner = postfix myhostname = tom4u.eu myorigin = $myhostname inet_interfaces = all inet_protocols = all unknown_local_recipient_reject_code = 550 relay_domains = $mydomain alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases debug_peer_level = 2 debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5 sendmail_path = /usr/sbin/sendmail.postfix newaliases_path = /usr/bin/newaliases.postfix mailq_path = /usr/bin/mailq.postfix setgid_group = postdrop html_directory = no manpage_directory = /usr/share/man sample_directory = /usr/share/doc/postfix-2.6.6/samples readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES smtpd_tls_cert_file = /etc/postfix/certs/cert.pem milter_protocol = 2 milter_default_action = accept smtpd_milters = inet:localhost:8891 non_smtpd_milters = inet:localhost:8891 smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous smtpd_sasl_local_domain = $myhostname smtpd_recipient_restrictions = reject_non_fqdn_recipient,permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination,permit broken_sasl_auth_clients = yes smtpd_sasl_type = dovecot smtpd_sasl_path = private/auth If you could help me with this, I'd be most grateful, if you need any more information, please ask. var/log/maillog: May 30 22:44:25 tom4u postfix/smtpd[18626]: connect from mail-we0-f181.google.com[74.125.82.181] May 30 22:44:25 tom4u postfix/smtpd[18626]: 318F679B7F: client=mail-we0-f181.google.com[74.125.82.181] May 30 22:44:25 tom4u postfix/cleanup[18631]: 318F679B7F: message-id=<CAA_0zdxY-WUFGOC57K_yVn0G+5hN=8KSXuohJqMDB5Rm7bqu8w@mail.gmail.com> May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: mail-we0-f181.google.com [74.125.82.181] not internal May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: not authenticated May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: DKIM verification successful May 30 22:44:25 tom4u opendkim[15006]: 318F679B7F: s=20120113 d=gmail.com SSL May 30 22:44:25 tom4u postfix/qmgr[16282]: 318F679B7F: from=<[email protected]>, size=1720, nrcpt=1 (queue active) May 30 22:44:25 tom4u postfix/smtpd[18626]: disconnect from mail-we0-f181.google.com[74.125.82.181] May 30 22:44:25 tom4u postfix/local[18632]: 318F679B7F: to=<[email protected]>, relay=local, delay=0.17, delays=0.12/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to mailbox) May 30 22:44:25 tom4u postfix/qmgr[16282]: 318F679B7F: removed May 30 22:45:32 tom4u dovecot: pop3-login: Login: user=<tom>, method=PLAIN, rip=SNIP, lip=176.31.127.165, mpid=18679 May 30 22:45:32 tom4u dovecot: pop3(tom): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0 May 30 22:46:32 tom4u dovecot: pop3-login: Login: user=<tom>, method=PLAIN, rip=SNIP, lip=176.31.127.165, mpid=18725 May 30 22:46:32 tom4u dovecot: pop3(tom): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0

    Read the article

  • SQL Saturday #44 Huntington Beach Recap

    What a great day. It was long and tiring, but rewarding in so many ways. On Sunday morning, I was driving home and I decided to take the Pacific Coast Highway from Huntington Beach. It was a great chance to exhale and just enjoy the sun and smells of the beach (I really love SoCal sometimes). And for future reference for all you speakers, the beach and ocean are only 5 minutes from the SQL Saturday location. I just couldn't help noticing the shocking number of high-priced cars on the road (4 Bentleys, 3 Ferraris, 1 Aston Martin, 3 Maseratis, 1 Rolls Royce, and 2 Lamborghinis). It made me think about this: Price of all those cars: $150,000+. Impact on the ability of people to learn: Priceless. We have positively impacted the education, knowledge, and capabilities of not only our attendees, but also all of their companies and the people they might help as well. That is just staggering and something to be immensely proud of. To all of my fellow community leaders, I salute you. So let's talk about the event. Overall: We had over 220 people register for the event and 180+ people attend. I was shooting for the magical 200 number, but I guess it just gives us more motivation to make it even bigger and better next time. We had a few snags along the way, but what event doesn't, and I think everything turned out great. I did not hear any negative comments and heard lots of positive comments, along with people asking when the next one is going to be (more on that later). Location - Golden West College: We could not have asked for a better partner for the event. Herb Cohen from Golden West College was the wizard behind the curtains. From the beginning, he was our advocate to the GWC Board and was instrumental in getting our event approved. The day of the event, Herb was a HUGE help getting any and all logistics that we needed taken care of. In the craziness of the early morning registration crush it was a big help knowing that he and Bret Stateham (Blog | Twitter) were taking care of testing projectors in all the rooms. Anything we needed, he was there for, and he was even proactive in getting some things that I had not even thought of (i.e. a dumpster for all of our garbage). I cannot thank Herb enough, along with other members of the GWC staff including Minnie Higgins of the Career and Technical Education Division office, Jack Taylor, public safety, and Ron Pryor, Tech Services Support. And last, but not least, the wireless on campus was absolutely FANTASTIC! Some lessons learned: Unless you are a glutton for punishment, as I no doubt am, you most certainly want to give yourself more than six weeks to plan the event. I am lucky that I have a very understanding wife and had a wonderful set of co-coordinators helping me out. A big thanks goes out to Phil, Marlon (Blog | Twitter), Nitin (Twitter), Thomas (Blog | Twitter), Bret (Blog | Twitter), Ben, and Laurie. Thankfully, the sponsor and speaker community was hugely supportive and we were able to fill out the entire event with speakers and sponsors. I have to say that there is not a lot that I would change after this year's event. There are obviously going to be some things that we can do better or differently next time, but overall I think it was a great event and I was more than happy with the response we received from the community. Sponsors: We obviously could not have put together our event without our sponsors, so I certainly have to show them some love.
Platinum Sponsors: Quest Software (http://www.quest.com), My Space (http://www.myspace.com/)
Gold: Strategy Companion (http://www.strategycompanion.com)
Silver: Fusion-IO (http://www.fusionio.com)
Bronze: WestClinTech (http://westclintech.com), Professional Association For SQL Server (http://www.sqlpass.org), Attunity (http://www.attunity.com), Sharepoint 360 (http://www.sharepoint360.com)
Some additional thanks: Andy Warren (Blog | Twitter) - always there to answer my questions and help out when I had issues or questions with the website. The amount of work that he and everyone else put into SQL Saturday is amazing. What a great gift to the community! Einstein Bros. Bagels - they were our breakfast vendor and arrived perfectly on time with yummy bagels, sweets and, most importantly, coffee. Luccis Deli (http://www.luccisdeli.com) - Luccis was our lunch vendor. They were great to work with and the food was excellent; they worked with us to give us a great price. Heard lots of great comments about the lunches - definitely not your ordinary box lunch. Moving Forward: Unfortunately, the work does not end after the event. We have a few things to clear up, such as surveys, sponsor stuff, presentations uploaded to the website, expense reimbursement, stuff like that. Hopefully, all that should be cleared up within the next couple of weeks. After that, as a group we are going to get together and decide what our next steps are. We definitely want to keep some of the momentum that we are building as a SQL community and channel that into future SQL Saturdays and other types of community events. In the meantime, for additional training be sure to check out your local user group and PASS. San Diego SQL Server Users Group ( http://www.sdsqlug.org/home/index.cfm ) Orange County SQL Server Users Group ( http://www.sqloc.com/ ) L.A. SQL Server Users Group ( http://www.sql.la/ ) SQL PASS ( http://www.sqlpass.org/ ) 24 Hours of PASS ( http://www.sqlpass.org/24hours/2010/ ) So stay tuned, there will be more events to come in SoCal!

    Read the article

  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons. Relational reports have a predefined structure, and end users cannot change it. They are simple for end users to use. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users. Any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures. However, the development is more complex because a developer is needed to create these semantic models. For OLAP, you typically use specialized database management systems. You get lightning speed of analyses. End users can use rich and thin clients to interactively change the structure of the report; typically, they do it graphically. However, the development of an OLAP system is often quite complex. It involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process. With data mining, a structure is not selected in advance; it searches for the structure. As a result, data mining can give you the most valuable results, because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes. Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the “I” in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events. Books Multiple authors: SQL Server MVP Deep Dives – I wrote an introductory data mining chapter there. Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance – you can find a good overview of a complete BI solution, including data mining, in this book. Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 – can’t miss this book if you want to mine your data with SQL Server tools. Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management – data mining from both a business and a technical perspective. Dorian Pyle: Data Preparation for Data Mining – an in-depth book about data preparation. Thomas and Ronald Wonnacott: Introductory Statistics – if you thought that you could get away without statistics, then you are not serious about data mining.
Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques – in-depth explanation of the most popular data mining algorithms. Michael Berry and Gordon Linoff: Data Mining Techniques – another book that explains data mining algorithms, more fro a business perspective. Paolo Guidici: Applied Data Mining – very mathematical book, only if you enjoy statistics and mathematics in general. Forthcoming presentations I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC: Wednesday, October 16th, 2013 - Fraud Detection: Notes from the Field – I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects. Friday, October 18th: Excel 2013 Advanced Analytics – I am focusing on Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel. Sinergija 2013, Belgrade, Serbia Tuesday, October 22nd: Excel 2013 Analytics to the Max – another presentation focusing on the most advanced analytics you can get in Excel. SQL Rally Amsterdam, Netherlands Thursday, November 7th: Advanced Analytics in Excel 2013 – and again I am presenting about data mining in Excel. Why three different titles for the same presentation? I don’t know, I guess I forgot the name I proposed every time right after I sent the proposal. Courses Data Mining with SQL Server 2012 – I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver in a shorter seminar way, you can contact your closes SolidQ subsidiary, or, of course, me directly on addresses [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well. OK, now you know: no more excuses, start learning data mining, get the most out of your data
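    As a quick, hypothetical illustration of the point above that a mining model's structure is learned from its training attributes rather than designed up front, here is a minimal sketch in Python with scikit-learn (not the SQL Server Data Mining or Excel add-in tools the post itself discusses); the customer attributes, values and the "hidden" rule are invented purely for demonstration.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Synthetic customer table; the attribute names and values are invented for this sketch.
        rng = np.random.default_rng(0)
        n = 500
        age = rng.integers(18, 75, n)
        yearly_income = rng.integers(20_000, 120_000, n)
        num_children = rng.integers(0, 4, n)

        # A pattern hidden in the data that no predefined report layout would encode:
        # customers over 40 with income above 60,000 tend to buy a bike.
        buys_bike = ((age > 40) & (yearly_income > 60_000)).astype(int)

        # We only hand the algorithm the attributes; the tree structure is discovered, not designed.
        X = np.column_stack([age, yearly_income, num_children])
        model = DecisionTreeClassifier(max_depth=3, random_state=0)
        model.fit(X, buys_bike)

        # Print the learned splits; they recover the hidden age/income thresholds.
        print(export_text(model, feature_names=["age", "yearly_income", "num_children"]))

    Running this prints a small decision tree whose splits land on age and yearly_income near the thresholds baked into the synthetic labels, which is the same "patterns you did not expect" effect described above, just on toy data.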

    Read the article

  • How do you use blog content?

    - by fatherjack
    Do you write a blog? Have you ever thought about it? I think people fall into one of a few categories when it comes to blogs, especially blogs with technical content:
    1. Writing articles furiously - daily, twice daily - and reading dozens of others.
    2. Writing the odd piece of content and reading plenty of others' output.
    3. Started a blog once and it fizzled out, but reading lots.
    4. Thought about starting a blog someday but never got around to it, hopping into the occasional blog when a link or a Tweet takes them there.
    5. Never thought about writing one, but often catching content from blogs when Google (or another preferred search engine) finds something related to their search.
    Now I am not saying that any of these is right or wrong, nor am I saying that anyone should feel any compulsion to be in any particular category. What I would say is that you, as a blog reader, have the power to move blog writers from one category to another. How, you might ask? How do I have any power over a blog writer? It is very simple - feedback. If you give feedback, then the blog writer knows that they are reaching an audience; if there is no response, then we are simply writing down our thoughts for what could amount to nothing more than a feeble amount of exercise and a few more key strokes towards the onset of RSI.
    Most blogs have a mechanism to alert the writer when there are comments and, personally speaking, if an email arrives saying there has been a response to a blog article, there is a rush of enthusiasm, a moment of excitement that someone is actually reading and considering the text that was submitted and made available for the whole world to read. I am relatively new to this blog game and could be in some extended honeymoon period, as I have also recently been incorporated into the Simple Talk 'stable'. I can understand that once you get to the "Dizzy Heights of Ozar" (www.brentozar.com) then getting comments and feedback might not be such a pleasure and may even be rather more of a chore, but that, I guess, is the price of fame.
    For us mere mortals starting out blogging, getting feedback (or even, at the moment for me, simply the hope of getting feedback) is what keeps it going. What we are striving for is the hope that you will pick a topic that hasn't been done recently by Brad McGehee, Grant Fritchey, Paul Randal, Thomas LaRock or any one of the dozen rock star bloggers listed here or others from SQLServerPedia and so on, and then do it well enough to be found, reviewed, or <shudder> (re)tweeted to bring more visitors - along with the fact that the content we might produce is something that will be of benefit to others. There is only so much point in typing content that no one is reading and putting it on a blog; you may as well just write it in a diary.
    A technical blog is not like, say, a blog covering photography techniques, where the way to frame and take a picture stands true whether it was written last week, last year or last century - technical content goes sour quite quickly. There isn't much call for articles about yesterday's technology unless it is something that still applies to current versions too, so some content written no more than two years ago isn't worth having now. The combination of writing a piece of content that you know is not going to last long and the fact that no one reads it is a strong force against writing anything else. Getting feedback counters that despair and gives a value to writing something new.
    I would say that any feedback is good, although there are obviously comments so negative or otherwise badly phrased that they would hasten the demise of a blog; in general, though, most feedback will encourage a writer. It may not be a comment that supports or agrees with the main theme of a post, but if it generates discussion or opens up a previously unexplored viewpoint, it is contributing to the blog and is therefore encouraging to the writer. Even if you only say "thank you" before you leave a blog - having taken a section of script to use for yourself, or having been given a few links to content that has widened your knowledge - it will be so welcome to the blog owner. Isn't it also the decent thing to do, acknowledging that you have benefited from another's efforts?

    Read the article
